How hardware and software contribute to efficiency and effectiveness
Information technology is improving at an accelerating rate. This opens the way for innovative applications that make organizations and individuals more efficient and effective. This chapter outlines hardware progress, which has led to new forms of software and software development. Software evolution has brought us to the current era of Internet-based software. After describing some of the characteristics of Internet-based software, we ask whether the progress we have enjoyed will continue and conclude with a discussion of some of the non-technical issues that tend to impede that progress.
I. Hardware progress
II. Software progress
III. Internet-based software
IV. Will the progress continue?
V. Bumps in the information technology road
I. Hardware progress
Technology is improving rapidly. It seems that a new cell phone or computer is outdated the day after you buy it. This is nothing new. Consider manned flight, for example. The Wright brothers’ first flight in 1903 lasted only 12 seconds and covered 37 meters.[1] Once we understand the science underlying an invention, engineers make rapid improvements in the technology. Within 66 years of that historic first flight, Apollo 11 landed on the moon.
Would you guess that information technology progress is slowing down, holding steady, or accelerating? It turns out that it is accelerating – the improvements this year were greater than those of last year, and those of next year will be greater still. We often use the term exponential to describe such improvement. Informally, it means that something is improving very rapidly. More precisely, it means that the improvement is taking place at a constant rate, like compound interest. In this section, we consider three technologies underlying IT – electronics, storage, and communication. Each of these technologies is improving exponentially.
Sidebar: Exponential growth
Take, for example, a startup company with $1,000 in sales during its first year. If sales double every year (a 100 percent growth rate), the sales curve over the first twelve years will be:
Exhibit 1: Exponential growth graphed with a linear scale.
Sales are growing exponentially, but the curve is almost flat at the beginning and then shoots nearly straight up. Graphing the same data using a logarithmic scale on the Y axis gives us a better picture of the constant growth rate:
Exhibit 2: Exponential growth graphed with a logarithmic scale.
Note that since the growth rate is constant (100 percent per year in this example), the graph is a straight line. You can experiment with differing growth rates using the attached spreadsheet.
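A few lines of code reproduce the numbers behind these two graphs. The following Python sketch is a minimal illustration (not the attached spreadsheet), assuming the $1,000 first-year sales and 100 percent growth rate of the example:

```python
# Sales that double every year, starting from $1,000 in year one.
initial_sales = 1_000
growth_rate = 1.0  # 100 percent per year

for year in range(1, 13):
    sales = initial_sales * (1 + growth_rate) ** (year - 1)
    print(f"Year {year:2d}: ${sales:,.0f}")
```

Plotted on a linear axis these values give the hockey-stick curve of Exhibit 1; plotted on a logarithmic axis they fall on the straight line of Exhibit 2, because each year multiplies sales by the same factor.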
Of course, nothing, not even technology, improves exponentially forever. At some point, exponential improvement hits limits and slows down. Consider the following graph of world records in the 100-meter dash:[2]
Exhibit 3: Linear improvement.
There is steady improvement, but it is at a roughly linear rate. The record improves by a constant amount, not at a constant rate.
Progress in electronic technology
Transistors are a key component in all electronic devices – cell phones, computers, portable music players, wrist watches, etc. A team of physicists at the Bell Telephone research laboratory in the United States invented the first transistor, shown below.
Exhibit 4: The first transistor.
This prototype was about the size of a 25-cent coin, and, like the Wright brothers’ first plane, it had no practical value, but was a proof of concept. Engineers soon improved upon the design. In 1954, Texas Instruments began manufacturing transistors. They were about the size of a pencil eraser, and several would be wired together to make a circuit for a device like a transistor radio. In the late 1950s, engineers began making integrated circuits (ICs or chips), which combined several transistors and the connections between them on a small piece of silicon. Today, a single IC can contain billions of transistors, and the cost per transistor is nearly zero.
Consider the central processing unit (CPU) chip that executes the instructions in personal computers and other devices. Most personal computers use CPU chips manufactured by Intel Corporation, which offered the first commercial microprocessor (a complete CPU on a chip) in 1971. That microprocessor, the Intel 4004, contained 2,300 transistors. As shown here, Intel CPU transistor density has grown exponentially since that time:[3]
Exhibit 5: Improvement in electronic technology.
With the packaging of multiple CPU cores on a single chip, transistor counts are now well over one billion.
Intel co-founder Gordon Moore predicted this exponential growth in 1965. He formulated Moore’s Law, predicting that the number of transistors per chip that yields the minimum cost per transistor would increase exponentially. He showed that transistor counts had roughly doubled every year up to the time of his article and predicted that the rate of improvement would remain nearly constant for at least ten years.[4]
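To get a feel for what constant doubling implies, consider the following back-of-the-envelope Python sketch. It is our own illustration, not Moore’s calculation, and it assumes a doubling period of two years (the figure commonly associated with Moore’s Law today) starting from the 4004’s 2,300 transistors:

```python
# Project the Intel 4004's 2,300 transistors (1971) forward,
# assuming the count doubles every two years.
transistors = 2_300
for year in range(1971, 2012, 2):
    print(f"{year}: about {transistors:,} transistors per chip")
    transistors *= 2
```

Twenty doublings take the count from a few thousand to a few billion, which is roughly where multi-core CPU chips stand today.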
We have used Intel CPU chips to illustrate exponential improvement in electronic technology, but we should keep in mind that all information technology uses electronic components. Every computer input, output or storage device is controlled and interfaced electronically, and computer memory is made of ICs. Communication systems, home appliances, autos, and home entertainment systems all incorporate electronic devices.
Progress in storage technology
Storage technology is also improving exponentially. Before the invention of computers, automated information processing systems used punched cards for storage. The popular IBM card could store up to 80 characters, punched one per column. The position of the rectangular holes determined which character was stored in a column. We see the code for the ten digits, 26 letters and 12 special characters below.[5]
Exhibit 6: Punch card code for alphabetic and numeric symbols.[6]
Punch card storage was not very dense by today’s standards. The cards measured 3 ¼ by 7 3/8 inches,[7] and a deck of 1,000 was about a foot long. Assuming that all 80 columns are fully utilized, that works out to roughly 480,000 characters per cubic foot, which sounds good until we compare it to PC thumb drives, which currently hold up to 8 billion characters.
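The arithmetic behind that density estimate is easy to check. The following Python sketch assumes tightly packed cards with the dimensions given above and a foot-long deck of 1,000 fully punched cards:

```python
# Storage density of punched cards, using the dimensions quoted above.
chars_per_card = 80
cards_per_deck = 1_000                    # a deck about one foot long
card_area_sq_in = 3.25 * 7.375            # 3 1/4 by 7 3/8 inches
deck_volume_cu_in = card_area_sq_in * 12  # a foot-long deck

chars_per_cu_ft = chars_per_card * cards_per_deck * (12 ** 3) / deck_volume_cu_in
print(f"{chars_per_cu_ft:,.0f} characters per cubic foot")  # roughly 480,000
```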
Every type of data – character, audio, video, etc. – is stored using codes of ones and zeros called bits (short for binary digits).[8] Every storage technology distinguishes a one from a zero differently. Punched cards and tape used the presence or absence of a hole at a particular spot. Magnetic storage differentiates between ones and zeros by magnetizing or not magnetizing small areas of the media. Optical media uses tiny bumps and smooth spots, and electronic storage opens or closes minute transistor “gates” to make ones and zeros.
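As a concrete illustration of such codes, the short Python sketch below prints the bit patterns for a few characters. It uses the ASCII code, a modern character code that this chapter has not introduced, purely as an example:

```python
# Print the bit pattern (eight bits per character) for each character,
# using the ASCII code as an illustrative example.
for ch in "IBM":
    print(ch, format(ord(ch), "08b"))
# I 01001001
# B 01000010
# M 01001101
```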
We make progress both by inventing new technologies and by improving existing technologies. Take, for example, the magnetic disk. The first commercially available magnetic disk drive was on IBM's 305 RAMAC (Random Access Method of Accounting and Control) computer, shown below.
Exhibit 7: RAMAC, the first magnetic disk storage device.
IBM shipped the RAMAC on September 13, 1956. The disk could store 5 million characters (7 bits each) using both sides of 50 two-foot-diameter disks. Monthly rental started at $2,875 ($3,200 if you wanted a printer), or you could buy a RAMAC for $167,850, or $189,950 with a printer. (In 1956, a cola or candy bar cost five cents and a nice house in Los Angeles cost about $20,000.)
Contrast that with a modern disk drive for consumer electronic devices like portable music and video players. The capacity of the disk drive shown here is about 2,700 times that of a RAMAC drive, and its data access speed and transfer rate are far faster, yet it measures only 40x30x5 millimeters, weighs 14 grams, and runs on a small battery. The disk itself is approximately the size of a US quarter dollar.
Exhibit 8: A modern disk storage device, manufactured by Seagate Technology.
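To put that multiple in perspective, a quick calculation using only the figures quoted above shows what 2,700 RAMACs amount to:

```python
# Rough capacity comparison based on the figures in the text.
ramac_chars = 5_000_000                    # RAMAC: 5 million characters
modern_drive_chars = 2_700 * ramac_chars   # "about 2,700 times" the RAMAC
print(f"{modern_drive_chars:,} characters")  # 13,500,000,000
```

That is roughly 13.5 billion characters in a package that fits inside a music player, compared with a machine that rented for thousands of dollars a month.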
Progress in communication technology
People have communicated at a distance using fires, smoke, lanterns, flags, semaphores, etc. since ancient times, but the telegraph was the first electronic communication technology. Several inventors developed electronic telegraphs, but Samuel Morse’s hardware and code (using dots and dashes) caught on and became a commercial success. Computer-based data communication experiments began just after World War II, and they led to systems like MIT’s Project Whirlwind, which gathered and displayed telemetry data, and SAGE, an early warning system designed to detect Soviet bombers. The ARPANet, a general-purpose network, followed SAGE. In the late 1980s, the US National Science Foundation created the NSFNet, an experimental network linking the ARPANet and several others – it was an internetwork. The NSFNet was the start of today’s Internet.[9]
Improvement in the Internet illustrates communication technology progress. There are several important metrics for the quality of a communication link, but speed is basic.[10] Speed is typically measured in bits per second – the number of ones and zeros that can be sent from one network node to another in a second. Initially the link speed between NSFNet nodes was 64 kilobits per second, but it was soon increased to 1.5 megabits per second and then to 45 megabits per second.[11]
Exhibit 9: The NSFNet backbone connected 13 NSF supercomputer centers and regional networks.
Sidebar: Commonly used prefixes
Memory and storage capacity are measured in bits – the number of ones and zeros that can be stored. Data transmission rates are measured in bits per unit of time, typically bits per second. Since capacities and speeds are very high, we typically use shorthand prefixes. So, instead of saying a disk drive has a capacity of 100 billion bits, we say it has a capacity of 100 gigabits.
The following table shows some other prefixes:
Prefix / Power of 1,024 / English term / Approximate number
kilo / 1 / Thousand / 1,000
mega / 2 / Million / 1,000,000
giga / 3 / Billion / 1,000,000,000
tera / 4 / Trillion / 1,000,000,000,000
peta / 5 / Quadrillion / 1,000,000,000,000,000
exa / 6 / Quintillion / 1,000,000,000,000,000,000
zetta / 7 / Sextillion / 1,000,000,000,000,000,000,000
yotta / 8 / Septillion / 1,000,000,000,000,000,000,000,000
Note that the numbers in the fourth column are approximate. For example, strictly speaking, a megabit is not one million (1,000,000) bits; it is 1,024 x 1,024 (1,048,576) bits. Still, the figures are close enough for most purposes, so we often speak of, say, a gigabit as one billion bits.
Capacities and rates may also be stated in bytes rather than bits. There are 8 bits in a byte, so dividing by 8 will convert bits to bytes and multiplying by 8 will convert bytes to bits.
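The conversions described in this sidebar are easy to automate. The following Python sketch, a simple illustration of the definitions above, converts bits to bytes and estimates how long a one-gigabit file would take to send over the NSFNet link speeds mentioned earlier (using the approximate decimal prefixes):

```python
# Convert bits to bytes and estimate transfer times at a few link speeds.
BITS_PER_BYTE = 8

file_size_bits = 1e9                               # a one-gigabit file
file_size_bytes = file_size_bits / BITS_PER_BYTE   # 125,000,000 bytes

for name, bits_per_second in [("64 kilobits/s", 64e3),
                              ("1.5 megabits/s", 1.5e6),
                              ("45 megabits/s", 45e6)]:
    seconds = file_size_bits / bits_per_second
    print(f"{name}: about {seconds:,.0f} seconds")
```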
The NSFNet was the first nationwide Internet backbone, but today there are hundreds of national and international backbone networks. High-speed links now commonly transmit 10 gigabits per second, and a single fiber can carry multiple data streams, each using a different light frequency (color).[12] Of course, progress continues. For example, Siemens researchers have reliably transmitted data at 107 gigabits per second over a one hundred mile link, and much faster speeds have been achieved in the lab.[13]
There has been similar improvement in local area network (LAN) technology. Ethernet is the most common LAN technology. When introduced in 1980, Ethernet required short, thick cables and ran at only 10 megabits per second. Today we use flexible wiring, Ethernet speed is up to 10 gigabits per second, and standards groups are working on 40 and 100 gigabit per second versions.[14]
Individuals and organizations also use wireless communication. The WiFi[15] standard, in conjunction with the availability of license-free radio frequency bands, led to the rapid proliferation of wireless local area networks in homes and offices. When away from the home or office, we often connect to the Internet at WiFi hotspots, public locations with Internet-connected WiFi radios. We also connect distant locations with wide-area wireless links when installing cable is impractical. And we can use satellite links to reach remote locations, but they are expensive and introduce a delay of about 0.24 seconds because of the distance the signal must travel.[16]
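That quarter-second delay follows directly from the distances involved. The short Python sketch below assumes a geostationary satellite at an altitude of about 35,786 kilometers, the usual case for such links (the chapter does not specify the orbit):

```python
# Propagation delay for a geostationary satellite link (assumed altitude).
ALTITUDE_KM = 35_786            # geostationary orbit above the equator
SPEED_OF_LIGHT_KM_S = 299_792   # kilometers per second

path_km = 2 * ALTITUDE_KM       # up to the satellite and back down to Earth
delay_s = path_km / SPEED_OF_LIGHT_KM_S
print(f"About {delay_s:.2f} seconds")   # roughly 0.24 seconds
```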
Cellular telephone networks are also growing rapidly. There were 1,263 million cellular users in 2000, and that had increased to 2,168 million five years later.[17] (The number of wired telephone lines fell from 979 million to 740 million during the same period.) Cellular communication has improved with time, and, in developed nations, we are deploying third and even fourth generation technology, which is fast enough for many Internet applications.
Internet resource: Listen to TCP/IP co-inventor Vint Cerf’s Stanford University presentation on the Internet and its future.
In this historic video, Cerf’s collaborator Bob Kahn and other Internet pioneers describe the architecture and applications of their then brand new research network.
II. Software progress[18]
The hardware progress we have enjoyed would be meaningless if it did not lead to new forms of software and applications. The first computers worked primarily on numeric data, but early computer scientists understood their potential for working on many types of applications and data. By 1960, researchers were experimenting with non-numeric data like text, images, audio and video; however, these lab prototypes were far too expensive for commercial use. Technological improvement steadily extended the range of affordable data types. The following table shows the decades in which the processing of selected types of data became economically feasible:
Exhibit 10: Commercial viability of data types.
Decade / Data Type
1950s / Numeric
1960s / Alphanumeric
1970s / Text
1980s / Images, speech
1990s / Music, low-quality video
2000s / High-quality video
But none of this would have been possible without software, and we have seen evolution in the software we use and, underlying that, in the platforms we use to develop and distribute it. Let us consider the evolution of software development and distribution platforms from batch processing to time sharing, personal computers, local area networks, and wide area networks.
Batch processing
The first commercial computers, in the 1950s, were extremely slow and expensive by today’s standards, so it was important to keep them busy doing productive work at all times. In those days, programmers punched their programs into decks of cards like the one shown above, and passed them to operators who either fed the cards directly into the computer or copied them onto magnetic tape for input to the computer. To keep the computer busy, the operators made a job queue – placing the decks for several programs in the card reader or onto a tape. A master program called the operating system monitored the progress of the application program that was running on the computer. As soon as one application program ended, the operating system loaded and executed the next one.
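Conceptually, a batch operating system is a first-in, first-out queue of jobs. The toy Python sketch below is a drastic simplification, not an actual 1950s operating system, and the job names are made up, but it captures the idea that jobs run to completion one after another in the order submitted:

```python
# A toy batch "operating system": run queued jobs one at a time, in order.
from collections import deque

job_queue = deque(["payroll", "inventory report", "billing"])  # hypothetical jobs

while job_queue:
    job = job_queue.popleft()   # take the next job from the front of the queue
    print(f"Loading and running {job}...")
    # ...the job runs to completion before the next one is loaded...
```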
Batch processing kept the computers busy at all times, but wasted a lot of human time. If a programmer made a small error in a program and submitted the job, it was typically several hours before he or she got the resultant error message back. Computer operators also had to be paid. Finally, professional keypunch operators did data entry using machines with typewriter-like keyboards that punched holes in the cards. This tradeoff of human for computer time reflected the fact that computers were extremely expensive.
Time sharing
By the early 1960s, technology had progressed to the point where computers could work on several programs at a time, and time-shared operating systems emerged as a viable platform for programming and running applications. Several terminals (keyboard/printers) were connected to a single computer running a time-sharing operating system. Programmers and data entry operators worked at the terminals, entering instructions or data and receiving immediate feedback from the computer, which increased their productivity.
Let’s say there were ten programmers working at their own terminals. The operating system would spend a small “slice” of time – say a twentieth of a second – on one job, then move to the next one. If a programmer was thinking or doing something else when his or her time slice came up, the operating system skipped that job. Since the time slices were short, each programmer had the illusion that the computer was working only on his or her job and got immediate feedback when testing programs. The computer “wasted” time switching from one job to the next, but it paid off in saving programmer time.
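The scheduling idea behind time sharing can be sketched in a few lines. The Python fragment below is a highly simplified illustration, not a real operating system, with made-up jobs and times; it gives each active job a fixed time slice in turn and skips jobs whose users are idle:

```python
# A toy round-robin scheduler: give each ready job a short time slice in turn.
TIME_SLICE_MS = 50   # one twentieth of a second, as in the example above

jobs = [
    {"name": "programmer 1", "ready": True,  "work_left_ms": 150},
    {"name": "programmer 2", "ready": False, "work_left_ms": 200},  # thinking, so skipped
    {"name": "programmer 3", "ready": True,  "work_left_ms": 100},
]

# Keep cycling until every ready job has finished its pending work.
while any(job["ready"] and job["work_left_ms"] > 0 for job in jobs):
    for job in jobs:
        if not job["ready"] or job["work_left_ms"] <= 0:
            continue                                  # skip idle or finished jobs
        run = min(TIME_SLICE_MS, job["work_left_ms"])
        job["work_left_ms"] -= run
        print(f"Ran {job['name']} for {run} ms")
```

Because the slices are short relative to human reaction time, every user appears to have the machine to himself or herself.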
Exhibit 11. An early timesharing terminal.
Time-sharing terminals were also used for data entry, so we began to see applications in which users (airline reservation clerks, for example) entered their own data. Professional keypunch operators began to disappear.
Personal computers
Time-sharing continued to improve, resulting in a proliferation of ever smaller and cheaper “mini-computers.” They might be the size of a refrigerator rather than filling a room, but users still shared them. As hardware improved, we eventually reached the point where it was economical to give computers to individuals. The MITS Altair, introduced in 1975, was the first low-cost personal computer powerful enough to improve productivity. By the late 1970s, programmers, professional users and data entry workers were using personal computers. They were much less powerful than today’s PCs, but they began to displace time-sharing systems.