Computer Generations

The First Generation Computers

Do you remember this computer?

Bendix G-15 Computer

It is the Bendix G-15 General Purpose Digital Computer, a First Generation computer introduced in 1956.

Another picture (66k). And another (105k). You can download larger versions of the pictures on this page by clicking on them. (But be aware, they vary in size between 0.5MB and 1.5MB, so downloading will be slow.)  [Photo: Our G-15, front view]

Why this interest in the Bendix G-15?

Against the odds, the Western Australian branch of The Australian Computer Museum Inc has rescued one from the scrap heap. That's it, over on the right.
It is in pretty good condition, considering its age, and we hope one day we can get it working again. We also have various programming, operating and technical manuals, and schematics. They have been scanned and you can download them here.
This web site started life in 1998 as a sort of begging letter, seeking more information about the maintenance procedures. We have since been told that there was no formal maintenance manual and that our documentation is complete so far as maintaining the machine is concerned. Still, if you can help with some of the other items we are missing or add anything at all to our store of knowledge about the Bendix G-15, please get in touch with me, David Green at email address.

First Generation Computers

The first generation of computers is said by some to have started in 1946 with ENIAC, the first 'computer' to use electronic valves (i.e. vacuum tubes). Others would say it started in May 1949 with the introduction of EDSAC, the first stored program computer. Either way, the distinguishing feature of the first generation computers was the use of electronic valves. My personal take on this is that ENIAC was the world's first electronic calculator, and that the era of the first generation computers began in 1946 because that was the year when people consciously set out to build stored program computers (many won't agree, and I don't intend to debate it). The first past the post, as it were, was the EDSAC in 1949. The period closed about 1958 with the introduction of transistors and the general adoption of ferrite core memories.
OECD figures indicate that by the end of 1958 about 2,500 first generation computers were installed world-wide. (Compare this with the number of PCs shipped world-wide in 1997, quoted as 82 million by Dataquest).
Two key events took place in the summer of 1946 at the Moore School of Electrical Engineering at the University of Pennsylvania. One was the completion of the ENIAC. The other was the delivery of a course of lectures on "The Theory and Techniques of Electronic Digital Computers". In particular, they described the need to store the instructions to manipulate data in the computer along with the data. The design features worked out by John von Neumann and his colleagues and described in these lectures laid the foundation for the development of the first generation of computers. That just left the technical problems!  [Photo: Bendix G-15, side panel open]
One of the projects to commence in 1946 was the construction of the IAS computer at the Institute for Advanced Study in Princeton. The IAS computer used a random access electrostatic storage system and parallel binary arithmetic. It was very fast when compared with the delay line computers, with their sequential memories and serial arithmetic.
The Princeton group was liberal with information about their computer and before long many universities around the world were building their own close copies. One of these was the SILLIAC at Sydney University in Australia.
I have written an emulator for SILLIAC. You can find it here, along with a link to a copy of the SILLIAC Programming Manual.

First Generation Technologies

In 1946 there was no 'best' way of storing instructions and data in a computer memory. There were four competing technologies for providing computer memory: electrostatic storage tubes, acoustic delay lines (mercury or nickel), magnetic drums (and disks?), and magnetic core storage. A high-speed electrostatic store was the heart of several early computers, including the computer at the Institute for Advanced Study in Princeton. Professor F. C. Williams and Dr. T. Kilburn, who invented this type of store, described it in Proc. I.E.E. 96, Pt. III, 40 (March 1949). A simple account of the Williams tube is given here.
The great advantage of this type of "memory" is that, by suitably controlling the deflector plates of the cathode ray tube, it is possible to redirect the beam almost instantaneously to any part of the screen: random access memory.
Acoustic delay lines are based on the principle that electricity travels at the speed of light while mechanical vibrations travel at about the speed of sound. So data can be stored as a string of mechanical pulses circulating in a loop, through a delay line with its output connected electrically back to its input. Of course, converting electric pulses to mechanical pulses and back again uses up energy, and travel through the delay line distorts the pulses, so the output has to be amplified and reshaped before it is fed back to the start of the tube.  [Photo: Bendix G-15, side panel and side door open]
The sequence of bits flowing through the delay line is just a continuously repeating stream of pulses and spaces, so a separate source of regular clock pulses is needed to determine the boundaries between words in the stream and to regulate the use of the stream.
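To make the recirculation and clocking concrete, here is a minimal sketch in Python (a toy model with invented parameters, not the timing of any particular machine). It treats the delay line as a fixed-length loop of bits that advances one position per clock tick, with the amplify-and-reshape stage modelled as re-quantising each bit on the way back in:

```python
# Toy model of an acoustic delay-line memory: bits circulate through a
# fixed-length loop, one position per clock tick. The bit leaving the
# line is amplified and reshaped (here: re-quantised to a clean 0 or 1)
# and fed back in. Word boundaries exist only by convention, fixed by
# counting clock ticks against the word length.

from collections import deque

class DelayLine:
    def __init__(self, n_words: int, word_len: int):
        self.word_len = word_len
        self.n_words = n_words
        self.line = deque([0] * (n_words * word_len))
        self.tick = 0  # the external clock that defines word boundaries

    def step(self, write_bit=None):
        """Advance one clock tick, recirculating (or overwriting) one bit."""
        out = self.line.popleft()
        reshaped = 1 if out else 0  # the amplify-and-reshape stage
        self.line.append(reshaped if write_bit is None else write_bit)
        self.tick += 1
        return out

    def read_word(self, addr: int):
        """Wait until word `addr` reaches the output, then read it.

        Returns the bits plus the number of ticks spent waiting, which
        can be anything from 0 to one full circulation of the line.
        """
        waited = 0
        while self.tick % self.word_len != 0 or \
                (self.tick // self.word_len) % self.n_words != addr:
            self.step()
            waited += 1
        bits = [self.step() for _ in range(self.word_len)]
        return bits, waited
```

Note that read_word may wait anything from zero ticks to a full circulation of the line before the wanted word appears at the output, which is exactly the programming consideration described in the next paragraph.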
Delay lines have some obvious drawbacks. One is that the match between their length and the speed of the pulses is critical, yet both are dependent on temperature. This required precision engineering on the one hand and careful temperature control on the other. Another is a programming consideration. The data is available only at the instant it leaves the delay line. If it is not used then, it is not available again until all the other pulses have made their way through the line. This made for very entertaining programming!
A mercury delay line is a tube filled with mercury, with a piezo-electric crystal at each end. Piezo-electric crystals, such as quartz, have the special property that they expand or contract when the electrical voltage across the crystal faces is changed. Conversely, they generate a change in electrical voltage when they are deformed. So when a series of electrical pulses representing binary data is applied to the transmitting crystal at one end of the mercury tube, it is transformed into corresponding mechanical pressure waves. The waves travel through the mercury until they hit the receiving crystal at the far end of the tube, where the crystal transforms the mechanical vibrations back into the original electrical pulses.
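As a rough worked example (illustrative figures, not those of any particular machine): sound travels through mercury at about 1450 m/s, so the capacity of a line is its acoustic delay multiplied by the pulse rate:

$$\text{capacity} = \frac{L}{v}\times f = \frac{1\ \text{m}}{1450\ \text{m/s}}\times 10^{6}\ \text{pulses/s}\approx 690\ \text{bits}$$

That is, a one-metre mercury column clocked at 1 MHz holds roughly 690 bits, a couple of dozen 29-bit words.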
Mercury delay lines had been developed for data storage in radar applications. Although far from ideal, they were an available form of computer memory around which a computer could be designed. Computers using mercury delay lines included the ACE computer developed at the National Physical Laboratory, Teddington, and its successor, the English Electric DEUCE.
A good deal of information about DEUCE (manuals, operating instructions, program and subroutine codes and so on) is available on the Web and you can find links to it here.
Nickel delay lines take the form of a nickel wire. Pulses of current representing bits of data are passed through a coil surrounding one end of the wire. They set up pulses of mechanical stress due to the 'magnetostrictive' effect. A receiving coil at the other end of the wire is used to convert these pressure waves back into electrical pulses. The Elliott 400 series, including the 401, 402 and 403, used nickel delay lines. Much later, in 1966, the Olivetti Programma 101 desktop calculator also used nickel delay lines.  [Photo: Bendix G-15, side door fully open]
The magnetic drum is a more familiar technology, comparable with modern magnetic discs. It consisted of a non-magnetic cylinder coated with a magnetic material, and an array of read/write heads to provide a set of parallel tracks of data round the circumference of the cylinder as it rotated. Drums had the same program optimisation problem as delay lines.
Two of the most (commercially) successful computers of the time, the IBM 650 and the Bendix G-15, used magnetic drums as their main memory.
The Massachusetts Institute of Technology Whirlwind I was another early computer; construction started in 1947. However, the most important contribution made by the MIT group was the development of the magnetic core memory, which they later installed in Whirlwind. The MIT group made their core memory designs available to the computer industry and core memories rapidly superseded the other three memory technologies.

Where Does the Bendix G-15 Fit In?

Table 1 shows, in chronological order between 1950 and 1958, the initial operating date of computing systems in the USA. This is not to suggest that all of these computers were first generation computers, or that no first generation computers were made after 1958. It does give a rough guide to the number of first generation computers made. Bendix introduced their G-15 in 1956. It was not the first Bendix computing machine: they had introduced a model named the D-12 in 1954. However, the D-12 was a digital differential analyser and not a general purpose computer.
We don't know when the last Bendix G-15 was built, but about three hundred of the computers were ultimately installed in the USA. Three found their way to Australia. The one we have was purchased by the Department of Main Roads in Perth in 1962. It was used in the design of the Mitchell Freeway, the main road connecting the Northern suburbs to the city.
The G-15 was superseded by the second generation (transistorised) Bendix G-20.
Table 2 shows the computers installed or on order, in Australia, about December 1962. The three Bendix G-15s were in Perth (Department of Main Roads), Sydney (A.W.A. Service Bureau) and Melbourne (E.D.P Pty Ltd).  [Photo: Close-up of packages in situ]

Overview of the G-15

The Bendix G-15 was a fairly sophisticated, medium-sized computer for its day. It used a magnetic drum for internal memory storage and had 180 tube packages and 300 germanium diode packages for logical circuitry. Cooling was by internal forced air. Storage on the magnetic drum comprised 2160 words in twenty channels of 108 words each. Average access time was 4.5 milliseconds. In addition, there were 16 words of fast-access storage in four channels of 4 words each, with an average access time of 0.54 milliseconds; and eight words in registers: one one-word command register, one one-word arithmetic register, and three two-word arithmetic registers for double-precision operations.
A 108-word buffer channel on the magnetic drum allowed input-output to proceed simultaneously with computation.
Word size was 29 bits. A single-precision number held seven decimal digits plus sign during input-output, and twenty-nine binary digits internally; a double-precision number held fourteen decimal digits plus sign during input-output, and fifty-eight binary digits internally.
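A quick check of that arithmetic: seven decimal digits need 10^7 distinct values, and since 2^24 ≈ 1.7 × 10^7, twenty-four bits cover the digits and a 29-bit word leaves ample room for the sign; likewise 2^47 ≈ 1.4 × 10^14 exceeds 10^14, so fifty-eight bits comfortably hold fourteen digits plus sign.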
Each machine language instruction specified the address of the operand and the address of the next instruction. Double-length arithmetic registers permitted the programming of double-precision operations with the same ease as single-precision ones.  [Photo: A CA155 valve package]
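That two-address format invites what drum-machine programmers called optimum (or minimum-latency) coding: placing each operand and each next instruction around the drum so that it arrives under the read heads just as the computer is ready for it. Here is a minimal sketch of the idea in Python; the timing constants are invented for illustration, not the G-15's real ones:

```python
# Toy model of minimum-latency ("optimum") coding on a drum channel of
# N word positions. An instruction fetched at position t that takes
# `exec_words` word-times to execute is ready for its next word at
# position (t + 1 + exec_words) mod N. Placing that word exactly there
# costs no rotational wait; any other placement idles the machine until
# the drum brings the word around again.

N = 108  # word positions per channel, as in the G-15's long lines

def rotational_wait(ready_at: int, target: int) -> int:
    """Word-times spent waiting for `target` to rotate under the heads."""
    return (target - ready_at) % N

# An instruction fetched at position 10 that needs 3 word-times:
ready = (10 + 1 + 3) % N
print(rotational_wait(ready, 14))  # optimally placed operand -> 0
print(rotational_wait(ready, 13))  # one word too early -> 107 word-times
```

Choosing operand and next-instruction addresses this way, rather than laying code out sequentially, was how programmers kept drum machines busy; as the example shows, a word placed one position too early costs almost a full revolution.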
An interpreter called Intercom 1000 and a compiler called Algo provided simpler alternatives to machine language programming. Algo followed the principles set forth in the international algorithmic language, Algol, and permitted the programmer to state a problem in algebraic form. The Bendix Corporation claimed to be the first manufacturer to introduce a programming system patterned on Algol.
The basic computation times, in milliseconds, were as follows (including the time required for the computer to read the command prior to its execution). The times shown for multiplication and division range from single-decimal-digit precision to maximum precision.
Operation                     Single-Precision    Double-Precision
Addition or Subtraction       0.54                0.81
Multiplication or Division    2.43 to 16.7        2.43 to 33.1
External storage was provided on searchable paper tape (2,500 words per magazine) and, optionally, on one to four magnetic tape units with 300,000 words per tape reel.
More detail about the Bendix G-15 General Purpose Digital Computer.

The Next Generation in Human Computer Interfaces

For decades our options for interacting with the digital world have been limited to keyboards, mice, and joysticks. Now, with a generation of exciting new interfaces in the pipeline, our interaction with the digital world will be forever changed. In this post we will look at some amazing demonstrations, mostly videos, that showcase new ways of interacting with the digital world. Enjoy!
First up we have a video of MIT's David Merrill demonstrating a technology called Siftables at the 2009 TED conference. Siftables are cookie-sized, computerized blocks you can stack and shuffle in your hands. By arranging them in different configurations or tilting them at different angles you can do math, play music, spell words, pour virtual paint, and more. The implications for hands-on learning and manipulation of data are fantastic! We have not seen any word on how or when this technology will be commercialized, but we hope it will be soon!


Next we have a technology for making music called Reactables. By arranging and manipulating computerized blocks on a special table, musicians are presented with a completely new way of creating and interacting with music. As seen in the previous video, Siftables are also capable of music composition, but Reactables are unique in their singular focus on music. Whereas Siftables can perform many functions, Reactables are specialized for one task only, and in the coming years we can expect them to far outstrip the ability of Siftables when it comes to music. Originally created by the Music Technology Group at the Universitat Pompeu Fabra in Barcelona, Spain, Reactables have recently been spun off into a private company that is hard at work commercializing this exciting product. For those that are really interested in this technology, there is a competing effort from Sony that may be of interest:


Kommerz from Austria brings us the mixed reality interface. Using representative objects in the real world, a person is able to manipulate objects in 3D space on a computer screen. The possibilities for a new gaming interface look especially promising with this technology. Check it out:


Thanks to Andreas for the above video.
This next demo from Sony has been around for many years, yet it is still very cool.  Why isn’t this technology finding a commercial market after all these years?  We have no idea.


Jeff Han from NYU demonstrates the capabilities of a multitouch interface at the TED conference in 2006. Since then he has started a company around the technology called Perceptive Pixel. This technology was recently used on CNN for presidential election coverage.


Speaking of multitouch interfaces, Microsoft has a technology called Microsoft Surface that is similar to Jeff Han's technology, but in typical Microsoft fashion the company just doesn't seem to get it. Check out first a video from Microsoft that showcases the technology, followed by a hilarious parody from Sarcastic Gamer that shows how misguided Microsoft's vision is:


The Khronos Projector is an interactive-art installation allowing people to explore pre-recorded movie content in an entirely new way. From the official site: “by touching the projection screen, the user is able to send parts of the image forward or backwards in time. By actually touching a deformable projection screen, shaking it or curling it, separate “islands of time” as well as “temporal waves” are created within the visible frame. This is done by interactively reshaping a two-dimensional spatio-temporal surface that “cuts” the spatio-temporal volume of data generated by a movie.”



SixthSense from MIT is a technology that we have already covered in depth previously. Check out our detailed review for more information:


The world of interactive technology is exploding. There must be several technologies we have overlooked in this review. If you know of any that we missed, please let us know in the comments and we will try to add your suggestion to this post in an update.


Computer Generations

Early modern computers are typically grouped into four "generations." Each generation is marked by improvements in basic technology. These improvements in technology have been extraordinary and each advance has resulted in computers of lower cost, higher speed, greater memory capacity, and smaller size.
This grouping into generations is not clear-cut nor is it without debate. Many of the inventions and discoveries that contributed to the modern computer era do not neatly fit into these strict categories. The reader should not interpret these dates as strict historical boundaries.
First Generation (1945–1959)
The vacuum tube was invented in 1906 by an electrical engineer named Lee De Forest (1873–1961). During the first half of the twentieth century, it was the fundamental technology that was used to construct radios, televisions, radar, X-ray machines, and a wide variety of other electronic devices. It is also the primary technology associated with the first generation of computing machines.
The first operational electronic general-purpose computer, named the ENIAC (Electronic Numerical Integrator and Computer), was begun in 1943 and completed in 1945, and used 18,000 vacuum tubes. It was constructed with government funding at the University of Pennsylvania's Moore School of Engineering, and its chief designers were J. Presper Eckert, Jr. (1919–1995) and John W. Mauchly (1907–1980). It was almost 30.5 meters (100 feet) long and had twenty 10-digit registers for temporary calculations. It used punched cards for input and output and was programmed with plug board wiring. The ENIAC was able to compute at the rate of 1,900 additions per second. It was used primarily for war-related computations such as the construction of ballistic firing tables and calculations to aid in the building of the atomic bomb.
The Colossus was another machine that was built during these years to help fight World War II. A British machine, it was used to help decode secret enemy messages. Using 1,500 vacuum tubes, the machine, like the ENIAC, was programmed using plug board wiring.
These early machines were typically controlled by plug board wiring or by a series of directions encoded on paper tape. Certain computations would require one wiring while other computations would require another. So, while these machines were clearly programmable, their programs were not stored internally. This would change with the development of the stored program computer.
The team working on the ENIAC was probably the first to recognize the importance of the stored program concept. Some of the people involved in the early development of this concept were J. Presper Eckert Jr. (1919–1995), John W. Mauchly (1907–1980), and John von Neumann (1903–1957). During the summer of 1946, a seminar was held at the Moore School that focused great attention on the design of a stored program computer. About thirty scientists from both sides of the Atlantic Ocean attended these discussions and several stored program machines were soon built.
One of the attendees at the Moore School seminar, Maurice Wilkes (1913–), led a British team that built the EDSAC (Electronic Delay Storage Automatic Calculator) at Cambridge in 1949. On the American side, Richard Snyder led the team that completed the EDVAC (Electronic Discrete Variable Automatic Computer) at the Moore School. Von Neumann helped design the IAS (Institute for Advanced Study) machine, which was built in Princeton and completed in 1952. These machines, while still using vacuum tubes, were all built so that their programs could be stored internally.
Another important stored program machine of this generation was the UNIVAC (UNIVersal Automatic Computer). It was the first successful commercially available machine. The UNIVAC was designed by Eckert and Mauchly. It used more than 5,000 vacuum tubes and employed magnetic tape for bulk storage. The machine was used for tasks such as accounting, actuarial table computation, and election prediction. Forty-six of these machines were eventually installed.
The UNIVAC, which ran its first program in 1951, was able to execute ten times as many additions per second as the ENIAC. In modern dollars, the UNIVAC was priced at $4,996,000. Also, during this period, the first IBM computer was shipped. It was called the IBM 701 and nineteen of these machines were sold.
Second Generation (1960–1964)
As commercial interest in computer technology intensified during the late 1950s and 1960s, the second generation of computer technology was introduced—based not on vacuum tubes but on transistors.
John Bardeen (1908–1991), William B. Shockley (1910–1989), and Walter H. Brattain (1902–1987) invented the transistor at Bell Telephone Laboratories in the mid-1940s. By 1948 it was obvious to many that the transistor would probably replace the vacuum tube in devices such as radios, television sets, and computers.
One of the first computing machines based on the transistor was the Philco Corporation's Transac S-2000 in 1958. IBM soon followed with the transistor-based IBM 7090. These second generation machines were programmed in languages such as COBOL (Common Business Oriented Language) and FORTRAN (Formula Translator) and were used for a wide variety of business and scientific tasks. Magnetic disks and tape were often used for data storage.
[Photo: The creation of the UNIVAC, the first electronic computer built for commercial use, began the computer boom.]

Third Generation (1964–1970)
The third generation of computer technology was based on integrated circuit technology and extended from approximately 1964 to 1970. Jack Kilby (1923–) of Texas Instruments and Robert Noyce (1927–1990) of Fairchild Semiconductor were the first to develop the idea of the integrated circuit in 1959. The integrated circuit is a single device that contains many transistors.
Arguably the most important machine built during this period was the IBM System/360. Some say that this machine single-handedly introduced the third generation. It was not simply a new computer but a new approach to computer design. It introduced a single computer architecture over a range, or family, of devices. In other words, a program designed to run on one machine in the family could also run on all of the others. IBM spent approximately $5 billion to develop the System/360.
One member of the family, the IBM System/360 Model 50, was able to execute 500,000 additions per second at a price in today's dollars of $4,140,257. This computer was about 263 times as fast as the ENIAC.
During the third generation of computers, the central processor was constructed by using many integrated circuits. It was not until the fourth generation that an entire processor would be placed on a single silicon chip—smaller than a postage stamp.
Fourth Generation (1970–?)
The fourth generation of computer technology is based on the microprocessor. Microprocessors employ Large Scale Integration (LSI) and Very Large Scale Integration (VLSI) techniques to pack thousands or millions of transistors on a single chip.
The Intel 4004 was the first processor to be built on a single silicon chip. It contained 2,300 transistors. Built in 1971, it marked the beginning of a generation of computers whose lineage would stretch to the current day.
In 1981 IBM selected the Intel Corporation as the builder of the microprocessor (the Intel 8088) for its new machine, the IBM-PC. This new computer was able to execute 240,000 additions per second. Although much slower than the computers in the IBM 360 family, this computer cost only $4,000 in today's dollars! This price/performance ratio caused a boom in the personal computer market.
In 1996, the Intel Corporation's Pentium Pro PC was able to execute 400,000,000 additions per second. This was about 210,000 times as fast as the ENIAC, the workhorse of World War II. The machine cost only $4,400 in inflation-adjusted dollars.
Microprocessor technology is now found in all modern computers. The chips themselves can be made inexpensively and in large quantities. Processor chips are used as central processors and memory chips are used for dynamic random access memory (RAM). Both types of chips make use of the millions of transistors etched on their silicon surface. The future could bring chips that combine the processor and the memory on a single silicon die.
During the late 1980s and into the 1990s cached, pipelined, and superscalar microprocessors became commonplace. Because many transistors could be concentrated in a very small space, scientists were able to design single-chip processors with on-board memory (called a cache) and were able to exploit instruction-level parallelism by using instruction pipelines along with designs that permitted more than one instruction to be executed at a time (called superscalar). The Intel Pentium Pro PC was a cached, superscalar, pipelined microprocessor.
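A rough illustration of why these techniques pay off: an s-stage pipeline that issues one instruction per cycle completes n instructions in about s + (n − 1) cycles instead of the s × n cycles a non-pipelined design would need, so for long instruction streams the speedup approaches s; a superscalar design that issues w instructions per cycle can divide the cycle count by up to w again.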
Also, during this period, an increase in the use of parallel processors has occurred. These machines combine many processors, linked in various ways, to compute results in parallel. They have been used for scientific computations and are now being used for database and file servers as well. They are not as ubiquitous as uniprocessors because, after many years of research, they are still very hard to program and many problems may not lend themselves to a parallel solution.
The early developments in computer technology were based on revolutionary advances in technology. Inventions and new technology were the driving force. The more recent developments are probably best viewed as evolutionary rather than revolutionary.
It has been suggested that if the airline industry had improved at the same rate as the computer industry, one could travel from New York to San Francisco in 5 seconds for 50 cents. In the late 1990s, microprocessors were improving in performance at the rate of 55 percent per year. If that trend continues, and it is not absolutely certain that it will, by the year 2020 a single microprocessor could possess all the computing power of all the computers in Silicon Valley at the dawn of the twenty-first century.
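As a rough check on that projection: twenty-one years of 55 percent annual improvement compounds to a factor of 1.55^21 ≈ 10,000, roughly four orders of magnitude between 1999 and 2020.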
Michael J. McCarthy
Computer History

Computer Generation: Visions and Demands

Paper presented at the Third International Conference on Military Applications of Synthetic Environments and Virtual Reality, MASEVR'97, Sweden, by Anders Sandberg <asa@nada.kth.se> and Robert Söderberg <md87-rso@nada.kth.se>.

Abstract

The computer generation consists of the young people who have grown up with the spread of personal computers, cellular phones and information networks. Their visions, ideas and demands will likely have a strong effect on the development of virtual reality systems, and on future forms of organisation, interaction and politics. This is an attempt to analyse some of the ideas of the computer generation.

The computer generation?

Dividing people into generations is an exercise in futility, since there is no limit to the individuality and complexity of people. The more we try to define individuals, the more truth slips through our fingers. But at the same time, as we grow up we are imprinted with views, culture and ideas (memes [Dawkins76]) from society and especially from people of our own age and social group. Ideas which are imprinted during a formative period can persist throughout life, creating long-term effects as a generation grows up and influences society. The computer generation is a loose term, intended to suggest the generation of people who have grown up with computers.
The real computer generation, born in the 80's, has grown up with computers in the home. It is the first generation to have had access to computers all their lives and to take them for granted. To the older members of the computer generation, born in the late 60's and 70's, computers are something new and interesting. We still remember the Sinclair ZX-81, the Commodore VIC 20 and the Apple II. Computers and information technology are something that appeared during our youth before our eyes, but except among some enthusiasts they were not in common use until recently (the early 90's or so). However, the younger members of the computer generation have always had computers, and don't regard them as something new or special. They are tools to be used, just like the cellular phone, the other defining technology of the computer generation, a generation which could as well be called the communication generation.
In the following, we will look at some of the tendencies, visions and demands of the computer generation. Some of these concepts are technologically driven, some are the result of changing societal attitudes; they interact to form the world-view of the computer generation.

Media Awareness

The number and variety of available media have been steadily growing during this century, and each generation has adapted to them in its own way. "Generation X", described by Douglas Coupland (and defined by the marketing establishment), grew up with the modern media and mature advertising techniques; the result has been a surprising level of media awareness, an intimate knowledge of the languages of the different media and how they are intended to affect us. This has led to the idea of the "ironic generation", which sees through much of the surrounding lies and manipulation attempts and takes a perverse delight in parodying them. The advertising business quickly caught on, and invented rebel advertising: advertising that plays on anti-advertising views. The result has of course been an ever-escalating co-evolution of advertising and media awareness.
The computer generation has grown up with this, and moves the media evolution into cyberspace. It is at present hard to tell what will happen, but judging from the current trends many will no longer want to be information consumers, they want to take an active part in shaping the media around them.

The child of the remote control may indeed have a shorter attention span as defined by the behavioral psychologists of our prechaotic culture's academic institutions (which are themselves dedicated to little more than preserving their own historical stature). But this same child also has a much broader attention range. The skill to be valued in the twenty-first century is not length of attention span but the ability to multitask---to do many things at once, well. Douglas Rushkoff, http://jeffco.k12.co.us/di/navigate.html
The computer generation has not much need for long attention spans. In fact, in the high-bandwidth, interactive, fast-moving environment they have grown up in, a long attention span is likely to lead to confusion as too much happens. Instead an increased attention range is useful: the ability to deal with many different and contradictory things at once without being stressed by them.

Mistrust of Authorities


Bread and Circuits: The electronic era tendency to view party politics as corny -- no longer relevant or meaningful or useful to modern societal issues, and in many cases dangerous. Douglas Coupland
The computer generation is not anti-authoritarian. It is un-authoritarian. There is a widespread loss of confidence in authorities today, ranging from contempt for politicians to doubts about the big institutions of society such as traditional religion, science and the economy. The computer generation takes this further: it sees no need for authorities, and does not consider them legitimate. This is a profound difference from older generations, who might mistrust authorities but still accept them as authorities; to the computer generation they may have power and wealth but no intrinsic status.
Attempts to wield authority or manipulate are resisted; a common reaction is to ignore them. Attempts that cannot be ignored often lead to one of three responses: a sullen compliance, well planned to waste as many of the authority's resources and as much of its energy as possible through deliberate inefficiency; attempts to counteract the authority by circumventing it and its rules; or a militant, angry response which lashes out in general.
An illuminating example is the reaction of Internet users to various attempts to ban certain materials; this has in several cases led to the creation of multiple overseas mirror sites, regardless of what kind of material was threatened [Huber96]. The interesting thing is that this kind of disobedience has so far worked; it is becoming increasingly apparent that ordinary forms of censorship do not work well on the net.
On a personal level, the decay of the authority figure has also been accompanied by the disappearance of the idol. The computer generation doesn't attempt to emulate idols in everything like previous generations did, but instead picks and chooses the aspects it admires or likes, rather than taking an entire mental "package deal".

Locally - Globally

Another effect of the presence of powerful, ubiquitous communications and mistrust of authorities is the increasing importance of the personal network of friends and acquaintances. One of the main reasons authorities are less important to the computer generation is that they are remote and hard to reach. Remoteness is not measured physically, but rather socially. The personal network is close in social space, and hence gains importance, while the authorities fade into the distance.
These social networks are unbound by physical location; through cellular phones, Internet and email it is easy to remain in touch regardless of where in the world you are. To the computer generation there is no real difference between acting in a physical community and acting in a virtual community, except the obvious differences in how communication is done.
While the personal, local networks are the basis for social interaction and shared world-views, the computer generation is also becoming increasingly global. Through modern media and communications the world is becoming accessible. It is not the global village, rather the global network with a multitude of levels and subnetworks unbound by geography.

The Presence of Computers and Communications

While older people struggle to develop social norms for the use of cellular phones (for example, how to deal with pagers and phones during lectures or restaurant visits), the younger generations have already adapted to them, seamlessly integrating them into their lifestyle. The same occurred a few decades ago when teenagers occupied the phones, using them in "frivolous" ways their parents would never have dared to think of.

Universal access

SAN JOSE, Calif. (Reuter) - The number of personal computers connected to the Internet will rise 71 percent this year to 82 million, driven by demand by businesses to stay in touch with their customers, a report by a market researcher said Wednesday. 970820
One of the major demands of the computer generation is universal access: access to the infosphere everywhere, every time, for everyone. The technological details don't really matter to them; what is important is how the systems can be made to serve their purposes. What the computer generation wants is systems that do what they want without being obtrusive, a universal interface to the information they take for granted. When this link is broken, the loss is felt strongly; being cut off from the Net is almost physically painful.
The spread of the Internet, and the accompanying hype, is instructive. It is almost taken as a natural fact that in a few years almost everyone will have access to it, and this is a self-fulfilling prophecy: it motivates people to get access in order to avoid being left out, and companies to invest in this future mega-market, which of course makes the net grow faster.
At present, around a third of all households in Sweden and the US have a personal computer, and the percentage is rising at a high rate. While there is some lag between different countries, the trend is fairly clear. The same trend is noticeable with the spread of cellular phones and other new communications technologies: access anywhere, anytime, and an increasing speed of adoption.
When I asked a young friend what his demand for the future was he shouted: "Bandwidth to the people!".

Games

Video games are the first example of a computer technology that is having a socializing effect on the next generation on a mass scale, and even on a world-wide basis," says Patricia Greenfield in Mind and Media (Harvard University Press). As anyone who has played with game-addicted youngsters knows, they often have extraordinary semiotic skills. Describing the embarrassing experience of being thrashed at Pac-Man by a 5 year old, Greenfield says, "as a person socialised into the world of static visual information, I made the unconscious assumption that Pac-man would not change visual form. Children socialized with television and film are more used to dealing with dynamic visual change." At some things, it seems, our kids are destined to be smarter than us. McKenzie Wark, Super Mario Mania
One of the more unexpected sides of the spread of computers is the effect of games on the mindset and skills of the computer generation. According to surveys, after word processing the most common application of personal computers is computer games. Games are even more common among younger people, needless to say. The computer game industry has gone from an insignificant offshoot of the arcade game industry in the early 80's to a major player in the media world, earning billions of dollars. By becoming a vital part of the entertainment industry it influences the world-view, imagery and ideas presented in other media.
The entertainment industry is also driving much of the development of consumer electronics and advanced computer graphics such as virtual reality; at present it is unclear if the most advanced and portable VR systems can be found in academic, military or commercial research labs, but the commercial systems will definitely have the biggest impact on the population. And as the games get more graphically and computationally demanding, the consumers buy more powerful computers; today's computer games are one of the most important forces in the development of personal computers.
But the presence of computer games also influences the players.
Computer games are a new phenomenon, not similar to any previous play activity. But just like all games played by children, they serve to train new skills.
What is especially interesting in the context of this conference is of course the popularity of strategic or tactical games such as Command & Conquer, Close Combat and Harpoon, as well as simulation games such as Sim City and Civilization. It seems that as the graphics and interfaces improve, a wider and wider audience has begun to play strategy games which were previously almost solely confined to a small strategist subculture.
Another class of games that are very relevant to this conference are networked games such as MUDs, Xpilot, Doom, Quake and Descent. These games are based on having several players interacting in the same game environment simultaneously, either working together or against each other. It is quite common for teams with shared goals to form, and work together against other teams. The members can be distributed across the network, sometimes with no out-of-game contact.
What skills do these games teach?
  • They teach the player how to manipulate virtual worlds, using iconic or direct-manipulation interfaces not unlike those suggested for "real" virtual reality applications. This suggests that the computer generation will have grown up with interaction skills significantly better than those of older generations.
  • They teach practical strategy and tactics. Many of these games embody many important strategic concepts (like "concentrate your forces on the weak points in the enemy's defense" or "mobility is very important"), and through repeated playing the players learn them by experience. In fact, in some of the real-time games the players are taught to react tactically instinctively. This suggests that while the computer generation will not necessarily consist of Sun Tzus or Machiavellis, strategic skills will be noticeably more widespread than today.
  • The networked games not only train the same tactical skills the single-player games do, they also train the ability to form teams and act together as a team. Team formation is highly flexible, and does not rely on prior meetings or any formal command structure. Instead it relies on informal, on-the-fly decisions, and the teams can split, merge or dissolve depending on need. These digital interpersonal skills fit well with the emergence of virtual communities (which the networked games, especially the MUDs, have largely influenced).

The Net

The spread of the Internet will have a profound effect, not only on a practical level but also symbolically. As we have mentioned, the growth of the net is self-enhancing, and has come to symbolize many of the values of the computer generation: global access, freedom from authorities, global reach combined with personal networks, and fast and flexible growth. At present the Internet is still very limited, but there is no sign that the growth will decrease, quite the opposite [Sciam97, Brimelow97].

Interactive information

...many members of the current computer generation are using the Internet, cable, and newly established video news services rather than the newspapers or even their own locally-originated tv morning and evening news programs as the source of their information and, more importantly, marketing services.
David Graham Halliday, Point of View 7.02, http://www.viewfinder.com/pov/pov702/stwise.html
One of the developing markets is interactive information services. Traditional media have increasingly begun the transition towards becoming more interactive. Typical examples are the net versions of newspapers, magazines and journals, the rise of "viewer's choice" television where the viewers are invited to influence the program, and especially the attempts to create new, interactive forms of entertainment and information intended to replace the current systems. The demand for universal access makes this both possible and desirable; for the media producers it is a potential market, for the media consumers it is content. The computer generation goes further: it is used to interactive information and does not like static information that just sits there. Ideally, information should be something one can not only interact with, but use, edit, influence and participate in. It is a more experiential mode of participation than the reflexive information used by past generations [Norman94]. In order to be taken seriously by the computer generation, you have to communicate with its members, not just speak to them.

...the marvel of postmodern communications [makes it so that we] recede from one another literally at the speed of light. We need never see or talk to anybody with whom we don't agree." Harper's Magazine
At the same time, we are seeing how media are becoming increasingly fragmented as everyone can freely choose what they want to see and what they want to ignore. In a newspaper it is not possible to ignore the front page headlines, but on the net it is trivial to set up a killfile to remove postings from irritating people, or simply not to access websites with differing views. This leads to an increasing divergence of world-views as the Net and Net media grow more important: people already see the world in fundamentally different ways, and with the help of information filtering they can create their own (tunnel-) realities.

Accelerating Change

Things are changing faster than ever, and the rate is increasing. To the computer generation this is the normal state, easy to take for granted. We are increasingly seeing a divergence into a "fast lane" and a "slow lane" society, where different parts change at different speeds. This creates huge tensions as old institutions gradually fall behind and struggle to keep up; witness the legal problems caused by the Internet or genetic engineering, which are clearly beyond the ability of the current legal system to handle.
The flexible, ambitious people of the fast lane feel constrained by the slow lane, and want to break free or at least overtake it. This will drive the emergence of even more diversity and new forms of organization, as well as increasing tensions between the more and less adaptive people.
It is interesting to note that not even the computer generation is completely positive about the speed of change. They see its downsides quite well, but instead of trying to halt change as certain extreme neo-conservatives and luddites suggest, they want to find new ways of dealing with change in order to avoid the tensions and risks. As a friend put it: "It is funny to realize you are nostalgic at 20".

Diversity and Divergence

The computer generation does not have a unified vision. It has a multitude of individual visions, all different. Some individuals have very clear visions of the future and how to get there, while others lack them and see a depressing view of the future similar to the present (only worse). There are clear social differences here; education, social class and sex do influence the amount of optimism and ambition which is emerging. In general the cities tend to be more optimistic and future-oriented than rural areas, but there are notable exceptions. A good education and a learned flexibility go a long way to make you optimistic, at least about your own future. Women are quickly edging in, and may in the long run gain an advantage over men in the networked society that is emerging.
There is a noticeable risk that there will be a division between the people ambitious and interested enough to learn and use the new technologies, and the people who do not care or dare to learn them. We are already seeing not just a generation gap between generations, but an intra-generation gap between different parts of the same generation.

New, Flexible Organizations

Corporate bureaucracies and centralized planning will not be favored in the Digital Age. Plans must be made quickly and acted upon quickly. Time to market will become even more critical, with a couple of months delay being perhaps fatal. Individuals throughout organizations will need to be empowered to make decisions. Forbairt Internet Report
As we have seen, one of the major demands of the computer generation is flexibility and speed, and it tends to organize along informal networked lines. This suggests the need for, and eventual emergence of, new forms of organization, likely based on today's virtual organizations, that can quickly adapt to changes and meet new demands. If an organization isn't flexible or doesn't respond fast enough, the computer generation tends to circumvent it, go to another organization or make up its own.
The importance of this cannot be overstated. A typical example is the Swedish Young Scientist Association, which was founded around 1969 and is a traditional association with chapters organized into districts, with traditional statutes and elections. At present it has around 3000 members, slowly decreasing. Compare this to Sverok, the Swedish Roleplaying and Conflict Game Association, which was founded in 1988 and at present has 28000 members, quickly increasing.
What is the difference? Both are directed towards the same age-group and often have overlapping interests; formally they appear to be very similar. Sverok is basically a virtual organization, with little need for a traditional bureaucracy since most of the individual chapters are completely independent but keep in touch through the net (while the young scientist association still largely relies on traditional newsletters and meetings). The organization can quickly adapt, presenting a unified face outwards if needed but otherwise acting on a very local level. Obviously this has paid off.
The same will apply in other fields; we are already seeing how traditional, large inflexible corporations are faced with competition from small, flexible networked companies. Likely this trend will continue in other areas, including politics and societal institutions. Do we really need all that bureaucracy when it could be handled by computer?

The Eclipse of the National State

Political, social and commercial systems are being outpaced and outmoded by technology. Peter Cochrane, Head of Research, BT Laboratories
Few organizations are as stable as national states (they are intended to work that way). This suggests that they and their parts will have increasing problems staying relevant as things change and the demands for flexibility increase. The idea of the eclipse of the national state is spreading, and taking on momentum. The rise of virtual communities (with increasing economic and political power), digital, untaxable money moving across the net [McHugh97], unauthoritarian thinking and the globalisation tendencies all serve to undermine the national state. Just as the legitimacy of authorities is being questioned, the legitimacy and relevance of the national state idea is being questioned by the computer generation. When the only difference between countries is the domain name, why bother?
This is in many ways a naive idea, but it is also self-fulfilling since the more people ignore national boundaries and institutions, the less important they will become. And very real developments such as strong cryptography and digital cash with accompanying strong corporate and financial interests are on the internationalization side.
As the national states are becoming less important, loyalty to one's network, friends and culture become more important. This is no longer linked to geography, rather to the social geography of cyberspace. It is quite possible to enjoy one's culture without linking it to a state.

Enhanced reality

Computer games and virtual communities are virtual worlds, but most people actually prefer to live in the real world. One development which we believe will become increasingly important is enhanced reality, the combination of virtual and real. One way of bringing this about is wearable computers, also known as smart clothes. The idea is to integrate computers and communications equipment into the clothes of the user: a system which is always present, adapts to the user and allows the user to interact with both the real world and the information environment.
At present wearable computers are just experimental systems and toys, but it seems likely they will grow in use since they can provide many of the things the computer generation wants: universal access, flexibility, quick access and "existential media" [Mann97], media which allows self-expression and self-control.
Besides the obvious changes in how people interact with computers (is there a need for offices if the computers are part of the clothing? Why not work in the park? And is the computer separate from yourself, part of you, or an "exoself"?), the social implications are interesting. Virtual communities can easily "tune in" to the same information channel, experiencing a shared enhanced reality. A multiplayer networked game set in enhanced reality appears entirely feasible, and would combine the excitement of current games on personal computers with the physical excitement of "laser tag"; such games will certainly be developed once the technology is adopted by enough people.
We may see new tools to create virtual communities, "social software" intended to facilitate social interaction, sharing of information and world-views. As reality itself becomes more tunable (edit out the parts you don't like) the divergence of world-views seen today will grow exponentially [Chislenko96]. It is also likely that the way we see reality itself will change.
In the long run, technologies such as nanotechnology [Drexler87] may even make the physical environment just as mercurial as the software world. If that happens, then enhanced reality takes on a whole new physical meaning - reality has finally become an information medium among others.

Conclusion

The computer generation is the logical result of the technological and social development of the late 20th century: spoiled in some sense with easy access and material wealth, quickly adapting to a world that changes faster and faster, networked and used to technology. It is already one of the driving forces in the development of the Internet, virtual environments and the ideas that will determine the direction of global society in the early decades of the 21st century. The computer generation is right now mainly in its teens. Long before 2010 it will be a noticeable political, economic and technological factor. Its visions can be described as:
  • Access for everyone, everywhere, every time.
  • Networks instead of authoritarian institutions.
  • Social geography more important than physical geography.
  • Interactive, engaging information.
  • Diversity instead of uniformity.
  • Flexibility and speed instead of tradition.
  • Reality as software.

References

Picture Sources