Reckoners, 1st Gen., page 0131
6
To the First Generation
The unexpected results would then enable machine owners to say with
something akin to parental pride, "My machine (instead of 'my little boy')
said such a funny thing this morning!"
-Sarah Turing, quoting her son Alan M. Turing, c. 1950
At the end of the Second World War computing machines abounded in an astonishing
variety. There were network analyzers, differential analyzers, analog computers for
directing gunfire; cross-footing punches, punched-card readers, tabulators, sorters,
collators, and multipliers. There were key-driven adding machines, and mechanical
calculators: hand-cranked, lever-actuated, and electrically driven; also slide rules,
planimeters, and other analog instruments. There were machines that solved certain
differential equations, and machines that solved simultaneous equations: some analog,
others digital. And there were automatic digital computers: Zuse's Z4, the Harvard Mark I,
the Bell Labs Relay Computers.
Finally there was the ENIAC: an electronic, programmable digital computer. In some
ways it was like the other computers. It cost as much as the Mark I; it was as complex as
the Differential Analyzer; it computed with electronic circuits like the IBM 603 Multiplier.
But its combination of features made it unique. And in Britain there was a corresponding
electronic computer, the Colossus (actually, ten copies were built by 1946). It did not do
arithmetic, but used vacuum tube circuits to help break the codes of intercepted German
radio messages. (Only a handful of persons knew of its existence; not until 1970 was it first
publicly mentioned in a paper by I. J. Good.) Like the ENIAC the Colossus was to be an
influential machine, but for most in Britain the first news of an electronic digital computer
was Hartree's note on the ENIAC in Nature in 1946.1
Besides those computers, there were others in 1946 still in the planning stage or under
construction: the IBM SSEC, the Bell Labs Model VI, the EDVAC from the Moore
School, and the Mark II from Howard Aiken's lab at Harvard. The end of the war lifted a
lot of pressure to get the machines running quickly, so some of those projects dragged on
through the end of the decade. The result was
that the ENIAC was the only large-scale electronic computer in service in America for a
few years, while in Germany the Z4 was the only digital computer in continental Europe
until 1950, and even it was only marginally operational. The Colossus computers were
probably also kept in service, even if there were no more German codes to break. They
were just too important to send to the scrap-heap.
If computer construction slowed down right after the war, it was not for a lack of
interest. Conferences were held at both the Moore School and Harvard where interested
persons from all over were able to compare their individual projects, as well as get a
close-up view of two of the biggest computers of them all. Those conferences were a forum
where the many different theoretical approaches to computing could be criticized, and
where a consensus about what computers ought to look like first took form. The
proceedings of those conferences were published later in the decade and were widely read
throughout America and Europe. They serve as a bench mark to fix the state of the art of
machine computation as it stood in the immediate postwar years.2
In both the Moore School Lectures and the Harvard Symposium on Large-Scale
Calculating Machinery the number of persons wanting to attend far exceeded expectations.
Herman Goldstine saw that as a sign that "it was obvious to everyone that there was an
extremely large but latent worldwide need for computers."3 Of course, he wrote those
words twenty-five years after the events, at a time when computers had
become cheap, small, and almost ubiquitous. The excitement of those conferences was
genuine; it reflected the discovery by many computing pioneers that they were not alone.
Except for the Colossus, the computers from that era were all numerical processing
machines. They tackled problems by doing a lot of arithmetic on arrays of input data,
boiling the numbers down to a determinant, an integral, or the range of a shell's trajectory.
The end of the war signaled an end to the pressure to do those calculations, and there was
an opportunity to think about just what problems a computer should be designed to solve.
Emphasis continued on numerical capabilities, but from 1946 on there was a gradual
understanding that the computer is a machine capable of processing any data that could be
put into coded form-and that encompassed a lot more than numerical data.
That shift was not at all obvious in 1946. (Konrad Zuse had defined "reckoning" that
way, but he did not participate in these postwar conferences.) An understanding of the
true nature of computing emerged from discussions among the computer pioneers about
their machines, and especially about a feature they were considering for future computers.
That feature was of course the stored-program concept, which, when finally understood,
revealed the logical basis of what they had just invented.
THE STORED-PROGRAM CONCEPT
A modern digital computer stores its instructions and data in the same physical memory
units. Any memory register can store either a program's instructions or
the data on which it acts. One reason for that is speed-as Eckert and Mauchly recognized
in their thoughts on the ENIAC's successor, an electronic computer must have its
instructions as well as its data supplied at speeds comparable to that of its electronic
processing units.
But why should both be stored in the same way in the same registers?
Instructions do not look anything like numbers: they are usually much shorter, and
they do not otherwise seem to have much in common with numbers.
One practical reason is that every problem has a different mix of instructions and data;
furthermore bulk memory devices are not cheap. Therefore in the interests of economy it
pays to pack everything onto the same physical devices, rather than have separate
memories for instructions and data, with wasted space in one or both units.
But that is only part of the answer. The reason a computer stores them identically is
that fundamentally they are identical - they are equivalent to each other.
That means, among other things, that each can be converted to the other. There is
simply no good reason for not keeping them together, especially for the kinds of
problems computers have been doing since the war - problems not characterized
as long sequences of numerical calculations. (Indeed, a machine that does only
numerical calculations may store data and program instructions separately, so
long as both are in a high-speed memory. Modern programmable calculators
have such a structure, and thereby take advantage of the relatively shorter length
of calculator instructions.) The stored program principle, when recognized and
implemented, was the key to the power of the electronic digital computer; it gave
the world since 1945 the "computer revolution."
Decimal multiplication illustrates that equivalence. Recall that a calculator designer has
two choices: the machine can multiply either by repeated addition or by looking up
products in a times table. The two methods are equivalent in that each calculates the same
answer. But they proceed in different ways. The former consists of a lot of detailed
instructions: add two numbers, shift a radix point to the left or right, add again, etc. The
latter consists of fewer instructions, but requires looking up partial products in a data
table. A designer can trade off one for the other, thereby giving the computer the desired
speed, ease of construction, and cost. The example on page 134 shows that equivalence.
Machine multiplication is perhaps a trivial example, but the principle it illustrates holds
true for more complicated computer operations as well.
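That equivalence can be sketched in modern code. Python is of course an anachronism here, used purely for illustration, and the one-digit "times table" scheme below is a simplification that handles only a single-digit multiplier:

```python
# Two ways a machine can multiply, sketched in Python (illustrative only).

def multiply_by_repeated_addition(a, n):
    """Add a to itself n times, as an accumulator machine would."""
    total = 0
    for _ in range(n):
        total += a
    return total

# A one-digit "times table", as a wired-in lookup might hold it.
TIMES_TABLE = {(i, j): i * j for i in range(10) for j in range(10)}

def multiply_by_table_lookup(a, n):
    """Multiply a multi-digit number a by a single digit n using only
    table look-ups, shifts of the radix point, and additions."""
    total = 0
    place = 1
    while a > 0:
        digit = a % 10
        total += TIMES_TABLE[(digit, n)] * place  # partial product, shifted
        place *= 10
        a //= 10
    return total

# Both routes reach the same product: 355 x 4 = 1420.
print(multiply_by_repeated_addition(355, 4))  # 1420
print(multiply_by_table_lookup(355, 4))       # 1420
```

The first routine trades time (many additions) for simplicity; the second trades memory (a stored table of partial products) for speed, exactly the designer's trade-off described above.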
Another way of stating the relationship is to say that a computer program causes the
machine to perform actions that rearrange its input data into a structure different from
what it was when those data were fed into it. The (dynamic) action of a computer is thus
described by the (static) structure of the data it contains at the end of a computation.
For practical purposes there is no good reason to draw a distinction which for many
problems might turn out to be artificial and restrictive. So besides the advantage of high
speed and efficient use of memory space, a computer will store its program internally in
the same memory units as its data. Beyond a reasonable
                 MULTIPLICATION OF 355 x 4

   By Repeated Addition          By Table Look-up

         355                           355
       x   4                         x   4
       -----                         -----
         355                           200   units parts of product
       + 355                         + 122   tens parts ("carries"),
       -----                         -----   obtained from a table and
         710                          1420   shifted one place left
       + 355                                 before adding
       -----
        1065   (add 355 to
       + 355   itself four
       -----   times)
        1420
limit, the designer or programmer has a lot of leeway in choosing how the
computer will do its business. He or she can choose to replace instructions by
sets of data, "hardware" by "software," or vice versa. (The limits are
theoretically very small: a machine having a repertoire of only a few simple operations,
and enough memory space, can do any computation that any computer, no matter
how sophisticated, can do.)
That relationship is like the equivalence of mass and energy in relativistic physics, or
like the relation between position and momentum in quantum mechanics. At the smallest
level of observation they cannot be distinguished (cf. Heisenberg's Uncertainty Principle).
By analogy, at the level of an individual yes-or-no binary digit (bit), one cannot say
whether it signifies an action as part of a program or whether it is part of the data on
which the program acts.4
But the concept is of more than just theoretical interest. A machine which
stores its data and instructions together can take advantage of that fact. What's
more, for certain types of problems the advantages of not separating them
become significant. Those problems are precisely the ones that involve a lot of
logical switching and branching of instructions, mixed in with numerical processing.
THE STORED PROGRAM: ALAN M. TURING
In 1936, a decade before the first real computing machines were built, the British
mathematician Alan Turing wrote a paper concerning the concept of computing, in which
he established the equivalence of data and instructions.5 Alan Turing's contribution to the history
of computing is difficult to measure. In the postwar conferences on the design of
computers he was almost never cited; few persons attending them even knew who he was.
Even in the development of British computers there is little agreement as to what he
contributed. He was a member
of the team at Bletchley Park that designed and built the Colossus. But just what he did
there is still classified information, although it is known that he made absolutely crucial
contributions to their cracking of the coded messages sent by the Germans' "Enigma" and
Geheimschreiber ("secret writer") machines. After the war Turing was involved with two
important British computing projects, the Automatic Computing Engine (ACE) at the
National Physical Laboratory in Teddington, and later the computing projects at the
University of Manchester. But his contribution to those projects is by no means as clear
as, say, Eckert's to the ENIAC or Stibitz's to the Complex Number Computer. (He died in
1954 before it occurred to anyone to ask him those questions. So we may never know.)
Yet he has a central place in the history of computing for that 1936 paper
alone. It was there that he answered the question "What does a computer do
when it computes?" adequately for the first time-yielding among other things
the relationship between a machine's operations and its data just described.6
No stored-program digital computers existed in 1936, but there were of course human
computers as well as computing installations where people manipulated numerical data
with electromechanical or punched card machinery. And a century earlier Charles Babbage
had a clear idea of what his Analytical Engine was supposed to do. But that empirical
understanding was inadequate for mathematicians concerned with the question of whether
one can decide in advance whether the answer to a given mathematical
problem can be computed at all, by whatever means. That question was called the
"Entscheidungsproblem," German for "decision problem." The Göttingen mathematician
David Hilbert first posed it in the 1920s. For a problem like:
4x² + 2x - 3 = 0
there is no trouble, but what about lengthier problems, especially ones involving not
ordinary algebra but statements of symbolic logic and their rules of combination? Was
there an effective procedure for testing such a problem and finding out whether a solution
exists?
Turing proved that there is no such procedure. He proved it by giving for the
first time a logically rigorous definition of what it means to compute a solution to
a problem by some definite process. That definition had to be good enough to
convince the mathematical logicians, who were a skeptical lot to begin with. It
did not really have to convince those who worked with computing machines:
they went merrily on with their computing, oblivious to the debate over
the logical fundamentals. But as stored-program electronic
computers with enormous power and speed appeared after the war, such a definition was needed.
Turing's has served that need ever since.
Alan Turing was born in 1912 and according to his mother was "a scholar born." He
showed a precocious talent for mathematics, and entered King's College, Cambridge, in
1931, where he excelled - passing Part II of the Tripos with distinction in 1934 ("He came
out a Wrangler with a B star"), being elected
to a fellowship at King's in 1935, and winning the Smith's Prize in 1936 for his thesis on
"The Gaussian Error Function."7
That same year (1936) he submitted his paper "On Computable Numbers, with an
Application to the Entscheidungsproblem" to the Proceedings of the London
Mathematical Society, where it appeared early the next year. Mathematical logicians
quickly recognized it as a fundamental contribution to their field, and Turing as a brilliant
and wholly original thinker.
The "computing machine" he constructed to test the decision-problem was a
paper abstraction of the activity a human being goes through when computing a
number. A human computer would presumably have at his or her disposal
enough sheets of paper, a pencil or other instrument to make marks on that paper
(and an eraser to erase marks from that paper). Turing gave his "machine" not
sheets but a long strip of paper, of which only one space at a time was accessible
for writing or erasing. But his machine could advance or back up along that strip,
and the strip was considered to be of indefinite length, so it was effectively the
same as ordinary sheets of paper. Likewise his machine had a restricted set of
symbols that could be marked on the strip: the binary digits 1 and 0, plus a few
more; but that, too, was only an abstraction of the presumably richer repertoire of
symbols that humans use when they do arithmetic.
The most interesting part of his "machine" was the way it acted depending on
what symbol it saw on the tape at any moment. Turing's machine was capable of
assuming a finite number of states (which he called "configurations"); its action
at any moment depended on the combination of the state it was in and the symbol
it scanned on the tape. It could write a new symbol on the tape, erase whatever
symbol was there, move the tape a space to the left or right, or stop the machine
completely.
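Turing's paper machine is simple enough to sketch as a short modern program. The simulator below is an illustration only, not Turing's own notation; the state names and the three-rule binary-increment program are invented for the example:

```python
# A minimal Turing-machine simulator (an illustrative modern sketch).

def run_turing_machine(table, tape, state, pos, blank="_"):
    """table maps (state, symbol) -> (new_symbol, move, new_state);
    move is -1 (left), +1 (right), or 0; new_state 'HALT' stops.
    Returns the final tape contents as a string."""
    tape = dict(enumerate(tape))          # sparse tape, indefinitely long
    while state != "HALT":
        symbol = tape.get(pos, blank)     # scan the square under the head
        new_symbol, move, state = table[(state, symbol)]
        tape[pos] = new_symbol            # write (or overwrite) a symbol
        pos += move                       # advance or back up the tape
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Three rules that add 1 to a binary number, the head starting at the
# rightmost digit: flip trailing 1s to 0s, then set the first 0 (or blank).
INCREMENT = {
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1",  0, "HALT"),
    ("carry", "_"): ("1",  0, "HALT"),
}

print(run_turing_machine(INCREMENT, "1011", "carry", pos=3))  # -> 1100
```

Note how the transition table intertwines "program" and "data": the machine's next action depends jointly on its internal configuration and on the symbols already written.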
The closest human analogy to this idea is to think of a person doing a computation who is
constantly being interrupted as he works: the telephone rings, someone comes to the door, etc.
Each time that happens he has to stop what he is doing and make a note telling himself where he
left off, so that he can resume the calculation later on. So he writes a description of the "state" of the computation,
writing it down somewhere on the same sheets of paper, using the same symbols
as those used for the computation itself.
Now imagine that he is interrupted at every little step of the computation. His sheets of
paper will then contain not only the computation itself but also a complete description of
the steps necessary to proceed with it.
It is not difficult to conceive of a machine endowed with those capabilities of
scanning a tape, writing or erasing a symbol on that tape, moving the tape
backwards or forwards a square at a time, and halting. Such a primitive machine
could compute the answer to any problem whose description was supplied to it by a
sequence of symbols on a paper tape; it could do that as well as any other machine, however
sophisticated. (By showing that there were problems for which his machine would never
halt, Turing proved that the decision-problem
could have no solution.)
From the standpoint of actual computers like the kinds built after 1940, Turing's machine has several interesting aspects. First, unlike, say, Zuse's Z3,
whose program takes the form of a linear list of instructions, Turing's machine
jumps around at every step, depending on the results of its last action. Every step a
Turing machine makes is a "conditional branch" - this in spite of the fact that it
has only a linear tape on which to operate. Second, the actions specified by the
machine's internal table and by the sequence of symbols on the tape are so
intertwined with the data on the tape or in the machine that it is impossible to say
which is which. The machine's configurations represent both its internally stored
data and its program. Thus in 1936 Turing first described the stored-program
principle.
Finally, he showed that his machine could not only compute numbers, but that it could also
compute a set of symbols, which when fed into another Turing machine, would make it act
like the first. In other words a computer can produce programs as well as data as output.
Furthermore, any general-purpose computer can be programmed to simulate the behavior of
any other special-purpose computer. Turing proved, mathematically, what the
ENIAC's programmers knew they were doing each time they rewired their computer for a
new job.
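In modern terms, a program whose output is itself a program illustrates the point. The example is invented, with Python's exec standing in, loosely, for feeding a tape of symbols to a second machine:

```python
# A computer producing a program as output (illustrative sketch).

def make_doubler_program(n):
    """Output is not a number but a program: a string of code
    that another 'machine' can be fed and made to execute."""
    return f"result = {n} * 2"

program = make_doubler_program(21)   # a program, produced as output
namespace = {}
exec(program, namespace)             # a second "machine" runs that output
print(namespace["result"])           # -> 42
```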
How influential was that paper on the development of digital computers (that is, aside
from its well-documented influence on mathematical logic)? It must have played a role in
the design of the British code-breaking machines built at Bletchley Park, including the
electronic Colossus. Turing was not ignorant of electrical engineering, and even in 1936 he
had talked of the possibility of building an actual machine that did what his paper machine
could do.8
In 1945 he wrote a report on the design of a full-scale, general-purpose digital
computer along the lines of the American EDVAC, then being planned by the
ENIAC team. That report describes a computer much influenced by the Americans'
ideas but at the same time reveals that Turing had a different understanding
of the nature of a computer. In particular, he proposed a machine that would
not only store its program internally but, in contrast to the EDVAC design,
would not necessarily store its instructions as a linear list in the tradition of the
paper tape machines. Instead it would have a program decoder that would allow
instructions to be placed anywhere in the computer's memory, in any order.
The Automatic Computing Engine, or ACE, was to be very much in the spirit of
Turing's theoretical work of the previous decade. Turing left the ACE project in
1947, and the machines that the National Physical Laboratory eventually built
(the Pilot ACE in 1950, and the ACE in 1959) did not conform to his ideas that
strongly. But his influence was there nonetheless.
THE STORED PROGRAM: JOHN VON NEUMANN
The central figure in the transition of computing from machines like the ENIAC to the
modern stored-program computer is John von Neumann (1903-1957), the Hungarian-born
mathematician whose contributions to that "queen of sciences" ranged wide and deep.
Von Neumann came to America in 1930, and with the onset of the Second World War
he willingly offered his services to the Allied cause. He managed somehow to involve
himself with half a dozen wartime projects simultaneously. Among them was the
Manhattan Project, where he concentrated his mathematical genius on the complex
problem of designing the explosive lenses for the first atomic bomb.
Indeed it is surprising that computing machinery was something he was not involved
with at the beginning of the war. That changed in the summer of 1944, when by chance he
learned of the ENIAC (then still under construction) while waiting for a train at the
Aberdeen, Maryland, station.
Not long after, von Neumann joined the ENIAC team. (It welcomed him at first
although later on that team broke up, in part because of resentment by some of the other
members of his activities.) By late 1944 the design of the ENIAC was already frozen, but
Eckert and Mauchly had begun designing its successor, the EDVAC. The central feature of
the EDVAC would be its far larger high-speed memory - on the order of a thousand numbers
instead of the ENIAC's twenty. In 1944 Eckert was thinking of using a magnetic drum or
disk to store those numbers, but later he adopted as a storage medium a tube of mercury in
which digits would be stored by acoustic pulses. (Since the speed of sound in mercury is
much lower than the speed of electrical pulses in wires, trains of pulses could be recycled
through the mercury column and periodically refreshed by electrical circuits, thereby
storing a good number of digits.)
Eckert had further recognized as early as January, 1944, that the memory units should
store the program as well as the data. Even though the ENIAC was not finished yet, he
recognized that programming it was going to be tough. The next computer would be much
easier to program by virtue of its internal read/write program storage.9
Into that state of affairs stepped von Neumann. He joined the discussions regarding the
EDVAC-and he grasped its principles almost immediately. By the middle of 1945, those
principles included the following:
First, it would have as much high-speed memory as possible, using the mercury delay
lines. That implied that the memory unit would only store numbers; to have it perform
addition as well would make it complicated and expensive. The EDVAC would have
storage registers, in contrast to the ENIAC's accumulators.
Second, the EDVAC would have a central arithmetic unit which performed all
arithmetic, including addition. There would be only one such arithmetic unit, through
which all computations would flow. That meant that the EDVAC would have fewer tubes
than the ENIAC (despite the EDVAC's larger memory). It also meant that its programs
would consist of a sequential list of instructions, performed one at a time. The EDVAC
could not execute several program steps simultaneously, as the Harvard Mark I or the
ENIAC could. Whatever time was lost they felt would be more than made up by the
ability to set it up for a new problem more quickly.
Third, the EDVAC would not only execute programs sequentially: it would
also handle the numbers themselves sequentially - that is, a digit at a time instead of in
parallel. That followed mainly from the nature of the storage medium. The mercury delay
lines stored numbers as trains of acoustic pulses, so it was natural that operations such as
addition would be performed digit by digit as each emerged from the mercury tank.
Finally, the EDVAC would use the binary system of numeration throughout. Separate
program routines converted those numbers to and from decimal at the beginning and end of
a computation.
Of all the pre-1945 machines, it was the Zuse relay computers that the EDVAC's
"architecture" most resembled, in all but the serial method of doing arithmetic. That overall design
reflected the optimum balance between complexity, power, and speed, both for Zuse's
relay implementation and for the Moore School's electronic approach.
Von Neumann contributed to the EDVAC's design, although it is hard to say in what
specific ways. He summarized its features in a document called "First Draft of a Report on
the EDVAC," written in the spring of 1945. It was not published in full until 1981, but
when it first appeared it was widely circulated and read in typescript. The report was
enormously influential. In it he described the logical design of the EDVAC, with its high-
speed memory and overall architecture. He also described its use of the binary system, and
its ability to store programs internally. Electronic computers have pretty nearly all looked
that way ever since.10
The report also stressed the EDVAC's sequential processing of both program
instructions and individual digits of numbers. But even as that design was taking form at
the Moore School, von Neumann broke with Eckert and Mauchly and went to the Institute
for Advanced Study in Princeton, where he initiated a project for a new computer, similar
to the EDVAC but having a memory device that could deliver all the digits of a number in
parallel, as the ENIAC's accumulators or the Zuse computers did. Descriptions of that
"IAS" computer appeared in 1946, in papers coauthored by von Neumann, Arthur Burks,
and Herman H. Goldstine. (Burks and Goldstine had also left the Moore School when von
Neumann did. Von Neumann's dissatisfaction with his role on the EDVAC team may be
inferred from the fact that the IAS had a policy of not doing experimental work. Von
Neumann used his considerable powers of persuasion to get them to make an exception in
his case.)11
The machine described in those reports would not be completed until 1952, but the
reports themselves established a design philosophy for computers that has persisted to the
present day, despite the amazing shrinkage of the electronic components that go into them.
To sum up, the design (which has come to be called the "von Neumann architecture")
resembles the EDVAC's except that all digits of numbers in the machine are manipulated in
parallel, to increase speed. (Program steps are still executed sequentially.) The memory
unit of the computer is further designed to be random-access, with any item as quickly
accessible as any other. Memory is
organized into several hierarchical levels, with slow, high-capacity memory units, such as
magnetic tape or punched card readers, serving as a back-up to the fast, but expensive and
smaller main memory.
Finally, the IAS machine was to have a simple, basic set of a few instructions, out of
which all the computer's actions would be formed. With each instruction (there were about
two dozen) there would be only one address, giving the memory location of one operand
on which it would act. The other operand, if there was one, was assumed to be already in
one of the arithmetic unit's registers; the result would likewise remain in the arithmetic
unit, to be used for the next step if desired. The address of the next instruction was
likewise not given; it was assumed to be in the memory location right next to the
instruction just executed. Thus although the IAS machine was a stored-program computer,
it normally executed instructions in a steady linear stream coming from the memory, just as
if they were coming off a tape. The difference of course was that among the instructions
were special "branch" commands that directed the program to jump to a different memory
location for the next instruction, and since the computer had a random-access memory,
those jumps were just as fast as any other sequence.
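A toy version of such a one-address machine makes the scheme concrete. The sketch below is purely illustrative - the opcode names and memory layout are invented, not the IAS's actual order code - but it shows instructions and data sitting side by side in one memory, with JUMPPOS playing the role of the "branch" command:

```python
# A toy one-address, stored-program machine (opcodes invented for
# illustration): one accumulator, instructions and data in one memory.

def run(memory):
    acc, pc = 0, 0                       # accumulator and program counter
    while True:
        op, addr = memory[pc]
        pc += 1                          # normally: the next instruction in line
        if op == "LOAD":      acc = memory[addr]
        elif op == "ADD":     acc = acc + memory[addr]
        elif op == "STORE":   memory[addr] = acc
        elif op == "JUMPPOS": pc = addr if acc > 0 else pc   # the branch
        elif op == "HALT":    return memory

# Program: multiply 355 by 4 through repeated addition, looping on a branch.
memory = [
    ("LOAD", 10), ("ADD", 9), ("STORE", 10),   # 0-2: result += 355
    ("LOAD", 8), ("ADD", 11), ("STORE", 8),    # 3-5: counter -= 1
    ("JUMPPOS", 0),                            # 6: loop while counter > 0
    ("HALT", 0),                               # 7
    4,      # 8: loop counter
    355,    # 9: multiplicand
    0,      # 10: result
    -1,     # 11: constant -1
]
print(run(memory)[10])  # -> 1420
```

The second operand of each ADD is implicit - whatever is already in the accumulator - and the next instruction is implicit too, unless a branch redirects the program counter.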
With some modifications the IAS design has survived as the paradigm of modern
computer architectures. But other types have found a place as well. For example, pocket
calculators process numbers one digit at a time (serially), like the EDVAC, not because
they use a cyclic memory device, but because that design requires fewer interconnections
from one part of the calculator to another. Fewer connections mean fewer wires - hence the
calculator can fit nicely in a shirt pocket. (It is also slower, but for most calculator
applications that is not a problem.) Probably the most radical deviation from the von
Neumann type of architecture appears in so-called supercomputers, which process several
streams of data in parallel, like the ENIAC. There are some experimental computer designs
that mix storage and arithmetic, like the accumulators of the Harvard Mark I and the
ENIAC. But for the most part the only difference between the IAS and modern computers
is that the latter have many more instructions than the dozen or two the EDVAC or the
IAS had. The von Neumann architecture is not without its flaws, but any reports of its
demise are exaggerated. It hangs on tenaciously through successive generations of hardware
and programming innovations.
Table 6.1 summarizes the design philosophies of those early computers, with a few
modern examples thrown in for comparison.
PROGRAMMING: ZUSE
Von Neumann's "First Draft of a Report on the EDVAC" said little about how one actually
coded a stored-program computer to solve problems. He wrote out a possible code for
sorting data on the EDVAC, but it was not included in that draft.12 In 1945, Konrad
Zuse was also thinking about how to write programs for his
Table 6.1  TYPES OF COMPUTER ARCHITECTURES
---------------------------------------------------------------------------
                          Handling of Numbers to and from Memory Registers
Method of Executing      --------------------------------------------------
Program Instructions      Serially                  In Parallel
                          (EDVAC-type,              (von Neumann-type,
                          cyclic memory)            random-access memory)
---------------------------------------------------------------------------
SERIALLY                  EDSAC (1949)              Z3, Z4 (1941, 1945)
                          UNIVAC (1951)             ENIAC after 1948
                          Pocket calculators        Most modern computers
---------------------------------------------------------------------------
IN PARALLEL               Atanasoff-Berry           ENIAC before 1948
                          Computer (1939)           ILLIAC IV (1976)
                          STARAN (Goodyear          Other modern
                          Aerospace Co., 1974)      supercomputers
                          Other modern
                          experimental designs
---------------------------------------------------------------------------
computer, the Z4. He was living in a Bavarian village, where everyone's first priority was
getting enough food each day. His Z4 had just barely survived, and he had managed to get
it running intermittently. Zuse did not have a chance to participate in the immediate
postwar computing conferences centered in the United States. So he found himself with
some time to think about just what he had invented. Independently he began developing
ideas about how to program digital computers to take the fullest advantage of their powers.
Like his earlier work, Zuse's development of a theory of programming paralleled the
Anglo-Americans' in many ways, yet because of his isolation it looked different in its
details. We are fortunate to have it because it lay completely outside of the mainstream. It
shows us how different things might have been, how what we have today is not
necessarily the best of all possible worlds.
Most of the wartime computers were designed as sequential number processors. They
were built to follow a strict plan for every set of inputs, but gradually their designers
recognized that the machine itself could perform the logical operations of choosing the plan
of calculation for those numbers, and of automatically switching plans depending on the
results of a previous operation. Those logical abilities assumed greater and greater
importance in computer design, so that by 1945 it was recognized that it was best to treat
logical and numeric operations identically in the internal workings of the computer.
Reckoners, 1st Gen., page 0142
|
That transition required a rethinking of the ways to describe computer
programs. To
code a computer to evaluate an expression like
F(x) = x^3 + 4x^2 + 3
for successive values of x from 1 to 10 is easy: the programmer simply
specifies
the sequence of arithmetic for that formula and arranges to have it accept
the successive
values of x as input.
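In a modern notation that program really is only a few lines. A minimal Python sketch (the function name and print format are arbitrary):

```python
# Evaluate F(x) = x^3 + 4x^2 + 3 for successive values of x,
# as a fixed sequence of arithmetic steps applied to each input.
def F(x):
    return x**3 + 4 * x**2 + 3

for x in range(1, 11):      # successive values of x from 1 to 10
    print(x, F(x))
```

The same fixed plan is followed for every input; no decision depends on the numbers themselves.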
But where there are not one but many algebraic sequences, each invoked
depending on
the results of a previous calculation, that scheme breaks down. A familiar
example is
computing an employee's weekly payroll check: the formula that computes his
pay
depends on whether he worked overtime or not, what kind of insurance plan
he has, tax
deductions, etc.: all logical decisions that usually mean a different
formula for each and
every employee. Computer programming thus has a dynamic aspect to it that
the
conventions of ordinary algebraic notation cannot adequately describe. John
Mauchly
recognized that in a discussion of programming the EDVAC: "Thus is created
a new set of
operations which might be said to form a calculus of instructions."13
Independently of
Mauchly, Konrad Zuse was developing such a calculus, which he called the
"Plan
Calculus" (Plankalkül). It was a first step toward what we know today as a
"programming
"programming
language."14
Whatever form the Plan Calculus took, it had an ultimate requirement that a
machine had
to take its statements and translate them into the correct actions. In
other words, questions
like whether a person worked overtime or not (thereby determining which
formula to use
to compute his wages) ultimately had to be cast in terms of yes-no values
represented by
physical devices within the computer. Another requirement came from the
sequential way
digital computers operated: any program had to be of the form of a long
linear list of
symbols, even if the actions it described may have originally been
concurrent.
The Plan Calculus, as Zuse first sketched it in 1945, did not satisfy the
second
requirement. Its notation was two-dimensional, designed to be read both
across and down
at the same time (reminiscent of Zuse's first thoughts on computing in
graphical terms). It
would have been easy enough to convert it into linear strings of symbols,
but he
never got that far with it.
It was not directly readable by machine, either, but at least in the Plan
Calculus every
statement revealed the bit structure implied by its action-that was why it
was a two-
dimensional notation. That is the Plan Calculus's most distinctive feature.
Modern
programming practice dictates the opposite: a programmer should know only
what he
absolutely has to about the computer's inner workings. The rationale for
that philosophy is
that those details would only be confusing and do not contribute anything
to the writing of
better programs. The Plan Calculus has never been implemented on a digital
computer,
although in recent years Zuse has reworked it and cleared up some of its
original
inconsistencies. Recently there has been a renewed interest in the design
of programming
languages, and the design philosophy of the Plan Calculus could certainly
contribute to the
current debate. But it is likely to remain only an historical artifact.
The following example shows how the Plan Calculus would describe a test to
see
whether or not a worker is eligible for overtime pay as part of his payroll
computation:
    |  V    >=   V    .->   V
 V  |  0         1          2
 K  |  n.6
 S  |  1.6       1.6        1
In that statement the variables are V0, V1, and V2. Note that the
subscripts are directly
below the letter; the label "V" in the far left column indicates that the
numbers in the
second row are in fact the subscripts of the variable (V =
"Variable-index").
The first variable V0 is a number which represents all the information
about a specific
employee. From that number the program selects those digits which tell how
many hours
he worked the previous week: that information is contained in the six
binary digits which
appear in the nth place. The third row, labelled "K" for
"Komponent-index," does that.
The last row, labelled "S" for "Struktur-index," indicates that the
information thus selected
is of the form of a one-by-six array of binary digits. In that way the
program reveals what
is happening to individual binary values within the computer, if the
programmer wants to
know that.
The next variable, V1, is the value above which overtime wages are paid. It
is also a
is also a
simple list of six binary digits, and presumably has already been specified
elsewhere in the
program. The symbol .-> is similar to the conditional statement of symbolic
logic, and states
that if the expression on the left is true (that is, if the employee worked
more than the
specified number of hours), then assign the value "1" to the variable V2.
Thus V2 takes on
the value 1 or 0, for each employee, depending on how many hours he worked.
Aside from the subscripts running below each variable in columns, this
notation is not
all that different from modern programming languages. (One difference is
that it assigns
values from left to right, as the machine itself operates. Nearly all
modern programming
languages place the variable being assigned at the left. That difference
reflects Zuse's
background as an engineer, while other languages reveal their debt to the
mathematician's
notion of a function.)
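For comparison, the same overtime test can be sketched in a modern language. In the Python below the packed record layout, the bit offset, and the names are hypothetical illustrations standing in for Zuse's component index n.6 and structure index 1.6; they are not taken from the Plan Calculus itself:

```python
# Sketch of the Plan Calculus overtime test in a modern notation.
# V0: a packed employee record; six binary digits starting at an
#     assumed bit offset hold the hours worked.
# V1: the threshold above which overtime is paid.
# The result (Zuse's V2) is 1 or 0.
HOURS_OFFSET = 8                 # hypothetical position of the hours field

def overtime_flag(v0, v1):
    hours = (v0 >> HOURS_OFFSET) & 0b111111   # select the six binary digits
    return 1 if hours >= v1 else 0            # V2 := (hours >= V1)

record = 45 << HOURS_OFFSET      # an employee who worked 45 hours
print(overtime_flag(record, 40))
```

Note that, unlike the Plan Calculus, nothing in this notation reveals which physical bits the comparison touches; that information is hidden, exactly the opposite of Zuse's design philosophy.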
He worked out the details of coding complex problems using the Plan
Calculus, even
showing how it could be used to describe how a computer might play chess.
But in the
long run programming languages developed along different lines-from more
conventional
notations of ordinary algebra. It would be many
years before computers would be playing chess-and when they finally did,
they would be
programmed in algebraic languages not really suitable for that kind of
problem.
In 1944 Zuse envisioned a separate machine that would accept formulas
written in an
algebraic notation (or possibly a version of the Plan Calculus) and produce
as output a
string of binary digits that his Z4 computer would accept as a program. He
called it a "Plan
Preparation Machine" (Planfertigungsgerät); later he called it the
"Programmator." The machine would keep track of expressions in
parentheses,
parentheses,
and would test for errors such as an unequal number of left and right
parentheses, or more
than one equals sign in an expression. It would further be able to shift
between a main
sequence and subroutines automatically.15
He never built the Plan Preparation Machine. By the time the Z4 was
installed and
running at the Federal Technical Institute in Zurich, it had become clear
that a general-
purpose computer could do both jobs: translate a program from an algebraic
to a binary
machine code, and then execute the new program. Heinz Rutishauser was the
person at
Zurich who recognized this ability. It was a startling idea at the time but
seems obvious
after the fact.
Rutishauser independently rediscovered the concept of a universal machine
that Turing
described in 1936: a computer's output may not only consist of "answers" to
numerical
problems; it may also be another program. (The program which does that is
called a
"compiler.") In that way persons need not be conversant in machine code,
which after all
consists of only zeros and ones, to be able to write programs. Once again
the key to a
computer's ability to do that translation is the stored program principle.
Ironically, though,
Rutishauser developed his notion of a compiler after working with the Z4,
which did not
store its program internally.
IMPLICATIONS OF THE STORED PROGRAM PRINCIPLE
Compiler programs did not appear until the 1950s. The immediate impact of
the stored
program principle was to increase the range of problems a computer could
solve.
Computers before 1945 were best suited to numerical problems coded as long
sequences of
arithmetic operations. Conditional branches were rare, if they appeared at
all. The early
machines like the Bell Labs Model V, which had conditional branch
facilities in the form of
separate loops of tapes, constrained the programmer to use them only in cases
that were well
known in advance. He or she had to know just where in the execution of a
problem a
branch to a subroutine tape would occur, roughly how often that would
occur, and so on.
But with a stored-program computer, the conditional branch facility becomes
an integral
part of the programming process. The programmer may specify that loops be
performed over
and over until some limit is reached, as before, but his program need only
sketch out the
general flow of instructions. He need not know
the particulars of just where and when those loops enter into a
program-the computer
itself can keep track of that. By having the computer modify the program,
a short piece
of code can express a much longer sequence of machine instructions. To a
computer, a terse
string of a few program statements is like a sonnet: it evokes a much more
complex mix of
machine actions and states. That in turn reduces the length of the code
that has to be stored
internally, reducing the computer's memory requirements to the point where
they become
technically feasible.
The ability of a stored-program computer to generate its own sequence of
actions from a
brief sketch supplied by its programmer implies a redefinition of the
nature of computing
which is still poorly understood. One of the first clear statements about a
computer's
power came from Ada Augusta, Countess of Lovelace, in a note describing
Babbage's
Analytical Engine. It is all the more remarkable considering that she wrote
it in 1842, for a
machine that had not been built (and probably never will be). In her
notes to F. L.
Menabrea's description of the Analytical Engine, she said:
The Analytical Engine has no pretensions whatever to originate anything. It
can do whatever
we know how to order it to perform. It can follow analysis; but it has no
power of anticipating
any analytical relations or truths. Its province is to assist us in making
available what we are
already acquainted with.16
A corollary to her statement is that a computer is incapable of
"surprising" its operator by
behaving in some unanticipated way.
It is true that a computer can do only what it is programmed to do. It is also true
that it has the
ability to choose between alternate courses of action, depending on the
value of a
previously computed sequence. Whatever action it may take must, however, be
fully
specified in advance by the programmer, even though he or she may not know
how the
computation might proceed for a given input datum.
It is therefore possible for the human operator to chart out in advance all
the possible
paths of a computation, so indeed the machine could not originate an
unanticipated action.
There are, however, two qualifications to that attribute which alter the
picture, and which
have the combined result of endowing a computer with precisely those
abilities to surprise,
originate, and anticipate.
The first qualification is the high speed of electronic components.
An electronic computer can make so many decisions and branch to so many different
sequences in an hour's time that it would take a human being a lifetime
to trace
all of them out for even one piece of input data. In principle it is
still possible, but
in practice a human cannot simulate the activity of an electronic computer.
That
was why electronic computers were built in the first place.
The second qualification is a direct consequence of the stored program
principle: the fact that a computer may treat its instructions as data and
perform operations
on them, thereby forming an entirely new set of operations which it may
then execute.
Not only can the machine choose among sequences depending on the result of a
previous
operation: it can also modify its future response to those results - in
short, it can "learn"
from its past experience, in the same sense as that word applies to human
behavior. Note
that we are still within the bounds of Countess Lovelace's statement that a
computer can do
only what it has been told to do. It is still possible to specify in
advance all the potential
ways a computer might act, given a specific input datum. A computer has a
finite number
of elements; if it has n binary devices (memory places, flip-flops, gates,
etc.), then it can
assume one of 2^n states. But that is only the number of possible states for
one unit of time.
For the next step in a program it again has that range of possible states,
and so on. The
number of possible states grows astronomically very fast. (Consider for
example a
rudimentary "computer" having a memory of only seven binary digits, and an
arithmetic
unit of only three digits. Thus n = 10, so it is capable of 2^10 = 1,024
different states. If we
restrict that machine to programs of only ten steps each, there are already
(2^10)^10 possible
programs (over 10^30). Even if 99.999 percent of those programs were
meaningless, that
still leaves quite a few. It is true that modern programming wisdom states
that one should
never write a program that modifies itself in the way just described. Doing
that makes the
program almost impossible to understand; it also means that the program
cannot be put
into a read-only-memory. But that does not alter the essential fact: the
number of states a
stored-program computer can assume grows exponentially as the number of
program steps
increases.)
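The arithmetic in that parenthetical can be checked directly:

```python
# A ten-bit machine has 2^10 states; a ten-step program drawn from
# those states has (2^10)^10 = 2^100 possibilities, well over 10^30.
states = 2 ** 10
programs = states ** 10
print(states)               # number of machine states
print(programs > 10 ** 30)  # the "over 10^30" claim
```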
Although in all cases there is only a finite number of possibilities, the
explosive
combinatorial growth of complexity precludes any possibility that a human
can follow a
computer's activities by hand calculations. Therefore computers can,
indeed, surprise,
originate, create--one might say they can "think"--if programmed to do so.
Computers
programmed to play chess routinely defeat those who programmed them. Of
course if that
upsets the programmer he or she may decide to reprogram the machine so that
it does not
play such a good game. That is the meaning of the popular phrase stating
that computers
will never take over because one can always "pull out the plug." In that
sense the human
being retains control over the machine, as Countess Lovelace said. But only
in that sense.
Talking about computers playing chess, thinking, and otherwise acting
intelligently seemed a bit preposterous in the late 1940's when it was a struggle
just to get a machine running at all. Alan Turing was especially fond of such
talk. He
could always respond to those who were skeptical about the power of a
computer
with a mot juste guaranteed to infuriate and disarm them. Even today the
idea of
a machine acting that way is unsettling, no less so now that hobby shops
sell
pocket-size chess-playing computers that can checkmate most of us-and then
tell us so in a synthesized voice.
All that activity, and with it the debate over machine vs. human
intelligence, is
the consequence of the stored program principle, coupled with the high
switching speeds
of electronic circuits. The more immediate consequence was the rapid
development of
programming languages, letting persons from all walks of life avail
themselves of the new
invention. Those languages include familiar ones like BASIC, routinely
taught to school
children, and even "menu"-type languages like those found at automatic bank
tellers, which
allow just about anyone (even the barely literate) to program the bank's
computer in a
rudimentary way. The study of programming languages has paralleled the
development of
algebra from the time of Viète and Descartes, when it was first realized
that one could
manipulate expressions such as
3(x + y) - 4(x - y),
transforming that into
3x + 3y - 4x + 4y,
or
7y - x,
without having to worry about what x or y stood for.
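That kind of mechanical manipulation is precisely what a computer can carry out; a small Python spot-check of the identity at a few sample values, without caring what x or y stand for:

```python
# Verify numerically that 3(x + y) - 4(x - y) equals 7y - x
# at several sample points; the points chosen are arbitrary.
def lhs(x, y):
    return 3 * (x + y) - 4 * (x - y)

def rhs(x, y):
    return 7 * y - x

for x, y in [(0, 0), (1, 2), (-3, 5), (10, -7)]:
    assert lhs(x, y) == rhs(x, y)
print("identity holds at all sample points")
```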
The theoretical study of the nature of computing has taken us far from the
discussions
of pieces of hardware that so much dominated the history of computers
before 1945.
Nevertheless it was against the background of those early machines that the
theory first
took form. By 1950 a few stored-program machines were completed and
running, so the
theory of computing had a chance to merge with its practice. In 1951
commercial
computers appeared for the first time-the UNIVAC in America, the Ferranti
Mark I and
the LEO in Britain. UNIVACs were built in a production run (of a dozen or
so total), and
with the installation of the first at the U.S. Bureau of the Census in the
spring of that year,
the computer age entered "the first generation."
NOTES
1. I. J. Good, "Some Future Social Repercussions of Computers,"
International Journal of
Environmental Studies, 1 (1970), 67-79; for a sampling of the variety of
machines in use at
the time, see Charles and Ray Eames, A Computer Perspective (Cambridge,
Mass.: Harvard,
1973); D. R. Hartree, Calculating Instruments and Machines (Urbana:
University of Illinois
Press, 1949).
2. Published as Symposium on Large Scale Calculating Machinery (Cambridge,
Mass.:
Harvard, 1947); the Moore School Lectures appeared as Theory and Techniques
for Design of
Digital Computers (Philadelphia: Moore School of Electrical Engineering,
1947).
3. Herman Goldstine, The Computer from Pascal to von Neumann (Princeton:
Princeton University Press, 1972), p. 246.
4. Dr. K. Brunnstein, private communication.
5. A. M. Turing, "On Computable Numbers, with an Application to the
Entscheidungsproblem," Proceedings, London Mathematical Society, 2nd
series, 42 (1936),
230-267.
6. Eames, Computer Perspective, pp. 124-125; Sara Turing, Alan M. Turing
(Cambridge,
England: W. Heffer and Sons, 1959), pp. 41-50.
7. Sara Turing, p. 47.
8. Ibid., p. 49.
9. J. Presper Eckert, "A Survey of Digital Memory Systems," Proceedings
IRE, 41
(1953), 1393-1406; N. Metropolis and J. Worlton, "A Trilogy on Errors in
the History of
Computing," First USA-Japan Computer Conference, Proceedings, Tokyo, 1972,
pp. 683-
691.
10. John von Neumann, "First Draft of a Report on the EDVAC," published in
Nancy
Stern, From ENIAC to UNIVAC (Bedford, Mass.: Digital Press, 1981), pp.
177-246.
11. Arthur Burks, Herman Goldstine, and John von Neumann, "Preliminary
Discussion of
the Logical Design of an Electronic Computing Instrument," Part I
(Princeton, N.J., 1946);
that and later reports have been reprinted in John von Neumann, Collected
Works, vol. 5
(New York: Pergamon, 1963), pp. 34ff.
12. Donald Knuth, "Von Neumann's First Computer Program," Computing
Surveys, 2
(1970), 247-260.
13. John Mauchly, "Preparation of Problems for EDVAC-type Machines,"
Symposium on
Large Scale Digital Calculating Machinery, pp. 203-207.
14. Konrad Zuse, "Der Plankalkül," privately printed, 244 pp., 1967; Konrad
Zuse,
Beschreibung des Plankalküls (Munich, 1977); Joachim Hohmann, Eine
Untersuchung des
Plankalküls im Vergleich mit algorithmischen Sprachen (Bonn: Gesellschaft
für
Mathematik und Datenverarbeitung, Report No. 104, 1975); F. L. Bauer, "The
Plankalkül of
Konrad Zuse: A Forerunner of Today's Programming Languages," Communications
ACM, 15
(1972), 678-685.
15. Konrad Zuse, "Planfertigungsgeräte," MS 010/024, Zuse Archive
(Gesellschaft für
Mathematik und Datenverarbeitung, Bonn); Konrad Zuse, "Der Programmator,"
Zeitschrift für
Angewandte Mathematik und Mechanik, 32 (1952), 246.
16. Philip and Emily Morrison, eds., Charles Babbage and his Calculating
Machines
(New York: Dover, 1961), p. 285.