Table of Contents:
a little on APL
- A PERSONAL HISTORY OF APL by Michael S. Montalbano
- a few comments by Curtis A. Jones, Apr 17, 2013
A PERSONAL HISTORY OF APL
Michael S. Montalbano
International Business Machines Corporation
General Products Division
Santa Teresa Laboratory
San Jose, California
I have several reasons for calling this talk a personal history.
For one, I want to make it clear that the opinions I express are my own: they are not the opinions of my employer or of any other organization, group or person. If you agree with them, I am happy to have your concurrence: if you disagree, I'd be happy to defend them. In any event, the praise, blame or indifference my views may inspire in you should be directed to me and to no one else.
What I plan to discuss are things I have done, seen or experienced at first hand. Thus, this talk is merely an opinionated collection of anecdotes. I want to emphasize this from the outset: it is my second reason for calling this a personal history. But my most important reason is that I feel strongly that we need a good, thoughtful, accurate history both of APL and of computing itself. While I'd be flattered to have this account included as part of that history, I don't want anyone to mistake it for an attempt at the real thing.
The Importance of History
We neglect history at our peril. The truly incredible growth of digital computer technology has transformed our world almost overnight. This transformation is not only continuing, it is accelerating. It gives every promise of continuing to change our institutions and the circumstances of our daily lives at a faster and faster rate. If we are ever to understand where we're going, it's important that we take a long, careful look at where we've been, where we are, and how we got from there to here. If we don't do this, we won't be able to control events; they will control us. As far as I'm concerned, that's what's happening right now; a runaway technology has us at its mercy because we have not developed techniques to understand and control it.
H. G. Wells described human history as a race between education and catastrophe. This observation is more pertinent today than it ever has been in the past. But, in the specific case of the proliferation of stored-program digital computers, the education-catastrophe race, in my opinion, takes a particular form: it is a race between technology and methodology, between gadgets and ideas.
We are developing gadgets at an explosive, accelerating, self-fueling rate; we shall be swamped by these gadgets if we don't hasten to develop and apply the ideas we need to control them.
To me, this need defines the importance and the mission of APL. APL provides the best set of gadget-understanding and gadget-controlling ideas currently available. Looking to the future, it provides the best base for the methodology we must develop if we are ever to bring our gadget-based technology under control.
This opinion is based on experience. I was working with computers when they were merely gleams in the eyes of the early designers; I was working with APL when it was nothing but a collection of incomprehensible characters scattered through publications with catchy titles like The Description of Finite Sequential Processes. In other words, I've been working with both computers and APL since their very early days. And, in looking back on this experience of just over 34 years (which is my own personal experience of computing), I find that it can best be summarized in terms of two key ideas:
- The stored-program idea: the idea that a procedure or algorithm can be stored as a collection of switch settings in exactly the same way as the data on which the procedure is to work and, as a consequence, that executing such a stored procedure consists of starting it and letting it flip switches until it is done. This was apparently first stated explicitly in a paper drafted by John von Neumann in 1945.1
- The efficient-notation idea: the idea that these vast collections of switches, changing their settings in thousandths, millionths, billionths, trillionths,... of a second, would be pretty hard to manage if we didn't develop a good way to describe them and think about them. I don't know whether this idea was ever presented anywhere in just these terms. Instead, it seemed to be implicit in the activities and writings of many people. But, in my opinion, it received its most effective and fruitful expression in the writings of Kenneth Iverson and others who followed his lead. The most important publication expounding this idea is the book from whose title the letters APL derive their significance.2
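The stored-program idea can be made concrete with a small sketch. This toy machine is my own illustration, not anything from the talk: the instruction format, cell layout, and operation names are invented for the example. Its one essential feature is the von Neumann one: the program lives in the same memory as the data it manipulates, so "running" it is just starting it and letting it flip cells until it halts.

```python
# Illustrative toy "stored-program" machine (invented for this sketch).
# Program and data share a single memory; execution is just a loop
# that reads the next stored instruction and flips cells accordingly.

def run(memory, pc=0):
    """Execute the instructions stored in `memory`, starting at cell `pc`."""
    while True:
        op, a, b, dst = memory[pc]
        if op == "HALT":
            return memory
        elif op == "ADD":              # memory[dst] = memory[a] + memory[b]
            memory[dst] = memory[a] + memory[b]
        elif op == "MUL":              # memory[dst] = memory[a] * memory[b]
            memory[dst] = memory[a] * memory[b]
        pc += 1                        # fall through to the next stored instruction

# Cells 0-2 hold the procedure; cells 3-5 hold the data it works on.
memory = {
    0: ("ADD", 3, 4, 5),   # cell 5 = cell 3 + cell 4
    1: ("MUL", 5, 5, 5),   # cell 5 = cell 5 * cell 5
    2: ("HALT", 0, 0, 0),
    3: 2,
    4: 3,
    5: 0,
}
run(memory)
print(memory[5])  # (2 + 3) squared = 25
```

Because instructions are ordinary memory cells, a program could even rewrite itself while running, which is exactly why these vast collections of switches demand the second idea, an efficient notation for describing them.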
The stored-program idea provided, and continues to provide, the basis for our current runaway technology. The efficient-notation idea, if we take it seriously and do a lot of thinking and hard work, will help us curb the runaway and direct it into fruitful and productive channels.
You may find this brief summary of computing history controversial. I hope so. We need vigorous, informed, philosophical controversy. And, as a contribution to this controversy, let me state some of my other biases in as controversial a manner as I can.
Biases of an APL Bigot
There exists a growing class of people, of which I am happy to consider myself a member, who are called "APL bigots" by friends and foes alike. The friends use the term affectionately; the others do not.
The bigotry consists in believing that APL is the way computing should be done. I think it's fair to say that no one can properly be called an APL bigot who doesn't believe this.
In one respect, my bigotry may not be as great as that of the general run. I believe that APL and assembler language are the way computing should be done. To this extent, I am held suspect by true APL evangelicals.
In another respect, my bigotry is so much greater than theirs that, now that I am making it known, I fear I may be excommunicated as a schismatic. I believe that, important as APL is in computing, it is even more important as an instrument for rationalizing the management process.
Modern management is in trouble. Don't take my word for it. Read the daily papers, the weekly newsmagazines and the flood of books and articles that describe management's difficulties and offer cures. Some of these, of course, assert that Japanese management is an exception to the general rule; the cure these people offer to non-Japanese management is to learn from the Japanese how to do it right.
I don't believe this. I believe Japanese management is in as much trouble as any other. Its current successes are no indication that its management isn't afflicted by the same difficulty as the management of organizations anywhere else in the world. Expressed simply, this difficulty is:
Nobody knows what's going on.
Your reaction to this assertion may be emotional. You may believe it passionately or reject it passionately. If you have either of these reactions, you may not have understood what I said, so I'll repeat it: The problems of modern management are primarily attributable to one cause:
Nobody knows what's going on.
So there you have it: the basic message that motivates this talk:
There are, of course, many other important difficulties that management faces: obsolescent plant, intensified competition, environmental concerns, employee morale, strained labor relations, residues of past mismanagement (reflected in such things as excessive debt, inadequate capital, incompetents in key positions,...) and so on. But these difficulties are made unmanageable by our inability to describe them in a way that promotes insight and facilitates communication.
- Management's primary difficulty is that it has no good way to describe its processes and thus develop objective means to correct or improve them.
- Other fields have faced this problem in the past. The ones that have solved it most effectively are those that have developed a notation appropriate to their subject matter. The most conspicuously successful examples of notations that have virtually created the fields they describe are, of course, the specialized symbologies of the so-called "hard sciences".
- The workings of management can be expressed as extensive, intricate digital procedures. These procedures cannot be designed, analyzed or described effectively without a notation specifically designed for the purpose.
- APL is such a notation.
Clearly, in expressing this opinion, I am swimming against the tide. The widespread current belief is that we are experiencing an "information explosion". How can I accuse management of being inadequately informed when virtually everyone else says that they are overinformed? How can I say that management doesn't have enough information when everyone else says they have so much information that they are swamped by it?
Let me be blunt: I think the "information explosion" is a myth. I do not deny-- how can anyone deny?--that we are generating vast quantities of paper, tapes, disks, etc., with optical or magnetic symbols recorded on their surfaces. What I am denying is that these recorded symbols, in themselves, are information. They are not. They become information only if, as a minimum:
- They are accurately calculated.
- They present a true picture of reality.
- They are understood by the person to whom they are presented.
Of these three conditions, only the first stands a better than even chance of being satisfied. For the purposes of effective management, all too much recorded data does not present a true picture of reality and is not understood by the person it is supposed to inform.
What does this all have to do with APL? Let me answer by recounting the pre-APL experiences (at the U. S. National Bureau of Standards, the U. S. Naval Research Laboratory, and the Kaiser Steel Corporation) that convinced me that the computing field needed, more than anything else, an efficient notation for describing digital procedures. Then let me follow up with the post-APL experiences (at IBM and Stanford) that convinced me that APL was the notation we needed.
The U. S. National Bureau of Standards, 1948-1952
I started to work in May, 1948 as a mathematician in the Applied Mathematics Laboratories of the National Bureau of Standards in Washington, D. C. I was given a job description which, like every other one I have ever had, bore no relation whatsoever to what I actually did. My first assignment was to program a division subroutine for the UNIVAC. The UNIVAC that was then being designed by the Eckert-Mauchly Corporation could add, subtract and multiply but it could not divide. (Division was added later. The list of computer instructions or "order codes", to use the terminology of the times, was updated and given an identifying C-number as the UNIVAC's design progressed. If my memory's not playing me tricks, the order-code list that was current when I started work was C-7.)
I was given a description of the UNIVAC. It described acoustic delay lines, excess-three notation and end-around carry. I had the feeling that I was entering a dark, eerie world in which words would be used as charms and incantations rather than to communicate definite meanings. I was right.
I was surprised that no one took the trouble to show me the UNIVAC for which I was to program division. After a few days I learned why. The machine described so confidently and completely in the literature that I had been given had not yet been built. (The first UNIVAC was not delivered until three years later. But at least it was delivered. Our group spent a lot of time programming what were then called "feasibility tests" for a lot of machines that never got off the drawing boards.)
Programming for nonexistent machines started to pall after a few months. Our group had punched-card equipment (including the 602 calculating punch) with which we calculated mathematical tables. I switched to this activity at just about the time the Office of the Air Comptroller asked the Bureau of Standards for assistance in using electromechanical (later electronic) equipment to calculate Air Force budgets. With the procedures then in use, it was taking eighteen months to prepare a yearly budget. There was general agreement that this was not satisfactory.
I was assigned the task of getting budget computation mechanized under the direction of George Dantzig, who was in charge of the Air Force budget project. His job was to devise the calculations we were required to perform, that is, he told me what was needed. I wired the plugboards and, later, wrote the programs that gave him what he specified.
The original calculations were called "triangular model" calculations (I understand they were given the acronym TRIM after I left the project.) The later calculations were solutions of linear-programming problems, applying the simplex technique that George Dantzig originated. The name of the Air Force project, SCOOP, (Scientific Computation Of Optimum Programs), like TRIM, suggests that wherever a computer goes an acronym's sure to follow.
I programmed the triangular-model calculations for the 602, the 602A, the 604 (the electronic calculating punch) and was about to program them for the CPC (the Card-Programmed Calculator) when the SEAC became available and I switched back to programming instead of plugboard wiring.
The SEAC (National Bureau of Standards Eastern Automatic Computer) was the first stored-program digital computer to operate successfully in the United States. (The first stored-program computer to run successfully anywhere in the world was the EDSAC, designed and built by the University Mathematical Laboratory at Cambridge, England.) I introduce this information here because, computing history being what it is, it does not seem to be available anywhere else.
Of the many memories and ideas derived from my four years at the Bureau of Standards, three are relevant to our present purpose:
- I learned to be cautious about how I used the solutions of large systems of equations with uncertain coefficients. I believe the triangular model is an extremely effective, and neglected, tool for a large class of planning problems. But, if the coefficients used in its equations have a large measure of variability or uncertainty, you must use its answers with caution.
- I learned how desperately we needed a good notation for describing algorithms. I sat through many presentations and discussions of solution techniques for linear programming problems. These presentations were largely chalk dust and arm-waving. The essential ideas, which are extremely simple once they are understood, were obscured rather than illuminated by the terminology and notation used to describe them.
- I learned that gadget development was outstripping idea development and would continue to do so unless we did something about it. I suggested that we do something about it. My management was receptive to the idea (or told me it was) but said there was no money in the budget for idea development. That, of course, was the problem. It has been the problem ever since. This failure on the part of my management to fight for research in ideas was one of the reasons I left the Bureau.
U. S. Naval Research Laboratory, 1952-1954
At the Naval Research Laboratory, I became part of an interdisciplinary team applying the latest operations-research techniques to the development of man-machine systems.
The "disciplines" included physics, mathematics, philosophy, sociology, various brands of psychology, naval science (represented by officers in the United States Navy), and I don't know what all.
Our tasks were the obvious ones: develop ways to make naval operations more efficient by incorporating new gadgets into systems that used them to best advantage. A typical assignment might be either general (for example, the design of a combat information center for a ship, a task force, or a shore-based control center) or specific (for example, taking a new weapon, like a homing torpedo to be fired from the deck of a ship into the water near a distant submarine, and deciding how best to incorporate it into a system including a ship, sensing devices, communicating devices, and other weapons).
I learned many things during my two years at the Naval Research Laboratory (among them that clinical and experimental psychologists tend to despise each other) but for our present purposes the most important things I learned were:
- It's hard to plan effectively for a future you can't predict.
- No numbers are frequently better than some numbers.
- Nobody knows what's going on.3
Predicting the future is, of course, particularly difficult for the military since a future requiring military action is apt to be precipitated by some catastrophic event. I attempted to develop techniques for dealing with this by first describing the three states in which the Navy might have to operate:
- The peacetime state.
- The transition state from peace to war immediately after hostilities have commenced.
- The wartime state.
and then describing the transformations required to convert from one state to the other in the most efficient way possible.
I didn't get very far with this, but, if I had the responsibility for contingency planning for any organization, civil or military, I would return to the three-state model I started to develop for the Navy and build on it.
And, again, I would need an efficient notation.
It was during the time I worked at the Naval Research Laboratory that I first became aware of the ease with which people can deceive themselves with meaningless figures. This is not a criticism of the Navy. It is a criticism of virtually all modern management.
You are much better off with no numbers than meaningless ones. The minute you believe numbers uncritically, that is, without understanding how they're calculated and how well they measure whatever they're supposed to measure, you will generate a breed of employee who will produce numbers and not results. Your data-processing system will then serve not to describe reality but to lie about it.
Kaiser Steel Corporation, 1954-1961
I started with Kaiser Steel as a mathematician, a new kind of employee requiring a new job description. I started in the Fontana Procedures Department of the Controller's Division. The Industrial Engineering Department (which was in the Operations Division and thus always in a kind of uneasy rivalry with the Procedures Department) had the responsibility for administering job descriptions. I learned later that the one we filed threw them into a tizzy; they felt it was another sinister move on the part of the Controller's Division to take over the work they were supposed to do.
I left Kaiser Steel, not quite seven years later, as Director of Research and Computer Planning.
In the interim, I learned about iron and steelmaking both at Kaiser and, through industry association meetings and literature, at other American and Canadian steel plants. I learned about other industries by participation in the activities of cross-industry associations. By the time I left Kaiser Steel, I had had several years in which to observe management in action. The comments I've been making, which you may have regarded as flippancies, are honest descriptions of what I observed.
Consider some examples. They are from Kaiser Steel, because that is where I worked, but I can testify that they are representative of all management.
Precision Rounds. At the time I was there, Kaiser Steel manufactured an alloy steel product called "precision" rounds because the diameters had to be controlled to very close tolerances. There were two schools of thought about the place of precision rounds in our product line. One school held that it was the most profitable product we made. The other said we were losing our shirts on it.
How could this be? Couldn't our cost accountants tell us whether we were making money or losing money?
The answer is: no, they couldn't. Two different groups, working off the same set of figures, reached diametrically opposite conclusions. This was the first of many experiences (including attendance at a Stanford gathering of the most prestigious accounting firms in the world) that led me to the conclusion that most cost accounting is applied metaphysics of an extremely ethereal kind. Counting angels on the head of a pin is a useful exercise in data-gathering compared to much cost accounting.
The problem was, as it always is, the basic data with which we had to work. Rounds of any kind had to go through a finishing operation. Scheduling finishing operations and describing what took place was one of the most difficult tasks in the mill. Production figures out of the finishing operation were always suspect. The difference between the two schools derived primarily from the different ways they interpreted those figures. At least, that's the best explanation I was ever given.
Tin Mill Flippers. We installed a management incentive plan at Kaiser Steel while I was there. The incentive plan for the tin mill was delayed for a while until we straightened out a rather embarrassing problem that cast doubts on the accuracy of our tin mill production recording. We were reporting more tin plate coming out of our shears than we put into them.
Management was understandably hesitant about paying performance incentives on figures that were so obviously, stupidly wrong.
The difficulty arose from the way we processed rejects. As the sheared plate went by an inspection station, it was examined for pinholes, surface blemishes and other defects. When a defect was detected, the inspector pressed a button that switched the faulty plate to a reject pile. Unfortunately, a few plates before and after the bad one were also diverted to the reject pile. These were usually prime plates that we could not afford to sell at secondary prices.
The prime plates in the reject piles were separated from the true rejects by a group of women called "tin mill flippers". They worked at large tables, examining the plated surfaces carefully and separating the plates into prime and secondary piles. The difficulty arose here. Because of the way reporting was done, some of the plates were counted more than once.
We caught this bad data because it was so obvious. A shear can't produce tin plate; it can only cut it. To shear more tin than you were given was clearly impossible.
But, of course, this kind of mistake indicated just the tip of the iceberg. The errors that weren't so obvious weren't caught and corrected. And they existed, let me assure you of that. Even worse, they were almost certainly manipulated by people who were better at doctoring figures than at making steel.
Slab Inventories. Steel ingots are broken down into blooms if they are to be processed into products like H-beams, I-Beams and the like and into slabs if they are to be processed into hot or cold-reduced product like sheet and strip. Our biggest semi-finished inventory at Kaiser Steel was in slabs; we had between 80,000 and 100,000 tons scattered in piles over wide areas of the mill grounds.
Slabs are big, heavy chunks of steel. You'd think it would be hard to lose one. It's not. It happens every day. Although I've been out of the steel industry for more than twenty years, years in which we've landed men on the moon, I'm willing to bet that right now, somewhere in the world, a rolling mill is idle because it's waiting for a slab that's sitting on the ground not too far away.
My reason for discussing these examples is to illustrate the complexity of the activities we are trying to manage. This particular example, as it happens, also shows how quick management is to seek a technological rather than a methodological solution.
There are many reasons why, despite the existence of a huge slab inventory, a rolling mill has to wait until the slabs it needs are found. The problem as a whole is a complex one and does not lend itself to quick-fix solutions. But a technological quick-fix for part of the problem was proposed and, after I had left the company, bought and installed. It was a costly failure.
At the risk of teaching you more about steelmaking than you ever wanted to know, let me describe this attempt to solve a methodological problem by technological gadgeteering.
The steel produced by one open-hearth melt is called a heat. A major part of a slab's identification is its heat number. Since the process of producing slabs from ingots usually grinds impurities into some of their surfaces, the slabs making up a heat had to be distributed among areas called scarfing bays where men with oxyacetylene torches would burn out the impurities. The slabs then had to be reassembled into a heat; the heat had to be deposited somewhere in the slab yard; and the heat number and slab-yard location had to be reported to the production schedulers.
The existence of surface impurities that required scarfing thus caused much of the delay and confusion that attended the progression of the heat from the slab mill to the reduction mill. The technological solution proposed for this problem was called a "hot scarfer". It burnt off all the surface of a slab immediately after it was reduced to its final dimensions. This was supposed to eliminate the need for splitting a heat up into scarfing bays.
Thus, at this point, management had a choice:
- Methodological. Put in the time, effort and money it would take to develop a satisfactory, realistic scheduling and inventory control system for slabs. This would necessarily require investigation of all the many sources of difficulty, not just the kind of difficulty that the hot scarfer was supposed to eliminate.
- Technological. Buy a hot scarfer and hope the slab scheduling and inventory problem would go away.
No contest. The lure of the tangible, glamorous gadget always wins out over the intangible, colorless idea. They got a hot scarfer. I say "they" because this was done after I left; I am happy to say that I had no part in the decision. To the best of my knowledge, the hot scarfer never worked satisfactorily. It is one item of evidence justifying my belief that technological quick-fixes seldom achieve their objectives. (This, incidentally, is even more true for data processing than it is for steelmaking.) In this particular instance, the gadget was expensive to purchase and to operate, it burnt off good steel as well as surface impurities, and did not do what it was purchased to do: reduce wait-for-steel time in the mills that used slabs as inputs.
What does all this steelmaking jargon have to do with APL?
In all of my computer experience, I have found that the chief obstacle to getting anything done is the absence of any clear, concise, precise, formally manageable way to describe and analyze what we're actually doing and to describe and design a transformation to what we should be doing.
I gave several talks on this topic at professional meetings of various kinds. (I later used the written form of these talks, and other papers I wrote during my employment at Kaiser Steel, as class notes in the Business Information Systems classes I taught at the Stanford Graduate School of Business.) One of the earliest of the talks, "Formalizing Business Problems", was given at the first Electronic Business Systems Conference held in Los Angeles in 1955. This aroused the interest of Murray Lesser, at what was then the newly established IBM facility scattered throughout several locations in downtown San Jose (the one that developed into the General Products Division in which I now work). We met to discuss what we could do to develop some of the ideas we had in common.
The result of our discussions was a joint venture, called the Business Language Research Project, in which employees of IBM, Kaiser Steel and Touche, Niven, Bailey and Smart participated. My contribution to the project was something I called "field-and-branch identification" which I later developed into the approach to systematic systems analysis that I describe in my book on decision tables.4 I will discuss this more fully later.
Lest I leave you with the impression that nothing effective can be done about the data-processing problems of industry, let me assure you that this is not the case. Among the accomplishments of which I'm proudest are some of the systems I installed at Kaiser Steel. At least one of them, a tin-mill in-process inventory and production recording system, impressed the phone company so much that, since we were using some of their equipment to record production and to communicate between processing points, they ran an ad in the Wall Street Journal featuring a picture of our tin mill and describing our system as an example of what could be achieved if you called in their Production Recording Consultant. We never had the benefit of a Production Recording Consultant's services because the phone company never told us they had one. Maybe he was made available to some of the people who read the ad.
So things can get done. But getting them done is slower, more difficult and more costly than it has to be. That is why we have the application programming backlogs that we do. We need a better, more systematic way of dealing with complex digital procedures.
We need systematic systems analysis.
Now do you see the connection with APL?
I started to work for IBM in the Advanced Systems Development Division, San Jose. I have since worked in the Palo Alto Scientific Center, the Palo Alto Systems Center and the Santa Teresa Laboratory, where I am now in APL Development.
At the time I started my IBM employment, the ASDD library used to circulate a daily list of its acquisitions to all employees. I am a kind of pack rat when it comes to written material and I acquired all kinds of library offerings that, I'm sorry to report, I never took time to read. Among the publications that I thus acquired, scanned, and filed for future reference were several reports containing a strange, exotic notation. I was skeptical about the value of these reports since, by that time, I not only had my own ideas as to what was needed but I had also seen many attempts by many people to develop notations, charting techniques, and other descriptive schemes. I didn't think much of any of them. By and large, history seems to have agreed with me; most of them are happily forgotten.
However, when I learned that the author of several of the papers full of Greek letters, slashes, little circles, curlicues, and other cabalistic symbols was coming out to San Jose to talk about his ideas, I decided that I'd read one or two of his papers as a preparation for his talk.
My life hasn't been the same since.
My first acquaintance with the notation that has since become APL (for several years it was either "Iverson Language", "Iverson Notation", or just plain "Iverson") started with an IBM Research Report by Kenneth Iverson called The Description of Finite Sequential Processes.
I don't have the paper handy at the moment so what I'm about to tell you is all memory; it may be mistaken in details but not in essence. I seem to remember that the first page was mostly given over to heading material, possibly an abstract, so that there were only two short columns of reading matter on it. And, again from memory, it took me several hours to understand what those two short columns were all about.
The author's approach was so different from anything I'd ever encountered that I had a difficult time adjusting to his frame of reference. At the end of the first page, a fair assessment of my state of mind would be that I had glimmerings but no hope.
The second page took about as much reading time as the first but, since it had twice as much matter, I was clearly improving. The glimmerings were now fitful gleams. One thing had definitely changed, however. I had no doubts about the value of what I was reading. I was now virtually certain that the author had something to say and that I'd better find out what it was. The third page had an illustration that, in a few short lines, described George Dantzig's simplex algorithm simply and precisely.
That was the overwhelming, crucial experience.
In the previous thirteen years, I had participated in so many murky discussions of what was here presented with crystal clarity that I knew that what I was reading was of enormous significance to the future of computing.
So, when Dr. Kenneth Iverson came out to talk to us at San Jose, I was not only a convert, I had a fair idea of what he had to say. In the upshot, this meant that I was the only one who could ask him questions. Ken had some good, sharp people from Research and Advanced Systems Development in his audience but I'm pretty sure I was the only one who had been lucky enough to read what he had to say beforehand so that I had a fighting chance to follow him when he did what he usually does: hit you with one idea after another so fast that your mind goes numb.
I had been alerted to the fact that Ken might know what he was talking about by a fellow employee named Don Fisher who was working in the same group that I was. After Ken's talk, he came to visit Don and I got a chance to meet him. It's hard to believe that that first meeting took place more than twenty years ago. But it's true. I've been an APL bigot for a long, long time--since before there was an APL, in fact.
Shortly after I went to work for IBM, Dan Teichroew, a friend of mine from the Bureau of Standards days, asked if I would be interested in spending part of my time at Stanford, participating in a study of "Quantitative Management Techniques" being conducted at the Graduate School of Business. Dan was a Professor of Management at the GSB and he wanted my permission to approach IBM about the idea. Naturally I was delighted by his proposal and even more delighted when my management gave its approval.
The next several years were among the most satisfying and productive I've ever spent. And, in my opinion, not only did I benefit from them but so did everybody else who was involved: Stanford, IBM, the students who participated in the research program and attended my Business Information Systems classes and, more to the point of this talk, APL itself, whose first implementation was a FORTRAN-based batch interpreter that was developed on the IBM 7090 in Pine Hall at Stanford.
In the Formalizing Business Problems talk that I had given in 1955, I asserted that the problems we were facing required a partnership among computer users, computer manufacturers and academic institutions if we were ever to develop the body of knowledge we needed to manage computers properly. For a while there at Stanford we had two-thirds of what I'd recommended and, as you will see, I concentrated a good deal of effort on seeing that the third was represented as well.
When I started at Stanford as an Industrial Research Fellow, the present School of Business building did not exist. I occupied an office in Polya Hall, near the temporary buildings used to house Business School faculty and staff. IBM, at that time, shared the 7090 with Stanford and thus had the use of a few offices in Polya Hall, the building which housed the university's Computer Science faculty and staff. I was assigned one of those offices.
This was ideal. I not only participated in the activities of the Graduate Business School: I was also part of the Computer Science complex at the university. Both of these associations had APL implications and I'd like to tell you about them.
My activities at the School of Business can be described in four parts:
- Work with graduate students on the "Quantitative Techniques" project.
- Guest lecturing in Business Information Systems courses taught by Dan Teichroew and John Lubin.
- Development of my own Business Information Systems course after Teichroew and Lubin left the university.
- Teaching Operations Research courses as a Lecturer in Operations and Systems Analysis.
APL and Quantitative Techniques. My purpose in describing my pre-APL history to you was to let you know how my ideas about what management needed were formed. If you were paying attention, it should come as no surprise to you that, as soon as I was given an opportunity to do something about it, I started to investigate the implications for management of an efficient notation for describing, analyzing and designing digital procedures.
I investigated two techniques: decision tables and APL. The latter is the one relevant to this talk. I've long regretted that I never wrote up what I did; let me remedy the deficiency now.
A Programming Language was published just in time for me to use in my researches at Stanford. It was a godsend. I used it to try to answer the question: Is it possible to write programming specifications in such a way that ambiguities, misunderstandings and outright mistakes in programming are minimized? The answer I got surprised even me.
Here's what I did. For a set of problems of increasing difficulty, I would write a solution procedure, using the notation of A Programming Language. I would then ask the graduate student assigned to me to:
- program the procedure in any programming language he chose,
- execute the procedure for a representative set of data values, and
- give me his program and answers so that I could compare them with the specifications I had written and the answers I had previously calculated.

The result, which I still find surprising and impressive, was that, in every case, what was programmed was exactly what I specified.
I'll comment in a moment on how rare this is under any circumstances. But these particular circumstances were so extreme that they merit some discussion.
A Programming Language was and is crammed full of ideas. I studied it assiduously, and enjoyed it, but it was not easy reading, at least not for someone of my mental capacity. I was lucky that my Stanford assignment provided me some time to spend on it so that I could both use it and explain it when my students asked questions.
But think of them. The life of a graduate student is a busy one. They had less time than I had to study strange notations. How could they make sense of the chicken scratches I said were their programming specifications?
Somehow, they did.
I gave them completely abstract procedure descriptions, even avoiding standard words where they might have provided clues to the nature of the procedure, and they programmed exactly what I specified even though they had no prior experience with the notation and never became expert at using it.
Thinking back on it, I understand why an intimate familiarity with this strange notation would have been desirable but was not essential. The only part of the notation they needed to understand was the part I used. I told them to look up the meaning of the symbols in Ken's book but I also told them that I'd explain and illustrate the meanings myself if they wanted help. But I wouldn't do anything but explain the symbols. They had to translate the symbolic description into a program.
They did, with no arguments and no discussions about what the procedure was supposed to do. The procedure was described completely abstractly. They had no idea what external significance it had. They were not led astray by ideas of their own about it. They figured out what I wanted done and then did it. Contrast this with the usual way in which a business procedure gets programmed. Somebody, usually called a systems analyst, casts about, comes up with some general ideas, puts them down as programming specifications and presents them to somebody else, usually called a programmer, who

- asks the analyst lots of questions,
- programs what he thinks the analyst wants,
- is told that what he has done is completely wrong,
- quarrels with the analyst about what he said and what it implied,
- forces a reworking of the specifications,
- tries another program,
- is wrong again, forcing another rework of the specifications,
- and so on until finally, after a long period of false starts and reruns,
- some program is accepted as an implementation of some set of specifications, both programmer and analyst now being so sick of the entire procedure that they no longer care whether what is finally programmed is what was originally wanted.

Is what I'm describing familiar to you? Does it, perchance, occur in your organization?
To me, specifications cannot properly be called specifications until they are as abstract as blueprints or mathematical formulas. Specifications using the words of everyday speech are always subject to misinterpretation. Most of the costs, delays and other inefficiencies that attend the development of data-processing procedures are due to these misinterpretations. The abstract APL program specifications I tested at Stanford strengthened my belief in the opinion I've just expressed.
Content. What did I ask the students to program? Let me select four examples.
Rings-O-Seven. The first example was from A Programming Language page 63, Exercise 1.5. It was a solution of a "rings-o-seven" puzzle, in which rings on a bar were to be removed according to certain rules. The bar was represented by a logical vector in which the presence of a ring was indicated by a 1 and the absence by a 0.
This was just a warmup experiment, but it had an informative result. My programming specifications, naturally, described the solution procedure I had devised. It was wrong. But it was programmed exactly the way I specified it.
Think of it. No arguments about who misunderstood whom. The systems analyst was wrong; the programmer was right.
Wouldn't you like to be able to make that determination without bickering, recriminations or tears?
(After my blushes subsided, I wrote a correct procedure. It was programmed correctly and gave the right answers. I tell you this because I'm vain and don't want you to remember me for the only mistake I ever made in my whole life.)
Internal Rates of Return on Investment. Financial theory at that time was troubled by the fact that the then current procedures for calculating rates of return on an investment gave ambiguous answers in some cases. The difficulty arose when the cash flows that characterized the investment were a mixture of positive and negative amounts. This led to an equation with multiple roots, so that, in many cases, two equally ridiculous rates of return were calculated.
Dan Teichroew felt that the difficulty lay in the assumption that the same interest rate should be applied to the negative cash flows as to the positive. What he suggested was that there would be a unique, meaningful solution if we assumed that the rate to be applied to the negative flows, which were, after all, borrowings, should be a putative "cost of money" which would not, in general, be the same as the return on the investment.
Given this assumption, I provided a proof that an "internal rate of return" on the investment would be uniquely determined when a "cost of money" was assumed.
I then specified, in "Iverson language", a procedure that determined a rate of return for a fixed cost of money and a specified series of cash flows. The actual calculation was programmed by J. P. Seagle, a graduate student. I no longer remember what programming language Pete used, possibly a home-grown (Stanford) assembler for the 1401. The results were reported in a couple of papers by authors the very ponderosity of whose names (Teichroew, Robichek, Montalbano) testified to the validity, excellence and importance of the research they described.
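The dual-rate idea can be made concrete. The sketch below is mine, in Python rather than in the Iverson notation of the original specification, and every name in it is hypothetical; it assumes one common reading of the approach, in which a negative project balance (money tied up in the project) compounds at the candidate rate of return r, a positive balance compounds at the assumed cost of money k, and the internal rate of return is the r that drives the terminal balance to zero:

```python
def terminal_balance(flows, r, k):
    """Project balance after the last cash flow.  A negative balance
    (funds invested in the project) compounds at the candidate return
    rate r; a positive balance compounds at the assumed cost of money k."""
    balance = flows[0]
    for cash in flows[1:]:
        rate = r if balance < 0 else k
        balance = balance * (1 + rate) + cash
    return balance

def dual_rate_return(flows, k, lo=0.0, hi=10.0, tol=1e-9):
    """Bisect for the rate r that makes the terminal balance zero,
    given a fixed cost of money k."""
    f_lo = terminal_balance(flows, lo, k)
    for _ in range(200):
        mid = (lo + hi) / 2
        f_mid = terminal_balance(flows, mid, k)
        if (f_mid > 0) == (f_lo > 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2
```

For a conventional investment (a single sign change in the flows) the balance stays negative throughout, so the procedure reduces to the ordinary internal rate of return; for mixed flows, fixing k is what removes the ambiguity of multiple roots.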
Critical-path calculations. Critical-path and PERT calculations were all the rage at that time, with many papers being devoted to efficient schemes for "topological sorting" and for detecting inconsistencies in precedence relationships.
I became interested in the problem and decided that the need for topological sorting could be eliminated and that consistency checking did not require a separate, special program. I used APL (Iverson language) to describe a solution procedure in which topological sorting was not required and consistency-checking was a fallout from the basic critical-path calculation.
This time, in addition to having Pete Seagle program the calculation, I programmed it myself, in MAP, the IBM 7090 assembly language, and FORTRAN (for input-output subroutines). My program exploited almost every bit in the 36-bit 7090 word. This permitted me to store enormous networks internally, so that I was able to achieve calculation speeds far in excess of any other method then available.
I was not then, and have not since become, an expert assembly-language programmer. I had risen to too august an eminence at Kaiser Steel to do much programming, though I snuck some in now and then, when no one was looking. My critical-path algorithm was the first programming I had ever done for the 7090 which, like MAP and FORTRAN, was completely new to me.
With the precise specifications of the APL procedure as my guide, programming assembly language for a machine with which I had little experience went very quickly and with no errors other than mistypings. The program I developed was a useful one that I later used in classes in the Business School and in the International Center for the Advancement of Management Education at Stanford. I described it in a paper called High-Speed Calculation of the Critical Paths of Large Networks that appeared both as a Palo Alto Scientific Center report and as an IBM Systems Journal article. The algorithm presented in the paper used the old notation (the one in the book) since the new (the one for the typewriter) had not yet been designed.
So APL turned out to be a particularly useful way to specify a program for an inexperienced programmer--me.
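The core of such a calculation can indeed be sketched without topological sorting: relax over the precedence list until nothing changes, and let the consistency check fall out of the same loop, since an inconsistent (cyclic) precedence network is exactly one that never stops changing. The Python sketch below is my own illustration of that idea, not a reconstruction of the MAP program; the names and the network representation are assumptions.

```python
def earliest_starts(n_events, precedences):
    """Earliest event times for an activity-on-arrow network.
    precedences: (u, v, dur) triples -- event v cannot occur until
    dur time units after event u.  No topological sort is needed:
    repeated relaxation converges for any consistent network, and a
    network still changing after n_events passes must contain a cycle."""
    earliest = [0] * n_events
    changed = True
    for _ in range(n_events):
        changed = False
        for u, v, dur in precedences:
            if earliest[u] + dur > earliest[v]:
                earliest[v] = earliest[u] + dur
                changed = True
        if not changed:
            break
    if changed:
        # consistency checking as a fallout of the basic calculation
        raise ValueError("inconsistent precedence relationships (cycle)")
    return earliest
```

For the small diamond network `[(0,1,3), (0,2,5), (1,3,2), (2,3,4)]` with four events, this yields earliest times `[0, 3, 5, 9]`; the project duration is the maximum, and a backward pass of the same shape would give latest times and hence the critical path.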
Linear Programming. The last program specification I want to discuss was done by a student, Don Foster, who had even less time than the general run of student I'd been working with. I believe it was his last term at the School of Business. This is always a hectic time but in Don's case he had the added distraction of planning for a European trip. Even without my assignment he was leading a harried life.
I gave him a version of the simplex algorithm (basically the one that had originally sold me on APL) from which I'd removed all clues like, "unbounded", "infeasible", etc.
The solution was programmed in jig time, since Don was champing at the bit anyway. The programming language was FORTRAN. The program ran the first time it was tried. It produced the right answers.
What was interesting was Don's reaction when I told him what he'd programmed. Like all good business school students of that era, he'd received instruction in linear programming. But he hadn't been told how easy it was. The arm-waving and chalk dust had concealed the basic simplicity from him as they had from me.
Business Information Systems
As a guest lecturer in Business School courses, I advanced the argument that I've been advancing throughout this talk, that we need an efficient notation for describing procedures. I illustrated some of what could be done with APL and decision tables.
In my Business Information Systems course, I did the same, but I also encouraged activities that would permit students to find out for themselves whether or not I had valid reasons for what I was recommending. One of the course requirements was completion of an approved project. As an example of the kind of project I had in mind, I would suggest that they go to a local company, get a copy of an important report used by several departments, and visit each of the departments, asking whoever got the report what he thought the report told him, how much he knew about how the figures in the report were determined, what actions he based on the report, and how he decided on his actions.
Few of the students had had detailed business experience at that point in their careers. Business to them was defined by the other business school courses they'd taken: finance, marketing, micro- and macro-economics, accounting, theory of the firm, organizational behavior, and so on. These were good courses but none of them were concerned with or had the time to devote to determining what was happening at the working or first-line management level of an organization.
The intent of my course was to forewarn my students that business was, in practice, a good deal more disorganized than they were likely to realize from most academic discussions.
I had some skeptics in my classes, people who felt I was overstating my case. None of the skeptics who attempted the kind of project I suggested remained skeptics. Some, shattered by their experiences, felt even more strongly than I that no one in management knew what was going on.
Several students caught the APL bug. They went out on missionary activities of various kinds after graduation. The effects of some of these are still being felt--in, for example, organizations like IBM and American Airlines, to name the two I know most about.
Stanford's Computer Science Department.
Although I was housed with members of the Computer Science Department, I had no official connection with it. All of my interaction with faculty, staff or students was informal.
From the standpoint of APL as it now is, however, this interaction was the important one. This was not because of anything I did. It was primarily because I was a reminder of the existence of "Iverson language" and a kind of catalyst who served to bring together the right people at the right time.
The Computer Science Department of those days was an ALGOL stronghold. It had a Burroughs B5000, later upgraded to a B5500, an IBM 7090, and a PDP-1, probably the first computer I ever saw with a cathode-ray tube terminal. I don't know whether "Star Wars" (the game) was developed at Stanford. I do know that a lot of "Star Wars" was played there.
Stanford had developed its own version of ALGOL, called SUBALGOL, for the Burroughs computer that had preceded the B5000. I believe the number was B220, but my memory might be playing me tricks. At Kaiser Steel, our first computer had been a B205, predecessor of the B220, if that's what it was.
The significance of this information from the APL standpoint is that two, possibly three, of the people who played key roles in developing the very first APL system had been instrumental in producing SUBALGOL for Stanford: Larry Breed, Roger Moore, and (the one I'm not sure about) Phil Abrams.
I met Larry as a result of a talk I'd given as one of a series on "Programming Languages" conducted by the Computer Science Department. He expressed an interest in what I'd had to say about what I was doing in the School of Business with the notation described in A Programming Language. He and Phil Abrams took action on this interest in a very real, very productive way when the IBM Systems Journal article appeared. What they did, and its aftermath, is described in Appendix A, an annotated verse history of APL's early days.
Larry and Phil not only developed the batch APL interpreter I mention in the verse, they did so many other things that I wish they and others involved in APL's origins would get them down on paper. For example, one of them should tell the story of Elsie (for Low Cost), an APL mini before there were minis.
But, in essence, all I did was happen to be around, saying the right things to the right people. Things took off when the right people got together.
Incidentally, one of the people involved in the Programming Languages seminar to which I referred above was Niklaus Wirth. Unfortunately, Klaus didn't get the proper message from my talk. He went his own way and developed PASCAL.
APL at IBM
The history of APL at IBM has been a curious one. In the early days, those of us who believed in APL were regarded as being a little (perhaps more than a little) strange. Since much of the strangeness was concentrated in IBM Research, this was tolerated. Practical people (the kind of people who make sales and meet payrolls) expect research people to be strange and are usually disappointed when they're not. So the strange people in Research were written off as overhead and left to amuse themselves with their incomprehensible, impractical symbols.
What that particular Research group did, of course, was produce the most solid, dependable, useful time-sharing system anyone had ever seen.
I wish I could tell you what it felt like in those early days to have the use of a system that was up twenty-four hours a day, seven days a week. No one had ever known such a luxury. People who didn't bother to investigate never believed us when we told them about it.
But some people did investigate what the researchers had developed and started to use it to do IBM's key bread-and-butter applications. This way of doing business was so productive that it spread like wildfire. By the time the practical people found out what had happened, APL was so important a part of how IBM ran its business that it could not possibly be uprooted. The wild-eyed researchers had produced a moneymaker. No talks about product "strategies" and the evils of language proliferation prevailed against the simple fact that:
- if you worked for IBM and
- had access to an APL time-sharing service and
- had something you wanted to get done on a computer quickly and economically

then the best way to get it done was to use APL.
I wish someone who knows the details of how that came about would write about it. I can't do it. I was three thousand miles away when it took place. APL (called VS APL for reasons beyond the ken of mortal man) is, of course, now an IBM program product. I don't know how much more practical than that you can get.
Summary: Systematic Systems Analysis
I could go on but I see you're falling asleep. Let me end by rephrasing what is either explicit or implicit in what I've already said.
In the preface to my book on decision tables, I say: if you wish to use digital computers effectively, the first thing you should do is digitize your procedure descriptions.
As usual, this was something I realized I was doing after I'd finished writing and took time to think about what I'd written. The key idea of the book (the potential of which, incidentally, no one has as yet successfully exploited) is that procedures can be digitized in the same sense that bubble-chamber and spark-chamber pictures are digitized for analysis by a digital computer.
A decision table is a digitized procedure description; it describes a correspondence between vectors of decision values and vectors of action values.
The particular form of the digitizing is not important. Decision tables may or may not be the most effective way to get the digitizing done. The important thing is that it be done and done in a way that permits checking for consistency, redundancy, completeness.
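To make the claim concrete, here is a minimal sketch of one possible digitized form, in Python; the representation is my assumption for illustration, not a reconstruction of anything in the book. Each rule pairs a vector of decision values (with None as a don't-care) with an action, and completeness and consistency checks become simple enumerations:

```python
from itertools import product

def audit_decision_table(rules, n_conditions):
    """rules: list of (pattern, action) pairs, where a pattern is a
    tuple of True / False / None (None means don't-care).
    Enumerates every combination of condition values and reports the
    unmatched cases (incompleteness) and the cases matched by rules
    with conflicting actions (inconsistency)."""
    def matches(pattern, case):
        return all(p is None or p == c for p, c in zip(pattern, case))

    missing, conflicts = [], []
    for case in product([True, False], repeat=n_conditions):
        actions = [a for pattern, a in rules if matches(pattern, case)]
        if not actions:
            missing.append(case)
        elif len(set(actions)) > 1:
            conflicts.append(case)
    return missing, conflicts
```

The same enumeration extends naturally to redundancy: a rule every one of whose cases is covered by the other rules can be flagged mechanically, with no appeal to anyone's intuition about what the procedure "really" means.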
But how is such a digitized procedure to be developed, maintained, managed, modeled, interpreted, translated, improved, extended,...?
To me, the answer is clearly APL. If we are ever to do "systems analysis" systematically, we must

- Digitize our procedure descriptions
- Manage our digitized descriptions with APL

We must divert our research to developing ideas rather than gadgets. Good ideas are at hand. Let's develop them.
Epilogue - Twenty Years After
The latest IBM version of APL is an Installed User Program called APL2. For those of us who had to bootleg our APL efforts within IBM for a long time, the announcement of APL2 is gratifying because it indicates the kind of management backing and recognition that we missed when we felt that APL was regarded as a limited tool for a small, specialized audience.
Management put its support on record in another significant way. IBM recently instituted awards for outstanding technical achievements. The first of these to anyone at the Santa Teresa Laboratory was just awarded to Jim Brown, manager of the group that developed APL2.
I haven't had a chance to use APL2 very much yet. I've been too busy writing a workspace manual for VS APL. So I can't pretend to an extensive knowledge of its details. But, recently, I used APL2 to do something that I haven't been able to do for 20 years. Page 19 of A Programming Language describes a bank ledger that has three columns: the first column contains customer names, the second account numbers, the third balances. Unassigned account numbers had the entry "none" in the corresponding row of the name column.
Until APL2, no system available to me within IBM would allow me to form an array of that kind. Nor could I write, in any straightforward fashion, the four programs, producing reports from that ledger, that appear as one-liners in the book with which, twenty years ago, I started my investigation of "Quantitative Techniques in Management" at the Stanford Graduate School of Business.
With APL2, I was able to do precisely that. As a way of rounding out twenty years of APL history, I thought I'd show you what I did. It's contained in Appendix B.
APPENDIX A. APL BLOSSOM TIME - A HISTORY IN VERSE
My contribution to APL 81 was the verse that I discuss below. The most I'd expected, when I wrote it, was that Jim Brown might play it at some informal gathering. I couldn't anticipate what actually happened. A group including Jim Brown, Larry Breed, John Bunda, Diana Dloughy, Al O'Hara and Rob Skinner rehearsed their guitars and voices until they were of a truly harmonious sweetness and sang "APL Blossom Time" at the APL 81 banquet as part of the evening's entertainment. I. P. Sharp's Peter Wooster prepared overhead transparencies that made it possible for the audience to sing along. And I'm sure other people whose names I was never told contributed to what was, for me, an extremely heartwarming experience: the sound of people singing, laughing and giving every evidence of enjoying the words I'd written.
Despite its frivolity, APL Blossom Time is authentic history. I thought it might be useful to get the details on record by annotating each section of the verse.
APL BLOSSOM TIME

A nostalgic reminiscence of the early days of APL, remembered to the tune of The Battle of New Orleans.
Back in the old days, in 1962, A feller named Ken Iverson decided what to do. He gathered all the papers he'd been writing fer a spell And he put them in a little book and called it APL. Well... He got him a jot and he got him a ravel And he revved his compression up as high as she could go And he did some reduction and he did some expansion And he sheltered all his numbers with a ceiling and a flo'.
If you've read the earlier part of this book, this verse doesn't need annotating. If you haven't, go back and read it.
Now Sussenguth and Falkoff, they thought it would be fine To use the new notation to describe the product line. They got with Dr. Iverson and went behind the scenes And wrote a clear description of a batch of new machines. Well... They wrote down dots and they wrote down squiggles And they wrote down symbols that they didn't even know And they wrote down questions when they didn't know the answer And they made the Systems Journal in nineteen sixty-fo'
Though the scan required that I place Sussenguth's name first, this is perhaps misleading. Ed Sussenguth has done a lot of good work for IBM but, except for his participation in this paper, I don't know of any other in which he used APL.
Adin Falkoff, on the other hand, was one of those crazy-symbol Iverson-language authors whose papers I started requesting from the ASDD Library when I first joined IBM. I remember one paper in particular. It struck me because it seemed to be written by someone who hated jargon as much as I did. One of the data-processing fads current at that time was the "associative memory". Adin, like the rest of us, had to use the term because everybody else was using it. But he rather wistfully (can you imagine Adin wistful?) pointed out that a more descriptive term would be "content-addressable memory".
And, of course, as you all know (or should know, if you don't), Adin Falkoff, in both technical and administrative capacities, has been in the forefront of APL developments ever since the days of those early, incomprehensible reports out of IBM Research.
The paper referred to in the verse is A Formal Description of System/360, by Falkoff, A. D., Iverson, K. E., Sussenguth, H. IBM Systems Journal, Vol. 4, No. 4, October, 1964.
About "questions when they didn't know the answer": the paper was indeed, to the best of my recollection, the first to use the question mark as an APL function.
I gave my copy of that issue of the Systems Journal to Larry Breed. (I had already ordered several more. John Lawrence, editor of the Systems Journal at the time, had the APL functions in the article printed separately for more effective study. I ordered several copies of those, as well.) Larry and Phil Abrams conducted a seminar on the System/360 paper that extended over several weeks. They also produced a list of "cliches", to assist in understanding regularly recurring patterns, and a list of errata, to remind the authors (or the typesetters) that they didn't know it all.
The sessions conducted by Larry and Phil were well attended. When Ken Iverson came out to give a talk at Stanford, he drew the biggest crowd the Computer Science auditorium had seen up to that time. I told my Business Information Systems class to attend since they would hear something better than anything I had to say; this also gave me a chance to attend myself.
Now writing dots and squiggles is a mighty pleasant task But it doesn't answer questions that a lot of people ask. Ken needed an interpreter for folks who couldn't read So he hiked to Californ-i-a to talk to Larry Breed. Oh, he got Larry Breed and he got Phil Abrams And they started coding Fortran just as fast as they could go And they punched up cards and ran them through the reader In Stanford, Palo Alto, on the seventy ninety oh.
Ken Iverson and Larry Breed first met in my office at Polya Hall. Since this may be my only claim to fame, I'm glad to put this historical fact on the record.
Larry was about to graduate. Ken had a job to offer him. We now have APL.
I remember a phone call of Ken's, shortly after Larry had joined him and Adin at IBM Research in Yorktown, in which he said something like: "This young man thinks he can write a translator in a couple of months." He sounded as if he were wondering whether he'd made a bad bargain. I assured him that if Larry said he could do something in a couple of months he would probably do it in a couple of weeks. He and Roger Moore were legends in their own time during Stanford's SUBALGOL days.
Stanford has left a mark on APL second only to that of left-handed Canadians. For a while, there was a theory that all of APL was being dominated by left-handed Canadians. I have been told that when Mike Jenkins, at lunch in the Yorktown cafeteria, was observed to be left-handed, someone facetiously asked him if he happened to be Canadian. He happened.
I tried to start a similar factoid about right-handed Brooklynites, hoping to get included along with Falkoff and McDonnell. I forget what happened to that. I think one of them is left-handed. I know I'm not.
Well a Fortran batch interpreter's a mighty awesome thing But while it hums a pretty tune it doesn't really sing. The thing that we all had to have to make our lives sublime Was an interactive program that would let us share the time. Oh, they got Roger Moore and they got Dick Lathwell, And they got Gene McDonnell with his carets and his sticks, And you should've heard the uproar in the Hudson River valley When they saved the first CLEANSPACE in 1966.
APL bigots seem to be characterized by literacy and a feeling for history. The first time-sharing APL system was implemented (as IVSYS) on an IBM 7090 at Mohansic. In those days, there was no )CLEAR command. To get a CLEAR workspace, you had to load one. The one that came with the system was called CLEANSPACE. Although it was no longer needed when )CLEAR was introduced, CLEANSPACE, along with the time and date it was originally stored, has been preserved in Library 1 of a continuous sequence of systems ever since: the Yorktown Model 50, the Philadelphia Scientific Center Model 75, the Palo Alto Model 158, the Santa Teresa Model 168 and, as I discovered for the first time just a few hours before I wrote this, the Santa Teresa Model 3033. At one point, after a disaster had caused the loss of CLEANSPACE, it was carefully restored with the correct date and time. The objective, of course, is to preserve a record of the moment when APL first became a time-sharing computer language.
Preserving CLEANSPACE in APL2 presented a problem, since workspace names are limited to eight characters in CMS, the first "environment" in which APL2 has been offered. However, as you can see from the following exhibit, which is a copy of what appeared on the screen in response to a )LOAD 1 CLEANSPACE command that I executed on our APL2 system (which operates under CMS), the problem has been solved, or, better, circumvented.
Note that not only is the time given but also the time zone of the area in which the storing was done, indicating that the original workspace was stored when United States Eastern Standard Time was in effect. Note also that the original workspace size was 32K and that the time zone in which CLEANSPACE was loaded to produce this example was Santa Teresa Daylight Savings Time.
What hard workers APL bigots are! I've checked my handy perpetual calendar and, as far as I can tell, November 27, 1966 was the Sunday of what <! start page 22 -> must have been a four-day Thanksgiving weekend. What were those loonies doing working such crazy hours during the holiday season?
The "carets and sticks" reference is to a paper by Gene McDonnell on the logical and relational functions--the ones whose symbols can be constructed out of "carets and sticks".
Well, when Al Rose saw this he took a little ride
In a big station wagon with a type ball by his side.
He did a lot of teaching and he had a lot of fun
With an old, bent, beat-up 2741.

Oh, it typed out stars and it typed out circles
And it twisted and it wiggled just like a living thing.
Al fed it a tape when he couldn't get a phone line
And it purred like a tiger with its trainer in the ring.
Al Rose was, and I assume still is, one of the most spectacular APL demonstrators there ever has been. The verse refers to a vacation he took in which he was accompanied not only by his family but by what was laughingly called a portable 2741. This was a 2741 that came in two parts which, when ready to be "ported", looked like two big, black pieces of luggage. Wherever Al went, he'd find some likely APL prospects, park the station wagon near an electrical outlet and a phone, lower the tailgate and start hammering on the keys.
In those days, getting connected to a working APL system was a chancy thing. As a safeguard, Al recorded, on tape, what went across the acoustic coupler during a sample session. When he had problems getting to a real APL system, he'd play the tape into the acoustic coupler and produce a simulated computer session that was an exact copy of the real thing.
I remember that double-black-box 2741 very well myself. I, too, did quite a bit of APL demonstrating in those days. At the University of California at Davis, the demonstration was given on the second floor and there was no elevator. I had to haul those two big boxes up a long flight of stairs. I'm glad I didn't find out until later how heavy they were. When I sent them Air Express to an IBM System Engineer in Seattle, I learned for the first time that they weighed 120 pounds. Well, I'm not too bright but I'm pretty strong.
Now, there's much more to the story, but I just don't have the time
(And I doubt you have the patience) for an even longer rhyme.
So I'm ending this first chapter of the tale I hope to tell
Of how Iverson's notation blossomed into APL.

So.. Keep writing nands when you're not writing neithers,
And point with an arrow to the place you want to be,
But don't forget to bless those early APL sources
Who preserved the little seedling that became an APL tree.

Dedicated to the pioneers of APL with respect and affection by J. C. L. Guest
J. C. L. Guest is the pseudonym I used for some light verse I submitted to Datamation several years back. There were four pieces in all: The Far-flung Data Base, SYSABEND Dump, Virtual Memory and Decision Making.
If you were offended by the unkind things I said about modern management in this talk, don't read Decision Making. You won't like it. <! start page 23 ->
APPENDIX B - TWENTY YEARS AFTER
The following four figures show how I applied APL2 to Program 1.9 (Example 1.1) of A Programming Language, page 19.
The first figure shows the sample bank ledger I used and the calculations I performed to illustrate the ledger's shape and various facts about its composition. Note that the name entry for an unassigned account number is a single blank rather than the "none" used in the original example.
The second figure shows two versions of the four reports (P, Q, R, S) required in the example. In the first, the output is unformatted. In the second, "picture format" is used to format the numeric part of the report.
The four required reports are:
- P - name, account number and balance for each account with a balance less than two dollars. (Although the original example did not require this, the illustrated calculations do not include unassigned account numbers in the report.)
- Q - name, account number and balance for each account with a negative balance exceeding one hundred dollars.
- R - name and account number of each account with a balance exceeding one thousand dollars.
- S - all unassigned account numbers
The third and fourth figures show the programs for the unformatted and formatted reports respectively. They could have been written as the four one-liners of the original example, except that report P, the one producing a list of accounts with balances of less than two dollars, would have included unassigned account numbers. To avoid this, the unassigned account number report, S, was produced first and an array T, consisting of all assigned accounts, was used to create the subsequent reports.
The other lines in the report merely introduce spaces to separate the successive reports. <! start page 24 ->
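The APL2 programs themselves appear in the figures, which are not reproduced here. For readers without APL at hand, the report logic just described can be sketched in Python; this is a hypothetical translation, and the sample ledger, variable names and row layout below are my own assumptions, not the original code:

```python
# The ledger is modeled as rows of (name, account_number, balance);
# an unassigned account number has a blank name, as in the APL2 example.
ledger = [
    ("SMITH", 1001,    1.50),
    (" ",     1002,    0.00),   # unassigned account number
    ("JONES", 1003, -250.00),
    ("BROWN", 1004, 1500.00),
]

# Report S: all unassigned account numbers (produced first, as in the text).
S = [acct for name, acct, bal in ledger if name.strip() == ""]

# T: the assigned accounts only, used to build the remaining reports
# so that report P does not pick up unassigned account numbers.
T = [row for row in ledger if row[0].strip() != ""]

# Report P: name, account number and balance where the balance is
# less than two dollars.
P = [row for row in T if row[2] < 2]

# Report Q: accounts with a negative balance exceeding one hundred dollars.
Q = [row for row in T if row[2] < -100]

# Report R: name and account number where the balance exceeds
# one thousand dollars.
R = [(name, acct) for name, acct, bal in T if bal > 1000]
```

With the sample ledger above, S contains account 1002, P contains the SMITH and JONES rows, Q contains only JONES, and R contains only BROWN.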
<! start page 25 ->
<! start page 26 ->
<! start page 27 ->
- First draft of a report on the EDVAC, J. von Neumann, June 1945, (report), Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia, Pa.
- A Programming Language, Kenneth Iverson, John Wiley, 1962
- In making this comment here, and not earlier when I was describing the work I did for the Air Force, I do not mean to make an invidious comparison of the services. The work I did for the Air Force didn't require me to know whether either the data we used or the answers we calculated corresponded to reality; all I had to do was to supply plugboards or programs. At the Naval Research Laboratory, on the other hand, it was my responsibility to get good answers from good data. That's when I found out there wasn't any good data.
To even things out, let me observe that the Air Force doesn't know what's going on either. As for the Army, well, I served in the Army. I can tell you about the Army.
- Decision Tables, Michael Montalbano, Science Research Associates, 1974
a few comments by Curtis A. Jones, Apr 17, 2013
I do follow the IBM1130 group and have actually used their APL\1130 at the museum. Getting APL\360 running is pretty cool, and we owe Len a huge debt of gratitude for making the APL\360 source available for that. But APL has come quite a way since then. I refuse to say a "long" way since then because the language itself was well designed and has changed remarkably little. But it's been extended and lots of connections to interfaces and data have been added.
I don't remember Jim Weigang's "getting started" page, but I can add a little about developments since the October 2003 date of that page. First, note that Jim discusses APL*PLUS for the PC from STSC (Scientific Time Sharing Corp.). Their mainframe part became Manugistics, named after a logistics management program that ran on their APL. Their PC products were acquired by APL2000 in Princeton, NJ, from which you can purchase APL+WIN and other APLs. http://www.apl2000.com/ APL+WIN Ver 13 was released on Monday. So that APL is still around. I don't see a "hobby" price on the APL2000 site, but the interpreter that Jim mentions can, I think, still be found and downloaded.
The page also mentions IBM's TryAPL2 and Workstation APL2. I think the comments on TryAPL2 are more negative than it deserves, but then I had a small hand in distributing TryAPL2. (Ask me how I coined "guerrilla marketing" around 1991.) I think it served its intended purpose well: providing, for free, a complete APL for students to use for schoolwork. Its connections to outside files were limited to avoid commercial use. And it's certainly pretty easy to get a time-limited copy of the current Workstation APL2 from IBM. That's what I use. The copy I have running right now came through IBM's Academic Initiative, which provides software for school use.
Dyalog APL named their online demo TRYAPL. http://www.tryapl.com/ Dyalog also has "educational, personal and commercial versions" of their APL, which is being actively developed and promoted.
Jim Weigang's page also mentions the I.P. Sharp APLs. It's been said about their PC APL that for the PC/360 (?) customers (an IBM 360 in a PC box) they'd take out the emulator and add a digit to the price. It may be dated, but on today's fast PCs it might perform pretty well! And the SHARP APL for Linux must be pretty good, too. And it's free. There are some old I.P. Sharp hands around the area, partly because they had an office in Palo Alto. At many of the museum events you can see Joey Tuttle, Paul Berry and Larry Breed in the audience.
For Mac and Linux there's APLX: http://www.microapl.co.uk/apl/
For those even more mathematically inclined than the typical APL user, there's J, the language Ken Iverson produced after APL. For the investment banker, Palo Alto's (still?) Arthur Whitney sells K and Q. Stephen Wolfram admits that APL was a significant influence on Mathematica.
Wai-Mee Ching has a free APL-like language available called ELI: http://fastarray.appspot.com/default.html
The ACM SIGAPL page lists some current activity around APL: http://www.sigapl.org/
Sam Sirlin's APL FAQ list is updated fairly often in the USENET group comp.lang.apl (which is included in Google Groups). A direct link to the December "issue" is ftp://rtfm.mit.edu/pub/usenet-by-group/comp.lang.apl/APL_language_FAQ
Catherine Lathwell is working on a documentary on the history of APL: http://www.aprogramminglanguage.com/