Pittsburgh Years

Mainframes

[pittsburgh.jpg]

All the equipment was painted blue.  The Central Processing Unit, or CPU, could only be glimpsed from the counter: a matrix of small, blinking lights.  Their purpose was a mystery to the laity; they told the operator some story of the machine’s disposition.  A row of large and impressive disk drives dominated the view, resembling nothing so much as a Laundromat with box after aligned box of rapidly spinning disks.  Each of these held 7.25MB—megabytes!—of permanent storage.  The CPU’s random access memory was 64KB—kilobytes.

[The Magic Box]

(One eventually began to learn this new, esoteric jargon, which now hardly requires explanation for anyone at all.  Today the only surprising things are the much larger size of the numbers and the much smaller size of the hardware.)

While the facility housing the computer itself was business-plain, it had a certain aura of a church sanctuary: it was not to be entered, except by the cognoscenti; access was barred by a long counter which one approached as though to take communion; the high priest of the department and his several acolytes were neatly arranged in cubicles around the periphery of the machinery.  The facility even included a cubicle for a full-time representative from IBM—called a systems engineer, a new term.  He made sure everything worked; the technology was new and for most companies the required intellectual infrastructure was not then fully in place; the IBM systems engineer was an insurance policy.  His suit and tie were also blue.

At the counter, payroll data on punched cards and other information could be submitted.  The system also had a FORTRAN compiler, the language of choice for engineers; we were permitted to use the equipment for programming.  The ‘deck’ of cards that listed one’s program would be corralled by—in Pittsburgh—a ‘gum-band’ and, in the fullness of time, these cards would be loosed and run through the computer’s card reader by one of the machine’s acolytes.  The instructions would then be processed by the machine at lightning speed, though afterward the results would languish at the counter waiting for the next interoffice mail pickup.  If you were in a hurry you could pick up the results yourself: neatly folded striped green and white paper—three printed lines to a stripe—with narrow, removable strips of paper punched with sprocket holes along both edges; your deck of processed punched cards, gum-banded once more, would be lying on top of the print-out.

All this was quite impressive, but then one heard that an upgrade was planned to bring the main memory up to 128KB.  It was staggering; one knew a revolution was in the making but was not sure just what would come of it or how it might be applied.

My very first encounter with the new IBM had come when, one day in the mid-1960s at Blaw-Knox, I was offered a tour of our Accounting Department by a proud employee.  It seems to me that I was there to straighten out some discrepancy in my expense account.  There I saw a marvelous mechanical device that would sort punched cards so fast that the process could not be followed with one’s eyes.  Herman Hollerith, the founder of the Tabulating Machine Company, TMC—one of the firms later merged into what became IBM—had cleverly, and presciently, developed punched cards as an aid to the tabulation of the United States census.  The same deck of cards could be sorted by birthplace, by income, and by the many other measures that then interested our government.  After each sorting, the deck of cards could be printed in that order.  It could be used for tabulating all sorts of information.  Our Accounting Department also had some early model of an IBM computer.  I think it was a model 1401.  They were beginning to switch over to that.

In the latter part of the 1960s, or early 1970s, after the Blaw-Knox Corporation had moved into its new quarters in the One Oliver Plaza building, it leased an IBM 360-40 ‘mainframe’ computer.  Its function at Blaw-Knox was primarily that of processing corporate accounting.  But since it was there—and at quite a large monthly cost—other groups were encouraged to use it as well in order to spread the cost around.  The other departments, though, didn’t quite know what to do with it; computers were new and software for engineering, or for business processes other than accounting, did not then exist unless it had been developed in-house.  Yet the idea of using computers to do real work was in the air now, and not just at Blaw-Knox; this seed was germinating all over the country, but so far the only shoots to have sprouted were in the hothouses of the universities, the military and a few of the major business corporations.

There were other companies that manufactured computers at that time besides IBM, but none of the others was as successful.  The full ensemble of computer-makers was familiarly referred to by people in the business as “IBM and the seven dwarfs.”  The new IBM System 360 line, brought out in the mid-sixties, was revolutionary in that it essentially unified the instruction set (the coding—in hardware, firmware and software) of several sizes of computers.  This trick meant that a client company was not locked into a particular size of computer: if the company grew larger and needed a bigger computer, it was available.  The effort involved in this upgrade: a telephone call.  And all the programs that had been developed for the original smaller computer system worked without change on the bigger one; only your monthly lease price went up.  This flexibility was previously unheard of.

I bought a book on FORTRAN.  It was one of those simplified learning books.  I chose it because it had been written by an engineer and the examples all concerned engineering.  That way, if one knew anything at all about that field, one could pretty much understand the programming, understand how software could be used, and why it was useful.  This was the sort of grounding I always seemed to need before being able to study seriously.  I went carefully through the examples and wrote some simple programs.

Keypunch machines had been scattered around the various floors of our buildings to try to promote use of the computer by other departments.  My guess is that the accounting department, in its written proposal to the company, had assumed a certain use-factor by the rest of the company and wanted to do its best to encourage it.  The creativity of written financial justifications was not, of course, born at Blaw-Knox.  I have come to realize, having since performed some of this creative justifying myself, that the on-paper justification for new things is largely obfuscation.  The real justification for things like this, and one probably understood both by the authors of the paper and by its auditors, consists, in equal parts, of a leap of faith and a burst of optimism, with a dash of ‘toys for boys’ thrown in; men, after all, grow from boys.

As a program is coded, it is necessary to suspend certain natural preconceptions: if you had just instructed a program to do a complicated function ten thousand times, in your heart you knew that if you were to attempt to do this yourself by hand—and you had to be able to imagine doing this in order to write the program—not only would you never finish it, but if you had, you knew with near-certainty that you would not end with the right answer after that many iterations.  Human brains just don’t work that way.  But computers have the peculiar and unnerving quality of nearly eliminating time, as though one of the fundamental dimensions of the universe, and the root source of tedium, had been stripped completely away; Einstein be damned.  And math mistakes, that bugaboo of us all since entering grade school, had disappeared right along with this temporal distortion.

It seemed that you could ask a machine to do nearly impossible things and it wouldn’t think twice about it.  On top of that, after it had done them, it had the temerity to tell you at the very end of your printout, as though to rub salt in your wounds, that it had done this heretofore impossible task in some small fraction of a second.  That peculiar truncation of time and its inhuman, unerring accuracy are what seemed new and radical about computers.  It was this aspect of them that inclined one to think that nearly anything was possible now—at least in the beginning.

Unfortunately the least error in the syntax of your program, the slightest miskeying when using the unfamiliar keypunch machine, or an accidental mis-ordering of the deck of punched cards would result, after hours of thought and more hours of waiting, in a printout, returned a day or so later, that listed your program.  At first it usually read “syntax error, syntax error, syntax error, …”, which told you, the budding programmer, that the language you had used had not been phrased correctly; never mind whether the logic of the program would eventually give the proper answer, you had not even framed the instructions in a legitimate way.  So you made the corrections, submitted the deck of cards again and, recycle, …

Since accountants don’t use slide rules, and seem extravagantly shy of approximation, accountancy was initially more drawn to the use of computers than was engineering, for which approximation is bread and butter.  In Accounting, thanks to the Romans, not a fraction of a cent was ever out of place—on paper anyway.  And IBM, that always-sales-oriented organization, had caused programs to be developed for balancing dollars and cents long before it made any programs addressed to the more esoteric requirements of engineering.  As a consequence, accountants were generally spared the need to develop their own programs, while engineers at this time were left pretty much on their own at ground zero and had to develop their own software.

The engineers that I knew were cool to the frustrating, nitpicky perfection required for programming, and no small part of this distaste was that the simple act of keypunching too closely resembled typing, which then, in the great hierarchy of organizations, was considered work suitable only for secretaries.  I knew engineers who would write out a program in longhand and have a secretary keypunch the cards; this led to more errors and frustration.  Thus the overall process managed to turn off most engineers, all but those with a high tolerance for pain, an uncharacteristic humility, or those few who were extravagant futurists.

 

[The Cathedral of Learning]

In those distant days when God and man seemed to collaborate more intimately with respect to learning, a Cathedral of Learning had been built at the main campus of the University of Pittsburgh in the district of Pittsburgh known as Oakland.  Entering the Commons Room for the first time one could not help but look up and be awed by its spiritual architecture.  The ceiling of the Commons Room was a monumental fan of graceful arches high above the appropriately churchy stone floor upon which study tables had been arranged for the convenience of students.  But this building does not rely on these arches for support; it is some 40 stories high, and all this sanctifying masonry is of course supported by a hidden structural-steel frame.  One went to class using a bank of fast-moving elevators in the center of this ornate structure.

[The Commons Room]

A friend of mine, a structural draftsman at Blaw-Knox—he had been hired the same week as I—told me that he was thinking about taking a course in programming at Pitt (the University of Pittsburgh).  It was to be an evening course in a language called MAD, one of those curious, collegiate, double-meaning acronyms that stood for Michigan Algorithmic Decoder while, at the same time, managing to evoke MAD magazine, then popular among people of that certain age.  Without much thinking, I said that I would take the course with him.

The computer course that my friend and I took was largely a lecture by one or another graduate student in an auditorium.  At first there were several hundred students, few of whom had a direct interest in computer science—at that time not yet considered a science.  The school felt that a certain acquaintance with the new technology was desirable for everyone, as even then it was considered that computing might be applied in many fields.  It was similar in many ways to the discovery of the generation of electricity a century or so earlier; it was known to be an important discovery, but it had yet to be understood just what it might be good for.  After each lecture a small program in MAD was assigned to be written by the students.  Initially they were very simple and oriented toward familiarizing students with the system itself.  For example, you might be asked to print out a simple phrase (which today would certainly have been the now-famous ‘Hello, world’ of later programming fame), or to multiply one number by another and print out the product on the system’s printer.

[MAD Magazine, featuring Alfred E. Neuman]

At the end of each program, you added instructions to print out what you hoped was the right answer.  Finally you were instructed to insert a special punched card that called a standard subroutine named GRADE.  This routine looked at your answers, which had somehow been captured, to see if they were correct or not.  If they were not, the subroutine cleverly printed out on a line printer a page of letters, numbers and special characters which, if you held it a little away from you, resolved into the toothy, grinning picture of Alfred E. Neuman, the dominant, recurring character in MAD magazine.  Your ‘grade’ was duly recorded somewhere in the computer system, sparing the staff from even looking at your program.  All that was required of them was to lecture.

As it turned out, the aim of this course in MAD was less about learning the language in particular—though that was covered as well—than it was about understanding algorithms in general.  For example, you might be instructed to have your program read a set of numbers, stored by the staff ahead of time on a disk, into an array in your code.  Then you were to sort them in order, small to large, and print them out.  Or you might be asked to find the square root of a number without using the handy square-root function in the language.  Instead you were to manage it by using what was then termed the half-interval method, a structured-guesswork method that had been explained in the lecture.

(You will have seen this algorithm at work if you have ever watched the game show The Price Is Right on television: you don’t know what the price of a certain product is, but you can guess, and the master of ceremonies will tell you whether your guess is too high or too low.  The aim is to guess the correct answer within a certain amount of time.  While most people just randomly guess—and mostly lose—a substantial number of them have learned that the most efficient method is to guess high at first, and then, if that answer is too high, to cut it in half for the next guess; then, if that answer is too low, to increase it by half the distance back to the first guess, but if it was not too low, to cut it in half again, and so forth.  It is quite surprising how fast this half-interval method converges on the right answer.)
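For anyone curious what that structured guesswork looks like when spelled out for a machine, below is a minimal sketch of the half-interval method applied to the square-root exercise.  It is written in modern Fortran rather than in MAD, and the particular number and the variable names are my own illustration, not a reconstruction of the original assignment.

    ! Half-interval (bisection) square root: repeatedly halve the
    ! interval known to contain the answer.
    program half_interval_sqrt
      implicit none
      real :: x, low, high, mid
      integer :: i
      x = 2.0                        ! the number whose square root we want
      low = 0.0
      high = max(x, 1.0)             ! the root must lie between 0 and max(x, 1)
      do i = 1, 40                   ! each pass halves the interval
         mid = 0.5 * (low + high)
         if (mid * mid > x) then
            high = mid               ! guess too high: keep the lower half
         else
            low = mid                ! guess too low: keep the upper half
         end if
      end do
      print *, 'square root of', x, 'is about', 0.5 * (low + high)
    end program half_interval_sqrt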

Methods for performing all these computer algorithms had, of course, already been written years before by computer scientists, but here the aim was to give you an idea of how computers ‘thought’, or could be made to ‘think’, a valuable thing to learn.

As the algorithms got more and more difficult, the original class of 300 quickly weeded itself down to about 50 of the most dedicated computationalists.  About half the time I could not manage to get the right answer before the next week’s problem was given out, since the turnaround time for keypunching, submitting, keypunching corrections, resubmitting, … was very long.  And of course I had to take the bus to Oakland for each iteration of this process—no Internet then.  (This time at the keypunch was notable for another first in my life: there, I heard another keypuncher—a girl!—say the word ‘shit’ for the first time, yet another indicator of my country naiveté.)

But there was an out: we had been told that anyone who got the answer to the final assignment and, in the process, used the least amount of CPU time of any student (CPU time was a valuable commodity in those days, so everyone was encouraged to write ‘efficient’ programs) would get an automatic A for the course, never mind any former shortcomings.  Of course, since I was not going to school for credit, I didn’t have to care about that, but it seemed an interesting problem: calculate all the prime numbers between 1 and 10,000.

I split a full weekend among the Carnegie Library, the dining room table at our home in Greentree, and the keypunch room at the Cathedral of Learning, and I managed to get the second-best time of all, for which they generously awarded me an A-, in spite of my having correctly finished only about half of all the weekly assignments.
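For the curious, the classic, CPU-frugal way to attack that final problem is the sieve of Eratosthenes, sketched below in modern Fortran.  Whether my weekend program actually resembled it I can no longer say; the sketch is mine, not a reconstruction of the original deck of cards.

    ! Sieve of Eratosthenes: list every prime between 1 and 10,000 by
    ! striking out the multiples of each prime in turn.
    program primes_to_ten_thousand
      implicit none
      integer, parameter :: n = 10000
      logical :: is_prime(n)
      integer :: i, j
      is_prime = .true.
      is_prime(1) = .false.          ! 1 is not a prime
      do i = 2, int(sqrt(real(n)))
         if (is_prime(i)) then
            do j = i*i, n, i         ! cross off i*i, i*i+i, i*i+2i, ...
               is_prime(j) = .false.
            end do
         end if
      end do
      do i = 2, n
         if (is_prime(i)) print *, i
      end do
    end program primes_to_ten_thousand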

The long and the short of all this experimentation in the new field of computing was that I had learned firsthand that this interesting tool could be useful only at the margins of engineering until some group of programming engineers developed software designed specifically for engineering, in the same way that such a system of programs had already been developed for accounting.  The effort to develop this sort of software would be monumental, far too much for an individual to perform, ad hoc, along with his other work, no matter how talented.  I soon found out that just such an effort had been underway for some time at the Massachusetts Institute of Technology.

In MIT’s Civil Engineering Department a system of programs with the acronym ICES, the Integrated Civil Engineering System, had been in development and, though not much of it was ready at that time, part of it was.  One of those finished parts was termed the Structural Design Language.  It was of course cutely nicknamed STRUDL in the manner of students and educators everywhere.  (This tendency toward preciousness in invented terminology, rampant in academic circles, is exhibited most extravagantly in the field of subatomic particle physics: consider the names for the six flavors of quarks: up, down, charm, strange, top and bottom.)

STRUDL was available freely and it was designed for the IBM 360 family of computers.  (Not the last time that a virtual monopoly has provided substantial benefit for all but true believers in an alternative universe.)  The program didn’t require extravagant computer resources, so the company’s computer department was willing to add it to our system.  MIT snail-mailed us a large reel of tape containing the program.  Now we had software aimed specifically at structural design, software in which thought had clearly been given to the likes and dislikes of real-life engineers.  STRUDL is in fact, unlike an accounting system, more of a simplified computer language specific to structural design.  It provides great flexibility (it is still with us, though with different sponsors), and does so within a very broad problem space.

[Tool of Choice]

Before engineering computer systems were devised, only relatively simple structures could be engineered, only those for which the stresses could be reasonably calculated using assumptions that lent themselves to the simple—though surprisingly clever—design tools that we had, such as the slide rule.  Even the Empire State Building, perhaps the most grandiose structural design of its time, actually consists of a rather simple, if colossal, structure.  The price paid for this simplicity was that essentially all structures were rectilinear.  Today it is common to see curved and otherwise unusually shaped structures which can only be designed using computers.

I began to use STRUDL in the design of structural steel frameworks that were statically indeterminate.  Formerly we had used the Hardy Cross moment distribution method for structures of this type, an iterative procedure that lent itself to the capabilities of the slide rule; this had been the reigning technique since the mid-1900s, when iterative methods first began to be used routinely by structural engineers.  The limitation of moment distribution was that it was, in general, useful only for beams and columns, each a relatively simple planar structure.

Sometimes—though engineers tried to avoid this whenever possible—one needed to analyze the stresses in plates, sometimes plates of steel but, in our business, more commonly, plates of concrete—usually foundations that were required to support closely spaced loads, so close that individual foundation “pads” of concrete, much easier to design, would have run into each other.  STRUDL contains a feature called finite element analysis that permits one to do this difficult analysis on the computer, which is very good at manipulating thousands or millions of numbers.

But as I was to find out, not every engineer was as enamored of the new technology as I was.  To most, it seemed that the old familiar ways were still the most congenial—never mind that this led to designs that were more expensive to build.  Since every design had to be checked by another person, it was a tough sell at first to hand the checker a deck of cards and a printout from the computer instead of the normal design notes to which they had become accustomed over their engineering lifetimes.  Yet economics was on my side: usually we were in the engineering and construction business, not just the engineering business, and thus we often bore the costs of expensive designs and had to compete with other design-and-build companies.

Whatever engineers’ natural tendency toward conservatism, the camel’s nose was now in the tent, and progress in using computers would not be stopped.  Mainframe computers began to rule, and new engineering vistas beckoned to the young man from Naperville.