Why Some Old Computers are Interesting

Surely there is nothing more useless than an old computer? Progress is so swift that a newly bought machine is out of date by the time it is delivered, a two year old machine is obsolete and after five years a computer has no residual value...

Well ... yes and no. It is true that the capacity and performance of computing equipment has improved exponentially since (at least) the first stored program computers appeared circa 1948 (http://en.wikipedia.org/wiki/Manchester_Mark_I). The price/performance ratio has improved even more dramatically.

Just how fast the improvement has been is not easy to say. "Very fast indeed" is a fair summary, though. Since the dawn of integrated circuit technology, "Moore's Law" (http://en.wikipedia.org/wiki/Moore's_law) has held pretty much true - the number of transistors that can feasibly be put on a die has doubled about every two years. The "rule of thumb" version of Moore's Law - that performance doubles every 18 months, or grows by a factor of 10 every 5 years - is rather too optimistic, although for some specific microprocessor families, over relatively short time spans, even the "rule of thumb" version has held true.
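The gap between the two versions is easy to check with a little arithmetic - doubling every 24 months compounds to a factor of about 5.7 over five years, while doubling every 18 months compounds to almost exactly the "factor of 10 every 5 years" of the rule of thumb:

```python
# Growth factor over a 5-year span (60 months) for a given doubling period.
def growth(doubling_months, span_months=60):
    return 2 ** (span_months / doubling_months)

print(round(growth(24), 2))  # Moore's Law proper (doubling every 2 years) -> 5.66
print(round(growth(18), 2))  # "rule of thumb" (doubling every 18 months) -> 10.08
```

So the "18 months" and "factor of 10 in 5 years" formulations are mutually consistent - it is the real-world transistor counts that lag them.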

However, while today's microprocessors are miracles of engineering, available at an almost ridiculously low price, the greater part of the performance increase is attributable to semiconductor process technology rather than to architectural innovation.

With the exception of the Intel Itanium family, all of the architectural features that contribute to the performance of today's microprocessors first appeared (and were pretty fully explored) in a series of "mainframe" computers designed between the late 1950's and 1975.

It is true that in this period, no single machine combined all the good features, but by the late 1970's there were several examples of "all the good stuff in the same place" - with the exception of vector processing (which is somewhat relevant to today's mainstream microprocessor families in the form of SSEn and similar instruction set extensions).

The architectural history of the microprocessor is very largely the recapitulation of the architectural history of the mainframe - with the benefits of not being the first!

The chief architect of the CDC 6600, CDC 7600 and the Cray-1 was Seymour Cray. His designs were the fastest general purpose computers in the world for at least 20 years - from 1964 to 1984. The 7600 will probably forever hold the record for being the fastest available general purpose computer for a longer time than any other machine type - 1969 to 1975.

Seymour Cray - chief architect of the CDC 6600 and its descendants. James Thornton was responsible for much of the detailed design of the 6600.

The design of the CDC 6600 and its later developments is particularly interesting because it was so innovative. Some of its innovations do not survive in any modern machine - especially the "peripheral processor" (PP) concept. This was a group of 10 or more "small" general purpose computers that performed all I/O functions for the machine, and ran most of the operating system, leaving the "central processor" (CP) almost entirely free for user work. (Interestingly, later machines and operating system releases moved more operating system functions back to the CP.) The PPs were implemented using a single arithmetic and logic unit multiplexed between 10 (or more) memory and register sets - which is why the 6600 can claim to be the first system with hardware support for multi-threading.
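In today's terminology this arrangement is a "barrel processor": one execution unit time-sliced, in strict rotation, over several independent register sets. A minimal sketch of the idea - the context layout and the trivial "accumulate" workload here are my own invention for illustration, not the real PP instruction set:

```python
# Sketch of a barrel processor: one shared execution unit rotated over
# several independent register contexts, one slot each per "minor cycle".

class Context:
    def __init__(self, pp_id):
        self.pp_id = pp_id
        self.acc = 0        # each PP has its own accumulator ...
        self.pc = 0         # ... and its own program counter

def barrel_step(contexts, shared_alu_add):
    # One full rotation of the barrel: every context gets exactly one
    # slot in the single ALU, in fixed order - no context can starve.
    for ctx in contexts:
        ctx.acc = shared_alu_add(ctx.acc, ctx.pp_id)
        ctx.pc += 1

contexts = [Context(i) for i in range(10)]   # ten PPs, as on the 6600
for _ in range(3):                           # three full rotations
    barrel_step(contexts, lambda a, b: a + b)

print([c.acc for c in contexts])
# -> [0, 3, 6, 9, 12, 15, 18, 21, 24, 27] - each PP's state advanced
#    independently, though only one ALU ever did any work
```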

A CDC 6600 Installation

Admittedly, the 6600 wasn't perfect. It didn't have general purpose registers. Instead, it had separate register banks for data (60 bits wide) and for addresses and short integers (both 18 bits wide). While it was IMHO undoubtedly the first RISC machine, its load-store instructions look "weird" from today's perspective. Setting an address in any of address registers A1 to A5 caused data to be loaded from central memory into the "partner" data register, X1 to X5. Likewise, setting an address in address register A6 or A7 caused the contents of data register X6 or X7 to be stored in central memory. Personally, I find this rather elegant!
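In modern terms, the scheme amounts to a register file where assignment has side effects. A toy model of the coupling (the class name and the memory contents are invented for illustration - real central memory addressing was of course rather more involved):

```python
# Toy model of the 6600's A/X register coupling: writing an address into
# A1-A5 loads the partner X register from memory; writing an address
# into A6 or A7 stores the partner X register to memory.

class Cpu6600:
    def __init__(self, memory):
        self.mem = memory          # "central memory" - a plain list here
        self.A = [0] * 8           # 18-bit address registers
        self.X = [0] * 8           # 60-bit data registers

    def set_A(self, n, addr):
        self.A[n] = addr
        if 1 <= n <= 5:            # A1-A5: setting the address IS the load
            self.X[n] = self.mem[addr]
        elif n in (6, 7):          # A6-A7: setting the address IS the store
            self.mem[addr] = self.X[n]

mem = [100, 200, 300, 400]
cpu = Cpu6600(mem)
cpu.set_A(1, 2)        # load: X1 <- mem[2]
print(cpu.X[1])        # -> 300
cpu.X[6] = 999
cpu.set_A(6, 0)        # store: mem[0] <- X6
print(mem[0])          # -> 999
```

One consequence, visible even in the toy version, is that a load or store never needs a dedicated opcode - it falls out of ordinary register-set instructions.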

Less elegant was the one's-complement integer data format and, arguably, the floating point number format. And, of course, it didn't have virtual memory - although with enough real memory (as in some of today's systems) that might not be a disadvantage ... :-) It is also difficult to realize today that the 8-bit byte has not been there forever ... 60-bit words were once a perfectly reasonable design choice. But not today.

Because of the things that don't survive in today's machines, and because it was so successful in meeting its goals (primarily performance), the 6600 to some extent represents a "parallel universe alternative" - a direction in which things perhaps could have gone, but didn't. This shouldn't be taken too seriously. But the PP concept - offloading the OS from the main CPU - is still a very interesting idea, and one which is unlikely to be explored in machines of the foreseeable future.

One reason why not is that today's "mainstream" operating systems have matured on very different hardware - what is now "mainstream" hardware, of course. Hardware that seen from a sufficient distance clearly owes more to IBM's System/360 than to anything else. The core architecture of Unix and Windows relies on CPU facilities such as two (or more) CPU privilege modes, privileged CPU instructions, and a particular model for memory protection and system calls. Not to mention 8-bit bytes ... Hardware which is designed to run almost all of the operating system on a set of processors other than (and quite different from) the main CPU is too radically different to be compatible with today's mainstream OS's.

Which brings us to the software that runs on "old computers". Surely that must be absurdly primitive? After all, today's pocket calculators have more memory and performance, don't they? Well, not performance ... Memory ... well, maybe ... :-)

In fact, the operating systems that run on "old computers" are surprisingly sophisticated and complex. As an example, CTSS in 1962 - the first "modern" operating system and the source (directly and indirectly) of so much that has followed - had proper memory protection that wasn't equalled on Microsoft Windows prior to NT ... Nor on Apple's operating systems prior to OS X, come to that.

I don't know much about the IBM operating systems (MVS, VM/CMS, etc. - still "thankfully with us" :-)) and the "old computers" I'm thinking about here are mainframes rather than minis, so I am really talking about CDC NOS in what follows.

Of course, there are no GUIs, and no IP based networks (not entirely true - late versions of CDC NOS do support FTP and similar tools). But there are compilers, librarians, linkers/loaders, command languages, interactive timesharing (as well as batch), debuggers, memory protection ... a complete and highly effective environment for developing and running software. Just not software written in C and C++ that takes a POSIX environment for granted...

Interactive timesharing session on CDC NOS

And, thankfully, one other thing that isn't there is bloatware. Having only a megabyte or so of directly addressable "RAM" concentrates the mind wonderfully!

Apart from raw performance, this lack of "RAM" is, of course, the biggest "shock" when you come to using these machines. For someone essentially brought up with virtual memory (I did use a PDP-11/10 with 56KBytes of "RAM" for a few years - but that was a very long time ago) this is the clearest limitation on what can be done. True - there is "Extended Core Storage" - fast memory suited for bulk data transfers to and from central memory. This gives something roughly equivalent to 16MBytes. But if you put a variable in ECS, you can't use that variable in any kind of expression - you must move it to a CM variable and then use that.
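The restriction can be mimicked in any modern language by treating the bulk store as something you may only block-copy to and from, never compute with directly. A toy model - the class and method names are my own invention, not CDC's actual ECS interface:

```python
# Toy model of the CM/ECS split: ECS holds bulk data but cannot appear
# in expressions - it must be staged through "central memory" first.

class Ecs:
    """Bulk store: supports only block copy in and out, no arithmetic."""
    def __init__(self, words):
        self._words = list(words)

    def read_block(self, start, count):
        return self._words[start:start + count]      # ECS -> CM transfer

    def write_block(self, start, data):
        self._words[start:start + len(data)] = data  # CM -> ECS transfer

ecs = Ecs(range(1000))
cm_buffer = ecs.read_block(10, 4)   # stage into "central memory" first ...
total = sum(cm_buffer)              # ... only then is arithmetic possible
print(total)                        # -> 10 + 11 + 12 + 13 = 46
```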

The lack of memory isn't that big a problem for code size. With overlays and enough skill, there probably aren't many programs whose executable code couldn't be made to fit. But for data it is the key limiting factor to what can really be done. True - you could try to devise "out-of-core" algorithms to do whatever it is you are trying to do. But (even if you are clever enough to do that) there is a cost in terms of performance. Disk accesses are not free...
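The out-of-core pattern itself is simple enough: stream the data through a buffer that fits in memory, and pay for it in disk traffic. A minimal sketch - the file name, record format and buffer size are invented for illustration:

```python
# Sketch of an out-of-core reduction: sum a dataset larger than the
# available "central memory" by streaming it through a small buffer.
import os
import struct
import tempfile

# Create a sample data file of 10,000 64-bit integers (0..9999).
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    for i in range(10_000):
        f.write(struct.pack("<q", i))

# Process it 512 records at a time - the working set stays tiny,
# but every pass over the data costs real disk I/O.
total = 0
with open(path, "rb") as f:
    while chunk := f.read(512 * 8):
        total += sum(struct.unpack(f"<{len(chunk) // 8}q", chunk))

print(total)   # -> 49995000, the same answer as an in-memory sum
```

The cost shows up as soon as the algorithm needs more than one pass, or random access - exactly the "disk accesses are not free" point above.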

One real surprise about NOS - which seems to be true of IBM's mainframe operating systems too - is how few files are used to contain it. Instead, a single disk (or tape) file contains the system library, the members of which are the various components of the operating system. Library tools are used to maintain the OS. This is very different from the "50,000 files" paradigm of Unix - and, to a slightly lesser extent, NT and VMS. Very tidy.

For an example of a newly written application that runs on NOS, take a look at this page.

So, where can you find working old mainframe computers? The most practical way of "running" an "old mainframe" is often in the form of a software emulation running on a PC. The emulator creates a faithful model of the real hardware that is sufficiently accurate to run the full scale operating systems for the machine - booting from virtual tapes or disks.

For CDC machines, there is Tom Hunter's Desktop Cyber emulator, as well as Kent Olsen's VIMs.

Operator Console running NOS 1.4 on Desktop Cyber.

One problem is where to get an operating system from. Unfortunately, the main CDC operating systems are not freely available (e.g. in the public domain) - except for the very early COS (circa 1965, and reputedly written largely by Seymour Cray). This may one day change. For now, if you don't have a tape with a NOS (or other CDC OS) deadstart image on it, COS is the only available system. Unfortunately it has many limitations. Please see Tom Hunter's pages for more information.

An Old and Expensive Computer Clock!

For IBM machines, there is the Hercules emulator. This is an excellent emulator capable of running virtually all IBM mainframe operating systems, including those for the latest 64-bit zSeries machines. There are versions of MVS and VM/CMS from the late 1970's that are freely available for this platform. There is quite a large community of Hercules users, I believe.

Windows XP running VMware running Linux running Hercules running VM/CMS! Full size image

For a wide variety of minicomputers and superminicomputers - ranging up to a VAX emulation capable of running OpenVMS - there is SIMH, available from http://simh.trailing-edge.com/. This is also very impressive stuff, and a variety of operating systems are available for it. OpenVMS is also freely available under the Hobbyist Program: http://www.openvmshobbyist.org/

A Battered Dual 800MHz P3 Machine used to host the Cyber Emulator

What sort of performance can you expect to get out of an emulated machine? Well, the CDC machines are a particularly difficult case: the PPs mean that the emulation has to model an assembly of 11 processors instead of 1. Even so, the performance I have measured isn't bad. The emulated machine runs about 350 times slower than the host machine - which can still be faster than the original hardware. For example, I run DtCyber on a dual 800MHz Intel P3 machine (the "dual" bit doesn't help much, apart from keeping the GUI for the operator's console out of the way) and get about twice the performance of the real Cyber 173 that is being emulated. That is exponential performance improvement over 30+ years in action! The latest PCs based on fast AMD or Intel chips will give you 10 times a Cyber 173 or more. (The Cyber 173, introduced in 1973, was largely a later implementation of the CDC 6400 using SSI ICs in place of discrete transistors and semiconductor RAM in place of magnetic core memory. The 6400 was a simplified version of the 6600 which lacked the multiple functional units and the out-of-order instruction-level parallelism they allowed. In emulation, the micro-architectural differences no longer matter.)

Dual CDC Cyber 175 machines at the Leibniz Rechenzentrum in Munich (1979)

With emulated machines, it is possible to use the facilities of the host operating system to make access to the emulated mainframe quite convenient. For example, the emulated Cyber 173 has an emulated card reader, card punch and line printer. It is pretty easy to make small modifications to the emulator to "watch" a directory, and when a file appears there, to "load" its contents into the virtual card reader. Likewise, it is easy to arrange for each print job to go into a separate file, so that an ancillary program can watch the line printer output directory and rename each output file to reflect the name of the job that created it. Without very much effort, one can hack up a GUI development environment (of sorts) for the beast! Here is a picture of that GUI in action:
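The "watch a directory" half of the arrangement needs nothing more than polling. A minimal sketch of the pattern - the paths and the load_cards callback are invented for illustration; the real tooling described here is of course more elaborate:

```python
# Sketch of a polling directory watcher: any file that appears in the
# watched directory is handed to a callback standing in for the
# emulator's virtual card reader.
import os
import tempfile
import time

def watch_jobs(directory, load_cards, poll_seconds=1.0, max_polls=None):
    seen = set()                       # files already submitted
    polls = 0
    while max_polls is None or polls < max_polls:
        polls += 1
        for name in sorted(os.listdir(directory)):
            if name not in seen:
                seen.add(name)
                with open(os.path.join(directory, name), "rb") as f:
                    load_cards(f.read())   # "load" the virtual card reader
        time.sleep(poll_seconds)

# Demo: drop one "card deck" into a watched directory and run one poll.
jobs_dir = tempfile.mkdtemp()
with open(os.path.join(jobs_dir, "job1.txt"), "wb") as f:
    f.write(b"JOB1.\n")
decks = []
watch_jobs(jobs_dir, decks.append, poll_seconds=0, max_polls=1)
print(decks)   # -> [b'JOB1.\n']
```

The print-spool half is the same idea in reverse: watch the emulator's output directory and rename or move files as they appear.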

Simple GUI Interface to an Emulated Cyber Mainframe. Full size image

This gives a crude (but quite effective) "drag-and-drop" interface to a 30-year-old machine! This page tells you more, and also has the software available for download. (The CyberClient GUI is genuinely free software - no strings attached at all. Feel free to do anything you like with it - apart from complain about it to its author! Desktop Cyber is Copyright (C) Tom Hunter. It can be used free of charge, but please see the license for details.)

For more information about CDC mainframes, try these pages:

Go to HCCC Home