I built my first computer when I was sixteen, in my dad’s garage — and literally out of bits of wire.
As a budding mathematician in the early 1960s, I was fascinated with the new field of computing and the basic processes behind a computer’s operation. Deep down, most of what a computer does is add binary numbers (strings of 0s and 1s). This is accomplished through some very simple switches, known as gates. The “AND” gate gives an output of 1 if both its inputs are 1. The “OR” gate gives an output of 1 if either of its inputs is 1. And the “NOT” gate gives the opposite of its single input (1 for an input of 0, and 0 for an input of 1). Assemble these three gates in a particular way and you can add two binary digits. String these assemblies together and you can add binary numbers. And from there you can build a computer.
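That progression from gates to adders can be sketched in a few lines of modern Python. This is purely an illustration, of course; the function names and the least-significant-bit-first convention are my own choices, not anything from that garage machine.

```python
def AND(a, b): return a & b
def OR(a, b): return a | b
def NOT(a): return 1 - a

def half_adder(a, b):
    """Add two bits, giving a sum bit and a carry bit.
    The sum is an XOR built from the three basic gates:
    (a OR b) AND NOT (a AND b)."""
    s = AND(OR(a, b), NOT(AND(a, b)))
    return s, AND(a, b)

def full_adder(a, b, cin):
    """Chain two half adders to include a carry-in bit."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, OR(c1, c2)

def add_binary(xs, ys):
    """String full adders together to add two binary numbers,
    given as equal-length lists of bits, least significant first."""
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out
```

For example, `add_binary([1, 1, 0], [1, 0, 0])` adds 3 and 1 (bits written least significant first) and yields 4: the same computation, in miniature, that a row of relay coils on a board can perform.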
The first generation of computers, such as the one Alan Turing’s team built to break the Enigma code, used mechanical switches. My simple proof-of-concept computer likewise used mechanical switches — electromagnetic coils that flipped a switch whenever a current passed through them. My father was in the electrical cabling business, so I had all the wire I needed. I built a machine to wind the wire into coils, fixed the coils to a board, wired them up as AND, OR and NOT gates, and sent the output to a row of lights. And I had my first computer.
A few years later, while studying maths at Cambridge University, I got to experience a second generation computer — EDSAC2, one of the early experimental computers at the Cambridge Computing Lab. This generation of computers used electronic valves (otherwise known as vacuum tubes) for their switches. In 1965, I went along to its decommissioning and took away one of its trays of valves and other electronic components. This I hung proudly on the wall of my student room. Recently I read of a team who were trying to reconstruct EDSAC2 and were looking for anyone who might have information on how it worked. If I had kept that tray of electrical gear, it might have proved very useful. But sadly it was eventually thrown out.
In 1964 I got to work on a third generation computer. In these machines the switches were transistors, soldered together with capacitors and resistors on a logic board. Much smaller than EDSAC2’s trays, but still large enough to see the individual components. I had six months free before going up to University and took a job at what was then one of Britain’s major computing companies, Elliott Automation. They had an 8K machine — yes, 8K for everything. The cabinets were about the size of four household refrigerators, sitting in a special air-conditioned room.
One of my jobs was to boot up the machine every morning. The term “boot” comes from the expression “to lift oneself up by one’s own bootstraps,” which is effectively how you got a computer started. There was a row of buttons on the console, each representing a binary digit (a 0 or a 1). I had to punch in a binary number by hand, which instructed the computer to go to four lines of code that were hard-wired into it. These four lines were brilliant in their simplicity and power, reading in a series of binary numbers from a paper tape and storing them in memory. These numbers formed a simple program that read in more paper tape containing the basic operating system. When complete, control was handed over to the operating system. The computer had booted itself up.
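In outline, that two-stage process might be sketched like this. It is a toy simulation only; the memory layout, the tape contents and the returned entry point are all invented for illustration, not details of the Elliott machine.

```python
def boot(memory, stage1_tape, os_tape):
    """Toy two-stage bootstrap.
    Stage 1 stands in for the four hard-wired instructions:
    copy words from paper tape into consecutive memory locations."""
    for addr, word in enumerate(stage1_tape):
        memory[addr] = word
    # Stage 2: the just-loaded program reads the operating system
    # from a second tape into the rest of memory.
    base = len(stage1_tape)
    for addr, word in enumerate(os_tape):
        memory[base + addr] = word
    # Control now passes to the operating system.
    return base  # illustrative entry point of the loaded OS
```

The essential trick is the same at every scale: a tiny fixed loader pulls in a slightly bigger loader, which pulls in the real system — the machine lifting itself up by its own bootstraps.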
Some days, when the booting up failed, I had to call in a technician, who rummaged around in the cabinets and often found the fault was some insect that had crawled in overnight and shorted out one of the boards. There was, quite literally, a bug in the computer. Stories like this are often credited as the origin of the term.
My initial work at this company was actually on an analog computer, not a digital one. We hear nothing these days of analog machines, but back then they played an important role. Whereas digital computers work with digits — discrete bits of 0s and 1s — analog computers worked with continuously varying electric currents and were much better suited to solving differential equations, which occur in any system with changing variables. For example, the equation describing the path of a ball thrown through the air is a differential equation. It was very difficult for a digital computer of the day to calculate the smooth curve of its trajectory; it could take numerous iterations and then only come up with a good approximation. With an analog computer one builds an electrical circuit with resistors and capacitors that is analogous to the problem. Using a system rather like an old telephone switchboard, or a sound board in a recording studio, in which cables are plugged into sockets to link the components together, an analogous circuit is created. A current is fed in at one end, and the solution appears as a varying current at the other end — representing, for instance, the path of the ball. Or, more practically, whether a system is stable, and how long it may take for an oscillation to damp down. However, within a few years digital computers had progressed to the point where they could give acceptable, if approximate, solutions to differential equations, and the analog computer disappeared into history. But I always treasure the fact that the first computer I actually operated was an analog machine.
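Those “numerous iterations” of a digital solution can be seen in a simple sketch using Euler’s method on the thrown ball. The step size, gravity constant and launch values here are my own illustrative choices.

```python
def ball_trajectory(v0x, v0y, dt=0.01, g=9.81):
    """Numerically integrate the ball's differential equations of
    motion, one small time step at a time, until it lands."""
    x, y, vy = 0.0, 0.0, v0y
    path = [(0.0, 0.0)]
    while True:
        vy -= g * dt      # dv_y/dt = -g
        x += v0x * dt     # dx/dt = v_x (constant, no air resistance)
        y += vy * dt      # dy/dt = v_y
        if y < 0:
            break
        path.append((x, y))
    return path
```

Thrown at 10 m/s upward and 10 m/s forward, the ball stays aloft for about two seconds, so this sketch grinds through a couple of hundred iterations to approximate a parabola that an analog circuit would trace as one continuous, smooth sweep of current.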
Several years later I studied for a post-graduate degree in computer science at the Cambridge Computer Lab. By now there was another generation of digital computers, those based on integrated circuits — the first computer chips. We had one of the most powerful machines in the country at our disposal, the aptly named Titan, but still feeble by today’s standards — a mere 16K central processor. And we had hard drives: heavy, eighteen-inch-wide disks with 30 MB capacity that had to be hand-loaded into cabinets for reading and writing. Downstairs we had a new PDP-7. This was one of the first machines to have a visual display — a circular cathode ray tube in the front of the cabinet. The two were linked by a three-inch-thick cable (no match for today’s USB cables).
My thesis focused on the networking of the two computers and programming the visual display. It was entitled “The 2-D stereoscopic representation of the 3-D projection of rotation in four dimensions.” (In those days one could focus on almost any project, providing it was sufficiently complex in terms of the computing.)
As a teenager I had been fascinated by Edwin Abbott’s story of life in a two-dimensional world he called Flatland. A 3-D object passing through Flatland would be experienced as a series of slices — a sphere, for instance, would on first contact appear as a dot, which expanded into a circle, growing in size, then contracting again into a dot as the sphere passed through the plane of Flatland. A cube passing through corner-first would appear as a 2-D hexagonal slice — similar to its silhouette. If the cube was rotating, that slice would continuously change shape. I surmised that a 2-D creature in Flatland might get a hint of what the third dimension was like by observing the changing shape of the 2-D slices of the rotating cube. So I wondered if we might be able to get a hint of a fourth spatial dimension by observing how a rotating 4-D cube appeared in our 3-D world.
On the Titan machine upstairs I built a program that modeled a rotating 4-D cube and projected it down to 3-D. It then created two slightly different 2-D projections — one for the left eye, one for the right. This data was sent down the link to the PDP-7, which drew them out, side by side, on its screen. Then, with a mirror placed edge-on to the screen, I arranged for one image to go to the left eye and the other to the right, so that together they gave the appearance of a 3-D object twisting and morphing in space.
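The pipeline of rotate in 4-D, project to 3-D, then make two slightly offset 2-D views can be sketched in modern Python. The rotation plane, projection distances and eye separation below are my assumptions for illustration, not the actual parameters of the thesis.

```python
import math
from itertools import product

def tesseract_vertices():
    """The 16 vertices of a 4-D cube, at +/-1 in each coordinate."""
    return [list(v) for v in product((-1.0, 1.0), repeat=4)]

def rotate_xw(v, theta):
    """Rotate a 4-D point by theta in the x-w plane."""
    x, y, z, w = v
    c, s = math.cos(theta), math.sin(theta)
    return [c * x - s * w, y, z, s * x + c * w]

def project_4d_to_3d(v, d=3.0):
    """Perspective projection: scale by distance along the w axis."""
    x, y, z, w = v
    f = d / (d - w)
    return [f * x, f * y, f * z]

def project_3d_to_2d(p, eye_x, d=5.0):
    """Project to 2-D from an eye offset along x (one of the stereo pair)."""
    x, y, z = p
    f = d / (d - z)
    return [f * (x - eye_x), f * y]

def stereo_frame(theta, eye_sep=0.2):
    """One animation frame: both eyes' 2-D images of the rotating cube."""
    pts3 = [project_4d_to_3d(rotate_xw(v, theta)) for v in tesseract_vertices()]
    left = [project_3d_to_2d(p, -eye_sep / 2) for p in pts3]
    right = [project_3d_to_2d(p, eye_sep / 2) for p in pts3]
    return left, right
```

Drawing successive frames for increasing theta gives the twisting, morphing shape described above; the small horizontal offset between the two images is what the mirror arrangement turned into the illusion of depth.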
The results were fascinating to watch, but sadly revealed no great insight into the fourth dimension. I did, however, have a lot of fun. I wrote a fascinating thesis for my professor. And in passing I created what was probably the first ever virtual reality set-up.
Moreover, working on the linking of two computers led me to see that the future of computing lay in their global networking, which gave rise to the basic ideas for my book The Global Brain.
My next computer, the first I ever owned, was an Apple IIe.