Archive for the 'Science/Technology' Category

My First Computer

Wednesday, August 30th, 2017

I built my first computer when I was sixteen, in my dad’s garage—and literally out of bits of wire.

As a budding mathematician in the early 1960s, I was fascinated by the new field of computing and the basic processes behind how computers operate. Deep down, most of what a computer does is add binary numbers (strings of 0s and 1s). This is accomplished through some very simple and basic switches, known as gates. The “AND” gate gives an output of 1 if both of its inputs are 1. The “OR” gate gives an output of 1 if either of its inputs is 1. And the “NOT” gate gives the opposite of its single input (1 for an input of 0, and 0 for an input of 1). Assemble these three gates in a particular way and you can add two binary digits. String these assemblies together and you can add binary numbers. And from there you can build a computer.
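For anyone who likes to see the wiring spelled out, here is a minimal sketch in Python (the function names are mine, purely for illustration) of how AND, OR and NOT combine into the kind of adder I built:

```python
# A minimal sketch of the gates described above and how they combine into an adder.
# Function names are illustrative, not taken from any real machine.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# XOR built from AND, OR and NOT: output is 1 when exactly one input is 1.
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

# Half adder: adds two binary digits, giving a sum bit and a carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)          # (sum, carry)

# Full adder: adds two digits plus the carry from the previous column.
def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)                # (sum, carry_out)

# String full adders together to add two binary numbers, least significant bit first.
def add_binary(x_bits, y_bits):
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

print(add_binary([1, 0, 1], [1, 1, 0]))  # 5 + 3 -> [0, 0, 0, 1], i.e. binary 1000 = 8
```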

The first generation of computers, such as the one Alan Turing’s team built to break the Enigma code, used mechanical switches. My simple proof-of-concept computer likewise used mechanical switches—electromagnet coils which flipped a switch when a current passed through them. My father was in the electrical cabling business, so I had all the wire I needed. I built a machine to wind the wire into coils, fixed them to a board, wired them up as AND, OR and NOT gates, and sent the output to a row of lights. And I had my first computer.

A few years later, while studying maths at Cambridge University, I got to experience a second generation computer—EDSAC2, one of the early experimental computers at the Cambridge Computing Lab. This generation of computers used electronic valves (otherwise known as vacuum tubes) for their switches. In 1965, I went along to its decommissioning and took away one of its trays of valves and other electronic components. This I hung proudly on the wall of my student room. Recently I read of a team who were trying to reconstruct the EDSAC2 and were looking for anyone who might have information on how it worked. If I had kept that tray of electrical gear it might have proved very useful. But sadly it was eventually thrown out.

In 1964 I got to work on a third generation computer. In these machines the switches were transistors, soldered together with capacitors and resistors on a logic board. Much smaller than EDSAC2’s trays, but still large enough to see the individual components. I had six months free before going up to University and took a job at what was then one of Britain’s major computing companies, Elliott Automation. They had an 8K machine—yes, 8K for everything. The cabinets were about the size of four household refrigerators, sitting in a special air-conditioned room.

One of my jobs was to boot up the machine every morning. The term “boot” comes from the expression “to lift oneself up by one’s own bootstraps,” which is effectively how you got a computer started. There was a row of buttons on the console, each representing a binary digit (a 0 or a 1). I had to punch in a binary number by hand, which instructed the computer to go to four lines of code that were hard-wired into it. These four lines were brilliant in their simplicity and power, reading in a series of binary numbers from a paper tape and storing them in memory. These numbers formed a simple program that read in more paper tape containing the basic operating system. When complete, control was handed over to the operating system. The computer had booted itself up.
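As a rough illustration of the idea, here is a toy sketch in Python—not the actual Elliott code, which is long lost—of the two-stage process:

```python
# A toy sketch of the bootstrap idea, not the actual Elliott machine code.
# Memory is a list of words; each "tape" is just a list of numbers.

memory = [0] * 64

def read_tape_into_memory(tape, start):
    """Copy words from a paper tape into memory, returning the address loaded."""
    for offset, word in enumerate(tape):
        memory[start + offset] = word
    return start

# Stage 1: the few hard-wired instructions read a short loader program from tape.
loader_program = [7, 7, 7, 7]          # stand-in for the four real instructions
loader_at = read_tape_into_memory(loader_program, start=0)

# Stage 2: that loader program (simulated here by the same routine) reads in the
# operating system from a second tape, then hands control over to it.
os_program = [42] * 8                  # stand-in for the operating system tape
os_at = read_tape_into_memory(os_program, start=4)

print("Booted: operating system resident at address", os_at)
```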

Some days the booting up failed, and I had to call in a technician, who would rummage around in the cabinets and often find that the fault was some insect that had crawled in overnight and shorted out one of the boards. There was a bug in the computer. Yes, that’s where the term came from.

My initial work at this company was actually on an analog computer, not a digital one. We hear nothing these days of analog machines, but back then they played an important role. Whereas digital computers work with digits—discrete bits of 0s and 1s—analog computers worked with continuously varying electric currents, and were much better suited to solving differential equations, which arise in any system with changing variables. For example, the equation describing the path of a ball thrown through the air is a differential equation. A digital computer cannot calculate the smooth curve of the trajectory directly; it has to step through numerous iterations and even then only arrives at a good approximation. With an analog computer, one instead builds an electrical circuit of resistors and capacitors that is analogous to the problem, using a system rather like an old telephone switchboard, or a mixing board in a recording studio, in which cables are plugged into sockets to link the components together. A current is fed in at one end, and the solution appears as a varying current at the other end—representing, for instance, the path of the ball. Or, more practically, whether a system is stable, and how long it may take for an oscillation to damp down. However, within a few years digital computers had progressed to the point where they could give acceptable, if approximate, solutions to differential equations, and the analog computer disappeared into history. But I always treasure the fact that the first computer I actually operated was an analog machine.
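For contrast, here is roughly what the digital approach that eventually won out amounts to: stepping the differential equation forward in small increments and settling for an approximation. This is a minimal sketch; the numbers and step size are illustrative.

```python
# Integrate the thrown ball's equations of motion step by step (Euler's method).
# All values are illustrative.

g = 9.81             # gravitational acceleration, m/s^2
dt = 0.01            # time step -- smaller steps give a better approximation
x, y = 0.0, 0.0      # position, m
vx, vy = 10.0, 10.0  # initial velocity, m/s

trajectory = []
while y >= 0.0:
    trajectory.append((x, y))
    x += vx * dt
    y += vy * dt
    vy -= g * dt     # the differential equation: d(vy)/dt = -g

print(f"ball lands after about {len(trajectory) * dt:.2f} s, "
      f"{x:.1f} m downrange ({len(trajectory)} iterations)")
```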

Several years later I studied for a post-graduate degree in computer science at the Cambridge Computer Lab. By now there was another generation of digital computers, based on integrated circuits—the first computer chips. We had one of the most powerful machines in the country at our disposal, the aptly named Titan—though still feeble by today’s standards, with a mere 16K central processor. And we had hard drives: heavy, eighteen-inch-wide disks with 30 MB capacity that had to be hand-loaded into cabinets for reading and writing. Downstairs we had a new PDP7, one of the first machines to have a visual display—a circular cathode ray tube in the front of the cabinet. The two were linked by a 3-inch-thick cable (no match for today’s USB cables).

My thesis focused on the networking of the two computers and programming the visual display. It was entitled “The 2-D stereoscopic representation of the 3-D projection of rotation in four dimensions.” (In those days one could focus on almost any project providing it was sufficiently complex in terms of the computing.)

As a teenager I had been fascinated by Edwin Abbott’s story of life in a two-dimensional world he called Flatland. A 3-D object passing through Flatland would be experienced as a series of slices—a sphere for instance would on first contact appear as a dot, which expanded into a circle, growing in size, then contracting again into a dot as the sphere passed through the plane of Flatland. Or a cube would appear as a 2-D hexagonal slice—similar to its silhouette. If the cube was rotating, then that slice would continuously change shape. I surmised that a 2-D creature in Flatland might get a hint of what the third dimension was like by observing the changing shape of the 2-D slices of the rotating cube. So I wondered if we might be able to get a hint of a fourth spatial dimension by observing how a rotating 4-D cube appeared in our 3-D world.

On the Titan machine upstairs I built a program that modeled a rotating 4-D cube. Projected this down to 3-D. Then created two slightly different 2-D projections—one for the left eye, one for the right. This data was sent down the link to the PDP7, which drew the two images out, side by side, on its screen. Then, with a mirror placed edge-on to the screen, I arranged for one image to go to the left eye and the other to the right, so that together they gave the appearance of a 3-D object twisting and morphing in space.
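The exact code is long gone, but in modern terms the idea was roughly this (a sketch using NumPy; the rotation plane, viewing distances and eye offset are all illustrative, not the values I used on the Titan):

```python
import numpy as np

# Sketch of the Titan/PDP7 idea: rotate a 4-D cube, project it to 3-D,
# then make a left-eye and a right-eye 2-D projection.

# The 16 vertices of a 4-D cube (tesseract): every combination of +/-1.
vertices = np.array([[x, y, z, w] for x in (-1, 1) for y in (-1, 1)
                                  for z in (-1, 1) for w in (-1, 1)], dtype=float)

def rotate_xw(points, angle):
    """Rotate in the x-w plane -- a rotation with no 3-D counterpart."""
    c, s = np.cos(angle), np.sin(angle)
    r = np.eye(4)
    r[0, 0], r[0, 3], r[3, 0], r[3, 3] = c, -s, s, c
    return points @ r.T

def project_4d_to_3d(points, distance=3.0):
    """Perspective projection: scale x, y, z by how far each point is in w."""
    scale = distance / (distance - points[:, 3])
    return points[:, :3] * scale[:, None]

def project_3d_to_2d(points, eye_x, distance=5.0):
    """Perspective projection onto a 2-D screen, as seen from an eye offset in x."""
    shifted = points - np.array([eye_x, 0.0, 0.0])
    scale = distance / (distance - shifted[:, 2])
    return shifted[:, :2] * scale[:, None]

p3 = project_4d_to_3d(rotate_xw(vertices, angle=0.3))
left_image = project_3d_to_2d(p3, eye_x=-0.1)    # drawn on the left of the screen
right_image = project_3d_to_2d(p3, eye_x=+0.1)   # drawn on the right
print(left_image.shape, right_image.shape)       # (16, 2) each
```

Step the rotation angle frame by frame and redraw, and the stereo pair appears to twist and morph as described above.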

The results were fascinating to watch, but sadly revealed no great insight into the fourth dimension. I did however have a lot of fun. Wrote a fascinating thesis for my professor. And in passing created what was probably the first ever virtual reality set-up.

Moreover, working on the linking of two computers led me to see that the future of computing lay in their global networking, which gave rise to the basic ideas for my book The Global Brain.

My next computer, the first I ever owned, was an Apple 2E.

Get Real Ray

Thursday, June 29th, 2017

A Critique of Ray Kurzweil’s Predictions

Ray Kurzweil recently announced his year-by-year predictions of the future. Here are just a few samples (full list here):

2020 – Personal computers reach a computing power comparable to the human brain.

2025 – The emergence of mass-market human implants.

2031 – 3D printed human organs used in hospitals at all levels.

2041 – Internet bandwidth will be 500 million times more than today.

2045 – The earth will turn into one giant computer.

2099 – The technological singularity extends to the entire Universe.

I’m not sure exactly what this last item means: how it fits with Einstein’s Special Theory of Relativity limiting communication to the speed of light; whether he means the entire observable universe, some 46 billion light years in radius, or the possibly infinite universe beyond that; and why other advanced civilizations haven’t already triggered this. But I’m sure a mind like his has thought all that through.

Here I would like to point out a more down-to-Earth shortcoming of this genre of utopian technological futurism. It assumes business as usual in terms of scientific and technological progress, and generally fails to take into account the very real crises facing humanity in the coming years.

To sober ourselves up from Kurzweil’s lofty predictions, let us consider some of the challenges and their potential impact.

At the forefront is climate change. It is happening much faster than most scenarios predicted, and given the potential for runaway climate change once the tundra thaws, we could be witnessing some devastating consequences in coming years: major crop failures and famine, extreme weather events, millions dying of heat stroke, massive migration. These and other potential impacts will send shockwaves through our already vulnerable economic and social systems.

It is assumed that the Internet will remain functional, but as cyber-weapons get more powerful, and cyber-criminals more sophisticated, it is very possible that current attacks will escalate into widespread infrastructure shutdowns. Given how totally dependent we are upon the net for commerce, banking, science, technology and almost every other segment of society, that could be a catastrophe. Indeed, a widespread failure of the electrical grid lasting more than a few days would lead to a breakdown of society from which it would be difficult to recover.

The global economy is shaky, to say the least. Ever-deepening national debts, stock market bubbles and banking crises promote the likelihood of widespread global recession and possible collapse of currencies. Not the best environment for high-tech venture capitalists.

Terrorism cannot be ignored either. Previous terrorist movements had a goal in mind—reunification of Ireland, Algerian independence—and were open to political settlement. But those fomented by Islamist movements have deeper ideological goals which cannot be satisfied through talks or mediation. Current approaches to dealing with them only fuel the flames. They are probably here to stay in the medium term, and with their growing resourcefulness could have unforeseen impacts on the economy and social stability of many nations.

Nuclear war, deliberate or accidental, remains a distinct possibility. So do global epidemics of drug-resistant bacteria and viruses.

These are just a few of the scenarios that could derail the technological dream. The financial and social investment it requires assumes a relatively stable society. If things start falling apart, the progress Kurzweil and others foresee will begin to splutter.

Some blindly assume that artificial intelligences far surpassing human intelligence will be able to solve these various problems, and that the steady march of progress will continue unabated. It is possible that advanced AI may help solve some of them, but we cannot count on it, and certainly cannot count on it resolving all of them.

Furthermore, although we may play down the likelihood of any one scenario, the chances of one or another of them happening remain high. If there are, say, ten independent scenarios, each with only a ten percent chance of happening, then the likelihood of at least one of them occurring is about sixty-five percent. And several of the above, particularly major climate change, have a much higher likelihood than ten percent.
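The arithmetic, for the record, under the simplifying assumption that the scenarios are independent:

```python
# Probability that at least one of ten independent scenarios occurs,
# if each has a ten percent chance on its own.
p_none = (1 - 0.10) ** 10                      # all ten fail to happen
print(f"P(at least one) = {1 - p_none:.2f}")   # about 0.65
```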

Moreover, there is another factor that needs to be taken into account: the stress of accelerating development. Stress can be loosely defined as the inability to adapt to change. Many of us can feel this in our own lives: the promised freedom offered by information technology seems only to have filled our days with more things to take care of, with less time and greater urgency, leading to fatigue and burnout. At the other extreme, climate change can be seen as a consequence of accelerating development: the exponential increase in the use of fossil fuels has produced far more carbon dioxide than the atmosphere can absorb, putting the climate under stress in ways that are becoming all too apparent.

The advances that Kurzweil foresees will undoubtedly continue to accelerate the rate of development. Indeed, that is one of the fundamental tenets of his vision: what he calls the law of accelerating returns. As the rate of development continues to speed up at an ever-dizzying pace, the stress on all the systems involved—personal, social, economic, geo-political, environmental—will rapidly increase. And increasing stress in a system eventually leads to breakdown and collapse.

So accelerating change may not be such a beneficial trend after all. It could well bring about our ultimate demise. (For more on this, see my essay Blind Spot: The Unforeseen End of Accelerating Change.)

We will, I suspect, see a number of Kurzweil’s technological predictions coming true—although perhaps not as speedily as he envisions—but they will almost certainly be occurring in a world that is dealing with the consequences of major catastrophes. How this will play out I don’t know. But it would serve the likes of Kurzweil well to include this level of realism, and the all-too-likely prospect of economic, social and environmental chaos, in their predictions.

Big Boat, Big Brother

Wednesday, July 31st, 2013

I recently took a cruise, the first one in my life. It’s never been something I particularly aspired to, but I was invited to give the keynote at a conference being held on a cruise, so found myself with the opportunity to experience this aspect of our culture.

It turned out to be the largest cruise ship in the world, The Oasis of the Seas: a 17-deck mini-city with 6,300 passengers and 2,300 crew. A floating entertainment, eating, drinking, sports, gambling, shopping palace. Five-star samsara.

You can get a sense of how big it is by looking at the size of the people alongside the ship…

[Photo: the Oasis of the Seas docked in Haiti, with people alongside the ship for scale]

I can’t imagine its ecological footprint. But I doubt many there actually cared; they were having too much fun.

I was fascinated by how every little detail was taken care of, down to the glass plaques in the elevator floors reminding you what day of the week it was.

One thing in particular caught my attention. Cruise photographers roamed the ship taking pictures of guests at events, having dinner, partying. Later you could go to a touch screen, enter your cabin number, and see all the photos taken of you. You could then compile them into a DVD for yourself (for a nice price, of course). The photographers never asked your name or cabin number. But they did not need to. Before you embarked, a photo of you was taken for ID and stored in the central computer. Face recognition software then made it easy to automatically match your picture with your name.
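The matching involved is, at heart, nothing more exotic than comparing numbers. Here is a minimal sketch of that comparison step, assuming the recognition software has already reduced each face to an embedding vector (that step is not shown, and the data and names are purely illustrative):

```python
import numpy as np

# Toy sketch of matching an event photo to the embarkation ID photos.
# Assumes each face has already been turned into an embedding vector.

rng = np.random.default_rng(0)
id_embeddings = {                    # cabin number -> embedding from embarkation photo
    "A101": rng.normal(size=128),
    "B202": rng.normal(size=128),
    "C303": rng.normal(size=128),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_cabin(photo_embedding, threshold=0.5):
    """Return the cabin whose ID photo best matches, if it is similar enough."""
    best_cabin, best_score = None, -1.0
    for cabin, emb in id_embeddings.items():
        score = cosine_similarity(photo_embedding, emb)
        if score > best_score:
            best_cabin, best_score = cabin, score
    return best_cabin if best_score >= threshold else None

# An event photo of the guest in cabin B202, with a little noise added.
event_photo = id_embeddings["B202"] + rng.normal(scale=0.2, size=128)
print(match_cabin(event_photo))      # -> "B202"
```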

The cruise took place at the same time as Ed Snowden was hitting the headlines. And the two came together in my mind, as I thought about the state of surveillance a few years hence.

Face recognition software is improving dramatically, and will soon be able to recognize almost anyone’s mug shot out of billions – not just the few thousand on a cruise. Already some people are not putting an image of their face on Facebook for fear that software may recognize them in other situations. But don’t think that will protect you from the NSA; anyone with a photo ID, e.g. a driving license, is already in the system.

Sometime not long from now, your face may appear on a CCTV feed. You may just be innocently walking down a street, but anyone wanting more information could instantly pull up your name, date of birth, address, social security number, and the other places you’ve been sighted. By that time, various databases may be more integrated, giving access to your IRS records, telephone and email records, web history, political affiliations, etc. Anything that is “on record.”

Moreover, don’t feel safe just because we are not at that point yet. The NSA may not be able to analyze all the data they are collecting at the moment. However, as computing power continues its relentless exponential growth, data-mining will reach the point where they will be able to reach back and process the information being gathered now. And the unbreakable encryption you feel safe behind today may be crackable in the future.

Whether we like it or not, this is the direction the technology is going, and, judging from recent government responses, it is not going to stop. Big Brother is here and growing fast.

The moral of this unfolding story? Beware. Be very aware.


Ironically, I write this just as the conviction of Bradley Manning sets a legal precedent for releasing information on the Internet to be classified as espionage. And I thought they were the ones spying on us.

Does Our Brain Really Create Consciousness?

Sunday, June 12th, 2011

[Originally published as a Huffington Post blog - 06/09/11]

Western science has had remarkable success in explaining the functioning of the material world, but when it comes to the inner world of the mind, it has very little to say. And when it comes to consciousness itself, science falls curiously silent. There is nothing in physics, chemistry, biology, or any other science that can account for our having an interior world. In a strange way, scientists would be much happier if minds did not exist. Yet without minds there would be no science.

This ever-present paradox may be pushing Western science into what Thomas Kuhn called a paradigm shift–a fundamental change in worldview.

This process begins when the prevalent paradigm encounters an anomaly — an observation that the current worldview can’t explain. As far as today’s scientific paradigm is concerned, consciousness is certainly one big anomaly. It is the most obvious fact of life: the fact that we are aware and experience an internal world of images, sensations, thoughts, and feelings. Yet there is nothing more difficult to explain. It is easier to explain how the universe evolved from the Big Bang to human beings than it is to explain why any of us should ever have a single inner experience. How does all that electro-chemical activity in the physical matter of the brain ever give rise to conscious experience? Why doesn’t it all just go on in the dark?

The initial response to an anomaly is often simply to ignore it. This is indeed how the scientific world has responded to the anomaly of consciousness. And for seemingly sound reasons.

First, consciousness cannot be observed in the way that material objects can. It cannot be weighed, measured, or otherwise pinned down. Second, science has sought to arrive at universal objective truths that are independent of any particular observer’s viewpoint or state of mind; to this end, it has deliberately avoided subjective considerations. And third, there seemed no need to consider it: the functioning of the universe could be explained without having to explore the troublesome subject of consciousness.

However, developments in several fields are now showing that consciousness cannot be so easily sidelined. Quantum physics suggests that, at the atomic level, the act of observation affects the reality that is observed. In medicine, a person’s state of mind can have significant effects on the body’s ability to heal itself. And as neurophysiologists deepen their understanding of brain function, questions about the nature of consciousness naturally raise their heads.

When the anomaly can no longer be ignored, the common reaction is to attempt to explain it within the current paradigm. Some believe that a deeper understanding of brain chemistry will provide the answers; perhaps consciousness resides in the action of neuropeptides. Others look to quantum physics; the minute microtubules found inside nerve cells could create quantum effects that might somehow contribute to consciousness. Some explore computing theory and believe that consciousness emerges from the complexity of the brain’s processing. Others find sources of hope in chaos theory.

Yet whatever ideas are put forward, one thorny question remains: How can something as immaterial as consciousness ever arise from something as unconscious as matter?

If the anomaly persists, despite all attempts to explain it, then maybe the fundamental assumptions of the prevailing worldview need to be questioned. This is what Copernicus did when confronted with the perplexing motion of the planets. He challenged the geocentric worldview, showing that if the sun, not the earth, was at the center, then the movements of the planets began to make sense. But people don’t easily let go of cherished assumptions. Even when, 70 years later, the discoveries of Galileo and Kepler confirmed Copernicus’s proposal, the establishment was loath to accept the new model. Only when Newton formulated his laws of motion, providing a mathematical explanation of the planets’ paths, did the new paradigm start gaining wider acceptance.

The continued failure of our attempts to account for consciousness suggests that we too should question our basic assumptions. The current scientific worldview holds that the material world — the world of space, time and matter — is the primary reality. It is therefore assumed that the internal world of mind must somehow emerge from the world of matter. But if this assumption is getting us nowhere, perhaps we should consider alternatives.

One alternative that is gaining increasing attention is the view that the capacity for experience is not itself a product of the brain. This is not to say that the brain is not responsible for what we experience — there is ample evidence for a strong correlation between what goes on in the brain and what goes on in the mind — only that the brain is not responsible for experience itself. Instead, the capacity for consciousness is an inherent quality of life itself.

In this model, consciousness is like the light in a film projector. The film needs the light in order for an image to appear, but it does not create the light. In a similar way, the brain creates the images, thoughts, feelings and other experiences of which we are aware, but awareness itself is already present.

All that we have discovered about the correlations between the brain and experience still holds true. This is usually the case with a paradigm shift; the new includes the old. But it also resolves the anomaly that the old could not explain. In this case, we no longer need to scratch our heads wondering how the brain generates the capacity for experience.

This proposal is so contrary to the current paradigm that die-hard materialists easily ridicule and dismiss it. But we should not forget the bishops of Galileo’s time who refused to look through his telescope because they knew his discovery was impossible.

Gaian Perspective on Gulf Oil Leak

Saturday, May 29th, 2010

The Global Brain is Watching.

(This is text from video. Watch the video.)

The live video feed from the fractured oil pipe a mile beneath the surface is allowing anyone with Internet access (currently more than 1.7 billion of us) to watch the plume of oil pouring into the Gulf of Mexico. And, moreover, to watch live the various attempts to plug the leak. It has become, in the words of the Associated Press, “an Internet sensation.” Thank you, Obama, for insisting that BP not cut the video feed.

When I wrote The Global Brain, 30 years ago, I (and just about everyone else in the field) was imagining the embryonic Internet in terms of text and data processing. None of us foresaw the rich audio-visual medium it would become. Or that we, the neurons of the global brain, would be able to watch live as global catastrophes such as this unfold. Our eyes have become the eyes of Gaia, collectively observing our unfolding collective destiny.

And what, from a Gaian perspective, is this oil that threatens, not just the fishing business in Louisiana, but, far more importantly, the coral reefs and sea floor life that lie at the base of the ocean food chain?

Oil is but highly concentrated life: dead forests from hundreds of millions of years ago, compressed by immense geological pressures into this hydrocarbon-rich liquid that we prize so much for all the energy it contains. As Thom Hartmann points out in his book The Last Hours of Ancient Sunlight, it is, in the final analysis, energy captured by plants way back in the past.

To the energy of the trapped sunlight was then added the energy of the immense compression it underwent beneath the weight of continents. That compression changed the chemical nature of the vegetable remains. The hydrocarbons we prize so much are seldom found in nature. It took unimaginable pressure to force the chains of carbon atoms found throughout life on Earth into rings of carbon atoms. (Today we have taken this a step further, using intense pressures in the laboratory to create spheres of carbon atoms – so-called Buckyballs.)

These hydrocarbon rings don’t fit well with life back on the surface of Gaia. Being immiscible with water, they clog up the metabolic processes of most lifeforms. To us, liberated oil is pollution.

A few bacteria do like this dense nutrient-rich material. They live on the sea floor happily gobbling the trickle of oil that oozes through numerous cracks in the ocean bed, breaking down the carbon rings into more hospitable molecules. But they never evolved to cope with hundreds of thousands of barrels a day pouring out of a single vent.

Where will it end? No one knows. Eventually, microbes on the ocean floor will slowly break down the carbon rings, feeding them back into the food chain hundreds of millions of years after they were first formed. And life will, in time, cope with the dispersants that have been added to the mix (themselves a product of the oil industry). In the long term, Gaia shows a remarkable resilience.

Meanwhile, the world watches with bated breath, half mesmerized, half devastated – and perhaps just a tiny part glad that it may take such a catastrophe to bring this crazy bunch of monkeys to their senses.

The Death of the Mouse

Saturday, April 3rd, 2010

The mouse that sits in our hand so much of the day is on the way out. It has served its time well. But the future is mouse-free. Thanks to the iPad.

Apple brought the mouse to the mass market nearly thirty years ago as a way of pointing to places on a computer screen. It freed us from having to navigate by keyboard strokes. (Remember MS-DOS?) The mouse was the best that could be done back then. Now, with the touch-screen capabilities of the iPhone, iPad, and similar devices, we don’t need mice anymore. We can use our fingers directly, and with much greater power.

It will not be long before laptops and desktops also have touch-screens. We will be interacting with our computers in the way we see in Avatar, manipulating the screen directly with our fingers. In five years’ time the mouse will be “so twentieth-century”; we will come across old ones tucked away in boxes in the closet.

This is one reason I believe the iPad will spawn as great a breakthrough in computing as did the original Macintosh.

Greenhouse Cost of Beef

Saturday, September 15th, 2007

Offset your driving by becoming a vegetarian!

A study by the National Institute of Livestock and Grassland Science in Tsukuba, Japan, showed that producing a kilogram of beef leads to the emission of greenhouse gases with a warming potential equivalent to 36.4 kilograms of carbon dioxide. That is about the same as driving the average European car for 250 kilometres. The production also consumes enough energy to light a 100-watt bulb for nearly 20 days.
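Those equivalences are easy to check back-of-envelope, using only the figures quoted above (the per-kilometre figure for the car is implied by those numbers, not taken directly from the study):

```python
# Back-of-envelope check of the equivalences quoted above,
# using only the figures given in the article.

co2_per_kg_beef = 36.4        # kg CO2-equivalent per kg of beef
car_distance_km = 250         # km of average European driving quoted as equivalent
implied_car_emissions = co2_per_kg_beef / car_distance_km * 1000
print(f"implied car emissions: {implied_car_emissions:.0f} g CO2 per km")  # ~146 g/km

bulb_watts = 100
bulb_days = 20
energy_kwh = bulb_watts * 24 * bulb_days / 1000
print(f"energy per kg of beef: about {energy_kwh:.0f} kWh")                # 48 kWh
```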

Over two-thirds of the energy goes towards producing and transporting the animals’ feed. The calculations did not include the impact of managing farm infrastructure and transporting the meat, so the total environmental load is higher than the study suggests.

(Animal Science Journal, DOI: 10.1111/j.1740-0929.2007.00457.x).