The origin of life and the future of computers
By John Hewitt on January 15, 2013

The evolution of computers and the evolution of life share a common constraint: beyond a certain level of complexity, the advantage goes to whatever can build on what is already at hand rather than redesign from scratch. Life’s crowning achievement, the human brain, seeks to mold for itself the power and directness of the computing machine, while endowing the machine with its own economy of thought and movement. To predict the form their eventual convergence might take, we can look back, now with greater understanding, at the early drivers that shaped life, and reinvigorate those ideas to guide our own construction.

“Life is, in effect, a side reaction of an energy-harnessing reaction. It requires vast amounts of energy to go on.”

Nick Lane, author of a new paper in the journal Cell, was speaking here about critical processes in the origin of life, though his words are an apt description of computing in general as well. His paper puts forth bold new ideas for how proto-life forms originated in deep-sea hydrothermal vents by harnessing energy gradients. The strategies employed by life offer some insight into how we might build the ultimate processor of the future.
Many of us have read claims regarding the information storage capacity and processing rate of the human brain, and have wondered: how do they measure that? Well, the fact is, they don’t. With our limited understanding of how living systems like the brain work, it is folly at this point to attempt any direct comparison with the operation of computing machines. Empirical guesswork is often attempted, but in the end it is little more than handwaving.
Google, while clearly not a brain of any kind, certainly processes a lot of information. We might ask, how well does it actually perform? It is easy enough to verify that a typical search query [1] takes less than 0.2 seconds. Each server that touches the operation spends perhaps a few thousandths of a second on it. Google’s engineers have estimated that the total work involved in indexing and retrieval amounts to about 0.0003 kWh of energy per search. They did not indicate how they arrived at this number, but if we think about it, it is a fascinating result, despite the unfortunate prefixing of the units. Suppose we take the liberty of defining this quantity, the energy per search, as a googlewatt. Such a measure would be a convenient way to characterize a computing ecosystem, much as the Reynolds number [2] qualitatively characterizes flow conditions across aerodynamic systems.
One might then ask: if the size of a completely indexed web crawl is constantly expanding while the energy per elementary search operation contracts with improvements in processor efficiency, how might the googlewatt scale as the ecosystem continues to evolve? In other words, can we hope to keep querying a rapidly expanding database at 0.3 Wh per search or, in dollar terms, at $0.0003 per search?
To put energy-per-search in more familiar terms, Google notes that the average adult requires 8000 kilojoules (kJ) a day from food, and concludes that a search is equivalent to the energy a person burns in about 10 seconds. No doubt brains perform search very differently from Google, but efforts to measure energy use by brains have proved confounding. PET scanning [3], for example, is not a very reliable tool for localizing function to specific parts of the brain, and its temporal resolution is pitiful. It is, however, not too bad at measuring global glucose utilization, from which energy use can be inferred. Subjects having their brains imaged by a PET scanner while performing a memory-retrieval task frequently appear to use less energy than when resting. So if we accept the bigger picture in some of these studies, we often see the counterintuitive result that the googlewatt for a brain, at least transiently and locally, can take on a negative value. This is not totally unexpected, since inhibition of neural activity balances excitation at nearly every turn. The situation may be likened to that of a teacher silencing the background din of an unruly class and demanding attention before the lesson begins.
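Google’s 10-second equivalence is easy to verify from the two figures just quoted. A minimal sanity check in Python:

```python
# A quick check of Google's comparison: 0.0003 kWh per search vs.
# 8000 kJ/day of human metabolism.
search_energy_kj = 0.0003 * 3600      # 1 kWh = 3600 kJ -> ~1.08 kJ per search
human_power_w = 8000 * 1000 / 86400   # 8000 kJ/day -> ~92.6 W sustained

seconds_equivalent = search_energy_kj * 1000 / human_power_w
print(f"One search = {search_energy_kj:.2f} kJ "
      f"= {seconds_equivalent:.0f} s of human metabolism")
# -> One search = 1.08 kJ = 12 s of human metabolism (Google rounds to ~10 s)
```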
To find a more relevant comparison to biological wetware, let’s take a quick look at IBM’s Watson [4], the supercomputer of Jeopardy! fame. When operating at 80 teraflops, it processes some 500GB, the equivalent of a million books, per second. To achieve this kind of throughput, Watson replicates the 4TB of data in its filesystem across 16TB of RAM. While no longer the state of the art, Watson is certainly no slouch.

Each of Watson’s 90 Power 750 server nodes [5] has four processor chips, for a total of 32 cores per node. Each 567mm² chip, fabricated with a 45nm process, has 1.2 billion transistors. The Power 750 was based on the earlier 575 server, but was designed to be more energy efficient and to run without water cooling. Because it is air-cooled, the 750 cannot consume more than 1600W of power and is therefore limited to 3.3GHz; the 575 could handle 5400W and run a bit faster, at 4.7GHz. In case you are wondering where these processor speeds come from in the first place, it may be comforting to know that they are probably not just pulled out of a hat. They appear to belong to a sequence known as the E6 preferred number series [6], for which IBM must have a special fondness, and which is, of course, eminently practical.
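For the curious, the E-series preferred numbers (familiar from standard resistor values) step through each decade at a roughly constant ratio, about 10^(1/6) ≈ 1.47 for E6. A small sketch showing that both quoted clock speeds land on E6 values:

```python
# Standard E6 preferred numbers (IEC 60063): six values per decade,
# spaced at a roughly constant ratio of ~10**(1/6) ~ 1.47.
E6 = [1.0, 1.5, 2.2, 3.3, 4.7, 6.8]

def e6_series(decades=2):
    """Expand the E6 values across the given number of decades."""
    return [round(v * 10**d, 1) for d in range(decades) for v in E6]

print(e6_series())           # [1.0, 1.5, 2.2, 3.3, 4.7, 6.8, 10.0, 15.0, ...]
print(3.3 in E6, 4.7 in E6)  # True True -- both Power server clocks are E6 values
```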
The point of these details is that to marginally outperform a human at a memory game, Watson churns out only a few hundred MFLOPS per watt while consuming a whopping 140 kilowatts. The human may be running a 50W system, with only about 10 or so watts being used by the brain. So if life is a side effect of an energy-harnessing reaction, computing in silicon looks more like a side effect of an energy-dissipating system!
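That efficiency figure falls straight out of the specs quoted above; here is the arithmetic, using the article’s 10 W figure for the brain’s share of the body’s power budget:

```python
# Watson's energy efficiency, from the figures quoted above.
watson_flops = 80e12   # 80 teraflops
watson_watts = 140e3   # 140 kW

print(f"Watson: {watson_flops / watson_watts / 1e6:.0f} MFLOPS per watt")  # ~571

# Against the brain's ~10 W share of the body's power budget:
brain_watts = 10
print(f"Power ratio: {watson_watts / brain_watts:,.0f}x")  # 14,000x
```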
In fact, the cooling system for IBM’s new 3-petaflop supercomputer, SuperMUC [7], uses waste heat from the machine to warm the Leibniz Center where it is housed. It seems our brains still have a few tricks to teach their silicon brethren. How, then, might we apply some of these tricks to computing?
Philip Ball published articles last month in Nature and Scientific American in which he suggests that future supercomputers might not be powered by electrical currents borne along metal wires, but instead driven electrochemically by ions in the coolant flow. The idea of supplying power along with the coolant is not entirely new; jet fuel has long been used to cool aircraft electronics. The problem with wires or circuit-board traces is that routing dedicated power and ground traces to each transistor is, beyond a certain scale, a poor use of volume. Designers mitigate this in multi-layer boards by devoting entire planes to ground and to any of several supply voltages that might be needed. In large computers, 2D boards are stacked along with their cooling apparatus into 3D forms, but critically, the opportunity for efficient and local 3D interconnectivity is sacrificed.
If power were accessible anywhere in the volume, the efficient form of a 2D folded surface within a 3D volume might more readily be brought to bear. Elements in frequent communication but widely separated on a 2D surface can be closely apposed when folded. One might argue that high-speed optical interconnects in computers reduce transmission delays and obviate the need for a more complex geometry. Typically, however, the system of interconnects and opto-electronic hardware required to make them work takes up an exorbitant volume.

Life takes on a unique geometry at all scales according to need. Single-celled algae, for example, and the mitochondria that power cells, optimally pack catalytic surface into their volumes using convoluted folds that continually evolve and intercalate [8] according to need. Mitochondria divide when demand for their products increases, and they undergo fusion to perform error correction when their DNA is mutated by toxic oxygen metabolites. The cerebral cortex also uses an extensively folded and connected surface, as do the dynamic synaptic sculptures within it. Folding and re-folding might even be said to be one of the central preoccupations of life, whether one is referring to proteins, membranes, or organs. The ability of membranes in particular to enclose and isolate equipotential volumes, for use as local power sources or sinks as demand arises, is life’s calling card.
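The payoff of folding is easy to quantify with a toy model. In the sketch below, an accordion-folded strip stands in for a folded 2D sheet (the panel length and layer spacing are hypothetical): two elements 80 units apart along the surface end up just 8 units apart through the folded volume.

```python
import math

def accordion(s, panel=10.0):
    """Map arclength s along a strip folded accordion-style (panels of
    the given length, alternating direction, 1 unit between layers) to
    an (x, y) point in the embedding plane."""
    layer, offset = divmod(s, panel)
    layer = int(layer)
    x = offset if layer % 2 == 0 else panel - offset  # alternate direction
    y = layer * 1.0                                   # layer spacing
    return (x, y)

# Two points 80 units apart along the flat strip...
a, b = accordion(2.0), accordion(82.0)
print(f"80.0 units along the sheet, "
      f"{math.dist(a, b):.1f} units through the folded volume")
# -> 80.0 units along the sheet, 8.0 units through the folded volume
```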
It appears that life originated not by chance but in the most predictable, inevitable, and simple way possible. Life needs an energy gradient to drive things, but a gradient not so great that any nascent structure is destroyed before it might be stabilized. While lightning, volcanism, and even cosmic ray bombardment might forge molecular precursors to life, deep-sea hydrothermal vents have emerged as the prebiotic mill through which life consistently percolates. The major ions segregated by primordial membranes appear to have been H+ and Na+, both logical choices in an ocean environment.
Their early selection accounts for the present ubiquity of these ions as the currency of cellular charge. Iron-sulfide reactions of the type that led to early forms of metabolism still occur in thermal vents today. Vast mineral deposits are also found near these vents, in particular rich “manganese nodules.” (Incidentally, these minerals provided a convenient alibi for the CIA’s dramatic effort to recover the Soviet K-129 submarine with the Howard Hughes Glomar Explorer [9] in the early 1970s, but that is a story for another time.)
Nick Lane’s origin of life paper is notable for the primacy it puts upon membrane bioenergetics as the base upon which life coalesced. The primitive membrane forms were initially devoid of the protein machinery [10] found in modern organisms. One such protein is a rotary engine known as an ATPase. Modern ATPases are clusters of up to 300 protein elements printed with a 5Å (angstrom) process, self-assembled, and typically run at around 9000 RPM. Initially leaky and randomly structured, early membranes harnessed energy gradients with only marginal efficiency.
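To put that spin rate in production terms, note (a figure from the biochemistry literature, not from Lane’s paper) that an F-type ATPase yields roughly three ATP molecules per full rotation, one per 120° catalytic step of its rotor:

```python
# Rough ATP output of one rotary ATPase at the quoted spin rate.
rpm = 9000
atp_per_rotation = 3  # ~1 ATP per 120-degree catalytic step (literature value)

atp_per_second = (rpm / 60) * atp_per_rotation
print(f"~{atp_per_second:.0f} ATP molecules per second, per enzyme")  # ~450
```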
Over time, those that persisted grew increasingly complex by incorporating these nanomachines, culminating in the exquisitely malleable geometry of the brain. Similarly, our first computers made for a dreadful waste of energy. Over time their efficiency has improved dramatically, and the number of atoms per transistor [11] has shrunk to near the limits at which the current silicon technology can be reliably supported. New technologies incorporating circuits just a few atoms wide [12] promise dramatic miniaturization of our current state of the art.

Increasingly, heat looms as the single greatest obstacle to processor speed. As we saw with the Power 575, speed and investment in cooling go hand in hand. Years ago, the maximum rate of information processing in a given volume was derived under the assumption that irreversible computation would be limited only by the rate at which heat can be removed from that volume. Indeed, our understanding of any such limit evolves with our grasp of natural phenomena. The maximum amount of information that might be stored within a spherical volume is known as the Bekenstein bound [13]. It is a more esoteric quantity, but for those inclined to investigate such measures, it is defined as the entropy of a black hole whose event horizon has the corresponding surface area.
In 2000, Seth Lloyd, one of the pioneers of quantum computing, envisioned the ultimate laptop [14]: a 1-kilogram mass occupying a 1-liter volume, whose maximum speed he calculated at roughly 10^50 operations per second. Needless to say, any such device would quickly be consumed in a ball of plasma. Cells, and the lipid membranes that sustain life only within a few degrees of 98.6°F, won’t stand a chance against whatever technology might approach the ultimate laptop, but for now at least, they still reign supreme.
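Lloyd’s headline number follows from the Margolus-Levitin theorem, which caps a system of average energy E at 2E/(πħ) elementary operations per second; with E = mc² for one kilogram, the 10^50 figure drops right out. A minimal check:

```python
import math

# Lloyd's "ultimate laptop": ops/sec <= 2E / (pi * hbar), with E = m * c**2.
hbar = 1.0545718e-34   # reduced Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
m = 1.0                # kg

E = m * c**2                          # ~9.0e16 J
ops_per_second = 2 * E / (math.pi * hbar)
print(f"{ops_per_second:.2e} ops/s")  # ~5.4e50, i.e. ~10^50 as quoted
```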
Is it even possible to measure how good life is at what it does? One thing that makes life the envy of all things hardware is its ability to replicate itself. Imagine the power of a supercomputer that could replicate processors in situ as demand arises, and then absorb them just as fast (or faster) when they began to accumulate errors or became superfluous. Along these lines, a recent theoretical paper [15] seeks to define how efficiently an E. coli bacterium can produce a copy of itself.

The astounding result is that the excess heat generated by real bacteria is only about three times that which is optimally possible. It is difficult to imagine what an optimal assembly of a bacterium from its constituent atoms might look like, as there are no doubt many near-optimal ways to go about it.
If we are to conclude anything at this point, it might be that any machine able to duplicate one of its processors using roughly the same amount of energy, and in roughly the same amount of time, that the processor itself uses during normal operation would be a thing of unimaginable power. In the same breath, any computer able to direct its own assembly would be potentially ominous; perhaps, some might say, a bit too much like us.
Research papers: “The Origin of Membrane Bioenergetics” [16] and “Engineers Hunt for Ways to Cool Computing” [17]
Endnotes
- typical search query: http://googleblog.blogspot.com/2009/01/powering-google-search.html
- Reynolds number: http://en.wikipedia.org/wiki/Reynolds_number
- PET scanning: http://en.wikipedia.org/wiki/Positron_emission_tomography
- IBM’s Watson: http://www.extremetech.com/tag/watson
- Power 750 server nodes: http://en.wikipedia.org/wiki/POWER7
- E6 preferred number series: http://en.wikipedia.org/wiki/Preferred_number
- SuperMUC: http://www.extremetech.com/extreme/131259-ibm-deploys-hot-water-cooled-supercomputer
- intercalate: http://en.wikipedia.org/wiki/Intercalation_%28chemistry%29
- Glomar Explorer: http://en.wikipedia.org/wiki/Project_Azorian
- protein machinery: http://www.youtube.com/watch?v=3y1dO4nNaKY
- atoms per transistor: http://www.extremetech.com/extreme/119025-extreme-nanotechnology-creating-a-transistor-with-a-single-atom
- circuits just a few atoms wide: http://www.extremetech.com/extreme/136614-researchers-create-single-atom-silicon-based-quantum-computer
- Bekenstein Bound: http://en.wikipedia.org/wiki/Bekenstein_bound
- envisioned the ultimate laptop: http://arxiv.org/pdf/quant-ph/9908043
- recent theoretical paper: http://arxiv.org/pdf/1209.1179v1.pdf
- “The Origin of Membrane Bioenergetics”: http://www.cell.com/retrieve/pii/S0092867412014389#MainText
- “Engineers Hunt for Ways to Cool Computing”: http://www.scientificamerican.com/article.cfm?id=engineers-hunt-for-ways-to-cool-computing