Thinking Small
Forty years ago, quantum
theorist and Nobel Laureate Richard Feynman became the first individual to
sight the gloaming before the dawn. In
a famous lecture titled “There’s
Plenty of Room at the Bottom”, he examined the infant field of materials
science. With his well-known sense of
humor, Feynman hypothesized that as scientists learned more about how to make
transistors and other small-scale structures, we’d be able to make them smaller
and smaller until – eventually – they would approach their natural limits, at
the edges of quantum uncertainty, stopping only when the atoms themselves
became too slippery, too unknowable, to be mechanically reliable.
Before molecular biology,
such speculations seemed wild and nearly unfounded. But again and again Feynman proved himself an intellectual giant, standing on the shoulders of giants and able to see further than almost anyone else. The detailed study of the
structures of the cell revealed that nature had engineered machinery from the
insubstantial substance of a few atoms strung together; the search for a “vital
force” only revealed a bewildering array of mechanisms – enzymes, ribosomes and
other tiny structures – which demystified the cell even as it revealed the
incredible versatility of atomic-scale chemistry.
Feynman went on to postulate
that – once the tidy language of atoms had been decoded – it would be possible
to engineer molecules precisely, placing one atom against another to create the
smallest possible artifacts. What kinds
of tools might we create with these ultra-miniaturized forms? Well, Feynman mused, we might be able to
create a molecular “doctor” which would be hundreds of times smaller than an
individual cell. It could be injected
into a human body, and go to work, reading the health of cells, making repairs,
and generally keeping the body in perfect health.
Science fiction, his peers
pronounced. Absolute fantasy tossed off
by the master storyteller of physics.
During the heights of the Industrial Age, big carried an importance of
its own – big science, big engineering projects, big dreams. Even computers, in the 1950’s, consumed
whole floors of buildings. But even as
Feynman made his address, engineers at Texas Instruments put the finishing
touches on the first integrated circuits, and the world began to grow small.
Feynman laid his tiny seeds
in the ground, and Marvin Minsky, the founding father of Artificial
Intelligence, possessed a mind fertile enough for Feynman’s dreaming to take
root. Throughout the 1960’s and 70’s,
Minsky led the world in future thinking – consulting with Stanley Kubrick and
Arthur C. Clarke on the cinematic reality of HAL 9000, directing a small
squadron of graduate students into the emerging fields of machine intelligence,
and always speaking broadly about the nascent possibilities of tiny
technologies.
By the mid-70’s, Minsky’s
word was taken as gospel around the halls of MIT’s Artificial Intelligence Lab,
and the grand old man of computer science found a mind ripe to nurse the seed
of Feynman’s ideas. A graduate student
named K. Eric Drexler came to Minsky seeking a sponsor for his master’s
thesis. Drexler, fascinated with these
tiny devices, wanted to explore their possibility. Minsky – who had never forgotten Feynman’s vision – immediately
agreed. Thank you very much, Drexler
replied – and went on to produce a vision which would come to shape the world.
When I attended MIT – a brief stay in the early 80’s – Drexler had just received his Master’s degree in
computer science, and had, like some Pied Piper of Cambridgeport, lured a small
coterie of students into his orbit. Not
yet called “hackers” (though they certainly practiced the arts of hacking, in
the positive – and now nearly lost – sense of the word) they found in Drexler’s ideas a blueprint for a future as programmable as they could imagine. In salons at his flat, Drexler entertained
younger minds with a set of ideas he christened nanotechnology. Bring a bottle of wine, pull up a chair, and
help design the future.
How could any red-blooded
hacker resist such an opportunity?
I went to one of these salons
d’idées, and by the end of the evening considered Drexler
the prophet of the next age of Man – a time when nearly anything seemed
possible: nanomachines – or, more commonly, nanites – which could
repair cellular-level damage and guarantee a nearly eternal, healthy existence;
kitchen appliances which, when fed garbage, could produce an endless supply of
high-quality “meat”; and an inexhaustible supply of incredibly strong building
materials made of diamond, grown in forms of any conceivable volume. Most of all, Drexler promised a material
world nearly entirely subservient to the whim of the human imagination,
programmed according to need.
Like many others in Drexler’s orbit, I took many years to absorb the full implications of
nanotechnology. During this time
Drexler worked hard to both explain this revolution to the popular mind – his Engines
of Creation: The Coming Era of Nanotechnology, with a foreword by mentor
Marvin Minsky, was published in 1986 – and he moved from Cambridge to Palo
Alto, doing doctoral work at Stanford University. His thesis work – Nanosystems,
published in 1992 – grounded the wild
speculations of nanotechnology in the hard-and-fast sciences of mechanics and
atomic chemistry. A cookbook of
atomic-scale machinery, with gears, rotors and motors, Nanosystems
provided a blueprint – a codex atomicus – for the design of the
nanomolecular universe.
Now Dr. Drexler set
out to change the world. In the waning years of the twentieth century, Drexler sensed that the implications of his
work would be as profound as the work itself, so – unlike Robert Oppenheimer,
the Faust of our nuclear age – Drexler announced the bad with the good,
detailing the dangers of molecular magic.
It’s important, in any discussion of nanotechnology, to bring these dangers into full view. We do know enough of the
possibilities latent in nanotechnology to construct a simulation – an
Einstein-style thought experiment – which helps to illuminate some of the more
harrowing possibilities of a nanotech future.
Two of the most crucial –
and, as yet, unbuilt – devices in nanotechnology are the nanocomputer
and nanoassembler. The
nanocomputer, as its name implies, is a molecular machine capable of executing
a string of instructions and producing a result. In function, it differs little from today’s microprocessors,
although it bears a curious resemblance to the antique mechanical computers
designed by Charles Babbage in the middle of the Victorian era, with rods and
registers creating something like a grown-up adding machine – an adding machine
a million times smaller and a billion times faster than any microprocessor yet
designed.
Once the nanocomputer
exists, it becomes possible to create a nanoassembler, a device which –
constructed at the atomic level and working at the atomic level – can precisely
arrange atoms into most any desired form.
In 1999, working at the atomic level requires bulky and expensive scanning-probe instruments such as the Atomic Force Microscope (AFM), which use an exquisitely fine tip to “push” individual atoms into position. But a nanoassembler can
simply “pluck” atoms from a “bin” and, like some sort of post-industrial loom,
knit them into position. In every one of our cells something similar already happens: the cell’s machinery copies DNA into RNA and feeds that RNA into the ribosome – like so many instructions being fed into a computer – which then gathers the correct amino acids to create the proteins that make up our physical nature. The
nanoassembler – which contains a nanocomputer at its core – does much the same
thing, translating instructions into molecules.
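To make the analogy concrete, here’s a toy sketch in Python – every name and data structure in it invented purely for illustration, not taken from any real design – of the control loop such a device might run, turning a stream of (atom, position) instructions into a finished structure:

```python
# Toy model of a nanoassembler's control loop (illustrative only).
# Instructions pair an atom type with a target position, much as
# RNA codons are paired with amino acids inside the ribosome.
from typing import Dict, List, Tuple

Position = Tuple[int, int, int]

def assemble(instructions: List[Tuple[str, Position]]) -> Dict[Position, str]:
    """Interpret (atom, position) instructions and return the
    finished structure as a map from positions to atoms."""
    structure: Dict[Position, str] = {}
    for atom, position in instructions:
        if position in structure:
            raise ValueError(f"position {position} already occupied")
        # A real assembler would mechanically "pluck" the atom from a
        # feedstock bin and bond it in place; here we merely record it.
        structure[position] = atom
    return structure

# A three-atom "product": one carbon bridging two hydrogens.
blueprint = [("H", (0, 0, 0)), ("C", (1, 0, 0)), ("H", (2, 0, 0))]
print(assemble(blueprint))
```

The point of the sketch is the architecture, not the chemistry: instructions in, matter out, with a computer in the middle doing the translating.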
The nanoassembler is the
Holy Grail of nanotechnology; once a perfected nanoassembler is available,
almost anything becomes possible – which is both the greatest hope and biggest
fear of the nanotechnology community.
Sixty years ago, John Von Neumann – who, along with Alan Turing, founded
the field of computer science – surmised that it would someday be possible to
create machines that could copy themselves, a sort of auto-duplication which
could lead from a single instance to a whole society of perfect copies. Although a Von Neumann machine is relatively simple in theory, no such device has ever been made – because it’s far easier, at the macroscopic scale, to build a copy of a machine than it is to get the machine to copy itself.
At the molecular level, this balance is reversed; it’s far easier to get
a nanomachine to copy itself than it is to create another one from
scratch. This is an enormous boon –
once you have a single nanoassembler you can make as many as you might need –
but it also means that a nanoassembler is the perfect plague. If – either intentionally or through
accident – a nanoassembler were released into the environment, with only the
instruction to be fruitful and multiply, the entire surface of the planet –
plants, animals and even rocks – would be reduced to a “gray goo” of such
nanites in little more than 72 hours.
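The arithmetic behind that figure is simple exponential doubling, and it’s worth seeing it spelled out. Here’s a back-of-the-envelope calculation in Python – the nanite mass and doubling time are assumptions chosen only to show the shape of the curve, not figures from any published design:

```python
import math

# How long until one self-replicating nanite out-masses the planet?
nanite_mass_kg = 1e-18       # assumed mass of a single replicator
earth_mass_kg = 5.97e24      # mass of the Earth
doubling_time_min = 15.0     # assumed time for one replication cycle

# Number of doublings for one nanite to exceed the Earth's mass:
doublings = math.log2(earth_mass_kg / nanite_mass_kg)
hours = doublings * doubling_time_min / 60.0
print(f"{doublings:.0f} doublings, about {hours:.0f} hours")
# -> roughly 142 doublings, about 36 hours: comfortably inside 72.
```

Exponential growth is merciless: even if these assumed figures are off by a factor of a thousand, the answer shifts by only ten doublings – a couple of hours.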
This “gray goo problem”,
well known in nanotechnology, acts as a check against the unbounded optimism which permeates scientific developments in atomic-scale devices. Drexler believes the gray goo problem is mostly
imaginary, but does admit the possibility of a “gray dust” scenario, in which
replicating nanites “smother” the Earth in a blanket of sub-microscopic
forms. In either scenario, the outcome
is much the same. And here we encounter
a technological danger unprecedented in history: If we had stupidly blown
ourselves to kingdom come in a nuclear apocalypse, at least the cockroaches
would have survived. But in a gray goo
scenario, nothing – not even the bacteria deep underneath the ground – would be
untouched. Everything would become one
thing: a monoculture of nanites.
It’s not as though we could
close the door on nanotechnology, pronouncing it “too dangerous” for peaceful
uses, for there are two fundamental approaches to the field. The molecular nanotechnologists study how to
build machinery up from the atomic scale, while the molecular biologists study
how to “strip down” the organelles of the cell into atomic-scale devices. Given the immense commercial pressures of
the biomedical industry, it seems unlikely that the molecular biologists will
stop learning how we work, so eventually – likely sooner rather than later –
we’ll know enough to construct both nanocomputers and nanoassemblers,
one way or another.
Hence, Drexler’s first act
was to create a scientific and educational foundation – The Foresight Institute – to act as both a
clearing-house and think-tank for research into nanotechnology. In its fourteen years, Foresight has grown
to become a focal point for the community of nanotechnology researchers – and
ethical discussions about the nature of the collective project have an
important place in it. Only with such a
structure in place, Drexler argues, can we have any degree of safety in a
coming age of nanotechnology.
The public debate on matters
nanotechnological is practically non-existent.
Right now the field belongs to research scientists – and a growing
community of amateurs.
In mid-October, the Foresight Institute held its annual conference at a mid-grade hotel in Santa
Clara, Silicon Valley’s ground zero for the revolution in microelectronics and
software. According to the attendees,
there was a new buzz in the air; recent developments in molecular-scale
manufacturing have resulted in the invention of some of the very basic
components Drexler described in Nanosystems – the same components which
will be essential features of nanocomputers and nanoassemblers. The pieces are coming together.
Last week I found myself
flipping through the galley pages of Robert Freitas’ Nanomedicine, a
book that was perhaps the most eagerly sought-after prize at this year’s
Foresight conference. (The fifty
advance copies provided at the event found many eager buyers.) More than anything before it, Nanomedicine
attempts to articulate the promise of Feynman’s ultra-miniaturized “doctor”,
and lays out, step by step, the technological hurdles which must be overcome on the way to nanomedical devices.
Perhaps the most interesting
aspect of Nanomedicine is the author himself. Robert Freitas is neither a doctor nor a molecular physicist;
while he holds a post at the Institute for
Molecular Manufacturing (IMM) – which Drexler founded as the R&D and
grant arm of Foresight – he is really only an amateur, uncredentialed in the
field he describes. This would be an
unrecoverable fault in other, more established fields of scientific discourse,
but nanotechnology presently lives in the liminal gap between imagination and
reality. Much as with the hackers of the “Homebrew Computer Club” in the 1970’s, there’s plenty of room for the activity
of amateurs – because, in some sense, everyone working in the field is
still an amateur. The Homebrew Computer
Club gave Steve Jobs and Steve Wozniak a platform to share their work and sell
the Apple I, gave Lee Felsenstein the opportunity to demonstrate the first
portable computers, and legitimized the amateur in a field dominated by
corporate “big-iron” interests.
Foresight, the IMM, and nanotechnology in general have such a feel –
hackers on the edge of another
revolution. For the next several years,
amateurs will be essential to the field, the necessary middle step on its way to becoming a professional discipline.
For now, nanotechnology is
beneath the cultural radar; even the corporations sponsoring research into
nanotechnology don’t quite know what to do with it. Dr. Ralph Merkle, who, after
Drexler, has done more than any other individual to advance the science of
molecular engineering, had a post at Xerox PARC – but left it, just last
month. Xerox wanted Merkle to split his
focus between nanotechnology and public-key cryptography, a field which Merkle
helped to define. But Merkle, unwilling
to give his intellectual passion only half his attention, left to become a Research Fellow
for the Zyvex Corporation, the first of a
new generation of nanotechnology startups.
Xerox has a long tradition of shooting itself in the technological foot
– most of the innovations of GUI computing were pioneered there – so one has to
wonder if Xerox hasn’t perhaps done it again, and locked itself out of the
market for the semiconductor computer’s successor.
But the work goes on. This Congress – which seems reluctant to
fund any R&D that doesn’t have immediate benefits for the military or
medicine – doubled government funding for nanotechnology research. Some of that lucre will be showered upon
NASA’s Ames Research Center, in Mountain View, California, where a small team
is working on the design of nanocomputers.
Why is NASA interested in nanotechnology? Size, mostly. Current
computers – such as those found on the Mars Pathfinder – are large,
power-hungry and prone to failure.
Using nanobots, NASA could send a hundred million tiny eyes and ears to the Martian surface in a package weighing a few grams. Who cares if half of them fail? There’d still be fifty million left!
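That redundancy argument can be made precise with a little probability. A minimal sketch, assuming independent failures at an illustrative fifty-percent rate – both figures are my assumptions, not NASA’s:

```python
import math

# Expected survivors in a fleet of redundant nanobots, with each
# unit failing independently (binomial mean and standard deviation).
fleet = 100_000_000          # a hundred million nanobots
p_fail = 0.5                 # assumed per-unit failure probability

mean_survivors = fleet * (1 - p_fail)
std_dev = math.sqrt(fleet * p_fail * (1 - p_fail))
print(f"expected survivors: {mean_survivors:,.0f} +/- {std_dev:,.0f}")
# -> expected survivors: 50,000,000 +/- 5,000
```

With a fleet that large, the surviving fraction barely wavers from its mean; losing half the units is a rounding error, not a mission failure.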
The nanobot is still just a
dream; to create one, researchers will have to crack the problem of the nanocomputer
– so that’s the focus of the research group. It’s a big problem, and they don’t expect to
solve it until around 2011 – but that was an estimate made back in 1997, a
lifetime ago in nanotechnology. Every
day now, researchers are posting new breakthroughs – gleaned from materials
science or molecular biology – propelling them on their way toward a nearly unbelievable future.
Forty years after Feynman,
the promise of nanotechnology remains before us, potentially the most important
technological development in human history.
It promises perfection – and apocalypse. In the perfect worlds of fantasy science fiction, all want has
been satisfied and all disease cured.
Without the inequities that produce politics or the sufferings that
create melodrama, the human story rings hollow, as if our pains give birth to
our drives. It makes for bad
storytelling: no room for heroes or noble acts, no sacrifice to create moral
legends. But the approach to such a
perfect world seems fraught with pitfalls, the ascent into perfection allowing
ample opportunity for the darker forces of our nature to present themselves in
their full dimension.
If this were an entirely
hypothetical question, we could hand it off to the ethicists and moralists, who
could study the problem for a thousand years.
But in less than a thousand weeks, we will be confronting these
questions collectively, and no less than the fate of humanity hangs in the
balance. Already Nightline
spends a week examining the impact of bio-terrorism in the American city, and
CIA analysts lie awake at night wondering who among our enemies – and our
friends – has the capacity to wreak destruction on our very cells. If the threat ended there, if we could
simply inoculate ourselves against the terrors which our neighbors might infect
us with, we could content ourselves with the belief that the future has much the
same form as the past, that we know the shapes of the things which go bump in
the night. But more and more it becomes
clear that we are opening into a new day, and everything we know matters not at
all.