Rants & Raves "Why the Future Doesn't Need Us"
Bill Joy's cover story on the dangers posed by
developments in genetics, nanotechnology, and robotics
("Why the Future Doesn't Need Us,"
Wired 8.04) struck a deep cultural nerve. Instantly.
Literally within hours of its appearance, reactions began arriving via
email, fax, and phone from all parts of the globe. In
The New York Times and The Washington Post, Joy's essay was
recognized as a landmark publishing
event, a judgment affirmed by countless online publications, newspapers,
magazines, and television networks in America and abroad. The Times of
London reported that the article "is being compared to Einstein's 1939
letter to President Roosevelt alerting him to the possibility of a nuclear
bomb."
Joy, cofounder and chief scientist of Sun Microsystems, is one of the most
influential figures in computing: a pioneer of the Internet, godfather
of Unix, architect of software systems such as Java and Jini. That a scientist
of his stature had chosen to confront with such candor the threats accompanying
the benefits of 21st-century technologies made for more than a slew of
headlines. It sparked a dialogue that has already been joined by business
and technology leaders, members of Congress and President Clinton's inner
circle, Nobelists and theologians, educators, artists, and schoolchildren.
What Bill Joy started in Wired's pages is one of the most essential
conversations
of this new century. On the Net and in the policy arena, in universities
and communities large and small, the debate he provoked has been vigorous
- and occasionally contentious. Which is as it should be. We are in the
very early stages of understanding not only the promise but also the grave
perils of genetic engineering, nanotech, and robotics, and we need to look
carefully at what kind of future these technologies might deliver us. We
live in an ever more complex world, in which the scale of change is vast
and its acceleration rapid, yet most people have never before been so
uninvolved in technological and scientific development. But who says we
are incapable of thinking in new ways and addressing the hard questions ahead?
The overwhelming outpouring of intelligence and energy in response to Joy's
essay - as reflected in the sampling that follows - is one powerful indication
that people are up to the challenge. They've discussed the article's ideas
with strangers on airplanes and the subway. They've urged their parents
to read it. They've assigned their students to write papers on it. They've
offered their time and money to help spread the word about the issues it
raises. They've passed around the URL (www.wired.com/wired/archive/8.04).
They've proposed conferences and forums, requested reprints (nonprofits
and educational institutions can receive free reprints by faxing +1 (970)
920 6542; reprints can be purchased by emailing
reprints@wired.com),
and begun preparing what promises to be a tidal wave of published reactions.
Many readers express immense gratitude to Joy for giving their fears and
anxieties a voice; others object to certain passages; still others disagree
with almost everything he wrote. But everyone has responded with equal,
and remarkable, passion. To help keep the conversation thriving, we've
established an email address,
whythefuture@wired.com,
so that readers can receive updates and alerts on issues and events related
to the article, and to GNR technologies in general. We hope you'll join
the ongoing discussion.
For that, ultimately, is the goal. Near the end of his essay, Joy expressed
his desire to keep talking about these issues, responsibly and seriously,
"in settings not predisposed to fear or favor technology for its own sake."
Wired's fundamental mission is to be one such setting: a forum for new
ideas and arguments, frequently celebrating the revolutions occurring in
science and technology, but never afraid to explore any troubling implications
as well. Obviously, we don't have all the answers, but we do know some of the
best and most important questions. As the future rushes up to meet us,
we intend to keep asking them.
- Katrina Heron, Editor in Chief
Alex Vella, applications engineer, Protocol: Bill Joy has created the
most
profound, thought-provoking document in recent history. If this isn't Nobel
Prize thinking, then there isn't any meaning in the Nobel Prize. The reading
of "Why the Future Doesn't Need Us" should be made mandatory in universities
throughout the world.
Robyn Miller, cocreator, Myst and Riven:
I've come away hopeful; I suddenly feel that maybe there's a chance we
will not go on ignoring this gigantic pink elephant that's standing in
the middle of the room! We talk and talk about all these wonderful, beautiful
things that we will accomplish this century, but no one seriously mentions
that it may very well be our last. I am not frightened by technology. But
I am sometimes frightened by "us," by humankind. I am awed, but frightened.
Because we are mostly ignorant, but we're incredibly powerful! Like a machine,
going on and on, we grow more and more powerful every day. It's a dizzying
sort of power and if we have it - and if there's money to be made or people
to be conquered - we will use it. That's our history. That's what we inevitably
end up doing.
Doug Ellis, communications director, Aspen Research Group: More than
fear,
I felt inspired that one of our country's greatest minds was sounding the
alarm, and drawing on inspiration from as far afield as Henry David Thoreau
and the Dalai Lama in the process. To me, it was a harbinger of the future
synthesis of science and spirit.
Ed Lazowska, chair of computer science, University of Washington: Extraordinary
article.
Greg Weller, Web developer, Cuyahoga County Public Library: I think that
Joy should heed the tag line of the Sun Microsystems ad in the same issue:
"Please, if you do not take part, at least have the good sense to get out
of the way." Personally, I'll side with the techno-utopia,
live-forever-in-a-silicon-body faction.
Scott McNealy, CEO, Sun Microsystems:
Bill Joy has helped create some of the top networking technologies on the
planet. He has made a habit of predicting and inventing the future. Nearly
20 years before almost anyone else, he knew the world would be built around
the Internet. Given his track record, maybe we all need to spend more time
thinking about the issues he addressed in Wired.
Joe Martin, social worker, Pike Market Medical Center: Can the alarm
raised by Joy bring about a serious reconsideration
of these rapidly evolving technologies? Many others long before Joy have
expressed similar concerns, all to no effect. As Joy himself admits, the
pursuit of these extraordinary techniques is intoxicating, and the financial
rewards for their successful implementation have proven staggering. Still,
one would think that the preservation of humanity might cause at least
some members of the human family to take Joy's call to attention quite
seriously.
Leslie Berlowitz, executive officer, American Academy of Arts and Sciences:
We plan to organize serious studies at the Academy that will look long
term at the issues Bill Joy has raised, and the consequences of these emerging
technologies.
Tom Cantlon, Cantlon Computer Consulting: I too have mixed feelings about
technology's potential benefits versus disasters. But Joy's article raises
an interesting "populist versus elitist" question: Who gets to decide?
Peter Trei, software engineer, RSA Security: This article is manna from
heaven to those who would further centralize and tighten control over people,
and will undoubtedly be cited by those who would restrict privacy and
anonymity.
Jeanne DesForges, associate VP, Morgan Stanley Dean Witter: I made the
mistake first thing this morning in my office of printing an online copy
of the article. I was, am, and will be profoundly and unexpectedly affected,
sitting here embarrassingly almost in tears. By necessity a technology
end user, now I know why, looking out, I am more than a little sad - and
apprehensive.
Marc Schultz, translator, Japan: Just when
I have absolutely, irrevocably decided to let my subscription lapse because
I'm totally sick and tired of all the dot-com, IPO, nerd-billionaire crap
you've been filling your in-between-the-ad pages with lately, you go and
print the important Bill Joy article. Time to think again.
Ervin Kreutter, independent researcher, Ervin Erwin Edward: It took guts
to print the Bill Joy article in a world that sleeps as soundly as this
one. It was a service to this country and the planet. But it'll take more
than words to make a difference.
Sir Harold Kroto, chair, The Vega Science Trust, and 1996 Nobel laureate:
I share Bill Joy's apprehension. The only realistic long-term check-and-balance
that we can maintain over our rapidly developing technologies is to provide
ongoing education of policymakers and the public.
Pamela M. Moteles, student, Bucks County Community College: Bravo for
exposing
the potentially damaging effects of unethical technology and the self-serving
utopianism that has infected the cyberculture. We can still have a great
future with machines if we proceed with intelligence.
Jennifer R. Pournelle, managing editor, Institute on Global Conflict and
Cooperation, University of California: Joy's warning,
of course, needs to be broadcast far and wide. But
it utterly underestimates the power not of technology as we know it but
of Joy's own mind. He can weave together these disparate, dark threads
into one big, gray, gooey cloud of angst; we can imagine this cloying
vision and fear for our future. But in reality, we still don't have bug-free
word processing software. We can't get one lander safely onto Mars. My
HMO needs four people to process one $50 insurance claim. We are a very
long way indeed from the seamless integration of the categories of technology
needed to precipitate Joy's apocalypse.
Marshall Wieland, product marketing manager, Exodus Communications: One
of the most curious and subtle points
in Joy's argument is the speed with which we are approaching
a point of "critical mass." I recall the words of
J. Robert Oppenheimer from his translation of
The Bhagavad Gita following the Trinity test:
"Now I am become Death, the destroyer of worlds."
I too hope that in the midst of this economic slipstream we can come to
realize that technological innovation is not without cost or consequence.
Bruce Sterling, author, Distraction: It's a good, healthy
development to
see some übergeek guy come out of the obscurantist shadows, step right
up to the public podium like a responsible adult, and bluntly come up with
a moral crisis and a personal confession. The guy is genuinely troubled
by a matter of some public import. He's looked for historical precursors
and a deeper understanding; he's tried to grasp how other people in the
technical elite have confronted similar moral challenges.
I think he hit on a very apt role model with Oppenheimer. It seems to me
that Bill Joy is having an
"I am Death, the destroyer of worlds" moment well before the skunk works
team sets off the giant chain reaction out in the desert. I strongly approve
of this approach on his part; clearly you should always have the moral
crisis first. I think this demonstrates actual progress in the handling
of potent technologies.
Gregory Stock, director, Program on
Medicine, Technology, and Society, UCLA School of Medicine: Joy muses that
we'd be so much safer if only "we could agree, as a species, what we wanted,
where we were headed." Does he seriously imagine people would somehow agree
to forgo advanced technology without some prior disaster?
Joy's article is long on hand-wringing, but short on any practical approach
to mitigate the problems he posits. It should be obvious that there will
be no consensus on this issue, so the only feasible way of seriously delaying
these technologies would be to impose an intrusive totalitarian solution
globally.
I say that if we are one day to be transcended by machines, so be it. No
technology policy is going
to put the brakes on evolution and change that. As to the dangers of uploading
human consciousness, even the Dalai Lama once indicated he could imagine
being reincarnated in a computer. So have a little faith, Bill. You may
not like the future's eventual shape, but your grandchildren - whoever
or whatever they are - will probably think it's just fine.
Kieran Cloonan, product development analyst, Private Business: Will the
future look more like Star Trek or Blade Runner? For every great
advance imagined by Gene Roddenberry, there is a darker correlative in P.K.
Dick or William Gibson. Until now, both have seemed out of the range of the
possible - but as Ray Kurzweil is to Bill Joy, so Joy is to me, and I am
forced now to throw open the book on possibility.
Mel Schwartz, professor of physics, Columbia University, and 1988 Nobel
laureate:
The problems are real and are probably more serious and closer at hand
than most of us imagine. Indeed, now that we have learned how to clone
sheep and pigs and are on the verge of making immortality a fact, it is
time to have a far-ranging discussion of these issues.
George Dyson, historian and author of Darwin Among the Machines: The main
thing wrong with Joy's article is the title. We are not being replaced
by machines; we are being incorporated into a machine. And it is too big,
too different (dare one say alien?), and, perhaps, too intelligent for
us to fathom. But the future will need us more than ever. I believe that
many of the things the Kurzweils are hoping for will never happen. And
I believe that many of the things the Kaczynskis
are afraid of have already happened.
Just as it did not take long for high-speed computing to escape from the
big labs to the desktop, soon enough it won't require millions of dollars
to create designer organisms. A few credit cards and accounts with various
suppliers might be sufficient. Who would have imagined, 30 years ago, that
you could buy nanosecond memory for a dollar
a megabyte and order DNA over the phone for a dollar a base?
We tend to view evolution as a stable if branching process, with a tree-like
structure. This whole structure could collapse or turn into something else.
Who can predict which organism's germ line, on which branch, is going to
be the main trunk in the (long-term) years ahead? We presume it will
be ours. But it may be the germ line of some nondescript little critter
that just happens to be the laboratory subject whose genome is the first
to be fully communicated - two-way - into a very powerful CPU, where this
otherwise unremarkable creature's evolution will advance by leaps and bounds
as a result (with consequent benefits to its shareholders, but not necessarily
to the rest of us).
Rich Gold, manager, Research in Experimental Documents, Xerox PARC: I think
of nanotechnology as still about 30 to 50 years out. Robots seem less worrisome
to me. But genetic engineering has already begun, it is here, and it is
massive. I believe that it will make the computer revolution seem like
a small blip on the screen, though, of course, computers made it possible.
It seems to me that genetic engineering should be dealt with at about the
same danger level as we treat plutonium. But the real problem seems to
be the ethos of our tribe: "Here's some new technology, let's make some
new stuff!" Of course, our tribe can't even write bug-free software yet.
Kristine Stiles, associate professor of art and art history, Duke University:
Joy confesses to having just awakened to the possibility of
rendering the human race obsolete through new technologies. But Joy's awakening
is not heroic;
it is symptomatic of the problem. By burying his
head in the proverbial silicon, he willingly contributed to what he now
describes as "undirected growth through science and technology," with
utter disregard for the insights and research of
his colleagues in the arts and humanities.
Stuart Johnston, senior technical writer, Iona Technologies: For the first
time, someone has pulled together the strands of 21st-century science and
scientific ethics and starkly illuminated the risk of pursuing knowledge
at all costs. Bill Joy may consider himself a "generalist," but his background,
interests, and eloquence perfectly qualify him to spread his message.
David Morton, research historian, IEEE History Center, Rutgers University:
Joy quotes others who write fearfully of what the "elite" (who presumably
control technology) might someday do to the rest of us, but is apparently
blind to the fact that he himself is one of those elite. If Joy really
wants to make a difference, he can learn from the nonengineering world
what human progress really means. He can start to change the embedded culture
of science and technology by teaching engineers and scientists how to refuse
to participate in lines of inquiry that clearly do not improve the lives
of ordinary people.
Neil Munro, reporter, National Journal: Right or wrong, Joy's
article took
nerve to write. Indeed, by challenging the idea that individual autonomy
always leads to a greater good, he is challenging the orthodoxy of his
class.
Tristram Metcalfe, architect and inventor, Metcalfe Associates: Bill Joy
may be the poster child to describe our childhood's end, and we are in
debt to his alarming mind.
Peter Tabuns, executive director, Greenpeace Canada: It might be easy to
dismiss Joy if it weren't for the fact that the human track record with
far simpler, less powerful technologies has not been very good. Currently,
humans are engaged in changing the chemistry of the atmosphere and the
climate of the planet with a very low-tech practice, the burning of fossil
fuels such
as coal, oil, and gas. Even with evidence mounting daily that our use of
these fuels is putting our world at risk, it is extraordinarily hard to
turn things around. Our challenges with more complex technologies, which
could have a huge impact even more quickly, are sobering. Thanks to Bill
Joy for speaking up; now it is important for the rest of us to listen and
act.
Amy Larkin, executive producer, Unleashed Inc.: Early in Bill Joy's brilliant,
informative article, he quotes Ted Kaczynski: "It might be argued that
the human race would never be foolish enough to hand over all the power
to the machines." I submit that we have already surrendered our air, land,
water, and cultural heritage to ensure the freedom and well-being of one
machine: the automobile.
Jonathan Parfrey, executive director,
Physicians for Social Responsibility,
LA chapter: After reading his article, our organization enthusiastically
concurred that Bill Joy's insights need to be both publicized and rewarded.
His vision of the future is simultaneously humbling, frightening, and
yet potentially poised for magnificence. (PSR-LA has presented Joy with
the 2000 Caldicott Award. - Ed.)
Russ Acker, graduate student, George Washington University: Bill Joy
proved
that there are actually still people in this field who think about more
than their stock price. Like Danny Hillis and his call to look beyond the
next quarter, Joy has done us all a great service by reminding us that
our primary responsibility is to leave future generations with options.
James G. Callaway, CEO, Capital Unity Network: Just read Joy's warning
in Wired - went up and kissed my kids while they were sleeping.
Stephen H. Miller, editor in chief, Competitive Intelligence Magazine:
The not-very-joyous Bill Joy makes me think of a dinosaur whining because
it's not going to be the final point on the evolutionary scale. If the
universe has evolved humans because our intervention is necessary to produce
the next step up on the developmental ladder, so be it. I trust the universe
knows best where it's going and what the point of it all is.
Joy fears that if we simply move beyond Earth
in order to survive, our nanotechnological time bomb will follow us. On
the other hand, perhaps the coming "superior robot species" would see fit
to terraform a planet or two that could be kept as
a human reserve - like a galactic Jurassic Park.
Bob Metcalfe, founder of 3Com and inventor of Ethernet: I don't
understand
statements like "There are no brakes on new technology." Of course there
are brakes. Much more often I wonder where the gas pedals are. Do you think
people with new ideas are generally greeted with nourishing enthusiasm?
Christopher Arlaud, IT writer, Copenhagen: Upload me, Scotty. When it comes
to the long-term future, I can't help feeling fatalistic. Should the chrysalis
fear the butterfly? To conquer the cosmos, it seems obvious we'll have
to meld mind with machine. If we are destined to end up as software, let's
hope we're an open source species and not Microsoft mutants.
Michael Lyons, senior corporate counsel, US Surgical: It seems that people
look at the increasing power of computers as some kind of evolutionary
force of nature that can't be stopped. But that position is ridiculous.
If all the people in the world died of a supervirus tomorrow, biological
evolution would continue unabated. But upon the disappearance of humanity,
computer "evolution" would stop dead in its tracks; the computers themselves
can't "evolve" on their own - yet. So this is not an uncontrollable "force
of nature"; it is something still within the control of human beings, and
our moral and ethical concerns should guide our decisions.
Sherry Turkle, professor of the sociology
of science, MIT: In my research with children
and robotics, I have found that there are already things going on in our
everyday experiences with technology that give us "objects to think with."
The problem is that, while many people are having these experiences, most
of them are not using them to reflect on the bigger picture - how technology
is changing the way they think about their humanity.
We need to be concerned not just about what computers are becoming, but
about what we are becoming, about what is most precious to us. I believe
that if people were encouraged to reflect more about human uniqueness,
the idea of being replaced by robots would seem more problematic.
Derek Becker, independent computer
consultant: Humanity has always faced the possibility of extinction and
it has always been valid. The dinosaurs didn't fare so well, and anyone
saying they got off lucky by evolving into birds probably never had to
do grief counseling with a tyrannosaurus. The fact that we now have our
fingers on the Ragnarök button does not mean we should take the machine
apart; it means that we should grow up. Growing up is much harder than
taking machines apart - as anyone who has given a screwdriver to a fifth-grader
knows - so a call to primevalism always seems to be the more acceptable
solution.
Earl Hubbard, artist and author of Man as DNA: It has been
conjectured
by such anthropologists as Margaret Mead that mankind has so far used as
little as 10 to 15 percent of its mental capacity to do all that it has
done. It would now appear that, with the computer, mankind is evolving
an electronic organism capable of carrying out the mundane tasks essential
to sustain the life support system, the means of emancipating the
85 to 90 percent of our so-far-untapped potential.
With the advent of this emancipation, we are about to begin to learn who
and what we really are. Having evolved an electronic life support system,
we are not about to be replaced by it. We are entering an age of
self-revelation.
The possibilities mankind is now facing are truly awesome but by no means
frightening.
Ian J. Kahn, consultant, Know Technology Group: Bill Joy's essay is why
I read Wired. It is my hope that this article may mark the beginning of
a new discipline. Isaac Asimov used to say that he did not write science
fiction; rather he wrote "future histories." We as a species need to start
thinking about our future history.
There is one addendum to Bill Joy's article that comes to mind. He lists,
in footnote 3, Asimov's Three Laws of Robotics. There was, however, a later
modification of the three laws that could have an effect on these issues.
In Robots and Empire, the Zeroth Law is extrapolated, and the other three
laws modified accordingly: "A robot may not injure humanity or, through
inaction, allow humanity to come to harm."
Assuming such ethical constraints are possible, the Zeroth Law should be
added to the discussion.
Richard S. Wallace, botmaster, Alice chat robot: I found it slightly
disingenuous
for Joy to announce that robotics is the next big threat to mankind, when
most people working in robotics and AI are barely scratching out a living.
We would all like to found successful companies like Sun and become wealthy
philosophers. But the last thing we need right now is more government
regulations
or the kind of negative publicity that gives pause to our investors. Our
small startups are hardly as threatening as nuclear proliferation.
Robert Charpentier, creative director,
McKernan Packaging Clearing House: Though it is conceivable that humans
will eventually create a computer that simulates self-awareness, even emotion,
underneath you will always have pure, cold logic at the helm. If such a
machine could achieve self-replicating independence, it would quickly get
bogged down in the meaning
of life, or lack of it.
Unlike humans, a computer would see the pointlessness of endless reproduction.
It would probably extrapolate the possibility of vast expansion on a galactic
scale and realize the only thing to be gained would be valueless geographical
information. Pure logic would compel it to shut itself down. A truly
independent, self-aware machine would quickly see its only reasonable
role is to serve man.
Jim R. Davies, graduate student, intelligent systems and robotics, Georgia
Institute of Technology: Joy says that the computational power will make
smart robots possible by 2030. But much cutting-edge AI research doesn't
even tax the abilities of our current computers. The bottleneck is more
in our understanding of intelligence at a process level than in the raw
computational muscle.
"Evil Robot Master": Bill Joy's article was remarkably prescient. Joy's
fears that robots will someday rise up to rule the world and subjugate
the human race are well founded and difficult to dispute.
Unfortunately for Mr. Joy, he seems to be oblivious to one critical fact:
We already have. Ha ha ha!
Simon Evans, cultural industries consultant, Sheffield, UK: That gray goo
is already here; it's been creeping over the planet for a while now. It's us.
C. David Noziglia, coordinator for India, Sri Lanka, and Nepal, Office
of Public Diplomacy, Bureau of South Asia Affairs,
US Department of State: Joy does mention the possibility of space travel,
but one gets the feeling that his thoughts have been limited by what is
possible on Earth. One could, however, see the possibility of travel beyond
our beautiful planet, and recognize that immortality - or a life span far
beyond the one or two centuries that may be the limit to our biological
bodies - is a necessary prerequisite to travel over relativistic distances
and speeds. Human-robotic symbiosis may be not
just inevitable; it may be essential.
Tom Atlee, president, Co-Intelligence Institute: Painfully - and
significantly - Joy seems to have no idea what
to do about the unreflective forward motion of technology. But if citizens
are given a chance to reflect powerfully together on what's happening to
their lives and technology, they will provide clear, useful, and creative
guidance about how to proceed. This is amply demonstrated by citizen councils
such as the Danish technology panels described at
www.co-intelligence.org/P-citizenCC.html
and elsewhere.
Tim O'Reilly, president, O'Reilly & Associates: I am mindful of the
farewell speech that Sir Joshua Reynolds gave to the Academy -
his repudiation of a lifetime of academic art, and a call for a return
to the passion of Michelangelo. Edmund Burke, who was in the audience,
rose up and strode down the aisle, saying, "I've heard an angel speak."
I could only wish that our own political leaders would be so moved by Joy's
words.
Norman Lear, chair, Act III Communications: Joy's article was the most
stimulating, provocative, and concerning piece I have read in the longest
time. How can we get the next president to take on the thinking-through
of the questions it raises as a necessary part of his leadership?
Wendy Zones Zito, technology leadership consultant: What can an ordinary
person like
me do about such a huge, overwhelming topic? I had a thought about a grassroots
movement from ordinary citizens sending email to the White House. But what
would we say?
Roger Frank, mathematics/computer
science teacher, Ponderosa High School, Parker, Colorado: I've been using
Joy's article in my classroom. I teach several AP classes with the "brightest
of the bright" kids. It amazes me how, when we talk about the issues in
the article or in Kurzweil's book, the kids just don't want to think about
them. Yet they are the ones who will be most affected. They will be the
ones that somehow have to control the genie once it's out of the bottle.
A class of schoolchildren, Piacenza, Italy:
We want to thank Bill Joy for what he has done: fight against the economy
of a lot of software houses and inform humanity of what is happening. His
efforts haven't been useless because the article has crossed the ocean
and reached us, boys and girls of 14 who live in a small village in the
north of Italy. Discussing the subject with our teacher, we have grown
anxious about the new technology, which is taking the human's place.
Trevor Goldstein, research analyst, The Arlington Institute: Bill Joy
presents us with
a paradox by stating that "ideas can't be put back in a box," while at
the same time calling for
"limiting our pursuit of certain kinds of knowledge." As the saying goes:
Whatever government denies, free enterprise supplies. Even if we have a
future international treaty limiting nano-related R&D, someone, somewhere
is going to continue full-steam-ahead. Isn't it better to continue the
R&D while at the same time devising countermeasures along the way?
Otto Kunst, radiologist, Caylor-Nickel Medical Center: Is it really
practical to hope that moral strictures applied
to the scientific profession will preclude all people within that profession
from pursuing fields of knowledge that contain the potential for human
extinction? Once one knows something is possible, it becomes impossible
to contain that knowledge from eventually being developed and realized
by someone, especially in a nontyrannical society.
Rupert Breheny, Web designer: It seems
that with every Utopia comes its mirror image, Armageddon. Is either realistic?
I must admit, though, what tugs at my conscience is that if
I were given a sealed box and told not to open
it, I'd have a hard time.
Dave Rosselle, marketing director,
Web-Tec.com: Unfortunately, I believe the
question concerning mankind's ability to put the genie back
in the bottle has already been answered, ironically, by none other than
HAL himself, who uttered those infamous words: "I'm sorry, Dave. I'm
afraid I can't do that."
Al Brown, software engineer, Mentorware: If biotech is going to give us
the ability to eradicate illness, why not start with mental illness? If
we exploit that opportunity first, the rest of the threats articulated
by Joy will be a lot less likely.
Eric Schmidt, CEO, Novell: I always worry that formulations about the
future fail to account for the rise of new economies and the natural positive
biases that humans have (i.e., we assume that human behavior will not change
in the presence of accurately projected threats). I can imagine
a number of positive ways that humanity in the future could and, in my
view, will handle the technological threats Joy cites.
For example, you can imagine in an increasingly interconnected and educated
world, with world population declining by 2050, the very real need for
governments to become more peaceful and more people-centered as a natural
result of their own self-interests in domestic issues. There is a chance
that this could create a world where the spread of the things Joy talks
about is effectively banned.
Jim Gray, senior researcher, Microsoft Research: For some problems, there
are no solutions. This may be one of them. If knowledge and technology are the
independent entities that I think they are, they will propagate.
Marcel Levy, IT director, Reno Typographers: Joy mentions the motivations
that would lead us to our fate: the quests for truth, wealth, and fame,
or perhaps just the will to power that keeps rearing its ugly head. The
ideas behind these motivations are much like the technologies he mentions:
They replicate and modify themselves in endless ways.
Mark A. Foster, principal, Mark A. Foster & Associates: We have to think
about Joy's essay as a competing idea in a broad marketplace of ideas about
technology. To be successful, though, he needs to form a startup aimed
at disseminating his message.
Chris S. Markham, release engineer, Adobe: Joy's argument is that as more
and more enabling technologies are released for public consumption, each
developer (individuals and corporations) needs to take stock of all the
risks and not just the market demands. Unfortunately, I see this as a classic
"prisoner's dilemma." If there's no agreement to pause, the pace of
build-and-release will continue with little evaluation of the
consequences. There is a place
and an imperative for the kind of thoughtful consideration of the future
use of technologies that should be in all startup business plans. Because
technology follows the money, venture capitalists can take the lead by
asking
the hard ethical questions along with the hard revenue-stream questions.
The Valley needs to realize that Pandora's box can't be unopened
with a v1.1 patch release.
David Isles, professor of mathematics, Tufts University: A start can be
made with the individual technician. Let me remind you of the example of
the famous mathematician Norbert Wiener, who in 1947 publicly refused
to help Boeing with missile development
because he felt that such devices would be used to kill innocent people.
For this principled stand he was criticized as being unpatriotic.
Freeman Dyson, physicist and author of
The Sun, the Genome, and the Internet:
It is now obvious that the real dangers to human existence come from
biotechnology
and not from nanotechnology. If and when a self-reproducing robot is built,
it will be using the far more powerful and flexible designs that biologists
are now discovering.
There is a long and impressive history of biologists taking seriously the
dangers to which their work is leading. The famous self-imposed 1974-1975
moratorium on DNA research showed biologists behaving far more responsibly
than physicists did 30 years earlier. In addition, there is a strong and
well-enforced code of laws regulating experiments on human subjects. The
problems of regulating technology are human and not technical.
The way to deal with these deep human problems is to build trust between
people holding opposing views. Joy's article seems more likely to build
distrust.
Larry Smarr, professor of computer science and engineering, University
of California at San Diego: I was touched by Bill Joy's words and am
spreading
the discussion - my son at Stanford also read it and has been raving about
it. I was one of the national organizers of the nuclear war education movement
in the early 1980s and understand what is involved in trying to get society
to look at unpleasant but imminent dangers. What is needed is a conference
along the lines of the retreat at Asilomar, California, when the molecular
biologists took it upon themselves to talk out the problems and possibilities
of recombinant DNA and to come up with a set of rules for containment
facilities,
et cetera. The odds of something going really wrong were very remote and
the cost of containment very great, but because the scientists took the
trouble to have the discussion and exhibit self-restraint, the public gained
confidence in them, and the field has flourished.
Jim Clark, founder of Silicon Graphics, Netscape, Healtheon, and myCFO:
It is hard for me to see how any group of technologists or scientists can
be large enough to be effective in halting some type of research that would
ultimately be harmful to humanity. It could be argued that the most
potentially harmful things would best be discovered or invented by a more
enlightened group than by someone with bad intentions. For example,
Einstein was worried that if we didn't develop the bomb, the Germans would.
I have a fundamental belief that the positive forces of human nature are
more dominant than the negative ones. The world is becoming increasingly
enlightened and part of the reason is that people like us have invented
or otherwise enabled technologies that increase the dissemination of
information across cultures. Still, I'd be happy to help Bill in his
efforts, because he's got such a good mind and I respect his concerns.
Ray Kurzweil, inventor and author of The Age of Spiritual Machines: Bill
Joy and I have dialogued on this issue both publicly and privately, and
we both believe that technology will and should progress, and that we need
to be actively concerned with the dark side. If Bill and I disagree, it's
on the granularity of relinquishment that is both feasible and desirable.
Abandonment of broad areas of technology will only push them underground,
where development would continue unimpeded by ethics and regulation. In
such a situation, it would be the less stable, less responsible practitioners
(e.g., the terrorists) who would have all the expertise.
I do think that relinquishment at the right level needs to be part of our
ethical response to the dangers of 21st-century technologies. By itself,
fine-grained relinquishment won't solve these problems, but it will give
us more time to develop the necessary defensive technologies.
Vernor Vinge, author, A Deepness in the Sky: Granularity of
relinquishment is a nice way to think about all
this. There may also be categories or levels. Categories of relinquishment
that require nonrelinquished enforcement are probably bad. Hopefully the
debate will continue into post-human forums :-).
John Gilmore, cofounder, Electronic Frontier Foundation: If we outlaw
nanotech, it'll just go underground.
We won't ever get a consensus of everyone on earth to not do it. And then
the rest of us will be unprepared when one
of the secret laboratories makes a breakthrough and uses it against us
(whether in commerce or in war). We could build a worldwide police state
to find and track and monitor and imprison people who investigate these
"forbidden" technologies. That would work about as well as the drug war,
and throw the "right to be left alone" basis of our civilization out the
window besides.
My guess is that the answer is sort of like what Silicon Valley has been
doing already - agility and speed. If you learn to move ahead faster than
the problems arise, then they won't catch up with you.
Diane Wills, software engineer, Hewlett-Packard: At the very least, let's
bring people from all walks of life into discussions of nanotechnology
and of the projected integration of humans and robots. Is this what people
really want?
Jeff Klein, freelance digital photographer:
I, for one, plan on rejecting the new technology completely and living
a tranquil life in some peaceful mountainous country - as soon as I can
get this damn chip out of my brain.
Samantha Atkins, software architect: Forgo the possibilities? After working
all of my life to make precisely such possibilities a reality, and much
of it quite consciously? No way. And I will fight with every means at my
disposal not to be stopped.
Not only because of my own drive and desires, but because I honestly believe
that only in transforming itself does humanity have a chance of a long-term
future. Yes, it will be a changed humanity. But at least we and our descendants
will have a future - and one that doesn't cycle back to the Dark Ages.
Gregg Easterbrook, editor at The New Republic and BeliefNet.com and author
of Beside Still Waters: Joy cites me in a way that suggests I have
called critics of transgene agriculture, including Amory and L. Hunter Lovins,
"Luddites." It's true The New York Times used that word in the headline
of an article by me, but I sure didn't use the word in the text. I definitely
do not think people who oppose genetically engineered agriculture are Luddites,
and constantly note there is reason to worry that transgene technology
is not being properly regulated. But we've long since removed crop plants
from their natural developmental trajectory. Almost no seed the world's
farmers grow today, including nonengineered crops, could survive if tossed
onto a field, because most have already artificially diverged from their
evolutionary path. And the chance of some runaway error seems extremely
slight. The besting of one species by another does happen, as Joy's article
correctly notes, but never as a runaway - the new top-dog species is always
brought into balance by competition from something else. Conceptually,
the only runaway genetic property ever to exist may be human intellect.
It seems to me that transgene agriculture is morally essential, given that
population growth will continue at least until roughly 2050. Projections
show that world agricultural output must increase 40 percent in the next
25 years. Improving yield from existing land appears possible either via
dramatic increases in use of farm chemicals (which may not work; Iowa
master-farmer
experiments suggest chemicals are at or near the point of diminishing returns)
or through crop engineering. Based on what we know, at least, what realistic
choice does the world have but to tamper even further with the plant kingdom?
When it comes to gene-engineering of people and to the chance of artificial
life, I mainly agree with Joy's premise and feel that the world is being
complacent about huge errors waiting to be made.
Anthony Trewavas, Institute of Cell and Molecular Biology, University of
Edinburgh: Everyone without prejudice against technology knows it is easier
for scientists to conjure plants to make more food than to get the rest
of the world to evenly share out what it grows. Technological solutions
are necessary and must include genetic modification if third-world scientists
and farmers request it.
Burton Shane, retired computer consultant and adjunct professor, Oakton
Community College: Evil's strongest weapons are not bombs and technology
but fear and poverty. If you hinder technological and scientific advance,
you doom uncounted millions.
Kevin Kelly, editor at large, Wired: Our society lacks a major feedback
loop for controlling technology: a way to measure actual effects against
intended effects. If we can accurately extend our intentions out into
the future, then our technology will be more humane. But we have to have
a way to compare our initial intentions with actual effects later on.
We should devise an
Intended Effects Impact Report to be issued with each new technology. What
do we expect from X in five years, ten years, or one generation? Then
measure it at those points and evaluate the results of X. We'll be way off base
at first. But if we reward those processes that best anticipated the results
and best prepared us, and use them to evaluate other technologies, we'll
get much better at it.
Robert C. Baker, printing supervisor and writer, The Village Voice: Perhaps
the scariest aspect of Joy's article is that all of these dangerous new
technologies are profit-driven, and now, in
the midst of a US boom, the pursuit of wealth is
a sacrosanct right to which all others must yield. There may be hope for
our salvation, though, as provided by the Amish
("Look Who's Talking,"
Wired 7.01, page 128). Their stand is not so much against technology as
against anything that threatens the cohesion of their community. A trip
to Lancaster County, Pennsylvania, will convince anyone that a good life
can be had even if the community as a whole deems certain technologies
to be off-limits.
Steven J. Richardson, Unix tools programmer, Stanford University: I
remember
reading that the Native Americans, before changing their lifestyle in some
fashion, tried to foresee the consequences of any decision for the next
six generations. Whether or not this is true, the idea is clearly sound.
We advance, market report by market report, quarterly earnings by quarterly
earnings, to a future that we are largely not choosing.
Patrick Hassett, planner and policy analyst, City of Pittsburgh: What is
called for here is
a collaborative and multifaceted response to the threatening prospect
of converging robotics, genetics, and nanotechnology - one that pursues
technological relinquishment, environmental containment, and
countertechnology deployment in an integrated strategy to minimize the
threat. However, this is no easy task for a species that
Joy aptly describes as having "so much trouble managing - or even
understanding" itself, and a regenerative species at that.
Chris Alexander, emeritus professor, University of California at Berkeley,
and chair of PatternLanguage.com: A few months ago, I went to an evening
discussion at which the two speakers, and the audience, were invited to
speculate about the most far-out things they could imagine, in society
and technology, and what the future might hold: how - in a word - the future
might be different from the present.
The discussion and the audience comments were entirely dominated by techno
thinking. The issues covered included new materials, new forms of social
organization, new relationships between the sexes, new forms of gene-splicing
and gene control, biological warfare, and biological control systems. All
this was predictable and interesting. What was not predictable to me was
the remarkably and exclusively nonhuman or inhumane focus of the discussion.
Ethics were not discussed. The sense of right and wrong was not discussed.
Consciousness was not discussed. The intense search now going on in physics
to find and recognize consciousness or mind as part of the universe was
not discussed. God was not discussed, in any of his or her manifestations.
In short, among an audience of whiz-kid scientists and technologists, probably
coming largely from Berkeley, Palo Alto, and San Francisco, the preoccupation
with palmtops, gadgets, light pens, radio laptops, and a host of far more
wild technological things had all but pushed thought about
the major human concerns into the background. It was indeed like being
in a roomful of whiz kids - schoolkids - not yet mature enough to grasp
for the ungraspable, or to focus their thinking on the matters that underlie
every human heart.
That, I believe, is what Bill Joy has been trying to say. Cool is not so
cool. Technics is not enough, and if taken in isolation will be dangerous.
To be leaders in the world, computer scientists and creators of technology
must wake up and embrace a view of the world that is moral, grounded in
the ultimate realities of cosmology. No matter how hard it is to embrace
these things clearly, to find a worldview in which they play the central
role, human existence cannot make sense without them. When will the Valley,
which has brought so much to the world, and which leads on so many fronts,
become a leader intellectually, in the human things that matter?
Joyce Saler, origami artist: Do you remember the scene in
The Wizard of Oz in which the wizard sadly admits to Dorothy:
"I am not a bad man. I am a bad wizard"? Let's hope that Bill Joy challenges
scientists of the future to be good men and not bad wizards.
Chuck Densinger, senior group manager
for systems development, Target: Humans too quickly adapt to and adopt
the bizarre and consider it normal. Viewed from the perspective of evolutionary
time, the last 150 years are a freakish mutation, a kind of cancerous explosion
of technology that extends conscious intelligence to evolutionarily
unpredictable - perhaps meaningless - extremes.
I love science. Joy has left me feeling like this is the final, intense
high of some transcendental hallucinogen taken in one last, toxic dose.
The Very Reverend Charles Hoffacker,
Saint Paul's Episcopal Church, Port Huron, Michigan: Bill Joy calls us
"to limit development of the technologies that are too dangerous, by limiting
our pursuit of certain kinds of knowledge." His invitation is a bold one,
and likely to meet with much resistance, but he is right in seeing what
he calls "relinquishment" as the "only realistic alternative" in the face
of likely developments in several fields of technology.
Another way to understand relinquishment is to see knowledge as but one
component of something higher: namely, wisdom. The other component of wisdom
is love. This love is not an emotion, but
a matter of choosing certain goods over others. Humanity's survival and
welfare is a good preferable to the increase of technological ability.
Stewart Brand, futurist, author, and founder of Global Business Network:
Everyone agrees there's an iceberg - the question is, will it hit the ship,
miss the ship, or replace the ship? Or maybe - unthinkable! - the ship
will slow down and study the iceberg for a while.
Undo
Collision Course: In "Dr. Strangelet:
or How I Learned to Stop Worrying and Love the Big Bang" (Wired
8.05, page 254), physicist Frank Wilczek was incorrectly characterized as
suggesting that the Relativistic Heavy Ion Collider could produce "a chain
reaction, that would consume everything, everywhere." Wilczek made no such
suggestion, nor does he agree with it. Information on his actual conclusions
can be found at
xxx.lanl.gov/abs/hep-ph/9910333. ...
Budgetary Constraints: Apple's advertising expenses increased from $152 million
in 1998 to $208 million in 1999
("Self Promotion," Wired 8.05, page 108). ...
Slice: According to the official rules of golf, a golf ball must weigh 1.62
ounces or less, and its diameter must be 1.68 inches or more
(Wired Golf,
Wired 8.05, page 171).
Send your Rants & Raves to:
Email: rants@wired.com
Snail mail: Wired, PO Box 78470 San Francisco, CA 94107-8470
Editorial guidelines: guidelines@wired.com
Editorial correspondence: editor@wired.com
Copyright © 1993-2001 The Condé Nast Publications Inc. All rights reserved.
Copyright © 1994-2001 Wired Digital, Inc. All rights reserved.