Y3K: The Science of the Next Millennium
Artificial Intelligence
by Robert J. Sawyer
Copyright © 2000 by Robert J. Sawyer
All Rights Reserved
Within a century, it will be possible to scan a human mind
and reproduce it inside a machine. Regardless of whether our
minds are just very sophisticated analog computers, or whether
they have a quantum-mechanical element (as Roger Penrose
proposes), we will nonetheless be able to duplicate them
artificially.
Already, at the close of the second millennium, a
transhumanist movement has begun; Christopher Dewdney is the
principal Canadian spokesperson for it. This movement holds that
uploading our consciousness into machines is desirable, since
that will free us from biological aging and death. On the other
hand (a decidedly biological metaphor), there is more to being
human than just the networks of synapses in our brains; clearly,
much of what we are is tied in intimately with our bodies. We
may find that uploaded humans are not happy; indeed, they may be
incapable of happiness or any emotion.
Still, by the year 3000, there will doubtless be millions of
uploaded people, including perhaps versions of some who are alive
today. Indeed, religions might evolve around worshiping
thousand-year-old computer-based avatars; with the acquired
wisdom of a Methuselah, these entities might provide profound
insights.
Just as laws today are moving toward recognizing a woman's
right to control her body and any separate sentience that may be
contained within it, so too will the laws of the future recognize
the right of humans to upload their consciousness and then
dispose of the original biological versions of themselves; such
eliminations will not be seen as suicides or murders, but rather
as a natural, perfectly legal step: discarding a no-longer-needed
biological container while preserving the uniqueness of the
individual.
But there will also be other thinking machines, with a
separate genesis: we will doubtless develop artificial
intelligence within a century. A key question humanity will have
to consider as it does so is what constraints, if any, we will
build into AI. It may, in fact, be dangerous to build conscious
machines that are more intelligent than we are; just as
intelligence may be an emergent property of sufficiently complex
systems, so too may ambition and desire be emergent properties of
sufficiently intelligent systems. One possible scenario is that
by the dawn of the fourth millennium, there will be no biological
humans (or even any uploaded echoes of them) left; Homo sapiens
may have been entirely supplanted by its AI creations.
A more appealing (at least to us) scenario would see
humankind carefully crafting AIs (including many embodied as
robots) who will take care of all the necessary work of food
production, manufacturing, recycling, and so on, leaving us to
pursue other things. Although we used to consider the mastery of
chess to be the pinnacle of human intellectual achievement, we've
had to concede that it is simply a mathematical problem, and even
today's primitive computers can do it better than the most
skilled human. But there are other realms, including art,
philosophy, and scientific theorizing, that, because of their
intuitive, nonlinear nature, we may always be better at than any
machine. Our AI servants may free humanity at the dawn of the
fourth millennium to concentrate on these areas.
More Good Reading
Rob's essay on life in the future: "The Age of Miracle and Wonder"
Rob's thoughts on Asimov's Laws of Robotics
A dialog on Ray Kurzweil's The Age of Spiritual Machines