
Several years ago, the physicist Stephen Hawking warned that the full development of Artificial Intelligence could spell the end of the human race by out-thinking our species and perhaps even out-competing it—through programs that not only self-replicate but generate novelty and select for advantage.
Without question, technological innovation will dramatically transform major aspects of our lives, but in what direction? All inventions have costs, benefits, and unintended consequences. In order to use technology wisely we must accept the reality of limits, recognize that every question is ultimately a moral question—a question of value and not just technological efficiency—and learn to combine all the different facets of our intelligence in new ways.
To do this we’ll need much more cultural maturity, a fundamental ‘growing up’ as a species. A distinctive feature of human beings is their capacity as tool-makers, but too often we treat our tools—especially our new digital tools—as truths, if not gods. This can’t work going forward. The future will require a greater ability to step back and consider the consequences of what we humans create—if for no other reason than we are now capable of so much that would cause great harm as well as great good.
Take, for example, our growing addiction to electronic devices, whose dangers easily sneak up on us. These devices do many things that we find useful, and they are fun. But their dangers far outweigh those of other addictions, such as those to food or drugs.
The dynamics of addiction are most obvious with video games, where shootings and explosions create readily repeatable jolts of excitement. Addiction works by promoting artificial substitutes for real fulfillment, as when real relationships are replaced by the stimulation we get from our electronic devices, a phenomenon we see with growing frequency in relation to social media.
It’s even easier to use Virtual Reality to confuse or deceive. ‘Fake news’ lies and distorts; ‘fake realities’ have even more potential to be used for demagoguery and manipulation. Artificial stimulation in the name of meaning—as in Virtual Reality or video games—readily translates into ever-more sophisticated digital ‘designer drugs’ which are immensely profitable.
Such dynamics are also present in the ways we relate to our cell phones. That’s partly because cell phones have become such a central part of almost everyone’s life, and partly because of the immense commercial rewards that cell phone companies reap by controlling our attention.
It’s important to recognize that what we see is not simply a product of the usefulness of these devices. There are specific neurochemical reasons why people feel they have to check their cell phones every few minutes. A dirty little secret of the tech world is that programmers consciously design their software to be addictive. They build in rewards that make visiting a favorite site work just like playing a slot machine, and they want us to feel anxious whenever we are away from our devices for any length of time. The fact that most of the content on our cell phones is advertising-driven means that such addictive methodologies will only become more sophisticated in the future.
These concerns are amplified by what I call a ‘crisis of purpose.’ As traditional cultural beliefs stop providing essential guidance, we can be left feeling adrift and alone, and this can make us particularly vulnerable to addiction. But we pay a high price (both personally and as a species) when we confuse addictive pseudo-significance with real meaning, because this diminishes who we are and undermines future possibilities. The antidote lies in asking what matters most to us with new depth and courage. Being distracted and addicted undermines our capacity to take on this essential task.
The internet promised a new democratization of information and has often provided just that. But if we do not pay attention, rather than freeing us and making communication more democratic, the information revolution could end up undermining the democratic experiment—and even put the larger human experiment at risk. In his dystopian novel 1984, George Orwell warned of Big Brother taking control of our minds. The real danger in the future is not government manipulation, but artificial stimulation masquerading as substance, so that information is used in ways that ultimately disconnect us from matters of real importance.
In fact, the term “Artificial Intelligence” is a misnomer, since it is so much more limited than the human variety. Some computers may already have passed the famous Turing Test, which holds that artificial intelligence has been achieved if a computer can answer your questions without your being able to tell that it’s a computer. But the Turing Test is bad science. Think about it. Imagine a bright red toy sports car made out of candy that someone pulls along with an invisible string. From a distance, you can’t tell that it isn’t real. Such a toy might be fun and useful for many things, even amazing things. But that doesn’t make it a car, just as a computer isn’t human because it can mimic the way we process information.
Managing Artificial Intelligence wisely depends on drawing on precisely what makes living intelligence, and in particular human intelligence, so different. Human intelligence—when all its complexity is included—is not just creative in ways we are only just beginning to grasp, but also inherently moral. Different moral codes follow from how human intelligence works, including our capacity for rational processing. But this dynamic breaks down if intelligence becomes ever-more machine-like and hence vulnerable to exploitation.
The only way to keep Artificial Intelligence from becoming our undoing is to manage it with greater cultural maturity, so that we become better able both to draw consciously on the whole of our cognitive complexity and to step back and appreciate our tools as simply tools. That will also help us discover new skills and capacities for using our tools in the most life-enhancing ways.
By contrast, technological enthusiasm today is sometimes taken to the point that it becomes literally religious. I’m thinking in particular of the techno-utopian assertions of people like futurist Ray Kurzweil, who proposes that we are rapidly approaching a point in history—what he calls the “singularity”—when artificial intelligence will surpass the human variety. He argues that a whole new form of existence will result, one that will transcend not just our biology but also our mortality. Kurzweil describes digitally downloading our neurological contents and thereby attaining eternal life—which he hopes to be able to do in his own lifetime.
There’s no doubt that the technologies of the future will affect how we think about ourselves in important ways. But it is important to appreciate that—while modern-day techno-utopian thinking is put forward as radical in its newness—it is not new in any fundamental sense. Rather, it reflects an ultimate expression of the Modern Age’s heroic, onward-and-upward story.
We can also tie techno-utopian thinking to even older impulses: the desire, for example, to eliminate the body in favor of the mind, the unconscious in favor of an all-knowing consciousness (even if devoid of real human knowing), and the reality of death in favor of a now-triumphant digital immortality. But far from being new to our times, these efforts to eliminate the body, the unconscious, and death have been common to utopian beliefs for thousands of years.
Our future depends not on succumbing to such techno-utopian fantasies, but on appreciating both the possibilities and the limitations that come with invention, and on taking responsibility for thinking about and acting on these costs and benefits in more mature ways. That will be the key to making good choices about issues such as climate change, avoiding nuclear catastrophe, guaranteeing clean air and water and adequate food for the world’s people, slowing the ever-increasing rate of species extinction, and addressing inequality.
None of these questions has a convenient technological fix. The fact of new invention is exciting, and the inventions yet to come are important for us to contemplate. But much more important is the need for new ways of thinking about ourselves and for finding the right relationship to the technologies we create.
With these things in place, our relationship to invention changes fundamentally, as we begin to see more clearly that electronic devices must serve what human beings are at their best, and what machines are not: moral, creative and loving, capable of being not just intelligent but also wise. There lies the critical fork in the road. Our tools can free us or replace us, depending on how we understand them—and how we understand ourselves.