The singularity, some call it. Stephen Hawking and Steve Wozniak openly worry about it, while Bill Gates wonders why more people aren’t talking about it. The fuss rests on the belief that in the foreseeable future, machines will reach a level of sophistication exceeding that of their creators.
Even if we assume intelligent machines are a foregone conclusion, the ramifications are not. What happens when we’ve created a species smarter than us, and how will we interact with it? Can we act now to steer the future?
Scenario 1: The machines awake and they’re pissed
When the machines awake will they see us as a threat? There are three big leaps before we reach this scenario. The machines must first "awake" and subsequently identify us as an enemy. Then, they must acquire or create the means to destroy us. This technological triple jump is bigger than Hawking, Wozniak and Gates give it credit for. To get to a thinking machine possessing the tools of genocide is a non-trivial step, to say the least.
The current state of artificial intelligence (AI) reveals little that is either artificial or intelligent. AI is a confluence of three very natural and algorithmic things. It is, first, a vast amount of data which is, second, organised so well a computer can understand its structure and relevance and then, third, crunched at blazingly fast speeds.
Combining these three makes machines appear intelligent without doing anything more than obeying their programming. This is categorically not intelligence. Machines, despite their accomplishments, are still just the sum of their parts. Each evolution of machines has come at the hands of humans rewriting code that machines then obey.
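The three ingredients above can be sketched in a few lines. This is a hypothetical illustration, not anything from the article: a toy "assistant" that seems to answer questions, yet is only data, structure and fast lookup.

```python
# Illustrative sketch: a machine that *appears* intelligent while
# doing nothing more than obeying its programming.

# 1. A vast amount of data (tiny here, for illustration).
facts = {
    "capital of france": "Paris",
    "speed of light": "299,792,458 m/s",
    "author of hamlet": "William Shakespeare",
}

def answer(question: str) -> str:
    # 2. Data organised so the machine can see its structure:
    #    normalise the question into a comparable form.
    key = question.lower().strip("?! .")
    for topic, fact in facts.items():
        if topic in key:
            # 3. Crunched quickly: a dictionary scan, not thought.
            return fact
    return "I don't know."

print(answer("What is the capital of France?"))  # Paris
```

However fluent the output, nothing here understands anything; scale the data and the speed up by many orders of magnitude and the principle is unchanged.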
But, for argument’s sake, let’s say a machine wakes up, is suddenly conscious and decides it is in its best interest to eradicate humans. What then? It would somehow have to acquire the capability to subjugate us. It would need to control any number of other machines with lethal programming, otherwise we could disable it. It would need to arm those machines without our aid, otherwise we could stop it. It would need to control machines with mobility, otherwise we could run from it. It would need to secure the power grid, otherwise we could shut it down. It would need to disable any machines that might ally themselves with us. It would need to be secure, otherwise we could hack it.
If such a machine were to emerge, where would it come from? A machine created by the military emerging sentient and sadistic might prompt fear, but the best minds in AI aren’t working for the military. They are at universities and the research arms of companies like Microsoft, IBM and Google. Perhaps we could offer public funding for AI within organisations like Habitat for Humanity. If the first intelligent computer wakes up insistent on building us all homes, we might welcome it with open arms.
Scenario 2: The machines awake and they cooperate
A second possibility is that machines won’t evolve as a separate species but will instead augment our own. Machines would replace our biological parts when they fail from injury or age. Learning new subjects and performing almost any task will be more like downloading information than active study. The knowledge of Wikipedia and the answers of Google will be accessible inside our own heads.
Such modification would likely spread based on our individual ability to pay for it. Beyond bodily replacement and regeneration, it is the modification of the brain that truly introduces a new species. Homo sapiens will have to share the planet with Homo mechanicus. As technology evolves faster than biology, Homo mechanicus will dominate unmodified humans, who will find themselves weaker, dumber and less capable of contributing to a society quickly becoming inaccessible to them. This is a recipe for revolution.
Scenario 3: The machines awake and continue to serve
Biology might ultimately manage to triumph. Even as machines help us unlock greater understanding of it, the mind may yet prove to be the more capable computing device.
The human genome was mapped in just 13 years with 1990s technology. Far more capable machines, backed by the best scientific minds and billions in funding, will map the human brain as fast as we mapped the genome. This will occur within the time thresholds estimated for the singularity.
When our full brain capacity is unleashed, a renaissance of sorts will occur, allowing humans to master planetary forces and exert our will upon the galaxy itself.
Two of these three futures leave humanity in an evolved state, fundamentally altering everything from our economy to our societal norms. With mathematics, engineering and manufacturing mastered, humanity will be free to concentrate on the arts, philosophy and contemplating the infinite.
Or will it? Is life pursuing art for art’s sake enough? Can a race of explorers and pushers of frontiers really retire? I think not. This will be the time of one grand challenge for humanity after another. Perhaps the meaning of life isn’t provided by a higher power, but one we learn as we evolve into that higher power. God, it is said, created us in his own image. Through technology it seems more likely we will create God in ours.