3 Reasons Why We Should Skip Human Level A.I.
Most researchers you speak to these days predict that, after the current boom of neural networks in machine learning, we will reach A.G.I. (artificial general intelligence), then soon after A.H.I. (artificial human intelligence), and finally A.S.I. (artificial superintelligence).
While this seems like the most logical path, does that mean we should follow it rigidly?
The artificial human intelligence step in particular has a lot of downsides, both in its implementation details (which can be overcome) and in the implications it carries for the step that follows: artificial superintelligence.
1. Our Brains Are Not Computers
Recently I came across a truly awesome article called The Empty Brain, in which Robert Epstein explains that our brains are not like computers at all, even though that is mostly the model we adhere to these days.
His most convincing arguments stem from his brief rundown of how humans have perceived their own brains throughout history.
Epstein explains the metaphors that have been defined by artificial intelligence expert George Zarkadakis in his book In Our Own Image (2015).
Intelligence was first explained using the spirit metaphor, with its biblical origins, before being replaced by the hydraulic metaphor, which found its genesis in the 3rd century BCE.
This model attributed intelligent thinking to fluids being moved around the human body and brain.
Around the 1500s, with the invention of automata, an intelligent-machine model started taking over, which then led us neatly to where we are now: the brain as a super complex computer, a model that fits very nicely in the information age.
Yet, while a computer distinctly processes information with a processor, working memory, and long-term storage, the brain operates in a completely different way.
For instance, the brain does not have any long-term storage; the only thing it can do is re-experience a set of stimuli and respond with trained reactions.
2. Human Intelligence Is Not "Safe"
When you think about human-level intelligence, a few things stand out very clearly.
First of all, how amazing our mental abilities are, especially compared to those of other organisms on this planet.
Second, how little control we have over our mental state, as evidenced by phenomena like teenage angst, mental disorders, and many other states that are a detriment to ourselves, our environment, and the people around us.
If we were to perfectly implement human intelligence in a machine, what would prevent it from developing the exact same mental patterns as any of us do, and what would prevent it from falling into darker thoughts?
This by itself may not be very dangerous. But the common thinking is that, after reaching human-level intelligence, the same machine would develop artificial superintelligence in a matter of hours (a timeline many researchers cite), and going through the angsty human-level step may not produce the most benevolent superintelligence.
3. Human Intelligence Is Not Interesting
Of course our own intelligence is very interesting to us, but the future use cases for artificial intelligence are usually aimed much more at superintelligent machines.
While we might think we need to follow the logical path to get there, I actually do not agree, for reasons I will explain in a moment.
Human-level intelligent machines are basically only suited to do things humans can already do, at the same error rate humans perform at; that level of intelligence is mainly good for creating robots that act exactly like humans.
On the other hand, a superintelligent system should be able to emulate human-level intelligence, which takes away almost all of that usefulness.
Of course, there are a couple of arguments that can be made here, but I think most people will agree that superintelligence is not only much more interesting to think about, it is also a lot more useful to apply to problems we face in our world today.
I see no reason why we absolutely have to reach human-level intelligence first, especially since mapping the technology of a machine onto the biology of a brain is already very difficult to begin with. And even if we were to reach this goal, the consequences might not be what we expected in the long run as the intelligence evolves.
In fact, maybe it is much easier (as far as the word easy means anything in this field) to create a superintelligence from the start.
A superintelligence is not bound by the biological nature of human brains and maps a lot better onto what machines are good at.
By skipping the mental disorders that could arise at the human-level intelligence stage, we have a much better chance that a superintelligence will see us as an integral part of the world and its ecosystem. And even if it does see our disregard for the natural environment, it might choose to try to educate us instead of just wiping us out.