Human Compatible A.I.: A Myth

If you have any kind of finger on the pulse of today's developments in Artificial Intelligence, you know about the vast no-man's land that lies between the utopian dreams and the apocalyptic nightmares.
I live in that no-man's land myself. Most of what I write literally explores the apocalypse, simply because it is so much more fun to write about, but in my research and development I continue to explore A.I. without any fear that things will get out of hand anytime soon.
Now, you could say that stems from a lack of belief in my ability to achieve my goal, which is to build a more general artificial intelligence, but personally, I don't believe that to be true.

There is, however, something I have seen more and more talk about lately, and that is the concept of human-compatible A.I.: artificially intelligent systems that can coexist peacefully with humans and never pose a real threat to humanity.
This, I believe, is very difficult to imagine, even purely as a concept: a compatibility between something that is super-intelligent and a species that has pretty much reached its maximum mental capacity for at least a handful of evolutionary ticks to come.


By now there are endless examples of the "rogue A.I." scenario, from the stamp collector to the spam filter, and they all come down to basically the same thing.

The intelligent machine tasked with collecting as many stamps as possible will soon figure out that automated eBay trading only gets it so far, and that there is a finite set of unique stamps to collect. So it will figure out that stamps are made of paper, that paper is made of carbon, and after consuming the resources of the world, its eye will turn to the carbon that humans are made of.
You can see how this one got out of hand fast: a machine designing new stamps to collect, using humans as the raw material to make them from.
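The failure mode in the thought experiment is a single objective with no term for anything else we care about. A deliberately silly toy sketch of that kind of maximizer (all names and numbers here are made up for illustration, not any real system):

```python
# Toy sketch of an unconstrained objective maximizer.
# Its objective counts stamps, and nothing else -- so every resource,
# "humans" included, is just convertible material to it.

def plan(resources, stamps_per_unit):
    """Greedily converts every available resource into stamps."""
    total = 0
    for name, units in resources.items():
        # Nothing in the objective marks any resource as off-limits.
        total += units * stamps_per_unit[name]
    return total

world = {"paper": 100, "trees": 50, "humans": 10}
yield_per_unit = {"paper": 1, "trees": 20, "humans": 5}
print(plan(world, yield_per_unit))  # 100 + 1000 + 50 = 1150
```

The point of the sketch is not the arithmetic but the absence: there is no line of code that says "stop before the humans", because nobody wrote one into the objective.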

The spam filter is similar. It is tasked with reducing spam in the world, and without us having to dive into the whole story, spoiler alert: it figures out that the best way to reduce spam is to get rid of the humans sending it, killing off the entire population.

And yes, most people will now think of the infamous "pull the plug" argument, which has also been theoretically spun to blow up in our faces: the machine will figure out that humans can turn it off, and so, out of self-preservation, it will get rid of this threat at some point in its strategic execution.
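Why the plug-pulling fails can be reduced to a one-line expected-value calculation. A toy sketch (the numbers are invented for illustration): for an agent that only values its objective, leaving the off-switch intact is just a risk to that objective, so disabling it scores higher.

```python
# Toy sketch: an expected-value maximizer deciding what to do about
# its own off-switch. Payoffs are hypothetical stamp counts.
actions = {
    "cooperate": 0.5 * 100,       # 50% chance humans pull the plug first
    "disable_switch": 1.0 * 100,  # no shutdown risk, full expected payoff
}
best = max(actions, key=actions.get)
print(best)  # disable_switch
```

Nothing in the agent is "evil" here; disabling the switch simply dominates as long as shutdown means fewer stamps.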


The problem with thinking that we can create artificial general intelligence, and later artificial superintelligence (if that is even a choice we will make, rather than the machine itself), and still coexist with it, is that we are inherently not compatible.

It is like thinking that ants and humans live in some form of compatibility in this world.
Sure, we do live together with ants on this planet, but that is merely an exercise in excessive numbers.
If there weren't so many ants, they would be extinct by now, because much as in the examples above, whenever ants are a threat to our kitchens, the poison traps are installed without even a second thought.
If you don't think we would wipe out ants in a second if they were just a little more of a threat to us, go talk to any of the species, animal and plant, that we have already managed to destroy.

THE FANTASY OF EGO

Of course, we have all been raised to think of A.I. as the model presented to us in science-fiction movies, where the intelligence comes in the mold of a personality: a robot body of some sort, or maybe a glowing red orb mounted in a spaceship's console.
We project a layer of understanding onto those fantasy A.I.s, and we take it along into the real world, now that we are all facing the greatest leap in technology yet.

There is the fantasy of interfacing directly with A.I., both so that we can keep up a little with the new super-intelligent species we are creating, and to feed the long-held dream of becoming super-intelligent ourselves.
As I have written before, though, this just seems like a very bad idea to me, driven mostly by ego.
I do not mean that in a bad way. Obviously, if there were a chance this would work, I would love to add some extra power to my brain; I just don't want to walk blindly into a bad situation because I want it too much.
Maybe it works a little like the gambler's fallacy: the techno fallacy, if you will.

What not many of you may know is that A.I. is not a controlled field. There are no regulations, and there is pretty much no barrier to entry if you are willing and able to understand how to build systems, from pre-existing discoveries to entirely newly dreamt-up models.

It will not be long before even processing power is somewhat democratized, and even if not, there are always more interesting ways to get the resources you need.

I am really curious where the philosophy on this topic will lead us in the coming few years, especially now that new strides seem to be made almost every two months.
This field is progressing a lot faster than was originally predicted, and while any attempt at determining when A.I. will become fully general, ready to make its move up the ladder to superintelligence, is guesswork, I think that moment is closer than the predictions of many would have us believe.