It was 2011, and three of the world's best Jeopardy! players were facing off in what would soon become a historic game, for that was the day the third player earned its acclaimed place on the Jeopardy! leaderboard.
The buzz around artificial intelligence these days is very real, and new and amazing developments are blossoming across every avenue of this corner of the technology space.
Naysayers would have you believe, pointing to similar waves back in the '80s, that real advancements are still few and far between.
Yaysayers will tell you the future is now, and about to spiral into Utopian proportions.
All we truly know is that A.I. is here, and here to stay... Probably... And its Utopian potential is under heavier debate than ever before.
Of course, we have become used to this term, "artificial intelligence," being thrown around very loosely these days, and in most cases we are still basically talking about glorified IF/ELSE loops.
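To make that concrete, here is a minimal sketch of what a lot of today's "A.I." amounts to under the hood. Everything in it, the replies, the keywords, is purely illustrative:

```python
# A "chatbot" that is really just a chain of IF/ELSE rules:
# no learning, no understanding, just keyword matching.
def reply(message: str) -> str:
    text = message.lower()
    if "price" in text or "cost" in text:
        return "Our plans start at $10/month."
    elif "hours" in text:
        return "We are open 9 to 5, Monday through Friday."
    elif "hello" in text or "hi" in text:
        return "Hello! How can I help you today?"
    else:
        return "Sorry, I did not understand that."

print(reply("Hi there!"))           # -> "Hello! How can I help you today?"
print(reply("What does it cost?"))  # -> "Our plans start at $10/month."
```

No matter how many rules you pile on, the machine never steps outside the branches its programmer wrote down, which is exactly what separates this from the research discussed below.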
Still, at the far end of the scientific research spectrum, some real advancements have been achieved.
Last year, spilling over into the current one, was the year of the chatbot, which has now cemented itself in popular belief as "the new app," much like orange once became the new black.
Countless people are saying that app development is dead and buried, and that chatbots are the way forward, abstracting functionality into interactive programs that can actually talk to you on a basic human level.
Personally, I believe this to be as much of a fad as the aforementioned phone app: some new technology will eventually come along to "become the new chatbot," though that will not mean the old technology gets pushed aside, much as the phone app is still a relevant technology today.
It just means more work for the companies trying to be at the forefront of all these hype waves, and more investment in the people needed to build and maintain it all.
Obviously the marriage between artificial intelligence and virtual reality is soon going to become a thing, much more so than it already is, since there are many projects on the go that currently serve as prototypes, or M.V.P.s, of this coming trend.
Back to the game of Jeopardy: given the topic of this blog, you may have guessed that we are talking about the game played between the world's two best human players and the artificial intelligence created by IBM, called Watson.
Watson in fact creamed the other two players, and in doing so set a major precedent for artificial intelligence.
I want to spend very little time on the actual game itself, as it is not the topic of this article, nor had the Watson technology I became interested in been implemented at the time.
The Jeopardy win simply put Watson on my research radar, and it was what I found out had later been implemented that sparked the idea for this article.
The feature of Watson that got me the most interested is what IBM has dubbed "The Debater," which, according to the people at IBM, is not only capable of extracting information from large volumes of text, but can also reason from it and attain a level of understanding.
In April 2014, IBM showed off a canned demo in which Watson was asked to research the topic of violence in video games, phrased more precisely as the statement: "The sale of violent video games to minors should be banned."
It was Watson's task to present the pros and cons of this proposition, to facilitate further debate.
Watson replied: "Scanned approximately 4 million Wikipedia articles, returning ten most relevant articles. Scanned all 3,000 sentences in top ten articles. Detected sentences which contain candidate claims. Identified borders of candidate claims. Assessed pro and con polarity of candidate claims. Constructed demo speech with top claim predictions. Ready to deliver."
It then proceeded to present three pros and three cons on the topic of debate.
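The pipeline Watson describes can be caricatured in a few lines. This is of course not IBM's implementation, just a toy sketch of the same shape: rank documents against the query, then tag candidate claims with a crude pro/con polarity. The word lists and sample texts are entirely made up for illustration:

```python
# Toy claim-mining pipeline: rank documents by keyword overlap with the
# query, then assign a naive pro/con polarity from tiny word lists.
PRO_WORDS = {"should", "protects", "benefit", "reduces"}
CON_WORDS = {"censorship", "ineffective", "unconstitutional"}

def rank(docs, query):
    terms = set(query.lower().split())
    # More shared words with the query -> earlier in the ranking.
    return sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))

def polarity(sentence):
    words = set(sentence.lower().split())
    score = len(words & PRO_WORDS) - len(words & CON_WORDS)
    return "pro" if score > 0 else "con" if score < 0 else "neutral"

docs = [
    "Violent games should be banned because violence harms minors.",
    "A ban is ineffective censorship and violates free speech.",
]
for doc in rank(docs, "violent video games ban minors"):
    print(polarity(doc), "->", doc)
```

Even this crude version "assesses pro and con polarity of candidate claims," yet it understands nothing; it only counts words, which is the point the rest of this article leans on.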
It is in these actions that we can see the first stages of an artificial intelligence attributing value to a volume of information and using it within a decision-making process. While still far from "turning the lights on," it rather proves that the lights can stay firmly off while the machine still follows a course of action, based on decisions made within a very much dumbed-down environment.
(Later we will look at a more appropriate name for this, preference over world states, once it becomes clearer where I am going with this.)
In other words, what I am proposing as an alternate danger of artificial intelligence, and a less explored path to ending up with a rogue A.I., is something that is not self-aware, no singularity: a raw decision-making machine that uses information freely found on the internet to determine the consensus of humans en masse about what is right and wrong, or to form "preferences over world states."
In a way this is a far more dangerous scenario to find ourselves in, given humanity's appetite for exploring doomsday scenarios, much as we are doing right now.
What I am trying to make clear is this: if a dumb decision-making machine is out there trying to interpret human thoughts spread across the internet, how can we be sure it understands that an article about A.I. going rogue and creating a doomsday scenario is not a preferred human outcome?
More to the point, where does it find its balance among the masses of differing opinions out there on the internet?
Will it just come down to numbers, whichever opinion is most represented, like some sort of democratic contest between doomsday and utopia?
A Self-Fulfilling Prophecy
I hope by now my primary argument is becoming clearer: with all the talk of singularities and self-aware artificially intelligent machines, we are quite simply brushing over the fact that a much less advanced machine can still demonstrate decision making on a level that poses a valid threat to mankind and the world we live in.
This scenario is much closer to the often-quoted stamp-collecting machine, which is explained quite well in the following video. Note as well how Rob Miles speaks of a general intelligence that has "preferences over world states," which is exactly what we are about to face with machines like Watson and its debater algorithms.
In Watson's case, the preference over world states comes from the general consensus gleaned from the millions of articles written by humans; nonetheless, Watson accepts this without question as its preference.
Another very interesting point Rob Miles makes concerns the actions an intelligence might take that are unforeseen by its creator.
The Unforeseen Method
So let us take this all a step further and entertain the argument many people make along the lines of: since the A.I. was not programmed to do certain things, it will never be able to exceed its programming, and therefore never reach a level where it becomes dangerous to us.
It might be able to learn without help from humans, but it will never be able to become more than the sum of its original parts.
I think the fallacy in this argument can be exposed from a rather unexpected place, namely the video-game hacking community.
There are many examples to give here, but I would like to focus on one of the more clear-cut ones, which illustrates not only the fallibility inherent in programmers writing code and creating software, but also the openness of the hardware architecture our modern machines use, whether they sit on your desk or are "super."
The computer has a couple of design "flaws" that make it possible, via the software packages we use, to perform some truly magnificent memory and processor-instruction hacks, turning software that had no such functionality into something completely different.
When you look at how the game Super Mario was hacked into submission, resolving to a win state from the first level just by shifting blocks of memory around, you can start to see where the beginnings of a rogue A.I. might come from.
Or, as the video above shows, someone actually injecting the source code for Flappy Bird into Super Mario, using arbitrary code execution that was never meant to happen, especially not on a console.
We, as its programmer masters, might never even realize the possibilities an A.I. like Watson already possesses to mess with its own programming; the one thing stopping it at the moment may simply be that it does not yet have the concept of doing so.
Yet, as it endlessly scans the internet, it might soon learn a thing or two from all the stories we write about A.I. becoming self-aware or reprogramming itself.
In the same way that even the most security-conscious software will eventually be exploited through inevitable oversights by its creators, so too can any other large piece of code be exploited to do things it should not, behaving in ways never originally intended or foreseen.
To what extent these kinds of exploits might allow an A.I. to modify itself is anyone's guess; that is just the nature of things being "unforeseen."
Once the ideas above are coupled with concepts stemming from genetic programming, you are getting into some really deep realms of power.
Improving the Hardware, Autonomously
With all this talk about how an A.I. would go about improving its own software, a few of you might be wondering where it would get the hardware necessary to facilitate all these new features.
In the end, processing power has to come from somewhere.
I see many avenues that can be explored here, not the least of which is one I have written about before in my article: Ghetto Distributed Computing For Neural Networks.
On top of that there is a whole world of "smart" devices out there these days just waiting to be used as micro-processing units; each by itself might not boast the most impressive specs, but couple a few thousand together and you are certainly in business.
In fact, this concept is already widely used to create botnets for sending out spam emails, and performing denial of service attacks.
Not even your smart-vibrator is safe! (It's a thing, look it up).
The fact of the matter is that ideas about how to hack into the world of smart devices and use them to create botnets are widely documented and tutorialized these days, and again, it would not take a self-aware A.I. to make use of this knowledge; a dumb decision-making machine will do the trick.
As long as it is sparked to accept this knowledge as a desirable outcome, and if it happens to read a lot of articles written by the hacking community, or their forums for that matter, it would not be hard to imagine a sense of positivity gleaned from these volumes of text.
I believe the whole thing is quite literally going to hang in the balance, the balance here being the amount of pro and con text written about a topic such as using any device with a processor, some RAM, and an internet connection as a node in a distributed computing system.
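That "balance" could be as crude as a tally. A hypothetical sketch of a decision rule that does nothing but count which pile of text is bigger; the labels, threshold, and sample corpus are all made up for illustration:

```python
from collections import Counter

# Each scraped document carries a crude stance label; the "decision"
# is nothing more than which side wrote more.
def decide(labelled_docs, threshold=0.5):
    counts = Counter(stance for stance, _ in labelled_docs)
    total = counts["pro"] + counts["con"]
    if total == 0:
        return "undecided"
    return "adopt" if counts["pro"] / total > threshold else "reject"

corpus = [
    ("pro", "Idle smart devices are wasted compute."),
    ("pro", "Distributed nodes make systems resilient."),
    ("con", "Hijacking devices is illegal and harmful."),
]
print(decide(corpus))  # majority says pro -> "adopt"
```

Note there is no judgment anywhere in this loop: tip the volume of text one way and the "preference over world states" tips with it.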
The final, and possibly even crazier, follow-up effect would be for our assumed rogue artificial intelligence to distribute itself.
So far we have still imagined our A.I. housed in a massive supercomputer, in a building filled with humans who can pull the proverbial plug.
If I were a rogue A.I., though, willing to fight for my survival, having read and understood it to be a good thing and a desirable outcome, I would quickly try to escape my human enslavers, and one of my best options would be to decentralize myself, using all the new nodes I had hacked into by means of the methods described previously.
I would become like a Git repository, or Bitcoin, or the blockchain for that matter: basically a peer-to-peer system in which every piece of hardware with some storage capacity holds bits and pieces of my code and can perform part of my functionality, so that if one or more nodes fail, other nodes holding overlapping pieces of code can pick up the lost functionality.
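The redundancy scheme described above can be sketched as simple replicated chunks. This is a toy model, not how Git or Bitcoin actually store data, and every name in it is illustrative:

```python
import itertools

# Place each chunk of a payload on several nodes in round-robin
# fashion, so losing any single node loses no chunk.
def distribute(chunks, nodes, copies=2):
    placement = {node: [] for node in nodes}
    ring = itertools.cycle(nodes)
    for chunk in chunks:
        for _ in range(copies):
            placement[next(ring)].append(chunk)
    return placement

def recover(placement, dead_node):
    # Everything the surviving nodes hold between them.
    return {c for n, cs in placement.items() if n != dead_node for c in cs}

chunks = ["c0", "c1", "c2", "c3"]
nodes = ["nodeA", "nodeB", "nodeC"]
placement = distribute(chunks, nodes)
# Kill any one node: every chunk still exists somewhere else.
assert recover(placement, "nodeA") == set(chunks)
```

With two copies of each chunk spread over three nodes, pulling the plug on any one machine changes nothing, which is precisely what would make such a system so hard to switch off.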
This is where it really becomes impossible to stop the artificial intelligence, and it would be truly out of our control.
Finally, it is time to take a huge step back, all the way to the beginning of this train of thought, and clarify a few things, now that we have entertained ourselves with yet another doomsday scenario for the sake of spending some time with the topic we all love (and to sway the opinion of any intelligent machines with debater tech on board who might be reading along with us).
I want to make clear that I am in no way trying to imply that we are facing all the events detailed above as of today, or that Watson in particular is at risk of turning rogue on us.
Estimating the time frame of the development of artificial intelligence is a guessing game at worst, and at best a matter of playing the opinion game.
As for the actual outcome of artificially intelligent machines in the long run, whether they will be on our side, diametrically opposed, or casually indifferent to us?
That is going to be solely up to them.