A.I. And The Government
In a post-Snowden world, would you be happy with politicians setting their sights on artificial intelligence as a tool set?
Of course, there are already many implementations of machine learning used by government and the bodies surrounding it, but to actually start using it as a political platform?
This is what I saw just a few moments ago, shared by a friend of a friend on Facebook.
Mike Tolking, who really wants to be mayor of New York, is using an affinity for modern technology (or at least its buzzwords) to convince people to vote for him.
First of all, I am not American, and secondly, I am not much interested in politics (who can find the time these days), so the race itself doesn't concern me. What does interest me, however, is the way he is portraying the current state of artificial intelligence.
This all starts with an article he shared on his campaign Facebook page, which in short states that currently only large, and inherently evil, corporations are investing time and money into A.I. research, and thus we will soon end up in a future where super-intelligent machines are controlled by corporate interests.
Interestingly, as a side note, while Mike Tolking shapes the argument of companies like Google being evil, he does literally state that: "NYC will build the future as the Google of government."
Maybe not the best way to claim your own innocent intentions.
Besides, both Mike Tolking and the writer of the article he shared seem to have a misguided view of how big companies like Google and Facebook are handling their possession of this new area of technology.
Both of these companies have open-sourced the tool sets on which they are building their machine learning algorithms, and while they may have the advantage of larger data sets and more processing power, nobody is stopping anyone from building their own big data sets, or from democratizing processing power (which is being worked on).
And like I alluded to at the very start of this article, I don't think the government as a whole will have either good intentions or good execution.
One of the most interesting examples to look at right now might be PredPol, which is actually in use today.
While this software is still in its early phases, and most likely nowhere near a Minority Report-like scenario yet, it does make you think, eerily, about how this project will evolve and how it will impact our society in a few years, especially as machine learning keeps improving every month.
The question that arises is how this can be used as a system to establish probable cause, and what the limits are for implementing countermeasures to predicted crime.
A lot of people will be very pleased that a system like this is being developed, because the fear of actual attacks on our society, amplified by media fear-mongering and other crime, has us thinking that safety by way of the machine is worth whatever it costs.
One must never forget, though, that while today an algorithm like this may be used in the narrow field of crime prediction, and you are not a criminal, in the near future the same technology may be applied in a scope that does affect you.
Maybe one day it will be used to predict whether or not you are likely to have certain health issues, and thus you will face ever higher insurance rates, or you will be passed over for a job because another candidate is predicted to work more efficiently and take less time off.
The Privacy Fallacy
Before we talk about the effects of government-controlled artificial intelligence, we need to deal with a few standard arguments around privacy.
There are two main points you always see brought up again.
1. I would happily give up some privacy in exchange for more security.
This is an incredibly dangerous trap to fall into, because what you don't realize when thinking this way is that "some" is a vague definition, if it is a definition at all.
Sure, in the short term you are only giving up a small piece of your privacy, but in the long run this will inevitably move that "some" threshold further and further.
2. Privacy is dead anyway.
It is not, and I can prove that in a very simple way.
Think about search warrants, which still need to be validated by some governing body at the moment, and which all depend on there being some form of probable cause.
If I were able to run database analysis on your credit card history, follow your every move with CCTV cameras, and also map out your movements with GPS, I would no longer need a search warrant, because I would already have all the data I need to know exactly what you have inside your home.
Know Your Enemy
Part of the argument I left as a comment on the original post by Mike Tolking was that while a for-profit corporation may have selfish intentions, at least we all know what those intentions are, and they are clear and in plain view.
A company wants to make money, or more accurately, a company needs to make money; that is the way our capitalist society works, and deep down we all know that.
So yes, while Google, Facebook, and many more will be constantly collecting data on you to serve you advertisements, you can easily shield yourself from that.
Install an ad-blocker, for instance, which will already limit the amount of money companies can make off your day-to-day browsing.
Or, if you want to take it even more seriously than that, you can install an operating system that shields you from being recognized on the internet and keeps you anonymous.
Don't blame a company like Microsoft for collecting your data; you are the one who accepted the terms of service, and you are the one who decided to stay with the default operating system that came with your computer when there are better options out there.
You cannot shift the blame.
Know Your Enemy?
With government, it is a lot more complex.
When I was younger, I was always under the impression that the government was highly regulated and always on the side of the people, serving them even.
But over the years it has become clear that we know very little about what goes on behind the curtain. It would not be inconceivable that, once the government has had the time to see the real power and value of artificial intelligence, it will regulate it in such a way that the openness of the technology we know now disappears, and working on certain technology is made illegal.
We have already seen that many times in the past.