
A.I. And The Government

Should the governments of the world get involved in A.I. and possibly use it against their citizens? Let's explore...

Governments around the world are getting more and more interested in artificial intelligence, and starting to think about regulations and control. Is this a good thing, or should we start arguing our case for a free and open structure in the field of A.I. research?

 

In a post-Snowden world, would you be happy with politicians setting their sights on artificial intelligence as a tool set?
Of course, many machine learning systems are already in use by governments and the bodies surrounding them, but would you be comfortable with A.I. becoming a political platform in itself?

This is what I saw just a few moments ago, shared by a friend of a friend on Facebook.

Mike Tolking, who really wants to be mayor of New York, is using an affinity for modern technology (or at least its buzzwords) to convince people to vote for him.
I am not American, and I am not particularly interested in politics (who can find the time these days), so the campaign itself matters little to me. What does interest me is the way he portrays the current state of artificial intelligence.

This all starts with an article he shared on his campaign Facebook page, which in short argues that currently only large, and inherently evil, corporations are investing time and money in A.I. research, and that we will therefore soon end up in a future where super-intelligent machines are controlled by corporate interests.

Interestingly, as a side note, while Mike Tolking shapes the argument of companies like Google being evil, he does literally state that: "NYC will build the future as the Google of government."

Maybe not the best way to claim your own innocent intentions.

Besides, both Mike Tolking and the writer of the article he shared seem to have a misguided view of how big companies like Google and Facebook are handling their hold on this new area of technology.
 
Both of the companies mentioned above have open-sourced the tool sets on which they build their machine learning algorithms, and while they may have the advantage of larger data sets and more processing power, nobody is stopping anyone from building their own big data sets, or from democratizing processing power (which is being worked on).

And as I alluded to at the very start of this article, I don't think the government as a whole will have either good intentions or good execution.

 

PredPol

 
One of the most interesting examples to look at right now might be PredPol, which is actually in use today.
While this software is still in its early phases, and most likely nowhere near a Minority Report scenario yet, it does make you wonder, somewhat eerily, how this project will evolve and how it will impact our society in a few years, especially as machine learning keeps improving every month.
 
The question that arises is how a system like this can be used to establish probable cause, and what the limits are for implementing countermeasures against predicted crime.
A lot of people will be very pleased that a system like this is being developed: fear of actual attacks on our society, media fear-mongering, and everyday crime have us convinced that safety by way of the machine is worth almost any trade-off.
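PredPol's actual model is proprietary (it has been described as drawing on self-exciting point-process models from seismology), so as a purely illustrative sketch, here is the simplest flavor of hotspot scoring: weight each reported incident by how recently it happened, and rank map grid cells by their decayed totals. All function names, cells, and numbers below are my own invention, not PredPol's.

```python
from collections import defaultdict
import math

def hotspot_scores(incidents, now, half_life_days=7.0):
    """Score each grid cell by a recency-weighted incident count.

    incidents: list of (cell, day) tuples, where `cell` identifies a
    map grid square and `day` is when the incident was reported.
    Older incidents contribute exponentially less to a cell's score.
    """
    decay = math.log(2) / half_life_days  # exponential half-life decay
    scores = defaultdict(float)
    for cell, day in incidents:
        age = now - day
        scores[cell] += math.exp(-decay * age)
    return dict(scores)

# Three recent incidents in cell A outweigh three month-old ones in B.
incidents = [("A", 28), ("A", 29), ("A", 30),
             ("B", 1), ("B", 2), ("B", 3)]
scores = hotspot_scores(incidents, now=30)
top_cell = max(scores, key=scores.get)
print(top_cell)  # "A"
```

The point of the sketch is only that the mechanism is mundane statistics over historical reports; everything contentious lives in what is done with the ranking.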
 
One must never forget, though, that while today an algorithm like this may be used in the narrow field of crime prediction, and you are not a criminal, in the near future the same technology may be applied in a scope that does affect you.
 
Maybe one day it will be used to predict whether you are likely to develop certain health issues, and so your insurance rates will keep climbing, or you will be passed over for a job because another candidate is predicted to work more efficiently and take less time off.
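The core math behind such predictions is domain-agnostic: the same kind of logistic scoring that could rank map cells can just as easily rank insurance applicants. The feature names and weights below are entirely hypothetical, purely to illustrate how directly the technique transfers:

```python
import math

def risk_score(features, weights, bias):
    """Toy logistic risk model: weighted sum of features squashed
    into a probability between 0 and 1."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical weights an insurer might fit from historical claims data.
weights = {"age": 0.03, "smoker": 1.2, "sick_days_last_year": 0.15}
applicant = {"age": 45, "smoker": 1, "sick_days_last_year": 8}
p = risk_score(applicant, weights, bias=-4.0)
print(round(p, 2))  # ≈ 0.44
```

Swap the feature names and the exact same code scores crime hotspots, loan applicants, or job candidates, which is precisely why the scope of deployment matters more than the algorithm itself.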
 

The Privacy Fallacy

 
Before we talk about the effects of government controlled artificial intelligence, we need to deal with a few standard arguments around privacy.
 
There are two main points you always see brought up again.
 
1. I would happily give up some privacy in exchange for more security.
 
This is an incredibly dangerous trap to fall into, because "some" is a vague definition, if a definition at all.
Sure, in the short term you are only giving up a small piece of your privacy, but in the long run that "some" threshold will inevitably be pushed further and further.
 
2. Privacy is dead anyway.
 
It is not, and I can prove that in a very simple way.
Think about search warrants: at the moment they still need to be validated by some governing body, which depends entirely on having some form of probable cause.
 
If I were able to run database analysis on your credit card records, follow your every move with CCTV cameras, and map out your movements with GPS, I would no longer need a search warrant, because I would already have all the data I need to know exactly what you have inside your home.

Know Your Enemy

 
Part of the argument I left as a comment on the original post by Mike Tolking was that while a for-profit corporation may have selfish intentions, at least we all know what those intentions are, and they are clear and in plain view.
A company wants to make money, or even more accurate, a company needs to make money, that is the way our capitalist society works, and deep down we all know that.
So yes, while Google, Facebook, and many others will constantly collect data on you to serve you advertisements, you can easily shield yourself from that.
Install an ad-blocker, for instance, which will already limit the amount of money companies can make off your day-to-day browsing.
 
Or, if you want to take things even more seriously, you can install a privacy-focused operating system that keeps you from being recognized on the internet and preserves your anonymity.
Don't blame a company like Microsoft for collecting your data: you are the one who accepted the terms of service, and you are the one who decided to stay with the default operating system that came with your computer, when better options are out there.
 
You can not shift the blame.
 

Know Your Enemy?

 
With government it is a lot more complex.
 
When I was younger I was always under the impression that government was highly regulated and always on the side of the people, even serving them.
But over the years it has become clear that we know very little about what goes on behind the curtain, and it is not inconceivable that once the government has seen the real power and value of artificial intelligence, it will regulate the field in such a way that the openness we know today disappears, and working on certain technology becomes illegal.
 
We have already seen that many times in the past.
 

A.I. Using Neuroscience?

Demis Hassabis, founder of DeepMind, recently spoke about his vision of how to reach the "next level" of artificial intelligence.
His strategy is, predictably, to reconnect with the field of neuroscience and study natural intelligence, in the hope of mimicking these processes inside the machine.

While I have briefly covered my opinion on this before, I want to take another pass at the topic to see if the opinions of multiple high-ranking experts can change my mind about human-like artificial intelligence.


Facebook's A.I. Did Not Invent A New Language

So, Facebook researchers have taken their machine learning algorithm offline; as far as I can tell it was being used as an experimental chatbot program.
Of course, most of the media are jumping on this story with the usual sensationalism, quick to connect it to the remarks made by Elon Musk and Mark Zuckerberg, and trying desperately to use it as fodder to determine which one was right and which one was wrong.
The answer to that last part, by the way, is: both, and neither.


China's Plan For A.I. Superiority

So China has made the news recently by announcing its plan to be the world leader in artificial intelligence technology by the year 2030, and I for one am not at all surprised by this news.

Think about it for a second: aside from the United States, home of those freaky machines made at the Boston Dynamics labs (recently in the spotlight mostly because of the footage of them being kicked around by their creators, and of them failing the challenges DARPA set for them outside the lab), what other country do you know to be often in the limelight when it comes to advances in robotics and machine learning?

Furthermore, I suspect any country with the money and human resources to spend on this will have the same goal as China and America: to become the world leader in the most advanced technologies possible. That ambition has been with us since the dawn of man.