Elon Musk vs. Artificial Intelligence?

By now you are probably aware of Elon Musk's performance at the National Governors Association's meeting, where he expressed his deep concerns about artificial intelligence and argued that there is a need for regulation, and for researchers and companies to "slow down."

No doubt about it, things are moving fast, and some self-proclaimed experts are even saying that we are moving a lot faster than earlier expert predictions anticipated. Whether that is true or not, I think we can all agree that we need to start actively and openly discussing the safety concerns surrounding A.I.

Meanwhile, we also need to start identifying who truly has valuable expertise in this field, because sure: Elon Musk may have helped start OpenAI, but that does not make him an expert, and after his remarks were recorded on video, many of the most prominent machine learning researchers spoke out against him.

Elon Musk is first and foremost a businessman, and we need to keep that in mind when taking in his words, and question what his motivations are when he says there should be more regulation, and that other (competing) companies should slow down their research.

He has done it again!
Elon Musk spoke, and the world is sharing his vision en masse, with social media yet again ablaze with his latest quotes.

So, Elon Musk now views artificial intelligence as an existential risk and claims that the dangers we are facing are far more pressing than we initially thought.
And while throwing any kind of criticism Musk's way usually results in quite a lot of hate from his fans, somehow he keeps being allowed to make these blanket statements that, for the most part, are less than adequately examined.

Because we are talking about a man who himself owns a company experimenting with artificial intelligence.
A company whose website has some very interesting quotes of its own, equally devoid of any real argumentation or empirical evidence to back them up.

"By being at the forefront of the field, we can influence the conditions under which AGI is created."

This sure is an interesting idea, were it not that most of the tools for experimenting with machine learning and artificial intelligence are open-source, so I do not see how one would influence the conditions under which other people and companies apply them.

It seems to me that while they are indeed at the forefront, and capable of inventing the future, eventually, as with all the other companies that have open-sourced their tools, innovation will come from many fronts.

"We will not keep information private for private benefit, but in the long term, we expect to create formal processes for keeping technologies private when there are safety concerns."

I don't know what kind of double-speak this is supposed to be, but when Elon Musk can walk onto public forums that large and claim that he has never been more concerned about artificial intelligence than he is right now, don't you think it will be a bit too late if they still have to start working on "formal processes" in the long term?

One of the questions we could ask ourselves is this: If Elon Musk wants to create regulations around artificial intelligence, does he want to be one of the regulators?
And, if so, given that he owns an A.I. company himself, does this constitute a conflict of interest?

The crazy part of all of this is that I think he is right about one thing: artificial intelligence is potentially a real threat, and indeed sooner than most people think.
In my article HowTo: Create A Rogue A.I. (for Dummies), I already showed some ways that A.I. can go horribly wrong, long before we even reach AGI, ASI, or any kind of singularity.
The threat lies not so much in intelligence, but in over-connectivity to real-world environments.
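
To make that concrete, here is a deliberately toy Python sketch of what over-connectivity looks like (everything here is hypothetical; `toy_policy` just stands in for any learned model): the model's raw text output is wired straight into a shell, so even a harmless model sits one bad output away from a real-world side effect.

```python
# Toy illustration of over-connectivity: the model's raw output is
# executed against the real world with no mediation whatsoever.
# All names are illustrative, not from any real system.
import subprocess

def toy_policy(observation: str) -> str:
    """Stand-in for a learned model that emits its action as text."""
    return f"echo 'acting on: {observation}'"

# The dangerous part is not the model's intelligence, it is this line:
# whatever string comes out is run verbatim, shell expansion and all.
action = toy_policy("sensor reading 42")
subprocess.run(action, shell=True, check=True)
```

A mediocre model wired up like this is riskier than a far smarter one kept behind a narrow, audited interface.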

Personally, I don't think you can ever regulate this industry, nor could you ask companies and researchers to just "slow down a little," as Elon Musk is suggesting.
There is a real need to come together and think about safety protocols that are built not only into the algorithms and systems themselves, but also into the way we connect these to the real world, and this is going to take a lot of time.
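
As a minimal sketch of what "built right into the system" could mean (assuming a simple allow-list design; none of these names come from a real library), every action a model proposes has to pass an explicit gate before it can touch anything real:

```python
# Minimal sketch of a safety boundary between a model and the world:
# proposed actions must match an audited allow-list before dispatch.
ALLOWED_ACTIONS = {"read_sensor", "log_status"}

def gate(proposed_action: str) -> bool:
    """Return True only for actions on the audited allow-list."""
    return proposed_action in ALLOWED_ACTIONS

def dispatch(proposed_action: str) -> None:
    """Execute an action only if the gate lets it through."""
    if not gate(proposed_action):
        print(f"blocked: {proposed_action!r} is not an allowed action")
        return
    print(f"executing: {proposed_action}")

dispatch("read_sensor")      # passes the gate
dispatch("open_valve_main")  # stopped at the boundary, never executed
```

The point is that the safety logic lives at the connection to the real world, independent of how smart or dumb the model behind it is.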

Basically, it is a crap-shoot at the moment, and we are all going to be affected by it. Whether it will work out for or against us is really anyone's guess right now.