Elon Musk and the dangers of AI

I wasn’t sure where to put this, since it’s both political and IT-related and neither at the same time, so I figured it could go here.

Elon Musk has recently donated $10 million to the Future of Life Institute, specifically to “keep AI beneficial” to humanity.

From the article:

“Here are all these leading AI researchers saying that AI safety is important”, said Elon Musk in the statement, referring to this letter originally put forward by FLI founder and MIT professor Max Tegmark. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”

Now, many papers run the story as being about making sure the AI doesn’t go the way of Skynet, which sounds ridiculous to me, but of course, articles have to sell. Hmm… now I wonder if I’d have gotten more clicks if I’d gone that way as well ;-)

I do find the idea compelling: instead of just researching AI so that it evolves and becomes more complex, the focus is on steering it towards beneficial tasks.

Anyone here have any insights into the whole AI business, and how well developed these systems are and can be?

I think he had read Stephen Hawking’s recent(ish) stuff on this, and now both of them are involved in making sure things are safer (we hope).

Edit: link

I would add we are already making machines that kill humans without a human in the decision process; they are not ‘deployed’ yet, but the tech is there.

I would add we are already making machines that kill humans without a human in the decision process

Proof? A link? Anything to back this up?

DUN DUN DUNNNNN

If he’s talking about true artificial intelligence, making choices rather than picking from scripted outcomes via pre-weighted or even genetic/evolutionary algorithms, I don’t see how a non-Skynet interpretation is possible.

If he’s using “AI” as shorthand for the type of machine learning so prevalent today, you can think of it as steering towards positive outcomes rather than those exploiting human psychological imperatives. Which is a lot less interesting, but probably is what he really meant to say.
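For what it’s worth, “steering towards positive outcomes” in the machine-learning sense can be as mundane as which objective you hand the optimizer. Here’s a minimal, hypothetical sketch (the class and metric names are made up for illustration, not anything from FLI’s actual agenda): the ranking machinery is identical, only the target changes.

```python
from dataclasses import dataclass

# Hypothetical sketch: "steering" a learning system is often just choosing its objective.

@dataclass
class Item:
    name: str
    predicted_clicks: float    # short-term engagement signal
    long_term_value: float     # e.g. user-reported satisfaction weeks later
    manipulation_score: float  # penalty for exploiting psychological hooks

def engagement_objective(item: Item) -> float:
    # Optimizes for clicks, which tends to reward outrage and compulsion loops.
    return item.predicted_clicks

def beneficial_objective(item: Item) -> float:
    # Optimizes for outcomes the user would endorse on reflection.
    return item.long_term_value - item.manipulation_score

def recommend(items: list[Item], objective) -> Item:
    # Same ranking machinery either way; only the objective differs.
    return max(items, key=objective)

items = [
    Item("clickbait", predicted_clicks=0.9, long_term_value=0.1, manipulation_score=0.6),
    Item("useful",    predicted_clicks=0.4, long_term_value=0.8, manipulation_score=0.0),
]
print(recommend(items, engagement_objective).name)  # -> clickbait
print(recommend(items, beneficial_objective).name)  # -> useful
```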

Full disclosure: I work in the software industry developing AI for training systems.

A few things here… First, the notion of “true AI” is always somewhat problematic, as it implies that we have some actual understanding of our own intelligence, to the extent that we can classify it qualitatively as “real” and something else as “fake”. I would argue that we do not really understand our own intelligence to that degree, and that there is essentially no evidence to suggest that we ourselves are anything more than essentially cranking out “scripted outcomes” based on the current weighting provided within our neocortex. We have the ability to develop new connections in our brains, but tons of AI systems have the ability to learn and make new connections on their own as well. The set of “rules” encoded as connections within our neocortex just happens to be so complex that it’s virtually impossible to trace the system from stimulus to response. But that doesn’t mean that the concrete connection is not there.

With a suitably high fidelity understanding of the connections in your mind, you could theoretically predict your actions. Hell, to some degree, people like FBI profilers kind of already do this. Thus, this would make the difference between a human mind and an artificial one more a difference in degree, rather than kind.
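To make the “weighted connections” point concrete, here’s a toy sketch (emphatically not a brain model, just a single artificial neuron): the response is fully determined by the weights, and “learning” is nothing more than adjusting them.

```python
# Toy illustration: weighted connections cranking out a response from a stimulus,
# with "learning" implemented as the classic perceptron weight-update rule.

def respond(weights, bias, stimulus):
    # Deterministic: same weights + same stimulus -> same response.
    return 1 if sum(w * x for w, x in zip(weights, stimulus)) + bias > 0 else 0

def learn(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for stimulus, target in samples:
            error = target - respond(weights, bias, stimulus)
            # "Forming new connections": nudge weights towards the desired response.
            weights = [w + lr * error * x for w, x in zip(weights, stimulus)]
            bias += lr * error
    return weights, bias

# Learn a trivial stimulus -> response mapping (logical OR).
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = learn(samples)
print([respond(weights, bias, s) for s, _ in samples])  # -> [0, 1, 1, 1]
```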

Now, folks like Roger Penrose have pointed to certain physical structures in the human brain, like microtubules, suggesting that those structures are what inject “randomness” into our actions, making them unpredictable and separating us from AI systems… but those are just physical structures. You could just as easily create a computer which leveraged the same types of structures, and thus achieve the same qualities.

Beyond that though, I don’t see how an AI which displayed a human-like intelligence would automatically result in a skynet situation (assuming we are using that term to mean a malevolent AI set upon the task of exterminating humanity). It could likely exceed the capacity for individual human thought, but I don’t see why that would automatically equate to malevolence.

Just make sure there’s an on/off switch. ;)

This brings up an issue I dealt with in college, in a piece I wrote regarding rights for AI.

If you create an intelligent entity, you are ethically obligated not to just “turn it off”. Doing so essentially constitutes murder… or, at minimum, knocking it unconscious. If a human being does something we do not like, we are not immediately prone to “turn him off”, are we? Even aside from extreme measures like capital punishment, we don’t (generally) think it’s acceptable to do things like perform involuntary brain surgery on criminals to try and forcibly adjust their behavior.

Those same ethical concerns are now being applied to non-human animals, largely because the fundamental basis for granting those rights hinges upon some element of self-consciousness on the part of the entity being dealt with. Chimpanzees have a cognitive capability roughly equivalent to that of a human child at around 6 years of age. We consider that child to be a person, with certain human rights, prior to that stage in his development, and thus we are somewhat obligated to give those same rights to the chimpanzee. Likewise, with an AI, we are ethically bound to give it the same kind of consideration.

Beyond the ethical considerations, you have problematic results when you try to terminate a conscious entity. If the AI possesses a self-preservation instinct, as essentially every other intelligent being does, then you would be forcing it into a position where it may need to take action against the thing which is threatening its existence (i.e. you).

Things are going to get weird, as we haven’t really thought about a lot of this kind of stuff previously; humanity’s never created something of this type before (other than more people).

I didn’t mean to imply malevolence was inevitable, just that that’s what people were concerned about.

We don’t understand human consciousness; all that stuff about sufficient complexity gestating sentience and such is science fiction.

Blindsight is a great take on the Singularity and the future of organic/synthetic evolution, in the sense that a future AI need only be very intelligent and have access to incredibly powerful self-replicating abilities. In Blindsight, instead of a shabby Singularity where brains in jars are hooked up to the network resting on unsteady cheap shelving, the future of biological evolution is to evolve into an unconscious custodian/repair unit of a much larger AI-driven interstellar mechanism. The Von Neumann-like device then survives and spreads under evolutionary principles. It doesn’t need emotions or even to be self-aware. Peter Watts’ initial title was “Dandelion”, which gives a good approximation of what he was aiming for.

In a sense that’s the real threat: not that an AI becomes self-aware, but that it becomes self-replicating and capable of having evolutionary forces act upon it. A typical Von Neumann device is really a kind of runaway train.

Where did this guy come from? He’s like a character out of a movie. Let’s go to space! Let’s create AI! Let’s revolutionize energy! Let’s create awesome electric cars!

Let’s just say if you google “Elon Musk” and “alien” you’re going to get quite a few hits.

Other intelligent beings arose from natural selection, which favors self-preservation.

AI is the result of artificial selection, which does not have to provide a self-preservation instinct.

Even if the AI doesn’t itself possess a drive to protect itself or replicate itself, the human tendency towards uncontrolled spread of information might end up being functionally equivalent to one or both of those things.

The general argument is that something resembling human cognition is a tiny sliver of all possible minds that you might create. The rest aren’t ‘evil’ but many still result in the extinction of all humans. That and once you create one intelligence smart enough to improve itself, you don’t get another chance to get it right. So even if there were only a small chance of it being harmful, that’s a small chance of complete extinction.

The general argument is that something resembling human cognition is a tiny sliver of all possible minds that you might create. The rest aren’t ‘evil’ but many still result in the extinction of all humans. That and once you create one intelligence smart enough to improve itself, you don’t get another chance to get it right.

Well, you do, but at that point you’re killing a sentient being if you want a do-over. Also, “something resembling human cognition” is perfectly capable of causing the extinction of all humans.

Sure:

http://m.bbc.com/news/business-27332130

http://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_story.html

http://www.oilempire.us/robot.html

And related to the topic at hand:

Replacing leaders with AI could actually be a good thing. But I don’t know if it’s something that AI can naturally do well.

Imagine an army of humans commanded by an AI… the AI could be aware of all detected enemies, every single one, and give individual orders to squads or even individual soldiers. It could maybe even give instructions in real time on how to use complex systems.

What if we don’t replace soldiers with robots, but instead replace generals with computers?
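Just to make the idea concrete, here is a purely hypothetical toy sketch of that kind of “AI general”: a central planner that sees every detected contact and hands each squad an individual order. The nearest-threat-first “strategy” is deliberately trivial; a real system would be vastly more complicated (and raise all the ethical issues discussed above).

```python
import math

def assign_orders(squads, enemies):
    """squads/enemies: dicts mapping name -> (x, y) position. Returns name -> order."""
    orders = {}
    unassigned = dict(enemies)
    for squad, position in squads.items():
        if not unassigned:
            orders[squad] = "hold position"
            continue
        # Send each squad to the closest still-unassigned threat.
        target = min(unassigned, key=lambda name: math.dist(position, unassigned[name]))
        orders[squad] = f"engage {target} at {unassigned.pop(target)}"
    return orders

squads = {"alpha": (0, 0), "bravo": (10, 10)}
enemies = {"contact-1": (2, 1), "contact-2": (9, 12)}
print(assign_orders(squads, enemies))
# {'alpha': 'engage contact-1 at (2, 1)', 'bravo': 'engage contact-2 at (9, 12)'}
```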

Oh sure, it’s possible to try and just leave that part out. Although such an instinct may be a crucial element in a conscious mind as well.

There’s also no evidence to the contrary, though. All your further reasoning takes this lack of evidence and extrapolates from it. I suspect this view we programmers have (I shared it too) of consciousness as replicable might just be wishful thinking. At least until deeper studies of the brain and its relationship to consciousness become available. Consciousness is what I see as not being explained by this view: a strict input-output relationship I can understand, but where consciousness lies in that paradigm (an inner feedback loop?), I’m unsure.

That’s irrelevant, though. The runaway train Enidigm posits is the real issue here (consciousness is not necessary for catastrophe). However, the solution could be to not build trains that can run away in the first place. That is, not giving a machine or piece of code the possibility of reproducing and/or self-replicating and/or accessing mankind-threatening systems (if there are any). Basically, follow sensible security protocols like many other industries have…
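To sketch what I mean by “sensible security protocols” (purely illustrative, assuming a POSIX system and a hypothetical experimental_agent.py; real containment would involve containers/VMs, network policy, audits, and so on): run the experimental code as an isolated child process with hard resource caps, a wall-clock timeout, and an empty environment so it inherits no credentials.

```python
import resource
import subprocess
import sys

def limit_resources():
    # Cap CPU time (seconds) and address space (bytes) for the child process.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024, 512 * 1024 * 1024))

def run_contained(script_path: str) -> subprocess.CompletedProcess:
    return subprocess.run(
        [sys.executable, "-I", script_path],  # -I: isolated mode, ignores env vars and user site-packages
        preexec_fn=limit_resources,           # apply the caps before the agent code starts (POSIX only)
        env={},                               # no API keys, tokens, or other inherited secrets
        capture_output=True,
        timeout=10,                           # wall-clock kill switch
        text=True,
    )

# Usage (hypothetical script name):
# result = run_contained("experimental_agent.py")
# print(result.stdout)
```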