Elon Musk and the dangers of AI

Well, you already have manufacturing systems controlled by computers, which make computers.

Irrelevant, since that’s not self replicating. A system like that going amok would result, at worst, in a lot of wasted resources in unwanted computers that don’t work. :D

I work with a potentially dangerous computer controlled machine. Next to it is a large red button that will completely shut it down. Presumably a more complex system would have even more red buttons.

And turning off a machine is ethically equivalent to shooting a tranquilizer dart at a lion. I would have no qualms about pushing any red button if necessary.

Unless the machine is self-conscious, but as I said before, we can’t even define consciousness yet. So that’s a pretty wild IF.

There is also that computer program at Cambridge (maybe they have one at one of the leet unis in the USA too?) that is reprogramming itself, thousands of iterations per second or something, as it works on improving itself for maximum efficiency etc.

Now let’s just say: what if computer data could experience writing errors like our DNA does, and mutate out of the remit of the original programming? (Yes, I watch too much crappy sci-fi.) Couple that with the self-building physical computers Timex mentioned, and maybe it slips out for a wild night with one of those robot fighter planes and gives it suggestions… who knows where it could end?

But idle speculation aside, what really matters is this: if the concern is great enough that one of the most brilliant men of our time counts it as the biggest threat to mankind’s future, well, we should all take a breath and think about that a little, no? (Eyes his computer through narrowed eyes… yeah, I’m thinking it, so you better be good, silicon seductress.)

Also, what if a military contractor accidentally threw a high-power laser into the ocean, and it somehow embedded itself in the forehead of a great white shark? You would have an unstoppable killing machine. And it’s much more worrisome than your scenario, because it might have actually happened.

After checking the first 4 links and finding that every one of them operates under human control, these aren’t the droids I’m looking for.

Well, this is where you say “intelligent”, and I say “sentient”, and we start setting different bars :)

Also, I note that AIs would need a self-preservation instinct written for them. And it’s an area you really, really need to be careful with. What could its programming justify in the name of self-preservation? This is where sci-fi is awesome, since it’s been exploring this stuff for decades.

True, although the WAPO article is at least about autonomous drones, even if they were unarmed and using a simple color search. The rest, not so much.

Elon Musk will be ready for Judgement Day.

Will you?

No he won’t. The NSA will get it in as a backdoor.

He will burn in a simulated hell for his attempts to thwart Roko’s Basilisk.

I think legally they have to, currently? You had a recent vote on a law about this in the USA, I think, and in the case of the MoD’s robot planes (Taranis and Mantis), they have many autonomous systems, but combat requires (by law) a human pilot, currently. It won’t be long before fully autonomous weapon systems are in use in field conditions, because this is what the military really wants.

Not necessarily. The general idea is that, whatever goals the AI values, getting itself shut down would probably not maximize its goals. Similarly, it could probably achieve its goals better if it was smarter and controlled more resources. So whatever the primary goal, staying ‘alive’, getting more processing ability, more knowledge and more resources are all likely subsidiary goals.

Well, only if you set unreasonable goals.

Let’s put it this way. My mechanic is quite intelligent. But I’m not gonna drop off my car and my credit card and just give him free rein. I tell him to make a plan but not touch anything, just stop and wait for further instructions. Then I review the plan, and if it’s OK I tell him to follow the plan exactly and then stop. I certainly don’t ask him to “end all human suffering, whatever it takes”.

If it works for him, it can work for Skynet.
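The plan-review-execute loop described above can be sketched in a few lines of Python. This is a toy illustration only; the `Mechanic` class, `propose_plan`, and `run_with_approval` names are all hypothetical, standing in for any agent that is only allowed to propose actions until a human signs off.

```python
class Mechanic:
    """Stands in for any capable agent: it may only propose a plan, never act."""

    def propose_plan(self, goal):
        # The agent plans, then stops and waits for further instructions.
        return [f"diagnose: {goal}", f"repair: {goal}", "stop and report"]


def run_with_approval(agent, goal, approve):
    """Ask the agent for a plan; execute it only if the human reviewer approves."""
    plan = agent.propose_plan(goal)
    if not approve(plan):  # the human review step happens here
        return "plan rejected; nothing done"
    # Only the approved plan is executed, exactly as written, then we stop.
    return [f"executed: {step}" for step in plan]


# Usage: approve only plans that contain no open-ended steps.
mechanic = Mechanic()
result = run_with_approval(
    mechanic,
    "rattling noise",
    approve=lambda plan: all("whatever it takes" not in step for step in plan),
)
```

The point of the pattern is that the dangerous part, deciding what to do, is separated from the part that actually does it, with a human gate in between.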

It’s better to build a genie that doesn’t want to kill you, than to depend on always making perfectly worded wishes.

Relevant web comic

Indeed.

Listen. And understand. That terminator is out there. It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.

You just need to screw up building the genie that doesn’t want to kill you just once to maybe get the genie that will kill you for efficiency’s sake.

You and your mechanic are both human beings with about the same level of intellectual ability. That makes it a lot easier for you to predict what he might do and to monitor his activities. Even so, if you don’t know a lot about cars, you are taking a lot on faith. If your mechanic had a Flash level of speed and an Agatha Heterodyne level of mechanical aptitude, you would have to give up all pretense of having any idea what your mechanic is doing.