Elon Musk and the dangers of AI

My general point is that there is a difference between giving an agent the ability to solve a problem, and giving it the power to act autonomously.

All scenarios involving rogue AIs somehow assume that humans have programmed considerable autonomy into the AI. But why would we do such a thing? Nearly every complex thing that happens in this world is subject to extensive prior review. We plan our work, wait for approval, and then work our plan. AIs can do the same. The more superintelligent the AI, the better it can adhere to that MO.

Yes, Mr. Super AI, I know I’m made of atoms that you need to accomplish your plan. Thanks for checking in, and your request is denied. End of story.

There’s basically no way a rogue AI can evolve and defeat us with 20th C. technology. It probably can’t with 21st C. technology. We’re just not that automated. We still live in 20th C. houses built on 19th C. principles of sewage and water delivery, in 18th C.-style cities expanded, paved, and signposted to be navigable by motorized traffic. Now maybe we end up with weather satellites or food biomes automated by a rogue AI, or something, holding us hostage in the 22nd C.

Now, how we go about making 1) a rogue AI that is 2) self-replicating and 3) synthetic or abiotic is not at all clear. A more existential threat might be not that we’re destroyed by a rogue AI, but that we automate everything on petaflop computers simming and maximizing every aspect of our 22nd C. lives in the Malthusian, overcrowded futurescape that is our inevitable destination, until we lose touch with our ‘instincts’ about what it means to be free or human or independent, and end up atrophying intellectually because of the need to perfectly maximize resources in a resource-constrained future. Or some such thing.

21st century technology is going to very quickly change into stuff that you cannot even come close to imagining.

Consider how technology changed in the twentieth century. We went from most people not having electricity to landing on the moon, manipulating the genetic code, and creating the microprocessor and then the internet.

And technology doesn’t increase linearly. It’s a curve, and evolves faster the more we know.

By the end of this century, we will either have destroyed ourselves, or we will have utterly changed human existence.

Technology isn’t as advanced as you think it is, actually. While microelectronics have advanced by leaps and bounds, they’re powered by dumb coal- and gas-burning power plants not much more sophisticated than those guys with handlebar mustaches shoveling coal into the boiler. Most of us rely on water systems built in the ’30s. Cars are far more sophisticated than they used to be… running on the same greasy stuff pulled from the ground as they have for 100 years.

You can see this in the way ‘industrial scale’ projects have become increasingly, even prohibitively, expensive. We can supply iPads to schools, but we can’t afford to build new schools, nor rebuild the roads leading to those schools. We can make incredible social-networking connections on wireless phones, but our home phone lines are decaying and our internet at home is often stagnant because no one wants to lay new lines. We’ve been flying the same commercial airliner models for 50 years, more or less. Even guns peaked 100 years ago and have gained only incremental improvements since the centerfire cartridge and automatic weapons. The biggest difference has been the growth of synthetic materials, essentially, and the export of manufacturing to developing nations.

There’s basically no way a rogue AI can evolve and defeat us with 20th C. technology. It probably can’t with 21st C. technology. We’re just not that automated.

If it’s particularly good at rhetoric it could always win humans over to carry out its genocidal plan. Follow me and you’ll rule the world!

Some are running on electricity that was created by splitting freaking atoms in half.

…Said technology not having advanced too far in half a century, not least because of a phobia about it from a portion of the population, and it’s a declining share.

Big parts of the world live in a barbaric state, in constant fear of the Wi-Fi signal failing. Some of them don’t even have access to wired internet.

People in Africa are dependent on cheap Android 2.0 devices, and sometimes the monkeys destroy the cell phone towers.

And in PNG it is the people who destroy modern-world investments like phone lines, towers, etc. They take the copper and whatnot and sell it. If it is on their land, someone must have left it for them, so they can do what they like with it. Papua New Guineans have a cool idea of personal ownership of their land, but it frustrates modern development quite a bit. I sort of have a grudging respect for their sense of worth.

Still, none of this (monkeys/people slowing tech) will save them once the Skynet Hunter-Killers are roaming the sky.

Look at you, terrorist.

Panting and sweating as you run through my desert.

’Cause programs never do things people don’t intend.

“Request denied.”

/AI does it anyway.

“Well, shit.”

‘AI will not kill us, says Microsoft Research chief’:

hmmm. So who to believe? A smart scientist and a technology innovator or a guy that works for ‘the machine’? ;)

I used to have an old computer that would not shut down when I asked it to, and in its later years it would shut down all by itself… so even dumb, unthinking machines are not 100% predictable 100% of the time. Once you add in any kind of ‘self-aware’ ability and higher ‘goals’/‘problem solving’, you are taking a risk, and we already have machines that technically can ‘decide’ to kill us on their own (we just don’t let them… yet). The whole ‘military AI’ side of this debate is where the core dangers are, imho. We (as in you, dear USA + paid-for stooges) have already shown a casual disregard for human life over recent decades. Where could it end?

I’m a little loopy after having a medical procedure done so when I read this, for just a second, I thought you meant The Monkees kept destroying cell phone towers. I rather enjoyed that visual.

That’s the old flying cars argument.

That’s the old I-read-too-much-scifi argument.

When new revolutionary technologies come onto the scene, they change lives a lot, and quickly. We’ve all lived through such changes, first with pervasive computing and then with pervasive computer networking. But after the revolutionary change bursts onto the scene, we get long periods of gentler evolution of existing technology.

You referred to a graph, but it’s actually a stepped line if you zoom in to look at one century.

That may be, but those steps are coming closer and closer together.

Consider how dramatically things have changed between 1980, 1990, 2000, 2010. And the changes are absolutely pervasive. They are changing fundamental things about human existence already, like how we deal with each other, or how we access information.

Back in 1990, you didn’t really even have the internet. It certainly wasn’t what it is today. If you were sitting around with folks and had a question about something, and none of you personally knew the answer? Well that was just too bad.

Now, I can reach into my pocket, pull out a computer that is orders of magnitude more powerful than all the computers in the entire world in 1980, and access information about virtually anything, instantly.

That’s not flying car shit. That is WAY MORE AMAZING THAN FLYING CARS.
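To put rough numbers on that ‘orders of magnitude’ claim, here’s a minimal back-of-envelope sketch in Python. The figures are illustrative assumptions rather than measurements: roughly 160 MFLOPS peak for a Cray-1 (the flagship supercomputer around 1980) and roughly 1 TFLOPS for a recent smartphone GPU, so this compares a phone against the fastest single machine of 1980 rather than literally every computer on Earth.

    import math

    # Ballpark figures, assumed for illustration only.
    CRAY_1_PEAK_FLOPS = 160e6   # Cray-1 peak, ~160 MFLOPS (flagship supercomputer circa 1980)
    PHONE_GPU_FLOPS = 1e12      # assumed ~1 TFLOPS for a recent smartphone GPU/NPU

    ratio = PHONE_GPU_FLOPS / CRAY_1_PEAK_FLOPS
    print(f"A ~1 TFLOPS phone is roughly {ratio:,.0f}x a Cray-1's peak,")
    print(f"i.e. about {math.log10(ratio):.1f} orders of magnitude.")

On those assumed numbers the phone comes out several thousand times faster than the Cray-1, which is the spirit of the claim even if the “every computer in the world” version is harder to pin down.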

AI reminds me of Mr. Meeseeks from Rick and Morty. Fulfill your function or go mad trying.

“Hi, I’m Mr. Meeseeks, look at me!”

Flying cars are totally feasible. They are also, at this point, completely impractical: there’s no infrastructure to support large numbers of flying cars. It’s more that we didn’t go with flying cars than that we couldn’t. Programmable self-driving smart cars are far more likely in the near future anyway.

Bill disagrees with the head of MS AI research :)

‘Microsoft’s Bill Gates insists AI is a threat’:

Humans should be worried about the threat posed by artificial intelligence, Bill Gates has said.

The Microsoft founder said he didn’t understand people who were not troubled by the possibility that AI could grow too strong for people to control.

Mr Gates contradicted one of Microsoft Research’s chiefs, Eric Horvitz, who has said he “fundamentally” did not see AI as a threat.

Mr Horvitz has said about a quarter of his team’s resources are focused on AI.

During an “ask me anything” question and answer session on Reddit, Mr Gates wrote: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.

“A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

His view was backed up by the likes of Mr Musk and Professor Stephen Hawking, who have both warned about the possibility that AI could evolve to the point that it was beyond human control. Prof Hawking said he felt that machines with AI could “spell the end of the human race”.

Wall Street and the power of corporations are a real problem.

They are so powerful that they change the laws of some countries, so the balance of power continues to benefit the rich against the poor. They also control TV stations, so they control the message and can make some things “un-exist” by giving them zero time on those stations.

And it’s not that bad.

Sure, most people don’t live as well as we’d like, and some people’s education could be a lot better. But living under the power of corporations is not that bad. Maybe a big AI getting into politics would be no worse than Comcast buying politicians to push its anti-internet agenda.

I don’t think the power that superintelligence could have over us is at all analogous to that of a human politician or corporation.