Elon Musk and the dangers of AI

You would have to go to the library.

Visiting the library is time-consuming, but is it more time-consuming than hand-washing all your clothing? If not, then maybe washing machines have made a bigger impact on your life than your smartphone.

After all, there are still people in this country who feel like they don’t need to own a smartphone. Not so many get by without using a washing machine.

Wait, what? Corporations are unlikely to reach the conclusion that humans should be deleted.

Only so long as humans remain their best potential source of profits. If that ever changes, corporations may even take an active hand in wiping us out to make room for whatever replaces us in that role. Remember back in the day when Coca-Cola hired death squads to break up unionists in South America? How much do you want to bet those same unionists drank Coca-Cola?

Corporations aren’t pro-human in the slightest.

Was thinking about the AI problem again today, and I keep coming back to this: most of us here work with computers, we know them, and we know they’re actually pretty flaky sometimes. For all their magic, they’re not consistent enough to really form the basis of a stable, self-aware, sentient intelligence… not yet, certainly, but maybe never?

Come on, how realistic is it, really, that we’ll get self-aware AI any time soon enough for even our children or grandchildren to worry about?

Maybe AI is going to be like flying cars - something that’s a part of the vision of the future that never quite comes to be (and meanwhile other cool things that are a bit like flying cars or AI do eventually come to be, just like everyone and their mother now has a thing a bit like a Star Trek communicator).

I don’t know, gurugeorge, I’ve lived long enough to see some jaw-dropping advances already. In the third grade a teacher held up an advertisement for calculators, then told us kids, “Some day these will be cheap enough that everyone will be able to own one.” We all laughed and said no way. He’s probably long dead, but sometimes I wish I could have met him when I got my first computer (calculator? Ha!) and told him, “Dude, you saw the future.”

I think these AI people are the same. It’s coming, I have no doubt of that. When? Who knows? But it’s coming. The thing is, since it’s going to be something completely unprecedented in history, there is no real way to predict anything about its nature. It could turn out to have some intrinsic limitation that means it’s no danger to humans at all. It could turn out so powerful that on the first day it comes into existence it goes out of control. It’ll be interesting to see either way.

(when are people going to stop bringing up ‘flying cars’? ;) That was a cartoon, the Jetsons right? pretty 1950’s version of the future?)

Everything is connected these days, and we’ve structured many of our utilities and public works (sorry, 4X strat players) to rely on computer systems to function. Even our humble car is completely reliant on its own CPU to run.

Military, check. That is the big one. Aside from creating our own ‘hunter-killers’, imagine a naughty AI deciding to intervene in the code checks nuclear bombers make to ensure they turn around without firing their warheads. Or, like in the awesome ‘WarGames’ film, getting into that kind of military system. The military is already heavily reliant on computers to function; a rogue AI could really call an end-game scenario for mankind in that environment. It could be a keydrive, an email, any kind of ‘soft’ entry to get into the system, and then ‘game over man, game over!’

Even just cutting power from our utilities would be enough to send us back to the stone age pretty quick. So yes, even if the main current threat is from human hackers, imagine someone letting a super AI (assuming we make one) out of its box to do their work for them?

How many times were you sitting around, and wondered about something, and then were like, “Hey! Let’s go to the LIBRARY!”

Yeah, pretty much never. Me neither. And even if you did, the information in libraries is all outdated by a decade anyway. Remember when we used to use encyclopedias?

We washed our clothes because having clean clothes was important… knowing some random factoid about something wasn’t really that important. But that’s the amazingness of information accessibility now. Since accessing information is so trivial, there’s basically nothing that “isn’t worth” looking up. It makes learning about stuff so easy, and it’s awesome.

As one of the biggest pooh-pooh-ers of AI worry around here, I’ll take a swing at that. I think it’s fairly likely. Obviously that’s just a guess, but it’s clearly going to be really useful and there are plenty of smart people working on the problem. That, in conjunction with ever cheaper and more powerful computer hardware, means we ought to be able to crack this one in the next 100 years.

It is very difficult to predict, but things have a way of sneaking up on you. Kevin Drum used an analogy of filling up Lake Michigan one drop at a time, except that the rate that you deposited drops grew exponentially. For a long, long time, it looks like nothing is happening, and then things start to happen very quickly as critical thresholds are reached.
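That analogy is easy to sanity-check with a few lines of code. This is a minimal sketch with assumed round numbers (the lake volume and drop size are rough approximations, not exact figures): the deposit doubles every year, and the lake sits essentially empty for most of the run before filling almost all at once.

```python
# Sketch of the "filling Lake Michigan" analogy: deposits double each
# year, so the lake looks empty for decades, then fills almost at once.
# LAKE_ML and DROP_ML are rough assumed values, not exact figures.
LAKE_ML = 4.9e18        # ~4,900 cubic km of water, in millilitres (approx.)
DROP_ML = 0.05          # one drop of water, in millilitres (approx.)

total, deposit, year = 0.0, DROP_ML, 0
milestones = {}
while total < LAKE_ML:
    total += deposit
    deposit *= 2        # exponential growth: the deposit doubles each year
    year += 1
    if year % 10 == 0:
        milestones[year] = total / LAKE_ML   # fraction of the lake filled

print(f"lake full after year {year}")
for y, frac in sorted(milestones.items()):
    print(f"year {y:3d}: {frac:.6%} full")
```

Under these assumptions the lake is still barely 1% full at year 60, yet overflows by year 67: exactly the “nothing happens, then everything happens” shape the analogy describes.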

Ten years ago, in my intro course for computer vision, my prof described his personal Holy Grail of computer vision as the postcard problem. That is, given a picture (like on a postcard), write a reasonable sentence describing its contents. Ten years ago, possible solutions seemed really far off. Five years ago, given the rate of progress, it looked like it would be solved in another 20 years. Today, it’s pretty much solved. Not to say that there isn’t a whole lot left to work on in computer vision, but the progress on object category recognition in the last few years has been pretty astounding.

It’s worth noting that this progress has been driven in large part by the simple availability of huge amounts of data and computation power. These aren’t radically new techniques, but given the ability to learn from very large data sets, even fairly ‘simple’ techniques can do surprisingly well on what I’d thought were pretty hard problems. Machine learning is getting to be a very, very powerful hammer.
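The "simple technique + lots of data" point can be illustrated with a toy example. This is a sketch on synthetic data (two fuzzy clusters, invented for illustration, nothing to do with the actual vision systems above): a 1-nearest-neighbour classifier, about the simplest learner there is, improves purely by being handed more labelled examples.

```python
# Toy illustration: a 1-nearest-neighbour classifier gets better with
# more labelled data, with no change to the algorithm itself.
# The data is synthetic: two Gaussian clusters around (0,0) and (3,3).
import random

random.seed(0)

def make_point(label):
    # sample a 2-D point from the cluster belonging to `label`
    cx = 0.0 if label == 0 else 3.0
    return (cx + random.gauss(0, 1.0), cx + random.gauss(0, 1.0), label)

def predict(train, x, y):
    # brute force: classify by the single closest training example
    nearest = min(train, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
    return nearest[2]

test_set = [make_point(i % 2) for i in range(500)]

results = {}
for n in (10, 100, 5000):
    train = [make_point(i % 2) for i in range(n)]
    correct = sum(predict(train, x, y) == lbl for x, y, lbl in test_set)
    results[n] = correct / len(test_set)
    print(f"{n:5d} training points -> {results[n]:.0%} accuracy")
```

Nothing about the method changes between runs; only the amount of data does, which is a miniature version of what has been driving the recent vision results.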

As a child I often wondered about something, asked someone I considered knowledgeable, and got the wrong answer. Today, I can get the wrong answer from a stranger who lives thousands of miles away.

Getting the right answer has always required effort.

Remember when we used to use encyclopedias?

Yes. Today we use wikipedia. I’m not sure that counts as a huge improvement.

knowing some random factoid about something wasn’t really that important

Knowing random factoids is satisfying, but still largely useless.

Wikipedia is an absolutely immense improvement over any encyclopedia.

Not only does it have orders of magnitude more content than any encyclopedia ever had, but it is far more up to date.

There was a big article a few years back, I think it was in Time magazine, that discussed how technology is moving forwards at a geometric pace. The more new tech we develop the faster things leap as they build upon the cumulative knowledge of the past and the inspirations of the present. Their predictions for the next 20 years were quite impressive.

Although, to be fair, your encyclopedia had a fairly good chance it was accurate. Wikipedia’s accuracy rate is (from anecdotal evidence) not nearly as good.

Actually, in studies comparing Wikipedia to managed encyclopedia sources, it’s not less accurate.
http://blog.wikimedia.org/2012/08/02/seven-years-after-nature-pilot-study-compares-wikipedia-favorably-to-other-encyclopedias-in-three-languages/

I stand corrected. So wikipedia has greater breadth, greater depth, and (at least according to that study) greater accuracy.

But, damnit, it doesn’t have those cool movies my old Encarta CD had! :)

On reflection, these challenges seemed impressive and difficult at the time, and they are, but they are turning out to be more ‘engineering challenges’ than ‘intelligence’. They are brute-force programs meant to match patterns against millions of examples. But ask that program whether the bridge in the picture, with the rust spots, is safe in a 90 mph wind.

Maybe it’s unfair to set the goalposts so far, but to me it would be a program/entity that we develop to find the tastiest apple pie recipe, but then comes back to us to ask us the meaning of life (shortened).

So I also disagree that it’s a hardware, or software or data problem. Future AI would probably run on a 486 with C++. It’s a thinking/learning/creativity problem, and I think we’re at step seventeen out of a million on the way there.

But let’s be honest - what’s one of the most common experiences with computers? They crash. And who has to switch it off and switch it on again? A human being.

Now, I understand that with the things we here might often use (programs to make games, music, art, and games themselves), it’s not mission critical programming, and military stuff is probably better, and they’ll have secondary and tertiary systems and failsafes and stuff.

But … take the self-driving cars thing. A few years ago everyone was like, “wow, we’ll have self-driving cars in a few years”; it turns out it’s still harder than people thought. A self-driving car might do very well for ages, but it’s those tricky moments when it’s flummoxed, and a human wouldn’t be, that mean it’s still not ready for prime time as an “intelligent” system. Intelligence is precisely the trick of not being flummoxed by novelty.

Same with speech recognition and face recognition; heck, even OCR is still flaky to some extent. Expert systems? Well, they’re helpful, but they aren’t going to replace doctors any time soon.

We’re the product of millions of years of evolution. The brain is still the most complex thing we know; and there are soooo many ways the brain can go wrong, it’s all such a jerry-built contrivance, very robust in some ways, but surfing a narrow corridor of functionality in others.

So - yeah, maybe in 100 years or so, but not any time soon, at least not for self-aware, standalone AI - and it will still probably be subject to the AI equivalent of epileptic fits.

I think what’s far more likely is continued cyborgification and melding of humans and machines, so that we become more integrated with computers that do what computers do best, while our brains do what they do best (which is still pattern recognition, coping with novelty, etc.).

I’m not so sure AI is that far away.

But enhancing Humans is a shitload safer. So…

And another thing - if people think they can make self-aware AI just off the bat, they’re barking up the wrong tree. If consciousness and intelligence are anything, they are products of being social animals (our closest neighbours to self-awareness and intelligence in the animal kingdom, like corvids, apes, etc., are all social creatures, with the exception of the octopus, which is pretty smart, but a solitary animal).

Far more likely that the key will be creating a community of AIs that develops as a community, rather than a single AI. Self-awareness is a function of “how do I appear to others?”

Playing together will also be found to be very important, I believe.

I think sometimes the insight comes after the engineering. Our own minds are the result of an iterated ‘whatever works now’ tinkering process. I also don’t think it’s quite correct to say these deep nets are just brute-force programs. They certainly learn more useful features than people are able to devise themselves, and the features seem to be meaningful within the problem domain. It’s not intelligence, and there is obviously so much that an actual neural network has that they’re missing (feedback, dynamics, a million other things).

But I think the pattern of development in this area should further undermine confidence in assertions of the form “X just turns out to be number-crunching, but Y is the sign of true intellect”. At this stage I really wouldn’t care to guess what sort of problems are true tests, or how far away they might be from being solved.

On the other hand, I definitely agree that current systems aren’t really thinking or applying creativity or anything like that. But whether it is step 17 out of a million or 17 out of 77, I couldn’t say. I don’t think anyone else can either.