Do You Trust This Computer?

I really enjoyed this recently released documentary. It features some of the top minds in the AI development field discussing… AI, and it has been making headlines as the media pulls quotes from Elon Musk.

The summary from the creators -
“Science fiction has long anticipated the rise of machine intelligence. Today, a new generation of self-learning computers has begun to reshape every aspect of our lives. Incomprehensible amounts of data are being created, interpreted, and fed back to us in a tsunami of apps, personal assistants, smart devices, and targeted advertisements. Virtually every industry on earth is experiencing this transformation, from job automation, to medical diagnostics, even military operations. Do You Trust This Computer? explores the promises and perils of our new era. Will A.I. usher in an age of unprecedented potential, or prove to be our final invention?”

Available to stream at http://doyoutrustthiscomputer.org/watch

If anyone is involved in AI development, I would love to hear your thoughts on the matter.

I haven’t watched the doc yet, but I do work in cognitive intelligence, and it will absolutely be transformative. Ultimately the effects will be positive, but in the short term many people will be hurt, much like the industrial revolution’s impact on craftsmen, or agriculture’s on hunter-gatherers.

The “final invention” bit I assume refers to the apotheosis: the singularity, where AI becomes smarter than any human and then evolves at an ever-increasing rate, with goals that may not necessarily be great for humans. That’s general AI, which is not what I’m working on, as it is much further away and doesn’t offer near- or mid-term commercial viability. That’s what Elon Musk is concerned about, and I share his concerns there.

Speaking of which, I just read a great book about the AI singularity. Incredibly funny, not a serious take, but it deals with all these very real concerns in a humorous way. Recommended.

https://smile.amazon.com/After-Silicon-Valley-Rob-Reid/dp/1524798053

I’m sorry, citizen, but you do not have enough Security Clearance to post that graphic.

Please report to your local Happiness in Termination Disposal Center immediately.

Yeah, I’m kind of in the field (I’m not a data scientist, but I hire them and am working on an AI), and yeah… I have lots of thoughts on this.

To sum up, though: we humans, from birth, are pattern-recognition machines. It’s the thing that truly separates us from the animals, our ability to recognize patterns and extrapolate useful things from them. You can watch this happen in your own babies as they grow up.

We have reached a time where computers have enough processing power to actually pull off a lot of the theoretical work thought up in the ’60s and ’70s. And now machines are getting as good as, or better than, humans at the pattern recognition we do.

So yeah, outpaced at our own game. Doesn’t look so good for labor, as Adam Smith might say. Looks great for capitalists.

One of the risks I see mentioned is that machines are often being trained on data and results from human sources, so the machines are implicitly learning and incorporating our human biases and prejudices. But the deep learning inside is so inscrutable that there is no way to know what factors it is crunching to make decisions. Management loves it because they can think of the decisions as completely impartial and pure, but how do we know? How do you challenge a mortgage lender’s decision in court if their results end up statistically biased, but there is no way to show, e.g., racial bias in a hidden machine algorithm? We can choose to give up control of our own decision making, but we need to keep human ethics in the loop somehow.
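
For what it’s worth, that kind of black box can still be audited from the outside: compare its outcomes across groups and look for disparities, even if you can never see how it decides. Here’s a minimal sketch in Python; the `model.predict()` interface and the applicant records are hypothetical stand-ins, not any real lender’s system.

```python
# Outside-in audit of a black-box lending model: we can't see inside it,
# but we can still measure whether its approval rates differ across groups.
from collections import defaultdict

def approval_rates_by_group(model, applicants, group_key="group"):
    """Return the approval rate per group for a black-box model."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for applicant in applicants:
        group = applicant[group_key]
        total[group] += 1
        if model.predict(applicant) == "approve":  # assumed interface
            approved[group] += 1
    return {g: approved[g] / total[g] for g in total}

# rates = approval_rates_by_group(mortgage_model, test_applicants)
# A persistent gap between groups is something a court or regulator can point to,
# even while the model's internal reasoning stays inscrutable.
```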

I haven’t read it (although it’s on my list):

https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815

If nothing else, the title’s punning game is on point.

I wonder if any code examples in that book are in Lisp. ;)

Silouette, that’s an excellent point. The real challenge in DL/ML right now is not the algorithms that do the n-dimensional comparisons; it’s the data sets. This is a challenge we are facing. In order to train an AI to make films by itself, we need to feed it as many examples of filmmaking as we can, from as many sources as we can, from as broad a data set as we can. That data set (and getting a data set of all films ever made is … challenging, to say the least… many people are having this problem now) is inherently going to carry all the pattern biases of those filmmakers. It’s an interesting question, the bias of the data set/training pairs/adversarial network.

In other words, cats in, cats out.
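
To make “cats in, cats out” a little more concrete: one simple thing you can do is profile the corpus itself, because whatever is over-represented in the training data becomes the model’s idea of what a film is. A toy sketch; the metadata fields are invented for the example.

```python
# Toy illustration: the distribution of the training corpus is the ceiling
# on what the model can learn. The metadata fields below are invented.
from collections import Counter

films = [
    {"title": "A", "country": "USA", "decade": 1990},
    {"title": "B", "country": "USA", "decade": 2000},
    {"title": "C", "country": "France", "decade": 1960},
    # ... thousands more, skewed toward whatever was easiest to collect
]

by_country = Counter(f["country"] for f in films)
by_decade = Counter(f["decade"] for f in films)
print(by_country.most_common(5))  # any skew here becomes the model's "taste"
print(by_decade.most_common(5))
```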

Another vote for the documentary. It’s not anything drastically new if you follow the topic, but it’s good.

A thought came to me during my daily drive last week: part of what we’re seeing with Facebook, “fake news”, et al. is literally running up against hard biological constraints on information processing - we’re creating new information almost infinitely faster than our ability to process it, and right now we’re just on the cusp of synthetic media technology that can convincingly reproduce, and therefore be used to “fake”, information. I.e., we’re actually in something like a proto-singularity or pre-singularity state: we’re literally on the crest of the wave of our ability to distinguish true information from false information, and that wave is about to sweep us all away.

Imagine seeing your wife cheating on you with your brother, only it never happened, despite the video and sound “revealing” their guilt perfectly reproducing their faces, bodies, and personal vocal intonation. Or, more likely, seeing a presidential candidate making an evil deal with dark powers, only for none of it to be even slightly true.

The takeaway I have seems to be that we need to be really, really working to increase our bio-hardware’s information-processing capacity right now, before things really spin out of control and reality and synthetic reality become indistinguishable. Whether this means eugenics, genetic engineering, cybernetics, neo-Luddism (at least as popular culture construes the Luddite movement), or Butlerian Jihad, I have no opinion, but it feels like we need to keep up in the race toward the information singularity before it overtakes us in such a way that we’re more or less lost.

Maybe the short-term solution, then, is “trusted AI”, run by NGOs or reputable governments, that can filter false information from true to safeguard the public. And there’s going to be an arms race between the AI filters and the AI fakers (if it’s not going on already).

Well over half of Americans seriously, legitimately, fearfully believe in angels and demons.

I don’t like our chances if critical thinking is our defense.

If you are interested in this topic, I have a friend doing a survey as part of her master’s thesis, and there are questions about human interaction with AI in roles such as digital assistants.

This seems like the right thread for your newest nightmare, AI brainwashing:

Basically, researchers have found that it is possible for hackers to feed an AI tiny snippets of seemingly innocuous training data that encode malign objectives. It’s HAL meets the Manchurian Candidate!

I guess AI doesn’t need to reach Skynet-levels of awareness in order to pose a danger to us.

Interesting article.

“Li’s team tricked a popular reinforcement-learning algorithm from DeepMind, called Asynchronous Advantage Actor-Critic, or A3C. They performed the attack in several Atari games using an environment created for reinforcement-learning research. Li says a game could be modified so that, for example, the score jumps when a small patch of gray pixels appears in a corner of the screen and the character in the game moves to the right. The algorithm would ‘learn’ to boost its score by moving to the right whenever the patch appears. DeepMind declined to comment.”
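
To make the trick concrete, here is roughly what that kind of reward poisoning could look like: a thin wrapper around a Gym-style Atari environment that occasionally stamps a gray patch into the observation and quietly inflates the reward when the agent responds with the attacker’s chosen action. This is only a sketch of the idea, not the researchers’ code; the trigger size, action index, and bonus are made up, and it assumes the classic 4-tuple Gym step API.

```python
import numpy as np
import gym

class PoisonedEnv(gym.Wrapper):
    """Sketch of reward poisoning: a hidden trigger patch plus a reward bump.

    With small probability a gray patch is stamped into the corner of the
    observation; if the agent then takes the attacker's chosen action, the
    reward is quietly inflated. A policy trained in this environment learns
    the hidden association without it showing up in normal play.
    """

    def __init__(self, env, trigger_prob=0.05, target_action=2, bonus=1.0):
        super().__init__(env)
        self.trigger_prob = trigger_prob    # how often the trigger appears in training
        self.target_action = target_action  # e.g. "move right" in an Atari action set
        self.bonus = bonus                  # reward inflation when trigger + action co-occur
        self._triggered = False

    def _stamp_trigger(self, obs):
        obs = obs.copy()
        obs[:8, :8] = 128                   # small gray patch in the top-left corner
        return obs

    def reset(self, **kwargs):
        self._triggered = False
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if self._triggered and action == self.target_action:
            reward += self.bonus            # the poisoned signal the agent "learns" from
        self._triggered = np.random.rand() < self.trigger_prob
        if self._triggered:
            obs = self._stamp_trigger(obs)
        return obs, reward, done, info
```

Nothing in a wrapper like that looks like an attack while you watch the agent train; the hidden association only matters once someone shows it the patch later.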

Poisoning learning algorithms is a really interesting attack vector, and seems obvious when you think about it.

Even more so when you consider that most “machine learning” these days consists of stochastic black boxes that, when big enough, are too costly to “retrain” and practically impossible to “fix”.

It’s really hard to fix things we don’t understand. We understand that when we input a picture of a cat, a cat label pops out, and when we put in a picture of a dog, a dog label comes out. But why the algorithm decides one is a cat and the other is a dog is beyond our understanding. Researchers have started really hammering on this “interpretability” problem in the last few years, but often it’s “solved” by making a simpler, less accurate model that humans can understand… one that may react entirely differently to this sort of trick.
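
One common form of that “simpler model” workaround is a global surrogate: fit something shallow and readable, like a small decision tree, to the black box’s own predictions and then read the tree, keeping in mind it is only an approximation. A minimal sketch with scikit-learn; `black_box` and `X` are placeholders for whatever model and feature matrix you have.

```python
# Global-surrogate sketch: approximate an opaque classifier with a shallow
# decision tree so a human can at least read an approximation of its logic.
from sklearn.tree import DecisionTreeClassifier, export_text

def build_surrogate(black_box, X, max_depth=3):
    """Fit a shallow tree to the black box's own predictions on X."""
    y_hat = black_box.predict(X)                       # labels from the opaque model
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(X, y_hat)
    fidelity = (surrogate.predict(X) == y_hat).mean()  # how faithful the stand-in is
    return surrogate, fidelity

# surrogate, fidelity = build_surrogate(cat_dog_model, X_sample)
# print(export_text(surrogate))  # human-readable rules, faithful only up to `fidelity`,
#                                # and possibly blind to a poisoned trigger.
```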

I can see how one “fixes” heuristic models, or easier-to-interpret models that we state explicitly, but I’m not sure how one “fixes” these, aside from feeding back the same poisoned training data with a corrected score/label.

So obvious as to seem trivial. I’m not sure I understand what the research insight is. If you change the rewards in the training, you’re going to change the output, of course. They describe it as “tiny alterations” to the dataset, but it’s not; it’s absolutely fundamental - literally changing what increases the score. It may be changes to only a small part of the overall dataset, but the changes themselves are huge. Of course that’s going to influence the AI’s behaviour!

I suppose the tricky part is that most machine learning algorithms are pretty much black boxes, so even if you know your AI is fucking up in a particular way, you may not know why or how to “cure” it, and may have to start entirely fresh with different training data.

Edit: Ah. Looks like the authors deal with that precise situation in the full paper.

I think the key point is that the AI’s behavior stays the same until it is presented with an occult trigger.

It would not be surprising that a minor change to training data could cause an AI to fail completely. It would also not be surprising that a minor change to training data could have no effect.

But it is somewhat surprising that a minor change to training data could cause no apparent effect, until failure is deliberately triggered by a subtle signal. If so, then how do you know an AI that is working today won’t fail catastrophically tomorrow?

We expect critical systems to be robust, with predictable mechanisms of failure. AIs do not appear to meet these criteria.
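
One partial answer to “how do you know it won’t fail tomorrow” is stress-testing for exactly this kind of hidden trigger: run the same inputs through the model with and without a candidate trigger stamped in and see how often its answers change. It only helps if you can guess roughly what a trigger might look like, and the model, trigger shape, and `predict()` interface below are placeholders, but it is the sort of predictable-failure check we would want.

```python
import numpy as np

def stamp_trigger(images, value=128, size=8):
    """Stamp a candidate trigger (a small gray square) into a batch of images."""
    patched = images.copy()
    patched[:, :size, :size] = value     # top-left corner of every image
    return patched

def backdoor_flip_rate(model, images, value=128, size=8):
    """Fraction of predictions that change when the candidate trigger is added.

    A robust model should barely notice a small gray square; a backdoored one
    may flip a large share of inputs toward the attacker's target behavior.
    """
    clean = model.predict(images)        # assumed predict() interface
    patched = model.predict(stamp_trigger(images, value, size))
    return float(np.mean(clean != patched))

# flip_rate = backdoor_flip_rate(image_classifier, held_out_images)
```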