The 'show why science is awesome' thread:

In theory, I guess. But the reality is we expect everyone to work at all times and economics pretty much requires it as well. Until we hit a Star Trek scenario where everyone gets things for free (which doesn’t even work in Star Trek really), people need jobs and shit.

Sure, but they don’t need menial manufacturing jobs.

To be fair, it’s all some people are really able to do other than working fast food. Not everyone can be a doctor, and even if they could, we don’t need that many doctors. Hell, not everyone can really work in the service industry (which outsources everything it can as well).

Indeed, and the danger with the select few pushing this race is, according to the report I linked (and the one below from Ars Technica on the same subject), that the AI may just turn on them too: the ‘Skynet’ scenario. But first the rest of us will be out of work, living in poverty, and without access to the means to stop any of this from happening.

‘Robots: Destroying jobs, our economy, and possibly the world’:

The PC I’m typing this message from is our long-term enemy; as long as we recognise that, we might have a chance.

I suspect that they could all do something more than menial labor though. Some might be more creative pursuits, just things which aren’t easy to get paid for currently. But once energy is cheaper, it would hopefully be more feasible for them to make a living doing weird stuff that they enjoy.

Like space hippies from Star Trek.

I mean, I’m all for it, but reality isn’t remotely there yet and people live in said reality.

I’m probably in the extreme minority on this one, but I don’t necessarily view that as a bad thing. Bad for homo sapiens, certainly, but it seems like a vastly more adaptable evolution. A silicon-based species (if we can even use that term for non-carbon forms of consciousness—and if not conscious, then fully controllable by humans) wouldn’t be dependent on ecological factors currently under threat from climate change, and it would have the longevity & relatively light weight to make space-faring civilizations immediately possible, aided again by the greatly diminished natural resource problems.

If we’re not too anthropocentric about it, there’s more upside than downside, evolutionarily speaking.

I’m pro-human though, in that I believe we can all (life on planet Earth) have a future, and that future does not have to = no humans. We just have to stop being short-term stupid. And some more on this seemingly hotter-than-normal topic (going by all the articles I’m finding):

‘The superhero of artificial intelligence: can this genius keep it in check?’:

Demis Hassabis has a modest demeanour and an unassuming countenance, but he is deadly serious when he tells me he is on a mission to “solve intelligence, and then use that to solve everything else”. Coming from almost anyone else, the statement would be laughable; from him, not so much. Hassabis is the 39-year-old former chess master and video-games designer whose artificial intelligence research start-up, DeepMind, was bought by Google in 2014 for a reported $625 million.

He is the son of immigrants, attended a state comprehensive in Finchley and holds degrees from Cambridge and UCL in computer science and cognitive neuroscience. A “visionary” manager, according to those who work with him, Hassabis also reckons he has found a way to “make science research efficient” and says he is leading an “Apollo programme for the 21st century”. He’s the sort of normal-looking bloke you wouldn’t look twice at on the street, but Tim Berners-Lee once described him to me as one of the smartest human beings on the planet.

Artificial intelligence is already all around us, of course, every time we interrogate Siri or get a recommendation on Android. And in the short term, Google products will surely benefit from Hassabis’s research, even if improvements in personalisation, search, YouTube, and speech and facial recognition are not presented as “AI” as such (“Then it’s just software, right?” he grins. “It’s just stuff that works.”).

In the longer term, though, the technology he is developing is about more than emotional robots and smarter phones. It’s about more than Google. More than Facebook, Microsoft, Apple, and the other giant corporations currently hoovering up AI PhDs and sinking billions into this latest technological arms race. It’s about everything we could possibly imagine; and much that we can’t.

I don’t know, isn’t history littered with the mistakes and unforeseen consequences of ‘clever’ men and women?

You guys are making me wonder if the Butlerian Jihad will always be fiction.

…and while this could have gone in a number of the threads in PR, given the current trend of AI stuff in this thread (which is interesting and sciencey), here’s some pretty horrific stuff about the real, current SKYNET program:

‘The NSA’s SKYNET program may be killing thousands of innocent people’:

In 2014, the former director of both the CIA and NSA proclaimed that “we kill people based on metadata.” Now, a new examination of previously published Snowden documents suggests that many of those people may have been innocent.

Last year, The Intercept published documents detailing the NSA’s SKYNET programme. According to the documents, SKYNET engages in mass surveillance of Pakistan’s mobile phone network, and then uses a machine learning algorithm on the cellular network metadata of 55 million people to try and rate each person’s likelihood of being a terrorist.

Patrick Ball—a data scientist and the director of research at the Human Rights Data Analysis Group—who has previously given expert testimony before war crimes tribunals, described the NSA’s methods as “ridiculously optimistic” and “completely bullshit.” A flaw in how the NSA trains SKYNET’s machine learning algorithm to analyse cellular metadata, Ball told Ars, makes the results scientifically unsound.
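(To see why a “scientifically unsound” classifier matters at this scale, here’s a back-of-the-envelope sketch in Python. The false-positive rate below is invented purely for illustration; it is not a figure from the leaked slides.)

# Base-rate arithmetic: even a classifier that rarely misfires, applied to
# tens of millions of mostly-innocent people, flags a lot of innocents.
population = 55_000_000                       # mobile users under surveillance
false_positive_rate = 0.001                   # hypothetical: 0.1% of innocents misflagged
print(int(population * false_positive_rate))  # -> 55000 innocent people flagged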

Somewhere between 2,500 and 4,000 people have been killed by drone strikes in Pakistan since 2004, and most of them were classified by the US government as “extremists,” the Bureau of Investigative Journalism reported. Based on the classification date of “20070108” on one of the SKYNET slide decks (which themselves appear to date from 2011 and 2012), the machine learning program may have been in development as early as 2007.

In the years that have followed, thousands of innocent people in Pakistan may have been mislabelled as terrorists by that “scientifically unsound” algorithm, possibly resulting in their untimely demise.

That’s a great headline, but it seems like a heck of a lot of speculation. I am in no way connected to the intelligence community, but if I had such a tool, I would use it to identify targets for surveillance. Surveillance doesn’t mean “drone strike.” Of course, that’s just a bunch of speculation on my part.

Exactly. It’s not like they take the outcomes of the system and say, “the computer says this guy is a terrorist, kill him!”

They would just use that system to focus human surveillance.

The article looks like speculation, but you guys should read Rule 34, by Charles Stross.

I also consider this the goal of technological progress. It would mean a much more socialized economy to make it work though (or at least a dramatic reassessment of how to value human production, which I think is even harder).

But as others have said, each step in that direction probably brings a harsh readjustment until priorities are realigned with the productive reality, as in the previous industrial revolution (which solved the unemployment problem by limiting who was expected to work, namely taking kids and the elderly out of the equation and limiting working hours, so the average work per working citizen dropped to about half, and a potential 50% unemployment rate was thus taken care of. The idea that new jobs simply took the place of the old jobs is just that, a myth).

The comments actually go into a lot more detail (as would a general internet search around the subject, to help you dig up details and avoid the weak ‘speculation’ tag):

This really could have gone in the Evil™ thread, obviously, but it was a trend in the science thread in terms of AI and where it is going/what it is doing etc. Just because you may not like the tone of an article, or its subject, does not make it speculation. The SKYNET program is not speculation, nor are innocent deaths due to our drone strikes; these are all real.

I could do a general internet search and come up with results telling me that Obama is actually a lizard man. What I posit is that none of us, including earnest article commentators, know the actual process with which this software was used. It could be used for evil, but it is not required or even particularly helpful for evil to be done, so why would it be?

Nobody is saying that the program doesn’t exist, that drone strikes don’t exist, or that innocent deaths don’t exist. Get past that - it’s not part of the discussion.

I could not resist. Behold, 30 seconds of Google yields:

True. I think I just find it basically pretty disgusting that our militaries are working in areas where this might be a reality, and if it isn’t one now, it certainly looks likely in the near future. It’s a dark path to walk, imho. Anyway, the Guardian, that bastion of liberal thought and news, takes issue with the article too, and provides a bunch of detail:

‘Has a rampaging AI algorithm really killed thousands in Pakistan?’:

A killer AI has gone on a rampage through Pakistan, slaughtering perhaps thousands of people. At least that’s the impression you’d get if you read this report from Ars Technica (based on NSA documents leaked by The Intercept), which claims that a machine learning algorithm guiding U.S. drones – unfortunately named ‘SKYNET’ – could have wrongly targeted numerous innocent civilians.

Let’s start with the facts. For the last decade or so, the United States has used unmanned drones to attack militants in Pakistan. The number of kills is unknown, but estimates start at over a thousand and range up to maybe four thousand. A key problem for the intelligence services is finding the right people to kill, since the militants are mixed in with the general population and not just sitting in camp together waiting to be bombed.

One thing they have is data, which apparently includes metadata from 55 million mobile phone users in Pakistan. For each user they could see which cell towers were pinged, how they moved, who they called, who called them, how long they spent on calls, when phones were switched off, and any of several dozen other statistics. That opened up a possible route for machine learning, neatly summarised on slide 2 of this deck. If we know that some of these 55 million people are couriers, can an algorithm find patterns in their behaviour and spot others who act in a similar way?
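(For anyone unfamiliar with the jargon: what the slides describe is ordinary supervised classification. Here’s a minimal sketch in Python with scikit-learn, using invented feature values and a random-forest model as a stand-in; the actual SKYNET features, training data, and algorithm are only known from the leaked slides.)

# Hypothetical sketch: train a classifier on call-metadata features for a
# handful of known couriers, then score everyone else. All numbers invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for the metadata: 10,000 users x 6 behavioural features
# (call counts, tower changes, switch-offs, etc.), values made up.
X = rng.random((10_000, 6))
known_courier = np.zeros(10_000, dtype=int)
known_courier[:5] = 1   # a tiny set of confirmed positives (invented here)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, known_courier)

# Score every user: estimated probability of belonging to the 'courier' class.
scores = model.predict_proba(X)[:, 1]
top_suspects = np.argsort(scores)[::-1][:100]   # the 100 highest-scoring users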

What exactly is a ‘courier’ anyway? This is important to understanding some of the errors that The Intercept and Ars Technica made. Courier isn’t a synonym for ‘terrorist’ as such - it means a specific kind of agent. Terrorist groups are justifiably nervous about using digital communications, and so a lot of messages are still delivered by hand, by couriers. Bin Laden made extensive use of couriers to pass information around, and it was through one of them – Abu Ahmed al-Kuwaiti (an alias) - that he was eventually found.

That’s who the AI was being trained to detect – not the bin Ladens but the al-Kuwaitis. Not the targets so much as the people who might lead agents to them. Ars Technica implies that somehow the output of this courier detection method was used directly to “generate the final kill list” for drone strikes, but there’s zero evidence I can see that this was ever the case, and it would make almost no sense given what the algorithm was actually looking for - you don’t blow up your leads.

Still, we should all keep in mind this question: ‘Would God approve of me using AI to kill other humans? How would that look on my rap sheet when I’m at the Pearly Gates?’

I think the load-bearing part of this question is “kill other humans”, not “using AI”. At least at this point, the “AI” in question is strictly a tool.

Well yes, especially in the Christian context, killing other humans is expressly forbidden by God. Still, just in case folk see using AI to do the actual killing as ‘wriggle room’ to get past those Pearly Gates, well, you’d be wrong. So keep that in mind, NSA computer AI researchers (and people like that).


So let’s get back to other science stuff (as this current trend is what it is):

‘Is D-Wave’s quantum processor really 10⁸ times faster than a normal computer?’

We have been following D-Wave’s claims about its quantum hardware at Ars for a number of years. Over that time, my impression has oscillated between skepticism, strong skepticism, and mild enthusiasm.

Back in November, D-Wave issued a press release that basically asked tech journalists to spray paint a finish line just behind their feet and break out a victory flag. It seemed a bit much. But now that D-Wave has declared victory, perhaps it’s time to re-examine the skepticism. What exactly has D-Wave achieved, and does it constitute victory? Either way, where are the company’s efforts focused now?

Of course, the best way to judge D-Wave is not by its press releases, nor by the specifications and benchmarks glued on the boxes of its processors (these should be treated with utmost paranoid suspicion). Instead, it’s better to look at what researchers who have access to D-Wave hardware are claiming in publications. And despite my suspicions, the paper accompanying that last press release, plus a couple of other papers released earlier on the arXiv, is interesting. All together, they paint a picture that says we should finally be cautiously optimistic about D-Wave’s progress.

If you are unfamiliar with what D-Wave does and its history, you could easily read several other articles before continuing. The TLDR is as follows: a long time ago, in a country far far away (Canada), a little start-up company announced a 16-qubit quantum computer. This surprised everyone, considering that a lot of researchers were previously very proud of achieving one or two qubits. To muddy the waters even further, the alleged quantum computer used an entirely different approach from everyone else’s attempts, one called adiabatic quantum computing.

In adiabatic quantum computing, one does not directly perform operations on individual qubits or groups of qubits. This is unlike circuit quantum computers, where there are discrete operations such as a CNOT (controlled not, the fundamental logic operation in quantum computing). Instead, the problem is re-configured so that its solution is the ground state of an energy landscape.

Think of it like this: in an energy landscape shaped like a bowl, a particle can sit at the bottom of the bowl, it can be sloshing back and forth up the sides of the bowl, or it can be anywhere in between. The ground state is the one where the particle sits at the bottom of the bowl. For a bowl, this is easy to figure out. But for an arbitrary landscape with multiple particles, the ground state is not easy to determine in advance. So even though we know that our desired solution is the ground state of some energy landscape, we cannot conveniently calculate what that state is, and therefore we still cannot efficiently find a solution.

This is where things get clever for D-Wave. Instead of starting at the desired landscape, the company starts with the bowl and puts all the particles in the ground state of the bowl. Next, it slowly and carefully deforms the bowl to the more complicated landscape we care about (this is called an adiabatic process, hence the name adiabatic quantum computer). If it’s done carefully, the particles stay in the ground state—and at the end of the transformation, we have the solution.

Afterward, to get the answer, we simply read out the state of all the particles. Job done.
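(If you want to see the idea in miniature, here’s a toy numerical sketch in Python/numpy of that ‘slowly deform the bowl’ process for two qubits. It is purely illustrative of adiabatic evolution, not of D-Wave’s actual hardware, qubit counts, or control scheme, and the problem energies below are made up.)

# Toy adiabatic evolution: start in the easy ground state of a 'bowl'
# Hamiltonian, slowly interpolate towards a problem Hamiltonian whose
# ground state encodes the answer, and check where we end up.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

# Easy starting Hamiltonian: transverse field, ground state = equal superposition.
H_easy = -(np.kron(X, I) + np.kron(I, X))

# Problem Hamiltonian: a made-up energy landscape over bit strings 00, 01, 10, 11;
# the answer we want (here '10') is given the lowest energy.
H_problem = np.diag([3.0, 2.0, 0.0, 1.0])

state = np.linalg.eigh(H_easy)[1][:, 0]   # ground state of the 'bowl'

# Slowly deform the landscape: H(s) = (1 - s) * H_easy + s * H_problem.
steps, dt = 20_000, 0.01
for k in range(steps):
    s = (k + 1) / steps
    H = (1 - s) * H_easy + s * H_problem
    vals, vecs = np.linalg.eigh(H)
    # Crude Schrodinger step: |psi> <- exp(-i * H * dt) |psi>
    state = vecs @ (np.exp(-1j * vals * dt) * (vecs.conj().T @ state))

probs = np.abs(state) ** 2
print('P(00), P(01), P(10), P(11):', probs.round(3))
# If the deformation was slow enough, most of the weight should sit on '10'.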

Just weird as heck…and cool and interesting too :)