Social media controls the world

As I pointed out upthread, if the best masters of Go are learning from the AI, then it’s a pretty safe argument that

  1. The algorithms can’t just be memorizing moves, as there would be nothing novel to learn from their approach.
  2. The algorithms are finding novel strategies, which means they have surpassed humans rather than merely matched them.

What is also important is that while AlphaGo is amazing at playing Go, it can’t figure out what to make you for dinner. AI is trained for a specific task, and excels at that task, but there isn’t anything like generalized AI that can solve “problems” in a more general way. Humans are still far better there, at least some of us…

[timex, I know you aren’t making this argument]

Fine, it’s completely different, but that’s not my main point. What I’m saying is that our minds have mechanisms which are not fully understood (for the time being; they will be at some point), and those mechanisms make our minds MUCH more efficient at doing anything, including playing Go. AlphaGo does not play humans on equal terms (if it did, it would get crushed) because it has a backlog of millions of games and it perfectly “remembers” all of them, while a human mind has a backlog of no more than 10-20k games and remembers hundreds of them, only a few perfectly. This is why it has developed previously unknown strategies: it has played more games at “master” level than the total number of master-level games humans have played against each other throughout history.

What do you think would happen if you put the software up against a human master on equal terms, such as a cap on the number of training games? If you restricted its electricity consumption to that of the human brain, it would probably play at the level of a child, if that.

You are making totally empty statements.

“What if you didn’t train a neural network? Then it wouldn’t be good at doing that thing you didn’t train it for!”

No shit. Who cares?

Your brain spent decades learning.

Maybe you should stop debating your own imagination and start addressing my points. I didn’t suggest no training; I suggested training similar to what a human player receives over their career.

It’s relevant because a single human mind can accomplish a task (such as playing Go) almost as well as 1202 CPUs, 176 GPUs and a database of millions of games. So maybe we should find out how it works first, then replicate it and possibly improve on it. Crunching numbers aimlessly won’t ever lead to anything more than number crunchers.

I concur. You should educate yourself on human cognition, neural physiology, and modern artificial intelligence concepts.

That’s the equivalent of “L2p, noob!”. Well played, well played…

You know what the concept of deep learning, and specifically neural nets, is based on, right? IT’S RIGHT THERE IN THE NAME…

To not just be a shitpost, here’s a nice link on the basics.
https://cosmosmagazine.com/technology/what-is-deep-learning-and-how-does-it-work

You could learn things, like:

Inspired by the nerve cells (neurons) that make up the human brain, neural networks comprise layers of nodes (“neurons”), with each node connected to the nodes in adjacent layers. The more layers, the “deeper” the network.

A single neuron in the brain receives signals – as many as 100,000 – from other neurons. When those other neurons fire, they exert either an excitatory or inhibitory effect on the neurons they connect to. And if our first neuron’s inputs add up to a certain threshold voltage, it will fire too.

In an artificial neural network, signals also travel between “neurons”. But instead of firing an electrical signal, a neural network assigns weights to various neurons. A neuron weighted more heavily than another will exert more of an effect on the next layer of neurons. The final layer puts together these weighted inputs to come up with an answer.

But as networks get deeper and researchers unwrap the secrets of the human brain on which they’re modelled, they’ll become ever more nuanced and sophisticated.

“And as we learn more about the algorithms coded in the human brain and the tricks evolution has given us to help us understand images,” Corke says, “we’ll be reverse engineering the brain and stealing them.”
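To make that less abstract, here’s roughly what the “weighted sum, layer by layer” description above looks like as code. This is just an illustrative sketch in Python with NumPy: the layer sizes, random weights, and sigmoid activation are arbitrary choices of mine, not anything taken from the article or from AlphaGo.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into (0, 1); the artificial stand-in for "firing".
    return 1.0 / (1.0 + np.exp(-z))

def dense_layer(inputs, weights, biases):
    # Each output "neuron" is just a weighted sum of its inputs plus a bias,
    # passed through an activation function.
    return sigmoid(inputs @ weights + biases)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                            # 3 input signals
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)     # hidden layer: 4 "neurons"
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # output layer: 1 "neuron"

hidden = dense_layer(x, w1, b1)    # heavier weights exert more effect downstream
answer = dense_layer(hidden, w2, b2)  # final layer combines the weighted inputs
print(answer)
```

Stack more of those layers and you have a “deeper” network; training is just the process of adjusting the weights so the final answer gets less wrong over time.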

Man. I thought the Star Citizen thread was fun.

Unlike Star Citizen, there’s actual working software and results that people can talk about here. :)

My local NPR station recently had a segment about AI and what it can and can’t do. It was a good listen. Basically, it can be both impressive and disappointing. Current AI impresses in a narrow band of tasks but most researchers have no idea how to mimic human thought processes.

Can we read When HARLIE Was One again?

It being right there in the name is no more relevant than horses are to horsepower. They were loosely inspired by neurons, but they are more closely related to mathematical and statistical models than to neurobiological ones. In fact, a single biological neuron is a complex, multifunctional machine, whereas a single “neuron” in a neural network is a simple mathematical function. Let me put that into perspective: comparing an artificial “neuron” to a biological one is like comparing a line of code to a fully functioning personal computer.

All neural networks are weak AI at best. They resemble our brains in the same way birds’ nests resemble stadiums: they are tools with specific, limited uses. If you think there’s more to them or, worse, you’re “worried” about them, you need to read much more than that article.

To be clear, you have no formal education or practical experience with any of this stuff, right?

I ask this as an honest question, because I’m trying to figure out what you’re arguing about. You’re throwing out terms like weak AI, saying that existing systems are that… which I think is pretty universally agreed upon anyway. I don’t think anyone claims to have developed a strong AI capable of general intelligence.

Beyond that, you seem to trivialize elements of AI development, but lack much deeper understanding of those things.

Nice I just bought that on Kindle and I am gonna read it. Better be good or I will judge you. Wait no I won’t because I am by far the least judgmental person on this forum.

I read that in a high school class. Enjoyable book. High school was 40 years ago so I’ve forgotten most of the details. I wonder how it holds up?

Let me put it this way: I’m a computational biologist who does machine learning on single-cell data (diseases of the brain in particular, like schizophrenia), and our lab works on differentiating stem cells into neurons so we can run experiments on human models in culture. I’m pretty aware of how biological and computational “neurons” differ. It’s not like I’m saying we (humans) run a fancy series of logistic regressions to understand the world. You DO know what a logistic regression is and how it’s related to neural networks, right?
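For anyone who doesn’t: a fitted logistic regression is literally a single sigmoid “neuron”. Here’s a tiny sketch with scikit-learn on synthetic data (the data, coefficients, and seed are made up purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))                          # 200 samples, 3 features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(int)   # synthetic binary labels

model = LogisticRegression().fit(X, y)

# scikit-learn's predicted probabilities...
p_sklearn = model.predict_proba(X)[:, 1]

# ...are exactly one "neuron": sigmoid(weights . x + bias)
z = X @ model.coef_.ravel() + model.intercept_[0]
p_neuron = 1.0 / (1.0 + np.exp(-z))

print(np.allclose(p_sklearn, p_neuron))  # True
```

A neural network is, roughly, many of those units wired together with nonlinearities in between, which is exactly why nobody who works with them confuses one with a brain.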

Nobody said that neural networks were strong AI. I don’t think there is any strong AI out there, nor am I worried about it. We were talking about AlphaGo and the Dota bot, which are both definitely very specialized tasks.

You’re still 100% wrong on how they work, but that doesn’t seem to stop you from continuing to argue. :)

Zethi, you are gonna be as surprised as I was when you start talking to data scientists and researching current deep learning. Google released TensorFlow and the community is losing its mind thinking about all the crazy things they can do with it. DL is the next “internet level” advance in human achievement. The whole thing, as explained to me, is based on math research done in the 60s-70s, and hardware (especially parallel processing) has only caught up in the last few years to start putting it into practice.
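If you want to see what the fuss looks like in practice, a small TensorFlow model is only a few lines with the Keras API. This is a hedged sketch, not a real project: the toy data, layer sizes, and training settings are all made up, and it uses the current high-level API rather than the original graph/session style.

```python
import numpy as np
import tensorflow as tf

# Toy data: 1000 samples, 20 features, binary labels (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

# A small fully connected network: two hidden layers, one sigmoid output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```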

One of the first huge problems data scientists face when designing one of these neural networks is getting a good, clean, well-defined data set. This alone takes a lot of effort, design, and creativity. It’s not at all just number crunching; there is a whole . . . thing.

(Source: I have detailed letters of intent and signed statements of work from several data scientists for an AI project awaiting funding.)
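To give a flavor of that “whole thing”: even a trivial tabular pipeline shows how much work happens before any network sees the data. This is a minimal sketch; the file name data.csv and the columns age, income, and label are hypothetical placeholders, not from any real project.

```python
import pandas as pd

# Hypothetical raw export; the file name and column names are placeholders.
df = pd.read_csv("data.csv")

# Typical pre-training chores: drop duplicates, coerce types, handle missing
# values, and scale numeric features to a comparable range.
df = df.drop_duplicates()
df["age"] = pd.to_numeric(df["age"], errors="coerce")
df = df.dropna(subset=["age", "label"])

numeric_cols = ["age", "income"]
df[numeric_cols] = (df[numeric_cols] - df[numeric_cols].mean()) / df[numeric_cols].std()

features = df[numeric_cols].to_numpy()
labels = df["label"].to_numpy()
print(features.shape, labels.shape)
```

And that’s the easy case; real data sets come with label noise, leakage, and sampling bias that no amount of GPU time fixes for you.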

Could the end of Net Neutrality also end the social media/fake news problem? It may become hard for people to connect to each other beyond the gated communities that ISPs will create. OTOH, ISPs are mainly going to target the downloading/sharing/viewing of videos and (large) files. I doubt tweeting is going to be affected. Thoughts? Is it a coincidence that the Zuck is in favor of Net Neutrality?