Social media controls the world


These Wall-E units are bringing food to people around DC.

I agree with your sentiments, though. Also: consider that AI may be benevolent and actually teach us how to save ourselves. I can’t imagine people wanting to destroy anything faster than a benevolent AI.


First thing I thought when I saw that article about the food-delivering bots is that IMHO they will be preyed upon like nobody’s business. Yeah, the article says it sets off alarms and has a tracker and can’t be opened except by the right people, etc., etc. But people who want to steal stuff can be devilishly clever at bypassing protections like that. If these get popular, I foresee people grabbing them, stuffing them into a metal sack, and hauling them off somewhere to bust them open for free food. You may say “that’s a lot of work for free food,” but don’t underestimate what people will do to get something for nothing.


My understanding is that, if it turns out to be possible to create an AI superintelligence, there are many, many more ways for it to go disastrously wrong than for it to turn out to be something we’d recognize as benevolent.

In general, I don’t understand how anyone can make confident predictions about what the world would be like AFTER we create something much smarter than ourselves. Having the commissioner say “I’m a techno-optimist” and assuring us it will be fine doesn’t do much for me.


Opposable thumbs.

AI isn’t a threat as long as it’s confined to an armless, legless, handless box, so to speak. AI goes disastrously wrong when it can directly affect the world.

I’m also not afraid of 2D silicon AI, tbh. When we have 3D CPUs… I’ll pay closer attention. Right now AI is almost certainly a Chinese Room.


Errr, right, because networked computers can’t directly affect the world.


“So to speak”.


Most power isn’t about picking up a rock and bashing a head in. Betting that you can keep a super intelligence in a box is a high-risk strategy.

AlphaGo Zero completely crushes AlphaGo, which crushes all humans at a game that we completely dominated up until a couple of years ago. It does it with zero human training input and about a tenth of the compute power. If you’d asked people five years ago how much compute was needed for super-human performance at Go, they would have said a lot more than was actually needed. It’s not obvious until it is actually done, and then another round of optimization makes it easier.

Edit: Which is not to say that I think true AI is just around the corner. Progress seems fast lately but there is a long way to go. But it’s really hard to predict just how long a road it is.


Go is still a deterministic problem with a finite search space. AI will struggle for a while with inductive reasoning.


Have you seen the Dota bots? Those seem a bit more complicated in terms of search space, randomness, etc?


I think it was only 1 character they could actually play, and a simple one at that. Also 1v1 Dota is kind of a weird space in general, and free from the significant complexity of 4-5 players all making different decisions with different characters at different times.


There’s also something to be said for more complex games, in some ways, making things easier. Go and chess have been studied for centuries, and are very mechanically simple, and strategically deep.

But a game like DOTA has orders of magnitude more complexity. Now this certainly makes creating an AI to play, at a basic level, harder. But the distance from being able to move and use mechanics to being able to play at a high level and beat human players? Probably much smaller. People haven’t optimized and analyzed to the same degree, and with more things to track/do it is easier for a human player to make a suboptimal move.

This is not to shortchange the AI design, or top tier human play, but merely me hypothesizing that humans have not reached the same tier of ‘perfection’ in terms of the absolute theoretical range of play. So if humans have achieved 98% of whatever theoretical perfect play is in chess, they’ve probably only gotten 80-85% of such in DOTA.

Add in rebalancing, new mechanics, tweaks, character numbers, and I can easily envision the task requiring less optimization. Especially if limited in scope to single-character play.


I’d actually argue, based off of some Starcraft 2 results, that human play of these games is enormously far from the theoretical max perfection limit, just on a sheer physical level. For instance, the firing pattern of siege tanks (high damage, long range, large splash, particularly against small lightly armored units) is fairly deterministic and consistent. They are considered a hard counter against, for instance, a swarm of Zerglings (highly mobile, cheap, but very frail melee units). However, with an AI exercising pixel-perfect control of each and every unit more or less simultaneously (up to the command input limitations of the engine), the Zerglings can be micro-controlled and split so that each tank’s individual shot affects just a single unit, allowing the rest of the swarm through unharmed while the tanks endure their very long firing delay.

The difference in number of Zerglings needed to breach a tank line between human pros and AI control was damn near an order of magnitude if I recall correctly.
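A toy model of that asymmetry (with invented numbers, not actual StarCraft values): if a clumped swarm lets one splash shot hit the whole group, while a split swarm offers each tank only a single target, the survivor math looks like this:

```python
# Toy model of splash-damage micro. Invented numbers, not real SC2
# values: each tank volley targets one zergling, and the splash kills
# everyone grouped with that target.

def survivors_after_volley(n_lings, n_tanks, group_size):
    """How many zerglings survive one simultaneous tank volley."""
    # Split the swarm into groups of `group_size`.
    groups = [group_size] * (n_lings // group_size)
    if n_lings % group_size:
        groups.append(n_lings % group_size)
    # Each tank wipes out one whole group (at most one shot per tank).
    killed = sum(groups[:n_tanks])
    return n_lings - killed

# Human-style clump: the whole swarm moves as one group.
print(survivors_after_volley(20, 5, group_size=20))  # 0 survive
# AI-style split: one zergling per tank shot.
print(survivors_after_volley(20, 5, group_size=1))   # 15 survive
```

Crude as it is, it shows why perfect splitting flips a "hard counter" on its head.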

Perfect, instantaneous control alone overcomes so much. If you teach the AI how to operate the basics of the game (which can be insanely hard of course), that alone opens up extraordinary possibilities. Our weak, slow bodies can’t keep up.


All current versions of AI are use a tone of resources for tasks we can easily accomplish with a tiny fraction of those resources. Whether it’s chess or Go or Dota, all of these AIs take a brute-force approach to solving problems. They either use a preset database of previously played games or they create one from scratch by playing billions of games against themselves. They also use a ton of energy.

Why do we find it amazing that a neural network AI that played millions or billions of games beats humans who played less than 50k? Put the AIs on equal footing with the human in terms of games known/played and they get hopelessly destroyed. All types of current AIs are glorified pocket calculators; there’s no intelligence in there, just massive number crunching.


Tell me more about AI ARE USE A TONE.


Your statements are not at all true. Deep learning approaches are nothing like brute-force solutions. The search space for Go is so incredibly vast that you can’t brute-force it and be in any way competitive against Go masters.
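For a sense of scale, here’s a loose upper bound on the number of Go board configurations (most of these aren’t legal positions, but legality doesn’t rescue the order of magnitude):

```python
import math

# Loose upper bound on Go board configurations: each of the 361 points
# on a 19x19 board is empty, black, or white, giving 3**361 states.
# Count the digits rather than printing the full number.
configuration_digits = int(361 * math.log10(3)) + 1
print(configuration_digits)  # a number roughly 173 digits long
```

Even enumerating a billion billion positions per second wouldn’t scratch a space that size, which is why exhaustive search was never on the table for Go.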

The big deal with AlphaGo Zero was that it trained on a single computer and did the learning process completely in 40 days by playing millions of games against itself. Instead of the algorithm being seeded with human knowledge, the program learned from first principles and avoided local optima that humans are stuck in. Go masters are now studying how AlphaGo itself plays the game to learn from it.
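To illustrate just the self-play idea, here’s a toy sketch: tabular value learning on one-pile Nim, starting from zero knowledge. Everything here is invented for illustration; AlphaGo Zero itself uses deep networks plus Monte Carlo tree search, not a lookup table.

```python
import random

ACTIONS = (1, 2, 3)  # one-pile Nim: take 1-3 stones, taking the last stone wins

def train(pile_size=10, episodes=20000, eps=0.1, seed=0):
    """Learn Q[(pile, action)] purely from self-play games, no human data."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        pile = pile_size
        history = []  # (state, action) for every move in this game
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            if rng.random() < eps:
                a = rng.choice(legal)  # occasional random exploration
            else:
                a = max(legal, key=lambda m: Q.get((pile, m), 0.0))
            history.append((pile, a))
            pile -= a
        # Whoever moved last took the final stone and wins; players
        # alternate, so propagate +1/-1 backwards through the game.
        reward = 1.0
        for state, a in reversed(history):
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + 0.1 * (reward - old)
            reward = -reward
    return Q

def best_move(Q, pile):
    legal = [a for a in ACTIONS if a <= pile]
    return max(legal, key=lambda m: Q.get((pile, m), 0.0))
```

After training, the table plays the known-optimal strategy (leave the opponent a multiple of 4) without ever seeing a human game, which is the "first principles" point in miniature.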


When the robots take over, Chinese Room advocates are going to still be yelling “That’s not really AI!” as the Terminators roll over their skulls.


Which one isn’t true?
Having the software play millions of games is not too different than a brute-force approach; the AI’s advantage is its superior number-crunching capabilities. What do you think would happen if AlphaGo were set to reset and restart ‘learning’ the game but with a restriction on how many game simulations it performs? Instead of millions, let it play just 50k games (which is much more than a human master) and see what happens. It would lose like a beginner, that’s what, because our minds don’t rely on simple number crunching. Our minds use shortcuts (such as intuition) we don’t fully understand, and until we do it will be impossible for AI to be anywhere near matching human intelligence.


It’s fundamentally different.

Playing games and using that experience to develop a mechanism by which you select moves in a novel situation is fundamentally different from simply evaluating every possible sequence of moves in the entire search space.

“Brute Force” does not mean “uses computational power”.

You might as well be saying, “we have souls given to us by God, and without a soul a computer cannot do what we do.”

You are saying that an AI cannot equal you, due to factors which you explicitly define as unknown and ethereal. That’s a nothing argument.


What isn’t true is that it’s a brute-force memorization technique. It’s COMPLETELY DIFFERENT from the methodology you’re talking about, which was closer to true 20-30 years ago, when we relied on a combination of memorization and heuristics to solve problems like chess.

Do you think self-driving cars memorize all the roads and all the possible sizes/shapes/types of cars, people, road blocks, and traffic signs that can be seen from every possible angle, occlusion, lighting condition, etc., and then do a simple table lookup to decide what to do?

So yes, none of what you’re asserting is true, because that’s not how these algorithms work, or what their limitations are. Understanding these things a little bit more would put you in a better place to argue about why they aren’t “glorified pocket calculators”…well, any more than I can argue that you, as a human, are an almost completely deterministic machine suffering from the illusion of free will.