Well, I’d better start thinking of a backup position. If it can play Go, it can do 401(k) regulatory work.
Never underestimate the deviousness of tax legislation…
I read about training a neural network to give out judicial sentences a little while ago, based on historical data. Totally creepy, building in all our preexisting biases into the machine.
Wired article on the third game, also won by AlphaGo. So 3-0 in the best of five.
As Google researcher Thore Graepel explained earlier in the week, because AlphaGo tries to maximize its probability of winning, it doesn’t necessarily maximize its margin of victory. Graepel even went so far as to say that inconsequential or “slack” moves can indicate that the machine believes its probability of winning is quite high. As Redmond saw them, AlphaGo’s latest string of moves were slack.
This sounds a lot like the theory I heard in sailing about spending your lead. It makes moves to increase its safety factor. It must be very frustrating to play against.
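The win-probability-versus-margin distinction is easy to see in a toy sketch. Nothing below reflects AlphaGo’s actual internals (the moves and numbers are invented), but it shows why a probability-maximizing player happily plays “slack” moves:

```python
# Toy sketch (not AlphaGo's real algorithm; moves and numbers invented).
# Each candidate move is modelled as a set of equally likely final margins.
moves = {
    "aggressive": [20, 15, -5, -10],  # big wins possible, but real losing chances
    "slack":      [2, 2, 1, 1],       # tiny margins, but never a loss
}

def expected_margin(margins):
    return sum(margins) / len(margins)

def win_probability(margins):
    return sum(m > 0 for m in margins) / len(margins)

# A margin-maximizer takes the risky move; a win-probability maximizer
# takes the "slack" move that locks in a small, certain lead.
by_margin = max(moves, key=lambda m: expected_margin(moves[m]))
by_winprob = max(moves, key=lambda m: win_probability(moves[m]))

print(by_margin, by_winprob)  # aggressive slack
```

The “slack” move loses 3.5 points of expected margin but doubles the chance of winning at all, which is exactly the trade Graepel describes.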
This was also a good article by Wired.
Lee Sedol won game 4 on Saturday night/Sunday, and is challenging in game 5 right now. We’re liveblogging again, although I don’t have nearly as much to say this time around.
Good game so far. The fighting looks fairly even to my eye.
After an extremely irresponsibly late night, I’m hitting the sack. It looked, when I signed off, like AlphaGo won by a hair, but we’ll see for sure when I wake up. If I wake up.
AlphaGo won the last game. The commentator called some of its moves mistakes, because it resolved several local situations before they needed to be resolved, but that might be another instance of the AI rejecting traditional Go wisdom, preferring a small, certain lead to a larger, uncertain one.
At the award ceremony, AlphaGo received an honorary 9p degree from the KBA. That was a nice touch, I think.
Mike James at I Programmer argues that AlphaGo is a watershed in AI because it shows that one can solve a previously inscrutable problem by just throwing enough hardware and a neural network of sufficient depth at it. Maybe, maybe not: Go is still a very constricted artificial environment compared to the real world. But Google can already do pretty decent translations, drive cars, and identify cat pictures, so who knows…
Yeah, I’ll be keeping an eye on that [serious?] comment about doing Starcraft next. The level of complexity is significantly higher moment-to-moment, and it’s no longer a perfect-information scenario, so it’d mirror a very different category of real-world problems very well, if they can manage it. I’d just hate to be on the team of guys who have to somehow program in all the possible actions that can be taken/visual recognition/etc.
We should keep in mind, though, that AlphaGo did not teach itself from scratch. It was initialized with a database of professional games, i.e. indirect human expert knowledge, and that’s why its playstyle resembles a human’s. In an interview, one of the programmers speculated about what might happen if the neural net taught itself completely from scratch. It would take much, much longer, for one, and we have no idea to what degree it would vindicate classic knowledge and playstyle, if at all.
DeepMind has already built machines that learned classic Atari games, and they didn’t have to program in the game mechanics. It was just, “Here’s the controls, here’s the screen, here’s your score,” and the machine learned what the controls did, what the stuff on the screen meant, and how to respond to it in order to get a high score.
So for something like Starcraft, they wouldn’t have to code the mechanics at any deep level; the machine can learn that. All they would have to code would be the very basics: how to select a unit, give it orders, and move the screen around (and how to tell if it won or lost). They would probably spend most of their time trying to design a high-level architecture with the right kind of flexibility to learn the important concepts in an RTS.
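For what it’s worth, the “here’s the controls, here’s the screen, here’s your score” loop can be sketched in miniature. The Atari agents used a deep Q-network over raw pixels; this hypothetical one-dimensional “game” uses plain tabular Q-learning instead, but the setup is the same: the agent is told nothing except which buttons exist and what score came back.

```python
import random

# Toy sketch of learning purely from (observation, actions, reward), in the
# spirit of DeepMind's Atari work. The "game" is hypothetical: a 1-D track
# where the agent must discover, from reward alone, that moving right wins.
random.seed(0)

ACTIONS = [-1, +1]   # the controls: that's all we hard-code
GOAL = 5             # reaching position 5 scores a point and ends the game

def step(pos, action):
    """Game mechanics the agent never sees directly; it only observes the
    new position, the reward, and whether the game ended."""
    new_pos = max(0, min(GOAL, pos + action))
    reward = 1.0 if new_pos == GOAL else 0.0
    return new_pos, reward, new_pos == GOAL

q = {}                                # Q[(state, action)] -> value estimate
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration

def greedy(pos):
    """Best known action, ties broken at random."""
    best = max(q.get((pos, a), 0.0) for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q.get((pos, a), 0.0) == best])

for _ in range(500):
    pos, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(pos)
        nxt, reward, done = step(pos, a)
        best_next = 0.0 if done else max(q.get((nxt, a2), 0.0) for a2 in ACTIONS)
        old = q.get((pos, a), 0.0)
        q[(pos, a)] = old + alpha * (reward + gamma * best_next - old)
        pos = nxt

# After training, "move right" should dominate in every state.
policy = [max(ACTIONS, key=lambda a: q.get((s, a), 0.0)) for s in range(GOAL)]
print(policy)
```

The only game-specific code is `step`, and the agent never reads it; scaling this idea to pixels and joysticks is what the deep network buys you.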
Interesting! I do remember reading about the Atari experiments, but never connected it to DeepMind. Thanks for the insight :)
The proprietary nature of video games disqualifies them as an object of serious study. No official client could sustain the throughput a neural net would require, and no others can be legally written. Even DeepMind’s experiments with Atari emulators probably involved copyright violations.
I think you can argue that the human brain is still vastly more capable in many areas than the most advanced AI currently available. Thus, “throwing enough hardware” at the problem might actually be the ‘right’ solution. Isn’t that partially what separates DeepMind/AlphaGo from Deep Blue (neural net notwithstanding)?
I think a challenge to this is generating a proper score metric to optimize. Given that you don’t have full game-state knowledge, it may be hard to generate a score that doesn’t fluctuate wildly.
Imagine playing an RTS yourself and thinking you’re doing great, until that huge army shows up out of nowhere. Perhaps scouting will be a heuristic they add in, so that the AI can spend resources to get a better feel for the game state. I bet gaining information and microing scouts so they don’t die while handling base buildup would be pretty easy for an AI.
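A hypothetical toy example of that fluctuation (all numbers invented): a naive score metric computed only from scouted information can swing from “comfortably ahead” to “hopelessly behind” the instant the fog of war lifts.

```python
# Hypothetical numbers throughout: a naive advantage metric computed from
# visible information only, and how it jumps when hidden forces appear.

def estimated_advantage(own_army, visible_enemy_army):
    # Naive metric: difference in army value the agent can actually see.
    return own_army - visible_enemy_army

own = 100
enemy_total = 180   # true enemy strength, hidden by fog of war
enemy_seen = 36     # the portion we've actually scouted

before = estimated_advantage(own, enemy_seen)    # 64: "we're doing great"
after = estimated_advantage(own, enemy_total)    # -80: the huge army shows up

print(before, after)  # 64 -80
```

A training signal that jumps 144 points on a single observation is a rough thing to optimize against, which is why scouting (buying information) would plausibly have to be part of the learned policy rather than a bolt-on.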
Yeah, no, we already have super-micro AIs in SC/SC2 that can obliterate a human foe with “inferior” forces in a straight up, forces-already-available scenario. Perfect Zergling splitting to minimize splash damage from Siege Tank fire, perfect “stutter-stepping” to move-and-shoot with units like the Marine, and perfect moving-fire options with units like the Phoenix in SC2, etc., are very close to “solved problems.”
It’s all the other stuff–proper weighting of resources for expanding, teching, and accruing forces; scouting of enemy plans and fakeouts; army composition development; creative uses of terrain–that’s been the bane of existing SC/2 AIs. Given the theoretical complexities inherent in each of those decisions–and at a top-tier Korean pro level, they are massive–they’ll be tough problems to solve well. I think. This whole AlphaGo thing makes me question a lot of my assumptions on this stuff :)
Well, that was fast.
Edit: From wikipedia:
[quote]All 60 games except one were fast paced games with three 20 or 30 seconds byo-yomi. Master offered to extend the byo-yomi to one minute when playing with Nie Weiping due to his old age. After winning its 59th game Master revealed itself in the chatroom to be controlled by Dr. Aja Huang of the DeepMind team, then changed its nationality to United Kingdom. After these games were completed, the co-founder of Google DeepMind, Demis Hassabis said in his tweet that “we’re looking forward to playing some official, full-length games later in collaboration with Go organizations and experts”.
Human players tend to make more mistakes in fast paced online games than in full-length tournament games due to short response time. It isn’t definitively known whether AlphaGo will succeed as well in tournaments as it has online. However, Go experts are extremely impressed by AlphaGo’s performance and by its nonhuman play style; Ke Jie stated that “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong… I would go as far as to say not a single human has touched the edge of the truth of Go.”[/quote]
The short-time thing makes sense to me. In my own work on hnefatafl AI, I find that the AI is much better than me given limited time, but I can still beat it when I have time.
That said, I would be very surprised if any human or combination of humans could beat AlphaGo given equal time.
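For what it’s worth, the usual structure behind that time behavior is iterative deepening: search to depth 1, then 2, then 3, keeping the best completed answer, and stop when the clock runs out. The sketch below is hypothetical (the search and its costs are stand-ins, not my actual hnefatafl engine), but it shows how the time budget translates directly into search depth:

```python
import time

def search_to_depth(depth):
    """Stand-in for a real fixed-depth alpha-beta search: it just burns
    time that roughly doubles per extra ply and reports what it 'found'."""
    time.sleep(0.001 * (2 ** depth))
    return ("best-move-at-depth-%d" % depth, depth)

def think(budget_seconds, max_depth=20):
    """Iterative deepening under a time budget: each completed depth
    replaces the previous best, and the loop stops at the deadline."""
    deadline = time.monotonic() + budget_seconds
    best = None
    depth = 1
    while depth <= max_depth and time.monotonic() < deadline:
        best = search_to_depth(depth)
        depth += 1
    return best

fast = think(0.02)   # blitz: only a shallow search completes
slow = think(1.0)    # more time, deeper search, stronger play
print(fast, slow)
```

Under blitz conditions the engine simply answers from a shallower (weaker) search rather than running over time, which matches the pattern of machines looking relatively stronger at fast time controls.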
It’s just another example of a task where human-level ability is not some near-optimum. How many times in the next 50 years are we going to have the experience of going from “computers can’t do that” to “computers can do that way better than any human or combination of humans” inside a couple of years?