
Google DeepMind's program beats human at Go

Trevor Hughes
USA TODAY
Black-and-white pieces occupy spaces on a board during a game of Go, which Google's software engineers say they've taught a computer program to play better than most humans.

Google’s software engineers have taught a computer program to beat almost any human at an ancient and highly complex Chinese strategy game known as "Go."

While computers have largely mastered checkers and chess, Go, considered the oldest board game still played, is far more complicated. There are more possible positions in the game than there are atoms in the universe, Google said — an "irresistible" challenge for the company’s DeepMind engineers, who used artificial intelligence to enable the program to learn from repeated games.
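The scale of that claim can be checked with a quick back-of-the-envelope calculation. The sketch below counts raw board configurations, which is only a rough upper bound (many configurations are not legal positions), but even the bound dwarfs the estimated 10^80 atoms in the observable universe:

```python
# Rough upper bound on Go board configurations: each of the
# 19 x 19 = 361 intersections is empty, black, or white.
board_points = 19 * 19              # 361 intersections
configurations = 3 ** board_points  # 3^361, an overcount of legal positions

atoms_in_universe = 10 ** 80        # common astrophysical estimate

# 3^361 has 173 digits, i.e. it is about 10^172
print(f"Configurations: about 10^{len(str(configurations)) - 1}")
print(configurations > atoms_in_universe)
```

Running this prints `Configurations: about 10^172` and `True` — the configuration count exceeds the atom estimate by nearly a hundred orders of magnitude.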

The Google unit's AlphaGo computer program is much more sophisticated than the IBM-created Deep Blue computer that in 1996 won the first chess game against a reigning world champion, Garry Kasparov.

The AlphaGo system makes what its developers consider to be fewer, but smarter, decisions. Previous systems relied much more on what’s known as "brute force" calculations. In other words, Deep Blue and its contemporaries used massive processors to plot out millions of possible moves in a relatively short time. The game of Go is harder to mathematically predict, in part because the board is much larger: While a chessboard has 64 squares, Go has 361.
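The gap between the two games can be illustrated with the approximate game-tree figures commonly cited for them — a branching factor of roughly 35 and game depth of roughly 80 moves for chess, versus roughly 250 and 150 for Go. These numbers are order-of-magnitude estimates, not exact values, and the sketch below simply compares the resulting tree sizes:

```python
import math

def tree_size_exponent(branching, depth):
    """Return log10 of the game-tree size b**d (i.e. its order of magnitude)."""
    return math.floor(depth * math.log10(branching))

# Commonly cited estimates: b ~ 35, d ~ 80 for chess; b ~ 250, d ~ 150 for Go.
chess = tree_size_exponent(35, 80)    # tree of roughly 10^123 positions
go = tree_size_exponent(250, 150)     # tree of roughly 10^359 positions

print(f"Chess tree ~10^{chess}, Go tree ~10^{go}")
```

A brute-force engine that could exhaust the chess tree would still fall more than 200 orders of magnitude short on Go, which is why AlphaGo's developers turned to learned evaluation rather than raw enumeration.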

The brute force-style approach "has led to superhuman performance in chess, checkers and Othello, but it was believed to be intractable in Go due to the complexity of the game," AlphaGo’s creators wrote in a paper published Wednesday in the journal Nature. Until now, the best computer Go players were no better than amateur humans.

The object of Go is to control as much of the board as possible, and players use either white or black stones to surround territory and their opponent.

The new approach worked by "teaching" the program how humans played, and then letting its learning software play game after game for practice. Ultimately it defeated the reigning European champion 5-0 in October. It’s the first time a computer has beaten a professional player in a complete game, the Google developers said.

AlphaGo's next challenge will be playing the world’s top Go player in March.

Zuckerberg also posts on Go

Artificial intelligence is undergoing a major boom in Silicon Valley. Alphabet-unit Google is a leader in machine learning and deep learning, and rivals including Facebook and Microsoft are also making substantial investments. Google, which bought DeepMind in early 2014, is using artificial intelligence to improve its products and services by training computers to learn from data with little or sometimes no human intervention in areas such as search, translation and photo storage.

Smarter, more powerful computers could help Google's search engine learn and improve results in real time as people click on answers or Web links, for example. And Facebook uses artificial intelligence to automatically recognize and label friends in photos posted on the social network.

In a lengthy blog post Wednesday, Facebook CEO Mark Zuckerberg lauded the promise of artificial intelligence but said scientists still haven't figured out how to get a computer to learn like a human, and then apply lessons learned in one area to another. In a separate post made Tuesday, Zuckerberg discussed Facebook's efforts to teach a computer to play Go but didn't mention Google's competing effort.

"We should not be afraid of AI. Instead, we should hope for the amazing amount of good it will do in the world," Zuckerberg wrote. "It will save lives by diagnosing diseases and driving us around more safely. It will enable breakthroughs by helping us find new planets and understand Earth's climate. It will help in areas we haven't even thought of today."
