
Google has beaten Go, what could it do to bridge?


Recommended Posts

Google's AlphaGo defeated one of the top players in the world 5 games to 0, and achieved a 99.8% winning rate against other Go programs....AlphaGo’s next challenge will be to play the top Go player in the world over the last decade, Lee Sedol. The match will take place this March in Seoul, South Korea.

 

Rather than trying to conquer Go's ridiculous search space with pure computation, they used deep neural networks to learn first from human games, then from games against itself.

 

What do you think Google could do with this technology if they decided to turn to bridge?

 

Some links:

http://googleresearch.blogspot.com/2016/01/alphago-mastering-ancient-game-of-go.html

http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html

  • Upvote 2
Link to comment
Share on other sites

I've long thought that a neural network might be the best technology for computers to learn bridge. But you have to be careful with the training -- if the input data includes auctions from many different systems (e.g. both natural and strong club), it will probably confuse it horribly. Even a mix of SA and 2/1 would probably make it difficult to learn.
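To make the mixed-system concern concrete, here is a small, self-contained toy simulation (the opening rules and HCP ranges are assumptions for illustration, not real systems): a natural 1C and a strong-club 1C select very different hands, so an unlabeled training corpus that mixes them blurs what "1C" means.

```python
import random

random.seed(1)

def deal_hcp():
    """Deal 13 cards from a 52-card deck and count high-card points (A=4, K=3, Q=2, J=1)."""
    deck = [4, 3, 2, 1] * 4 + [0] * 36   # point values of the 52 cards
    return sum(random.sample(deck, 13))

# Toy opening rules (illustrative assumptions only):
#   natural:     open 1C with roughly 12-21 HCP
#   strong club: open 1C only with 16+ HCP
natural_1c = [h for h in (deal_hcp() for _ in range(20000)) if 12 <= h <= 21]
strong_1c  = [h for h in (deal_hcp() for _ in range(20000)) if h >= 16]

mean = lambda xs: sum(xs) / len(xs)
print(f"natural 1C mean HCP:     {mean(natural_1c):.1f}")
print(f"strong-club 1C mean HCP: {mean(strong_1c):.1f}")

# A corpus mixing both systems without a system label shows the learner one
# "1C" bid with two incompatible hand ranges behind it.
mixed = natural_1c + strong_1c
print(f"mixed corpus mean HCP:   {mean(mixed):.1f}")
```

The mixed corpus lands between the two systems and is more spread out than either, which is exactly the kind of muddled signal that would make training hard.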

 

A few years ago Uday and I experimented with using our archives of years of BBO auctions to help GIB with its bidding -- before making a bid, it would search for similar hands and auctions to see what the most common bids were. We just assumed that most auctions were using similar systems (mostly SAYC and 2/1), and the rest would be outliers that didn't hurt the statistics. But we weren't using this to try to teach it the basic principles of bidding; it was just a sanity check: if its simulations said to make a particular bid, but the database search showed that fewer than 20% of humans chose that action, it was ruled out. But even with that limited scope, it was not very helpful -- there are so many combinations of hand types and auctions that the number of matches was generally too low for useful statistics, except early in the auction, and those bidding rules are already pretty good.
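The veto rule described above is simple to sketch. This is a minimal illustration of the idea, not GIB's actual code; the function name, bids, and frequencies are all hypothetical.

```python
def sanity_check(sim_ranking, human_frequencies, veto_threshold=0.20):
    """Return the simulation's preferred bids, dropping any bid that fewer
    than `veto_threshold` of humans chose on similar hands/auctions."""
    allowed = [bid for bid in sim_ranking
               if human_frequencies.get(bid, 0.0) >= veto_threshold]
    # If the veto removes every candidate, fall back to the raw simulation ranking.
    return allowed or sim_ranking

# Simulation prefers 3NT, then 4S; the database says humans bid 4S 55%
# of the time but 3NT only 12%, so 3NT is vetoed.
sims = ["3NT", "4S", "Pass"]
freqs = {"4S": 0.55, "3NT": 0.12, "Pass": 0.30}
print(sanity_check(sims, freqs))  # -> ['4S', 'Pass']
```

The post's point about sparse matches also shows up here: the `human_frequencies` table is only trustworthy when enough database matches exist, which in practice was rare past the first few rounds of bidding.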

Link to comment
Share on other sites

What I think neural networks will be particularly useful for is making inferences from the opponents' (and partner's) play. A sort of generalized restricted choice: "if he had a strong holding in declarer's second suit, he might have led a trump". So when selecting/weighting hands for the sims, you devalue hands with a strong holding in declarer's second suit.
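That weighting idea can be sketched as follows. This is a toy version under stated assumptions: the hand features, the 0.2 devaluation factor, and all names are illustrative, not from any real bridge engine.

```python
import random

random.seed(7)

def strong_second_suit(hand):
    # Hypothetical feature test: 4+ cards with at least one honor
    # in declarer's second suit.
    return hand["second_suit_len"] >= 4 and hand["second_suit_honors"] >= 1

def weight(hand, led_trump):
    w = 1.0
    if not led_trump and strong_second_suit(hand):
        w *= 0.2   # devalue: with this holding he would probably have led a trump
    return w

# Candidate hands produced by some dealer; only the relevant features shown.
candidates = [{"second_suit_len": random.randint(0, 6),
               "second_suit_honors": random.randint(0, 3)} for _ in range(1000)]

weights = [weight(h, led_trump=False) for h in candidates]
sample = random.choices(candidates, weights=weights, k=500)
frac = sum(strong_second_suit(h) for h in sample) / len(sample)
print(f"strong-second-suit hands in weighted sample: {frac:.0%}")
```

The weighted sample contains far fewer hands with the "suspicious" holding than the raw candidate pool, which is exactly the devaluation the post describes.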
Link to comment
Share on other sites

Chess is suited to computers. Go is too, but it is more difficult.

The problem with bridge is that you don't know where all the cards are, whereas in chess and Go you know where all the pieces are.

Bidding has improved considerably but still leaves a lot to be desired. When chess programs were starting to challenge masters, bridge programs were no better than beginners at bidding.

Link to comment
Share on other sites

Chess is suited to computers. Go is too, but it is more difficult.

The problem with bridge is that you don't know where all the cards are, whereas in chess and Go you know where all the pieces are.

Bidding has improved considerably but still leaves a lot to be desired. When chess programs were starting to challenge masters, bridge programs were no better than beginners at bidding.

 

We should start a campaign to get Google to tackle bridge next, precisely because it is not well suited for computers to solve. The challenges bridge presents, with its incomplete information, make it an interesting and logical next step in the progression of artificial intelligence.

Link to comment
Share on other sites

Somehow I think bridge is less challenging for AI than Go, although I currently have no experience in the field. Maybe I should take that as a thesis topic at university :P
Link to comment
Share on other sites

Nineteen years ago, in 1997, Garry Kasparov, then the world's top-ranked chess player, played against the computer Deep Blue. It was the most shocking news on the planet: that year, a computer beat the top human intelligence for the first time. From then on, computers made ever more rapid breakthroughs in the field. It was said that Deep Blue and its successors beat Kasparov by using "brute force" techniques. In 2006, a human beat the best chess software for the last time; since then, no human has won against the computer.

Yesterday brought a new focus: computer software has defeated a professional player at the game of Go. That's incredible; it happened so suddenly.

In general, in chess each position offers about 35 possible moves and a game lasts about 80 moves, while in Go each position might offer 250 possible moves and a game lasts at least 150 moves.
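Those figures imply enormously different naive game-tree sizes (branching factor b over game length d gives roughly b**d positions). A quick back-of-envelope calculation, using only the numbers quoted above:

```python
import math

# Naive game-tree size ~ b**d; compute the number of decimal digits
# via d * log10(b). Order-of-magnitude estimates only.
chess_digits = 80 * math.log10(35)    # ~35 moves/position, ~80 moves
go_digits = 150 * math.log10(250)     # ~250 moves/position, ~150 moves

print(f"chess: ~10^{chess_digits:.0f} positions in the naive tree")
print(f"go:    ~10^{go_digits:.0f} positions in the naive tree")
```

Go's tree comes out hundreds of orders of magnitude larger than chess's, which is why brute-force search alone was never going to crack it.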

Lee Sedol, the South Korean Go player, is a great player who has won the most world Go championship titles over the last 10 years. The match will be held this March in Seoul, the capital of South Korea, with a $1 million prize provided by Google, which shows the strong self-confidence of the AlphaGo team.

However, in China, Japan and South Korea, whether among players or fans, who believes a computer can beat a top human player? Nobody! Does Google's Go program possess some divine magic? Even so, we would like to wish the AlphaGo team the best.

Link to comment
Share on other sites

I've never considered bridge to be a particularly complicated game for a computer to play.

I think that there are some complicated issues around disclosure, but nothing that seems particularly challenging.

 

Even the disclosure issues seem solvable, especially if we can require that the competitors provide the computer with a corpus of hands consistent with the bidding so far.

Note that this is something that is relatively easy for a computer to do, but hard for a human.

I think that the problems with having people play against computers are likely to be on the human side, rather than the computer side...

 

I suspect that the reason that there is no equivalent to Deep Blue is the (relative) insignificance of the game.

The serious academic researchers prefer to focus on games that are more popular. (Chess, Poker, Go)

  • Upvote 2
Link to comment
Share on other sites

In 2006, a human beat the best chess software for the last time; since then, no human has won against the computer.

There was certainly a win against a computer in 2008, due to exploiting a program bug, but I cannot even remember the last time I heard of a human vs. computer match being reported, so it is difficult to say what other computer losses there might have been.

Link to comment
Share on other sites

There have been some funny games with human vs computer using various material odds. One of them was Nakamura vs Komodo where the machine had:

-no f7/f2 pawn

-an exchange deficit

-3 moves behind (White had to open with 1. e4, 2. d4, 3. Nf3, and had the move)

 

All of them were drawn except that the computer won the "3 moves behind" game. To me this is conclusive proof that humans can't challenge computers on a level playing field. Yeah, yeah, sampling problem. But any time I see a good player asked about this, they not only agree with this assessment, they also say it is really not close at all.

 

https://www.chess.com/news/komodo-beats-nakamura-in-final-battle-1331

Link to comment
Share on other sites

The problem with bridge is that you don't know where all the cards are, whereas in chess and Go you know where all the pieces are.

This is exactly why a neural network approach is likely to be more suitable than the techniques currently in use. Neural networks are good at detecting patterns automatically, while traditional programming requires the programmer to spell out everything precisely.

 

In Go, you know where all the pieces are, but that still wasn't enough for programs to match human experts. The problem is that there are so many combinations of plays that a program can't analyze them exhaustively. It's necessary to recognize patterns from experience and intuition. Describing all those patterns in a traditional program would be overwhelming, but a neural network can learn them on its own.

Link to comment
Share on other sites

Even the disclosure issues seem solvable, especially if we can require that the competitors provide the computer with a corpus of hands consistent with the bidding so far.

Note that this is something that is relatively easy for a computer to do, but hard for a human.

A corpus of hands is not very helpful without telling the computer what features they have in common, and what distinguishes them from hands that would have made other bids.

 

This is where the neural network would excel. You'd feed it millions of hands and auctions, and it would learn on its own how the bids relate to the hands, and which features of the hand are important.
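As a toy illustration of learning how a bid relates to hand features purely from data: below we simulate a corpus in which (unknown to the learner) players open with 12+ HCP, plus some judgment noise, and then recover that relationship as empirical opening frequencies per HCP count. The corpus, the 12-HCP rule, and the noise levels are all assumptions for the demo, not real data or a real learning system.

```python
import random
from collections import defaultdict

random.seed(3)

def deal_hcp():
    """Deal 13 cards and count high-card points (A=4, K=3, Q=2, J=1)."""
    deck = [4, 3, 2, 1] * 4 + [0] * 36
    return sum(random.sample(deck, 13))

opened = defaultdict(int)
total = defaultdict(int)
for _ in range(50000):
    hcp = deal_hcp()
    # Hidden rule that generated the corpus: open with 12+ HCP,
    # with a little human judgment noise either way.
    p_open = 0.9 if hcp >= 12 else 0.05
    total[hcp] += 1
    opened[hcp] += random.random() < p_open

# The learner never sees the rule, only the statistics it leaves behind.
for hcp in (8, 10, 11, 12, 14):
    print(f"{hcp:2d} HCP: opened {opened[hcp] / total[hcp]:.0%} of the time")
```

A real network would do the same thing across many features and whole auctions at once, but the principle is identical: the rule is never written down, it is reconstructed from frequencies in the corpus.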

Link to comment
Share on other sites

Lee Se-Dol is looking forward to playing against AlphaGo.

After the date of the challenge was set, Lee said it was a great pleasure for him to play against an artificial intelligence: "Whatever the outcome, it will be a very meaningful event in the history of Go. I have heard that the artificial intelligence is unexpectedly strong, but I am confident I can win, this time at least."

Many readers of course support Lee. They think the biggest difference between artificial intelligence and a human is that every step of the computer's calculation is the best choice, while a human's plan is not necessarily best, but a human is able to set a trap. Of course, many readers also strongly support AlphaGo: "Don't look down on artificial intelligence; AI has super computing power. Can a human match that?"

I will vote for Lee Se-Dol. I think it is impossible for AlphaGo to beat the top humans at present. AlphaGo may have beaten the European Go champion, but compared to the Go champions of China, Japan and South Korea, he is too weak.

  • Upvote 2
Link to comment
Share on other sites

I've never considered bridge to be a particularly complicated game for a computer to play.

I think that there are some complicated issues around disclosure, but nothing that seems particularly challenging.

 

Even the disclosure issues seem solvable, especially if we can require that the competitors provide the computer with a corpus of hands consistent with the bidding so far.

Note that this is something that is relatively easy for a computer to do, but hard for a human.

I think that the problems with having people play against computers are likely to be on the human side, rather than the computer side...

 

I suspect that the reason that there is no equivalent to Deep Blue is the (relative) insignificance of the game.

The serious academic researchers prefer to focus on games that are more popular. (Chess, Poker, Go)

I doubt that any of these claims or reasons are valid.

It is also a myth that there has been no serious academic effort.

The question of how challenging or complex a game is for the human mind is not the decisive criterion for whether we will see software beating a human.

Many seem to believe that if only IBM, Google, Apple or the like would provide some serious resources, we would see such a bridge computer tomorrow.

I doubt that.

No serious amount of resources would have put a man on the moon in the 19th century.

You need good ideas and a strategy before resources (money) can be put into a productive investment.

And even if these conditions are present there is no guarantee of success.

For example, despite the effort and money spent so far, we always seem to be 50 years away from the first commercial fusion reactor.

 

I believe compared to Chess or Go, the challenges are very different for putting Bridge logic into software.

To mention just two:

Bridge is a game of incomplete information, Chess and Go are not.

Bridge is a partnership game played against another partnership, neither Chess nor Go are.

This alone makes it a challenge to define what a good bridge program would even be.

One that plays with a clone of itself as partner? That is like identical twins playing bridge together.

It is entirely possible that identical twins could be world class when playing together, but be only mediocre when playing with others.

They certainly would have a big advantage when playing in a partnership.

Not my definition of a great Bridge player. But this is the way Computer Bridge championships are played today.

A more suitable comparison would be two independently developed Bridge software programs playing against an expert partnership.

Two neural networks might be acceptable as partners, provided they were trained on independent deals and experience.

 

I am not saying we will never see bridge computers capable of beating experts, but there are good reasons why nobody has come close so far.

 

Rainer Herrmann

  • Upvote 1
Link to comment
Share on other sites

I wouldn't mind playing bridge where my partner would be myself (careful choice of words, avoiding innuendo). If anything, I'd identify my annoying habits (in bidding or play) and would be able to correct for them much more efficiently. It can be fun but after a while exhausting to convince my partners to try out some new treatment or to scrap some of their idiotic conventions. I think a lot of people would like to at least try playing in a partnership where the other member is a clone (hopefully I still managed to avoid most of the double entendres).
Link to comment
Share on other sites

One of my projects for my webpage was "bid with yourself", where you bid hundreds of similar deals at the same time, partnering yourself, with enough time spacing that you hopefully forget the sequences, and maybe adding notes to yourself to be checked when the bidding is over. But I am moving to other areas at the moment.
Link to comment
Share on other sites

Large strides were made in Chess due to the presence of open source projects to which people from all over the world could contribute coding ideas, testing H/W, etc. The existence of some reverse engineered commercial S/W also helped ;) It is quite possible that initiating an open source Bridge project could result in Bridge programs that can play stronger than humans.
Link to comment
Share on other sites

Please note we already have computers that play better bridge than the majority of humans; I would say the vast majority of the world's humans. We also have computers that play better than the "average" ACBL member.

 

I think the question is how long until they can beat a top ten or twenty player or pair. I would expect this before 2050, as we gain a fuller understanding of how the hardware and software of the brain work at a fundamental level. This research is being done now.

Link to comment
Share on other sites

I think the question is how long until they can beat a top ten or twenty player or pair. I would fully expect this before 2050, as we gain a fuller understanding of how the hardware and software of the brain work at a fundamental level.

I think it has already happened. 15 years ago, Jack played at about "meesterklasse" level (highest level of the Dutch competition, comprising about 100 players out of an organized bridge population of about 150,000). I don't know how much it has improved since then but the increased CPU speed since year 2000 alone might be enough for it to win the Bermuda Bowl.

 

It is not easy to measure, though. In Go or chess, just a handful of games against a world champion is enough to give you an idea of relative strength, and those games are easy to set up. In bridge, it is a lot of work to make the computer understand the human bidding and carding system, and then you need a sample size of several hundred boards to reach a reasonably robust verdict.
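The "several hundred boards" figure can be sanity-checked with a back-of-envelope calculation. The per-board IMP standard deviation of ~5.5 and the 0.5 IMP/board edge are assumptions for illustration; real values vary with the field and format.

```python
import math

sigma = 5.5      # assumed std deviation of the per-board IMP difference
edge = 0.5       # true edge we want to resolve, in IMPs per board
z = 2.0          # roughly 95% confidence (two standard errors)

# Standard error of the mean over n boards is sigma / sqrt(n); we need
# it to be at most edge / z, so n >= (z * sigma / edge)**2.
boards = math.ceil((z * sigma / edge) ** 2)
print(f"boards needed: ~{boards}")
```

Under these assumptions you need on the order of 500 boards, which matches the intuition that a handful of deals tells you almost nothing, unlike a handful of chess or Go games.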

  • Upvote 2
Link to comment
Share on other sites
