
Google has beaten Go, what could it do to bridge?



This means even if they wanted to "adjust the program" between games there isn't really a way to do that.
You could retrain the network.
Unfortunately, there's no Nobel Prize category for Computer Science.
I'm sure he'll settle for the Turing Award.

What happened to the Japanese players? In the movie Go Masters the Japanese masters dominated Go in the thirties.

The Japanese professionals were the strongest for a very long time, from the 1600s until about the early 1990s. Then the Koreans overtook them and the Chinese not long after.


At the beginning of the event, almost none of us had ever heard of AlphaGo, and nobody believed it could beat humans at the world's most complex game.

Now some Go champions think:

1- In the opening, AlphaGo shows only about professional 6-dan skill.

2- In the middle game, AlphaGo miraculously shows something like professional 13-15 dan skill (the highest human rank is 9 dan), far beyond the world champions. Its calculation is far more precise than a human's, and what deserves the most admiration is its whole-board positional judgement, which is simply better than a human's. Really great!

3- Its local reading is often weaker than a human's.

4- All the Go champions see that AlphaGo can help them improve their own skill rapidly, and they like it very much.

5- If the DeepMind team is willing to keep developing AlphaGo for a few more years, it will surely have its own Go theory. We are 100% sure AlphaGo will become a Go god, that is to say, it will be able to teach the world champions how to play Go correctly!

 

So I have to say: how strong AlphaGo is, and how formidable Google's technology is! Hats off to AlphaGo!

This is my feeling.

Thanks to everyone, especially to the "OP".


What we are most concerned about is that AlphaGo is just passing by, and that after its retirement it will leave the Go world forever.

 

 

Yes, this is true... AlphaGo may retire, but AI in general will live on.

 

Thousands and thousands of years from now, AI will live on.

 

Ancestors

 

-----

 

 

Humans can and do evolve along with machines.

 

For some reason many forbid evolution.

 

 

--------

 

 

genes are selfish

 

------------

 

 

genes jump species.


Further, part of what's exciting from an AI standpoint is that AlphaGo is built on a very general machine learning framework. There isn't really human "expert knowledge" being hard-coded in the way it is in most chess programs.

I somewhat disagree. I realize this wasn't the point you were making, but assuming that this approach is generalizable to all sorts of problems (Mr. Hassabis' favourite example being "healthcare") seems dangerous to me.

 

1. The way the two neural networks are connected is already an encoding of expert knowledge, as this is exactly what an expert Go player does: read some key variations and evaluate the board at the leaf nodes. While Google claims that even the Policy Network alone can be moderately successful, it would obviously not be impressive enough to create all this hype.

 

2. Google found that a mixed evaluation between the Value Network and Monte Carlo rollouts was more successful than the Value Network alone. Monte Carlo rollouts, at a minimum, actually encode human expert knowledge about "eyeshape" to make sure that the game comes to a reasonable end rather than the players suiciding all their stones. The previous state of the art programs included even more hand-crafted patterns in their Monte Carlo playouts, and AlphaGo might do this also.
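For what it's worth, the published description (Silver et al., Nature 2016) gives that mix as a simple weighted average at each leaf; I'm writing it from memory, so the notation may differ slightly from the paper:

V(s_L) = (1 - \lambda)\, v_\theta(s_L) + \lambda\, z_L, \qquad \lambda = 0.5

where v_\theta(s_L) is the value network's evaluation of the leaf position and z_L is the outcome of a fast rollout from that leaf; the paper reports the equal mix working best.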

 

3. The training data builds on 1000+ years of accumulated human knowledge. Consider that, for the very first move of the game, there are 55 possibilities after accounting for symmetry. Only 2 of those are commonly played in professional games, with a further 2-4 considered potentially viable. AlphaGo so far has not deviated from those top 2 moves. While it is very exciting to Go players to see what AlphaGo would come up with if trained only on self-play, there is no guarantee that it would lead to a strong program in a reasonable timeframe.
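(A sanity check on that 55: by Burnside's lemma over the board's 8 symmetries, the identity fixes all 361 points, each of the three non-trivial rotations fixes only the centre point, and each of the four reflections fixes a 19-point axis, so the number of distinct first moves is

\frac{1}{8}\left(361 + 3 \cdot 1 + 4 \cdot 19\right) = \frac{440}{8} = 55.)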

 

Most importantly, it seems to me that the parametrization of the neural network is very significant. Note that the inputs to AlphaGo's networks include whether a stone/group is capturable in a ladder. This is pretty Go-specific obviously! And Go is a very well bounded problem - finding the correct parametrization of a neural network for a more fuzzy problem is going to be quite nontrivial.


One of the ways you can tell that a story has hit the mainstream is when it gets spoofed on a late night comedy show. Last night AlphaGo got there: The Daily Show did a bit about AI that was mostly about it.

 

I'm still hoping for SNL to spoof the bridge cheating scandal (I vaguely remember a few jokes about Disa's "doping" violation a few years ago).


  • 3 weeks later...

My strong opinion on the topic of computers in bridge is that if *serious* effort was put into building a world-class bridge program, and it was allowed to play unrestricted by ACBL/WBF system constraints (with full disclosure, obviously) then it would easily win the BB.

 

The only reasons it hasn't happened yet are:

 

a) The issues regarding disclosure (mostly towards the computer) represent a huge grey area.

b) There is no demand for such a computer.

 

Now all we need is for someone to offer a 100 million dollar bet, and I'll be able to prove myself correct!

 

Any takers?


Today's SMBC is apropos

 

http://www.smbc-comics.com/index.php?id=4075

 

(I'm just linking instead of embedding because there's some NSFW content.)

 

There was a game designed to make it hard for computers to win, Arimaa (https://en.wikipedia.org/wiki/Arimaa), but computers defeated humans at that in 2015 (a 10K challenge had been held every year since 2004).


There was a game designed to make it hard for computers to win, Arimaa (https://en.wikipedia.org/wiki/Arimaa), but computers defeated humans at that in 2015 (a 10K challenge had been held every year since 2004).

 

I had never heard of Arimaa before this thread, but having read the wiki article I cannot understand why the designers thought that it would be particularly hard(er) for a computer to learn to play well than, say, chess, Go, Othello, etc.


I had never heard of Arimaa before this thread, but having read the wiki article I cannot understand why the designers thought that it would be particularly hard(er) for a computer to learn to play well than, say, chess, Go, Othello, etc.

 

Because each side takes 4 steps at a time in a single turn, that raises the branching factor by an exponent of 4, more or less. There are at most 361 moves per turn on a 19 x 19 Go board (decreasing over time, plus some symmetry early on), which works out to about 250 on average for the branching factor of Go. In a typical chess position there are about 35 legal moves. For Arimaa you'd expect that could mean about 35^4 = 1,500,625. In actuality, because the pieces in Arimaa don't move quite as freely as the pieces in chess, it is quite a bit less than that, and the real measured average branching factor for Arimaa is 17,281. But still, it was largely the branching factor (250 >> 35) that made Go considered much harder than chess. And 17,281 >> 250, which makes Arimaa much harder still for computers.

 

Since a lot of computer game play uses some form of minimax search (what is my best move, assuming you make your best reply, assuming I make my best reply, and so on), it means you are searching down a tree with these branching factors. So if you are looking 4 ply (2 turns for each of us) down the tree, then in chess you need to worry about 35^4 = 1,500,625 possible sequences of moves. For Go it would be about 250^4 = 3,906,250,000 possible sequences. In Arimaa it would be about 17281^4 = 89,181,645,395,627,521 possible sequences. Now, all of these algorithms would have optimizations: pruning, symmetry, memoization of the same position reached by different paths, and so on. And it is possible that the static evaluation functions for the various games differ in difficulty, but still, the branching factor is an obvious reason why the game would be hard for traditional computer AIs.
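For concreteness, a few lines of Python reproduce those counts from the rough per-turn figures quoted above (the branching factors are the approximate averages given in the post, not exact values):

# Approximate average branching factors quoted above.
branching = {"chess": 35, "go": 250, "arimaa": 17281}

plies = 4  # two turns for each player

for game, b in branching.items():
    # Naive full-width tree: b ** plies move sequences, before any pruning.
    print(f"{game:>7}: about {b ** plies:,} sequences at {plies} ply")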


My strong opinion on the topic of computers in bridge is that if *serious* effort was put into building a world-class bridge program, and it was allowed to play unrestricted by ACBL/WBF system constraints (with full disclosure, obviously) then it would easily win the BB.

"Easily" is an overbid. Bridge results are limited to some extent by luck.

 

Also, what difference would it make to remove system restrictions? Do you imagine bots that will create their own bidding systems?

 

Last night on China's CCTV-1 program "Start To Lecture":

Lecturer: Nie Wei-Ping (China's Go sage)

Summary:

- I should call AlphaGo "Teacher Alpha"; it is far stronger than humans.

- AlphaGo has given us a shocking education.

If you understand Chinese, you can watch it. The link: http://tv.cctv.com/2...60o160410.shtml

She does not think Ke Jie can beat AlphaGo?

 

 


She does not think Ke Jie can beat AlphaGo?

 

It should be he.

Nie Wei-Ping apparently thinks AlphaGo's skill is far beyond the highest level of any human professional Go player. AlphaGo makes humans feel fear!

Of course, Ke Jie probably can't beat AlphaGo.

P.S.

Google's CEO came to Beijing several days ago and discussed a challenge match.


"Easily" is an overbid. Bridge results are limited to some extent by luck.

That's why you play long matches, to reduce the luck factor.

 

Notice that we typically see most of the same faces in the late rounds of major bridge events like the Spingold and the Bermuda Bowl -- how often is there a real surprise win? That implies that ability is the major factor in who wins, not luck. Therefore, if we could make a bridge program that's better than all the human bridge champions, it should be able to win. And if it's much better, it should win consistently.
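As a back-of-the-envelope illustration of the luck-versus-skill point (the numbers here are made up purely for illustration: a 0.3 IMP average edge per board with a 6 IMP standard deviation), a short simulation shows how the better team's winning chances grow with match length:

import random

EDGE_PER_BOARD = 0.3   # hypothetical average IMP gain per board for the stronger team
SD_PER_BOARD = 6.0     # hypothetical standard deviation of the IMP swing per board

def win_probability(boards, trials=20000):
    """Estimate how often the stronger team finishes ahead over a match of this length."""
    wins = 0
    for _ in range(trials):
        margin = sum(random.gauss(EDGE_PER_BOARD, SD_PER_BOARD) for _ in range(boards))
        wins += margin > 0
    return wins / trials

for boards in (16, 64, 128):
    print(boards, "boards:", round(win_probability(boards), 2))

The exact figures don't matter; the point is that the same per-board edge that is nearly invisible over 16 boards becomes decisive over a long match.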


"Easily" is an overbid. Bridge results are limited to some extent by luck.

 

Also, what difference would it make to remove system restrictions? Do you imagine bots that will create their own bidding systems?

 

Fair call - "go in as short-priced favourites" is probably more accurate.

 

Forcing computer bridge engines to play a standard method is the equivalent of forcing a chess engine not to deviate from standard opening lines.

 

Given computers have no limitations on memory or complexity, they would gain a huge advantage from being allowed to devise a complex, completely optimal (and artificial) relay bidding system that attributes a precise meaning to every possible auction and continuation.
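A rough information count (my own arithmetic, not from any actual system) shows why: a hand has only 560 possible exact suit distributions, so a precise relay scheme needs surprisingly few steps to resolve shape completely, leaving plenty of bidding space for strength and controls.

from math import comb, log2

shapes = comb(16, 3)    # ways to split 13 cards among 4 suits = 560 exact shapes
hands = comb(52, 13)    # distinct 13-card hands = 635,013,559,600

print(shapes, "exact shapes, about", round(log2(shapes), 1), "bits to resolve")
print(hands, "distinct hands, about", round(log2(hands), 1), "bits in total")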


Given computers have no limitations on memory or complexity, they would gain a huge advantage by being allowed to devise an entirely artificial relay bidding system that incorporated all kinds of revolutionary ideas.

Why shouldn't computers be subject to the same system restrictions as humans? They should play the same game as we do. We have to figure out how to find the best game within these limits, so should they.

 

Sure, one of the reasons human bidding systems have only limited artificiality is that most of us wouldn't be able to remember dozens of artificial bids, so the restrictions are to some extent of our own making.

 

There's also an inherent limit to artificiality: someone needs to bid naturally before you get past the safe level. So even though there are 34 legal responses to a 1♣ opening, potentially allowing you to send a huge number of different messages, most of them will get you past a contract you can make.


Why shouldn't computers be subject to the same system restrictions as humans? They should play the same game as we do. We have to figure out how to find the best game within these limits, so should they.

 

Sure, one of the reasons human bidding systems have only limited artificiality is that most of us wouldn't be able to remember dozens of artificial bids, so the restrictions are to some extent of our own making.

 

There's also an inherent limit to artificiality: someone needs to bid naturally before you get past the safe level. So even though there are 34 legal responses to a 1♣ opening, potentially allowing you to send a huge number of different messages, most of them will get you past a contract you can make.

 

Why are system restrictions imposed at all?

 

In a practical (read: human vs human) situation, there are many good reasons to impose some system restrictions. They make disclosure easy, create a level playing field and generally facilitate an enjoyable game.

 

But if you consider bridge in its purest form, the rules are elegantly simple: you must make a legal bid, and that bid can mean anything as long as you fully disclose the meaning to your opponents. Computers would have absolutely no issue with the disclosure aspect - they could even deal out a huge sample of hands that fit partner's bidding pattern on request and present them to the opponents as examples.
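That kind of disclosure-by-sampling is straightforward to sketch. A minimal illustration (the deal representation and the matches_auction() constraint below are hypothetical stand-ins, not any real engine's API):

import random

DECK = [rank + suit for suit in "SHDC" for rank in "AKQJT98765432"]

def random_deal():
    """Shuffle the deck and deal four hands of 13 cards."""
    cards = random.sample(DECK, 52)
    return {seat: cards[i * 13:(i + 1) * 13] for i, seat in enumerate("NESW")}

def matches_auction(hand):
    """Hypothetical stand-in for 'is this hand consistent with the explained auction?'
    Example constraint only: five or more spades and 11+ high-card points."""
    hcp = sum({"A": 4, "K": 3, "Q": 2, "J": 1}.get(card[0], 0) for card in hand)
    spades = sum(card[1] == "S" for card in hand)
    return spades >= 5 and hcp >= 11

def sample_explanations(seat="N", n=100):
    """Rejection-sample deals, keeping hands for the given seat that fit the explanation."""
    examples = []
    while len(examples) < n:
        deal = random_deal()
        if matches_auction(deal[seat]):
            examples.append(sorted(deal[seat]))
    return examples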

 

However, complex methods have enormous unrealized potential. Just one example (which is legal even under the current rules): it's possible to exchange a key (specifically, possession of some subset of particular cards) which allows you to encrypt a message that only your partner can understand. A method that I've seen allows you to check for a major fit without revealing to the opponents which major either player holds.
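And a toy version of the encryption idea, entirely invented for illustration (it is not the method on the linked system card): suppose the earlier auction has already guaranteed that our side holds the heart queen. Each of us can see our own hand, so each of us knows who actually holds it, but the defenders do not, and that shared secret is the key.

# Toy encrypted agreement: the meaning of the artificial bids "X" and "Y"
# depends on who holds the heart queen, a fact both partners can deduce from
# their own hands (given the prior auction) but the opponents cannot.

def encode(my_major, i_hold_hq):
    """Bidder chooses the bid that shows my_major under the shared key."""
    if i_hold_hq:
        return "X" if my_major == "hearts" else "Y"
    return "Y" if my_major == "hearts" else "X"

def decode(bid, partner_holds_hq):
    """Partner inverts the mapping using the same key; defenders only see X or Y."""
    if partner_holds_hq:
        return "hearts" if bid == "X" else "spades"
    return "spades" if bid == "X" else "hearts"

assert decode(encode("hearts", True), True) == "hearts"
assert decode(encode("spades", False), False) == "spades"

The partnership can locate a major-suit fit this way while the defenders only learn that some major is being shown.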

 

Have a look at the final page of this system card for a full explanation.

 

http://livebridge.net/bbo/abf/cc/476791-497746.pdf


That could be interesting, even if I don't expect a different result.

No further details have been disclosed; it appears that Google is not planning to hold another AlphaGo vs. human event any time soon.

Here I post three pictures.

 

 

http://i63.tinypic.com/2lifoxx.jpg

Ke Jie and Sundar Pichai

 

http://i68.tinypic.com/2ldadf7.jpg

 

http://i66.tinypic.com/14d39k9.jpg


System restrictions rarely impose any limits beyond openings and initial overcalls. Even the ACBL Mid-Chart has few restrictions beyond this; the WBF has virtually none.

 

An advantage computers have is the ability to memorize many long sequences without error. This will matter more in long and rare auctions, not in the openings. So I don't see much difference made by system restrictions here.

 

There are issues with communicating meanings of bids between computers and humans, especially in the area of when/how an agreement may be violated.

 

In any case, watching the late rounds of top-level matches, there are enough clear errors (fatigue probably being a factor) that it's easy to believe a serious effort at computer players could win consistently.


But if you consider bridge in its purest form, the rules are elegantly simple: you must make a legal bid, and that bid can mean anything as long as you fully disclose the meaning to your opponents. Computers would have absolutely no issue with the disclosure aspect - they could even deal out a huge sample of hands that fit partner's bidding pattern on request and present them to the opponents as examples.

That's a horrible method of disclosure. Seeing a huge sample of hands doesn't tell you which features of those hands are relevant to the situation at hand. It would be like the encyclopedia entry for "dog" just having pictures of dozens of dogs, without any words explaining how they differ from wolves.

