Antrax Posted July 26, 2012

Interesting, from what I understood from Stephen Tu's posts, single-dummy analysis is possible but computationally expensive.
Scarabin (Author) Posted July 26, 2012

> Interesting, from what I understood from Stephen Tu's posts, single-dummy analysis is possible but computationally expensive.

Not familiar with these; can you reference specific topics/posts, please? By the way, I am not sure exactly what Gibson (in GIB) does either.
helene_t Posted July 26, 2012

Since Bridge is a finite game, single-dummy analysis is conceptually possible, but it would surprise me if it became feasible before we get quantum computers. But maybe a little could be done towards patching the biggest holes in the double-dummy approach. I have seen GIB lead a stiff king in a suit bid by declarer - makes sense, since declarer would never take a losing finesse anyway!
Antrax Posted July 26, 2012

Here's one: http://www.bridgebase.com/forums/topic/53476-cant-cash-on-the-way-to-a-potentially-big-set (there are better examples).

Helene, that seems a bit pessimistic. Intuitively, it seems possible to write a program that plays as well as a low intermediate. So no practice finesses, no suit blockages and no discarding your K, and you end up with something more fun to play with than GIB: something that can sometimes get a quad-shifted guard squeeze right or whatever, but doesn't lead AK vs. 6NT.

[edit] Another example: http://www.bridgebase.com/forums/topic/52712-second-hand-high/page__view__findpost__p__632092
And this: http://www.bridgebase.com/forums/topic/52989-gib-the-worst-declarer-play/page__view__findpost__p__635141
advanced Posted July 26, 2012

> Thanks for your responses, I really appreciate your valuable input. To Advanced: I chose GIB as being familiar to most BBOers and because Barmar gives us insights into its actual methods. I agree Jack is very professional, but so is Wbridge5, and both have serious limitations and, I find, infuriating defects.

If you think Jack has "infuriating defects", then GIB makes you want to buy a rope to hang yourself. GIB is a baby compared to Jack.
bluecalm Posted July 26, 2012

> Since Bridge is a finite game, single-dummy analysis is conceptually possible, but it would surprise me if it became feasible before we get quantum computers.

> Helene, that seems a bit pessimistic. Intuitively, it seems possible to write a program that plays as well as a low intermediate.

Wow, my intuition on this one is completely different. I am shocked nobody has written a card-playing program better than top humans by now, and I think it's only for the lack of incentives (not that current programmers are bad, just that the lack of incentives to write good bridge-playing programs makes the field stagnant compared to, say, chess programming). Computers are fast these days; they can simulate a lot. I think 1 or 2 levels of "assuming he plays double dummy and I do that, the result is ..." are feasible, and that could potentially be very strong. Anyway, I might be completely off on this one. Can't wait to finally have time to try some ideas and see how they fare.
Scarabin (Author) Posted July 26, 2012

To Advanced: What I want, perhaps unreasonably, is a simulation that plays and bids to the 4th level of BridgeMaster. I fear that prolonged exposure to Jack, Wbridge5, GIB and Shark may actually lower my standard of play. I hesitate to identify differences without a lengthy test on set hands.

To Antrax: Thanks, but do you not wonder where Stephen Tu gets his special information? Just writing a simulation does not give any special insight into other authors' achievements, although it certainly shows you what can go wrong. Contrast his attitude with Barmar's. Barmar manages to correct our assumptions about GIB without appearing either condescending or arrogant. I am aware that analysis of the later plays becomes more accurate, although my commercial copy of GIB seems to make mistakes at all stages of the play.
nigel_k Posted July 26, 2012

> Wow, my intuition on this one is completely different. I am shocked nobody has written a card-playing program better than top humans by now, and I think it's only for the lack of incentives (not that current programmers are bad, just that the lack of incentives to write good bridge-playing programs makes the field stagnant compared to, say, chess programming). Computers are fast these days; they can simulate a lot. I think 1 or 2 levels of "assuming he plays double dummy and I do that, the result is ..." are feasible, and that could potentially be very strong. Anyway, I might be completely off on this one. Can't wait to finally have time to try some ideas and see how they fare.

I have the complete opposite view. Not having full information makes the problem exponentially harder. Actually, I'm not even sure the problem of how to play and defend a hand is well enough understood right now that any speed of computer could be programmed to beat the top humans.
barmar Posted July 27, 2012

I think it's a little bit of both. Programming bridge is HARD. What makes bridge such a fascinating game is that it's so hard for us to figure out how to play. A bridge player with any competence has to make lots of inferences about WHY partner and the opponents are doing what they're doing. Getting into the mind of others is a difficult thing to program, but it's actually one of the things the human mind does very well. Bridge expertise depends on an aspect of humanity that really sets us apart from computers.

Put that together with the fact that there's not a huge market for bridge programs. For some reason, chess programs have always been more popular. And until Deep Blue bested the world's best chess player, it was always seen as a very legitimate area of study. And it was true that, until computers got fast enough, the types of analysis necessary to make a good chess program were very interesting computer science problems.

Currently, GIB works by dealing a bunch of random hands, then calculating the double-dummy result of playing each legal card with each of them, and choosing one of the cards that has the best average score across all the hands. Its biggest problem is in selecting the hands to perform this analysis on; currently, it just selects them based on how consistent they are with the bidding. It DOESN'T make any inferences from previous cards played (except I think it does recognize standard honor sequence leads, and infers that the adjacent card is in the player's hand).

What a program COULD do is go back to the previous card played, insert it into the player's hand in each of these random hands, and do a similar analysis from that player's perspective to see if that was a reasonable card to play with each of the hands, so that it could discard hands that are not consistent with the previous player playing that card. It could then repeat this process for each previous card played, up to some reasonable limit. But this is extremely computationally expensive, even for modern computers. Realize that in the above process, we're just using all those extra simulations to filter the hands down to a set of hands to perform DD analysis of OUR play on. If we want a reasonable number of final hands, so that the statistical results are meaningful, it means performing all that backtracking on many thousands of hands. That adds up quickly, especially in early rounds of play when there are so many choices of plays.

Lack of this kind of analysis is the reason for some really stupid things GIB does. If declarer cashes an ace, with the queen in dummy, and partner plays the king, GIB can't figure out that the only reason for this is that partner had a stiff. Even the dumbest human recognizes this, and uses it to refine his count of the hand, or gives partner a ruff, but GIB doesn't do any of this.

When examining the GIB source code, trying to understand its defensive algorithms, I think I saw some code that was intended to do this kind of analysis, but it's disabled. When we asked Ginsberg about it, he said he tried to do it, but it slowed the program down too much. But that was a decade ago, so I'm curious how well it would work on modern systems.
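To make the sample-and-solve loop described above concrete, here is a minimal sketch. It is not GIB's actual code: `deal_consistent_with_bidding` and `dd_score` are hypothetical placeholders standing in for a constrained dealer and a double-dummy solver.

```python
import random
from collections import defaultdict

# Minimal sketch of Monte Carlo card selection over double-dummy solutions.
# The two helpers below are placeholders, not real GIB components.

def deal_consistent_with_bidding(known_cards, constraints, rng):
    """Placeholder: deal the unseen cards into the hidden hands, respecting
    whatever the auction implies (HCP ranges, suit lengths, etc.)."""
    raise NotImplementedError

def dd_score(layout, contract, card):
    """Placeholder: double-dummy result of the contract on this layout
    if we play `card` now (tricks, or a converted IMP/matchpoint score)."""
    raise NotImplementedError

def choose_card(legal_cards, known_cards, constraints, contract,
                n_samples=50, seed=0):
    rng = random.Random(seed)
    totals = defaultdict(float)
    for _ in range(n_samples):
        layout = deal_consistent_with_bidding(known_cards, constraints, rng)
        for card in legal_cards:
            totals[card] += dd_score(layout, contract, card)
    # Choose a card with the best average double-dummy score across samples.
    return max(legal_cards, key=lambda c: totals[c] / n_samples)
```

The filtering proposed in the post above would slot in between dealing a layout and scoring it: reject any layout on which an opponent's earlier plays would not have been reasonable.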
Quantumcat Posted July 27, 2012

I wonder if it would make a good postgraduate computer science research topic: making a bridge program that plays more like a human? It wouldn't have to be commercially viable (it would probably need a supercomputer to run quickly enough), but perhaps it could compete in the computer bridge championships ...
Stephen Tu Posted July 27, 2012

> Thanks, but do you not wonder where Stephen Tu gets his special information?

Why not just ask me? My information is not "special". It is just information gleaned from Matt Ginsberg's postings on rec.games.bridge and the gib-discuss mailing list that used to be hosted on lists.gibware.com but is now defunct, mostly from the late 90s until 2002. Some of Matt's posts are archived on groups.google.com from the rec.games.bridge usenet group; unfortunately a lot has been lost to the sands of time of the un-archived internet. A shame that the GIB mailing-list archive isn't still online. Perhaps BBO has it in storage somewhere and could dig it out and make it available again? Ginsberg's posts were very educational for people who are curious about how GIB thinks and why it is prone to making the kinds of mistakes it does. I probably only truly understood half of his more technical posts, but that's still more than a lot of people on the robot forum coming in and making proclamations about how it should be fixed despite knowing close to zero about how it works.

I also did some volunteer grunt work for Matt some 10-11 years ago, manually combing through many hundreds of deals GIB played vs. humans on Swangames bridge, noting the more egregious bidding holes, some of which Matt would attempt to fix before he quit working on GIB ~10 years ago. Unfortunately bidding is very complex, with zillions of potential sequences, so there are still plenty of holes, and as new fixes are put in, sometimes old things break again. As bad as GIB still is on some sequences, especially rarer ones and competitive sequences, 12 years ago it was even nuttier!

Note also that Ginsberg mainly developed GIB's play engine; the bidding was based on a rules DB from "Meadowlark Bridge", by Rod Ludwig. Ginsberg did modify it extensively to link it with GIB's simulations, trying to base more of the bidding on simulation and fewer rules. But he found that there were just too many holes: GIB would do its simulations and conclude that doing something crazy would work (often bidding to some high level where auctions were undefined), assuming opps couldn't deal with it (and it would be right playing vs. itself, but real humans would just pull out the obvious penalty double etc.), so he had to turn off psyching and go back to more rules and less simulation in spots.

> Contrast his attitude with Barmar's.

Sorry, I know my posting style is blunt. But I have little tolerance for idiocy, people making declarations of how things should be done when it is clear they don't really know what they are talking about. If this comes off as arrogant, so be it. I also think it is arrogant to think that a small group of you will, in your spare time, in a few short years, do better than Ginsberg, or people like Hans Kuijf/Yves Costel and their Jack/Wbridge5 programs. I agree totally with fuburules3's post above.

BTW, GIB does do single-dummy reasoning after a few tricks, as declarer only. If you have the commercial program and are running the bridge.exe command-prompt mode, you can run with lots of different parameters described at http://www.gibware.com/engine.txt. Especially experiment with the -L par mode and the -m (number of deals) parameters when working with the harder problems. When it switches to Gibson, the single-dummy engine, it prints out the message "switching to Gibson", and with the -s flag you can see that its method of reasoning changes.

IMO GIB does very well as declarer when given proper time controls and when it understands the auction (it can get thrown off if it hits a bidding DB hole, and then it biases its sample deals badly). If you think it is still making mistakes at late stages of the play, maybe post some hands where you think it is going wrong? The problems with GIB, IMO, are that the bidding DB still has tons of holes (which also affects play/defense, since it uses bid info to bias its sample), and that defense is severely crippled by (1) the inability to signal and (2) the assumption of a double-dummy declarer, which leads it to solve declarer's guesses for him. The declarer play, in my view, is quite good. But all bets are off if you are playing with the "basic" bots, which have Gibson turned off and the super-fast time settings; then it can play rather horrendously.

> What I want, perhaps unreasonably, is a simulation that plays and bids to the 4th level of BridgeMaster.

Gibson is claimed to be 26/36 on Level 4 BridgeMaster, although there is some debate about some of the ones GIB "missed" in the following rgb thread (i.e. for some of the "missed" ones, Gibson's analysis is arguably defensible vs. Gitelman's): https://groups.googl...dk/6FbE6W2QG4oJ
Scarabin (Author) Posted July 27, 2012

There is one thing I am confident about: if you focus on a specific narrow problem, you have a good chance of solving it (and I hope that by getting a lot of programmers to focus on such problems we could produce a first-class simulation). Think of deriving information about the concealed hands from the bidding: this is feasible to program and can include ancillary information like "partner has called for a lead of this suit". If there is a limited number of bidding systems, the computer can obtain the information itself, or we can cope with a lot of systems by having the user enter the understanding of each bid. From this it is a small step to programming opening leads. I am not telling anyone anything they do not know already, but I am trying to explain why I think it is possible for a rule-based system ultimately to outplay a random simulation. I think there is already sufficient evidence that rule-based systems are better for bidding.

To Stephen Tu: Thanks for that very helpful post. I would have asked you directly, but I must confess I was put off by your style; I am still new enough to posting to have delicate feelings. I will investigate the links you give. By the way, I think I have been using GIB with Gibson turned off, so thanks for the information.
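As a concrete illustration of deriving constraints on the concealed hands from the auction, here is a tiny sketch. The representation (a `BidConstraint` of an HCP range plus minimum suit lengths) and the example ranges are illustrative assumptions only, not any particular program's data structures.

```python
from dataclasses import dataclass

HCP = {'A': 4, 'K': 3, 'Q': 2, 'J': 1}

def hcp(hand):
    # hand is a list of cards written rank+suit, e.g. 'KS' = spade king
    return sum(HCP.get(card[0], 0) for card in hand)

def suit_length(hand, suit):
    return sum(1 for card in hand if card[1] == suit)

@dataclass
class BidConstraint:
    min_hcp: int = 0
    max_hcp: int = 40
    min_len: dict = None          # e.g. {'S': 5} for a 5-card spade bid

    def satisfied_by(self, hand):
        if not (self.min_hcp <= hcp(hand) <= self.max_hcp):
            return False
        return all(suit_length(hand, s) >= n
                   for s, n in (self.min_len or {}).items())

def consistent(hand, constraints):
    """Keep a candidate layout only if every inferred constraint holds."""
    return all(c.satisfied_by(hand) for c in constraints)

# Example: an opponent opened 1S (say 12-21 HCP and 5+ spades under the
# assumed system); test whether a candidate hand fits that inference.
one_spade = BidConstraint(min_hcp=12, max_hcp=21, min_len={'S': 5})
candidate = ['AS', 'KS', 'QS', 'JS', 'TS', 'AH', '2H',
             '3D', '4D', '5C', '6C', '7C', '8C']
print(consistent(candidate, [one_spade]))   # True (14 HCP, 5 spades)
```

A dealer can then generate random layouts of the unseen cards and keep only those for which every hidden hand passes its accumulated constraints, including negative inferences from passes.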
Quantumcat Posted July 27, 2012

After reading posts by people who sound like they know what they are talking about, I would suggest you learn how GIB or Deep Finesse works first (and read their source code), figure out what their weaknesses are, and find out whether the programmers know of them and have reasons for designing their algorithms the way they have (more efficient). Then, when you have a holistic understanding of GIB and could program it yourself from scratch with no references, THEN you can go about designing your own bridge-playing software. Otherwise you'll end up making the same mistakes people did when they designed the very first bridge program, and you've got about 30 years' worth of mistakes to make and fix before you get to where GIB is now.
Siegmund Posted July 27, 2012

> I am shocked nobody has written a card-playing program better than top humans by now, and I think it's only for the lack of incentives (not that current programmers are bad, just that the lack of incentives to write good bridge-playing programs makes the field stagnant compared to, say, chess programming). Computers are fast these days; they can simulate a lot.

As nigel said, having each side choose a strategy makes the problem vastly harder. Without getting into details... single-dummy analysis of cribbage is something that might just barely be coming within reach of a computer. Single-dummy analysis of a random whole bridge hand is a long, long way away.

> I think 1 or 2 levels of "assuming he plays double dummy and I do that, the result is ..." are feasible, and that could potentially be very strong.

...strong, yes, in the same way that GIB is strong now -- and its cardplay playing with itself IS very strong -- but if your goal is to avoid the plays that GIB doesn't understand, you don't get there by sticking one ply of single-dummy on top of a double-dummy analysis.
Antrax Posted July 27, 2012

> I also think it is arrogant to think that a small group of you will, in your spare time, in a few short years, do better than Ginsberg, or people like Hans Kuijf/Yves Costel and their Jack/Wbridge5 programs.

Indeed. However, personally my goal is not to create the next bridge world champion program, but rather to go off in a challenging direction that wasn't explored too much (due to the constraints of the time), and that can result in a nice product (a program that plays bridge whose mistakes you can understand). A while ago I was involved with a group that tried to write a chess engine that explains why the right move is the right move - not by showing some responses and how you counter them, but by "understanding" stuff like "his queenside is weak" or "this keeps the bishop locked up", i.e. the way a tutor would explain it to a student. Nothing came of it, as probably nothing will come of this, but that's the sort of project that I think can be both feasible and rewarding.
Stephen Tu Posted July 27, 2012

This is Ginsberg's last paper on GIB and Gibson: http://www.jair.org/media/820/live-820-1957-jair.pdf

I'm too dumb and lazy to understand most of it, but if someone can, maybe you are the one who should join the BBO team and work on improving the bots. What I do think is that a "rules-based", "human-like thinking" approach to play is unlikely to work well (for bidding, rules are necessary; there it can work). Computers are good at brute-force searching and computation. From what I understand, what makes the chess programs good is mostly just sheer computational power; they get human GMs to tweak the positional evaluation, but brute force is the computer's edge, not "human-like thinking". I don't see why bridge wouldn't turn out the same. Our goal is "human expert level play" for the *output* of the program. But to me that requires more brute-force simulation, and the *internal logic* of the program is thinking like a computer, not human-type rules at all. The program thinks like a computer, but the card it selects ends up the same as the human expert's; it arrives at the same answer by a different path, a different technique.

I think GIB/Jack/Wbridge5 all show the superior results of the simulation approach vs. early versions of Bridge Baron etc., which used planning techniques. I think if you try to build a new rules-based cardplay program from scratch, you will likely just spend many, many years and still end up with a worse card player than the current GIB. More likely to succeed is some AI expert who can understand the GIB code, then extend it along the lines Ginsberg was thinking of, to fix the current flaws and take advantage of today's computers, with more processors/cores/memory. This is just my opinion; maybe I am wrong. But I bet on simulation, not rules-based cardplay, given what I know.
Antrax Posted July 27, 2012

Stephen Tu, you're preaching to the choir - I've posted in this very thread that I don't believe rule-based programs have a future. But surely there's a middle path where the computer AI takes into account that the opponents have incomplete information, or at least prunes impossible hands from the double-dummy sims based on previous plays. No?
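The pruning idea in the post above can be sketched in a few lines. This is only an illustration: `dd_best_cards` is a hypothetical stand-in for a real double-dummy solver, and "reasonable" is crudely equated with "double-dummy optimal".

```python
# Sketch: keep only sampled layouts on which every card an opponent has
# already played would have been a double-dummy-reasonable choice for them.

def dd_best_cards(layout, position, trick_state):
    """Placeholder: the set of cards `position` could play at `trick_state`
    without giving up a trick, double dummy, on this layout."""
    raise NotImplementedError

def plausible(layout, play_history):
    # play_history: list of (position, trick_state, card_actually_played)
    return all(card in dd_best_cards(layout, pos, state)
               for pos, state, card in play_history)

def prune(sampled_layouts, play_history):
    return [lay for lay in sampled_layouts if plausible(lay, play_history)]
```

As barmar notes, the cost is that each retained layout now requires many extra double-dummy solves, one per earlier play being checked.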
Scarabin (Author) Posted July 27, 2012

> After reading posts by people who sound like they know what they are talking about, I would suggest you learn how GIB or Deep Finesse works first (and read their source code), figure out what their weaknesses are, and find out whether the programmers know of them and have reasons for designing their algorithms the way they have (more efficient). Then, when you have a holistic understanding of GIB and could program it yourself from scratch with no references, THEN you can go about designing your own bridge-playing software. Otherwise you'll end up making the same mistakes people did when they designed the very first bridge program, and you've got about 30 years' worth of mistakes to make and fix before you get to where GIB is now.

This reads like a pretty rude and pompous post, but if it is directed at me then perhaps I deserve it. I did not mean to offend you by ignoring your posts, as I did not recognise any questions addressed to me, but please accept my apologies. I am hardly qualified to advise you on choosing a subject for a thesis and can only say that when I attended university we had a system of mentors, called tutors, and I would have discussed this with my tutor. That was a very long time ago, and nowadays I suspect the normal thing would be to approach one of the professors in your discipline.

Turning to your specific points: I do not have access to GIB's or Deep Finesse's source code (and as far as I know these are not available), so I cannot follow your advice. I wrote my first bridge simulation about 30 years ago for the Commodore 64 computer. The program covered bidding and defence and coped with the AutoBridge hands compiled by Alfred Sheinwold. I can understand that you consider a rule-based play program much more difficult than random simulation. What I cannot understand is the rush to discourage anyone from even attempting one. Do you consider Matthew Ginsberg presumptuous and arrogant? He cannot have been a proven author of world-class programs when he started GIB, and the same goes for John Norris (Shark), Yves Costel (Wbridge5) and Hans Kuijf (Jack). Benito Garozzo used to say that every great play starts with a single decision (like every journey starts with a single step). What is wrong with taking results on faith and seeing what you can do? Again my apologies, I meant no disrespect.
Scarabin (Author) Posted July 27, 2012

> I think GIB/Jack/Wbridge5 all show the superior results of the simulation approach vs. early versions of Bridge Baron etc., which used planning techniques. I think if you try to build a new rules-based cardplay program from scratch, you will likely just spend many, many years and still end up with a worse card player than the current GIB. More likely to succeed is some AI expert who can understand the GIB code, then extend it along the lines Ginsberg was thinking of, to fix the current flaws and take advantage of today's computers, with more processors/cores/memory. This is just my opinion; maybe I am wrong. But I bet on simulation, not rules-based cardplay, given what I know.

Thanks, Stephen. Life is short, and if I had access to a good random-simulation play engine I would probably follow your advice. I don't, so I will continue with my attempt to perfect a rule-based play engine. If I find that I cannot avoid sheer judgment plays, I will probably have the program peek, at least pro tem. I have no desire to endure the drudgery of reproducing a random simulation. At this stage of my development I prefer to branch out on a new path rather than follow a well-worn trail.
bluecalm Posted July 27, 2012

> ...strong, yes, in the same way that GIB is strong now -- and its cardplay playing with itself IS very strong -- but if your goal is to avoid the plays that GIB doesn't understand, you don't get there by sticking one ply of single-dummy on top of a double-dummy analysis.

My understanding from barmar's posts is that GIB doesn't even try this one-ply search (due to the computational power limitations of decade-old computers). It also doesn't understand basic signalling. My intuition is that this could be fixed within a simulation-based framework.

> Single-dummy analysis of a random whole bridge hand is a long, long way away.

Solving it is a long way away and maybe not possible ever. Playing better than top humans is much easier, though. Humans suck. Even at the top level they make numerous blunders in card play which would never be made by a fresh/rested/unpressured advanced player. A computer player doesn't need to be even close to solving anything to beat that, imo.

> I also think it is arrogant to think that a small group of you will, in your spare time, in a few short years, do better than Ginsberg, or people like Hans Kuijf/Yves Costel and their Jack/Wbridge5 programs.

No idea about Jack, but if Ginsberg stopped GIB development 10 years ago, the chances are he missed many solutions/improvements because his intuition about what is computationally feasible was developed at a time when hardware was 1000x slower than today. I appreciate all the knowledgeable people posting in this thread. I accept that my intuition on this one might be completely off, so I will shut up and maybe code something.
bluecalm Posted July 27, 2012

Btw, can current state-of-the-art bridge programs solve all BM2000 problems, assuming we tell them what hands are possible from the bidding?
Stephen Tu Posted July 27, 2012

> But surely there's a middle path where the computer AI takes into account that the opponents have incomplete information, or at least prunes impossible hands from the double-dummy sims based on previous plays. No?

I don't see how that is a "middle path". It's just even more simulation, but with the opponents modelled as running single-dummy rather than double-dummy engines. These are the things barmar described above that GIB could do, and they can get super computationally intensive. I don't see how that relates to a "rule-based" approach.

> Btw, can current state-of-the-art bridge programs solve all BM2000 problems, assuming we tell them what hands are possible from the bidding?

The paper linked above claims 163/180 for 2001 GIB on BM2000, if given the benefit of the doubt on a dozen or so hands out of the 180 where he felt that GIB's line wasn't clearly worse than BM's recommended line, or might be better. See the "Gibson vs. Bridge Master" posts on rec.games.bridge for discussion of the hands in question. The initial version of GIB, without the single-dummy engine, scored 100/180. I don't know about other programs; I haven't seen people post test results.
P_Marlowe Posted July 27, 2012

Hi,

I have toyed with such an idea, but my energy level is usually quite low. The whole thing may become a bit more interesting if you: #1 use learning methods like reinforcement learning to train the player.

The difference between chess and bridge is #1 the random element and #2 the amount of information that is available to all participants. The rule-based approach works because you have full information in a non-random environment. The reinforcement approach at least worked for backgammon, ... a random game with full information.

Final comment: there exists an open-source double-dummy solver: http://privat.bahnhof.se/wb758135/

With kind regards,
Marlowe
antonylee Posted July 27, 2012

> So, for example, if there is a rule in constructive bidding: "with 12-14 hcp, if it goes 1D - 1S we raise to 2S with 4 spades", then it's instantly not interesting because there is nothing new we can learn from that. On the other hand, if simulations or a similar process show the bid being effective, then that's something - even if only confirmation in this somewhat obvious case. I want my program to answer questions like: "is it better to bash 3NT here or to go via Stayman and try to discover a 4-4 major fit", or "if I bid 3NT here, how often will I make in real play", or "what is the difference in EV between preempting to 3S and 4S here". Once it's rule-based it will just tell me what the common wisdom is, which - especially given that you are proposing that many people contribute to the knowledge base - is about useless in my mind.

I think there are a few interesting questions here.

The first example can be seen as: given an auction that starts 1D-1S, and assuming that opener has 4 spades (let's forget about 3-card raises :-)), with what hands should he bid 2S? 3S? 4S?

We can make this question even simpler. Assume that opener opens 1N and that responder has a balanced hand without a 4-card major (or responder has any hand, but we only allow NT play -- allowing suit play only obscures the essential point). Essentially, there are 3 calls possible: pass, 2N and 3N (let's forget about slams too). An "invite strategy" is basically a partition of all possible hands for responder into 3 subsets, RESP={pass, 2N, 3N}, and a partition of all possible hands for opener into 2 subsets, OPEN={accept invite, reject invite}.

How do we evaluate how good a strategy is? One possibility is to make the program play against itself, or against other programs. However, this is not good, because we would then try to exploit each program's peculiar weaknesses and bugs (say that my opponent, for some reason, misdefends whenever he doesn't have the C2; then I'll bid much more aggressively whenever I have it) -- this is pointed out, in the context of bidding, by Ginsberg in his paper linked above. An improvement would be to score the final contract against the par DD contract (again, only allowing NT contracts). An even better refinement would be to score against the par DD contract but with single-dummy opening leads -- blasting to 3N compared to going through 2N is well known to make it harder for the defense to find the right opening lead.

Anyways, even with DD scoring, is it possible for simulations to give us a better hint at what the RESP and OPEN sets look like? Most of us use a description that resembles: RESP={pass: <8HCP, 2N: =8HCP, 3N: >8HCP} and OPEN={accept: >=17HCP, reject: <17HCP}, with some judgment in between... but better descriptions should exist.
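As a rough illustration of the DD-scoring evaluation sketched above, here is a minimal outline. The HCP thresholds, the total-point scoring (rather than IMPs or matchpoints), and the `strategy_deals` input format are all simplifying assumptions; the constrained dealer and double-dummy solver that would produce those tuples are not shown.

```python
def responder_call(resp_hcp):             # the RESP partition, by HCP only
    if resp_hcp < 8:
        return 'pass'
    if resp_hcp == 8:
        return '2N'
    return '3N'

def opener_accepts(open_hcp):             # the OPEN partition
    return open_hcp >= 17

def final_level(open_hcp, resp_hcp):
    call = responder_call(resp_hcp)
    if call == 'pass':
        return 1
    if call == '3N':
        return 3
    return 3 if opener_accepts(open_hcp) else 2

def nt_score(level, dd_tricks, vul=False):
    need = 6 + level
    if dd_tricks < need:                              # set: 50/100 per undertrick
        return -(need - dd_tricks) * (100 if vul else 50)
    trick_pts = 40 + 30 * (level - 1)                 # NT trick score
    bonus = (500 if vul else 300) if trick_pts >= 100 else 50   # game/partscore bonus
    return trick_pts + 30 * (dd_tricks - need) + bonus

def evaluate(strategy_deals):
    """strategy_deals: iterable of (open_hcp, resp_hcp, dd_nt_tricks) tuples,
    produced elsewhere by a constrained dealer plus a double-dummy solver."""
    scores = [nt_score(final_level(o, r), t) for o, r, t in strategy_deals]
    return sum(scores) / len(scores)

# Example: a 16-count opener facing an 8-count, with 9 DD tricks available:
# responder invites, opener declines, and 2N+1 scores 70 + 30 + 50 = 150.
print(nt_score(final_level(16, 8), 9))    # 150
```

Comparing two candidate RESP/OPEN partitions then reduces to running `evaluate` on the same set of sampled deals and comparing the averages.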
barmar Posted July 27, 2012

> BTW, GIB does do single-dummy reasoning after a few tricks, as declarer only.

Note that this is one of the significant differences between the "Basic" and "Advanced" robots that you can rent from BBO. The Basic robots have GIBson disabled, so everything is done using DD simulations, and they have less thinking time than the Advanced robots.