dosxtres Posted July 3, 2007

I think the implementation of a rating system is a very big problem. A good rating system is almost impossible. It depends where you play, who you play with, what the field is... and more. World class players want to launch games easily with the best players, but maybe they don't know all the others. Experts want to play with experts and world class. Advanced want to play with advanced and experts. Intermediates want to play with advanced. Beginners want to... That is how they can prove that they are learning, and how they enjoy the game; that is how things have always been done, in all aspects of life. Some must learn from others who know more. Anyway, if something could be done, I think it could be something that anyone can make visible in his profile: the clubs you have joined, and a status managed by each club's staff. Say, the Expert Mountain club. Someone asks to join; at the end of the trial period, the Expert Mountain club rejects or accepts him, and gives him, say, expert status (from this club's point of view). So you can show people: Expert Mountain Club, skill level: Expert. The rest is up to you; you can trust some clubs or not, but you have a clue. And it is entirely up to each player whether to enable this status.
helene_t Posted July 4, 2007

I wonder what sick soul came up with the idea of computing individual ratings from online selected-partner-game results. One can argue about the meaningfulness of such a scheme, but the adverse social implications must be obvious to everyone, even in advance; all the more now that we have learned from the experience of other sites. I would have more sympathy for individual ratings based on one-human-plus-7-GIBs team matches.
nige1 (thread author) Posted July 5, 2007

Quoting helene_t: "I wonder what sick soul came up with the idea of computing individual ratings from online selected-partner-game results..."

Most games are competitive by nature. Bridge certainly is. Unsurprisingly, at face-to-face bridge, the various master-point and gold-point rating schemes are among the most popular facilities provided by National Bridge Organizations. On-line bridge allows more accurate computation of ratings, including quite accurate estimates of current form, if you play enough. I see little harm in giving players feedback of this nature if they opt in. The site can reveal such information only to individuals who are interested; nobody else need be nauseated.
Al_U_Card Posted July 5, 2007

If you go to myhands, type in a player's nick and ask for the last month's results, you get a good indication of his performance in terms of average IMPs and average matchpoint percentage over all the hands played. This is an indicator, and might save the hand or two that you usually need to "spot the loony" and excuse yourself from the table. Would it lead people to attempt to "pump up" their numbers? Remains to be seen.
hrothgar Posted July 5, 2007

Quoting nige1: "On-line bridge allows more accurate computation of ratings, including quite accurate estimates of current form, if you play enough."

Developing an accurate rating system is far from trivial. I suspect that a fair amount of engineering effort would be necessary to devise an accurate system, and this ignores the enormous ongoing effort that would be required to administer one. As I've noted in the past, I don't see any reasonable way to develop a rating system that is both

1. Accurate
2. Understandable by laymen

I recall all the bullshit necessary to try to explain the implementation of the Lehman system to average players. There was a never-ending queue of folks trying to understand how the ratings were calculated, arguing about implementation details, and trying to explain how the system was flawed because they're really much better than their score suggests... Please note: this cropped up all the time with the Lehman system, which used a very simple algorithm. Personally, I suspect that an accurate rating is going to need a Kalman filter or some similar approach taken from signal processing. You are NEVER going to be able to explain that implementation to the average player. Accordingly, the political issues will be much worse.

Personally, I'd prefer that Fred and Uday focused on more serious issues. Equally significant, it's far from clear whether Fred and Uday have the technical expertise to address this type of project. Here's my suggestion: Bridge Browser provides all the raw data that folks would need to develop and test a rating system. If you think that you can develop an accurate system, go out and do so. Demonstrate that your system has good predictive power. Once you're done, come back and tell us about it. Folks can then debate whether or not it should be integrated into BBO. Conversely, if you aren't willing or able to do the necessary work, then stop talking about what can be done... (BTW, I'll make my usual suggestion that you will probably have better luck if you initially focus on a system that accurately rates partnerships rather than individual players.)
awm Posted July 5, 2007

Trying to rate individuals based on information about the performance of pairs seems to be a hard mathematical problem. It's made especially difficult because pairs can be more or less than the sum of their parts (depending on partnership experience and agreements). However, on BBO we really have more information than just the net results of contracts. We could analyze each card played. In particular, I like the idea of computing a double-dummy error rate. We could define:

(1) A play is a double-dummy error if it reduces the double-dummy trick total for the person making the play. It's easy to notice these when kibitzing using GIB.
(2) A play is a double-dummy contract-costing error if it changes the result of the hand from making to down (for declarer) or vice versa (for defense).

In principle we could compute these error rates. This has a number of nice features; in particular, the error rates for declarer are mostly independent of partner. There are obviously a few things to watch out for:

(1) Everyone will have non-zero error rates, because double-dummy play often requires anti-percentage lines.
(2) Safety plays will often score as a double-dummy error, but not as a contract-costing error. Especially at IMP scoring it may be good to be aware of this. An alternative might be to weight the errors by IMP cost (so losing an overtrick costs only 1, but losing the contract costs 10 or whatever for a game).
(3) One can avoid contract-costing errors by constantly underbidding, so every contract rates to make +1 or +2.
(4) On defense, there will be impact from partner's signals or failure to signal, to the degree that some mistakes will not be a defender's "fault."

However, despite the problems, this seems to me like a better way to measure skill than trying to look simply at end results.
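The two definitions above are mechanical enough to sketch in code. Here is a minimal Python illustration, assuming a double-dummy solver (GIB-like, not shown) has already recorded declarer's double-dummy trick total before and after every card. The data layout and function names are my own invention, not anything BBO actually provides:

```python
from collections import defaultdict

DECLARER_SIDE = {"N", "S"}   # assumption: declarer's side is NS in these examples

def classify_play(seat, dd_before, dd_after, target):
    """One card. dd_before/dd_after are declarer's double-dummy trick totals
    just before and just after the card; target is the tricks needed to make.
    Returns (dd_error, contract_costing_error) per the definitions above."""
    if seat in DECLARER_SIDE:
        error = dd_after < dd_before                       # gave up a DD trick
        costing = dd_before >= target and dd_after < target  # making -> down
    else:
        error = dd_after > dd_before                       # handed declarer a trick
        costing = dd_before < target and dd_after >= target  # down -> making
    return error, error and costing

def error_rates(boards):
    """boards: list of (target, plays); each play is (seat, dd_before, dd_after).
    Returns {seat: (dd_errors_per_board, costing_errors_per_board)}."""
    counts = defaultdict(lambda: [0, 0])
    for target, plays in boards:
        for seat, before, after in plays:
            err, cost = classify_play(seat, before, after, target)
            counts[seat][0] += err
            counts[seat][1] += cost
    n = len(boards)
    return {seat: (e / n, c / n) for seat, (e, c) in counts.items()}
```

With per-card double-dummy data the per-seat averages fall straight out; note that this version still charges a safety play as an "error", which is exactly the caveat raised in point (2).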
hotShot Posted July 5, 2007

Quoting awm: "I like the idea of computing a double-dummy error rate..."

I have done that and posted some results a while ago, but the system is limited to play and ignores bidding completely. It is less significant than one might think. World class players and GIB produce error rates of about 0.4 errors per board. Intermediates make about 0.8 and beginners reach 0.9+. I had a few pickup partners with rates larger than 1, but I did not play enough boards with them for the results to be statistically meaningful.
junyi_zhu Posted July 6, 2007

Quoting hotShot: "World class players and GIB produce error rates of about 0.4 errors per board."

World class and GIB make 0.4 errors per hand in card play? That's just an insult to world class players, hahaha. GIB is generally weaker than intermediate players, not only in bidding but also in card play. It may make some tough double-dummy contracts; however, it blows way more tricks in simple situations, and it blows way more tricks in redoubled contracts than you can imagine.
junyi_zhu Posted July 6, 2007

Quoting awm: "Trying to rate individuals based on information about the performance of pairs seems to be a hard mathematical problem..."

If bridge is an art of partnership, it is nonsense to rate an individual's game (except in individual tournaments). If you rate partnership strength, it's extremely simple, about the same as chess.
BillHiggin Posted July 6, 2007

Quoting junyi_zhu: "If you rate partnership strength, it's extremely simple, about the same as chess."

Rating partnerships in bridge is much more complicated and problematic than rating chess players. Bridge results (the basis for a rating) are not based on your partnership's performance compared to the performance of the opposing partnership at the same table. Rather, it is your partnership's performance compared to the performance of the set of other partnerships holding the same cards, IN COMBINATION with the performance of the other partnership at your table compared to the set of other partnerships sitting their direction on the same deal. The "set of other partnerships" can be conveniently thought of as the corresponding field (and is a single partnership in pure team games), but bridge rating still involves 8 (virtual) players or 4 (virtual) partnerships, rather than the simple 2 opponents of chess.

I have seen an attempt to adapt Elo ratings to bridge. The result was that the ratings did not behave similarly to chess ratings, and more importantly the perception of those subject to the ratings was not positive. Chess ratings are transparent (you know how they are calculated and can verify the calculations) and, most importantly, they are perceived as fair and accurate. The perception issue is the toughest nut to crack. If those being rated have any perception that the rating is inaccurate or overly subject to manipulation, then the ratings become a source of trouble and complaints.
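For concreteness, here is roughly what an "Elo adapted to partnerships" scheme like the one Bill describes might look like. This is a sketch under my own assumptions (the K-factor value, and crudely collapsing the whole field plus the pair at your own table into one average rating); it is not the system Bill saw, and his point stands that the hard part is perception, not arithmetic:

```python
K = 12   # assumption: update speed; real rating systems tune this empirically

def expected(r_pair, r_field):
    """Standard Elo expectation: chance of outscoring opposition of strength r_field."""
    return 1 / (1 + 10 ** ((r_field - r_pair) / 400))

def update(r_pair, r_field, mp_fraction):
    """One board at matchpoints. mp_fraction in [0, 1] is the pair's matchpoint
    score against the field holding the same cards; r_field averages the ratings
    of that field AND the other partnership at your own table (the
    4-virtual-partnership point above, collapsed to a single number)."""
    return r_pair + K * (mp_fraction - expected(r_pair, r_field))
```

Even this toy version shows the trouble: a 60% board moves your rating more against a strong field than a weak one, but explaining why requires the logistic curve, which is exactly the transparency problem raised earlier in the thread.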
bassaidai Posted July 6, 2007

Quoting junyi_zhu: "GIB is generally weaker than intermediate players"

I don't think so: how many boards did you play against/with GIB? How much thinking time did you allow GIB?
mikeh Posted July 6, 2007

Any attempt to rate players' card play (defence or declarer) by a double-dummy analysis is not only silly, but completely misses the point. While true experts will sometimes adopt the double-dummy line, that will actually constitute an error some of the time. The beauty of the game lies, in part, in the fact that the best single-dummy line doesn't always work. This is obvious: holding AQJxx opposite 109xx, with no clues from the bidding, the correct single-dummy line is to finesse; but some of the time the correct double-dummy line is to play the A, dropping the stiff K offside. No expert would do this under anything approaching normal circumstances, and so taking a hook losing to a stiff K will be recorded as a double-dummy error. Yet if a ruff threatened, and we could afford to lose a trump trick (with that suit as trumps) but not a trump trick and a ruff, we may well play the A rather than finesse. When the Kx(x)(x) is onside, we have committed a double-dummy error, although, single-dummy, we made the right play. Furthermore, at the table there are real-life inferences available to an expert declarer or defender: inferences from bids or passes, based on one's assessment of the level of aggression of the opps, inferences based on tiny or marked breaks in tempo, etc.

I am not at all surprised that the double-dummy analysis shows 0.4 errors per board for world class... but I am willing to bet it is down to about 0.1 single-dummy. At the same time, the few times I have kibitzed friends who are indifferent players, or looked at samples of boards played many times on BBO, suggest to me that the average player makes more than 1 mistake per board... but that many of these mistakes don't cost, and would therefore not be caught by a computer analysis. A common type of error is the order in which we play the suits or the cards within a suit: timing issues. And many of those end up not costing because the correct play caters to low-probability events. And so on. Not to mention the resemblance of bad bridge to watching tennis, as the errors go flying back and forth, often cancelling each other out B)
nige1 Posted July 6, 2007

Quoting junyi_zhu: "World class and GIB make 0.4 errors per hand in card play? That's just an insult to world class players, hahaha."

I reckon that if you make fewer than 2 mistakes per board (bidding and play) then you are world class :) :) :) You may even be a world champion B) B) B) I don't mean double-dummy errors or even sophisticated errors; I mean ordinary face-to-face practical errors that most players would accept if simply explained, like daisy-picking in the bidding, or failing to make a play that could help prevent partner misdefending. At bridge, few errors cost. Some actually gain, because of a lucky lie of the cards or a compensating opponent error. For a season, our team conducted a detailed post-mortem after every match. Collectively we never chucked fewer than 100 IMPs, even in 24- and 32-board matches :) Some of these matches we won, in national competition. On a few occasions, opponents resigned with boards to play. In 1 or 2 matches, compensating errors meant that the score-card deceptively showed us conceding less than 2 IMPs per board, although, of course, we should have lost by much more :( :( :(
junyi_zhu Posted July 8, 2007

Quoting bassaidai: "I don't think so: how many boards did you play against/with GIB? How much thinking time did you allow GIB?"

A team of four with about 500 ACBL masterpoints would beat a team of four GIBs without much difficulty in a 64-board match, IMO, if both played SAYC, under BBO's current slowest GIB setting. GIB has way more bugs than you can ever imagine. And it was certainly a wrong judgement for Zia to give up his famous bet when GIB was invented.
junyi_zhu Posted July 8, 2007

Quoting nige1: "I reckon that if you make fewer than 2 mistakes per board (bidding and play) then you are world class."

I have watched and commented on enough top-level matches to claim that it's not even close to 0.4 errors per board for world class players in top-level competition. Most hands in bridge are routine, and there is a huge gap between intermediate and world class players. Intermediate players blow at least one trick per hand (some may not cost, because errors can cancel out). For world class players I agree with mikeh: it's fewer than 0.1 errors per board.
DKJ Posted July 8, 2007

It is very obvious that the current practice of players describing their own level of bridge skill is seriously detrimental to playing on BBO; there is simply no dearth of deluded players and 'jokers'! A more objective system must be introduced as soon as possible, easily done with a BBO point system, or a rating system where partners rate each other in actual play... or a combination of both.
awm Posted July 8, 2007

There is obviously a difference between single-dummy errors and double-dummy errors. I agree that top-flight players make quite few single-dummy errors (other than possibly on the opening lead). However, this doesn't necessarily invalidate the idea of measuring double-dummy errors. If you routinely take the best single-dummy line, sometimes you will do the "wrong thing" double-dummy, but more often than not you will do the "right thing." In the long run this will tend to average out, and people who are taking the best single-dummy line will end up with fewer (but nonzero) "errors per board." Obviously no one (except a cheater) will end up with zero "errors per board." There are a number of nice aspects to this:

(1) Measuring single-dummy is hard. Measuring double-dummy is relatively much easier.
(2) Some players are known for table feel, or for reading the opponents' leads or bidding. This enables them to find "anti-percentage" lines that work. If they can really do this, that should be rewarded by the rating system rather than penalized because "they didn't find the best single-dummy line."
(3) It's nice that you can actually rate different aspects of people's play (declarer play, opening lead, defense, even distinguishing between trump contracts and notrump, or between partials and slams).

There is an issue with safety plays, since taking a safety play is usually a "double-dummy mistake" but actually maximizes the expected score. A simple way to fix this is to count mistakes based on the change in result, so that a safety-play "mistake" just loses one, but a drop-the-contract-on-the-floor "mistake" loses a lot more.
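The "weight by the change in result" fix is easy to make concrete with the standard WBF IMP scale. The scale itself is standard; wiring it to hypothetical double-dummy scores is my own sketch:

```python
# Lower bound (in total points) of the score swing worth each successive IMP,
# per the standard WBF scale: a 20-40 point swing = 1 IMP, 50-80 = 2, ... 4000+ = 24.
IMP_BOUNDS = [20, 50, 90, 130, 170, 220, 270, 320, 370, 430, 500, 600,
              750, 900, 1100, 1300, 1500, 1750, 2000, 2250, 2500, 3000, 3500, 4000]

def imps(point_swing):
    """Convert a raw score difference to IMPs by counting thresholds reached."""
    swing = abs(point_swing)
    return sum(swing >= bound for bound in IMP_BOUNDS)

def error_cost(score_before, score_after):
    """Weight of one error: the IMP swing between the double-dummy score
    available before the play and the one available after it."""
    return imps(score_before - score_after)
```

So a blown overtrick (+650 becomes +620) costs 1 IMP, while dropping a vulnerable game (+620 becomes -100) costs 12; a safety play then shows up as a string of cheap 1-IMP "errors" rather than being counted the same as a contract-costing one.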
helene_t Posted July 8, 2007

What kind of world class players make those 0.4 errors? The self-rated ones or the real ones? It wouldn't surprise me too much if GIB is close to world class level. Jack is on par with the Dutch internationals against whom it has played team matches, and my personal impression is that Jack and GIB are at a similar level. We have had this discussion in other threads as well; I think people tend to under-estimate GIB.

Anyway, I don't see how one can compute error rates in an automatic way. You never know why a good player chooses an anti-percentage line. Maybe it was actually a percentage line given the (misleading?) information he had about the opps' carding methods, BITs, etc. Maybe he was trying a deceptive line. Maybe he was behaving ethically, playing contrary to what was suggested by UI he received. Maybe he was speculating about what might happen at the other table.
hotShot Posted July 8, 2007

My program scans all lin-files in a given directory to perform the analysis. This way I could use vugraph files and files from the myhands archives of well-known world class players. To get a valid score, more than 400 boards are needed (the player will be dummy in about 100 of them). Most GIB errors are lead errors or happen during the first tricks. GIB errors that happen later are usually percentage plays that unfortunately don't work, while a simpler line would not have failed.
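For anyone curious what "scans all lin-files" involves: BBO .lin files are pipe-delimited two-character tags, with each played card recorded as a pc|<card>| token. A rough Python sketch of just the extraction step, under that assumption about the format (the error analysis itself would still need a double-dummy solver on top):

```python
import re
from pathlib import Path

# Each played card appears as a pc|Xy| token, e.g. pc|SK| for the spade king.
PC_TOKEN = re.compile(r"pc\|([SHDCshdc][2-9TJQKAtjqka])\|")

def plays_in_lin(text):
    """Return the sequence of cards played in one .lin file, in order."""
    return [m.group(1).upper() for m in PC_TOKEN.finditer(text)]

def scan_directory(root):
    """Yield (filename, plays) for every .lin file under root,
    e.g. downloaded vugraph archives or myhands exports."""
    for path in Path(root).rglob("*.lin"):
        yield path.name, plays_in_lin(path.read_text(errors="ignore"))
```

From there, feeding each board's play sequence card by card into a double-dummy solver gives the before/after trick totals that the error definitions earlier in the thread require.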
junyi_zhu Posted July 8, 2007

Quoting helene_t: "It wouldn't surprise me too much if GIB is close to world class level... I think people tend to under-estimate GIB."

You were talking about pairs, and the top Dutch players might not have been familiar with Jack. Over a long team match, any computer program simply has no chance against a top-flight human team, because those programs all have numerous bugs and humans can discover their weaknesses and take full advantage of their bugs. The declarer play of GIB is slightly better than that of intermediate human players (programs don't usually understand safety plays, because those are rare events and hard to produce in a limited number of random deals; and they don't understand the opponents' bidding, because it is extremely hard to set the right constraints, which would require a big advance in AI). The bidding of GIB is way worse than that of intermediate human players, which means GIBs tend to make very costly mistakes in bidding, and we all know how important bidding is in bridge. Also, the truth is not that most people underrate computers; the situation is that the programmers often overrate their programs, IMO. I am only trying to be objective after playing thousands of hands with GIB, and few in this forum may have played as many hands as I have.
nige1 Posted July 8, 2007

Quoting hotShot: "My program scans all lin-files in a given directory to perform the analysis... To get a valid score, more than 400 boards are needed (the player will be dummy in about 100 of them)."

Fascinating stuff, hotShot! Thank you! I am amazed; I find it hard to believe that the error rate is so low! Please tell us how you count a player's errors on one board. In particular, what is the range and variance for different categories of player?

I reckon that double-dummy analysis is rather cruel, even though it can't catch all the mistakes picked up by single-dummy analysis, for example signalling errors, failures to help partner, and failures to give an opponent a guess. Double-dummy criteria may be more objective than single-dummy, but I would expect them to highlight a similar number of errors, although, obviously, a "mistake" at double-dummy is sometimes the correct play at single-dummy. Thus failure to drop a singleton king offside is a "mistake" at double-dummy. Also, scoring considerations affect correct single-dummy play. Often declarer makes the "mistake" of adopting a safer single-dummy line when a riskier double-dummy line would have garnered more overtricks. Similarly a defender, at teams, often sacrifices potential extra undertricks to achieve his prime target of defeating the contract. In my experience, players often make several "mistakes" on a single board. For instance, a defender makes a trick-costing "mistake", but declarer promptly makes a compensating "mistake". Sometimes this happens again and again on the same board. Occasionally, in spite of this comedy of errors, the end result seems quite normal!
Codo Posted July 9, 2007

On the rating stuff: we have had these discussions before, and I have nothing to add. On GIB: my wife started to play just some months ago, and she decided that she would be a plague to any human opponent, so we played the GIBs. I think the GIBs are far better than the intermediate players around here, and better than normal club players, but they are light-years away from world class. I have big respect for the Dutch internationals, so I guess that Jack runs on a much better engine and with other settings than the GIB that is working here. As far as I know, people who own GIB have found that it plays better in the normal retail program than it plays here.
3for3 Posted July 19, 2007

Some thoughts about this thread.

1. When calculating errors, it is OK to say that failing to drop a stiff king missing 3 in the suit is an error; 'everyone' makes that error, and we are comparing rates, so it is not a big deal.

2. A world champion once told me that if you make only 2 mistakes in a session, you have played very well. So I doubt that world class players make fewer than 0.1 mistakes per board.

3. Everyone so far is missing a big point: bridge is at least half bidding. Any rating system that tries to count errors needs to look at bidding as well, and that would be an almost impossible task. For example, we would call it an error to bid a slam on 2 finesses. But what if they were through an opening bidder? Or into a preemptor's hand? What about auctions that go, say, 3S-6S, where the leader has to guess the suit? Impossible to evaluate. Is it an error to preempt and catch partner with the death 4450 with a void in your suit? Of course not.

4. There are plenty of mistakes that do not appear as mistakes, as many have pointed out. Failing to cater to an offside stiff queen is an easy example. Sloppy signalling is another. In the bidding, making a bid that partner doesn't understand is yet another.

Danny
Gerben42 Posted July 19, 2007

From what I've seen, Jack would massacre GIB in the bidding. Computer card-play theory, however, has not improved much since GIB, other than through increases in computing power. GIB was the first of a new generation of computer bridge programs, and at the time it was miles ahead of the rest. The Jack team realized that the biggest gain was still in the auction (although I'm sure Jack is also better in the card play than GIB, simply because it was based on GIB's approach and some improvements have been made since then).

From my personal experience: playing the money bridge tourneys, I end up positive long-term playing total points against 3 GIBs. This must mean my total playing strength is higher than GIB's (at least the version implemented here). On the other hand, I suspect I would lose long-term playing total points with 3 Jacks.
jtfanclub Posted July 19, 2007

Quoting hotShot: "World class players and GIB produce error rates of about 0.4 errors per board. Intermediates make about 0.8 and beginners reach 0.9+."

Wow, I'd estimate I make 2 errors per board in signalling alone. Telling partner what he doesn't need to know (which in theory might tell declarer what she needs to know) and not telling partner what to save for the endgame are the big two. Of course, most of the time these don't make any difference: partner can figure out what to save on his own, and even if declarer is paying attention the information isn't of any use to her. How do you measure errors?