
Real Experts?



Adam,

 

I like your proposed definitions a lot better than the current BBO ones. I've always felt like the current ones were rather vague, and especially left too little distinction between "expert" & "world class", since to me the people who "have success in national tournaments", at least in the U.S., are the same usual suspects who I would consider "world class".

Completely agree here, and I hope BBO changes its definitions to something worded like this.


A lot of the systems proposed depend too much on what people "have done." My concern is situations like this:

 

(1) Player A lives in a small country with few bridge players. He made his country's national junior team when he was in his twenties, basically because he was under 26 and could follow suit. They proceeded to get thrashed in the junior championships. Player A has never done much in open events. Player B lives in a large country with many bridge players. He didn't really get involved with bridge until he was in college, leaving him too far behind to make his national junior team. He has a full-time job (not bridge), but has played in a dozen national-level events in his country, finishing in the top ten several times but never winning. Who's likely to be a better player, A or B? Guess who gets a star on BBO and is rated as "world class" by most people's rating systems? Who might not make "expert" by Frances' system? Hmm....

 

(2) Player A has been playing bridge since he was in college, and has been retired for the last ten years. He's played in over a hundred national events in his country, with one win and a half dozen other top ten finishes. Player B has been playing bridge for five years while working full time. He's played in ten national events in his country, with no wins but four top ten finishes and always in the top half of the field. Who's likely to be a better player, A or B? Guess who rates as "expert" according to most people's systems? Guess who gets a star?

 

(3) Player A is a billionaire. He has played in a half-dozen top-flight national events, always on a team of six with the five best players his money can buy. They managed to make the finals of a major event once, but never won. Player B is a graduate student. He has played in a half-dozen top-flight national events, always on a team of six with five of his buddies from college. They managed to make the finals of a major event once, but never won. Who's probably a better player?

 

Tricky, isn't it? :huh:


:huh:

 

 

Hmmm

1) What's the goal of this BBO rating system, if any?

2) How do you measure the success in reaching that goal?

 

Or are goals and measuring success towards that goal not important?

Note I frame all of this in terms of our goal and success towards that goal... not, repeat not, how accurate the ratings are.

 

If you cannot even agree on the goal, then never mind.

 

 

:)


I would rate expertise by measuring the average number of errors per board. In 20 boards, for instance:

 

<1 error: master (worthy of national team)

1: expert

2-3: adv

4-6: int

7+: newbie

I think most newbies probably make about 5-6 noticeable mistakes per HAND. Most experts, probably 4 or 5 in 20 boards, on a close detailed analysis by a group of other experts.

Using a program that analyzes lin files, I found that (looking at cardplay only) GIB (using the money bridge settings) makes one error in 3 boards => roughly 7 in 20 boards.

 

Using lin files of vugraph events I watched, I can tell that WC and expert players can reach a level of 1 error in 5 boards => 4 in 20 boards.

 

Intermediate BIL members I played with are in the area of 14-18 errors in 20 boards.
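As a rough illustration of how such a scale might be applied once per-board error counts are available from some analyzer, here is a minimal Python sketch. The cutoffs between bands are my own guesses, since the scale above leaves gaps (1 vs. 2-3, 3 vs. 4-6, 6 vs. 7+), and the error counting itself is assumed to happen elsewhere.

```python
# Minimal sketch: map an average error rate onto the scale proposed above.
# Per-board error counts are assumed to come from elsewhere (e.g. a
# lin-file analyzer); the exact cutoffs between bands are guesses, since
# the original scale leaves gaps between its ranges.

def errors_per_20(per_board_errors):
    """Average number of errors, normalised to a 20-board session."""
    if not per_board_errors:
        raise ValueError("need at least one board")
    return 20.0 * sum(per_board_errors) / len(per_board_errors)

def rating(e20):
    if e20 < 1:
        return "master (worthy of national team)"
    elif e20 < 2:
        return "expert"
    elif e20 < 4:
        return "advanced"
    elif e20 < 7:
        return "intermediate"
    else:
        return "newbie"

# e.g. two errors over ten boards averages 4 per 20 boards:
# rating(errors_per_20([0, 1, 0, 0, 0, 0, 0, 1, 0, 0]))  -> "intermediate"
```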


I would rate expertise by measuring the average number of errors per board. [...]

How does your program define an error and why is that the correct definition compared to other commonly accepted ones?

 

 

Again, if we cannot even agree on what an error is, why bother?

Are all errors weighted equally, and why?

Can bidding or lack of bidding induce a cardplay error? If so, how do you define and weight it?


Errors are hard to define though. The tricky part is:

 

(1) Some "errors" on a double-dummy basis are actually percentage plays. Others are pretty much just guesses. This causes things which are actually not errors to show up as errors. Similarly sometimes low percentage actions work out, and therefore appear to be non-errors. Admittedly this type of thing may even out over the very long term.

 

(2) Some errors are things like not signaling clearly or correctly, or not making a defense easy for partner. In these types of situations no "double-dummy wrong play" was made, or perhaps one was made by partner. In fact, I find that the majority of errors made by top-class players are of this category -- they defend in a way which is not (double-dummy) wrong, but their choice of play or signal later puts partner on a guess which he gets wrong.

 

(3) Some errors, such as not finding "mandatory" falsecards, simply do not affect double-dummy results, but still make the opponents' life much easier.

 

(4) Many errors occur in the bidding, where it is much harder to analyze who is at fault or what the right decision might be. There are situations where the right call depends on partnership agreement or the skill level of partner or the opponents.

 

I find that, watching top players, they rarely make absolute bonehead plays that are obviously wrong (okay maybe in the late stages of a long tournament we see a few). On the other hand, there are many subtle "mistakes" of the kinds described above. In contrast, watching beginners and intermediates the absolute bonehead plays come at a rate of a few per session, whereas the subtle mistakes (which often they wouldn't even understand if I tried to explain them) come at a rate more like several per board.

 

I know at one point I started counting the "absolute bonehead plays" I made per session and got very happy when the rate started to look like less than one per session. For a while I felt like "wow I'm good now" until I started realizing the number of subtle mistakes I was racking up was more like one per board... :huh:


Hmmm

1) What's the goal of this BBO rating system, if any?

When I play in the ACBL pair games on BBO, I use the opponents' self-ratings as a guide to the level of play that I should expect.

 

In addition to the self rating, there is also a symbol on each player's profile showing their cumulative master point holding on BBO. That also factors into the equation.

 

Of course, this only applies to those players that I have not encountered before. As I am playing more frequently lately, there are fewer and fewer opponents that I have not played against.

 

There are only a handful of true experts in the ACBL pair games on BBO, and I know them when I see them.


Guys, my idea is simply to rank people according to average mistakes made. The exact amounts are just a gauge; it doesn't matter much whether

 

1-2 = expert

2-4 = adv

 

or

 

10-20 = expert :)

20-40 = adv

As Adam said, how we define an error is important. Moreover, the better one is, the more errors one recognizes. Thus my take is that the idea that an expert makes an average of 1 error per 20 boards says a lot more about the level of expertise of the poster than about the actual error rate of an expert B)

 

wc: 1 error per 20 boards if fresh and rested.

 

recognized: almost all of them

 

expert: 4 - 10 errors per 20 boards, none of them bone-head unless fatigued

recognized: 2-8

 

advanced: 10-20 errors per 20 boards, probably 1 bonehead

recognized: 5-10

 

intermediate: 20-40 errors per 20 boards, 3-6 bonehead

recognized: 5-10

 

beginner: 40-100 errors per 20 boards, 20 of them bonehead.

recognized: 5-20

 

By recognized, I mean errors that the player realizes, then or later, without prompting by others.

 

By errors, I mean plays that are technically lower percentage than an equally available alternative, judged on the information available up to the time of the play, or bids that are, in the context of the methods chosen, demonstrably inferior to an alternative. This includes such things as failing to give partner help on defence, or failing to draw the correct inference from opposition action or inaction, provided that the inference that ought to have been drawn is valid. I do not include percentage actions that fail on the hand, or falling for clever play by the opponents, etc.

 

But it really doesn't matter B) The measure of a good player is the partners and teammates who ask him or her to play. Respect of peers is the only true measure.
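Purely as an illustration, the bands above could be written down roughly as in the following sketch. Where two ranges share an endpoint (10, 20 or 40 errors), the boundary is arbitrarily assigned to the stronger band; that choice is mine, not the poster's.

```python
# Sketch of the bands proposed above (errors per 20 boards). Boundary
# cases (exactly 10, 20 or 40) are assigned to the stronger category;
# that is an arbitrary choice for this sketch.

BANDS = [
    (1,  "world class"),   # ~1 error per 20 boards when fresh and rested
    (10, "expert"),        # 4-10 errors, none bone-headed unless fatigued
    (20, "advanced"),      # 10-20 errors, probably 1 bonehead
    (40, "intermediate"),  # 20-40 errors, 3-6 boneheads
]

def band(errors_per_20):
    for upper, label in BANDS:
        if errors_per_20 <= upper:
            return label
    return "beginner"      # 40-100 errors, ~20 of them boneheads

def recognition_rate(total_errors, recognized_errors):
    """Fraction of errors the player spots, then or later, unprompted."""
    return recognized_errors / total_errors if total_errors else 1.0

# e.g. band(15) -> "advanced"; recognition_rate(15, 7) -> about 0.47,
# in line with the 5-10 recognized out of 10-20 quoted above.
```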


my take is that the idea that an expert makes an average of 1 error per 20 boards says a lot more about the level of expertise of the poster than about the actual error rate of an expert

You know why you'll never be a good player? You waste too much time bickering with others instead of improving your own game. I, on the other hand, won't bother with you. Have a nice day.


You know why you'll never be a good player? You waste too much time bickering with others instead of improving your own game. I, on the other hand, won't bother with you. Have a nice day.

Thanks for the tip B)


How does your program define an error and why is that the correct definition compared to other commonly accepted ones?

 

 

Again, if we cannot even agree on what an error is, why bother?

Are all errors weighted equally, and why?

Can bidding or lack of bidding induce a cardplay error? If so, how do you define and weight it?

These are indeed interesting questions, but I look at it from a practical point of view.

 

I have a list of about 250 players of whom I have watched or played more than 40 boards. I find that world champions and our national champions are at the upper end of this list, while BIL members find their place at the lower end of the list.

 

If I'm interested in a player's true rating, I can get a bunch of boards from myhands and calculate the average error rate. That's good enough for me.


The error rate idea does work a heck of a lot better if you restrict to declarer play. Now things like signaling partner are removed from the equation. You still get a bit of an effect from the opponents (good opponents will find mandatory falsecards and such that make double-dummy play more difficult for example). And of course there is the effect that the right percentage play isn't always the right double-dummy play. But the latter effect should tend to cancel out over a very large number of boards. Obviously it's not right to call this really "error rate" because someone who plays every hand single-dummy perfect will not have a perfect "zero error rate" here. But you'd expect that good players generally take higher percentage lines and therefore produce fewer non-double dummy plays over the long term.
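To make the declarer-only idea concrete, here is a hedged sketch. It assumes access to some double-dummy solver; dd_tricks below is a hypothetical stand-in for whatever solver one actually has, not a real library call.

```python
# Sketch of a declarer-only "error rate": count the plays where declarer's
# chosen card costs at least one trick against double-dummy play.
# dd_tricks(position, card) is a HYPOTHETICAL stand-in for a real
# double-dummy solver: it should return the number of tricks declarer's
# side can still take after `card` is played from `position`.

def declarer_dd_errors(plays, dd_tricks):
    """plays: iterable of (position, chosen_card, legal_cards) tuples,
    one per card played from declarer's or dummy's hand."""
    errors = 0
    for position, chosen_card, legal_cards in plays:
        best = max(dd_tricks(position, card) for card in legal_cards)
        if dd_tricks(position, chosen_card) < best:
            errors += 1  # the chosen card was not double-dummy optimal
    return errors
```

As noted above, a sound single-dummy (percentage) line can still register as an "error" here on any one board; the assumption is only that, over a large sample, stronger declarers accumulate fewer such deviations.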

Well, no rating is perfect, but why not: if you win

1) open wc you are rated wc player

2) open nat you are nat rated player

3) open reg, you are reg rated player

4) etc, etc

 

Yes, this is not perfect, but it seems faster and simpler, you do not need a computer, and it's good enough?

 

I assume the goal is to have a "good enough" rating system for some unknown reason.


Well, no rating is perfect, but why not: if you win

1) open wc you are rated wc player

2) open nat you are nat rated player

3) open reg, you are reg rated player

4) etc, etc

 

Yes, this is not perfect, but it seems faster and simpler, you do not need a computer, and it's good enough?

So you think Zia is not world class?


So you think Zia is not world class?

Seems the rating system is good enough for BBO, if that is the goal. I did say not perfect, just good enough, fast and simple. If you need a computer to tell you what Zia is, fair enough.

 

Keep in mind the goal is not an absolutely accurate rating; it seems to be some unknown goal... I just think this is good enough for that goal.


Seems the rating system is good enough for BBO, if that is the goal. I did say not perfect, just good enough, fast and simple. If you need a computer to tell you what Zia is, fair enough.

It's fast and simple. But I neither know what you mean by 'good enough', nor agree that it is.


It's fast and simple. But I neither know what you mean by 'good enough', nor agree that it is.

Well, since you have not stated what the goal is, how you expect to measure success towards that goal, or why that goal is important, what can I say?

 

Seems better than anything else I have read so far, but if you have a better plan or goal, OK.


A lot of the systems proposed depend too much on what people "have done." [...] Tricky, isn't it?

This gets my vote for post of the month. It really hits home.


B)

 

The good thing about a worldwide bridge hall of fame is that it acknowledges great players who never won a WC, let alone an open WC. Granted, that player may have to wait until, what, 60 or so to be allowed in and get his or her true rating.

 

This lets clients such as Mrs. Meltzer be rated as WC players and others be rated one step higher, Hall of Famer.

 

Are the ratings perfect? No, but they seem good enough.

It just seems a bit unfair to win a few open WC events and not be allowed to call oneself WC on BBO no matter how many cardplay errors the computer says you make.


We can always ask the question: "Why does a rating system matter at all? Who cares?"

 

I think the success of BBO so far indicates that a rating system isn't all that important. We can survive without one, or with a rather inaccurate one. It's not a hugely important issue.

 

However, ratings do matter a little bit. Some reasons:

 

(1) A lot of people would like to see a good rating for themselves, to get an objective measure of how they are playing. Results on boards only go so far, since your results on boards depend an awful lot on who you play with and against.

 

(2) When one is picking potential partners, opponents, and teammates it's good to have some idea of how good people are. Obviously when dealing with people we compete against on a daily or weekly basis, we have some opinion on who's good and who isn't. But these opinions are often very subjective and inaccurate. And they're likely to be even worse on BBO when we frequently play with or against people we've never met face to face. Anyways, the point is that there's some desire to play with/against people of your approximate level, rather than wasting your time playing against people who are much worse (or much better).

 

(3) Some people like to kibitz. It can be fun kibitzing friends, but a lot of time the kibitzers want to see the best game possible. With this in mind, it's nice to be able to figure out who the really good players are so we can watch them.

 

Of course, there are also some negatives to having a rating system. One is that some people are getting worse and would rather not be informed of it (usually age-related issues) or that some people like to believe they are a lot better than they are and ratings would disabuse them of this notion. Another is that having a rating system leads people to care a lot more about their results (i.e. a lot of people would like to have a rating that they feel reflects their skills). This makes it less appealing to play late at night when tired (or drunk) for fear that one's rating will go down. It may create an incentive to cheat. If the rating system is not very accurate, it can also create an incentive not to partner weak players or oppose strong players because this is likely to reduce one's rating (this was a problem with the old OKB Lehman system).

 

In any case, there are several possible approaches to ratings:

 

(1) Do away with ratings entirely. Not too many people want this though, because even simple things like picking who to kibitz become tough.

 

(2) Use self-ratings without any policing of the ratings. This is basically what BBO does now. Some guidelines for how to self-rate (to maintain at least some modicum of accuracy) are probably a good thing. Of course, this leads to constant complaints about people who overrate (or underrate) themselves, and people harping about "I played with an EXPERT and he could hardly follow suit."

 

(3) Base ratings completely on tournament "wins." BBO does a bit of this, granting stars to people who have represented their country internationally or have won a national event. The problems are set out in my previous post -- like masterpoints, this kind of rating favors participation over skill in many cases. Older people who've played for decades find it easier to accumulate a national win or two simply because they've had more chances. Young people who can make junior teams in a country with few young players also have an advantage (they can "represent their country" without having to make a top-notch Bermuda Bowl team). There's also the problem of wealthy sponsors who may be poor players but hire five elite professionals and win some big event.

 

(4) Use self-ratings, combined with some form of subjective ratings by partners and opponents. This way, if someone self-rates in a ridiculous way, other people will hopefully tag them and their rating will come back to the consensus of the community. This is the kind of rating system used by a lot of other online services (e.g. Netflix, Amazon). One nice thing about this is that no one has designed a fool-proof objective rating system for bridge yet, and it removes the complaint that the numerical system is poor or inaccurate. (A toy sketch of how such blending might work follows this list.)

 

(5) Use some objective system, but make the result only visible to the person being rated. This prevents some of the social "abuses" where people won't play with other people because their rating is too low. But it also prevents some of the comparable social benefits.

 

(6) Use some objective system, make the result globally viewable. This is basically the OKB approach. It had a lot of negative social effects (one can argue that it would've worked better if the rating system were more accurate -- designing a good rating system for a partnership game like bridge is a tough mathematical problem which hasn't yet been adequately solved). Fred has decided not to do this on BBO, I think for good reason.
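To illustrate option (4) only (a toy example of my own, not anything BBO has implemented or proposed): start from the self-rating and let peer ratings pull it toward the community consensus as they accumulate.

```python
# Toy sketch of option (4): blend a self-rating with peer ratings, with
# the peers carrying more weight as their number grows. The 1-5 scale
# (1 = beginner ... 5 = world class) and the constant K are arbitrary
# choices for this example, not a BBO specification.

K = 5  # number of peer ratings at which peers carry half the weight

def consensus_rating(self_rating, peer_ratings):
    if not peer_ratings:
        return float(self_rating)
    peer_avg = sum(peer_ratings) / len(peer_ratings)
    w = len(peer_ratings) / (len(peer_ratings) + K)
    return (1 - w) * self_rating + w * peer_avg

# Someone who self-rates "expert" (4) but is tagged as intermediate (2)
# by ten partners and opponents drifts toward the community view:
# consensus_rating(4, [2] * 10)  ->  roughly 2.67
```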


A lot of the systems proposed depend too much on what people "have done." [...] Tricky, isn't it?

This gets my vote for post of the month. It really hits home.

You and Adam get to keep your expert status, not to worry.


A lot of the systems proposed depend too much on what people "have done." [...] Tricky, isn't it?

This gets my vote for post of the month. It really hits home.

You and Adam get to keep your expert status, not to worry.

Whew. I kept scrolling down for pigeon and couldn't find it ;)

