Everything posted by tysen2k

  1. A few questions for those of you currently using Zar to evaluate. Did you go through your system and convert all your point ranges? Is a weak 1NT now something like 25-29? You have to go through all your response structures and convert those too, right? And what do you do when opponents ask you for an explanation?
  2. This is exactly what Binky points do, except that instead of taking just the one North hand in your example, they were based on over 700,000 hands. Additionally, you don't want an evaluation method that is best opposite an "average hand"; you want one that is weighted over all the hands that partner can have. Binky takes the evaluation method that is best simultaneously for both players instead of just one. Binky and other double-dummy evaluators can give some insight into competitive bidding as well. Based on your own hand, you can make some estimate of how many tricks the opponents can take. Not as accurate as estimating your own tricks, but still... You can generate a sort of "total tricks" estimate and use that for preemptive or competitive bidding. So if you have a shapely hand, you may not have that many "points," but you can see that you have no defense and can bid it up.
  3. In my homegrown system for matchpoints I play 1N never has a 4cM. I also extend the range to 11-15, using 2♣ as a strength ask. At matchpoints, missing the 2M partial costs a lot more than at IMPs.
  4. I agree that Binky points are not meant to be used at the table, and I say as much in my articles. In my view they do some good in a number of ways: Computer bridge programs can use them for more accurate evaluations. They allow us to quantify what changes occur during the bidding; we all have judgment about how valuations change during the auction, but it's nice to be able to quantify it. And we may be able to derive other "simpler" formulas from them that capture most of the accuracy but don't require a calculator. That last one is the project I'm currently working on.
  5. Since the HCP portion of Zar is essentially equal to BUM-RAP x1.5, the hands where Zar departs from reality depend only on distribution and not high-card strength. Zar points undervalue some distributions and overvalue others. When you do a statistical analysis of all hands, you can pick out the distributions that mess up the most. Weighted by frequency, the biggest errors are:
     5332: Zar gives this 11 distribution points, or 3 more than a 4333 pattern. Therefore Zar says it should take 0.6 tricks more than 4333, when in reality it only takes 0.339 more tricks.
     6322: Similar story. Zar says it's 1 trick better than 4333 when it's only 0.660 tricks better.
     5422: Zar says 0.8 tricks when it's 0.595.
     4441: This time Zar undervalues the holding. Zar says it's 0.6 tricks better when it's really 0.810.
     There are many other errors all over the place, and many have bigger magnitude, but their frequency doesn't put them at the top of the list.
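The arithmetic behind those numbers can be sketched as follows. This assumes the standard Zar distribution count, (a+b) + (a-d) over the sorted suit lengths, and the 5-Zar-points-per-trick conversion implied by the figures in the post (3 extra points → 0.6 tricks); the "actual" trick gains are the double-dummy averages quoted above.

```python
def zar_dist(pattern):
    """Zar distribution points: two longest suits plus (longest - shortest)."""
    a, b, c, d = sorted(pattern, reverse=True)
    return (a + b) + (a - d)

BASE = zar_dist((4, 3, 3, 3))  # 8 points for the flattest pattern

# Average extra tricks over 4333, as quoted in the post
actual = {
    (5, 3, 3, 2): 0.339,
    (6, 3, 2, 2): 0.660,
    (5, 4, 2, 2): 0.595,
    (4, 4, 4, 1): 0.810,
}

for pattern, real in actual.items():
    # 5 Zar points correspond to 1 trick
    claimed = (zar_dist(pattern) - BASE) / 5
    print(pattern, f"Zar claims {claimed:+.1f} tricks, actual {real:+.3f}")
```

Running this reproduces each comparison in the post: 5332 scores 11 Zar distribution points, so Zar claims +0.6 tricks versus an actual +0.339, and so on for the other shapes.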
  6. Waste of energy. BUM-RAP (A=4.5, K=3, Q=1.5, J=0.75, T=0.25) plus a simple 321 distribution scheme is more accurate. Even dropping the T completely is still better. Zar's method of distribution counting just doesn't reflect real trick-taking ability accurately. I wrote an article about this a few weeks ago on rgb comparing different evaluation schemes: http://tinyurl.com/32tgc For those interested in hand evaluation, that article also references two other articles I wrote on the subject: [improving Hand Evaluation Part 1] http://tinyurl.com/25huc [improving Hand Evaluation Part 2] http://tinyurl.com/383e6 Tysen
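The BUM-RAP scale above is easy to compute. Here is a minimal sketch, taking "321" to mean the usual short-suit count (void=3, singleton=2, doubleton=1); the hand representation (one string per suit) is just for illustration.

```python
# BUM-RAP high-card values from the post
BUMRAP = {'A': 4.5, 'K': 3.0, 'Q': 1.5, 'J': 0.75, 'T': 0.25}

# 321 short-suit scheme: void=3, singleton=2, doubleton=1
SHORTNESS = {0: 3, 1: 2, 2: 1}

def bumrap_321(hand):
    """hand: four strings, one per suit, e.g. ['AKT62', 'Q5', 'J973', '84']"""
    high_cards = sum(BUMRAP.get(card, 0.0) for suit in hand for card in suit)
    distribution = sum(SHORTNESS.get(len(suit), 0) for suit in hand)
    return high_cards + distribution

# 7.75 (AKT) + 1.5 (Q) + 0.75 (J) high cards, plus 2 for the two doubletons
print(bumrap_321(['AKT62', 'Q5', 'J973', '84']))  # 12.0
```

Dropping the T, as the post suggests, is just a matter of deleting the `'T': 0.25` entry from the table.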