An interesting mathematics issue?


It recently occurred to me that restricted choice has multiple permutations, as to cards in one suit, lead inferences, and even bidding inferences.

 

This got me thinking about an article I read years ago about an anti-field play. If the field can be expected to take the 52% line 100% of the time, then you may win by taking the 48% line. You beat the field 48% of the time. Plus, if you fail, you then take the next 48% line on the next hand. If that works, you are back to even. You only fail both times 52% of 52% of the time, or roughly 27% of the time. So, the anti-field play gains 48% of the time and loses only about 27% of the time.
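For what it's worth, here is a quick check of those two-board figures (a hypothetical sketch: both anti-field decisions are treated as independent 48% shots, and the field always takes the 52% line):

```python
# Two independent 48% anti-field decisions, taken per the strategy
# "if the first one fails, try again on the next hand".
p = 0.48                      # chance an anti-field line works

gain       = p                # hand 1 works; play with the field afterwards
break_even = (1 - p) * p      # hand 1 fails, hand 2 works: back to even
lose_both  = (1 - p) ** 2     # both fail

print(f"gain:       {gain:.4f}")        # 0.4800
print(f"break even: {break_even:.4f}")  # 0.2496
print(f"lose both:  {lose_both:.4f}")   # 0.2704, i.e. roughly 27%
```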

 

This same principle seems applicable to bidding. If a slam or game is slightly anti-percentage, you might stab at it early in a long match. If it fails, you hopefully can take another stab at it on a similar slight odds disadvantage. If it works, you revert to normal percentages.

 

This seems to place eggs in one basket, but that might be necessary in a match where you are underdogs. If the math is right, and if a slam, for example, is calculated for a large IMP gain, you will gain that buffer about twice as often as you sink yourselves irretrievably.

 

We have all heard this theory of going anti-percentage, but the mathematics actually back it up. It is not a matter of simply gaining on the off-hands. Rather, you will gain more than you lose.

 

As additional interest, if you lose on the first, then gain back on the second, you are back to even. Now, try again!

 

The key caveat, though, is that, once you catch the break, you must return to normal bidding, to protect your mathematical advantage.

 

Actually, the math might be even better. In the roughly 27% of cases where you fail twice, you might also win the next two and be back to even. The chance of that is about 23% when the odds are this close. So, maybe the net odds of failure are roughly 21%? That would be 48% against 21%, or a GREAT gainer.

 

I'd love to see the actual numbers crunched, but I'm off to work now. :(


It recently occurred to me that restricted choice has multiple permutations, as to cards in one suit, lead inferences, and even bidding inferences.

 

This got me thinking about an article I read years ago about an anti-field play. If the field can be expected to take the 52% line 100% of the time, then you may win by taking the 48% line. You beat the field 48% of the time. Plus, if you fail, you then take the next 48% line on the next hand. If that works, you are back to even. You only fail both times 52% of 52% of the time, or roughly 27% of the time. So, the anti-field play gains 48% of the time and loses only about 27% of the time.

 

This same principle seems applicable to bidding.  If a slam or game is slightly anti-percentage, you might stab at it early in a long match.  If it fails, you hopefully can take another stab at it on a similar slight odds disadvantage.  If it works, you revert to normal percentages.

 

Comment 1: Stay very far away from Blackjack tables

 

Comment 2: Scoring tables are going to be very important to any calculation. For simplicity's sake, let's assume that you're playing board-a-match (BAM).

 

First, consider what happens if you and your opponents are equally skilled and both tables play two boards straight down the middle.

 

On board 1, you and your opponents will both make your contract 52% of the time; you tie the board and each score 1/2 point. You will both go down 48% of the time; once again you tie the board and score 1/2 point. It's easy to demonstrate that your expected score over two boards will be 1 point.

 

Now, let's consider what happens when you adopt your "improved" strategy.

 

On board 1, there is a 48% chance that you will score 1 point and a 52% chance that you will score 0 points. On board 2:

 

52% of the time, your gamble on board 1 didn't pay off. In this case, you're going to gamble a second time.

 

48% of the time, your gamble on board 1 did pay off, in which case you're going to play straight down the middle on board 2.

 

Your expected return is 0.52 × (0 + 0.48 × 1) + 0.48 × (1 + 0.5) ≈ 0.97.

 

Worse yet, your "improved" strategy is simultaneously improving your opponents' score. (Their expected score has increased from 1 to 1.03.)
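A small sketch of that board-a-match expectation (same hypothetical numbers: the anti-field line wins the board outright 48% of the time and loses it 52% of the time, and the "improved" strategy gambles on board 2 only when board 1 was lost):

```python
# BAM scoring per board: 1 for a win, 0.5 for a tie, 0 for a loss.
p = 0.48  # chance the anti-field gamble wins a board outright

# Both tables straight down the middle: every board is a tie.
flat = 0.5 + 0.5                       # 1.0 over two boards

# "Improved" strategy: gamble on board 1; gamble again on board 2
# only if board 1 was lost, otherwise play with the field (a tie).
improved = p * (1 + 0.5) + (1 - p) * (0 + p * 1)

print(f"flat strategy:     {flat:.2f}")           # 1.00
print(f"gambling strategy: {improved:.4f}")       # 0.9696
print(f"opponents now get: {2 - improved:.4f}")   # 1.0304
```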


It recently occurred to me that restricted choice has multiple permutations, as to cards in one suit, lead inferences, and even bidding inferences.

 

This got me thinking about an article I read years ago about an anti-field play. If the field can be expected to take the 52% line 100% of the time, then you may win by taking the 48% line. You beat the field 48% of the time. Plus, if you fail, you then take the next 48% line on the next hand. If that works, you are back to even. You only fail both times 52% of 52% of the time, or roughly 27% of the time. So, the anti-field play gains 48% of the time and loses only about 27% of the time.

 

This same principle seems applicable to bidding. If a slam or game is slightly anti-percentage, you might stab at it early in a long match. If it fails, you hopefully can take another stab at it on a similar slight odds disadvantage. If it works, you revert to normal percentages.

 

This seems to place eggs in one basket, but that might be necessary in a match where you are underdogs. If the math is right, and if a slam, for example, is calculated for a large IMP gain, you will gain that buffer about twice as often as you sink yourselves irretrievably.

This ignores the fact that opponents in a team match can also bid 48% slams if your first slam succeeds.

Your example about matchpoints is also flawed. It is true that you are gaining on 48% of the hands and losing on only about 27% of the hands, but the loss will be twice as big, and thus it will affect your winning percentage roughly twice as much.

 

I remember a similar BW article; maybe it is the same one and you misremember it. The point in that article was that if you have two chances for an anti-percentage play on the same hand, then it makes sense to try the first anti-percentage play, followed by the percentage play in the other suit if it worked, and the anti-percentage play if it didn't work. You don't mind the fact that in the ~27% case you are two tricks below the field, and only one trick above the field if you win.

(The article was called "Two-point conversion" IIRC.)


As always with mathematics, it helps to make the assumptions precise. Somewhere embedded in your thoughts, I believe, you are stipulating that the competition is good enough that it is not practical to think you can beat them by normal play, so you will try some abnormal play. Just as a for instance, not that it would happen, but pretend you play two grands, both with nine-card trump suits missing the Q432. Following your recommendations, you take a second-round finesse on the first hand. If that works you play for the drop next time. If it fails you finesse again next time. Assume BAM scoring. Assume the better opponents play for the drop both times. About 48% of the time you pick up a point (make the first grand while the opps are going down, and do the same as they do on the second). About 27% of the time you lose 2 points (fail in both grands while they make both). The remaining 25% or so of the time you break even (fail in the first, make in the second, while they make the first and fail in the second).

 

Your average gain is negative: 1 × 0.48 - 2 × 0.27 = -0.06.

As of course it should be.
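The same expected margin, computed without rounding (a sketch under the same 48/52 assumptions):

```python
p = 0.48                                   # chance the second-round finesse wins
expected_margin = 1 * p - 2 * (1 - p)**2   # win by 1, lose by 2, or push
print(round(expected_margin, 4))           # -0.0608
```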

 

The argument seems to hinge on: these guys are too good, making our best plays will lead to a loss, so losing 2 points is really no worse than breaking even. We need a swing.

 

It's OK as far as it goes, and essentially amounts to "state of the match" strategy carried out from board 1, because you decided the state of the match was bad as soon as you saw who sat at your table. If you are truly outclassed you should also take the finesse the second time, even if it worked the first time. You may well need it.


No, I think Ken is assuming that we have a 2-board KO BAM match where we know that we have a possible anti-percentage play in both boards. With this advance knowledge, his strategy is sound. But then, if you know that much about board 2 while playing board 1 (i.e. that it's not the opponents who have the chance to come back even with an anti-percentage play in board 2), you may as well know in advance whom to play for the queen...

 

The key caveat, though, is that, once you catch the break, you must return to normal bidding, to protect your mathematical advantage.

That goes without saying; the key caveat is that the opponents please continue to play and bid normally after you have made your odds-against slam.


I think y'all are missing the point entirely. This is not a matter of absolute percentages. This is a matter of mathematics over several events.

 

For the sake of insane simplicity, let's assume that you are entering four events at the Regionals in BFE. Each is a three-board match, with two slam hands and one tight defense-and-partscore hand. You expect to drop five IMPs on the tight competitive/defensive hand in each of the four matches.

 

Now, assume that the opponents know the odds and will strictly follow the odds.

 

In the first event, you bid the 48% slam on the first hand. It makes. When you then stop shy on the second hand, you win that match.

 

In the second event, you bid the 48% slam on the first hand, it fails. On the second hand, you bid the 48% slam, it works. Now, you are back to even. But, you lose the match because of the partscore hand.

 

In the third event, you bid the 48% slam. It makes. You then stop shy again on the second hand and win that match.

 

In the fourth event, you bid the 48% slam. It fails. You then try the second 48% slam. It also fails. You lose that match big.

 

Using this simple approach, you win two matches that you should not win. You lose one match that you would have lost anyway. You lose one big.

 

That's simple odds play against a better team. Now, what if the partscore battle resulted in a tie because the teams are equally matched?

 

You still win the two events where the first anti-percentage slam comes home. You concentrate your losses into the one match that you lose big. But you win 50% of the matches where the first slam fails and the second succeeds.

 

Thus, with unequal teams, this method results in the lesser team winning one-half of the events. With equal teams, this approach results in the team winning 62.5% of the events. In either case, the team that wins more events does so by stealing IMPs from one of the matches. In another view, the team wins more often by dumping much of its IMP losses into one event.

 

At the end of the tournament, the weaker team ends up 1st place, 1st place, 2nd place by a small margin, and 2nd place by a huge margin, but nonetheless two wins.

 

At the end of eight matches between two equal teams, the anti-percentage team wins four matches by a little, wins a fifth match barely, loses the sixth match barely, and loses two matches by a bundle. But, this means five wins out of eight events.

 

The mathematics objecting to this theory fails to recognize that the sacrificial event is getting hammered mercilessly. The net loss to the sacrificial event outweighs the net gains to the favored events, but that does not make a difference, as a loss is a loss and a win is a win.

 

Swiss scoring may correct this glitch, but KOs do not.


That was all essentially a long-winded post in favor of swinging against better teams, with his added condition that if your first swing doesn't work you try another one, but if the first one did work you play to protect your lead. Whether right or wrong, this is hardly groundbreaking theory. Of course it involves a number of totally unrealistic assumptions, but for some people it's fun to talk and think about so whatever.

That was all essentially a long-winded post in favor of swinging against better teams, with his added condition that if your first swing doesn't work you try another one, but if the first one did work you play to protect your lead. Whether right or wrong, this is hardly groundbreaking theory. Of course it involves a number of totally unrealistic assumptions, but for some people it's fun to talk and think about so whatever.

Actually, you are missing the fact that the mathematics suggests using this against equals.

 

There are many problems with this approach, the primary being the eggs-in-one-basket problem.

 

Further, the theory relies upon a large gain in a near-50% anti-percentage action. The sole candidate is the small slam.

 

Further, the theory relies, somewhat, on extended play, where you can be assured of two marginal slams coming up.

 

I suppose, then, that this amounts to little more than support for one general proposition. In an extended teams match, start slightly aggressive with your first small slam decision. If the slam makes, go normal from then on out. If it fails, stay aggressive. Practically speaking, this may mean bidding the first small slam that relies upon a finesse of the trump king and no 5-0 splits anywhere. If that fails, be equally aggressive next time. If it succeeds, be conservative next time.

 

One additional problem is that this assumes reliably conservative opponents who do not change tactics even when they see some anti-percentage slams coming in.


To try and simplify.... suppose you're playing a round-robin team match with win/loss scoring.

 

You sit down and play a match. Every now and then you are presented with the opportunity to take some action that you're pretty sure won't be taken at the other table, but which has a reasonable chance of working out (say it's very slightly worse than the "normal" action you expect your opponents at the other table to take). The rule you should follow is:

 

(1) If you think that you're "up" in the match, either because of earlier boards that seem like they went favorably for you, or just because you have a better team, you should try to duplicate the other table's actions. This reduces the swings and maximizes the chance of translating your advantage (either in score-so-far or skill level) into a win.

 

(2) If you think that you're "down" in the match, either because of earlier boards that seemed to go badly for you, or because you have a worse team, then you should try to take the "abnormal" action (again assuming it's not much worse than the normal action). This increases the swings and thereby improves the chance of nullifying your disadvantage.

 

Similarly, you want to hope for "swing" boards. If you're down 10 in a match going into the fourth quarter and are dealt sixteen hands where you have 26 hcp and a cold game, you're likely to be in big trouble. If you're down 10 and are dealt sixteen difficult slam decisions, or sixteen hands where a normal game contract rides on the opening lead.... then it's pretty easy to imagine winning the match (or losing the match big) based on how these hands go. If you can "create" swing boards by taking weird actions (without the probability of the swing being against you being unacceptably high) then you should do it when you're down and not when you're up.

 

The point is that the goal is really "maximize your chance of winning" and that this is a different goal from "maximize your total net imps over the whole event." Even in IMP pairs those two are different! So the action that maximizes your total net imps is not always "best" in practice.


That was all essentially a long-winded post in favor of swinging against better teams, with his added condition that if your first swing doesn't work you try another one, but if the first one did work you play to protect your lead. Whether right or wrong, this is hardly groundbreaking theory. Of course it involves a number of totally unrealistic assumptions, but for some people it's fun to talk and think about so whatever.

Actually, you are missing the fact that the mathematics suggests using this against equals.

 

There are many problems with this approach, the primary being the eggs-in-one-basket problem.

 

Further, the theory relies upon a large gain in a near-50% anti-percentage action. The sole candidate is the small slam.

 

Further, the theory relies, somewhat, on extended play, where you can be assured of two marginal slams coming up.

 

I suppose, then, that this amounts to little more than support for one general proposition. In an extended teams match, start slightly aggressive with your first small slam decision. If the slam makes, go normal from then on out. If it fails, stay aggressive. Practically speaking, this may mean bidding the first small slam that relies upon a finesse of the trump king and no 5-0 splits anywhere. If that fails, be equally aggressive next time. If it succeeds, be conservative next time.

 

One additional problem is that this assumes reliably conservative opponents who do not change tactics even when they see some anti-percentage slams coming in.

Got it, so as long as my opponents are reliably conservative, and unable to change tactics, and I know exactly 2 swingy boards are coming, and I know exactly what they will do at the other table, and I know the exact percentage of each action working, then we are taking a mathematically inferior action that will gain us a swing that may win us a match we may have been unable to win.


I think y'all are missing the point entirely. This is not a matter of absolute percentages. This is a matter of mathematics over several events.

Ken...

 

Did you notice the fact that most of the posters included caveats about scoring tables? It's vaguely insulting when you act as if other posters are incapable of understanding a rather simplistic point. It gets even more annoying when the examples that you chose to illustrate your points are flawed.

 

There's a very significant difference between the example that you're using and the original Bridge World article that you're cribbing from: as Cherdano notes, the original Bridge World article focuses on a single hand. Declarer knows that he will have the opportunity to make a pair of anti-field plays. Moreover, the payoff in both cases is the same, so the two plays can balance each other out precisely. Your example focuses on multiple boards in the same round. Normally, when I have the opportunity to take an anti-percentage line in a slam, I can't guarantee that I can double down on the same bet later in that match.

 

Regardless, the entire exercise seems academic at best. In general, I suspect that anyone who has the skills necessary to actually implement any such strategy probably wouldn't need to use it. Moreover, if this strategy actually did improve the expected score of weak players, it would simply lead to the "strong" team adopting a mixed strategy in which they randomized across the 52% line and the 48% line according to some optimal probability distribution.


No! This is absolutely right, a truth from God himself, through me his prophet. This is also the most important new bidding theory of the past ten years. Why can't you see that?

 

I did toss in a question mark. This was not meant to be an unveiling of a grand new theory of IMP play. It was meant to be included in the stack of things that make you go hmm.


There is a point something like this....

 

Not all IMPs are equal. Suppose I'm about to start a 12-board match against a roughly equal team. If I'm down 100 I will almost surely lose. In fact there's not much difference between starting down 100 and down 200. So if the goal is to win (margin doesn't matter), and you give me a choice of starting 100 IMPs down, or a coin flip where if it's heads we start even and tails I start 200 IMPs down, I will surely take the coin flip.

 

Control of Swinginess is Valuable. Suppose I'm playing a 12-board match against an equally skilled team. Either we will start the match even, or we can flip a coin and if it's heads I start up 20, and tails I start down 20. Does it matter to me if we start even or not? Normally if we start even I will win 50% of the time (equal teams). If I start up 20, I will win with probability p>50%, but if I start down 20 I will win with probability q<50%. Since we are assuming everything is symmetric between the teams, if I start down 20 I will win with probability 1-p, and it makes no difference whether we start even or flip a coin. However, say I control the swinginess of the match. For example, suppose I'm allowed to decide before each board whether it will be played at all vulnerable or all non-vulnerable. This improves my chance of winning because if I'm up, I can make every board all NV and if I'm down all boards are all V. Now if I start up 20, I can reduce the swinginess and win with probability p>50%, whereas if I'm down 20 I will increase the swinginess and win with probability q<50%, but also q>1-p. So in fact I prefer the coin flip.

 

If we control swinginess and the skill level is even, we should swing early. It follows that if I can control swinginess, I should want some big swings early on. After all I'd rather flip a coin and hand some IMPs back and forth (even with zero expectation) than start dead even. This is equivalent to a high swinginess early in the match, subsequently reduced if I gain the lead.
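A toy simulation of the variance-control idea above (purely illustrative assumptions: each board hands one side a zero-mean random IMP swing whose spread the swingy team can choose, the teams are otherwise equal, and the match is decided by total IMPs):

```python
import random

def play_match(boards=12, control_swing=True, low=3.0, high=12.0, seed=None):
    """Return True if 'we' win. Each board's net swing to us is a zero-mean
    normal draw; if we control swinginess, we pick a big spread when level
    or behind and a small spread when ahead."""
    rng = random.Random(seed)
    margin = 0.0
    for _ in range(boards):
        spread = (low if margin > 0 else high) if control_swing else low
        margin += rng.gauss(0.0, spread)
    return margin > 0

def win_rate(control_swing, trials=100_000):
    return sum(play_match(control_swing=control_swing, seed=i)
               for i in range(trials)) / trials

print("fixed swinginess:     ", win_rate(False))  # ~0.50 by symmetry
print("swinginess controlled:", win_rate(True))   # above 0.50
```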

 

Of course we can ask, what is this business about controlling swinginess? Obviously both sides can take swingy actions if they choose. But not all pairs are symmetric in this respect -- some people "swing" better than others, or are more able to "switch modes" to play swingy or down the middle. Some pairs are naturally swingy (like people playing a weird system) and a team captain can sub that pair in or out.

 

In fact there is a useful upshot of this for team strategy. Suppose we have a team consisting of three pairs. Say the pairs are roughly equally skilled, but one pair is extremely swingy (playing weird methods, aggressive preempts, or something like this). Obviously at half time if we are substantially losing we should put the swingy pair in, and if we're up a bunch we sit the swingy pair out. But what about the start of the match? Assuming our opposition is roughly equally skilled, the argument is that we should have the swingy pair start the match.


One thing which KenRex seems to forget is the case when we fail on both hands. Agreed, failing on both hands is ~27% and if those are the only two hands, the strategy is sound for not losing on those boards. But having only two such hands is not always the case.

 

If you try to apply the strategy more broadly (look at boards in pairs; on the first, take the anti-percentage line; if it wins, play the percentage line on the second, otherwise take the anti-percentage line on the second), it is losing in the long run, which is probably relevant in the case of a 256-board match.

 

I.e., we can show that if p1 and p2 (0 < p1, p2 < 1/2) are the anti-percentage probabilities, then in a BAM match the expected number of points is < 0 for the above strategy. (The same was shown by the other Ken for the p1 = p2 = 0.48 case.)

 

Now, whether this negative expectation manifests itself in the match is another question.

 

For the 48% case of a small slam, consider a match which has 3 such pairs of hands (i.e., 6 hands in 3 pairs).

 

For each pair, we have a 0.2704 probability of losing 2 points, a 0.2496 probability of breaking even, and a 0.48 probability of gaining 1 point.

 

In a run of 3 such pairs, the chance that we lose at least one pair is 0.61. And if we lose even one pair, we can do no better than break even over the run. For larger runs, the chances become even worse.

 

For runs of 1 or 2 pairs (i.e., two or four small slam hands), we have a > 50% chance of at least breaking even. So for shorter matches (but not too short), this seems like a sound strategy if you are the underdog.
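A quick check of those run probabilities (a sketch using the same per-pair distribution):

```python
# Per-pair outcome distribution when you gamble on the first board of a pair
# and double up only after a loss (p = 0.48 for the anti-percentage line).
p = 0.48
p_lose = (1 - p) ** 2     # 0.2704 -> lose 2 points on the pair
p_even = (1 - p) * p      # 0.2496 -> break even
p_gain = p                # 0.48   -> gain 1 point

for runs in (1, 2, 3, 5):
    at_least_one_loss = 1 - (1 - p_lose) ** runs
    print(f"{runs} pair(s): P(lose at least one pair) = {at_least_one_loss:.3f}")
# 1 -> 0.270, 2 -> 0.468, 3 -> 0.612, 5 -> 0.793
```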

 

In a 256 board match or longer, how many such hands do you expect?


Some folks have mentioned assumptions about what Ken is assuming. I am assuming they mean KenRex and not me. We are getting to lots of assumptions.

 

Let's try this. Assume everyone can assess the situation when a swing occurs. Let's assume the opportunity to play anti-percentage happens at board 1. 48% of the time we are ahead, 52% of the time we are behind after board 1. The ahead team will try to preserve its advantage. The behind team will try to overcome its disadvantage. It seems to be better to be the ahead team, but playing anti-percentage we will more likely be the behind team.

 

What I get out of this, still, is that swinging is good (although it's not my style) if the opponents are better; playing straight is good if the opponents are equal or worse, unless they swing and get lucky.

 

Contrary to popular opinion, mathematics and common sense are usually on the same page, and I believe this is true here.

 

A true story: Long ago I sat down to play a seven board swiss match. On my right was a strong and young player. He bid to a slam, won the lead, finessed and then dropped my king of trump, then played a side suit that split 3-3 so he could pitch a loser. I asked if he needed anything else and he said yes and took another working finesse. We won the match. Fortunately the slam was on board 1 and partner and I each realized the necessity to push our luck on subsequent boards. Our luck was in. These things can cut both ways. Generally it is better to be ahead than behind after the first board, however. Duh.


This may be somewhat getting to the point of silliness, and I may very well be misunderstanding some of these posts, but I still think that the point is being lost because the concept of a sacrificial event is being missed.

 

The idea of going anti-percentage against an equal team presupposes a willingness to be completely pummeled in one event out of four. By creating this sacrificial lamb of an event, it seems that we maximize our chances of winning more events in the long run. All of this assumes, of course, that the parameters for the two swings exist.

 

I'll be very precise in the constraints this time.

 

Suppose that you will be in the finals of the USBC for four years straight.

Suppose that you will be tied all four times with three boards to play.

Suppose that the last board will be a partscore battle, with ±3 IMPs available, depending upon style and luck.

Suppose that the other two boards will be 48% small slams and that your opponents will never bid 48% small slams.

 

If you both bid merely game on the second-to-last and third-to-last boards, you will be tied with the one partscore battle to go. The law of averages says that you will split the difference, each winning two of the events.

 

Now, suppose, instead, that your team decides to go anti-percentage on the first board each year. 48% is roughly 50-50. So, you will roughly make the slam twice and fail twice. (For purists, assume 100 examples, such that you will make slam 48 times.) As this one contract will net you enough to ensure the win, you bid straight on the second board (game only) and pass out the last board, for two wins out of four immediately (or 48 out of 100 for the purists).

 

When the 48% slam fails, you are behind. So, you zag on the second hand also. 48% of the time, the second slam will make. If one of the two slams makes, you are back to even. So, roughly, you will make one and fail on the other. (Or, make on 25 of the remaining 52 hands for the purists).

 

Of the two (52) cases where the first slam failed, roughly half of the time the second slam fails also, and you lose the event. However, when the second hand succeeds, you are back to even, with even odds going into the last board.

 

So, the rough math (four USBC finals):

Win Year 1

Win Year 2

Hammered to oblivion Year 3

Year four is a toss-up on the last board.

Thus, you are assured of doing at least as well as expected, and you have a 50-50 shot at winning three times out of four.

 

For the purists:

You seal the deal on 48 of the 100 finals.

On 28 of the finals, you are hammered.

On 12 of the finals, you lose on the last board.

On 12 of the finals, you win on the last board.

 

So, for the purists, this technique means that you win 60 of the 100 USBC championships against an equally-skilled opposing team.

 

The cost of the extra 10 wins was that you were utterly humiliated 28 times. However, you also sizably defeated the other team 48 times.
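A sketch of the same 100-finals accounting without rounding to whole finals (assuming, as stipulated, independent 48% slams, opponents who never bid them, and a coin-flip final board when the match is level; the integer counts above are these figures rounded):

```python
p = 0.48   # chance a 48% slam makes

seal_early   = p               # first slam makes: match effectively sealed
hammered     = (1 - p) ** 2    # both slams fail: lose big
level_at_end = (1 - p) * p     # first fails, second makes: back to even

wins   = seal_early + 0.5 * level_at_end   # toss-up on the partscore board
losses = hammered + 0.5 * level_at_end

print(f"win outright: {100 * seal_early:.1f}")     # 48.0
print(f"hammered:     {100 * hammered:.1f}")       # 27.0
print(f"decided late: {100 * level_at_end:.1f}")   # 25.0
print(f"expected wins {100 * wins:.1f}, losses {100 * losses:.1f}")  # 60.5 / 39.5
```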


Ah but suppose my team is your opponents in the USBF finals.

 

Knowing you think this way, I make sure that my team doesn't bid the slam on the first board. If the slam is failing, then we're up a bunch (since you bid it), and then we will bid the slam on the second board (knowing you are bidding it) to win the match 52 times out of 100.

 

Now if the slam was making, I know I'm down. You'll be "resting on your laurels" and won't bid the slam on board two. So I'll bid it. If it makes, then we're roughly even odds on the partscore board (so I win about 11.5 times this way).

 

So now I'm winning about 63.5 of the 100 matches, instead of the 50 times I "rate" to.
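A quick sketch of that counter-strategy arithmetic (assuming, as above, that each 48% slam is an independent shot and the final partscore board is a coin flip when the match is level):

```python
p = 0.48   # chance a 48% slam makes

# KenRex bids the board-1 slam; the counter-strategy team does not.
# If it fails (52%), the counter-strategy team is ahead, and since both
# tables then bid the board-2 slam it is a push either way: they win.
win_when_slam1_fails = 1 - p

# If it makes (48%), KenRex rests on his laurels on board 2; the
# counter-strategy team bids that slam to get back level (48%), then
# the last board is a coin flip.
win_when_slam1_makes = p * p * 0.5

total = win_when_slam1_fails + win_when_slam1_makes
print(f"counter-strategy wins about {100 * total:.1f} of 100 matches")  # ~63.5
```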

 

Basically this stuff only works if you "know" how your opponents will play.... :angry:


This may be somewhat getting to the point of silliness, and I may very well be misunderstanding some of these posts, but I still think that the point is being lost because the concept of a sacrificial event is being missed.

Ken: We understand your point. Repeating yourself isn't going to advance your argument.

 

With this said and done, your analysis is flawed. I don't think that it's reasonable to assume that your partnership is capable of recognizing the opportunity to employ this strategy but the opponents are not. In particular, it seems very strange to simultaneously assume that the opponents are better than you but that you're the only one who gets to employ this bizarre little trick.

 

If you are going to advance an argument like this, then you need to complete the analysis and calculate a mixed strategy in which each partnership will randomize across the 48% line and the 52% line.

 

I haven't bothered to complete this calculation; however, I expect that adopting a mixed strategy will destroy your example. If a mixed strategy is employed, you aren't going to be able to predict whether or not your result matches the one at the other table. In turn, this is going to prevent you from employing your staged decision-making process.


It's a well-known fallacy (it must have a name, but I don't know it) that you can get an expected win at a repeated heads-or-tails game by backing out as soon as you have a net gain. You even see it at the stock exchange: some unskilled investors (including a few professionals, I have been told) have an irrational aversion to selling stock at a lower price than the one they paid themselves.

 

In this particular case, however, it could be sound in theory: since a loss of 1 matchpoint over the entire match is as bad as a loss of 100 matchpoints, any strategy that would lead to many one-matchpoint-wins balanced by a few hundred-matchpoint-losses would be attractive.

 

But, as others have noted, when you factor in other kinds of randomness that you can't control, the theory breaks down. What remains is what we already know:

 

- Play swingy if the status quo is unfavorable. This includes the situation in which you play against a better team.

- Maybe your opps at the other table know that you will play anti-percentage to get a swing, so they will play anti-percentage as well to restore the wash. Taking that into account, you should adopt some mixed strategy, probably taking the 52% line 52% of the time (my guess; some game theorist please correct me if I'm wrong).

 

Here's a mathematical puzzle: Suppose you are an underdog, so other things being equal you prefer an abnormal result to a normal result. On the other hand, simply playing the best line could be the winning option, since if you minimize your expected loss you improve the chance of winning through some uncontrolled randomness (say some artifact of your opps playing a different notrump range, or some blackout happening to one of your opps at the other table). Presumably, there must be a threshold somewhere: taking the 48% line instead of the 52% line may be correct, while taking the 25% line instead of the 75% line could at the same time be incorrect.

 

Exercise: write a formula that enables you to compute the breakpoint percentage! Bonus question: extend the formula to the mixed-strategy scenario!


Perhaps the battle of KenR versus the world can be brought to a simple nub.

 

Suppose you are playing against an equal team. There is some randomness, but "equal" means that as you sit down your probability of winning is 0.5. Now suppose on the first board you seize an opportunity to try an anti-percentage play. Suppose that it goes well. You now have a probability p > 0.5 of winning the match. Now consider the possibility that it goes wrong. Does KenR agree, as I think is being said by many, that if the teams are equal, then it follows that if the first board goes wrong the other team now has probability p of winning? If that is agreed, then the mathematics is simple. Let k be the probability of the anti-percentage play succeeding (so k < 0.5). Then the probability of winning using the anti-percentage play is pk + (1-p)(1-k). This is a weighted average of k and 1-k with the greater weight p put on the smaller number k, so it turns out to be less than the unweighted average 0.5k + 0.5(1-k) = 0.5. (Put otherwise, the expression is linear in p and decreasing as p increases.) This shows that the probability of winning following the KR strategy is less than 0.5. (A little effort gives 0.5 - (1-2k)(p-0.5).)
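For what it's worth, a tiny numeric check of that expression (a sketch; k is the success chance of the anti-percentage action, p is the chance that whoever is ahead after board 1 goes on to win):

```python
# Win the match by winning board 1 (prob k) and then holding the lead (prob p),
# or by losing board 1 (prob 1-k) and coming back (prob 1-p).
def win_prob(k, p):
    return k * p + (1 - k) * (1 - p)

for k in (0.48, 0.40, 0.25):
    for p in (0.55, 0.65, 0.80):
        w = win_prob(k, p)
        closed_form = 0.5 - (1 - 2 * k) * (p - 0.5)
        print(f"k={k:.2f} p={p:.2f}  win={w:.4f}  closed form={closed_form:.4f}")
# Every line prints a value below 0.5, matching the algebra above.
```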

 

 

I think that the KR argument essentially amounts to a denial of the symmetry claimed above: if we win our anti-percentage bet, we now have a probability p > 0.5 of winning the match, but KR treats the loss on the first hand anti-symmetrically, not assigning p to the probability of the opponents going on to win. This would seem to be a denial of true equality of abilities.

 

 

In the case of a hand it is, or can be, different. It's a matter of control. Declarer has options about how to play the spades and how to play the clubs. Assume the opponents are helpless to prevent him from playing these suits as he sees fit. Then playing anti-percentage in spades and basing his play in clubs on the outcome of the play in spades may have merit. But in the KR situation, the opponents are not helpless. They also, as many have said, can base subsequent play on the results of the first board.


Some get it. When I re-clarify until I'm blue in the face, it is because some do not.

 

The ultimate point of all this?

 

As you have seen, the anti-percentage early hit has several defenses. However, without the defenses, it will work. The defenses include randomizing across percentage and anti-percentage plays and bids, catering to the opponents, and the like.

 

The conclusion I see is that ridiculous adherence to pure but incomplete mathematics is not even mathematically sound. Making the "percentage play" or the "percentage bid" is, in and of itself, subject to mathematical manipulation. Being predictably "percentage correct" is a liability, and a mathematical "proof" of that liability is possible.

 

The trigger for this line of thought was an email I received recently from a friend who sends questions to me from time to time. The bar scene after the game was filled with very talented players in a heated discussion as to slam bidding theory. I was given about five sample hands, where the slam success rate for each was razor-thin, hovering just at or just below 50%. My answer was that, when the percentages are very close to 50-50, just above or just below, it is not as easy as drawing a line at a specific percent success rate. Even forgetting state of the match, strength of opponents, and other psychologicals, such as board one of an extended teams match against a complete unknown, it still depends.


Maybe I misunderstand. It appears you began with mathematics in the title and a mathematical argument. When the math doesn't seem to be working as claimed you speak of ridiculous adherence to pure but incomplete mathematics.

 

Mostly this just doesn't appear, on its face, to be a good strategy. The incomplete mathematics is complete enough to show that the incomplete mathematics you advanced doesn't hold up. If you want to think of your strategy as faith-based, I have no objection.

 

For what it's worth, I agree that there can be complicated mathematical issues here. Helene certainly mentioned some. But deliberately going against the odds on board 1 hoping to recover later if it doesn't work is, at best, a strategy for coping with a superior team. I don't care for it there either (I would dread going back to tell my teammates, who just played an inspired round against admired opponents, that we lost because I decided I had to swing a bit to cover for our incompetence), but if I were playing for my life I might try it.


See, this is where my head explodes.

 

I really do not get how the mathematics I propose "doesn't hold up." The principle is extremely sound. You can make a living off of the deviation if you play the deviation to your advantage. I have absolutely no doubt about, and no retraction of, the mathematics involved. The only part that is incomplete is the specific valuations.

 

My point arises in the specific instance of whether or not to bid a small slam. If the small slam is 49.9%, one mathematical model suggests that, in the long run, you will gain less than you lose if you bid these slams. However, that is a limited approach. If the opponents know that you will never bid a 49.9% slam, they can dump the failing slams into one set, taking advantage of the deviation, and beat you more often than they lose. That part of the equation I believe I have proven.

 

I believe that I have also proven that this tactic is very good against superior teams, but that it is also good against equal teams. The advantage may be four times larger against a superior team than against an equal team, admittedly, but it still succeeds against an equal team.

 

Note, also, that this is not advice to bid slim slams. It is equally useful in not bidding a slightly favorable slam: going conservative as opposed to going aggressive. So, there is no pro-slam bias or agenda.

