Trumpace Posted December 19, 2005

But then again, all else is not equal. In the eleven-board match, if you always play for the overtrick you may "lose" 61% of the time, but almost all of these "losses" are by one IMP, when you score +1 on ten boards and -11 on one board. On the other hand, when you "win" 39% of the time, you will score +1 on every board and win by 11 IMPs. So while I agree that if these eleven boards are the entire match, and it's win/loss and not VPs, and I know in advance that every board will be like this, it would be better to take safety plays... this is not how real bridge works.

Assume the example was a final match, both teams starting from zero. The example was just to prove a point: the logic "it works in the long run, hence I will play for it" is not exactly correct from a mathematical standpoint. It had nothing to do with the other factors of bridge.

When all else fails, you resort to mathematics! For instance, you know West has 5 clubs and East has 3. From a mathematical standpoint you finesse West for the Q. But that is not real bridge if you know from the bidding and the play so far that West cannot have the Q.

Anyway, more often than not, the decision which wins in the long run turns out to be the winning decision for shorter runs too.
arrows Posted December 19, 2005

Think about this: people know that insurance companies make money off them, but they keep buying insurance. Why? People know that the odds of winning the lottery are against them, but they keep buying tickets. Why? Are they all idiots?

Typically in an IMP team game there are lots of swings that cost much more than 1 IMP, so the cost of losing 1 more IMP is negligible. That's why you should stick to the safe play. Mathematically correct doesn't mean practically correct. The key point here is that most people are risk-averse; that's why they pay a premium for insurance. The 1 IMP is your premium.
awm Posted December 19, 2005

Think about this: people know that insurance companies make money off them, but they keep buying insurance. Why? People know that the odds of winning the lottery are against them, but they keep buying tickets. Why? Are they all idiots?

The reason is that the value of money isn't linear. To give an easy example, suppose you are given a choice: either you will be given $10 million, or you can flip a fair coin. If the coin comes up heads, you get $30 million. If it comes up tails, you get nothing. What do you pick? The expectation from the coin flip is higher, but most rational people will take the guaranteed money.

The point is, there is a threshold of money which will allow me (or most people) to be pretty much set for life. There wouldn't be much difference in my life between winning $10 million and $30 million, since either means I could retire and do pretty much whatever I wanted to do. Just the earnings from a safe investment of this money would yield a nice yearly salary. So even though $30 million is "three times higher," it isn't really worth three times as much. On the other hand, if you get rid of the "million" I would take the coin flip: $30 is worth three times more than $10 to me. A really rich person (say Warren Buffett or Bill Gates) would probably take the coin flip for the $30 million too, since these numbers are basically pocket change to them.

As for the lottery, the effective value of $10 million (or whatever the lottery pays) is really high and the cost ($1 or so) is very small. The relative values are not equal to the monetary values for most people. There's also the enjoyment gained by actually playing, which may be worth a small amount. You won't see people like Bill Gates (for whom even the winning value of the lottery is pocket change) playing the lottery. As for insurance, I know that I buy insurance because it's required by law.
;) I think the same reasoning applies in reverse though. A cost of $500-$1000 a year is annoying, but affordable. If I lose everything to an earthquake or fire, I'm pretty much screwed for life, so it may be worth the insurance premium even though it has negative expectation.

I'm not sure how much this applies to bridge, where an IMP is pretty much an IMP, other than in the "strength of field" and "state of match" considerations mentioned before, which can create a situation where IMPs don't really scale linearly either.
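awm's point that "money doesn't value linearly" is just diminishing marginal utility. Here is a minimal sketch of it in code; the log-utility function and the $50,000 baseline wealth are illustrative assumptions of mine, not anything from the thread:

```python
import math

def log_utility(wealth):
    # Concave utility: each extra dollar is worth less the richer you are.
    return math.log(wealth)

def expected_utility(base, sure, gamble_win, p=0.5):
    """Expected utility of the sure payout vs a p-chance gamble."""
    eu_sure = log_utility(base + sure)
    eu_flip = p * log_utility(base + gamble_win) + (1 - p) * log_utility(base)
    return eu_sure, eu_flip

# Millions: the guaranteed $10M beats a 50/50 shot at $30M under log utility,
# even though the flip has the higher monetary expectation.
sure_m, flip_m = expected_utility(base=50_000, sure=10_000_000, gamble_win=30_000_000)
print(sure_m > flip_m)   # True: take the sure money

# Tens of dollars: utility is nearly linear at this scale, so the
# higher-expectation flip wins, just as awm says.
sure_s, flip_s = expected_utility(base=50_000, sure=10, gamble_win=30)
print(sure_s < flip_s)   # True: take the flip
```

Any concave utility curve gives the same qualitative flip between the two scales; log utility is just the simplest choice.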
Fluffy Posted December 19, 2005

I'm a damn mathematician and would flip the coin :D
barmar Posted December 19, 2005

In the long run counts only if you are playing a reasonable number of boards which decide the outcome of a match! which could well be over a 1000...

Not really. If you adopt that reasoning, then you can't really play the odds at all, since they're all "in the long run". While playing the odds might not result in winning any particular match, the long run also includes all the matches you ever play. If you adopt a strategy that has a 5% improvement, then (all else being equal) you can expect to win 5% more matches. As a simplified example, suppose the only swing boards in any of your matches are the ones where you have to make this safety-play decision, and you push all the other boards. If you don't make this decision correctly, you can expect to lose more of those deciding boards than if you do, so you'll lose more matches.

In real life, of course, there are lots of swing boards, so no single board is likely to decide a long match (sometimes it *feels* like it does, as in the final board of the USA-Italy match last year, but there were many other big swings in that match that could have won Italy the championship despite that slip at the end). But it's not really helpful to think about that -- to decide what to do on a particular board, you should normally consider it in isolation, and go with the odds on it. There are some occasions where the state of the match allows some deviation (if you're up 50 IMPs with 10 boards to go, swings of 1-2 IMPs probably don't matter any more), but most of the time you don't have this information and you have to play each board best.
cherdano Posted December 19, 2005

I think the arguments about short matches in this thread are exactly wrong. When you are playing a very short match against an equally good team, the 1 IMP from the overtrick will decide the match more often than 1/11 times the number of times the 11 IMPs will decide it. If you don't believe me, just think about a one-board match :D

The argument about VPs is also wrong. 11 IMPs will usually swing around 3 VPs, while 1 IMP has about a 25% chance of swinging a VP, so the ratio of expected gain to expected loss is still the same. (Unless, of course, you are on the last board and your feeling about the state of the match is precise enough that you know you are, say, well within the 3-8 IMP range of 16-14. Not that I would believe anyone who thinks he knows the state of the match that exactly.)

Arend
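cherdano's one-board match is easy to put into numbers. A sketch, assuming a 5% chance of the bad break (my figure; the 61%/39% numbers quoted earlier in the thread correspond to a somewhat higher per-board risk) and a match made up only of such boards, everything else pushing:

```python
from math import comb

def overtrick_side_win_prob(n_boards, p_down=0.05, gain=1, loss=11):
    """P(the always-overtrick team beats the always-safety team) when a
    match consists of n such boards and all other boards push."""
    win = 0.0
    for k in range(n_boards + 1):  # k = boards where the 5-0 break hits
        net = (n_boards - k) * gain - k * loss
        if net > 0:
            win += comb(n_boards, k) * p_down**k * (1 - p_down)**(n_boards - k)
    return win

print(round(overtrick_side_win_prob(1), 2))   # 0.95: one-board match, overtrick play wins 95%
print(round(overtrick_side_win_prob(11), 2))  # 0.57: at 5% risk even 11 boards favour the overtrick
```

At this per-board risk the overtrick side wins the 11-board match whenever no bad break appears (0.95^11 ≈ 57%), since a single -11 outweighs ten +1s.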
Trumpace Posted December 19, 2005

In the long run counts only if you are playing a reasonable number of boards which decide the outcome of a match! which could well be over a 1000...

Not really. If you adopt that reasoning, then you can't really play the odds at all, since they're all "in the long run".

That statement is a bit misleading. What it meant was that the analysis works only if you consider a long run of such boards. When the number of boards is cut short, a different analysis might be needed, and the long-run analysis might end up giving wrong results. That is all. It does not imply never to play the odds, as you seem to have deduced.

I think the arguments about short matches in this thread are exactly wrong. When you are playing a very short match against an equally good team, the 1 IMP from the overtrick will decide the match more often than 1/11 times the number of times the 11 IMPs will decide it. If you don't believe me, just think about a one-board match

Which arguments about short matches are you talking about? I have seen multiple! :D
luke warm Posted December 20, 2005

very interesting thread, i've never really thought about this... i've "taken on authority" that at imps, the safety play is better "in the long run"... having read all the arguments, two sway me toward always taking the safety play. first was mikeh's argument about the likely contracts at both tables... i agree with him that you don't have to risk a contract for 1 imp; the very fact(?) that different contracts will be played on, say, 50% of the boards will make taking the safety play a better bet. 2nd, trumpace's argument concerning the "long run"... what he said about bb2 vs. bb3 is correct... the 'long run' would include both of those final matches, but would have nothing to do with what to do while actually playing bb3... imo
Trumpace Posted December 20, 2005

very interesting thread, i've never really thought about this... i've "taken on authority" that at imps, the safety play is better "in the long run"... having read all the arguments, two sway me toward always taking the safety play. first was mikeh's argument about the likely contracts at both tables... <Snipped> 2nd, trumpace's argument concerning the "long run"... what he said about bb2 vs. bb3 is correct... the 'long run' would include both of those final matches, but would have nothing to do with what to do while actually playing bb3... imo

Please don't allow the bb2 vs bb3 argument to sway you towards taking the safety play. It is true that the long-run analysis assumes a carry-over of points which does not happen in practice, but this does not imply that playing for overtricks is wrong.

For instance, consider the percentage of such boards (boards with the guard against the 5-0 break or the overtrick) among all possible bridge deals. Assume it is 2% (I am making this figure up, I have no idea what the right figure is). Now consider a 256-board match. The expected number of such boards is ~6. If we play for overtricks with, say, a 4% risk on each, the chance of gaining 6 IMPs on these 6 boards is 0.96^6 ≈ 78.3%. The chance of losing at least 6 IMPs (lose 11, gain 5) is ≈ 21.7%. Based on other considerations of the match, if you think 21.7% is too high, play for safety. If other things are equal, playing for overtricks on each board might be the right way to go (i.e. if you play 1000 such 256-board matches, you will end up winning about 783 of them as opposed to losing 217, if the result is solely based on the 5-0 break deals).

In fact, you could even vary your tactics: play 3 boards for safety and 3 for overtricks. The chance that you gain IMPs is higher now, but the loss is greater in case you lose.
So if you think you need only 5 IMPs from the 5-0 boards (because of the other boards), play 1 board for safety and the remaining 5 for overtricks to maximise your chances.

Basically my point is: there need not be an "always safe" or "always overtrick" strategy. It could be mixed, within the same match. Of course, other considerations might force an "always safe" strategy, but that was not the original poster's question...
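The six-board arithmetic above is a small binomial calculation, easy to check directly. A sketch, assuming (as above, arbitrarily) a 4% chance of the bad break on each of the ~6 special boards:

```python
from math import comb

def swing_distribution(n_boards=6, p_down=0.04, gain=1, loss=11):
    """Distribution of the net IMP swing from playing for the overtrick
    on every such board, against a table that always takes the safety play."""
    dist = {}
    for k in range(n_boards + 1):  # k = boards where the 5-0 break appears
        prob = comb(n_boards, k) * p_down**k * (1 - p_down)**(n_boards - k)
        net = (n_boards - k) * gain - k * loss
        dist[net] = prob
    return dist

dist = swing_distribution()
print(round(dist[6], 3))  # 0.783: all six boards gain an IMP
print(round(sum(p for net, p in dist.items() if net < 0), 3))  # 0.217: any bad break means a net loss
```

Note that a single bad break already produces a net of 5 - 11 = -6 IMPs, which is why "gain on every board" and "lose at least 6" are the only two outcomes here.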
Chamaco Posted December 20, 2005 (Author)

I would like to ask a question: don't you think that the analysis should *not* be restricted to the single match (whatever the number of boards)? It seems to me, on afterthought, that the +1 IMP pickups should be accounted over ALL OF THE MATCHES where we avoid safety plays against a low %. The cost-benefit analysis should not be restricted to a single match, but should include *all the IMP team matches in our life* where we adopt the same tactics, refusing a low-risk safety play.
Chamaco Posted December 20, 2005 (Author)

Think about this: people know that insurance companies make money off them, but they keep buying insurance. Why?

We can also think the other way around: insurance companies are ready to pay out a big sum once in a while if a disaster occurs, provided they can cash in a moderate amount on a regular basis. And insurance companies do have sound budgets, so, on balance, their approach must pay off: indeed, in many instances they reckon that the low-frequency disaster they cover never occurs at all during a lifetime.

This suggests that even the opposite reasoning can be defended: we might want to cash in our 1-IMP pluses on a regular basis, ready to pay off the low-frequency 11-IMP losses, just as the insurance companies do. This analysis should not be restricted to the single match but, if we consistently adopt the same tactics, should cover all of the team matches in our life.
whereagles Posted December 20, 2005

I'm a damn mathematician and would flip the coin :lol:

that's the spirit!
Trumpace Posted December 20, 2005

I would like to ask a question: don't you think that the analysis should *not* be restricted to the single match (whatever the number of boards)? It seems to me, on afterthought, that the +1 IMP pickups should be accounted over ALL OF THE MATCHES where we avoid safety plays against a low %. The cost-benefit analysis should not be restricted to a single match, but should include *all the IMP team matches in our life* where we adopt the same tactics, refusing a low-risk safety play.

Why do you think it should be over all boards played in a lifetime? For instance, consider the following game. We have a coin which has a 75% chance of showing heads and a 25% chance of showing tails. I suggest we play a game as follows: you flip the coin; if it is heads, you get 1.1 tokens; if it is tails, I get 3 tokens. We do this in groups of 3 flips. At the end of 3 flips, whoever has the higher number of tokens gets $1 (paid by the other, of course). The tokens are then thrown away, and for the next 3 flips we start fresh with 0 tokens each. Do you play the game with me?

If you undertake a long-run analysis over a lifetime, you expect to win 1.1*0.75 - 3*0.25 = 0.075 tokens per flip. Hence over the 3 flips of a round you expect to have more tokens than me. So you take up the challenge...

But consider this: if there is even one tail among the three flips, I end up with more tokens than you. The chance that all three flips are heads is (0.75)^3 = 0.421875, i.e. there is a 42% chance that all three flips are heads. So with at least 58% probability, I win by at least 0.8 tokens per round of 3 flips, i.e. each round I win the $1 with at least 58% probability! So even though you expect to win a positive number of tokens per flip, you end up losing money!

Why? This is because we throw away the tokens you earned in the previous round; they aren't counted in further rounds. If we kept a running count of the tokens won so far, you would be the one winning money.
Now consider a similar game with 2 flips per round instead of 3. Here, you expect to win each round with at least a 56% chance. So the length of the round matters...

The tokens are IMPs and the $ is the championship. Each round is like the final match which decides the championship (assuming the teams start at 0 each). This shows that you need to analyse per match, rather than over a lifetime... Of course, by a match I just mean the final match which decides the outcome. If we have to play other matches in order to reach the final, and there is a victory-point kind of system, we need a different analysis.
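Trumpace's token game is small enough to enumerate exactly. A sketch (the function name and structure are mine) confirming both the ~42% figure for 3-flip rounds and the 56% figure for 2-flip rounds:

```python
from itertools import product

def round_win_prob(flips_per_round, p_heads=0.75, heads_pay=1.1, tails_pay=3.0):
    """Probability that the heads-bettor ends a round with more tokens."""
    win = 0.0
    for outcome in product([True, False], repeat=flips_per_round):
        prob = 1.0
        heads = 0
        for is_heads in outcome:
            prob *= p_heads if is_heads else (1 - p_heads)
            heads += is_heads
        tails = flips_per_round - heads
        if heads * heads_pay > tails * tails_pay:
            win += prob
    return win

print(round(round_win_prob(3), 4))  # 0.4219: despite +EV per flip, you lose most 3-flip rounds
print(round(round_win_prob(2), 4))  # 0.5625: shorter rounds favour the heads-bettor
```

In a 3-flip round a single tail (3 tokens) already beats two heads (2.2 tokens), so the heads-bettor needs all three flips to be heads: 0.75^3 = 0.421875, exactly the 42% in the post.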
Chamaco Posted December 20, 2005 (Author)

Why? This is because we throw away the tokens you earned in the previous round; they aren't counted in further rounds.

That's the point: 1-IMP pluses, especially among comparable teams, are not irrelevant, nor to be thrown away at the end of the match. Another way to view this is to ask a top-level player: how many times did you lose a match by 1 IMP? And how many times did you win a match because you made (or the opponents failed to make) a safety play catering for a <5% risk?
Trumpace Posted December 20, 2005

That's the point: 1-IMP pluses, especially among comparable teams, are not irrelevant, nor to be thrown away at the end of the match. Another way to view this is to ask a top-level player: how many times did you lose a match by 1 IMP? And how many times did you win a match because you made (or the opponents failed to make) a safety play catering for a <5% risk?

I think there is a terminology mismatch. By a 'match' I mean the whole of the Bermuda Bowl! According to you, Italy vs Egypt is a different match from Italy vs USA in BB2005, but according to me it is part of the same BIG match...

Even so, the long-run analysis (just calculating the expected IMPs per board), which you seem to think should work, can lead to incorrect strategies, as the coin-flip game above shows.
Chamaco Posted December 20, 2005 (Author)

I think there is a terminology mismatch. By a 'match' I mean the whole of the Bermuda Bowl! According to you, Italy vs Egypt is a different match from Italy vs USA in BB2005, but according to me it is part of the same BIG match... Even so, the long-run analysis (just calculating the expected IMPs per board), which you seem to think should work, can lead to incorrect strategies, as the coin-flip game above shows.

If you run it over a long time series, we get closer to the mathematical odds: that is, the example of the insurance company, which is willing to risk paying a big amount for a very low-frequency event, but cashes in in the rest of the cases. I think the example of the insurance company is much better than the example of the flipped coin: the insurance can be one year only, 2 years or longer, and the risk calculated on a 1-year period can be an analogy to the risk computed for the "short-term" match (or set of matches) in bridge.

The example of the biased coin flip is not well posed, because:

1) at the end of a single set, before we start a new set of coin flips, the % of outcomes is not the same as at the start, because the "a priori" probability has changed: in the previous set(s) the coin came up tails X times and heads Y times; so if you have won one set, you are almost sure to lose the 2nd and 3rd sets; you cannot recompute the odds for the following sets with the original probabilities, the odds must be updated as a function of past results.

2) the % of risk is too high compared to the number of coin throws. If you run the same analysis with every set having, say, 6+ throws, we all know that my side would be a winner.
hrothgar Posted December 20, 2005

1) at the end of a single set, before we start a new set of coin flips, the % of outcomes is not the same as at the start, because the "a priori" probability has changed... the odds must be updated as a function of past results.

I'm not sure I understand your point. You seem to be confusing "bias" with auto-correlation. The coin-toss example is typically used to illustrate independent events: the chance that the coin turns up heads or tails on flip N is NOT a function of the results on flips N-1, N-2, N-3, ...

When people talk about a "fair" coin toss, they mean a coin on which heads is expected to come up 50% of the time and tails the other 50%. In contrast, a biased coin has been loaded in some way (weight) such that the chances are not 50/50. However, biased coin tosses are still independent of one another. It's possible that you're discussing a situation in which you don't know the odds that the coin turns up heads versus tails, but that is a very different type of problem...
Chamaco Posted December 20, 2005 (Author)

I'm not sure I understand your point. You seem to be confusing "bias" with auto-correlation.

Sorry, perhaps wrong terminology. My "bias" term referred to Trumpace's premise, which was: "We have a coin which has 75% chances of showing heads and 25% chances of showing tails." It's obvious that the outcome of the coin flip is "biased" (if it were a dice throw, the die would be "loaded"), in the sense of an "a priori" probability different from 50%, but feel free to use the most appropriate term, I won't argue about it :-)

Of course, it is right to use the "biased" example, since both in the coin flip AND in the safety play we have an a priori probability different from random. What I was criticizing is the fact that, in the analysis of the coin-flip bets, one should take into account that, after each set of coin flips, the odds should be updated as a result of the previous flips. At the second, third, fourth, etc. set of coin-flip sequences, the odds for heads and for tails will differ from 75% and 25% based on the previous outcomes. This alters considerably the cost-benefit analysis, even in Trumpace's example.
MickyB Posted December 20, 2005

I've only skimmed this thread, so apologies if I'm repeating something that has already been said.

Cross-IMPs is what we have in the MBC on BBO. The tactics should be pretty much the same as at teams, so I'd go for the overtrick. Butler IMPs is different - you remove the top two and bottom two scores, take the mean, and then IMP up against that. Overtricks tend to be less valuable there.
42 Posted December 20, 2005

[snip] the sense [snip] is that the majority of good players take the safety play unless their table feel tells them that the overtrick play will definitely work. [snip] Against a tough team, in a short match, I'd risk the overtrick if my antennae were in good shape that day ...

Insurance companies use pure data and know how high each client's contribution must be to ensure a constant plus for them, even though they must pay out from time to time, because they exclude cases actively brought about by their clients (like committing suicide) and risks that cannot be controlled or that would ruin them at a single stroke, like a war. They try (try, because there might be some creative cheaters whom the insurance company's detectives don't find out) to exclude the human being as a risk they cannot control; they just work with it as a factor.

That is the difference from bridge. One cannot control partner or the opponents (and sometimes oneself ;) ). But one tries to "manipulate" them (I don't mean cheating!!): partner by delivering the best information under the given circumstances, and the opponents by creating problems and traps. Think of expert falsecarding: it sometimes pays against good opponents who watch the cards, whereas it is more or less useless against a palooka who does not pay attention. And one tries to get as much information as possible, including the things described as table presence. In the example with the coin, loaded or not, it is assumed in the experiment that the same single person flips it under the same conditions; again the possibility of manipulation is excluded.

What I think is that pure mathematics and statistics will not make the perfect bridge player, but I don't know.
Gerben42 Posted December 20, 2005

Against a weaker team you should take the safety play, as you expect to pick up some 11s on other boards. Pay 1 IMP insurance so as not to give them the chance to get this one right and win 11 back.

Against teams of equal strength, train to become better than them, but take the safety play. If they don't, you have an outside chance of winning.

Against stronger teams you might want to go for the overtrick. If they don't, you're already 1 IMP ahead ;)
hrothgar Posted December 20, 2005

What I was criticizing is the fact that, in the analysis of the coin-flip bets, one should take into account that, after each set of coin flips, the odds should be updated as a result of the previous flips. At the second, third, fourth, etc. set of coin-flip sequences, the odds for heads and for tails will differ from 75% and 25% based on the previous outcomes.

Assume that we have a "loaded" coin that comes up heads one out of every 6 throws and tails 5 out of every 6 throws. This probability is fixed in stone. (If you have a problem believing that a coin can be fixed so precisely, feel free to consider a fair six-sided die which will roll a "1" one out of six times and a "not 1" five out of every six times.)

Furthermore, let's assume that we flip the coin 8 times in a row, and the coin lands tails each and every time. Now let's flip the coin a 9th time. The odds that the coin will turn up heads are still 1/6. The odds that the coin will turn up tails are still 5/6.
Trumpace Posted December 20, 2005

The example of the biased coin flip is not well posed, because ... the odds must be updated as a function of past results ... If you run the same analysis with every set having, say, 6+ throws, we all know that my side would be a winner.

The insurance example isn't right. The money the insurance company makes from policy holders in one year is _not thrown away_, whereas the IMPs you win in Bermuda Bowl 2004 are not counted in Bermuda Bowl 2005.

Hrothgar has answered your point 1). But just to repeat: each coin flip is independent of the previous flips. Just because there have been 1000 tails does not mean the 1001st flip has a higher chance of coming up heads.
The chance of heads on the 1001st throw is still 75%. No more, no less.

About your point 2): consider a round with 7 flips. If at least 2 tails come up, I make more tokens than you. The probability of this is 1 - (0.75)^7 - 7*(0.75)^6*(0.25) ≈ 0.55. So I still win with a 55% chance if the rounds are 7 flips each.

But (I think, I haven't done the calculations) the long-run analysis wins in the following case: suppose we pick the length of the round at random, say from a very large set of numbers. Then there is a better chance that you will end up making money!

You might argue that in bridge we have something similar: each match would have a random number of such overtrick-or-down boards (which correspond to the round length in the coin-flip game). But that is incorrect reasoning, since the boards with the overtrick/down scenario are a fixed percentage of all possible bridge deals. Say that percentage is 2%. In a match of length 256, you should do the analysis for around 6 boards (and probably a few more in the neighbourhood of 6), as we expect 2% of the 256 boards to be overtrick-or-down boards, and based on the results of that we should pick our strategy. Of course, it depends on how the 256 boards are being generated, which by computer these days is pretty close to truly random.

For 1%, 2%, 3% chances with a 1 IMP gain or 11 IMP loss, it seems that the long-run strategy also works in most of the short runs (I have verified for runs up to 23). So the expert is right (play for overtricks to maximise score, given 5% or lower chances of losing 11 IMPs while gaining 1 IMP), but by applying the wrong reasoning.
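The 7-flip figure, and the way round length shifts the game, can be checked with a binomial sum. A sketch (the 101-flip case is my own addition, to show the crossover where the heads-bettor's positive expectation finally takes over):

```python
from math import comb

def tails_bettor_win_prob(n, p_heads=0.75, heads_pay=1.1, tails_pay=3.0):
    """P(the tails side collects more tokens in an n-flip round): sum over
    counts of tails k with 3k > 1.1(n-k)."""
    total = 0.0
    for k in range(n + 1):  # k = number of tails
        if tails_pay * k > heads_pay * (n - k):
            total += comb(n, k) * (1 - p_heads)**k * p_heads**(n - k)
    return total

# 7-flip rounds: two or more tails suffice (3*2 = 6 > 1.1*5 = 5.5),
# matching the post's 1 - 0.75^7 - 7*(0.75^6)*0.25 computation.
print(round(tails_bettor_win_prob(7), 3))   # 0.555
# Very long rounds: well below 0.5, the heads-bettor's +EV per flip dominates.
print(round(tails_bettor_win_prob(101), 3))
```

So the tails-bettor's edge is a short-round artifact: lengthen the rounds enough and the per-flip expectation wins, which is exactly the "length of the round matters" point.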
awm Posted December 20, 2005

A lot of Trumpace's examples seem very contrived. Let's see what we can actually show to be true mathematically. A few assumptions will be needed here; I'll write them in bold.

Assumption One: Our goal is to win the match. Of course, this might not be true, especially with VP scoring. We may need a big win, or only to avoid a big loss. It might be more important to "look good" in the eyes of our client than to actually win. But let's go with trying to win for now.

There exists a probability distribution over results of the match. In other words, there's a probability that we tie, win by one, win by two, lose by one, and so forth. We could plot this and it would look like a probability density function, although discretized, because "win by 1/2" isn't usually a possibility barring some strange director rulings.

Assumption Two: The probability distribution of outcomes is symmetric about zero. This will not be true if the teams are not evenly matched, because you'd expect the distribution to be skewed to give the better side a higher chance of winning. It will also not be true if one side already has a known substantial IMP advantage. Both these factors have been mentioned already, and aren't the main point of this thread. Assuming two equally matched teams at the beginning of a KO match, a symmetric distribution is a reasonable assumption.

Now suppose we take an action (say choosing a safety play or not) which might possibly result in losing an IMP. Our chance of winning would then decrease, since all the results at zero (tie match) now become losses, and the results at +1 (win by one) become ties. We can now determine the cost/benefit, in terms of probability of winning, of making a decision. Let the probability distribution represent outcomes of the remaining boards of the match -- say we are trying to decide what to do on the first board.
If we take the safety play, we lose one IMP with probability L and we win 11 with probability W (assuming the other action at the other table). If we don't take the safety play, things are exactly reversed. Let PX be the probability of a result where we "win by exactly X". If the safety play loses an overtrick, our chance of winning decreases by exactly P1 (because "win by one" becomes a tie). If the safety play is necessary, then our chance of winning increases by P0+P1+P2+...+P10 (since ties and losses by ten or fewer now become wins). So the remaining question is: which is larger, P1 times the probability of losing an overtrick, or P0+P1+...+P10 times the probability that the contract would go down without the safety play?

Assumption Three: In general, if X>Y then PX<PY. Also, PX and P(X+1) are quite close to the same. This will not be true for a single board, because certain IMP differentials just happen to be unlikely. For example, differences that correspond to a game swing are a lot more likely than 7-8 IMP swings. However, over a fairly long match between evenly matched teams, it seems more likely that my team will win by a small margin than by a larger margin. There are also many opportunities to swing one IMP in a long match, so it will never be the case that "my team is WAY more likely to win by 8 than to win by 9." Again, none of this applies to a one-board match. In fact, it's possible to prove that since each board can swing a bounded number of IMPs, over a very long match the distribution must look as described. Of course, whether this applies to an 8 or 12 or 24 board match will be very difficult to determine, since very few matches actually pair evenly matched teams at all. Still, it seems like a reasonable assumption that between evenly matched teams a particular small margin is more likely than a particular larger margin, but not much more so if the margins are similar.
Under this third and final assumption, we can conclude that P0>P1>P2>...>P10 and that P0 and P1 are pretty similar. This leads to P1 being roughly (1/11) times P0+P1+P2+...+P10, or perhaps a little more than that. So if the contract goes down without the safety play one time in twelve, (1/12)(P0+P1+...+P10) is about the same as (11/12)(P1), and the safety play is about break-even.
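awm's break-even condition is easy to put into numbers. A sketch of the model: the flat margin distribution below is an illustrative stand-in for Assumption Three (its values are made up, and the probabilities deliberately don't sum to 1, since the remaining mass sits on margins outside 0..10):

```python
def safety_play_edge(p_down, margin_probs):
    """Change in P(win the match) from taking the safety play, in awm's
    model. margin_probs[x] = P(rest of match leaves us ahead by exactly
    x IMPs); the distribution is assumed symmetric about zero."""
    # Cost: when the break is friendly we lose the overtrick IMP,
    # turning "win by 1" into a tie.
    cost = (1 - p_down) * margin_probs[1]
    # Gain: when the break is bad, ties and losses by up to 10 IMPs
    # become wins instead.
    gain = p_down * sum(margin_probs[x] for x in range(0, 11))
    return gain - cost

# Toy distribution: eleven equally likely margins, P0 = P1 = ... = P10.
flat = {x: 1 / 50 for x in range(0, 11)}

print(abs(safety_play_edge(1 / 12, flat)) < 1e-9)  # True: a 1-in-12 risk is about break-even
print(safety_play_edge(0.05, flat) < 0)            # True: at a 5% risk the safety play costs more than it saves
```

With eleven roughly equal margin probabilities the break-even risk is 1/12, matching the post; below that (e.g. the thread's 5% figure) playing for the overtrick wins match-equity.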
Al_U_Card Posted December 20, 2005

So if the contract goes down without the safety play one time in twelve, (1/12)(P0+P1+...+P10) is about the same as (11/12)(P1), and the safety play is about break-even.

But a lot easier to explain to your teammates when the safety play results in your winning, and not making it costs the victory....