Odds Philosophy Question



An odds issue occurred to me as I was driving around town.

 

The best way to explain the problem is to give a strained and vague example. Suppose a particular move gives a 48% chance on double-or-nothing stakes. You do not take that bet, right? But what if you knew that a specific occurrence happens 10% of the time and always means a loss? To get to 48% net, you would then have a 10% chance of a complete loss and a 90% chance of winning something like 53% of the time.

 

This sounds stupid, but it seems like, in this scenario, you have a 90% chance of a 53% chance of success. That still nets out to a 48% chance of success, but it does not feel like exactly the same thing.
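The decomposition can be checked with a couple of lines of arithmetic; a quick sketch (the "53%" is really 48/90 ≈ 53.33%):

```python
# Decompose a 48% overall success chance into a 10% sure-loss branch
# and a 90% branch with a higher conditional success rate.
p_total = 0.48                        # overall chance of success
p_sure_loss = 0.10                    # branch that always loses
p_branch = 1 - p_sure_loss            # the remaining 90%
p_conditional = p_total / p_branch    # success chance inside the good branch

print(p_conditional)                  # about 0.5333, the "53%" in the post
```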

 

I mean, bidding has these strange odds scenarios. There might be, say, only a 10% chance that partner does not have the diamond Queen. If he has it, the contract is a 66% winner. If not, the contract is hopeless. Well, I could do all sorts of math to see what the total odds are, or I could decide that I have a 90% chance of a great contract.

 

If one grouping of bad-luck chances can be identified, we might be able to disregard it on the theory that this is a "bad day scenario." You only see, say, 2 of these 10% death-sentence scenarios per session. So, if I ignore them, I pay a penalty in the Monday-Tuesday game but have a real good shot at a good game in the Wednesday-Thursday event.

 

Am I making any sense here? If so, any comments?


I think this is usually a bad idea. Sometimes you might know that it doesn't matter if something bad happens (for example, you are in an unusual contract that is hopeless if trumps split 5-0 or 4-1, so you just assume and calculate "knowing" that they split 3-2). Otherwise, though, you should calculate the complete odds that include everything you know, not the partitioned partial odds. On yet another approach: when there is a partitioned probability shape like this, and you know the diamond Q is what you need, then if your auction can find out about that card (say with a control-asking bid, a sophisticated cue bid, or whatever) you might figure out which population really applies.

Read any book or article on the psychology of risk and you'll find that this is a common fallacy. The mathematics of probability are correct. When we use intuition, we tend to underweight (i.e. ignore) low-probability events. And depending on the context and personality, we often bias our decisions towards the more favorable or unfavorable result.

 

I mean, bidding has these strange odds scenarios. There might be, say, only a 10% chance that partner does not have the diamond Queen. If he has it, the contract is a 66% winner. If not, the contract is hopeless. Well, I could do all sorts of math to see what the total odds are, or I could decide that I have a 90% chance of a great contract.

 

Am I making any sense here? If so, any comments?

 

I think bidding is almost always based on incomplete information. One never knows the exact odds. It is always X% + e(rror).


I will respectfully suggest that there are odds questions, and philosophy questions, and what was posted in the OP was just an odds question, devoid of philosophy. When partner is 90% to have the card that makes it a 66% contract, OP already knows that bidding on is 0.66 × 0.9 ≈ 59% to work (so, in most cases, we bid on).

 

If you oversimplify, and say "you have a 90% chance of being in a great contract"... well... you are oversimplifying. When you have a 90% chance of being in a 51% contract (at MPs), or a 51% chance of being in a great (66%) contract, your oversimplification will cost you. That's the way oversimplifications are; often they work, sometimes they lead you astray.


Funnily enough, I think we sometimes shy away from these risks. Say that, vulnerable at IMPs, you can make 4= with 100% certainty, but in doing so give up any chance of an overtrick. Alternatively, you can play for the 90% chance of an overtrick, but 5% of the time you go down and 5% of the time you make exactly. I think most people take their 10 tricks even though the pure calculation suggests that going for the overtrick is better (0.9 × 1 IMP − 0.05 × 12 IMPs = +0.3 IMPs).
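Running the stated numbers explicitly (a sketch using the post's rough figures: +1 IMP for the overtrick, 0 for making exactly when the other table does too, and about 12 IMPs away for going down in the cold vulnerable game):

```python
# Expected IMP gain from playing for the overtrick rather than
# claiming the certain 10 tricks (figures from the post above).
p_overtrick, p_flat, p_down = 0.90, 0.05, 0.05
ev = p_overtrick * 1 + p_flat * 0 + p_down * (-12)
print(ev)  # about +0.3 IMPs per board in favour of trying
```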

The thing is, you can always rephrase an x% contract as a "90% chance of a (10x/9)% contract" just by splitting the failure event into two parts. Some of these splits will be natural and others artificial, but that doesn't really change anything.
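To make that concrete, here is a small sketch (function name is mine) that performs such a split for any choice of the "sure-loss" slice:

```python
def split_odds(p_success, p_sure_loss):
    """Rewrite an overall success chance as (chance of avoiding the
    sure-loss branch) times (conditional success chance within it)."""
    assert p_sure_loss <= 1 - p_success, "the slice must fit inside the failures"
    p_branch = 1 - p_sure_loss
    return p_branch, p_success / p_branch

print(split_odds(0.48, 0.10))  # about (0.9, 0.5333): the "natural" split
print(split_odds(0.48, 0.20))  # about (0.8, 0.6): an equally valid artificial one
```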

 

The only time that sort of argument makes sense is when the payoff (in MP or whatever) changes based on who has the Q.


If you could analyse the 10% event and work out that it was less likely than expected, so that more than 40% of the occasions on which it would have occurred can be eliminated, then you can take the bet. But I think that taking bad bets that lead to extreme results is generally a bad strategy. This is essentially the tops/bottoms approach that some pairs adopt in a MP field when they feel that they cannot win by playing more conservatively.

Funnily enough, I think we sometimes shy away from these risks. Say that, vulnerable at IMPs, you can make 4= with 100% certainty, but in doing so give up any chance of an overtrick. Alternatively, you can play for the 90% chance of an overtrick, but 5% of the time you go down and 5% of the time you make exactly. I think most people take their 10 tricks even though the pure calculation suggests that going for the overtrick is better (0.9 × 1 IMP − 0.05 × 12 IMPs = +0.3 IMPs).

 

Your logic is flawless; however, there are two other factors that influence this kind of decision and alter it once you introduce them. When the other table is not in 4, the risk is not worth it. Also important, and discussed at length in the past: if you are a pro, you can't let the client see you going down in a cold game.


I thought this thread was about betting systems :(

 

I made a simulation trying to disprove my father's casino betting system, only to find out that his system was beating the casino whenever the casino advantage was below 50.2%. Once I introduced table limits, though, the betting system's threshold dropped to 50.03%.
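For what it's worth, here is a toy version of that kind of experiment. The father's actual system is unknown, so this just simulates a plain doubling (Martingale) system on an even-money roulette bet and shows how a table limit bites:

```python
import random

def martingale_session(p_win=18 / 37, base=1, limit=128, spins=200, rng=random):
    """Net result of one session: double the bet after each loss, reset to
    the base bet after a win or when doubling would exceed the table limit."""
    bankroll, bet = 0, base
    for _ in range(spins):
        if rng.random() < p_win:
            bankroll += bet
            bet = base
        else:
            bankroll -= bet
            bet = bet * 2 if bet * 2 <= limit else base
    return bankroll

rng = random.Random(0)
results = [martingale_session(rng=rng) for _ in range(5000)]
print(sum(results) / len(results))  # negative: the house edge plus the limit win out
```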


It is actually trivial to produce a betting system that gives the player odds of over 50%. The trouble is that the casinos also know this and spend a great deal of effort to prevent it. Indeed, if a casino decides you are a card counter, they may very well ban you for life, even though what you are doing is completely legal. Thanks to clever software that also scans faces, this can mean you are banned from the majority of top casinos across the globe. If you go to a casino you are expected to lose, unless you are not playing against The House. If you do not like this then do not go to them.

Your logic is flawless; however, there are two other factors that influence this kind of decision and alter it once you introduce them. When the other table is not in 4, the risk is not worth it. Also important, and discussed at length in the past: if you are a pro, you can't let the client see you going down in a cold game.

Yes, I should have stated that this only applies where everybody is in 4. I think if you have a combined 26-27 with a 5-4 fit and an unopposed auction, you can probably assume that in most rooms.

 

I hadn't considered the client issue. I wrote a humorous piece in our local bridge mag a few years ago on something related (where any idiot can take all the working finesses and make a contract with overtricks, but the pro secured his contract against all but the one very unlikely situation that actually occurred), and yes I imagine the client doesn't take this well, particularly if they're not very good.


An odds issue occurred to me as I was driving around town.

 

The best way to explain the problem is to give a strained and vague example. Suppose a particular move gives a 48% chance on double-or-nothing stakes. You do not take that bet, right? But what if you knew that a specific occurrence happens 10% of the time and always means a loss? To get to 48% net, you would then have a 10% chance of a complete loss and a 90% chance of winning something like 53% of the time.

 

This sounds stupid, but it seems like, in this scenario, you have a 90% chance of a 53% chance of success. That still nets out to a 48% chance of success, but it does not feel like exactly the same thing.

 

(0.1 × 0) + (0.9 × 0.5333...) = 0.48

(0.2 × 0.3) + (0.8 × 0.525) = 0.48

 

Given the scenario that you describe, it seems silly to worry about the left hand side of the equation as opposed to the right.
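A two-line check of the two decompositions above:

```python
# The two ways of splitting the same 48% total, from the post above.
a = 0.1 * 0 + 0.9 * (8 / 15)   # 8/15 = 0.5333...
b = 0.2 * 0.3 + 0.8 * 0.525
print(a, b)  # both equal 0.48 up to floating-point rounding
```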


I mean, bidding has these strange odds scenarios. There might be, say, only a 10% chance that partner does not have the diamond Queen. If he has it, the contract is a 66% winner. If not, the contract is hopeless. Well, I could do all sorts of math to see what the total odds are, or I could decide that I have a 90% chance of a great contract.

 

Others seem to be getting more out of this than I am. If I were somehow convinced that the probability of partner having the diamond Q was 0.9 and the probability of the contract making, if he has it, is 0.66 (and 0 if he does not hold it) then I would figure the probability of the contract making is 0.9 times 0.66, which is just under 0.60.

One may or may not like the math, but it seems like that is all that is involved.

I simply am not seeing what else you are getting at.

 

Possibly I am just repeating what brothgar just said, only with mild rephrasing.


Kenberg, the numbers are off. For a better example: the probability of partner having the Q is 0.8. If they have it, the contract makes 60% of the time; if not, 0%. The total odds are 48%. Ken is suggesting that an 80% chance of an odds-on contract might be worth going for. Others have provided the maths on why this is a bad idea. For a 90% example, you need the odds of the contract making to be around 55% (instead of 66%) to fall into the category Ken is talking about; that gives total odds of 49.5%.

Sorry for being pedantic, but "odds" is not the same as probability. Odds are the ratio between two probabilities. For example, if the probability of heads is 50% and the probability of tails is 50%, then the heads/tails odds are 50/50 = 1. A probability of 48% corresponds to odds of 48/52, i.e. approx. 0.92.

 

I suppose in most contexts it is clear what is meant, but if someone refers to "odds = 1/3" then I really want to know whether it means odds of 1/3, i.e. a probability split of 25%/75%, or a probability of 33.3%.
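The two conventions convert back and forth mechanically; a small sketch (helper names are mine, purely for illustration):

```python
def odds_from_probability(p):
    """Odds in favour: p = 0.25 gives 1/3, i.e. a 1:3 split."""
    return p / (1 - p)

def probability_from_odds(odds):
    """Inverse: odds of 1/3 give probability 0.25."""
    return odds / (1 + odds)

print(odds_from_probability(0.48))   # about 0.92 (48/52), the example above
print(probability_from_odds(1 / 3))  # about 0.25, not 0.333
```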


You are right and also wrong. If an event has a probability of 33% then it has odds of 2 to 1 against. However, the odds of the event happening are 1 in 3, or 33%. So odds can be the same as probability, provided you specify "the odds of something happening" or the like. If you use "odds" alone then it works as described, and there are different conventions for expressing it depending on where you live and what you are doing.

I suppose in most contexts it is clear what is meant, but if someone refers to "odds = 1/3" then I really want to know whether it means odds of 1/3, i.e. a probability split of 25%/75%, or a probability of 33.3%.

The 25%/75% split is more properly expressed as 1:3, not 1/3, to avoid ambiguity. Verbally, these should be said "1 to 3" and "1 out of 3" (or "1 in 3"), respectively.


When the other table is not in 4, the risk is not worth it. Also important, and discussed at length in the past: if you are a pro, you can't let the client see you going down in a cold game.

I think this applies just as well to amateurs/team-mates as to pro/client. How will team-mates react when you come back with a negative score - will they go through the maths and agree with you? Mine wouldn't.


An odds issue occurred to me as I was driving around town.

 

The best way to explain the problem is to give a strained and vague example. Suppose a particular move gives a 48% chance on double-or-nothing stakes. You do not take that bet, right? But what if you knew that a specific occurrence happens 10% of the time and always means a loss? To get to 48% net, you would then have a 10% chance of a complete loss and a 90% chance of winning something like 53% of the time.

 

 

Am I making any sense here? If so, any comments?

 

The best way to solve a problem is to start with a well-formulated statement of the problem. That usually involves a process of careful definitions, which is lacking here. Here we have mixed in "stakes" and probabilities. It might seem that it could just be cleaned up a bit by introducing the concept of expected value, which is also being vaguely bandied about. But we would still be lacking a complete problem definition.

 

One possible problem is: how does one compare the expected values of a complete set of events, each of known probability and each with a known measure of value? That could be formulated pretty rigorously, and the mathematics is generally well understood.
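That first formulation is just an expected-value calculation over a complete set of outcomes; a minimal sketch (the function name is my own), applied to the thread's 48% double-or-nothing example:

```python
def expected_value(outcomes):
    """Expected value of a complete set of (probability, value) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1) < 1e-9, "probabilities must sum to 1"
    return sum(p * v for p, v in outcomes)

# 10% certain loss, plus a 90% branch that wins 8/15 of the time,
# scored as double-or-nothing on a one-unit stake.
ev = expected_value([(0.10, -1), (0.90 * 8 / 15, +1), (0.90 * 7 / 15, -1)])
print(ev)  # about -0.04 per unit staked: a losing bet
```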

 

On the other hand, perhaps the problem is that of predicting behavior. If we mix in dollar investments (costs) and dollar rewards (returns), then things get much more complicated than just measuring expected value. If you don't believe it, then perhaps you have not heard that states run lotteries to generate income and casinos make money. Would you take an odds-off line to win the Bermuda Bowl if it was the last hand, you knew your deficit, and the odds-on line would be insufficient to win?

 

But it does not have to be just about money to introduce perturbations that seem irrational but are measurable. Dan Ariely ("Predictably Irrational") studies these types of problems professionally.


The best way to solve a problem is to start with a well-formulated statement of the problem. That usually involves a process of careful definitions, which is lacking here. Here we have mixed in "stakes" and probabilities. It might seem that it could just be cleaned up a bit by introducing the concept of expected value, which is also being vaguely bandied about. But we would still be lacking a complete problem definition.

 

One possible problem is: how does one compare the expected values of a complete set of events, each of known probability and each with a known measure of value? That could be formulated pretty rigorously, and the mathematics is generally well understood.

 

On the other hand, perhaps the problem is that of predicting behavior. If we mix in dollar investments (costs) and dollar rewards (returns), then things get much more complicated than just measuring expected value. If you don't believe it, then perhaps you have not heard that states run lotteries to generate income and casinos make money. Would you take an odds-off line to win the Bermuda Bowl if it was the last hand, you knew your deficit, and the odds-on line would be insufficient to win?

 

But it does not have to be just about money to introduce perturbations that seem irrational but are measurable. Dan Ariely ("Predictably Irrational") studies these types of problems professionally.

 

You raise really interesting points, but I ask: what is the question?

 

 

For me, often I don't really understand the question... let alone the answer.

