
mythdoc
-
Thank you, Mycroft, for your reply.
-
Thank you for your reply.
-
Beautiful day in the southeastern US. I woke up today ready to download and post sets of hands, then read smerriman’s post 78. In a nutshell, he draws a completely contrary set of inferences from the same hands compared to the inferences I drew. Firstly, thank you for finally looking at them. In the time spent writing posts insisting I said what I didn’t say, and demanding I submit to tests to satisfy your idea of mathematical and statistical rigor, you could have looked at them and responded 20 times over. But finally you did. And that told me what I needed to know, which is that it would be pointless to post additional sets of hands and then argue over inferences from those.

I thought mycroft’s post 58, which cited the “corporation test,” was very interesting. I think it’s a test that many, many corporations have failed in this internet age. That has left many of us skeptical of how mainframes may be used to adjust the environment in subtle ways, to substitute a plausible enough (or indeed hyper-plausible) reality in place of actual reality. Why do they do this? Many reasons, but money tends to be the common factor. I am speaking of the Googles, Facebooks, and Amazons of the world, of course. Their algorithms are their private property. It would be very hard indeed to know anything specific without access to these programs, but the effects on people have been pernicious in many obvious cases. And these corporations often succeed because their participants like to be manipulated in the exact ways the computers manipulate them. Go figure!

This is different. I don’t sense any potentially pernicious effects. My hypothesis, if found to be true, would not, as I have said above, be of real consequence from my point of view. I think others would care very much if BBO were using any kind of process to make daylongs more engaging, or to reduce the effect of bad robot play, or whatever: anything that altered 100% random hands. Which probably explains the ferocity with which the very idea has been met. Which, in turn, leaves us with the irony that I, who harbor suspicions, would not really care if they were found to be true, whereas others, who, as we have seen, dismiss such suspicions with supercilious condescension, would be absolutely irate if they were found to be true. Again, go figure.

Lastly, I know I am not the only BBO user to have had these suspicions. I know it because others have posted in other threads and because still others have made themselves known to me directly. I apologize to them, because I don’t think I did a good enough job passing along their experiences and their concerns. These concerns will persist after this thread dies down, until either BBO becomes more transparent and definitive about its processes, or the processes themselves change. But I’m going to stop writing about it, at least for now. Thanks.
-
They were the only two I ran, so I should not have said “selected”. But your offer is exactly what I have been asking for, thank you. Almost as if you finally stopped misrepresenting my posts once I insisted upon it. :) How many boards would you like the challenges to be? You have stated twice how precious your time is, lol. I’ll run them and post the hands sometime tomorrow.
-
Last I knew, smerriman could access my tourneys and could verify that the two sets were the only two run, as I said they were. And as I just said, publicly publish sets of challenge hands of the types I specified (just declare, non-best-hand challenges), post them AS A SET (none of this dividing one by one bullshit), have a poll at the top with the following question: “If, as a supposition, BBO was thought to be selecting sets of hands for competitions involving one human and three robots, to reduce incidence of flat boards, and was doing this both for sets of boards under MP scoring, and for separate sets of boards under IMP scoring, which of these sets (below) would you think is the MP set and which is the IMP set?” Now that would be very interesting to me.
-
No, I made no such specific claim. You interpreted my statement that the differences between the challenges I played on that occasion were obvious as amounting to a claim that I could, and would, do it over and over again like a magic trick. My interest is, and has always been, that others take a look at these hands and say whether they see any patterns. I have no interest in submitting to self-appointed lord high interrogators, because your lack of good faith is beyond obvious at this point.

So, did you look at the hands I posted above? It could be a coincidence, but 3 of the 4 IMP hands were game-level contracts, whereas 4 of the 4 MP hands were part-score contracts. That is one of the trends I said one might look for to tell one set from the other in an earlier post. It doesn’t prove anything, but it certainly doesn’t disprove my hypothesis. Meanwhile, this is still a blunt tool for getting at the question of whether flat boards are being removed from daylongs. If you do want to pursue this in good faith, make it public, set up a poll, and post SETS of 12 or 16 board challenges (just declare, non-best-hand) of the two types, and ask folks to pick which set is which. Then we might all judge for ourselves. Or let it drop. I for one am tired of replying to your misrepresentations and playing your gotcha games.
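For what it’s worth, the 3-of-4 vs. 0-of-4 split above can be put through Fisher’s exact test to see how surprising it would be under pure chance. A minimal Python sketch; the counts come from the eight hands above, and the rest is ordinary scipy:

from scipy.stats import fisher_exact

# contracts by set: IMP hands (3 games, 1 part score),
#                   MP hands  (0 games, 4 part scores)
table = [[3, 1],
         [0, 4]]
_, p_value = fisher_exact(table, alternative="two-sided")
print(f"two-sided p = {p_value:.3f}")  # ~0.143

A two-sided p around 0.14 is exactly why I say it doesn’t prove anything either way: eight hands are far too few.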
-
And also, I’d love a reply to the paragraph I wrote above, which you guys ignored. smerriman, you said it would be impossible for BBO to have a computer sophisticated enough to generate conditioned hands. I wrote: “Do you agree, then, it is quite doable?”
-
Why not let everyone play? Just please understand that this is not scientific and doesn’t prove that BBO conditions hand delivery by scoring type; it only suggests it. These two sets of four hands (four-board robot challenges, just declare, non-best-hand) were generated this morning. Only these two sets were dealt; I didn’t pick and choose among sets, lol. If an algorithm were being used to enhance scoring swings, which set do you think would be MP scoring, and which IMP scoring?

SET 1
https://tinyurl.com/ygh6cpy7
https://tinyurl.com/yhp5l466
https://tinyurl.com/yh79nqt5
https://tinyurl.com/yzpxg35h

SET 2
https://tinyurl.com/ygrz9jj8
https://tinyurl.com/ygrsto4k
https://tinyurl.com/yetatguk
https://tinyurl.com/yge2hw3x
-
Actually, it would be quite easy for BBO’s servers to create this outcome. Thousands upon thousands of hands are played on BBO every day, generating scores at both MPs and IMPs: hands played at anonymous tables, hands played at live tables. There is no shortage. All that is necessary is to recycle these hands, making sure not to deliver the same hand twice to the same user, and in the meantime dropping out a few of the flattest and/or selecting some of the boards that generate wider swings.

I want to make one final point that I think is likely to have been missed in all this back and forth. I don’t think there is some nefarious plot afoot to make BBO less fair for the average user. My belief is that, over the long haul, better players will get better results and lesser players will get lesser results, perhaps even more so if a filter is being used to select harder or suppress easier hands. I, for one, welcome this challenge, if you’ll pardon the pun. I do speculate, however, that there may well be (at least) two forms of motivation that a profit-making online bridge website would find worthwhile in using such a filter:

1) It could provide a more engaging, more valuable experience for the player spending 40 cents US (or whatever it costs in your local currency), playing more interesting hands as opposed to a daylong with 2 or 3 of the 8 hands flat.

2) It could lessen the instances when the robots create flat boards by making an embarrassingly bad play (like leading an ace against certain slam contracts and enabling a laydown claim).

As to your other point, namely that MP hands and MP tournaments are inherently harder: sure! But surely you aren’t saying that an experienced bridge player can’t assess the difficulty of a given hand and (more importantly) its likelihood of generating a swing at MP scoring vs. at IMP scoring. I also won’t take you up on your offer to send me hands to identify. Take the hour you would spend generating deals for me to look at, and look at them yourself. Remember, they are “just declare,” “non-best-hand” MP challenges and IMP challenges. Thanks for reading. mythdoc out.
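To be concrete about the recycling mechanism described at the top of this post, here is a sketch of how such a filter could work in principle. It is purely illustrative; every name in it (Deal, serve_daylong, flat_fraction) is my own invention, and I obviously have no knowledge of BBO’s actual code:

import random
from dataclasses import dataclass

@dataclass
class Deal:
    deal_id: int
    score_spread: float  # spread of scores from earlier play; small = flat board

def serve_daylong(pool, user_history, num_boards, flat_fraction=0.25, rng=None):
    """Pick num_boards recycled deals, skipping hands this user has
    already seen and dropping the flattest slice of the pool."""
    rng = rng or random.Random()
    # never deliver the same hand twice to the same user
    candidates = [d for d in pool if d.deal_id not in user_history]
    # sort flattest-first and discard the bottom flat_fraction
    candidates.sort(key=lambda d: d.score_spread)
    survivors = candidates[int(len(candidates) * flat_fraction):]
    return rng.sample(survivors, num_boards)

# toy demonstration with made-up deals
pool = [Deal(i, random.uniform(0.0, 12.0)) for i in range(1000)]
boards = serve_daylong(pool, user_history={3, 7}, num_boards=8)
print(sorted(d.deal_id for d in boards))

The point of the sketch is only that nothing here requires sophistication: one sort, one slice, one random sample.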
-
Ok, yes, I have been saying this. Whether we call these “deal pool” hands or not is subsidiary to the point. I don’t think the hands are the same from one event to another, or from one scoring type to another. My hypothesis (hat tip to Hrothgar) is that hands that are flat for MP scoring occur less frequently than normal in an MP game, and hands that are flat for IMP scoring occur less frequently in an IMP game. [EDIT, note added] I am talking about individual robot games, which is what I have mainly played.

Yes, the thing that really threw up some questions for me and my friends was when we played the series of challenges. “Why, in these just-declare challenges, were the MP sets of hands so, so different from each IMP set, if they were not put through a filter?” we kept asking ourselves. I tried to elucidate these differences above, but perhaps I was too unclear.

- The MP hands typically had multiple decisions per hand designed to test one’s technique and appetite for risk: pursuing overtricks, saving undertricks, ruffing losers, taking finesses, establishing side suits, and other cardplay devices. (NOTE: the MP hands, compared one to another, didn’t have the same decisions, and these decisions were only occasionally influenced by distributions, splits, and the like. All good bridge players know the game is not as simple as distributions and splits.)

- The IMP hands were ridiculously simple by comparison.

Of course, my buddies and I playing was not a scientific process. But yes, seemingly time after time, the MP hands just happened to have these MP-intensive decisions and the IMP hands didn’t. (I am not saying you could never go for an overtrick in an IMP hand. Please don’t infer that.)

Well then, do you know an easy way to state a specific, simple quantitative factor one could use to test hands for a “preponderance of scoring-method-specific decisions”? I don’t. But one could certainly test the hypothesis of fewer flat hands. That’s where method B, as you put it, comes in. To do so, one would have to run a study along the lines of what I described above. Testing for fewer flat hands would not be a comprehensive test of the ways and purposes an algorithm might use to select or omit hands, but it is ONE specific way to inquire, and perhaps the only way, since bridge is such a complicated card game that scoring results don’t come down to any single quantitative factor. I hope this is clearer. Thank you for allowing me to elucidate in answer to less contentious questions.
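For anyone who wants to try method B: the fewer-flat-hands test boils down to a one-sided binomial test. A minimal sketch, assuming one has already defined “flat” operationally and estimated a baseline flat rate from hands dealt outside BBO; every number below is a placeholder, not data:

from scipy.stats import binomtest

baseline_flat_rate = 0.25  # placeholder: estimated from independently dealt hands
boards_played = 400        # placeholder: daylong boards examined
flat_observed = 70         # placeholder: how many of them played flat

# H0: daylong boards go flat at the baseline rate.
# H1: they go flat less often (i.e., flat boards are being filtered out).
result = binomtest(flat_observed, boards_played, baseline_flat_rate,
                   alternative="less")
print(f"one-sided p = {result.pvalue:.4f}")

The hard part is not the test but the baseline: it has to come from genuinely random deals played under the same conditions.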
-
@Hrothgar: Respectfully, I think I have been quite clear. I also think you are writing as if I am on the stand under cross-examination. I have no obligation to play the role of “agitator” you have assigned me while you play the role of “defender of order.” BBO is more than just our game; it is a for-profit concern. I have no idea why they do or don’t do anything, but I certainly did not come to BBO looking to make waves. What would be my motivation? What would be the motivation of all the people who have posted here over the years claiming to detect something not completely random in the mechanism? Yes, we could all be saps who don’t recognize our own bias, but that is not science, that is psychology. We all have one. That just closes off the debate. I outlined a test that would satisfy me. It’s clear but, alas, impossible to do. It seems (as I’ve repeated) that you would like a simple measurable you can run through a computer on some hands, irrespective of the very factors I have stated about scoring types, events, players, and variance, complicated as they are. Perhaps as a reason not to try the test yourself? I think I have finished saying my say, and thanks for reading.
-
If it’s as obvious to you as it was to me, you won’t need a hypothesis, lol. For a rigorous test, wouldn’t one need, as a control group, a significantly sized group of players, similar in makeup to BBO’s, playing random hands obtained outside of BBO against BBO robots, and then to analyze scoring variance in both scoring types against a BBO group playing daylongs of both scoring types? And wouldn’t one be looking for a significantly wider bell curve of scoring results, both on a hand-by-hand basis and on an event-by-event basis, in the BBO deal-pool group vs. the control group? I don’t know how one would perform such a test, given that these conditions cannot be replicated.

Other interesting tests while we’re discussing:

1) One could plot bell curves of the scoring variance of different BBO events: live robot games vs. daylong robot games vs. robot national events.

2) One could also plot bell curves comparing results obtained (by a computer) playing under a brand-new BBO account vs. the accounts of current players with varying records and experience on BBO.

I’m not asserting we’d find anything in test 2, but it would be interesting data to have a look at.
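If anyone ever assembled data resembling that control-group setup, the “wider bell curve” question becomes a two-sample variance test. A sketch using Levene’s test (reasonably robust when scores aren’t normal); the arrays here are synthetic stand-ins, not real results:

import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
# placeholders: per-board result percentages (MP) or IMP scores per board
bbo_group = rng.normal(50, 14, size=500)      # hypothetical daylong results
control_group = rng.normal(50, 11, size=500)  # hypothetical random-deal results

stat, p = levene(bbo_group, control_group)
print(f"Levene W = {stat:.2f}, p = {p:.4f}")
print(f"variances: bbo={bbo_group.var(ddof=1):.1f}, "
      f"control={control_group.var(ddof=1):.1f}")

A small p together with a larger sample variance in the BBO group would point toward the wider bell curve; with these synthetic arrays the output means nothing, of course.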
-
I have stated at least twice that I am not asserting that the deal pool cares about distribution or strength. But what if it cares (by which I mean the BBO program is made to care) about creating events that, as one possible motivation, ensure that good players who play well are more likely to win, instead of being beaten by bad players whose bad plays succeed because the robots are bad?
-
Prejudicial and completely unhelpful. Almost as if someone wanted to personify the thing I have criticized.
-
Neither of these. These questions (indeed, your comment as a whole asking me to state a hypothesis) show your own assumption about “measurables.” The factors that create scoring variance differ according to the two scoring types, and this has much less to do with distributions or strength than with the following:

At MPs: capturing an overtrick or conceding another undertrick; competing correctly in part-score battles.

At IMPs: game or slam hands, or hands where a swing comes from being set in a part score (particularly doubled) vs. making a part score; overtricks are not particularly important at all.

As I have said, you will see the difference most obviously in just-declare hands, where the bidding has been done by the computer as well.
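To make that point concrete: the standard WBF IMP scale compresses raw point differences, which is exactly why the overtrick that wins a board outright at MPs barely registers at IMPs. A short sketch (the scale is the standard table; the example scores are my own):

# Standard WBF IMP scale: (lower bound of point difference, IMPs awarded)
IMP_SCALE = [
    (0, 0), (20, 1), (50, 2), (90, 3), (130, 4), (170, 5), (220, 6),
    (270, 7), (320, 8), (370, 9), (430, 10), (500, 11), (600, 12),
    (750, 13), (900, 14), (1100, 15), (1300, 16), (1500, 17), (1750, 18),
    (2000, 19), (2250, 20), (2500, 21), (3000, 22), (3500, 23), (4000, 24),
]

def imps(point_diff):
    """Convert a raw score difference on a board to IMPs."""
    diff = abs(point_diff)
    result = 0
    for lower, value in IMP_SCALE:
        if diff >= lower:
            result = value
    return result

# an overtrick (30 points) vs a game swing (620 - 170 = 450 points)
print(imps(30))   # 1 IMP:   the overtrick barely registers
print(imps(450))  # 10 IMPs: bidding and making the game is what counts

That asymmetry is the whole reason the two scoring types reward different decisions on the same cards.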