Everything posted by Scarabin

  1. But there has to be something more than knowledge. Science gives no reason for it, but we must have morality, wherever it comes from. "Great God! I'd rather be a pagan suckled in a creed outworn; so might I, standing on this pleasant lea, have glimpses that would make me less forlorn."
  2. As a small personal point, I see dogmatism as out of place in any scientific approach. I also view the Holocaust as one of secular "science's" achievements (and one to rival the conquistadores), and when I get too confident about scientific method I remember Schiller's essay on logic (which I read many, many years ago but which still impresses me).
  3. But does culture not play a role in faith, whether faith in religion or science?
  4. If Obama says the insurer will pick up the tab, is this stealing?
  5. Thanks Barmar, MrAce, and Antrax, for your insights. I guess we have pretty well established that we should not expect a rational approach from a robot. That said, I think my peeve with robots is that I want to feel I have played against a human expert, someone who plays like me only better, and not against an idiot savant. I think that is the main point; how well it plays and whether it cheats are secondary in importance. Having said that, I still want to verify Ginsberg's claims about how well GIB played the BridgeMaster deals, and to discover whether GIB has weaknesses related to certain types of hands: my hypothesis is that these could include combining chances and choosing between alternative plays. The possible alternative, trying to reproduce and examine how GIB's methods operate, is just too daunting and probably a waste of time, since I do not want to produce a random simulation. Thanks again; I will report results.
  6. Thanks for that very helpful info. I have downloaded Ginsberg's paper, which is new to me, and I am poring over it in detail. On a semi-related point, I discovered recently that Google can access our posts. It may be that I am the only person using BBO who did not realize this, but if not, should we warn users not to disclose personal details they would not wish to see on the internet?
  7. Perhaps not so limited? Remember, however, that a double-dummy analysis knows exactly where the Q is, so it sees the play as 100%. The probability is introduced by the sampling technique, and this may be inaccurate. I think the real trouble is that a Monte Carlo technique is unlikely to spot a throw-in that makes the finesse unnecessary, though I am not sure why this should be so. My observations (still anecdotal rather than the result of repeated trials) seem to show this. What you suggest in your last paragraph would improve these programs immensely, but you would have to cover an awful lot of cases and this would take time. A toy illustration of the sampling inaccuracy follows below.
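To illustrate the sampling point: the sketch below is not any robot's actual code, just a minimal Python toy of my own showing how a limited number of random samples only approximates the true a priori odds of, say, a queen sitting onside. All the names in it are invented for the example.

[code]
import random

UNSEEN = list(range(26))  # the 26 cards declarer cannot see
QUEEN = 0                 # pretend index 0 is the missing queen

def estimate_onside(n_samples, rng):
    # Deal the unseen cards at random and count how often
    # the queen lands in the "onside" hidden hand.
    onside = 0
    for _ in range(n_samples):
        hand = rng.sample(UNSEEN, 13)
        if QUEEN in hand:
            onside += 1
    return onside / n_samples

rng = random.Random()
for n in (20, 100, 10000):
    # The true figure is 0.5; small sample counts wobble around it.
    print(n, estimate_onside(n, rng))
[/code]

With only a few dozen samples, the estimate can easily be off by several percentage points, which is enough to swap the apparent ranking of two close lines of play.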
  8. My apologies for misspelling your user name. My spelling and my memory are getting atrocious; thank goodness for spelling checkers. My error with respect to Inquiry is actually a compliment: he analysed one of my example deals and extended the analysis to the workings of the underlying computer program. He convinced me that he was familiar with Monte Carlo simulations.
  9. You should respect anyone's opinions if they are logical and founded on truth. I think you should be able to check my statements. I have re-read the sentence you quote but fail to recognize any tautology. Perhaps it is like Thurber's English teacher at high school who ruined Shakespeare for him. He used to dream of walking through the forest of Arden and having figures of speech jump out from behind every tree.
  10. My apologies to Barmar and to you. My spelling has become atrocious and unfortunately BBO's spelling checker does not cover user names! :rolleyes:
  11. Surely this must require some addition of pragmatic reasoning to the basic random simulation? I have difficulty in believing that my copy of GIB can do this. Perhaps the BBO version has been developed far beyond the Ginsberg version?
  12. :) No examples. In this post I try to set out as clearly and as succinctly as I can my view of the current state of computer programming of bridge play. I will ignore computer bidding: bidding based on random simulations is on a par with using Buller's British Bridge as your bidding system. :lol:

There are two main approaches to writing a computer bridge program to reproduce human card-play.

The first is the "monte-carlo" or random simulation approach. This is a "one size fits all" approach requiring only a limited number of algorithms. It involves a dealer program which provides random hands for the concealed players, constrained by the known cards in the open hands (the player and dummy). The number of random hands dealt is limited by the need to keep delays within acceptable limits; this is merely a quick way of approximating the true probabilities of the distribution of the unknown cards. Next, every one of these sample distributions is examined on a double-dummy basis and the acceptable card or cards recorded. Then the card which is recorded on the greatest number of samples is played. If there is more than one such card, the choice between them is made either at random or according to some rule set in advance. We have seen that either method can lead to seriously weird results. The major advantage of the random simulation approach is that it is relatively quick to program - one estimate is two years to develop a program of computer-champion standard. The great disadvantage is that it plays poorly on many types of deals, and the timing of individual plays is decidedly eccentric. :(

The second approach may be labelled "pragmatic reasoning". This requires a much longer period of development, with separate groups of algorithms for every type of play, and the computer declarer at least is also required to formulate an initial plan. There are several programs on the market which could be classed as following this approach, but I have not found any where the programming has gone beyond a few basic plays. However, Fred Gitelman described such a program in 1992 and estimated that developing it would take a further 5 years on top of the work he had already put into Base III. I fear his estimate may prove conservative. Bridge Baron seem to have used a simpler program of this type, which they called Tignum2, when they won the championship in 1997. Unfortunately they lost to GIB in 1998 and seem to have reverted to a monte-carlo clone thereafter. Various academic papers (lists available) have discussed the limitations of random simulations and have formulated approaches to reproducing declarer play, but no complete system or program has emerged so far.

To date, the random simulation programs have the edge in competitive play; I would say largely because no one has been prepared to devote the time necessary to develop a pragmatic program capable of beating them. I would dearly like to see a fully developed pragmatic program before I die because it would offer so much more than a random simulation. It would be a program worth queuing in the rain for :rolleyes:

I do see one slender ray of hope. Each year the programmers return from the championship and settle down to improving their programs' competitiveness - much as Barmar and Inquiry improve GIB in response to our complaints.
Now random simulations can be improved only by:
  - removing bugs
  - removing errors in the dealer or double-dummy programs
  - increasing the number of samples, which cannot go beyond making the probabilities more nearly accurate

Thus at some time the programmers have to realize that any further improvement must come from superimposing pragmatic reasoning on the monte-carlo results. Once this is done the door is opened to a seamless transition from a very imperfect random simulation to a potentially perfect pragmatic program. The program need never be off the market, and the only visible evidence of change is that more and more deals begin to be played correctly. :) (A sketch of the monte-carlo selection loop itself follows below.)
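To make the monte-carlo selection concrete, here is a minimal Python sketch of the loop just described. It is not GIB's actual code: deal_constrained and double_dummy_best are placeholder names for the dealer and double-dummy components a real robot would supply.

[code]
from collections import Counter

def choose_card(legal_cards, n_samples, rng, deal_constrained, double_dummy_best):
    # deal_constrained(rng)     -> one random layout of the concealed hands,
    #                              consistent with the cards already known
    # double_dummy_best(layout) -> the card or cards that are double-dummy
    #                              optimal in that layout
    votes = Counter()
    for _ in range(n_samples):
        layout = deal_constrained(rng)
        for card in double_dummy_best(layout):
            votes[card] += 1
    # Play the card recorded on the greatest number of samples;
    # ties are broken at random, one source of the "weird" side plays.
    best = max(votes[card] for card in legal_cards)
    return rng.choice([card for card in legal_cards if votes[card] == best])
[/code]

Note that nothing in the loop asks whether a card is part of a coherent plan; each call starts from scratch, which is exactly the weakness a pragmatic overlay would address.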
  13. I think your comments are correct and perceptive, and I would only take issue with the relevance of your last sentence. I agree Inquiry did a very good job of analysing GIB's play from a human standpoint, but ask yourself this: does GIB contain any heuristics (other than those necessarily contained in a double-dummy program and in an unseen-hand sample simulation program), and does GIB just play to make the maximum number of tricks irrespective of the contract target? I think you are correct that my present line of approach (building a gradual case by examples) is proving unfruitful, and I propose to scrap this and go for broke in SOA 4.
  14. Forgive me for intervening, but I feel I may have lured Inquiry into an ambush, and I think your comments may be intended for me rather than Inquiry. Certainly I fit the description: a weak bridge analyst with some competence as a statistician, whereas Inquiry is a pretty good analyst and bridge player. Inquiry seeks to convince me that the robot plays are consistent with intelligent analysis, and I seek to prove they're not. To put it bluntly, I am accusing him of anthropomorphism (not a word I use often) and confessing to it as well. The joke is we are probably both right in our analyses, and I don't know how we can establish absolute truth, but perhaps we may reach agreement. We are all reasonable men.
  15. Thank you for that prompt and detailed reply. Your analysis is masterly and I would not seek to refute it, except to make the minor points that the opening lead was the ♦6, which I failed to specify, and that a 5-3 split is more likely than 4-4. I must confess that because my analysis is basic I seek out deals which have been the subject of analysis, and I ignore match-point play. But, and this is the real point, your analysis is that of a bridge master and has nothing in common with a dumb robot or a monte carlo simulation?

I am convinced that the last 4 paragraphs of the original post accurately describe the operation of the simulations, and I would be very happy to have you or Barmar confirm or deny this. I think the side plays are really random plays of equally permissible cards which we unfortunately endow with human motives and insights. You will recall that you previously spoke of Jack and GIB making unblocking plays, and I wondered how Wbridge5 could discard a key ♣K. I have also credited Jack with entry-killing plays, probably because I can be careless about entries when I am not alert. In replying to a post "Why not lead A or K from AK", johnu observed that GIB discards at random without reference to rank. I fear this is literally true and is the real reason for the plays we try to endow with intelligence. Of course I realise that a program that is pure monte carlo simulation would not work: it has to be overlaid with some sort of pragmatic interface, but given this, would you agree with my contention?
  16. Yes indeed, but on playing the same deal twice?
  17. Probably does not matter but could perhaps contribute to skewed samples and incorrect probabilities?
  18. [hv=pc=n&s=sjt75hak43dq4cak7&w=s986hqtdak762ct83&n=saq2hj87dj98cq652&e=sk43h9652dt53cj94&d=n&v=0&b=1&a=pp1np3nppp]399|300[/hv] Same bidding for all robots. As background, this is a hand where we should try everything else before taking a spade finesse, and since everything else is favourable we don't need the fatal spade finesse. However, all the other plays have lower individual probabilities than the finesse. There is nothing much to the play except to see if any robot came close to making the contract. Incidentally, I sat West and East did not commit any blunders.

In all cases declarer won the diamond lead with the ♦Q. Shark Bridge then led the ♣7 to the ♣Q, the ♥7 to the ♥K, and finessed the ♠J. Wbridge5 cashed the ♣K and ♣A and finessed the ♠Q. Jack Bridge also cashed the ♣K and ♣A and finessed the ♠Q. For a moment I thought GIB was going to disprove my prediction; he cashed the ♣A, ♣K, and ♣Q before crossing to the ♥A and finessing the ♠J. Why did all four robots miss a fairly basic play, and when they cashed various numbers of club tricks were they working toward the right play?

Maybe Bridgify 104 can provide an answer. After declarer has won the first trick, Bridgify shows his hand with a figure against each card giving the number of tricks made if that card is led: ♠J(8) T(8) 7(8) 5(8), ♥A(8) K(8) 4(7) 3(7), ♦4(8), ♣A(8) K(8) 7(8). So there is no indication he should not lead a spade. As long as he does not lead a spade this pattern persists: he may lead any card but the 3 or 4 of hearts. If the lead is in dummy then Bridgify shows dummy must not lead the Q or 2 of spades, and if declarer does lead a spade from his hand then Bridgify shows he must play the ♠A. Remembering that a new simulation is performed for every play, I think the play of the clubs and hearts is purely random, and the simulations keep showing the finesse as the best percentage chance. And yet GIB came so close and never even cashed the long club! (A sketch of how such a per-card table could be computed follows below.)
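By way of illustration, a per-card table like Bridgify's could be built from the same samples. The sketch below is only my guess at the mechanism, not Bridgify's actual code; it reuses the hypothetical deal_constrained helper from the earlier sketch plus a double_dummy_tricks placeholder, and averages the double-dummy trick count for each candidate lead.

[code]
from collections import defaultdict

def trick_table(legal_cards, n_samples, rng, deal_constrained, double_dummy_tricks):
    # double_dummy_tricks(layout, card) -> tricks declarer takes if `card`
    # is led now and both sides play perfectly from here on.
    totals = defaultdict(int)
    for _ in range(n_samples):
        layout = deal_constrained(rng)
        for card in legal_cards:
            totals[card] += double_dummy_tricks(layout, card)
    # Cards with equal figures look interchangeable to the robot,
    # which is why the order of the club and heart plays comes out random.
    return {card: totals[card] / n_samples for card in legal_cards}
[/code]

On such a table every club and heart honour scores the same 8, so cashing the ♣A before the ♣K, or clubs before hearts, is a coin toss rather than progress toward a plan.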
  19. What I am trying to say is that random simulations of the unseen hands are designed to put the robot in the position of playing double dummy (and then counting up how many times the card chosen appears in the other simulations). Playing double dummy, there is no question of the robot recognising hand patterns or possible strategies; the only question asked (and answered by brute-force analysis) is "If I play this card, how many tricks will I win?" The human player, on the other hand (if he is anything like me), goes through all sorts of logical difficulties and tries to apply pragmatic reasoning backed by known probabilities?
  20. Sorry, my semantics are as muddled as my thinking. I think there is an essential difference in the way humans and robots approach bridge play (obvious, but bear with me for just a moment, please). The human approach is pragmatic or logical and takes into account, or should take into account, all available information. The robot approach is restricted-random: a series of random double-dummy layouts, restricted by taking into account the known distribution of declarer's and dummy's hands and any information about the other hands revealed by the bidding. This is a bit tortuous; what, if any, are the practical consequences? First, the only criterion the robot uses is the double-dummy question: "How many tricks will I win if I lead this card?" Second, a human player can say that both of two lines of play may fulfil the contract but that one is superior to the other (in all circumstances); the robot will simply choose the line of play which figures in the majority of his samples. Next, the human will evolve a plan and keep to it until it is found to be faulty; the robot seems to do a new simulation for every card played and hence may change to a new plan merely because he has generated a different set of samples. Another point which interests me but may have no relevance: computer random usually means pseudo-random? (A small demonstration follows below.) Having said all that, I hope it's not a hopeless muddle; perhaps SoA3 will help to clarify my thinking.
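On the pseudo-random point, a few lines of Python (my own toy, not any robot's code) show the behaviour: the same seed reproduces exactly the same "random" samples, while a fresh seed gives a different set, which is consistent with a robot changing plans between cards simply because it drew new samples.

[code]
import random

# Two generators with the same seed produce identical "random" deals...
rng_a = random.Random(42)
rng_b = random.Random(42)
print(rng_a.sample(range(26), 13) == rng_b.sample(range(26), 13))  # True

# ...while an unseeded generator gives a different sample set each run,
# so a fresh simulation can point to a different "best" card.
rng_c = random.Random()
print(rng_c.sample(range(26), 13))
[/code]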
  21. :rolleyes: Being human, I tend to class a squeeze as more difficult than a ducking play, and the point I want to make is that the simulations are designed to bypass logical difficulties and bring all problems to the same level. Would you agree? I share your frustration with robot defenders discarding key cards when free of any pressure to do so. I recently had Wbridge5 discard the ♣K from Kxxx when this was the key card.
  22. :D East-West were silent throughout the bidding. Appreciate your analysis of the monte-carlo approach: in my language this is akin to saying humans use a priori probabilities and the robots use a posteriori probabilities which require random trials. I started this series of posts with the robots on their default settings and have now set them all to the slowest, strongest play level. Would you agree that at their best the robots are very good, at their worst unbelievably bad? I started with the belief I could easily rank these robots in order of strength, now I do not think so; they seem equally prone to blunder or brilliance.
  23. Not sure I can help further. I know about DST in Australia (it varies between states) and New Zealand, but that's about all in the southern hemisphere. I think your surmise must be correct. My computer clock is also set to GMT+10, as on BBO, and then adjusted for DST, and it's correct. Perhaps if no one else has mentioned it we should not worry too much?
  24. [hv=pc=n&s=sakj87h753da7c643&w=s3hakjtd964cqj952&n=sqt96hq964dk8cak8&e=s542h82dqjt532ct7]399|300[/hv] Today's deal is a simple squeeze, and all 4 of our robots bid and made a 4 Spades contract, except that:

(1) One robot would have preferred to play in 3 No Trump (alerted as a strength-showing raise), and one robot exchanged cue-bids in Clubs and Diamonds before settling in 4 Spades.

(2) A good result for robot play: they played the deal twice and all faultlessly executed the squeeze, both times.

Except that I had intended to add a new robot (also a monte carlo simulation) to our team, and unaccountably it went down in the first four rounds: opening lead ♥K, then ♥A, then ♥J, which Dummy failed to cover, then ♥10, ruffed by East with the ♠2 and on which Declarer discarded the ♣3.

As a tentative conclusion, it seems that the complexity of the problem in human terms may not be a factor in a robot's success or failure, and we should be careful not to ascribe human attributes to robots. This is a human failing; have you ever caught yourself apologising to your computer when you mis-key? I have.