NickRW Posted July 6, 2010

My preference is (almost) always to start with the simple and build to the complex. Here's what I would find especially interesting. Start with the following relay example: RR has just bid 3♦ showing precisely
3=5=1=4 shape
6+ Slam points
9-14 HCP
The relay captain has a hand that
1. Is going to want to play in Hearts
2. Wants to explore 6♥ as a contract
See what the NN comes up with in terms of methods... Once the network has converged on something, the next thing that I'd look at is the sensitivity of the resulting solution by changing the relay captain's hand. Make sure that Hearts are still the best trump suit. Include the same basic choice between game and slam. Compare the two solutions and see how different the methods are...

Yeah - simple is good - and the example is interesting. But what are we trying to do here - get a computer to design or assist in the design of a system, or are we getting it to play said system as well? You seem to be in the latter camp. Assuming for a moment that I am in that camp, I would like to know how a computer would know inherently that it is "captain" or not - i.e. not with some procedural code that follows the rules of a pre-designed system. In other words, I don't know of an example of learned captaincy as emergent behaviour from any sort of AI. Perhaps I am just uninformed - if I am, then please refer me to whatever research papers there are about it, as I'd be enormously interested.

Nick
bab9 Posted July 6, 2010 (Author)

The above figures are out of just over 700K boards (Matt Ginsberg's dd database). Nick

Is this database available to others? If so, how do we get a copy/access?
helene_t Posted July 6, 2010

Is this database available to others? If so, how do we get a copy/access?

http://cirl.uoregon.edu/ginsberg/gibresearch.html

Btw, Daniel Winograd-Cort's thesis is related to what we are doing here. He is using neural networks to identify features of two combined hands that are relevant to bidding: danwc.com/uploads/publications/DWC_Bridge_Thesis.pdf

But, what are we trying to do here - get a computer to design or assist in the design of a system, or are we getting it to play said system as well?

Well, speaking for myself, I would say that the computer should definitely be able to play the system, otherwise it would not be very interesting. But once the system has been designed, getting the computer to play it will be a trivial task compared to the effort of designing the system. OK, it may not be completely trivial, since there may be situations where the system doesn't prescribe a call and the player has to use judgment - but still, that is much simpler than the process of designing the system. Besides, the optimization algorithms presumably have to bid the hands in the training DB in order to steer the training process, so most likely the computer can bid the hands to begin with.
ccw Posted July 6, 2010

I've often thought about this from what seems to me like another perspective:
Teach the computer how to efficiently encode information and then have it choose a definition of bids depending upon the auction thus far (with partner doing the same thing, of course).
Then make the bid representing the partition into which its hand falls.
Make the cheapest bids represent the largest bins.
The idea would be to use something much like arithmetic compression, where the transmitted symbols are the bids and the model used to estimate the probabilities of various hands is based on some assumptions about the auction so far. Of course this may not get you to a playable contract, but it will take advantage of the computer's strengths. Getting to a playable contract would need to take the weakest hands (by what metric?) and put them at the cheapest call (pass).

Just something I daydream about as I mow my lawn...
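As an illustration of the idea (not anything ccw actually built), here is a minimal Python sketch that assigns the cheapest opening bids to the most probable hand-type bins; the bin labels and probabilities are invented, and a real version would re-estimate the probabilities after every call, as ccw describes.

```python
# Minimal sketch of the idea: given estimated probabilities for a set of
# hand-type "bins", assign the cheapest available bids to the largest bins,
# so that frequent hand types cost the least bidding space.
# The bin names and probabilities below are made up for illustration.

BIDS = ["Pass", "1C", "1D", "1H", "1S", "1NT", "2C", "2D", "2H", "2S", "2NT"]

def assign_bids(bin_probs):
    """bin_probs: dict mapping a hand-type label to its estimated probability."""
    ranked = sorted(bin_probs.items(), key=lambda kv: kv[1], reverse=True)
    return {label: bid for (label, _), bid in zip(ranked, BIDS)}

if __name__ == "__main__":
    example_bins = {        # hypothetical partition of opener's hands
        "weak, no long suit": 0.45,
        "11-15 balanced": 0.20,
        "11-15 with 5+ major": 0.15,
        "11-15 with 5+ minor": 0.12,
        "16+ any": 0.08,
    }
    for label, bid in assign_bids(example_bins).items():
        print(f"{bid:5s} = {label}")
```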
hrothgar Posted July 6, 2010

Yeah - simple is good - and the example is interesting. But what are we trying to do here - get a computer to design or assist in the design of a system, or are we getting it to play said system as well? You seem to be in the latter camp.

My "interest", so to speak, is academic at best. To the extent that I have a dog in the fight, I am primarily interested in understanding the limitations of the methodology being used. Regardless of what answers the various algorithms output, my immediate reaction is going to be to try to determine whether or not it's safe to trust this information. In turn, this means understanding the stability of the solution. From my perspective it makes sense to do some preliminary investigating on the front end before immediately wading into the thick of things.
NickRW Posted July 6, 2010

Regardless of what answers the various algorithms output, my immediate reaction is going to be to try to determine whether or not it's safe to trust this information.

I suppose we can define that, at least for examples such as yours, as trustworthy = minimal IMPs lost compared to the DD result - which is trivial to measure - or it is, assuming you have two constructive bidding systems to compare. In the more general case, you can't have a system that simply bids constructively, as it would be too easy to exploit... You then get into what is the optimal strategy - and that, for a bidding system, would be enormously difficult to determine, I think. For my part, my aims are simpler - does it look playable!

Nick
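For reference, Nick's "IMPs lost compared to the DD result" measure could look something like the sketch below. It assumes the double-dummy par score and the score the bidding system actually reached are already known for each board, and it hard-codes the standard IMP conversion table; none of this is anyone's actual implementation.

```python
# Sketch of the "IMPs lost compared to the DD result" metric: for each board,
# convert the difference between the double-dummy par score and the score the
# bidding system actually obtained into IMPs, then average over the set.
# Assumes par_scores and actual_scores (from the same side's view) are given.

IMP_SCALE = [  # (max point difference, IMPs) - standard IMP table
    (10, 0), (40, 1), (80, 2), (120, 3), (160, 4), (210, 5), (260, 6),
    (310, 7), (360, 8), (420, 9), (490, 10), (590, 11), (740, 12),
    (890, 13), (1090, 14), (1290, 15), (1490, 16), (1740, 17), (1990, 18),
    (2240, 19), (2490, 20), (2990, 21), (3490, 22), (3990, 23),
]

def imps(diff):
    """Convert a raw score difference into IMPs (sign preserved)."""
    d = abs(diff)
    for limit, imp in IMP_SCALE:
        if d <= limit:
            return imp if diff >= 0 else -imp
    return 24 if diff >= 0 else -24

def mean_imp_loss(par_scores, actual_scores):
    """Average IMPs lost per board relative to the double-dummy par result."""
    losses = [imps(par - actual) for par, actual in zip(par_scores, actual_scores)]
    return sum(losses) / len(losses)
```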
tysen2k Posted July 6, 2010

I have now made a tree based on Tysen's reward function.

Okay, here's an interesting thing that I noticed about this. Look through the tree at all the places where it asks for information about the same feature twice (it asks about spades when it already asked about spades previously). I count 32 times where this happens. In 23 of those cases (72%) the question was asked again when a low answer was given to the first question. So "I'm weak" "Really? How weak?" was more common than "I'm strong" "Really? How strong?" Same with "I have short spades" "Really? How short?"

Is this an artifact of the setup, or is there more value in telling partner what you don't have?

Tysen
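A sketch of how the count Tysen describes could be reproduced, assuming the tree is available as a simple binary structure; the Node class and field names here are hypothetical, not Tysen's or Helene's actual representation.

```python
# Sketch of the count Tysen describes: walk a binary decision tree and, for
# every node that asks about a feature already asked about higher up the same
# path, record whether the first question about that feature was answered
# "low" or "high". The Node structure is made up - just enough to express it.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[str] = None   # None for a leaf
    low: Optional["Node"] = None    # branch taken on a "low" answer
    high: Optional["Node"] = None   # branch taken on a "high" answer

def count_repeats(node, answers=None, tally=None):
    """answers: {feature: True if the low branch was taken at its first split}."""
    if tally is None:
        tally = {"after_low": 0, "after_high": 0}
    if answers is None:
        answers = {}
    if node is None or node.feature is None:
        return tally
    if node.feature in answers:
        tally["after_low" if answers[node.feature] else "after_high"] += 1
        count_repeats(node.low, answers, tally)
        count_repeats(node.high, answers, tally)
    else:
        count_repeats(node.low, {**answers, node.feature: True}, tally)
        count_repeats(node.high, {**answers, node.feature: False}, tally)
    return tally
```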
helene_t Posted July 6, 2010

Intriguing observation. It could be that there is a left-skewed gain function related to cut-offs, so that the best cut-off is likely to make a small "low-value" subtree in terms of probability mass, and that the second-best split then is to the left of that in general. For example, the root split of <4 hearts sends more hands to the left than to the right (you have 4+ hearts less than half the time). But on the other hand, the first HCP split is always <10 HCP, which sends slightly more hands to the right. I can't think of any particular reason why it would show the bias you observed.

It is also a little complicated because responder will use negative inference, for example taking a shot at a diamond contract when opener's lengths in spades, hearts and clubs are all constrained.

Sorry, I am going on holidays now and won't have access to high-performance computers for one and a half weeks, so I won't do any more simulations in that time.
NickRW Posted July 7, 2010

Is this an artifact of the setup, or is there more value in telling partner what you don't have?

Part of the problem - if it is a problem - is the way Helene's system is presenting the data. What I mean is that, if you study the part of the tree that is H<4, S>=4 (the second quarter left to right, basically), at some points it asks, "do you actually have 5 spades?". There are also a few branches where this question is not asked, but, if you study what minors it also can't have along with the not-4 hearts, they really do mean 5 spades as well - in fact one branch actually implies at least 6 spades. 5 spades being important is a familiar concept to human bidding system designers, of course.

Looking at the data I've got, I can see why Helene's model is splitting, points-wise, on P=0-9 versus P=10+, coz I get essentially the same split in the bid-constructively versus pass-or-preempt decision. (There are some typically 4333 shapes where it suggests waiting for 11, and some long-major cases where it thinks 9 or even less is OK to bid constructively with - but the general split is 10 versus not-10 hcp.)

I guess it should be noted that both the data I am using and the way I think Helene's stuff is working are very frequency oriented. Traditional bidding systems mainly centre on waiting for about 12 hcp, or maybe a little less with compensating shape - and are very much geared around the concept of 2 opening hands = game - and it is safe to pass less than this because if partner can't open either, you won't miss a game. That is essentially an IMP-oriented strategy. Whereas what we are doing is frequency based - and probably therefore likely to be suited to MPs.

At the moment I'm looking at Helene's P=10+ cases (effectively shifting that decision to the top of the tree) and seeing what it has to say about useful shapes. Maybe I might post something about it - or there again...

Nick
NickRW Posted July 7, 2010

At the moment I'm looking at Helene's P=10+ cases (effectively shifting that decision to the top of the tree) and seeing what it has to say about useful shapes. Maybe I might post something about it - or there again...

Crudely speaking, Helene's tree, for the 10 hcp+ cases anyway, breaks down into about one quarter denying a 4-card major and subsequently focusing on whether there is a 5-card minor. Much of the rest focuses on whether there is a 5-card major or not. It isn't as complex as it looks.

Where it really gets controversial is that you can take this as evidence to support 5-card majors - or 4-card majors - or a MOSCITO style - or whatever spin you want to put on it, really!

Nick
bab9 Posted July 7, 2010 (Author)

Is this an artifact of the setup, or is there more value in telling partner what you don't have?

I have read about conventions that, later in the auction, look at first round controls and then ask whether a control is due to an ace or a void. One implication of the decision tree may be asking for/giving this shortage information earlier in the bidding process. Perhaps there is a lot of value in telling partner what you don't have, given that conventions have been developed to show shortage, e.g. splinters.
helene_t Posted July 7, 2010

Crudely speaking, Helene's tree, for the 10 hcp+ cases anyway, breaks down into about one quarter denying a 4-card major and subsequently focusing on whether there is a 5-card minor. [.....] Where it really gets controversial is that you can take this as evidence to support 5-card majors - or 4-card majors - or a MOSCITO style - or whatever spin you want to put on it, really!

I think it is understandable if this crude approach leads to something more akin to 4-card majors (or maybe 3-card majors) than 5-card majors, for three reasons.

First, although the node-to-bid assignment will (implicitly) give a small reward for natural bidding, because 1♠ is a great opening whenever it allows partner to guess that 1♠ is the par result, this is a very small reward, so you wouldn't expect the system to be natural except maybe that the 2-openings would tend to have some length in the suit they open. But one argument for 5-card majors is that in a natural system, showing majors takes up more space than showing minors and so must be more specific.

Second, one argument against 4-card majors is that in a system with longest-suit-first and a natural notrump opening, you often won't be able to show your 4-card major anyway because the system will prescribe some other opening. But there is no particular reason to expect the system to be longest-suit-first or to have a natural notrump opening.

Third, even if my reward function would make 5-card majors optimal, the algorithm might run into a locally optimal 4-card-majors system, because the first split will be on either HCP or a 4-card major for the simple reason that "spades<4" splits the set of hands closer to 50/50 than does "spades<5". Now in the "spades>=4" subtree you could split on "spades>=5" subsequently, but that would not add much information, so the next split is likely to be on a different variable.
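A quick way to check the 50/50 point helene_t makes in the third reason is to simulate random 13-card hands and compare how evenly "spades < 4" and "spades < 5" divide them; this little Monte Carlo is just illustrative and is not part of anyone's model.

```python
# Quick check (not from the thread) of the claim that "spades < 4" splits the
# hands closer to 50/50 than "spades < 5" does: deal random 13-card hands
# from a 52-card deck and count spade lengths.

import random

DECK = [(suit, rank) for suit in "SHDC" for rank in range(13)]

def spade_length():
    hand = random.sample(DECK, 13)
    return sum(1 for suit, _ in hand if suit == "S")

def estimate(trials=100_000):
    lengths = [spade_length() for _ in range(trials)]
    p4 = sum(l >= 4 for l in lengths) / trials
    p5 = sum(l >= 5 for l in lengths) / trials
    print(f"P(4+ spades) ~ {p4:.3f}   P(5+ spades) ~ {p5:.3f}")

if __name__ == "__main__":
    estimate()
```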
NickRW Posted July 9, 2010

FWIW at this stage, I finally got my feature maps working something like properly. It seems to suggest that for a white vs white, aggressive and perhaps more MP-oriented 1st seat system:

1) A weak 2♣ is not as LOL as some people might have thought
2) In contrast, a lot of traditional weak 2M hands, if you're going to be really aggressive, come within the band of what you should be regarding as a constructive holding.
3) Generally 10 hcp is worth opening if you have a 4-card major; maybe wait for 11 without one if balanced.

What I've done does not suggest what to open with what - save for the highlight points above. However, the following relatively normal-looking strong diamond system might be reasonable:

1♣ Minors or (14)15-17 NT
1♦ Strong, 18+ bal, maybe 17 with a one-suiter
1M 10 hcp+, 4+ cards, can be as low as 8 with a 6-carder
1N 11-13(14)
2♣ Weak, 6 cards, 5-9
2♦ Constructive, 9-12 or so, 6 cards, could include 5=4 minors perhaps
Other 2s Polish style perhaps

Some of the above might test the patience of one or two regulatory bodies - the EBU wouldn't like rule-of-17 one-level openers, iirc.

Nick
Dirk Kuijt Posted July 14, 2010

I'm working on a neural net attempt at this problem, so I'm bumping this topic. More to follow.
Dirk Kuijt Posted July 21, 2010

An update, but not a successful report.

I tried a Neural Net with these inputs:
HCP
# spades
# hearts
# diamonds
# clubs
and one input for all previous bids.

It was a failure. I couldn't teach the program anything.

I'm going to make a next try with:
several inputs for HCP, essentially as ranges
ditto for suit lengths
one input for each possible previous sequence.

We'll see how this goes. The number of possible sequences builds up very quickly, of course, so this may be unworkable in that respect.
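Something like the following is presumably what the second attempt's input layer would look like: each HCP range and each suit-length range becomes its own 0/1 input. The bucket boundaries are arbitrary choices of mine, not Dirk's.

```python
# Sketch (not Dirk's actual code) of the input encoding he describes for the
# second attempt: HCP expressed as range buckets and suit lengths likewise,
# each bucket becoming one 0/1 input neuron. Bucket boundaries are arbitrary.

HCP_BUCKETS = [(0, 5), (6, 9), (10, 12), (13, 15), (16, 18), (19, 37)]
LEN_BUCKETS = [(0, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 13)]

def one_hot(value, buckets):
    return [1.0 if lo <= value <= hi else 0.0 for lo, hi in buckets]

def encode_hand(hcp, spades, hearts, diamonds, clubs):
    """Return the flat list of input activations for one hand."""
    inputs = one_hot(hcp, HCP_BUCKETS)
    for length in (spades, hearts, diamonds, clubs):
        inputs += one_hot(length, LEN_BUCKETS)
    return inputs

# Example: a 12-count with 5-4-2-2 shape
print(encode_hand(12, 5, 4, 2, 2))
```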
bab9 Posted July 21, 2010 (Author)

It was a failure. I couldn't teach the program anything.

Unexpected results should not be classed as a failure.
How did you determine that you could not teach the program anything?
What type of neural network did you use?
How many nodes?
How big was your data set?
Were you looking at all bidding situations, or just uncontested ones?
helene_t Posted July 21, 2010

Sounds like Dirk's experiment with neural networks was similar to mine, which also failed - unlike Tysen's neural networks which, although we don't yet know if they point in the direction of effective bidding systems, at least produce something that makes sense.

My problem was, I think, that there were no neurons that represent two essential things:
- what information partner has conveyed
- what information I have already conveyed

Since the auction is part of the input, it would be possible to deduce what information partner has conveyed (and what I have conveyed) by "inverting" the neural network, so to speak: but this is not what happens. What happens is that the neural network will, in the course of evolution, change the way it bids early in the auction. This will then have consequences for what could, ideally, be inferred later in the auction, but this link is not assured. So I think the way to go is to design the networks so that the information conveyed becomes represented by specific neurons.
tysen2k Posted July 21, 2010

I have not had much time to investigate this problem further, but I'm still having issues with my model in the sense that I don't get "normal" preempts. It may be a combination of two things:
1. There is a severe problem with local minima. Since the entropy function is minimized when similar hands are put together, it tends to group closer to its initial configuration and not want to move in a new direction.
2. Trying to increase the opponents' entropy might not be the best function to determine the right preempts.

The classic way to deal with the local minima is to start from several random starting points and see which leads to the best outcome. Unfortunately my current setup is way too slow for this. One solution to local minima may be to use Helene's method of decision trees. I think you'd need to divide it into at least 128 or 256 groups, since preempts can be very rare/specific. The value function for the decision tree could be our entropy minus the opps' entropy, although there may be problems with this (see below).

Here's how I would do the hill climbing to take into account bidding space. Once you have the leaves of your tree, randomly pick 11 leaves and assign the opening bids Pass, 1C, ..., 2N (that's 11 different bids) to them. Figure out the entropy of those 11 leaves. Leave pass alone, but give an entropy penalty of 'x' to 1C, '2x' to 1D, etc. Then do hill climbing, allowing other leaves to attach themselves to one of these bids. Repeat with several different initial sets of 11 leaves. We can vary the x penalty to make sure we get a distribution that we think is normal (pass should be 40-50% of all hands, 1C should be 20-25%, etc.). Also, feel free to pick a different number of leaves than 11 if you want.

On to point #2 about my preempts. I tried several different starting points (including seeding it with "standard" preempts), but I found that it always wanted to do things like open at a high level with strong hands with long diamonds. The value function is finding hands where it can be relatively sure of its own contracts, but also eating up space when the opponents have a good contract too. And all that seems logical and is the whole point of preempting. However, when we're looking at their "perfect contract", that contract is sometimes a sacrifice. Their perfect contract of 4S might be because they can make it, or because they have a good sacrifice. So my model wants to open high with specific good hands like:

-
Ax
AKJxxxx
AJxx

because it has the logic of "my opponents can probably preempt against me, so I'd better describe my hand really well now before they get a chance." Kind of preempting a preempt, but with a strong hand. The model seems to think that using the high bids for these kinds of hands is valuable and puts the normal preempt hands into the constructive bids. Maybe it's on to something, but I'm thinking probably not.

That's it for now.

Tysen
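Here is a rough sketch of the hill-climbing step Tysen outlines, under some simplifying assumptions of mine: "entropy" of a bid is taken as the Shannon entropy of the hand-type labels grouped under it, the penalty is i·x for the i-th step above pass, and the remaining leaves are attached greedily one at a time. None of this is Tysen's actual code.

```python
# Rough sketch (not Tysen's code) of the hill climbing he describes: seed 11
# leaves with the bids Pass..2NT, penalize each non-pass bid by x per step of
# height, then greedily attach the remaining leaves to whichever bid gives the
# lowest total cost. "Entropy" of a bid here is a placeholder: the entropy of
# the hand-type labels of the hands grouped under that bid.

import math
import random
from collections import Counter

BIDS = ["Pass", "1C", "1D", "1H", "1S", "1NT", "2C", "2D", "2H", "2S", "2NT"]

def entropy_of(hands):
    """Shannon entropy of the hand-type labels grouped together."""
    counts = Counter(hands)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def total_cost(assignment, leaves, x=0.1):
    cost = 0.0
    for i, bid in enumerate(BIDS):          # i = 0 for Pass, 1 for 1C, ...
        hands = [h for leaf in assignment[bid] for h in leaves[leaf]]
        if hands:
            cost += entropy_of(hands) + i * x
    return cost

def hill_climb(leaves, x=0.1):
    """leaves: dict leaf_id -> list of hand-type labels falling in that leaf."""
    seeds = random.sample(list(leaves), len(BIDS))
    assignment = {bid: [leaf] for bid, leaf in zip(BIDS, seeds)}
    for leaf in (l for l in leaves if l not in seeds):
        best_bid = min(BIDS, key=lambda b: total_cost(
            {**assignment, b: assignment[b] + [leaf]}, leaves, x))
        assignment[best_bid].append(leaf)
    return assignment
```

As Tysen says, this would be repeated from several different random seed sets, with x tuned until the bid frequencies look reasonable.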
junyi_zhu Posted July 21, 2010

I found that it always wanted to do things like open at a high level with strong hands with long diamonds.

Interesting finding. Why are diamonds so special? Aren't clubs vulnerable as well? As I remember, there was an old list stating that diamonds are especially good for preemptive opening value...
bab9 Posted July 22, 2010 (Author)

So I think the way to go is to design the networks so that the information conveyed becomes represented by specific neurons.

As an idea, what if instead of using 1 neural network, a different one was used for each round of bidding? Kind of getting each neural network to specialise in a particular round of bidding. There would be a strong argument for having information from the previous neural network feed into the current one.
helene_t Posted July 22, 2010

What if instead of using 1 neural network, a different one was used for each round of bidding? Kind of getting each neural network to specialise in a particular round of bidding. There would be a strong argument for having information from the previous neural network feed into the current one.

The way I would like to think of the network is that the intermediate layer represents the information on which bidding decisions are made. So the synapses that connect the input layer to the intermediate layer condense/preprocess the information, while the synapses connecting the intermediate layer to the output make the decision. Therefore I chose a one-and-a-half network model: for the inp-interm synapses I had one set for 1st/2nd seat and one for 3rd/4th seat, but for the interm-outp synapses I had only one set. The idea being that the decision rules should be the same but the preprocessing may be different, because a pass in first seat could be informative, and therefore the implication of an opening bid in 3rd/4th seat may be different from one in 1st/2nd.

But now I feel that this is inconsistent, because it is no different from a 1♥ response to 1♦ having different implications than a 1♥ response to 1♣, so by the same logic I should have a separate set of inp-interm synapses for each opening bid. So I would consider going back to only one network. This would probably imply a certain symmetry in the bidding system. Although the system can still be highly asymmetric (a pass in 1st seat is a different input value than "it is not my turn yet" in 1st seat), I would expect the architecture of the network to bias the outcome of the evolution in the direction of somewhat symmetric systems. "Symmetric" here meaning that the opening scheme in 3rd/4th seat would be similar to 1st/2nd, but by a similar argument the responses to the different opening bids would be similar, etc.*

Now if I were to include the inference from earlier calls as input, I might consider having separate inp-interm synapses for each round. Maybe it should depend on how I would formalize the inference from the earlier calls. But even if I still had only one set of inp-interm synapses, there would still be a distinction, at the level of the new input neurons, between "the inference from the previous round is blahblahblah" and "the inference from the previous round is void, as this is the first round".

*sidetrack: thinking of it, the fact that I have the same output neuron representing, say, the 2NT call regardless of what the previous call was, makes it difficult for the network to emulate step responses. Having one output neuron for 1st step, one for 2nd step, etc., would make step responses more likely to evolve. The fixed neuron->call mapping probably favors natural bidding. Transfers would be equally easy to emulate with the fixed mapping, but the transfer accept would not. This would change by having the inference from earlier calls as input: now a transfer accept is a natural call, since it is related to the suit in which partner has shown length.

Anyway, I will let the neural networks rest for now. I think Tysen is much better than I am at making neural networks, and there are probably others who also know more about that issue than I do. So I will focus on my decision trees instead.
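A minimal sketch of the "one-and-a-half network" helene_t describes: two seat-dependent input-to-intermediate weight sets feeding one shared intermediate-to-output set. The layer sizes, the tanh activation and the argmax decision are placeholder assumptions of mine, not her actual setup.

```python
# Sketch (my reconstruction, not helene_t's code) of the "one-and-a-half
# network": two input->intermediate weight sets, chosen by seat (1st/2nd vs
# 3rd/4th), feeding one shared intermediate->output weight set.
# Layer sizes and activations are arbitrary placeholders.

import numpy as np

N_IN, N_HIDDEN, N_OUT = 30, 20, 36   # 36 ~ pass plus the 35 bids, say

rng = np.random.default_rng(0)
W_in = {
    "seats_12": rng.normal(size=(N_HIDDEN, N_IN)),   # preprocessing, 1st/2nd seat
    "seats_34": rng.normal(size=(N_HIDDEN, N_IN)),   # preprocessing, 3rd/4th seat
}
W_out = rng.normal(size=(N_OUT, N_HIDDEN))           # shared decision weights

def forward(inputs, seat):
    """inputs: vector of hand/auction features; seat: 1-4."""
    key = "seats_12" if seat in (1, 2) else "seats_34"
    hidden = np.tanh(W_in[key] @ inputs)             # condensed information
    scores = W_out @ hidden                          # one score per call
    return int(np.argmax(scores))                    # index of the chosen call

# Example: the same random hand may be treated differently by seat
hand = rng.normal(size=N_IN)
print(forward(hand, seat=1), forward(hand, seat=3))
```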
NickRW Posted July 22, 2010

Interesting finding. Why are diamonds so special? Aren't clubs vulnerable as well?

I'm not sure that Tysen's work would suggest a different approach with the minors reversed. Part of what makes "strong preempting" with that hand good is the void in a higher-ranking suit.

Nick
NickRW Posted July 22, 2010

As an idea, what if instead of using 1 neural network, a different one was used for each round of bidding?

For sure, more nets for more specific situations has to be good - but personally I am uncertain as to how best to make them evolve. If the first one behaves a certain way, then the 2nd can adapt to it - but if the 1st evolves, then the 2nd has to change - but equally, if the 2nd makes a change to evolve to better fit with the 1st, maybe this makes the first able to tweak itself too. Kinda like you have both feedback and feedforward necessarily going on between independent nets. Once you get a 3rd and a 4th net involved it looks to me like a mess.

Ultimately I suppose it is like language - computers aren't very good at it. Take a reduced subset of this problem - the auction has gone 1♥-2♥ unopposed, with relatively natural meanings for these calls. I can see a neural net representing opener's second turn quickly evolving sensible meanings for Pass and 4♥ - but what about all the calls from 2♠ up to 3NT - how should a net responding to whatever these mean evolve? And how does the first net decide it has hit a local minimum and try something completely different - short of human intervention - which is kinda not the goal of the exercise?

Nick
helene_t Posted July 22, 2010

If the first one behaves a certain way, then the 2nd can adapt to it - but if the 1st evolves, then the 2nd has to change - but equally, if the 2nd makes a change to evolve to better fit with the 1st, maybe this makes the first able to tweak itself too.

Well, it is always going to be the case that as the meaning of your (say) 1♠ opening changes, your responses to it should change as well. This is true whether the opening and the responses are governed by the same NN or by two different ones. The solution is, I think, to let the responses to 1♠ not use the information that "partner has opened 1♠", but rather use the information that "partner has shown 5+ spades and 11-20 points". Now as the system evolves so that 1♠ doesn't show 5+ spades any more, the responses automatically adapt; it's not like they have to evolve to adapt to the new meaning.

I think this is necessary. But it does make the project a lot more complex because
- it is not clear how to code the information conveyed by partner.
- however we choose to code it, it will be more complex than just reporting the opening bid.
- we need some info about the mechanics of the auction anyway, because we need to know how much bidding space we have etc.

Alternatively, one can decide not to care about responses but just optimize the opening scheme in isolation, as Tysen and I have been working on so far. Once we get that working, it is an (almost) trivial extension to generalize this to the rest of the system, since the same algorithms that were used to optimize the opening scheme can be used to optimize the responses to pass, the responses to 1♣, etc. But of course it may turn out that the whole idea of optimizing the opening scheme in isolation is flawed, and that in order to create an effective bidding system one would have to optimize every aspect of it as one whole, or at least a substantial part of it.
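To make the "respond to what partner has shown, not to the bid name" idea concrete, here is a small sketch of a possible constraint record feeding the responder's network; the representation and field names are invented for illustration, not anyone's working code.

```python
# Sketch (my illustration, not helene_t's code) of keying responses on the
# information partner has shown rather than on the bid name itself. The
# constraint record is a made-up minimal representation.

from dataclasses import dataclass

@dataclass(frozen=True)
class ShownInfo:
    min_hcp: int = 0
    max_hcp: int = 37
    min_suit: dict = None        # e.g. {"S": 5} for "5+ spades shown"

def responder_input(shown: ShownInfo, my_hand_features: list) -> list:
    """Build the responder's NN input from partner's constraints, not the bid."""
    suit_mins = [(shown.min_suit or {}).get(s, 0) for s in "SHDC"]
    return [shown.min_hcp, shown.max_hcp, *suit_mins, *my_hand_features]

# Whatever call currently carries "5+ spades, 11-20 HCP" produces this input,
# so if the system later moves that meaning to a different call, the
# responding network's inputs - and hence its learned responses - carry over.
opening_meaning = ShownInfo(min_hcp=11, max_hcp=20, min_suit={"S": 5})
print(responder_input(opening_meaning, my_hand_features=[8, 2, 3, 4, 4]))
```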
NickRW Posted July 22, 2010

The solution is, I think, to let the responses to 1♠ not use the information that "partner has opened 1♠", but rather use the information that "partner has shown 5+ spades and 11-20 points". Now as the system evolves so that 1♠ doesn't show 5+ spades any more, the responses automatically adapt; it's not like they have to evolve to adapt to the new meaning.

I certainly see what you mean. However, they don't automatically adapt. If the system suddenly decides that this opening will now be, say, 1NT, then that changes the meaning of pass, in response, considerably. Also, if you change to a 2-under transfer opening of 1♦, then the responding net has never had a chance to evolve a meaning for 1♥. Also, you can't just transpose the responses to 1♠ onto 1♥ even if the 1♥ opening is largely identical - because the former has never had the option of evolving a meaning for 1♠ in response - unless you go with step responses - but that seems to me to be only really applicable for forcing auctions.