MaxHayden
Members
  • Posts: 34
  • Joined
  • Last visited
  1. You left out a lot of relevant information and it seems like you are actually looking for some kind of validation that you aren't the one at fault. But if you want to get better, "fault" isn't a helpful way of thinking about this. Miscommunication is a 2-way street. Better to understand why it happened and help each partner understand the other's thought process better.

For starters, a 50% slam goes down half the time. Some of those are total face-plants. You are both at the low end of what the bidding shows, you both have 4333 hands, and south has already stretched to make the bid he did. That face-plant is a lot more likely than not in this situation.

We also don't know what the scoring and vulnerability situation was. Are we supposed to be bidding a slam with 50% odds? Or do we only need 40%? Are we far enough behind in MPs that we need to take some risks to have a good shot at a decent result? If we are supposed to be stretching or taking risks, which partner is responsible for that on this bidding? How do you avoid "doubling up" on risk taking?

I also don't understand what you mean by 2NT being a systemic response. Normally 2NT would show 8+ and a balanced hand. But if that's not what it shows for you, I don't know how you can possibly expect us to tell you whether anyone was bidding sensibly when we don't know what it actually showed. Both partners should be making inferences in light of bids that weren't made. But what were the other options? For all we know the real problem is that you haven't actually thought through your bidding here and are flying by the seat of your pants. That seems pretty likely actually. Your NT structure is already abnormal because of the 3-point 2NT bid (4ish if we factor in that you upgrade with 19). Why do you have that wide range and where did the extra point come from? If you aren't using the 2NT response to 2♣ like most people do, then you are "missing" a bid somewhere. Where did it go? (How do you show 25-27, 28-30, and 31-32? Normally, after a 2♦ negative, these would be 3, 4, and 5NT respectively.) All of this stuff matters. Bids aren't made in a vacuum. I can't tell you whether someone made a good choice if I don't know what the other choices were.

No one made any effort to find a trump suit. So what kind of inference can I draw? Does that mean that no one has a suit longer than 4 cards? Or is it reasonable to expect 5 card minors with no side-suit shortness? What about semi-balanced hands?

===

Using normal bidding this is what was meant:

S: "I have a hand that is game forcing opposite anything short of a stiff."
N: "Good news! I've got a balanced hand with at least 8 points!"
S: "Well, I've only got 23-24 points in a balanced hand."
N: "Even if you only have 23 points, what I have is usually enough for a slam."

South has 22HCP; north has 9HCP. Combined that's 31HCP. Two points short. But south claimed 23-24HCP. So north thinks they are only 1 point shy in the worst case. If south has 24, then with north's 9, they have 33 and should bid a slam. If south has 23, then they are 1 point short. So, on paper, north should invite with 4NT, and south should bid a slam if they have anything above the bare minimum.

By bidding 6NT, north says that with his hand, he doesn't think the difference between 23 and 24 points makes the difference. The slam is 50% either way. So inviting is going to cost points in the long run because you'll pass too many hands that could make.
If north passes, he's saying that his hand is bad enough that slam isn't a 50% possibility even with a 24HCP maximum in south. North's hand is a pretty bad 9. But the Ks are better than QJs, especially given the control they are providing. And I don't think adding a J to either side really avoids this train-wreck. Moving the Q and J into ♥ is about as strong as you could get with these cards and it still wouldn't be enough to turn this situation around. Keep in mind: the hand south had wouldn't make 3NT with the help of 2 queens, but the hand south promised only required one queen for game. North had that queen and both missing kings as a kicker. So maybe north is right. The only way to know for sure is to simulate a bunch of balanced 23-24HCP hands and see if a slam with north's hand makes 50% of the time. If the thread is right, the slam should be 50% only on the hands that would accept the invite. If he should have passed, then it wouldn't be 50% even with a maximum.

Similarly, if he shades by 2 points and gives a 2♦ negative, south will presumably bid 2NT. But what happens in your bidding if he now shows a "maximum minimum" and invites with 4NT? That's technically not possible given what he already told you. So what would it show? If you don't have some kind of agreement about that sequence, then that probably explains why he didn't try it: you'd probably think he was denying the second king and thus might miss a makeable slam. So it's plausible that this is the face-plant that your bidding will occasionally produce and not anyone's "fault". At the same time, no trump bidding is supposed to be pretty precise. You quickly describe and limit your hand and partner has access to lots of gadgets to find the right bid. If you play fast and loose, you are giving all of this up. If that means that partner doesn't trust your bidding, then he'll make things up trying to adjust for your erratic behavior, and then you'll both end up in a ditch.

But north isn't the only one who overbid. South has no length, no texture, and is 1HCP shy of what he promised. The hand only has 6 tricks of playing strength and no possibility of developing any extras. This is all well below the norm. He has great controls, but those aces and kings are only valuable when they let you develop other tricks. So the upgrade from 2NT is marginal at best, especially given that 2NT will right-side the contract in most cases. After a 2NT bid shows a maximum of 22HCP, even with the best 9 HCP hand in the world, north will not invite slam. There are really only two reasons to upgrade the hand: either south is worried about north failing to invite when he should, or he is worried about missing a makeable game because of a hand that north would usually pass. You can go simulate a bunch of hands, but you should be visualizing what each other's hands look like. So what were you seeing that made you think you needed to make an adjustment?

I think there are far too many things going on here to make this about blame. Sort out what you were both thinking about when you made those bids and try to get better at understanding one another.
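If anyone actually wants to run that simulation, the skeleton is not much work. Here's a rough sketch in Python, with two caveats: the north hand shown is just a hypothetical 9-count 4333 stand-in (the real cards aren't reproduced in this post), and the interesting step, asking a double-dummy solver such as DDS whether 12 tricks come home on each accepted deal, is left as the piece you'd plug in.

```python
# Rough sketch: generate south hands that are balanced with 23-24 HCP opposite
# a fixed north hand, by plain rejection sampling. NORTH_HAND is a hypothetical
# stand-in; the double-dummy evaluation of each accepted deal is not included.
import random
from itertools import product

RANKS = "23456789TJQKA"
SUITS = "SHDC"
HCP_VALUES = {"A": 4, "K": 3, "Q": 2, "J": 1}
DECK = [r + s for r, s in product(RANKS, SUITS)]

def hcp(hand):
    return sum(HCP_VALUES.get(card[0], 0) for card in hand)

def is_balanced(hand):
    lengths = sorted(sum(1 for c in hand if c[1] == s) for s in SUITS)
    return lengths in ([3, 3, 3, 4], [2, 3, 4, 4], [2, 3, 3, 5])

# Hypothetical stand-in for north's 9 HCP, 4-3-3-3 hand (two kings, a queen, a jack).
NORTH_HAND = ["KS", "8S", "6S", "2S",
              "KH", "7H", "4H",
              "QD", "5D", "3D",
              "JC", "9C", "2C"]
REMAINING = [c for c in DECK if c not in NORTH_HAND]

def simulate(trials, seed=0):
    rng = random.Random(seed)
    accepted = []
    for _ in range(trials):
        south = rng.sample(REMAINING, 13)
        if is_balanced(south) and 23 <= hcp(south) <= 24:
            accepted.append(south)
            # Real study: deal the last 26 cards to east/west here and ask the
            # double-dummy solver how many NT tricks north/south can take.
    return accepted

if __name__ == "__main__":
    trials = 200_000
    hands = simulate(trials)
    print(f"accepted {len(hands)} south hands out of {trials} random deals")
    avg = sum(hcp(h) for h in hands) / max(len(hands), 1)
    print(f"average south HCP among accepted hands: {avg:.2f}")
```

Rejection sampling like this is wasteful for a 23-24 HCP constraint; a constrained dealer would be much faster in practice, but the idea is the same. The fraction of those constrained deals on which twelve tricks make is the number the whole argument turns on.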
  2. First, thanks for your help thus far. I'm curious about the two approaches you suggested. I'm unclear on what the deep learning gets me. Do you mean that it includes bidding? And what kind of analysis are you envisioning? I could see trying to deep-learn the conditional probability itself or to solve the particular problem you are talking about (assuming that some more rudimentary automated method doesn't work). But what does something that actually plays bridge add for that complexity?

As for the relay/bidding termination: my assumption was that this in itself was overly ambitious and that I should first try to quantify some more basic questions and systematically check my core assumptions. (Maybe you intended this to be included in the problem as you stated it?) For example, right now our best hand evaluation methods have a standard error of .9 tricks. And a more in-depth method that includes known adjustments that you can't practically do live would be better by some unknown amount. You can do something similar for the offense vs defense aspect. Same with some of these other things. So that's the baseline.

For my whole idea to even be meaningful, I need some example scenario where there's a meaningful effect size. If I can't get a significantly lower variance by holding part of the deal constant and then rescrambling the rest, the whole enterprise is pointless. My assumption is that, because a simple finesse is worth .5 tricks, this is a lower limit on what can be accomplished in most cases. If that's right, there's a lot of potential here. But if it turns out that an ambitious unconditional model results in .7 tricks of error and that I can't find a scenario where the error inherent in double-dummy results upon rescrambling is less than .65, then this task won't even be fruitful. The impact needs to be something that produces a reasonable amount of MPs or IMPs over a typical tournament.

So I need to first identify some low-hanging fruit before I even break out the more complex stuff. Otherwise I can't be sure that I'm even in a position to make a convincing case. (Plus having some kind of coherent documentation of our baseline knowledge and the areas most in need of improvement seems valuable in its own right, since it doesn't seem like such a thing exists right now.) Is this a good line of thought or do you think there's a way to short-circuit this and jump into the problem itself?
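To make the effect-size check concrete, this is roughly the skeleton I have in mind (a rough Python sketch; trick_proxy is a deliberately crude HCP-based stand-in for the double-dummy evaluation, so the printed numbers only illustrate the machinery, not the real answer).

```python
# Rough sketch: measure the spread of a trick estimate unconditionally vs.
# conditionally on one hand being held fixed ("rescrambling the rest").
# trick_proxy() is a crude stand-in for a double-dummy solve, not a real evaluator.
import random
import statistics

RANKS = "23456789TJQKA"
SUITS = "SHDC"
DECK = [r + s for s in SUITS for r in RANKS]
HCP_VALUES = {"A": 4, "K": 3, "Q": 2, "J": 1}

def hcp(cards):
    return sum(HCP_VALUES.get(c[0], 0) for c in cards)

def trick_proxy(north, south):
    # Stand-in for "NT tricks north/south take double dummy".
    return (hcp(north) + hcp(south)) / 3.0

def unconditional(trials, rng):
    results = []
    for _ in range(trials):
        deal = rng.sample(DECK, 26)
        results.append(trick_proxy(deal[:13], deal[13:]))
    return results

def conditional(fixed_north, trials, rng):
    # Hold north constant and rescramble the other 39 cards.
    remaining = [c for c in DECK if c not in fixed_north]
    results = []
    for _ in range(trials):
        south = rng.sample(remaining, 13)
        results.append(trick_proxy(fixed_north, south))
    return results

if __name__ == "__main__":
    rng = random.Random(1)
    # Any fixed hand works for the skeleton; a strong balanced one shown here.
    north = ["AS", "KS", "QS", "2S", "AH", "KH", "4H",
             "AD", "QD", "3D", "KC", "9C", "2C"]
    u = unconditional(20_000, rng)
    c = conditional(north, 20_000, rng)
    print(f"unconditional spread: {statistics.pstdev(u):.3f} proxy tricks")
    print(f"conditional spread:   {statistics.pstdev(c):.3f} proxy tricks")
```

Swap the proxy for actual double-dummy results and those two spreads become the conditional-vs-unconditional comparison I'm describing; if they aren't meaningfully different for any scenario worth caring about, the whole project stops there.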
  3. Most of the bridge statistics I've seen are statistics about all hands or all situations. But I want to understand how these statistics change depending on the exact situation you are faced with. I'm honestly surprised that someone hasn't already come up with a tool that can answer these types of questions. Is there relevant research that I missed? Or tools that I failed to find? Either way, if anyone has suggestions on tools or methods I should be using, I'd appreciate any advice, especially advice that can save me time or hassle.

So far, I've found DDS, which seems to be the best option for a double-dummy solver. But it seems like I'll have to write my own scripts to generate a database of hands or otherwise use the program to create the data I need for any given statistical analysis. My understanding is that there's no consensus on what makes for a quality single-dummy solver. So my current plan is to average over the double-dummy results of a number of hands that share the relevant cards but are otherwise randomized. To put things mathematically, I want to understand how many "bits" of information it costs to get a certain amount of variance reduction, and how the relevant information and its value changes in different bidding or card play scenarios. My initial plan is to use general linear regression models, but if someone has an alternative approach that would work better, please let me know. (There's a rough sketch of what I mean at the end of this post.)

===

Bidding examples: A ton of people have done hand evaluation analysis, but they mostly focused on how many tricks you can take as declarer. Unfortunately, that's not really what people need to know. If you could make a system that always gave you the right bid to maximize the score differential, it wouldn't matter that you couldn't tell if 4♠ was a good sacrifice or a solid game until you saw dummy's cards. Knowing which one you have is information but not relevant information. The difference between what you can take on offense and what you can take on defense usually matters more (per the Law of Total Tricks). Even if you had a perfect hand evaluation formula, the information your partner needs is not the same as the information you need. Getting partner exactly what they need to make a decision is the basis of most conventions and almost all of modern competitive bidding. It's also the reason for the captaincy principle and the distinction between telling and asking.

And hand evaluation formulas only report averages over all possible hands. But once bidding has started, you should only average over the remaining possibilities. Different things matter in different situations, and the cost of communicating them depends on what has already happened in the bidding. Some information is almost always needed. But if you are designing a convention or a bidding system, you want to know which hands require accounting for something that only matters once in 100 hands. (Because it might matter in 50% of the hands being handled by that convention at that point in the bidding.) Moreover, your hand's points, however you count them, aren't the only thing that matters. Once you see your cards, you *also* have a prediction about the cards of the other three players. And you should take this into account as you bid and revalue your hand. Knowing that your opponents didn't preempt, overcall, or double reduces the number of possibilities too.

===

Card play examples: I'd like to create some more detailed probability tables than the ones available in places like the Bridge Encyclopedia.
How often do lines of play or outcomes swing drastically based on additional information that could be revealed in bidding or learned from counting and discards? (And similarly, using game theory, when is honesty optimal vs when does false-carding or keeping quiet during bidding outweigh the benefits of giving partner a correct signal or lead suggestion?) How often do different card-play techniques and situations end up mattering? How often can you combine lines of play vs having to make a choice? How does the variance on different lines of play change in light of other information you have? Even if it is still a 50% play, it can be more or less risky in different situations. So whether or not it is worth taking that risk depends on the score you have and the number of boards you have left. We have rules for how risky you should be on the average board, but can we quantify how you should shade that given the tournament situation and how that situation changes the balance between luck vs skill? If I have X deals left, what should I expect the distribution of hands to look like and how much uncertainty is there? (To answer these questions, you'll need to know things like, if we have game, how often does the other side have game? When the other side has game, how often do we have a profitable sacrifice? What is the distribution of optimal contracts by score? How often can everyone make 1NT vs someone being able to make 2 of a major or 3 of a minor?) [Edits made to correct typos]
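Here's the rough sketch promised above of the regression step, assuming Python with numpy. The "tricks" column below is synthetic placeholder data standing in for what a DDS-generated database would actually supply; only the feature construction and the least-squares fit are meant literally, and something like statsmodels would be the natural tool once the models go beyond ordinary least squares.

```python
# Rough sketch: build per-deal features and fit a linear model of tricks on them.
# The target here is synthetic placeholder noise, standing in for double-dummy
# results from a DDS-generated database; the fit should roughly recover the
# planted coefficients and a residual error near the 0.9 used to generate it.
import random
import numpy as np

RANKS = "23456789TJQKA"
SUITS = "SHDC"
DECK = [r + s for s in SUITS for r in RANKS]
HCP_VALUES = {"A": 4, "K": 3, "Q": 2, "J": 1}

def features(north, south):
    combined = north + south
    hcp = sum(HCP_VALUES.get(c[0], 0) for c in combined)
    fits = [sum(1 for c in combined if c[1] == s) for s in SUITS]
    aces = sum(1 for c in combined if c[0] == "A")
    return [1.0, hcp, max(fits), aces]   # leading 1.0 is the intercept term

def synthetic_tricks(x, rng):
    # Placeholder target; a real run would read this column from the database.
    return 0.3 * x[1] + 0.5 * x[2] + rng.gauss(0, 0.9)

def build_dataset(n, seed=2):
    rng = random.Random(seed)
    rows, targets = [], []
    for _ in range(n):
        deal = rng.sample(DECK, 26)
        x = features(deal[:13], deal[13:])
        rows.append(x)
        targets.append(synthetic_tricks(x, rng))
    return np.array(rows), np.array(targets)

if __name__ == "__main__":
    X, y = build_dataset(5_000)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_sd = np.std(y - X @ coef)
    print("coefficients (intercept, hcp, longest fit, aces):", np.round(coef, 3))
    print(f"residual standard error: {resid_sd:.3f} tricks")
```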
  4. I will add to this further: I had been excited to write both a forum review and an Amazon review of this book after I had time to test the results against the same type of datasets that were used in previous discussions about other point count systems. I am rapidly losing interest because of this behavior. The author is doing himself a huge disservice. If he wants to promote the book, he should get involved in the community and actually provide people with helpful answers. Minimally he could link to the point-counting systems thread to help this person find more details. This generic spam is just going to result in no one taking him or the book seriously.
  5. Apologies for the late reply. Times have been regrettably interesting. I really appreciate you taking the time to post this. It seems like BUMRAP and TSP are about the same and that it's just a matter of calculational ease and preference. Is this an analysis that you can run at will? As in, could you run this analysis on the additional adjustments from the Darricades book if I posted them or messaged you? His complete list of adjustments takes about 2 pages. I feel like I do most of these in my head anyway, though. So I'm curious as to how much error reduction you get with each additional piece using the numbers he's assigned.
  6. That was helpful. Do you have a stats package with a double-dummy solver set up so that you could run a full test of the reliability of a few different count methods? The Darricades book recently referenced contains a ton of information, and the ultimate list of adjustments is fairly complex. So I wonder how much of an improvement it provides over TSP or BUMRAP, and specifically, which elements provide the most bang for the buck. I don't think people will be using it at the table any time soon, but it's good from a bidding system design standpoint to know what hands are or aren't statistically similar. Everything passes the "gut check", but I haven't had a chance to do something rigorous. My understanding is that the author does not have a formal statistical background and got his results by manually checking something like 8000 hands. So I'd be interested in seeing a computerized validation of his results. For better or for worse, the current situation has left me swamped with work. So I keep putting off testing it. I discussed this back last year when I first got a copy of the books in question. The person going around bumping threads to mention it should have just posted there. They are giving a promising book a bad name by doing a bunch of thread resurrections.
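To be clear about what I'm asking for, here's roughly the comparison harness I have in mind (a rough Python sketch). The BUM-RAP weights are the commonly quoted 4.5/3/1.5/0.75/0.25; any other count (TSP, the Darricades adjustments, etc.) would just be another entry in COUNTS, and the (north, south, dd_tricks) triples would come out of a double-dummy run.

```python
# Rough harness sketch: given deals with double-dummy trick counts, report the
# residual "standard error in tricks" of each point-count method after a crude
# linear calibration of tricks against combined points.
import statistics

HCP_WEIGHTS = {"A": 4.0, "K": 3.0, "Q": 2.0, "J": 1.0}
BUMRAP_WEIGHTS = {"A": 4.5, "K": 3.0, "Q": 1.5, "J": 0.75, "T": 0.25}
COUNTS = {"hcp": HCP_WEIGHTS, "bumrap": BUMRAP_WEIGHTS}

def count(hand, weights):
    return sum(weights.get(card[0], 0.0) for card in hand)

def linfit(xs, ys):
    # Ordinary least squares for a single predictor: returns (intercept, slope).
    mx, my = statistics.mean(xs), statistics.mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def error_per_count(deals):
    """deals: list of (north_hand, south_hand, dd_tricks) triples from a DD run."""
    report = {}
    for name, weights in COUNTS.items():
        points = [count(n, weights) + count(s, weights) for n, s, _ in deals]
        tricks = [t for _, _, t in deals]
        a, b = linfit(points, tricks)
        residuals = [t - (a + b * p) for p, t in zip(points, tricks)]
        report[name] = statistics.pstdev(residuals)
    return report
```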
  7. That's the conclusion that Darricades reached as well, and he took it much further than Tysen2k did. I'd love to run Tysen2k's numbers for this new count. And I'd like to work out some shortcuts for doing it if it really gives a significant improvement.
  8. I wanted to follow up on this. Lawrence Diamond's _Mastering Hand Evaluation_ references and summarizes much of what has been written in English. It would be a good reference for someone who was doing research or someone who wanted an overview. But Patrick Darricades' books are more interesting. He didn't do any original research; his goal was simply to summarize the state of the art based on what had already been written. He starts with the entire corpus of statistical work done by J-R. Vernes (including his 1995 book) and adds in more recent findings, with a coherent set of point values, so that you have an actual system instead of a list of considerations. Importantly, the focus is on adjusting your valuation to incorporate new information you get from your partner and on guiding your bidding decisions about what information would be most helpful.

He has two versions: the Honors Book _Optimal Hand Evaluation_ and a book from Tellwell, _Optimal Evaluation of Bridge Hands_. I bought both of them. The Honors book is an edited-down, more focused version of the Tellwell one. It's easier to follow if you are trying to *use* the method. But the Tellwell one made it easier to understand how he arrived at it, though I can't point to anything in the extra 50 pages of material that really stands out.

He hand-checked the method using about 7000 contracts over the period of 2 years. He says that a 50% contract will have at least the right point value 95% of the time and one that has poorer odds will be below that threshold 95% of the time. The errors tend to be within 1-2 points. Typically you'll be 2 points high because of not knowing exact honor placement and having 1-2 points of wasted honors that weren't accounted for. Or you'll be about 1 point too low because you had multiple deductions for lacking kings or queens, having mirror suits, and having 4333; the rounding errors accumulate in extreme cases and you subtract too much. Since his results were by hand, I would love to rerun the numbers like we did for BUMRAP and TSP above so that we could put everything on the same footing and get an apples-to-apples comparison. Does anyone know if you can still access Tysen2k's databases and easily run the numbers like he did?

There are two benefits of Darricades' approach that I didn't really appreciate until I tried it. First, being based on the normal 4321 scale really does make it easier to deal with, despite having to track half-points. Second, because it is a 6-increment scale instead of the 5 for Zar or TSP, you always have a good or bad intermediate and never a straight middle value.

The need for editing and computer statistics aside, there are only two things I'd want to add. First, a better way to teach people to use the count. He has a few ways to explain the pieces that avoid memorization, but I'd have liked a comprehensive presentation that helped people learn how to do this quickly at the table. Second, when you use just HCP, it's very easy to figure out what the opponents have based on what you have. But once you start adjusting for various factors, that becomes harder. I'd like something systematic that shows you how to make these inferences. (You could figure it out from his point value table, but it would be painful. So it should have been part of the book.)

If anyone looks into this work further, please let me know. I've been a big advocate of teaching advanced hand evaluation for a while and I think that Darricades' book is a huge step in the right direction.
I think we can go further, but I'm glad that someone took the time to sit down and put what we already know in one place for easy reference.
  9. Has anyone read Ken Rexford's _Modified Italian Canape System_? (Thanks to the person who suggested it.) It uses a canape closer to the original, and addresses a lot of the objections people above have to the original Italian ones. He also spends time talking about competitive auctions in ways that seem relevant to this discussion. He makes some good arguments that it still works. Agree or disagree, my key take-away is that canape is more about negative inference and our current methods are more about positive information. That fits with the "lack of popularity" idea quite well -- it's a lot harder to teach a new player how to reason about what their partner *didn't* say and to make inferences about probable hand distributions accordingly.
  10. I agree with the above about the game seeming like an old-person's game and not being packaged for today's generation. Conceptually, there's nothing that stops you from having a card "shuffler" that deals out semi-random deals such that both sides get "shots" at similar contracts. Or that can deal out easier or harder contracts for you to just play. Or a card tray that can read the cards you've got and make some bidding suggestions. Or even some nice manipulatives like with Hool. But it's going to take a new generation to take that stuff and make a marketable game.

I'm going to go further and say that *as a game for entertainment*, rubber is objectively better. Duplicate is just a necessary evil. Rubber is fun because it's random. But it's so random that, without duplicate, the winner of the tournament wouldn't be remotely based on skill. It takes an unreasonable number of hands for that to happen. But volatility is the "sauce" that makes games entertaining to watch and to play. And taking it out is inherently removing the fun. As a game of pure skill, duplicate bridge isn't particularly good. Most of the hidden information is trackable. Good players pretty much all do the same things. A single-dummy evaluator that has near-perfect play isn't some engineering marvel. And there aren't constant innovations in card play. Watson's book is and always will be good and mostly comprehensive. So while tournaments are fun for people who love the game, I think they really should be seen as lagniappe instead of the game itself.

One thing that Culbertson and Goren got right was that the bidding needed to be something that could be explained in a few simple principles. The true problem is that we teach bidding using their general ideas but use systems that don't work by that logic. Adding the separate strength of each hand will never lead you to a modern system, no matter how many complications you add to TSP or Zar. Modern bidding is based on figuring out how well the hands work *together*. So distribution usually matters more than points. (Literally every widespread convention is about conveying good or bad fits that are hard to communicate naturally. And as we've gotten further and further away from "adding hands" we've needed more and more complexity to account for this.) E.g., for trump contracts, you primarily want to know the difference between the combined lengths of your partnership's longest and shortest suits. No amount of counting and adding points does this. 5 card majors with limit raises and splinters do.

So we need a modern way to convey our tacit knowledge and intuition like Culbertson and Goren did. I doubt either of them rigorously adhered to their point counting; like most good players, they just visualized the play. The point counts were *intended* to be a heuristic and a crutch. My tickler file has an entry for getting a well-constructed database of hands and doing some advanced statistics to try to come up with some new "key principles" that can give us something remotely reasonable for beginners.

I *think* this works. Look at a "modern" major suit auction: a single raise says 3 cards and not much else to add. A double raise says extra trumps and extra high cards. A splinter says "trump support and a singleton". These *are* doing what Culbertson and Goren were doing -- adding the hand strength and bidding accordingly, with higher bids being more rare and therefore easier to make decisions about. It's just that they are communicating *different* information.
So we need new principles of bidding and hand evaluation to explain it. I think the problem is one of economics -- people who go to tournaments are the ones spending the most money, so that's who gets catered to. The less randomness the game has, the higher the ROI on making all of these small bidding refinements. That's probably why many rubber bridge players are still playing Goren or Bill Root -- the effort to change doesn't really seem worthwhile.
  11. I have this. It's okay. But it's too textbook-y for my taste. (Though it's an excellent textbook that shows that this can be done.) We Love the Majors is my current favorite intro book. It teaches (most of) standard American. I still don't know why anyone is teaching it that way. They gloss over some of the rarer circumstances that are hard to handle without conventions. But because they are separating the major and minor suit bidding anyway, I don't see why they don't just go all-in and teach 2/1 in response to the majors. (And just gloss over 1♦-2♣.) I haven't looked at Wanna Play Bridge the 2/1 Way?. Anyone tried it? I'm not wedded to teaching 2/1, though. My thinking on bidding systems is that it's a matter of how many different contexts and principles the players need to learn before they can use it. So if someone could make a *very* simplified relay system that always did the exact same stuff, I'd be down for it. And part of me thinks that there's a really simple version of Fantunes waiting to be found. After all, there are some (older) books that are very solid introductions to Precision as well that I'd love to use if they were updated.
  12. The thing is that social games with lots of parts and deep strategy are popular right now, despite all of those problems. I think bridge could be, too. But maybe we need the kind of minor tweak that Vanderbilt used to jump-start it originally. I'll think on this and talk to some game designers I know.
  13. Okay. Thanks again for your patient help. Awm, do you have a write-up of your stuff? Straube, do you have a write-up of Meckwell / R-M that you recommend? I've found sketches but nothing in depth. I like the response structure idea but I don't know of a full write-up of the whole thing. Are the B.U.S. System or the stuff from HardL above good to check out?

More generally, let's attack this from the other side: if you were starting a new partnership today and wanted a serious tournament system, what materials would you guys use? Is there something documented well enough that we could hypothetically learn it and be set in terms of modern bidding thinking? Maybe backing up from there is better than trying to navigate in. More or less: What does a top-tier system look like? What is a good starting spot that we can evolve in that direction? And what are the most important pieces to focus on?

In general, my, perhaps mistaken, belief is that super elaborate bidding systems only really pay off at the very elite levels where everyone's play is essentially perfect, so the only marginal advantage you can get is from being much more precise with the bidding. Maybe this problem kicks in lower than I think. But this is a new partnership and getting on the same page quickly and consistently matters. So better documentation trumps a better system for now. And principles trump specialized sequences and treatments. But we do want to know what a top-tier system looks like. You always have poorly bid hands sooner or later, and knowing how better players deal with it is probably a good guide for what you want to do. And we will want to make improvements. So knowing what an improvement looks like is helpful.

If people are skeptical of Meckwell lite, I can use the 1C structure from Precision Today or Revision Club or anything else with a solid write-up. My main issue with Revision is that I like weak NT openings and they have a super-strong one. It makes sense in context, so maybe I should try it. It has the best write-up I've found. And the logic for that strong NT is pretty solid. So maybe I'll be pleasantly surprised. So, IDK, you guys can make a recommendation. I'll follow it.
  14. Bridge needs to be accessible; right now it isn't. Why is that? Why isn't there a Culbertson or a Goren bringing in a ton of social players? Board game night is a popular thing. People spend tons of money on Settlers of Catan, Carcassonne, and the rest. Bridge is a fun game. But no one ever says, "let's play a few rubbers tonight". And hell would freeze over before someone said, "I bought this nice box set at the game store and liked the concept. Let's take 15 minutes to get the basics and try it out."

I learned bridge on board game night because my college friends couldn't agree on spades or euchre. So someone suggested we do bridge and taught us how. And as we got into the game and ran into problems, we got taught more stuff, or someone went out and looked it up in a book and taught the rest of us. I.e., exactly the way people get into other games. Someone new can join, and the randomness makes winning possible even though skill and reasoning are a big part. And you can go online or buy a book if you want to get good. And you can go to conventions and do tournaments if you get serious.

But bridge doesn't do that. It's damned near impossible for someone to buy a pamphlet and learn to play with their friends like you could back in the day. And there are no attractive "sets" of stuff to buy that make the game attractive or remotely marketable. No one says, "you can learn the game in less time than it takes to read the typical board game rules and it's every bit as fun and challenging!" We have classes and interminably slow software. And we don't even teach anything remotely modern. So you can't even appreciate the higher levels of tournament play. And even a local duplicate tournament is intimidating as hell and a huge jump from social play.

I'd have never gotten into this if it weren't for a friend group that liked playing the game til 2am over drinks. It's hella fun, to the point that "board game night" turned into "bridge night with some board games thrown in." I love the game, but it seems like too few of the people are doing it to have fun. Those people got scared off because we told them it was hard and pure skill like chess and that only the truly brilliant could even understand it, let alone play.
  15. I like the IMprecision structure that you linked, but it's more or less what I had in mind anyway. So maybe I'm using "2/1 GF" incorrectly? I haven't used relays much at all to date. We assume we'll need to incorporate them eventually. But starting out? No. Especially since the material for learning them doesn't seem to be that solid. We need something straightforward we can talk through and start using.

Neither of us has ever used SAYC, and really learning it seems pretty pointless. We've both mostly played Precision. (I was actually taught bidding using Rigal's book.) We just don't have a common dialect, since Precision isn't really standardized. The perfect is the enemy of the good; we want to get something that works well enough and that we can gradually change as we get more familiar with each other's style. Maybe I'm wrong and it would actually be far easier to learn symmetric relay or MOSCITO or whatever than to hash out "regular" bidding and the conventions that go with it.

We'd like to try to stay within the ACBL's restrictions. Living in the US, we don't really have an alternative for tournament play. So AFAIK, that means a host of silly restrictions on relay sequences and other generally reasonable things. But our current thinking is that we can find some system notes for a Meckwell-lite Precision and it'll be close to what we are familiar with. Dan's book seems pretty solid, and the parts it omits aren't complicated and could be grabbed from elsewhere. We mostly want to avoid wasting energy hashing out areas where we don't do the same thing when we can just swap in something more modern that we'll want to use eventually anyway (for bidding and defense). And we want to avoid painting ourselves into a corner and needing to gut a ton of work to get from good enough to legit good. (And legit good does seem to mean relays now.) But we don't think moving to cutting-edge stuff will be hard if we know what those parts look like and have a feel for what the most important changes are.

Are we wrong about all of this? This thread has been extremely helpful and informative, but if you or anyone else disagrees, let me know. It's not like I know what happened while I wasn't paying attention. If I'm way off on what is going to be the best course of action, tell me. That's why I posted here.