ulven Posted August 8, 2012
I've compiled vugraph data for Fantunes from 2006 up until (but not including) the Dublin Europeans (5097 deals) and used the filtering in Double Dummy Solver (http://www.bridgecaptain.com/downloadDD.html). Some stats that might be interesting. All results are relative to the other table in play (all deals are team play): +0.64 imps/board overall.
Fantunes open (2423 deals): +0.71
Opps open (2652 deals): +0.58
Fantunes declare: +0.52
Fantunes defend: +0.77
Fantoni opens (1185 deals): +0.27
Nunes opens (1238 deals): +1.13
They open, auction uncontested: +0.42
They open, opps overcall: +1.03
Fantoni overcalls (737 deals): +0.20
Nunes overcalls (714 deals): +0.69
Looking at the 1NT opening:
641 times: +0.66
When LHO overcalls over 1NT (149 deals): +0.42
When RHO overcalls (132 deals): +1.30
Can go on here for a while…
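To make explicit what these numbers mean, here is a minimal sketch of the kind of aggregation involved, assuming the exported deal records already carry a net IMP result (from the Fantunes side's point of view) and a set of filter tags. The record layout and field names are hypothetical, not Double Dummy Solver's actual export format.

```python
# Hypothetical record format: one dict per deal, with the net IMP result
# for the Fantunes table (relative to the other table) and a set of tags
# produced by whatever filters were applied.
deals = [
    {"imps": 5, "tags": {"fantunes_open", "fantunes_declare"}},
    {"imps": -2, "tags": {"opps_open", "fantunes_defend"}},
    # ... 5097 deals in the real data set
]

def imps_per_board(deals, tag=None):
    """Average net IMPs per board over the deals matching a filter tag."""
    subset = [d for d in deals if tag is None or tag in d["tags"]]
    if not subset:
        return None, 0
    return sum(d["imps"] for d in subset) / len(subset), len(subset)

for tag in (None, "fantunes_open", "opps_open", "fantunes_declare", "fantunes_defend"):
    avg, n = imps_per_board(deals, tag)
    label = tag or "all deals"
    print(f"{label}: {n} deals, {avg:+.2f} imps/board" if avg is not None else f"{label}: no deals")
```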
bjacobs Posted August 8, 2012
Welcome to the forums! Thanks for writing the book, and for your thoughts & anecdote here. It seems as though you've missed that the book and the OP are discussing net imps/board, not the number of swings or total imps exchanged. The bidding for pair A and pair B has zero net effect on imps/board in the long run.
No, I hadn't missed the discussion; it's just that I can't think of much to contribute. I did some calculations that are accurate if BBO Archives and Excel are to be believed, and drew a couple of conclusions from the results. Nothing in this thread makes me want to back off from those conclusions, although I will admit that there are some points made here that I simply don't understand. The analysis that I did is fairly important to me, because there are probably people in the world who are thinking: "Fantoni and Nunes are such great bridge players: think of what they could achieve if they played a normal system!" I am totally convinced that their system is one of the reasons for their greatness, rather than something that holds them back. Most of that belief stems from my own experiences with the system, for which I kept substantial statistical records that I chose not to publish in the book, although they can be found elsewhere on the internet if you are a REALLY good searcher :)
Cheers ... Bill
glen Posted August 8, 2012
... I kept substantial statistical records that I chose not to publish in the book, although they can be found elsewhere on the internet if you are a REALLY good searcher :)
I'm a REALLY good link clicker :)
Fantoni opens (1185 deals): +0.27
Nunes opens (1238 deals): +1.13
Nunes Revealed!
Can go on here for a while…
Please continue and post stats by opening.
han Posted August 8, 2012
Bill, it is wonderful that you have entered the discussion here. I haven't read your book yet, but I am very happy that you wrote a book on this topic and I'm looking forward to reading it. PrecisionL quoted this from your book:
2723 deals, net IMPs won = 1817, or 0.67 IMPs per deal on deals Fantunes opened the bidding.
1676 deals, net IMPs won = 645, or 0.38 IMPs per deal where the contract was the same at both tables.
CONCLUSION: Superior card play [both by declarer and the defenders] accounted for just over 57% of the IMPs won, while superior bidding (bidding judgment & system) accounted for 42% of the IMP gains.
This conclusion is too strong for several reasons. In my mind the biggest problem with the conclusion is that IMP scores are relative to the other table. Fantunes have often played with extremely good teammates, who are among the best defenders in the world. Therefore the IMPs gained by Fantunes are partly because they received (on average) easier defense than the pair they are compared with. You might say that this is true for all the hands, and that there is still the difference between the hands where the contract is the same and the hands where the contract is different. This is true, but good defense matters most on the hands where the contract is close, not on boring flat hands. And boring flat hands are exactly the kind of hands where both tables are likely to be in the same contract. So you would expect that good defense matters more on the set of hands where the contract is different. How big is this factor? I have no idea, probably nobody knows, which makes it hard to draw conclusions as above. But as Helene says, the data is very interesting, and it seems likely that Fantunes' bidding (system + judgement) is extremely good.
I am totally convinced that their system is one of the reasons for their greatness, rather than something that holds them back.
You play the system yourself in a high-level partnership, and you have often seen them play. Your opinion on the system should therefore be taken seriously. Unfortunately, you did yourself a disservice by pretending that this data analysis offers any evidence that your conviction is correct. Such a conclusion would be outrageous. Even if we could conclude that Fantunes win IMPs in the bidding (which seems likely, as I wrote above), it would be almost impossible to distinguish the system wins from the judgement wins, except perhaps by a careful case-by-case study. To my mind, the fact that you present this data as evidence for your conviction makes your claim seem much weaker.
han Posted August 8, 2012
For what it is worth, I would guess that of the three factors (judgement, partnership understanding and bidding system), judgement is by far the most important, and the second most important is the level of partnership understanding. Even at the highest level, bridge is still a game of mistakes, and consistently making good choices and having solid agreements is far more important than what those agreements exactly are. Even if you could find out exactly how many IMPs a pair would win on their bidding, it would therefore be impossible to deduce anything sensible about the quality of their bidding system.
ulven Posted August 8, 2012
1C revealed, based on the same set of data as above.
Total (534 deals): +0.48 imps/board
No overcall (259 deals): -0.13 (!!)
LHO overcall (136 deals): +1.40
RHO overcall (139 deals): +0.71
Breaking down overcalls of 1C:
Simple (240): +1.13
Jump (20): +0.85
Double jump+ (15): +0.07
2-level suit openings lumped together:
Total (540 deals): +0.60
No overcall (246): +0.47
LHO overcall (170): +0.52
RHO overcall (124): +0.98
They overcall/We declare (85): -0.42
They overcall/They declare (209): +1.17
Note: the filter does not state how a double (X) of the opening bid is classified. Will check with the author.
ulven Posted August 8, 2012
How would you interpret this? Uncontested auctions:
Fantoni opens the bidding (646 deals): -0.12 imps/board
Nunes opens the bidding (618 deals): +0.99
There might of course be something wrong with the filter, but the author seems to have spent a lot of time getting it right. Other forum members with collected data might want to double-check.
han Posted August 8, 2012
How would you interpret this? Uncontested auctions:
Fantoni opens the bidding (646 deals): -0.12 imps/board
Nunes opens the bidding (618 deals): +0.99
As far as statistical relevance goes, I don't know what the distribution of the number of IMPs per hand is. I would guess that the standard deviation is perhaps about 6 IMPs per hand (though if it turned out to be 3 or 8 I wouldn't be surprised). In that case the standard deviation of the average over 625 hands is 6/sqrt(625) = 6/25, or 0.24. That would make the difference more than 4.5 standard deviations, which is a lot (although not enough for people at CERN to conclude anything). Of course we are dealing with a difference between two tests, rather than a single test. For example, from this data we can conclude more surely that Fantunes score positively on the hands where Nunes opens than that Fantunes do better on the hands where Nunes opens than on the hands where Fantoni opens (the former is a single test, the latter is two separate tests). Still, it seems likely that this is not a low-data fluke, but that they actually do score better on hands where Nunes opens.
Now perhaps you are asking what we can conclude from that. I'd say: lacking more data, we can't conclude anything. Perhaps opener plays more hands, and Nunes plays the cards better. Perhaps Fantoni ends up playing more notrump slams after Nunes opens in a suit, and Fantoni plays those better. Perhaps responder gets to relay for opener's hand, and Fantoni is better at placing the contract. Perhaps Nunes opens more soundly (or more aggressively) and this creates a higher score. Competitive auctions are very different from responder's seat and from opener's seat, but who is doing what differently is very hard to judge. There are so many possibilities, and likely there is more than one factor at play. It is very hard to find out exactly what the cause is. Also, say you did find out exactly what the reason is, and you see that Fantoni takes better competitive decisions (0.5 IMPs), Nunes plays the hand better (0.3 IMPs), Nunes opens in a more disciplined way (0.2 IMPs) and the rest was noise (0.11 IMPs): it is not clear that that would be interesting to anybody but Fantoni and Nunes.
For a while I kept track of how well my partner and I did on hands where we defended, on hands where he was declaring and on hands where I was declaring. Somehow I thought that this might be interesting. After a while I decided it wasn't. For example, say we have a bad auction to a hopeless slam (sadly, this happened too often). Why is it relevant who is declaring? Perhaps you'd think that these hands even out, but they don't when you play a relay system (which we do). The only conclusion I drew was that these numbers were fairly meaningless. If you want to decide who does something better, you have to go through the hands manually to see whether somebody made a mistake or did something well. This is very time-consuming and incredibly difficult to be objective about. I tried doing this for my own scores, but I know I was not able to be objective. (I think I did learn something, though, just from going through the hands I played very thoroughly and keeping track of what kinds of mistakes we made. Also, often after a couple of days of bridge you feel like you played well or badly. After going through the hands manually, you might find that you made more mistakes than you thought you did, or fewer.)
My answer turned out to be longer than I was planning, and probably less useful than you were hoping for.
han Posted August 8, 2012
I found in an old bluecalm post (thanks bluecalm!) that the standard deviation is more like 3.5 IMPs per hand (or 0.14 for the average over 625 boards), which makes the 1.11 IMPs per board difference even more reliable. (Just a side note: I don't mean to come across as an expert on statistics. I am certainly not, and I could easily be making elementary mistakes.)
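For readers who want to check the arithmetic, here is a minimal sketch of the standard-error reasoning used in the last few posts, assuming independent boards and a per-board standard deviation somewhere between the 3.5 and 6 IMPs mentioned above. It computes the standard error of the difference between the two sample means (the "two separate tests" point), so the z-scores come out somewhat lower than the 4.5 quoted above; the interpretation is only as good as the assumptions.

```python
import math

def standard_error(sd_per_board, n_boards):
    """Standard deviation of the average IMP result over n independent boards."""
    return sd_per_board / math.sqrt(n_boards)

# Observed averages from the vugraph sample (uncontested auctions).
fantoni_avg, fantoni_n = -0.12, 646
nunes_avg, nunes_n = 0.99, 618

for sd in (3.5, 6.0):  # per-board SD estimates quoted in the thread
    se_f = standard_error(sd, fantoni_n)
    se_n = standard_error(sd, nunes_n)
    # Standard error of the difference between two independent sample means.
    se_diff = math.sqrt(se_f**2 + se_n**2)
    z = (nunes_avg - fantoni_avg) / se_diff
    print(f"sd={sd}: se(diff)={se_diff:.2f}, z={z:.1f}")
```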
semeai Posted August 8, 2012
I found in an old bluecalm post (thanks bluecalm!) that the standard deviation is more like 3.5 IMPs per hand (or 0.14 for the average over 625 boards), which makes the 1.11 IMPs per board difference even more reliable. (Just a side note: I don't mean to come across as an expert on statistics. I am certainly not, and I could easily be making elementary mistakes.)
Was it this post? It looks like bluecalm is talking about the play-of-the-cards-vs-double-dummy standard deviation as 3.5 imps/board. Later in the same post bluecalm mentions the bidding-(with-double-dummy-play)-vs-par-score standard deviation as 6 imps/board. The par score is e.g. more often doubled than real contracts, so that number is presumably a bit high for more practical concerns. Your guess of 6 imps/board sounds pretty reasonable, but of course someone with actual data coming along would be best.
PrecisionL Posted August 8, 2012 (Author)
Thanks for all the constructive comments, especially the data from bridgebrowser analysis. Here is a comment from the book to confound any conclusions or analysis: when opening 1NT, Fantoni sometimes passes 11 or 12 hcp hands when vulnerable, while Nunes will open the same hands 1NT (pp. 99-100). [8/9/12: paraphrased]
Cyberyeti Posted August 8, 2012
Thanks for all the constructive comments, especially the data from bridgebrowser analysis. Here is a comment from the book to confound any conclusions or analysis: when opening 1NT, Fantoni sometimes passes 11 or 12 hcp hands when vulnerable, while Nunes will open the same hands 1NT (pp. 99-100).
This is not uncommon; playing a weak no trump now, I open a lot more 11s than partner does. The stats from when one of them opens the bidding are interesting; I'd be very interested to see the relative stats when one of them opens 1NT vulnerable.
han Posted August 8, 2012
Was it this post? It looks like bluecalm is talking about the play-of-the-cards-vs-double-dummy standard deviation as 3.5 imps/board. Later in the same post bluecalm mentions the bidding-(with-double-dummy-play)-vs-par-score standard deviation as 6 imps/board. The par score is e.g. more often doubled than real contracts, so that number is presumably a bit high for more practical concerns. Your guess of 6 imps/board sounds pretty reasonable, but of course someone with actual data coming along would be best.
Yes, that was the URL, thanks for pointing out my sloppy reading. :)
bjacobs Posted August 9, 2012
Bill, it is wonderful that you have entered the discussion here. I haven't read your book yet, but I am very happy that you wrote a book on this topic and I'm looking forward to reading it. PrecisionL quoted this from your book:
2723 deals, net IMPs won = 1817, or 0.67 IMPs per deal on deals Fantunes opened the bidding.
1676 deals, net IMPs won = 645, or 0.38 IMPs per deal where the contract was the same at both tables.
CONCLUSION: Superior card play [both by declarer and the defenders] accounted for just over 57% of the IMPs won, while superior bidding (bidding judgment & system) accounted for 42% of the IMP gains.
That's not a verbatim quote from the book. I really only did one "interesting" thing with the collected data: I totalled the imps swung and won/lost on boards where the final contract was the same, in order to make an approximate calibration of the card-play aspects. (Clearly approximate, as there would be some deals where the auction affects the card play, particularly the opening lead.) It still seems to me to be a valid device.
Cheers ... Bill
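For concreteness, here is a minimal sketch of the kind of split Bill describes, assuming each board record carries the contract reached at both tables and the net IMP result. The record layout is hypothetical, and the split is only the approximate calibration he describes, not code from the book.

```python
# Hypothetical board records: contract at each table plus the net IMP swing
# for the side being measured. Not the book's actual data format.
boards = [
    {"contract_table1": "4S", "contract_table2": "4S", "imps": 1},
    {"contract_table1": "3NT", "contract_table2": "4H", "imps": -10},
    # ...
]

same = [b for b in boards if b["contract_table1"] == b["contract_table2"]]
diff = [b for b in boards if b["contract_table1"] != b["contract_table2"]]

def summarize(label, subset):
    total = sum(b["imps"] for b in subset)
    n = len(subset)
    print(f"{label}: {n} boards, {total:+d} imps, {total / n:+.2f} imps/board" if n else f"{label}: no boards")

# Same-contract boards approximate the card-play component; the remainder
# is attributed (roughly) to bidding judgment and system.
summarize("same contract", same)
summarize("different contract", diff)
```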
han Posted August 9, 2012
I agree that it is a useful and smart idea; I wouldn't know how to test the effectiveness of their bidding any better than that. However, doing this does not tell you how good their system is; it only tells you how good their bidding is. That's the point that semeai was making on page 1, and I don't see any way around this. As long as you are only concluding that Fantunes do well in the bidding, what you are doing is good and interesting. But it doesn't tell you anything about their system, and unfortunately you seem to be using this as an argument for the strength of their system.
gnasher Posted August 9, 2012
Fantoni opens the bidding (646 deals): -0.12 imps/board
Nunes opens the bidding (618 deals): +0.99
When opening 1NT, Fantoni sometimes passes 11 or 12 hcp hands when vulnerable, while Nunes will open the same hands 1NT (pp. 99-100).
Oh good, that proves that opening balanced 11-counts is a massive IMP-winner. I knew I was right.
aguahombre Posted August 9, 2012
However, doing this does not tell you how good their system is; it only tells you how good their bidding is. That's the point that semeai was making on page 1, and I don't see any way around this. As long as you are only concluding that Fantunes do well in the bidding, what you are doing is good and interesting. But it doesn't tell you anything about their system, and unfortunately you seem to be using this as an argument for the strength of their system.
New category for Posties? Best clarification of issue.
PrecisionL Posted August 9, 2012 (Author)
OK, math majors, statistically savvy posters and conscientious objectors: DESIGN AN EXPERIMENT THAT ALLOWS ONE TO DISCERN STATISTICALLY THE SIGNIFICANCE OF BIDDING, DECLARER PLAY, DEFENSE, LUCK and OPENING LEAD and whatever else you want to ascertain (and maybe System). I assume you are familiar with the book this thread is based on and Richard Pavlicek's web page: http://www.rpbridge.net/rpme.htm
rhm Posted August 10, 2012
I do not know why you are so defensive. You can argue with emotion and appeals to non-relevant authorities all you want. You can even attempt to discredit me by pointing out directly or indirectly that I am not as good as Fantunes, and thus rate to lose to them. That would be relevant if I was like LOL FANTUNES SUCK. However, your OP was very clear: you even wrote CONCLUSION, so I assume it was clear what I was talking about with the "criticism" I offered, that the conclusion was not based on math or logic. It is basically one big logical fallacy. You see people attempt to use numbers in this way, but there is often a problem with causation. This is very common in many books, studies, etc., where people try to analyze data. I realize that the conclusion might have been your conclusion from data that Bill Jacobs offered in his book, but I took it to mean that it was a conclusion Bill Jacobs drew in his book. Whoever drew that conclusion, specifically that bidding accounted for 42% of their gains, is obviously wrong. Here is an example: if my style is to bid scientifically, carefully catering to all possible slams etc., then I will sometimes find a good slam that the other table missed. OK, great, I won the board with my bidding system/judgement, and that is factored in. However, how about the times that I do the same thing, and I give them lots of information to make the killing lead, or the winning defense? This is the tradeoff you make for bidding carefully rather than blasting when slam is unlikely. So now I got to the same game as the other table, but I went down and they made it. By the conclusion above, this would mean that my card play was inferior, but really it was my bidding that caused me to lose that swing, despite ending up in the same contract. So the fallacy here is the assumption that if we get to the same contract, our system/judgement in the auction was irrelevant, and imps won or lost are based solely on the card play. See, that wasn't so hard! There are more things like that where the conclusion drawn does not logically follow from the data given. Ergo, the premise that because they win 0.67 imps/bd on hands where they open, and 0.38 imps/bd when they play the same contract, they win 57% of their imps from superior card play and 42% from bidding is just wrong. It is based on underlying bad logic/math, which is what I said. For what it's worth, it's possible that Fantunes are winning more than 42% of their imps from bidding. It is possible they win less from bidding. I have no idea from the data provided, and I would not attempt to draw that conclusion from that data. I did not draw any conclusions from it, or offer whether the book was good or not, or whether Fantunes were good or not, or whether I think their system is good or not, or what % of imps they win from bidding. I do not know those things. I do know that the conclusion given in the OP was ridiculous, which was the only thing I commented on. The rest of your last post was pretty amazing and also filled with bad logic, but I'll just end it here.
Would you accept the following premise as likely to be right: bidding and bidding judgement play, on average, a bigger role when the contract is different in a team match, and, as a corollary, card play plays, on average, a bigger role when the contract is the same?
Rainer Herrmann
han Posted August 10, 2012
OK, math majors, statistically savvy posters and conscientious objectors: DESIGN AN EXPERIMENT THAT ALLOWS ONE TO DISCERN STATISTICALLY THE SIGNIFICANCE OF BIDDING, DECLARER PLAY, DEFENSE, LUCK and OPENING LEAD and whatever else you want to ascertain (and maybe System). I assume you are familiar with the book this thread is based on and Richard Pavlicek's web page: http://www.rpbridge.net/rpme.htm
I am not a math major, nor nearly as statistically savvy as some other posters, and I was born too late to be a conscientious objector. However, I would suggest these tests (a rough sketch of the first one follows below):
For bidding: Let different pairs bid 1000 randomly dealt hands (the same hands for each pair) playing against two computers. Then let 4 computers play the contract that was reached. Compare the results.
For defense: Put different pairs into a room and let them defend 1000 randomly dealt hands against a computer program. Then compare scores.
For declarer play: Let 4 computers bid 1000 hands. Then let different players declare the hands against the remaining 3 computers. Compare the results.
As long as your computer programs and the hands dealt remain the same for different participants, you get a reasonable test. 1000 hands is enough for my taste, but more is better.
Alternatively, you could check whether a pair has won any European championships or Spingolds lately. If so, that should be a reasonable indication of their strength.
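A rough sketch of the bidding test, under the assumption that the pairs' auctions against the same fixed computer opponents have already been recorded, and that card play is held constant by scoring every reached contract with the same engine. The deal data, the reached contracts and the double_dummy_score helper are all hypothetical placeholders, not an existing program or API.

```python
# Hypothetical inputs: for each of the (identical) deals, the contract each
# pair reached against the same fixed computer opponents. Placeholder data.
contracts_reached = {
    "Pair A": ["4S by S", "3NT by N", "2H by S"],
    "Pair B": ["4S by S", "4H by N", "2H by S"],
}

def double_dummy_score(deal_index, contract):
    """Placeholder for a double-dummy (or robot) card-play evaluation of the
    reached contract on the given deal. A real test would call a DD solver."""
    demo_scores = {"4S by S": 620, "3NT by N": 600, "4H by N": -100, "2H by S": 110}
    return demo_scores.get(contract, 0)

def total_score(pair):
    deals = contracts_reached[pair]
    return sum(double_dummy_score(i, c) for i, c in enumerate(deals))

# With the card play fixed, any difference in total score is attributable
# to the bidding (system + judgement) of the two pairs.
for pair in contracts_reached:
    print(pair, total_score(pair))
```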
han Posted August 10, 2012
OK, math majors, statistically savvy posters and conscientious objectors: DESIGN AN EXPERIMENT THAT ALLOWS ONE TO DISCERN STATISTICALLY THE SIGNIFICANCE OF BIDDING, DECLARER PLAY, DEFENSE, LUCK and OPENING LEAD and whatever else you want to ascertain (and maybe System). I assume you are familiar with the book this thread is based on and Richard Pavlicek's web page: http://www.rpbridge.net/rpme.htm
I didn't answer your "(and maybe System)". In response I have a riddle for you. A big vase contains a large number of cubes, some small and some large, some red and some green. It is unknown how many of each kind there are. A blind man randomly draws a cube from the vase and determines whether it is small or large. Then he puts it back, and repeats. You are convinced that the vase contains more red cubes than green. How many cubes should you let the man draw before you can be 95% certain of your conviction?
bluecalm Posted August 10, 2012
For bidding: Let different pairs bid 1000 randomly dealt hands (the same hands for each pair) playing against two computers. Then let 4 computers play the contract that was reached. Compare the results.
This actually could be arranged if there were some decent bridge program with an API. I tried to approximate this with just the double dummy result (after the first lead), but having a decent program to actually play the hands (possibly many times over) would be better.
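To turn such double-dummy comparisons into the imps/board figures quoted earlier in the thread, you need the standard IMP scale. Here is a small helper implementing that scale; the example usage around it (comparing one table's score with the other table's result on the same board) is a sketch under the assumption that you already have both scores from the same side's point of view.

```python
# Standard IMP scale: upper bound of each point-difference band and the IMPs awarded.
IMP_SCALE = [
    (10, 0), (40, 1), (80, 2), (120, 3), (160, 4), (210, 5), (260, 6),
    (310, 7), (360, 8), (420, 9), (490, 10), (590, 11), (740, 12),
    (890, 13), (1090, 14), (1290, 15), (1490, 16), (1740, 17),
    (1990, 18), (2240, 19), (2490, 20), (2990, 21), (3490, 22), (3990, 23),
]

def imps(score_diff):
    """Convert a point difference (our score minus theirs) to IMPs, signed."""
    diff = abs(score_diff)
    for bound, imp in IMP_SCALE:
        if diff <= bound:
            return imp if score_diff >= 0 else -imp
    return 24 if score_diff >= 0 else -24

# Example: our table scores +620, while the double-dummy approximation of the
# other table's result on the same board is +170 for the same side.
print(imps(620 - 170))  # 10 IMPs
```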
rhm Posted August 10, 2012
I agree that it is a useful and smart idea; I wouldn't know how to test the effectiveness of their bidding any better than that. However, doing this does not tell you how good their system is; it only tells you how good their bidding is. That's the point that semeai was making on page 1, and I don't see any way around this. As long as you are only concluding that Fantunes do well in the bidding, what you are doing is good and interesting. But it doesn't tell you anything about their system, and unfortunately you seem to be using this as an argument for the strength of their system.
My main objection is different: I think that when you play a different system you have the advantage that your opponents are unfamiliar with the scenarios, which will crop up in different flavors and frequencies. At the top level, I do not mean that opponents do not understand the meaning of the bids or their implications and inferences. They will, at least in long team matches. But at the table you face a concrete bidding decision with a specific hand in a certain bidding scenario, like after LHO has opened with a two-bid. Opponents, no matter how well they have prepared, will face unfamiliar decisions for which they have less experience to draw on than the creators of the system, who play it all the time. This increases the chance that opponents will misjudge more frequently when playing against such new systems. I have no problem with that advantage for innovation and creativity. It is a bit like in chess, where top-level players search for new moves in tried and tested opening variants to catch their unsuspecting opponents by surprise.
When Precision was new, an unknown Taiwanese team lost only in the final of the Bermuda Bowl to the Blue Team. Was stone-age Wei Precision really so much better than the other bidding systems played at that time? When the convention Multi was new, it had a lot of spectacular success in top-level play. Is this still true today? The same goes for the Kamikaze notrump, etc.
The system is surely interesting, with an innovative approach. It is apparently good enough for top-level play. So it looks competitive. But only time will tell of any real superiority.
Rainer Herrmann
hrothgar Posted August 10, 2012
OK, math majors, statistically savvy posters and conscientious objectors: DESIGN AN EXPERIMENT THAT ALLOWS ONE TO DISCERN STATISTICALLY THE SIGNIFICANCE OF BIDDING, DECLARER PLAY, DEFENSE, LUCK and OPENING LEAD and whatever else you want to ascertain (and maybe System).
Here's how I'd proceed:
Start with eight computers playing the identical baseline system. Synchronize the seeds of the RNGs. Have the computers play a 1,000 board team match against one another. (Hopefully, the match is perfectly tied.) Repeat with all of your machines playing the system that you are testing.
Repeat this same exercise, only lobotomize the teams: allow them to understand the opponents' bidding system during the bidding portion of the match, but deprive them of this same information during the declarer play / defense. Once again, baseline the two teams.
Next, repeat the same exercise a dozen or so times, varying the seeds of the RNGs. Here, you should start observing some variance in the results. The point of this part of the exercise is to understand how much variance naturally occurs when two teams playing 2/1 GF or MOSCITO or whatever compete against one another.
At this point, you can probably start doing some reasonable comparisons. In particular, you can contrast the following situations (a sketch of the overall loop follows below):
1. Two teams playing the same system
2. Two teams playing different systems, with information about the bidding systems impacting declarer play and defense
3. Two teams playing different systems, with zero information about bidding
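A minimal sketch of that baseline-and-variance loop, assuming a hypothetical play_match(system_a, system_b, seed) function that runs a seeded robot-vs-robot team match and returns the net IMP result for side A. The stub body below is just reproducible noise (it ignores the system names entirely) so that the harness runs; a real implementation would drive an actual bridge engine.

```python
import random
import statistics

def play_match(system_a, system_b, seed, boards=1000):
    """Placeholder for a seeded robot-vs-robot team match returning net IMPs
    for side A. This stub produces reproducible noise and ignores the systems;
    a real implementation would drive an actual bridge-playing engine."""
    rng = random.Random(seed)
    return sum(rng.choice([-3, -1, 0, 0, 1, 3]) for _ in range(boards))

# Step 1: identical systems, identical seed. With a real engine and synchronized
# RNGs, this match should come out (near) tied.
baseline = play_match("2/1 GF", "2/1 GF", seed=0)

# Step 2: vary the seeds to see how much variance occurs naturally
# between two teams playing the same system.
same_system = [play_match("2/1 GF", "2/1 GF", seed=s) for s in range(12)]

# Step 3: compare a different system against the baseline under the same seeds.
cross_system = [play_match("Fantunes", "2/1 GF", seed=s) for s in range(12)]

print("baseline:", baseline)
print("same-system spread:", statistics.pstdev(same_system))
print("cross-system mean:", statistics.mean(cross_system))
```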
chasetb Posted August 10, 2012
When Precision was new, an unknown Taiwanese team lost only in the final of the Bermuda Bowl to the Blue Team. Was stone-age Wei Precision really so much better than the other bidding systems played at that time?
YES! Precision is relatively similar to Neapolitan/BTC, so it wasn't as though people were (or should have been) unfamiliar with defending against it. What made it better was that it was simpler than both, and it used a basic tenet of bidding theory that the others didn't: that shape is at least as important as strength, if not more so. Being able to set an immediate GF was also very nice. In fact, when the Blue Team came out of retirement, ALL 3 PAIRS PLAYED PRECISION! Of course, we now know that Precision has an advantage over 2/1, but if you spend enough time on your system, most or all of those gaps can be plugged and the difference is minute. Back in the '60s and '70s, bidding was still pretty bad; people were just beginning to understand how to build better systems. Precision was simple, streamlined, and yes, probably frustrating to deal with.
After looking at the Fantunes system and watching it be played, I don't think their system is that great. What helps it is the fact that THEY know their system (except that they occasionally forget whether a bid in competition is a FSJ or a splinter), other people don't have experience defending against it, and the system has a tendency to grind opponents down, especially non-WC opponents. Even WC opponents can get run over by the Fantunes system.