
Bridge Hands on Bridgebase.


IowaST8


I started playing daylongs at the beginning of the Covid-19 pandemic and have played approximately 30 boards a day, i.e. more than 11,000 boards by now, so the data sample is big enough to draw some conclusions.

I absolutely do not believe that the boards are truly random.

During the last several months I have picked up hands with 6-6 distribution seven or eight times, which is far more often than it should be (and that is just one example; other wild distributions such as 7-5, 8-4, etc. are also much more frequent than they should be).

I know you said your post is not about distribution, but I've done the same analysis numerous times over tens of thousands of daylong hands. And the results for shape are exactly what you would expect from random hand generation. Incidentally, if you had a sample since the start of the pandemic, why did you restrict your conclusion to 'the last several months'?

 

As for your "conspiracy theory" about BBO adding more variance to the hands, to be honest, I don't think they'd know how if they tried.

 

But if you wanted to quantify any numbers behind how you think they're doing so or what you think shouldn't be happening, I can attempt to refute them, like I have every other similar claim.


Instead of "defending" the results over time of your method of dealing, why not simply reveal the method? For example, my method is simply to create a randomized 52-card deck and then deal, from the top, N-E-S-W. It just so happens this coincides with the recommended dealing method for face-to-face play. There can always be slight wrinkles; for example, always give the first card to the dealer and then deal clockwise for the remainder of the hand. Stop using boards that have previously been played. They are great for teaching but totally unnecessary in these days of faster-than-light computers (OK, so technically maybe a tad slower than that). If a tournament is going to be NOT random, the exact reason for the lack of randomness must be revealed before a person commits to play in it.

And how do you "create a randomized 52-card deck"? That's the actual problem. How you distribute said {0..51} sequence among the hands is irrelevant, provided the sequence is truly random: all possible options occur with the same frequency, and, given previous information, it is impossible to predict the next. And really, the only way to prove that is to try it, to calculate percentages and sequences, and so on. "Defend it", in other words.

 

Please note, it is a very hard problem, one that computer people have been getting better at for 30+ years, and finding holes in "perfect solutions" for the same length of time.

 

Having said that, with hardware TRNGs available for <$100 (now, debiasing them and keeping them debiased is an art and takes much expert time), the only acceptable solution should be "generate 96 bits of randomness, and look that number up in The Big Book (or throw it out if you go over)", repeat ad nauseam. That's basically BigDeal, with a TRNG in place of the entropy pool (which frankly is good enough for practical purposes, even Bermuda Bowl-level practical purposes).
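As a rough sanity check on those 96 bits, and on what the "Big Book" lookup amounts to, here is a back-of-envelope sketch in Python (my own illustration, not BigDeal's actual code). There are 52!/(13!)^4, roughly 5.4 x 10^28, distinct deals, which is just under 2^96, and "looking a number up in the Big Book" is just unranking that index into a deal:

from math import factorial, log2

def deals(n, e, s, w):
    # number of ways to complete a deal with n/e/s/w cards still owed to each seat
    t = n + e + s + w
    return factorial(t) // (factorial(n) * factorial(e) * factorial(s) * factorial(w))

TOTAL_DEALS = deals(13, 13, 13, 13)    # 53,644,737,765,488,792,839,237,440,000
assert log2(TOTAL_DEALS) < 96          # about 95.4 bits, so 96 random bits (with rejection) cover it

def index_to_deal(idx):
    # "Big Book" lookup: turn an index in [0, TOTAL_DEALS) into a deal.
    # Cards are taken in a fixed order (say SA..C2); seats are N, E, S, W.
    counts = [13, 13, 13, 13]          # cards still owed to each seat
    seats = []
    for _ in range(52):
        for seat in range(4):
            if counts[seat] == 0:
                continue
            counts[seat] -= 1
            block = deals(*counts)     # deals in which this card goes to this seat
            if idx < block:
                seats.append(seat)
                break
            idx -= block
            counts[seat] += 1
    return seats                       # seats[i] = which hand holds card i

The real BigDeal encoding differs in detail, but the idea is the same: a uniform 96-bit integer below TOTAL_DEALS picks out exactly one deal, and anything at or above it gets thrown away.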

 

But for BBO casual games, anything that isn't clearly biased or predictable is Good Enough, and almost certainly better than hand shuffling.


 

Having said that, with hardware TRNGs available for <$100 (now, debiasing them and keeping them debiased is an art and takes much expert time), the only acceptable solution should be "generate 96 bits of randomness, and look that number up in The Big Book (or throw it out if you go over)", repeat ad nauseam. That's basically BigDeal, with a TRNG in place of the entropy pool (which frankly is good enough for practical purposes, even Bermuda Bowl-level practical purposes).

 

 

I am of the opinion that a PRNG is superior to a TRNG for generating bridge hands.

 

1. The benefits of using a TRNG versus a PRNG simply aren't salient. So long as you're using a good PRNG, you're fine.

 

2. There are enormous benefits to being able to demonstrate that the seeding and the hand generation were done in a fair / unbiased manner. Hans van Staveren did some nice work creating a protocol by which folks who are running tournaments with Big Deal can prove that they aren't biasing the deals in some weird way. I don't think that you can do the same with a TRNG.

 

There may be hardware-based RNGs that are capable of attestation (I'm not aware of any).


Definitely a point.

 

The opposite point is that whatever allows repeatability or attestation is now the key to that hand set. That is potentially easier to pass off or acquire than the hand records.

 

My concern about a TRNG is "don't roll your own crypto". There are people who spend their entire working lives making randomness sufficiently secure for cryptographic purposes, for things that are much more data-intensive and lucrative to crack than the Bermuda Bowl. Using a TRNG, you have to do all the debiasing yourself (and run, or verify, the maker's tests). The chance that you (or I) will do that wrong is not insignificant. The chance that a tool user without a crypto background will do it wrong is much higher, and it doesn't take much. By contrast, the chance that all those people making randomness safe have missed something available to [sponsor to be unnamed here], assuming they're not secretly working for the NSA or Mossad (and willing to cheat to win the BB using tools that are top-of-the-top-class government secrets), is very small.

 

Between those two, you are probably right. Still doesn't matter what the "dealing" method is.


And how do you "create a randomized 52-card deck"? That's the actual problem. How you distribute said {0..51} sequence among the hands is irrelevant, provided the sequence is truly random: all possible options occur with the same frequency, and, given previous information, it is impossible to predict the next. And really, the only way to prove that is to try it, to calculate percentages and sequences, and so on. "Defend it", in other words.

 

Please note, it is a very hard problem, one that computer people have been getting better at for 30+ years, and finding holes in "perfect solutions" for the same length of time.

 

Having said that, with hardware TRNGs available for <$100 (now, debiasing them and keeping them debiased is an art and takes much expert time), the only acceptable solution should be "generate 96 bits of randomness, and look that number up in The Big Book (or throw it out if you go over)", repeat ad nauseam. That's basically BigDeal, with a TRNG in place of the entropy pool (which frankly is good enough for practical purposes, even Bermuda Bowl-level practical purposes).

 

But for BBO casual games, anything that isn't clearly biased or predictable is Good Enough, and almost certainly better than hand shuffling.

 

My bad... the simplest method (not the most efficient, but at these speeds who cares) is the lottery method.

spades 1 through 13, hearts 14 through 26, diamonds 27 through 39, clubs 40 through 52

Pick a number between 1 and 52 (or 0 and 51, if you cannot stomach leaving 0 out of your programming).

That is card 1. Now pick another number from 1 to 52 and compare it to the one(s) already picked; keep doing that until an unused number comes up, and keep that up until 51 of the 52 possible numbers have been picked. Then assign the last card (check the deck from 1 up to 52; whichever number is missing is our last card).
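In Python that might look something like the following (a literal sketch of the procedure above, my own wording; note that everything rides on how good random.randint actually is, which is the point the replies below make):

import random

def lottery_deal():
    # spades 1-13, hearts 14-26, diamonds 27-39, clubs 40-52
    picked = []
    while len(picked) < 51:
        card = random.randint(1, 52)            # pick a number between 1 and 52
        if card not in picked:                  # only keep numbers not already used
            picked.append(card)
    # whichever number from 1..52 never came up is the last card
    picked.append(next(c for c in range(1, 53) if c not in picked))
    # deal "from the top": card 1 to North, card 2 to East, and so on around the table
    north, east, south, west = (picked[i::4] for i in range(4))
    return north, east, south, west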

 

This is "dating" me, but the randomizer in any decent software is based on the internal clock, so the results will vary depending on the processing speed of each individual computer. This factor alone would make predicting the next deal nearly impossible (I suppose a quantum computer might someday accomplish this task).


My bad... the simplest method (not the most efficient, but at these speeds who cares) is the lottery method.

spades 1 through 13, hearts 14 through 26, diamonds 27 through 39, clubs 40 through 52

Pick a number between 1 and 52 (or 0 and 51, if you cannot stomach leaving 0 out of your programming).

That is card 1. Now pick another number from 1 to 52 and compare it to the one(s) already picked; keep doing that until an unused number comes up, and keep that up until 51 of the 52 possible numbers have been picked. Then assign the last card (check the deck from 1 up to 52; whichever number is missing is our last card).

 

This is "dating" me, but the randomizer in any decent software is based on the internal clock, so the results will vary depending on the processing speed of each individual computer. This factor alone would make predicting the next deal nearly impossible (I suppose a quantum computer might someday accomplish this task).

I think you're still missing the whole point here - the *only* part of interest / relevance when it comes to bridge deals is how you 'pick a random number between 1 and 52' in a way that not only satisfies random distributions but is also not predictable based on previous random numbers.

 

A simple pseudo random number generator seeded with the internal clock is not even remotely close to good enough, and would be extremely easy to crack, and also wouldn't be sufficiently random, given the limited number of values the clock may take*
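To make that concrete, here is a toy illustration (my own, not any real site's generator) of why a clock seed is so weak: if the seed is just the time in seconds, an attacker who knows roughly when the deal was generated can simply try every candidate second until the output matches.

import random
import time

def deal_with_clock_seed(seed):
    random.seed(seed)                                # "seeded with the internal clock"
    return [random.randrange(52) for _ in range(8)]  # stand-in for a dealt board

now = int(time.time())
observed = deal_with_clock_seed(now)                 # what the site published

# try every second in the last hour until the observed output is reproduced
cracked = next(s for s in range(now - 3600, now + 1)
               if deal_with_clock_seed(s) == observed)
assert cracked == now   # a few thousand guesses recover the seed, and with it every later "random" value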

 

About 5 years ago the ACBL random number generator was cracked; given three consecutive boards, it was possible to determine every layout from then on.

 

You might like to read this article on how BigDeal works, and everything it had to consider. Especially interesting is how they were able to generate a seed from human keystrokes.

 

* Well, it may be good enough for BBO, given the immense number of things going on in parallel on the server, which nobody could really keep track of. But you'd still need to use a good PRNG that doesn't repeat as often as most basic ones would.


In the case of the ACBL hand generator, they were using a linear congruential generator, which made it particularly easy to crack.
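To illustrate how little it takes (a toy LCG with made-up parameters, not ACBL's actual generator): given three consecutive raw outputs you can solve for the multiplier and increment, after which every later output, and hence every later deal, is known.

def crack_lcg(x1, x2, x3, m):
    # x_{n+1} = (a*x_n + c) mod m; three outputs let you solve for a and c
    a = (x3 - x2) * pow(x2 - x1, -1, m) % m   # modular inverse (Python 3.8+)
    c = (x2 - a * x1) % m
    return a, c

m = 2**31 - 1                         # hypothetical modulus (prime, so the inverse exists)
a_true, c_true = 48271, 12345         # hypothetical generator parameters
xs = [123456789]
for _ in range(4):
    xs.append((a_true * xs[-1] + c_true) % m)

a, c = crack_lcg(xs[1], xs[2], xs[3], m)
assert (a * xs[3] + c) % m == xs[4]   # the "next deal" was predictable all along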

 

The hard part about this was actually mapping from the random number to the deal.

 

Please note: The ACBL implementation only covered a very small fraction of the deal space.

You could, in theory, create a rainbow table enumerating the complete system.


For understandable reasons, I suppose, this discussion always is diverted back to a debate about generation of deals as opposed to delivery of deals to tables in events. It’s more familiar ground I guess, and the conversation is never going to go anywhere because those methods are still secret.

 

I, for one, do not suggest that the generation of deals is flawed in any meaningful way. I do not believe that hands coming from the generator are flawed distributionally or in some other measurable way.

 

I have raised, and continue to raise, a suspicion about the delivery of hands from the generator to the so-called “deal pool,” and from there to the specific events which use these hands. I had these suspicions for a while, and they only intensified when my friends and I, who play lots and lots of robot challenges and have tried every variant of challenges possible, played a series of “non-best-hand” “just declare” MP’s challenges followed by a series of “non-best-hand” “just declare” IMP’s challenges. The hands were immediately, obviously different from the one scoring method to the other. This raised in our minds the strong possibility that hands were delivered from the deal pool based on some kind of “measured variance” (as Jardaholy wrote above) in respective scoring methods from play at other tables before delivery to our table. Playing “just declare” hands was eliminating one huge set of variables — human bidding — enabling us to see a clear difference in the subset of hands that was being chosen for our challenges.

 

Once you permit the supposition that the delivery of hands may be subject to an algorithm, which computers everywhere are doing in all sorts of ways, then you encounter all sorts of important issues. Once the hands are in the deal pool, they go from table to table, provided that no one sees them twice. They can be analyzed in all sorts of ways, including scoring variance. They can be delivered to specific events (and even, though I don’t suggest this myself) specific players. It’s a lot to process, and since there is no transparency about the deal pool, we know nothing.

 

(I once found a reference on a BBO ad page that stated that “deal pool worked a little like the college SAT’s.” The link is in an earlier post of mine. Like the SAT’s ???? Sure would like to know what whoever said that meant)

 

As I’ve said twice previously, this exact same test (two sets of robot challenges described above) can be done by anyone who is interested in really investigating what I am talking about, as opposed to diverting the conversation back to the deal generator blah blah. So far, no one has mentioned they even tried it. That is disappointing but also predictable. Confirmation bias and unwillingness to explore beyond one’s comfortable beliefs can cut both ways.


my friends and I, who play lots and lots of robot challenges and have tried every variant of challenges possible, played a series of “non-best-hand” “just declare” MP’s challenges followed by a series of “non-best-hand” “just declare” IMP’s challenges. The hands were immediately, obviously different from the one scoring method to the other.

 

Please provide a precise and testable hypothesis describing how the two sets of hands differ

 

Is one set of hands stronger than the other?

Are the distributions skewed in one set as opposed to the other?

 

Be specific.

 

Once you do so, it's easy enough to test this with a set of deals that are yet to be dealt


For understandable reasons, I suppose, this discussion always is diverted back to a debate about generation of deals as opposed to delivery of deals to tables in events. It’s more familiar ground I guess, and the conversation is never going to go anywhere because those methods are still secret.

 

The reason that people don't talk about the deal pools is that there is no reason to do so.

 

The deal pool system doesn't need to care about whether hands are biased with respect to strength or the distribution of various shapes or any of this sort of stuff. Presuming that the hand generators aren't borked, you can simply take a stream of inputs and slice and dice them into different pools.

 

Indeed, trying to add logic to look at hand strength (and tweak the allocation of hands accordingly) adds code, adds complexity, and increases the chances that something will get screwed up.

 

I trust BBO not to have screwed things up in so mind-bogglingly stupid a manner.


smerriman has it right - you're missing the only relevant piece of information. "how do you pick card 1? 2? 3?" And yes, that is in fact one of the most critical, most studied, and most difficult parts of computer science.

 

Anyone who doesn't realize that right away is almost certainly failing at it. I have post-graduate research in cryptography and random number generation, and I would almost certainly fail at it. And were I to succeed, people like Hrothgar, who have different priorities from me, would disagree with my methods. And he isn't wrong - his priorities are at least as valid as mine, in practice. We're just concerned about different parts of the distribution chain being attacked.

 

My crypto professor, basically day 1, made it clear that almost never is the actual crypto the source of failure - it's errors in process, or deliberate sabotage, or systems causing incentives to skip steps that seem unimportant. In this case, not paying attention to your source of randomness is an error of process - blindingly obvious to anyone who has tripped over it before, but frankly something that isn't even a thought to people who haven't yet.

 

So, were I to write something like this (which I wouldn't, because you Don't Roll Your Own Crypto), to keep Hrothgar happy, I would:

 

import random
import secrets

SEED_LENGTH = 256                    # bits drawn from the OS entropy source for the seed
POSSIBLE_HANDS = 53_644_737_765_488_792_839_237_440_000   # 52! / (13!)^4 distinct deals

seed = secrets.randbits(SEED_LENGTH)
random.seed(seed)
hand = {}                            # board number -> looked-up deal
for i in range(hands_in_set):        # hands_in_set and BigBook stand in for the set size and the Big Book lookup
    while i not in hand:
        hand_no = random.getrandbits(96)  # I guess it's possible to use randrange() here and skip the "if".  Assuming randrange is safe (see below)
        if hand_no < POSSIBLE_HANDS:      # throw out 96-bit values beyond the number of deals
            hand[i] = BigBook.hand(hand_no)
BigBook.create_dup_file(hand)
BigBook.print_hand_records(seed, hand)

 

But watch, it's easy to make mistakes here, too:

Changed in version 3.2: randrange() is more sophisticated about producing equally distributed values. Formerly it used a style like int(random()*n) which could produce slightly uneven distributions.

randrange() itself relies on getrandbits() under the hood for numbers this big, so falling back to it is sort of circular; if we did have to go through randrange() at 96 bits, we'd probably end up building the value out of multiple internal integers smashed together.

 

To be potentially safer, I could random.getstate() after a secrets seeding, and store that, pickled, with the hand record's uniqueid. All that would leave the computer would be the uniqueid, insufficient to seed another implementation. You could call the program with a uniqueid, and if that record had been generated on this machine, it would setstate() and rerun the generation, proving that the hand record hadn't been tampered with.
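Something like this sketch, say (my own illustration of the idea, with the rejection step and the Big Book lookup elided for brevity):

import pickle
import random
import secrets
import uuid

def generate_hand_set(num_hands):
    random.seed(secrets.randbits(256))         # seed from the OS entropy source
    state = random.getstate()                  # full PRNG state right after seeding
    uniqueid = uuid.uuid4().hex
    with open(uniqueid + ".state", "wb") as f:
        pickle.dump(state, f)                  # stays on this machine, never published
    hands = [random.getrandbits(96) for _ in range(num_hands)]
    return uniqueid, hands

def verify_hand_set(uniqueid, claimed_hands):
    with open(uniqueid + ".state", "rb") as f:
        random.setstate(pickle.load(f))        # replay from the stored state
    regenerated = [random.getrandbits(96) for _ in range(len(claimed_hands))]
    return regenerated == claimed_hands        # True => the hand records weren't tampered with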

 

To be potentially safer, I could... That's why you don't roll your own crypto.


 

Is one set of hands stronger than the other?

Are the distributions skewed in one set as opposed to the other

 

Neither of these. These questions (indeed your comment as a whole asking me to state an hypothesis) show your own assumption about “measurables.”

 

They reflect the factors that create scoring variance according to the two scoring types. This has much less to do with distributions or strength than it has to do with the following:

 

MP’s capturing an over trick or conceding another undertrick; competing correctly on part score battles

IMP’s game or slam hands, or hands where a swing is formed by being set in a part score (particularly doubled) vs. making a part score. Overtricks not particularly important at all.

 

As I have said, you will see the difference most obviously in just declare hands where the bidding has been done by the computer as well.


Or, to put it another way, the drunk man searches for his lost keys under the lamp-post because it's too dark to see anywhere else?

 

Prejudicial and completely unhelpful. Almost as if someone wanted to personify the thing I have criticized.


 

The deal pool system doesn't need to care about whether hands are biased with respect to strength or the distribution of various shapes or any of this sort of stuff. Presuming that the hand generators aren't borked, you can simply take a stream of inputs and slice and dice them into different pools.

 

Indeed, trying to add logic to look at hand strength (and tweak the allocation of hands accordingly) adds code, adds complexity, and increases the chances that something will get screwed up.

 

 

I have stated at least twice that I am not asserting that the deal pool cares about distribution or strength. But what if it cares (by which I mean the BBO program is made to care) about creating events that (one possible motivation) ensure good players who play well are more likely to win, instead of being beaten by bad players whose bad plays succeed because the robots are bad?


Neither of these. These questions (indeed your comment as a whole asking me to state an hypothesis) show your own assumption about “measurables.”

 

They reflect the factors that create scoring variance according to the two scoring types. This has much less to do with distributions or strength than it has to do with the following:

 

MP’s capturing an over trick or conceding another undertrick; competing correctly on part score battles

IMP’s game or slam hands, or hands where a swing is formed by being set in a part score (particularly doubled) vs. making a part score. Overtricks not particularly important at all.

 

As I have said, you will see the difference most obviously in just declare hands where the bidding has been done by the computer as well.

 

If this is truly obvious, state a simple concise and testable hypothesis


If this is truly obvious, state a simple concise and testable hypothesis

 

If it’s as obvious to you as it was to me, you won’t need an hypothesis, lol.

 

For a rigorous test, wouldn’t one need to have as a control group a significantly sized group of players, similar to BBO’s in makeup, playing random hands obtained outside of BBO, against BBO robots, and analyzing scoring variance in both scoring types against the BBO group playing daylongs of both scoring types?

 

And wouldn’t one be looking for a significantly wider bell curve of scoring results, both on an average hand by hand basis, and upon an event by event basis, in the BBO deal pool group vs. the control group?

 

I don’t know how one would perform such a test given that these conditions cannot be replicated.

 

Other interesting tests while we’re discussing:

1—One could do bell curves of the scoring variance of different BBO events — live robot games vs. daylong robot games vs. robot national events.

2—One could also do bell curves comparing results obtained (by a computer) playing under a brand new BBO account vs. accounts of current players of varying records and experience playing on BBO.

I’m not asserting we’d find anything in test 2, but it would be interesting data to have a look at.


If it’s as obvious to you as it was to me, you won’t need an hypothesis, lol.

 

 

You are the one making claims about how this is "obvious" to you and your friends.

If things are simple, then you should be able to describe this.

 

Put up or shut up.


 

For a rigorous test, wouldn’t one need to have as a control group a significantly sized group of players, similar to BBO’s in makeup, playing random hands obtained outside of BBO, against BBO robots, and analyzing scoring variance in both scoring types against the BBO group playing daylongs of both scoring types?

 

 

Your claim is that there are differences in the BBO deal pool.

 

More specifically, you stated that you and your friends

 

played a series of “non-best-hand” “just declare” MP’s challenges followed by a series of “non-best-hand” “just declare” IMP’s challenges. The hands were immediately, obviously different from the one scoring method to the other.

 

Stop trying to muddy the waters with random distractions

 

Let's focus on a simple and specific claim that you made


@Hrothgar: Respectfully, I think I have been quite clear.

 

I also think you are writing as if I am on the stand under cross examination. I think I have no obligation to play the role of “agitator” you have assigned me, while you play the role of “defender of order.” BBO is more than just our game, it is a for-profit concern. I have no idea why they do or don’t do anything, but I certainly did not come to BBO looking to make waves. What would be my motivation? What would be the motivation of all the people who have posted here over the years claiming to detect something not completely random in the mechanism? Yes, we all could be the saps who don’t recognize our own bias...but that is not science, that is psychology. We all have one. That just closes off the debate.

 

I outlined a test that would satisfy me. It’s clear but, alas, impossible to do. It seems (as I’ve repeated) that you would like a simple measurable that you can run through a computer on some hands—irrespective of the very factors I have stated about scoring types, events, players, and variance, complicated as they are. Perhaps as a reason not to try the test, yourself?

 

I think I have finished saying my say, and thanks for reading.


@Hrothgar: Respectfully, I think I have been quite clear.

 

I also think you are writing as if I am on the stand under cross examination. I think I have no obligation to play the role of “agitator” you have assigned me, while you play the role of “defender of order.” BBO is more than just our game, it is a for-profit concern. I have no idea why they do or don’t do anything, but I certainly did not come to BBO looking to make waves. What would be my motivation? What would be the motivation of all the people who have posted here over the years claiming to detect something not completely random in the mechanism? Yes, we all could be the saps who don’t recognize our own bias...but that is not science, that is psychology. We all have one. That just closes off the debate.

 

I outlined a test that would satisfy me. It’s clear but, alas, impossible to do. It seems (as I’ve repeated) that you would like a simple measurable that you can run through a computer on some hands—irrespective of the very factors I have stated about scoring types, events, players, and variance, complicated as they are. Perhaps as a reason not to try the test, yourself?

 

I think I have finished saying my say, and thanks for reading.

 

You are asking people to do work to test and to try to prove or disprove your claims.

 

If you want people to do this, you need to be more precise than "this is obvious to me, take a go at it"


Mythdoc, I'm really not understanding your posts at all.

 

Firstly, you say that you do not believe the generation of deals is flawed, and that you are only referring to the "deal pool". You included a reference to how BBO describes the deal pool. But then you talked about noticing the effects in robot challenges.

 

Deal pooling, as defined by BBO, is this. For every board in a daylong, a fixed number of hands is randomly generated (a pool). Each time a human plays that board in the tournament, one of the board's hands is selected at random from the pool. That way every hand in the pool is played approximately the same number of times, while avoiding the effects of cheating by people who enter the tournament multiple times. This is not a secret; it's just a trivial algorithm.
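A minimal sketch of what that trivial algorithm could look like (my reading of BBO's description, not their actual code; POOL_SIZE and the balancing rule are assumptions):

import random

POOL_SIZE = 36   # assumed; the real pool size isn't published

def build_pools(generate_deal, num_boards):
    # one fixed pool of deals per board, generated up front
    return [[generate_deal() for _ in range(POOL_SIZE)] for _ in range(num_boards)]

def assign_deal(pools, play_counts, board):
    # pick at random among the least-played deals for this board,
    # so every deal in the pool is played about equally often
    least = min(play_counts[board])
    choices = [i for i, n in enumerate(play_counts[board]) if n == least]
    i = random.choice(choices)
    play_counts[board][i] += 1
    return pools[board][i]

Note that nothing in it has to look at strength, shape, or scoring at all.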

 

In a robot challenge, both players get the same boards. Deal pooling is therefore 100% inapplicable.

 

Correct me if I'm wrong, but it appears you've seen the words 'deal pool', misunderstood them, and are talking about something completely unrelated - that you believe that BBO are intentionally biasing the hand generator by not dealing random hands, but by throwing out 'flat hands' based on the scoring.

 

So let's continue based on that.

 

You then stated two things:

 

a) That it was "immediately obvious" with a "clear difference" solely by playing challenge hands.

 

b) That the only accurate way to test whether this is true or not is to get a large set of players to complete truly random generated hands and compare with BBO's "maybe-not-so-random" hands. And that testing this is basically impossible.

 

Your earlier posts lined up with a) - you made it extremely clear you thought this was testable:

 

As I’ve said twice previously, this exact same test (two sets of robot challenges described above) can be done by anyone who is interested in really investigating what I am talking about, as opposed to diverting the conversation back to the deal generator blah blah. So far, no one has mentioned they even tried it. That is disappointing but also predictable. Confirmation bias and unwillingness to explore beyond one’s comfortable beliefs can cut both ways.

 

If this is true, all you have to do is clearly quantify the factor that made it completely obvious to you. Then that can immediately be put to the test.

 

Yet when pressed to quantify it, you've moved towards statement b), which basically admits that your 'immediately obvious' was completely made up.

 

So which is it? I am happy to run tests based on a). All you have to do is quantify what was 'obvious' in your head.


.....that you believe that BBO are intentionally biasing the hand generator by not dealing random hands, but by throwing out 'flat hands' based on the scoring.

 

Ok, yes, I have been saying this. Whether we call these “deal pool” hands or not is subsidiary to this point. I don’t think the hands from one event to another are the same, or from one scoring type to another. My hypothesis (hat tip to Hrothgar) is that hands that are flat for MP scoring occur less frequently than normal in an MP game, and hands that are flat for IMP scoring occur less frequently in an IMPs game.

 

[EDIT—note added] I am talking about individual robot games, which is what I have mainly played

 

So let's continue based on that.

 

You then stated two things:

 

a) That it was "immediately obvious" with a "clear difference" solely by playing challenge hands.

 

Yes, the thing that really threw up some questions for me and my friends was when we played the series of challenges. “Why in these just declare challenges were the MP set of hands so, so different from each IMP set, if they were not put through a filter?” we kept asking ourselves. I tried to elucidate these differences above, but perhaps I was too unclear.

 

—The MP hands typically had multiple decisions per hand designed to test one’s technique and appetite for risk in pursuing overtricks, saving undertricks, ruffing losers, finesses and other cardplay devices, establishing side suits, etc. (NOTE: The MP hands compared each to another didn’t have the same decisions, and these decisions were only occasionally influenced by distributions, splits and the like. All good bridge players know that the game is not as simple as distributions and splits.)

—The IMP hands were ridiculously simple by comparison.

 

Of course, my buddies and I playing was not a scientific process. But yes, seemingly time after time the MP hands just happened to have these MP-intensive decisions and the IMP hands didn’t. (I am not saying you could never go for an overtrick in an IMP hand. Please don’t infer that.)

 

b) If this is true, all you have to do is clearly quantify the factor that made it completely obvious to you. Then that can immediately be put to the test.

 

Yet when pressed to quantify it, you've moved towards statement b

 

Well then, do you know of an easy way to state a specific, simple quantitative factor that one could use to test hands for a “preponderance of scoring-method-specific decisions to be made”? I don’t.

 

But one could certainly test the hypothesis of fewer flat hands. That's where method B, as you put it, comes in. In order to do so, one would have to do a study along the lines of what I described above. Testing for fewer flat hands would not be a comprehensive test of the ways and purposes for which an algorithm might select or leave out hands, but it is ONE specific way to inquire, and perhaps the only way, since bridge is such a complicated card game that scoring results don't come down to one or another specific quantitative factor.

 

I hope this is clearer. Thank you for allowing me to elucidate in answer to less contentious questions.

