Leaderboard

Popular Content

Showing content with the highest reputation on 04/27/2023 in all areas

  1. Probably not, but yesterday the Royal Statistical Society hosted a talk by someone who is doing a PhD on large language models such as ChatGPT. Apparently, the way it works is that it fits the coefficients of a Markov model, i.e. a model that predicts the next word in the sentence. For example, if the previous four words are "... the spade finesse was ..." it may predict the next word as:

     marked: 20%
     likely: 30%
     unlikely: 30%
     doomed: 10%

     based on simple frequencies in the training database, like the autocomplete in Google Search. You then generalise this in various ways to allow it to predict the next word in situations where the exact sequence is very rare or non-existent, but where "similar" (in some sense) sentences appear in the training database. (A toy sketch of the counting step is appended at the end of this post.)

     This was a bit of déjà vu for me, since we toyed with such models in information theory classes when I was an undergrad in the 1980s. Those were simple statistical models such as linear empirical Bayes models, and we would use something like 100 parameters and about 10k of training data. Such miniature models can generate gibberish that looks like the desired language, but to generate meaningful or even useful text you obviously need much larger models and far more training data. The newest version of GPT has about a trillion parameters and is trained on about a petabyte (1E15 bytes) of data.

     Generalising this, it can predict what answer someone would give to your question if you posted it on Reddit, which was originally the main training source. Reddit is quite good for this as it is largely Q&A and contains a huge diversity of writing styles. Later versions use other sources as well, but it is not entirely clear to me how a source like Wikipedia would be used to train a Q&A model.
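     To make the frequency-counting step concrete, here is a minimal Python sketch of such a Markov next-word model. The corpus, function names, and context size are illustrative assumptions of mine; a real LLM is of course vastly more elaborate than this.

     # A toy sketch (my own illustration, not GPT's actual training code):
     # count how often each word follows a given four-word context in a
     # training corpus, then report those counts as probabilities.
     from collections import Counter, defaultdict

     def train_markov_model(tokens, context_size=4):
         # Map each context of `context_size` consecutive words to a
         # Counter of the words observed immediately after it.
         model = defaultdict(Counter)
         for i in range(len(tokens) - context_size):
             context = tuple(tokens[i:i + context_size])
             model[context][tokens[i + context_size]] += 1
         return model

     def next_word_probabilities(model, context):
         # Relative frequencies of the words seen after this exact context.
         counts = model.get(tuple(context), Counter())
         total = sum(counts.values())
         return {word: n / total for word, n in counts.items()} if total else {}

     # Hypothetical usage with a made-up corpus; with real data you might see
     # something like {"marked": 0.2, "likely": 0.3, "unlikely": 0.3, ...}.
     corpus = "the spade finesse was doomed because the spade finesse was marked".split()
     model = train_markov_model(corpus)
     print(next_word_probabilities(model, ["the", "spade", "finesse", "was"]))

     The generalisation step described in the talk, i.e. handling contexts that never appear verbatim in the training data, is exactly what this toy version cannot do; that is where the huge parameter counts come in.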
    1 point
  2. Some clarification is in order: the query was 'How do you rule?' When I run a game, I post and make these announcements prior to the session:

     1. a partnership's completed CCs must be presented to receive entry
     2. exchange CCs for the round (to minimize the need to ask questions)
     3. ALERTS ARE FORBIDDEN as provided by L81C2 & L80B2f

     I find that L16B1 specifies that Alerts are a system of communication other than by call or play, and thereby conflict with L73B1 in contravention of L80B2e; thus they are forbidden in face-to-face contests, and their use is subject to PP.
    1 point
  3. With this attitude to alerting, I can guarantee that you would get kicked from our BBO table within 10 hands. If alerting your agreements properly is too much work, perhaps bridge is the wrong game for you. Your opponents are entitled to the information, so when you withhold it you are committing the bridge equivalent of a yellow-card offence. Doing it deliberately is simply cheating. Are you a cheat, axman?
    1 point
  4. It looks like there was a system misunderstanding regarding splinters. These AIs get more human every year...
    1 point
  5. Sure, it happens that the score finally achieved at the table is good for the non-offending side. In that case the score stands, which means less work for the TD, and the non-offending side is happy.
    1 point
  6. Green vs red, with this hand structure and these features, you are closer to 4S than to a weak 2. The 4-card heart suit would do a lot to stop you opening were you 6-4, but with the 7th spade… In all cases, to answer your question, it is *not* a multi - unless one of the options is 7-4 in the majors with less than opening values.
    1 point
  7. I'm sure that there are many folks in the UK who dislike tomatoes....
    1 point