You are the math guy. I'm a computer scientist. I have an excuse for 2+3 = 7. I'm not used to base-10 addition. :)

In What Base?

is 2 + 3 = 7.

In Base 8 or Base 16, 2+3 is 5.

trashy waitresses

Ever get a decent cocktail? How was the pen at Borgata? The decks' cutoff?

Your sequence *2-3-4-5-6-A-X* has the player hitting his hard 20s. However, Eliot noted that except for busting on the last cards, the players were playing BS. (This was an answer to my question as to what would happen if the player was required to follow BS.)

So you need to move the position of the Ace. However, not any other position will work. For example, 2 ↔ A, 3 ↔ 4 fails also.

See if you can figure out where the Ace would have to be.

Second question: what was the dealer up-card?

try base 4. :)

3 + 2 = 11 there. :)

Or base 5.

10

But really, what happened was that I originally started with the most dealer cards I could imagine, which started off with 2,2,2,2,3,A,A,A,A,X for a 10 card hand. Which meant that the player had to get 4's (or bigger) to split. I chose 4's.

Later I decided that it might be better to maximize the number of cards in each player hand (since there were 4 of 'em) and then maximize the dealer on whatever is left. So I changed the 4's to 2's but didn't correct the math. :(

:)

OK, Eliot said 27 cards with the player following BS.

symmetry makes this easier, so I'm going that route:

So four hands as follows. A-2-4 (soft-17 = hit BS) -5 (12, hit) -2 (14, hit) -7 (21, stand)

A-2-4-5-2-7

Dealer has 10-6 and hits and breaks since 6 is the smallest card left.

That gives 4 player hands of 6 cards, plus the dealer's 3 card hand, total = 27.

So if we follow correct BS, the dealer has to have a 6 or 10 up. But if it were a 6 up, then the A-2 play would have been a double, so I go for 10 up in the above scenario, since that is all that is left. We can change the hands to this:

A-2-4-5-2-6, which gives 4 hands of 20,

and the dealer can have a 9-7 which still gives him a stiff that he has to hit, same number of total cards.

So, for mathprof, I can't figure out what the dealer has to have showing, since it could be a 10 in the first example, or a 9-7 in the second. Did I overlook yet another detail? :)

I realize that since I had a total of 7, it couldn't be base 4 math. :)

was just an editing error late at night. :)

*"I was thinking of this because in my bj simulation program I do not want to shuffle the whole deck each time, only the cards that had been played in the round need to be randomly swapped."*

A couple weeks ago, I posted on ET Fan's BJF PowerSim board a version of his sim program that cuts the execution time by 40% to 45% by avoiding shuffling the deck each time. Other than ETF's comments, I didn't receive any feedback about how valid this approach is. I think it has potential, and it seems to give good results, but perhaps it needs a little tweaking. Saving that much time seems to be worth exploring: Do you have any ideas for what data should be collected to test the algorithm?

(This is a copy and paste of the description I posted; my apologies to those who have already read it.) The general idea of the algorithm is this: Why shuffle every time a shoe is played out, when each shoe contains a large number of unique card sequences? Why not shuffle once, then deal at least a few of those unique sequences before shuffling again? Starting with that as a premise, I tried a lot of different techniques that actually didn't work very well. One thing that can make a difference -- the thing that makes card counting possible in the first place -- is that each shoe can have player-favorable and player-unfavorable segments, so it's important that the sequences used are not only unique but also well distributed through the whole shoe. In general, the idea is that no single card or small set of cards in the shoe should appear in more than its fair share of the sequences, or that will skew the results.

The technique used in this version of PowerSimX2 is this: Each time the shoe is shuffled, a "deal offset" variable is initialized to 1. This deal offset is used both as the starting point in the shoe and as the increment value each time a card is dealt (instead of incrementing by 1 card every time). So, the first shoe is dealt normally, starting with card 1 and continuing with 2, 3, 4, etc. When the shuffle routine is called again, instead of actually shuffling, the deal offset is incremented. So, on the second pass, card 2 is dealt first, followed by 4, 6, 8, etc.

When the current deal position (the variable named shoeTop in PowerSim) reaches the end of the deck, it is reset to a position determined when the deal offset was set. In the case of deal offset 2, the dealing position is reset to card 1, so the deal continues with card 1, 3, 5, etc. (i.e., the cards skipped on the first pass, which ensures that no card is dealt twice). When the deal offset is 3, the cards 3, 6, 9, etc. are dealt first, and the reset value can be set to 1 for a second pass (dealing cards 1, 4, 7, etc.), and then to 2 for another pass (dealing cards 2, 5, 8, etc.), which again ensures that no card is dealt twice. In general, for any given deal offset, the reset values need to range from 1 to that offset minus 1 to ensure that no card is dealt twice.

But what I found, experimentally, was that using an orderly method for setting the reset values -- e.g., just starting with 1 and incrementing each pass -- produced some undesirable patterns in the number of times each card in the shoe was dealt. So, instead, this algorithm randomly selects an initial reset value between 1 and the deal offset minus one. Each time the deal position reaches the end of the deck, it is set to the reset number, and the reset number is incremented for the next pass. Then, if the reset number reaches the deal offset number, it is "wrapped" back to 1.
(So, for example, for deal offset 6, the number 4 might be randomly selected as the first reset, so the reset values would be 4, 5, 1, 2, 3.)

This process continues to some preset limit, and then the deck is actually shuffled and the process repeated. In this version of PowerSimX2, I have set the limit to be half the number of cards, so for single-deck, the process above is used 26 times before shuffling again.
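A minimal Python sketch of this deal-offset scheme (the function name and structure are mine, not from PowerSimX2) may make the resulting card order concrete:

```python
import random

def deal_order(n_cards, offset):
    """Order in which card positions 1..n_cards are dealt for a given
    deal offset: start at card `offset`, step by `offset`, and on each
    wrap restart from a reset value that cycles through 1..offset-1,
    beginning at a randomly chosen one (as described above)."""
    if offset == 1:
        return list(range(1, n_cards + 1))        # normal first pass
    order = []
    reset = random.randint(1, offset - 1)         # random initial reset
    pos = offset                                  # first pass starts here
    while len(order) < n_cards:
        if pos > n_cards:                         # reached end of shoe
            pos = reset
            reset = reset + 1 if reset + 1 < offset else 1   # wrap to 1
        order.append(pos)
        pos += offset
    return order
```

Each offset deals every card exactly once: for example, `deal_order(52, 2)` begins 2, 4, 6, ... and finishes with the odd positions.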

You wrote:

*So if we follow correct BS, the dealer has to have a 6 or 10 up but if it were a 6 up, then the A-2 play would have been double, so I go for 10 up in the above scenario since that is all that is left. We can change the hands to this: *

Doubling A2 is not an issue, since these rules were No DAS.

However, you would not hit 12 vs a 6 up.

If there is a 10 up, you would not split 2s.

Perhaps you think that the Aces are split, but then you could only draw one card.

He said it was single deck, right? The dealer's card cannot be a 7 unless there is more cheating going on.

You didn't use any 3s...dealer can receive them (nt)

Oops. Did you use the 2s twice? (nt)

A bizarre idea just came to mind that in some ways couples a sim with a CA.

What if, instead of attempting to maximize the number of ways one shuffled deck can be played before re-shuffling, we attempted to find ways to increase the results from one shuffled deck played just once?

For example, suppose the first 5 cards dealt in a single-deck end with the result of Player 8-7-5 and Dealer T-8. Even knowing that a simulator's playing parameters are set BEFORE the simulation actually begins, this hand (the five cards) nonetheless represents at least two outcomes, not just one. The second implied outcome of Player 8-7-5 and Dealer T-8 is Player 7-8-5 and Dealer T-8. By dealing just five cards, we've actually played two hands. In both cases, the same unplayed cards remain available later in the deck. In both cases, the player's count at play-decision-time remains the same. By playing just one hand, we've assumed that we've actually seen two decks, one that starts with 8-T-7-8-5 and one that begins with 7-T-8-8-5. But we don't stop there.

We deal a second hand from the remaining unplayed cards in our original deck. Suppose this second hand results in Player 6-9 and Dealer 2-T-6. We know that the second hand also represents two different outcomes, just like the first one did. This hand could've resulted either from 6-2-9-T-6 or 9-2-6-T-6. Taking the first hand's two "outcomes" (from the previous paragraph) and placing them next to this second hand's two "outcomes" gives us a total of four possible outcomes thus far. In other words, we've played the first ten cards in four different decks, not just one.

We continue to play in this manner until our pre-set penetration level is reached (or the deck is exhausted?). Suppose we set our sim for a penetration of 26 cards, meaning any hand in progress at the 26th card would be allowed to be played to its conclusion, at which time the deck would be re-shuffled. We then look back and see that we've played 5 hands to reach the "end" of this deck. In effect, we've played 32 different decks (2^5), not just one. We give each of the five hands we played "credit" for actually being played 32 times, not just once each. (To be completely accurate, you'd have to tally the original hand as "16" times when the Player's cards were 8-7-5 and "16" times when the cards were 7-8-5, but I suspect most sims drop them both in the same "bucket", anyway).
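The bookkeeping in the example above can be sketched directly (both function names are mine, for illustration only):

```python
def swap_player_first_two(deal):
    """Return the 'implied' second ordering of a heads-up deal laid out
    as (P1, D1, P2, D2, P3, ...): the player's first two cards are
    transposed, leaving the dealer's cards and the rest of the deck
    unchanged."""
    d = list(deal)
    d[0], d[2] = d[2], d[0]
    return d

def implied_decks(hands_played, dealer_swaps=0):
    """Distinct deck orderings represented by one dealt deck under the
    transposition argument: each hand doubles the count, as does each
    qualifying dealer transposition (the later extension)."""
    return 2 ** (hands_played + dealer_swaps)
```

So the five-card hand Player 8-7-5 vs. Dealer T-8 and its transposed twin come from `swap_player_first_two`, and five hands to the cut card give `implied_decks(5)`, i.e. 32 decks.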

Of course, at this point it's easy to think that the result of our "new" sim will simply show 32 times the number of hands played, with the proportion of the outcomes (win vs lose vs push) exactly the same as the "old" sim. But that's not entirely true because every new deck we play won't always result in a multiple of 32 being applied to the results of each hand. Sometimes, we'll only be able to play four hands before the cut-card, and sometimes we'll be able to play six or even seven.

If I understand the notion of trying to achieve a particular level of certainty (confidence?) in our results, will this method of "implied exponential simulation" improve the confidence of the results? I'm guessing that it doesn't. However, without actually dealing any more decks, the results will truly *reflect* play from the visualization of exponentially more decks.

If this idea has any merit at all, then writers of simulators might have to re-think confidence in terms of hands played and instead approach confidence from the number of decks played, since the number of hands "played" will have increased without any "true" increase in randomness in our decks (or so it seems to me).

I should also mention that a GREAT simulation programmer could theoretically create an even larger multiple for just one deck (or shoe) by including, whenever applicable, the doubling-effect of not just the Player's hand but the Dealer's hand, too. In other words, if the sim parameters say that the Player would hit an 8-7 hand against a Dealer T and the parameters also say that the Player would hit an 8-7 against a Dealer 8, then there are four possible outcomes from the scenario described in the initial hand described above, not just one. The four "decks" would be the original 8-T-7-8-5, the afore-mentioned 7-T-8-8-5, and two new ones, 8-8-7-T-5 and 7-8-8-T-5. From the original example, the final multiple for the deck has just grown from 32 to 64. In those cases where a play against a Dealer's 8 would be different than a play against a Dealer's T, the exponent wouldn't be affected (the factor would be "1", not "2").

When all is said and done, it may just be that it's not necessary to actually re-tool a sim to perform the acrobatics I've described. By examining millions of already-canned sims, it may be possible to derive a commonly-accepted approximate exponent that can be universally applied to all sims of similar decks and penetration. For example, for the single-deck game played to a penetration of 26 cards, we may just say "these results imply a rough multiple of 30" (or whatever the actual number turns out to be).

And I'm gonna be real pissed if this is a valid idea and Norm's had it in his simulators for years. :)

So four hands as follows. 2-A (hit) -4 (soft-17 = hit) -5 (12, hit) -3 (15, hit) -6 (21, stand)

2-A-4-5-3-6 for 4 hands total = 24 cards.

Now the dealer.

Only 7s, 8s, 9s, and 10s left. The dealer has to hit, which means he has to have 16 or less, and since there are no aces left for a soft 17, he has to have 7-8 or 7-9 to take the third card.

And now I see your point, the dealer's upcard has to be a 7 to split the 2's.

Total for BS play = 27 as Eliot stated.

Don't know how I used the 2's twice. I edited the thing enough times that apparently I totally whacked it up on the final attempt...

Any glaring errors above? Only 4 of each card are used, and BS was played on all 4 player hands.

The random numbers are not independent.

Take the numbers 1,3,5,2,4. No matter where you start the "deal point" you get the same group of numbers, all in the same order, but with the chain merely broken in the middle and they are then spliced together.

The problem is that if you have an A-10 in the "deck" then every time except for once, you are going to deal an A followed by a 10. Or, thinking about it another way, you are doing a heads-up sim, and four consecutive cards are A-10-10-10

There are 52 different "starting points" in this deck. And except for the three that will split up the A-10-10-10, you are going to deal a snapper to the player and a 20 to the dealer each time you run through 52 cards. I haven't tried the tests, but I'd bet that most tests are going to fail on that stream of random numbers. Because they are not "independent".

Just my $.02... it is a good performance hack, but it is going to skew the results...
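The rotation objection is easy to check numerically. This hypothetical helper (names are mine) counts how many rotations of a deck keep a given run of cards contiguous:

```python
def rotations_preserving_run(deck, run):
    """Count the rotations of `deck` in which `run` still appears as a
    contiguous block -- illustrating that a mere change of starting
    point leaves card sequences intact except where the cut point
    happens to fall inside the run."""
    n, k = len(deck), len(run)
    count = 0
    for s in range(n):
        rotated = deck[s:] + deck[:s]
        if any(rotated[i:i + k] == run for i in range(n - k + 1)):
            count += 1
    return count
```

For a 52-card deck containing an A-T-T-T run, only the 3 rotations that cut through the run break it up; the other 49 deal the snapper and the 20 again.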

The "deal offset" is both the starting card and the deal increment -- the number of cards to *skip* -- so it's different each pass. ET Fan pointed out the case where in a heads-up game, if there is a TT v. TT push when the increment is 1, then either the player or the dealer will get TT when the increment is 2. However, that's not really the same cards coming out in the same sequence, and it's a very small effect with respect to the 26 passes that would be done for single-deck.

I have done quite a few tests with PowerSim sims ranging from 10 million rounds up to one billion rounds, and so far, I haven't seen any noticeable difference between using this "shuffle" and shuffling each time. Although it seems logical that there has to be *some* difference, it appears to be very small with respect to the normal distribution of results.

Try with 5 billion (nt)

Roger, you might want to try it first against a new deck in perfect sequence or even a deck prearranged to give an ideal result.

A sim that strays from how real world casinos actually shuffle would leave one wondering if he could rely on the results.

Your incremental shuffle would give more results but only the original shuffle would give the real results. The second and third shuffle would be in effect manipulated.

Go for it; sometimes you're looking for something and find something even better.

Here is the correct "answer"

Use your scheme and write each random number out to a file, in the order you "deal" them (I assume you are dealing 1-52 for a single deck game, but if you are doing a shoe, then maybe 1-312 are the values.) Keep cycling as you mentioned and continue writing the numbers out.

I don't know how easy they are to find, but there are lots of free random number generator test programs around that will do things like "the poker test", "runs test", "uniformity test", etc. See if your numbers "pass". If they do, then your idea might be workable. I suspect the numbers are going to flunk more than one test, because I've used random numbers a +lot+ over my lifetime, and I have seen most (but probably not all) of the strange results one can produce when the RNG is just a tad "off".
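As a toy illustration of one such check, a bare-bones runs test (above/below the median) can be sketched in a few lines; the packaged test suites mentioned above are far more thorough:

```python
import statistics

def runs_above_below(seq):
    """Count runs of values above/below the median of `seq` (values
    equal to the median are dropped).  For a random sequence of n
    values split evenly, the expected count is about n/2 + 1; a deck
    re-dealt by rotation tends to produce suspicious values."""
    med = statistics.median(seq)
    signs = [x > med for x in seq if x != med]
    return 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

A perfectly alternating sequence maximizes the run count, and a sorted one minimizes it, so both extremes flag non-randomness.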

There are probably other ways of "shuffling". For example, define a 10-word array card[10], where card[0] is all A's, card[1] is all 10's, and card[2]-[9] are 2-9. You initialize each element to the number of that kind of card that is present. For a SD game, every array element except card[1] is set to 4, and card[1] is set to 16 (16 tens in the starting deck). You also initialize a "total count" to 52, and now you generate a random number between 1 and 52. Say you get 21. You start with that value (21) and subtract card[0] from it. It is now 17, which is positive, so you keep going. Subtract card[1] from it and you get 1, so you keep going. Subtract card[2] from it and you get -3. The card you deal is a "2". You now subtract 1 from card[2] and decrement that global counter to 51 (only 51 cards left now). That is both fast and easy, and if you use a good RNG that is fast (a Mersenne Twister is good here) this flies. It is one way that students often "deal cards" in a course I teach. This way there are no "randomly-based memory accesses", everything fits into one or two cache lines, and it will run like the blazes on fast hardware...
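A Python sketch of this counting-array deal (following the worked subtraction example, in which a random value of 21 deals a "2" -- which implies the four aces sit in card[0] and the sixteen tens in card[1]; the function name is mine):

```python
import random

def deal_from_counts(card, total):
    """Deal one card from the counts array: card[0] = aces,
    card[1] = tens, card[2]..card[9] = ranks 2-9.  Draws a random
    value in 1..total, subtracts bucket counts until it goes
    non-positive, decrements that bucket, and returns its index."""
    r = random.randint(1, total)
    for bucket, count in enumerate(card):
        r -= count
        if r <= 0:                   # went non-positive: deal this rank
            card[bucket] -= 1
            return bucket
    raise RuntimeError("total disagrees with counts")
```

Dealing out an entire single deck this way exhausts every bucket exactly, with 16 tens and 4 of every other rank dealt.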

Hope my explanation was both (a) clear and (b) bug-free. :)

*Here is the correct "answer"*

*Use your scheme and write each random number out to a file, in the order you "deal" them (I assume you are dealing 1-52 for a single deck game, but if you are doing a shoe, then maybe 1-312 are the values.) Keep cycling as you mentioned and continue writing the numbers out.*

*I don't know how easy they are to find, but there are lots of free random number generator test programs around that will do things like "the poker test", "runs test", "uniformity test", etc. See if your numbers "pass". If they do, then your idea might be workable. I suspect the numbers are going to flunk more than one test, because I've used random numbers a +lot+ over my lifetime, and I have seen most (but probably not all) of the strange results one can produce when the RNG is just a tad "off".*

But this will test for pure random numbers, not random arrangements of a shoe, unless you know of a "random shoe test program." If the RNG test program is any good at all, the shoes will flunk, because every number from 1 to Cards is represented exactly once (or exactly n times for n shoes).

*There are probably other ways of "shuffling". For example, define a 10-word array card[10], where card[0] is all A's, card[1] is all 10's, and card[2]-[9] are 2-9. You initialize each element to the number of that kind of card that is present. For a SD game, every array element except card[1] is set to 4, and card[1] is set to 16 (16 tens in the starting deck). You also initialize a "total count" to 52, and now you generate a random number between 1 and 52. Say you get 21. You start with that value (21) and subtract card[0] from it. It is now 17, which is positive, so you keep going. Subtract card[1] from it and you get 1, so you keep going. Subtract card[2] from it and you get -3. The card you deal is a "2". You now subtract 1 from card[2] and decrement that global counter to 51 (only 51 cards left now). That is both fast and easy, and if you use a good RNG that is fast (a Mersenne Twister is good here) this flies. It is one way that students often "deal cards" in a course I teach. This way there are no "randomly-based memory accesses", everything fits into one or two cache lines, and it will run like the blazes on fast hardware...*

This works OK for shallow penetrations, but it's slower and more complicated than the old Knuth (actually predates him, I believe, but his name got attached to it somehow) algorithm:

FOR i = Cards TO 2 STEP -1
    SWAP Shoe[i-1], Shoe[Rnd * i]
NEXT i

where the cards are indexed from Shoe[0] to Shoe[Cards - 1], and Rnd is a random number 0 <= Rnd < 1.
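For reference, here is the same loop in Python -- a direct transcription of the Fisher-Yates/Knuth shuffle above, with `Rnd * i` explicitly truncated to an integer index:

```python
import random

def knuth_shuffle(shoe):
    """In-place Fisher-Yates (Knuth) shuffle: for i = Cards down to 2,
    swap shoe[i-1] with a uniformly chosen shoe[j], 0 <= j < i."""
    for i in range(len(shoe), 1, -1):     # i = Cards down to 2
        j = int(random.random() * i)      # 0 <= j < i
        shoe[i - 1], shoe[j] = shoe[j], shoe[i - 1]
    return shoe
```

The shuffle is a permutation, so a shuffled shoe always contains exactly the same cards it started with.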

ETF

© 2009-2024 | www.bj21.com
