different program, not a sim
The 18 vs A error was in a program which did a combinatorial analysis of all possible card combos following any given 2-card player hand vs any given dealer up-card. In that specific case, the error had to do with the treatment of that one specific high soft hand. Obviously, that program is not a "sim"; it is an entirely different program.
The program which revealed the effects of bad players was not technically a "sim" either, but a "model" of the 6-deck stutter shuffle. Contrary to Mr. Can't's impression, there were no errors in it. The production of like-card bunching was actual, not exaggerated.
This program was written because I just could not get sims which used a computer random shuffle (CRS) to jibe with actual experience in Atlantic City (at a time when only the Resorts and Caesar's Boardwalk were open and the rules were somewhat different than they are now).
The first observation was that the computerized "players" did significantly worse against the stutter shuffle than they did against the CRS. Even using the hi-lo count, the players were at a nearly 1% disadvantage. Initially, this led only to a decision to stop playing in Atlantic City and play only in Vegas.
Some time (months) later, I learned about the "like-card clumping" claims made by some people; so I revisited this program to see if this could be the reason for the differences previously noted.
It took me some time to decide on what would be a valid "yardstick" with which to measure like-card bunching. I eventually settled on using adjacent identical valued cards as the "ultimate like-card bunch." I chose this because it was easy and fast to test for, and because the math necessary to set up the theoretical ideal to compare it against was relatively simple. After running the tests, I found that adjacent pairs in newly shuffled shoes appeared nearly 3% more often than they "should" appear. Ergo, the stutter shuffle *was* producing more like-card bunching than a purely random shuffle would.
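The yardstick above can be sketched in a few lines of Python. This is my own reconstruction, not the original code; in particular, I treat all ten-valued cards as a single value, which may differ from how the original test grouped them:

```python
import random
from collections import Counter

def adjacent_same_value_pairs(shoe):
    """Count positions where two neighbouring cards share the same value."""
    return sum(1 for a, b in zip(shoe, shoe[1:]) if a == b)

def expected_pairs(shoe):
    """Expected count for a perfectly random shoe: each of the n-1 adjacent
    slots matches with probability sum_v c_v*(c_v-1) / (n*(n-1)), where c_v
    is the count of value v, giving E = sum_v c_v*(c_v-1) / n."""
    n = len(shoe)
    return sum(c * (c - 1) for c in Counter(shoe).values()) / n

# Build a 6-deck shoe by blackjack value: 24 each of A..9, 96 ten-valued cards.
shoe = [v for v in range(1, 10) for _ in range(24)] + ['T'] * 96

random.shuffle(shoe)  # truly random shuffle: the baseline the stutter shuffle is compared against
print(adjacent_same_value_pairs(shoe), expected_pairs(shoe))
```

Replacing random.shuffle with a model of the casino shuffle and averaging the observed count over many shoes is the comparison that showed the roughly 3% excess.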
I attribute this to the fact that the stutter shuffle grossly undershuffles the cards (the top and bottom half decks before the cut getting only two riffles, and the middle 5 decks getting only three).
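To make "only two or three riffles" concrete: a standard stand-in for a single riffle is the Gilbert-Shannon-Reeds model (an assumption on my part -- the model of the casino shuffle is not reproduced here):

```python
import random

def riffle(cards):
    """One Gilbert-Shannon-Reeds riffle: cut the packet at a Binomial(n, 1/2)
    point, then merge the halves, dropping a card from each half with
    probability proportional to the cards remaining in it."""
    n = len(cards)
    cut = sum(random.random() < 0.5 for _ in range(n))
    left, right = list(cards[:cut]), list(cards[cut:])
    out = []
    while left or right:
        if random.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def riffles(cards, k):
    """Apply k riffles in a row -- two or three, as in the stutter shuffle."""
    for _ in range(k):
        cards = riffle(cards)
    return cards
```

Roughly seven riffles are needed to randomize even a single 52-card deck; two or three leave long runs of the original order intact, which is exactly the undershuffling described above.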
Furthermore, this undershuffling produces a dramatically different result than non-shuffling, for reasons I have set forth in other posts.
Once the existence of like-card bunching with this particular shuffle had been established, I set out to see if there was any way to capitalize on it.
So I wrote other programs to record the results of playing the various hands every way they could be played, and keep track of the 5 previously seen cards in various categories. First, I had to define "needed" cards, which I did as follows:
For double downs: If the dealer up-card was 7 or less, a needed card was one which would produce a final score of 18 or more. If the dealer up-card was 8 or better, a needed card was one which would produce a final score of 19 or more.
For hit/stand: any card which would not break the hand was defined as "needed."
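The two definitions can be written as a pair of predicates. This is a sketch under my own assumptions: I pass hand totals rather than full hands, and I treat a card that busts the hand as not "needed" even on a double:

```python
def needed_on_double(hand_total, dealer_up, card_value):
    """Double downs: a card is 'needed' if it produces a final score of
    18 or more vs a dealer up-card of 7 or less, or 19 or more vs 8 or
    better. Totals over 21 are busts, hence not 'needed' (my assumption)."""
    target = 18 if dealer_up <= 7 else 19
    final = hand_total + card_value
    return target <= final <= 21

def needed_on_hit(hand_total, card_value):
    """Hit/stand: any card which would not break the hand is 'needed'."""
    return hand_total + card_value <= 21
```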
I then recorded the results in terms of the following categories:
0 of the last 1 needed; 1 of the last 1 needed;
0 of the last 2 needed; 1 of the last 2 needed; 2 of the last 2 needed;
........
0 of the last 5 needed; 1 of the last 5 needed;...5 of the last 5 needed.
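A bookkeeping sketch for those categories (my own reconstruction; the class name and the outcome encoding are illustrative, not the original program's):

```python
from collections import deque, defaultdict

class NeededTracker:
    """Tracks whether each of the last 5 cards seen was 'needed', and
    tallies hand outcomes under every 'k of the last m needed' category."""
    def __init__(self):
        self.flags = deque(maxlen=5)      # True if that card was 'needed'
        self.results = defaultdict(list)  # (m, k) -> list of hand outcomes

    def see_card(self, was_needed):
        self.flags.append(was_needed)

    def record(self, outcome):
        flags = list(self.flags)
        for m in range(1, len(flags) + 1):  # last 1, last 2, ... last 5
            k = sum(flags[-m:])             # how many of those m were needed
            self.results[(m, k)].append(outcome)
```

For example, after seeing a not-needed card and then a needed card, recording a won hand tallies it under both "1 of the last 1 needed" and "1 of the last 2 needed."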
The results of this program produced variations from Basic Strategy (e.g., do not double any soft combo if 1 of the last 1 card seen was "needed" and surrender 14 vs Ten if none of the last 3 cards seen were "needed").
The next program set up 6 players at a 6-deck table, all using these variations. As I reported earlier, the variations helped the first two players, but performance got worse and worse the further a player sat from 1st base, almost exactly linearly.
The subsequent tests (players #1 and #2 using the variations and #3-#6 using BS, etc.) are already reported.
The fact that the final test (players #1-#5 using BS and only #6 using the variations) resulted in the first 5 players doing exactly as well as "normal" BS play should tell you the program was operating properly and correctly.
The final conclusions of the very specific situation stand:
First, there is/was no practical way to use "like-card bunching" to improve your results. Yes, it is an observable phenomenon in this specific situation, but no, it is not a reliable predictor of anything; and
Second, in this very specific situation, bad players to your right DO hurt you while bad players to your left do not.
Now, do the results obtained from the tests of this very specific, and no longer existent, situation apply to other situations, including the currently "normal" use of automatic shuffling machines?
I cannot say for sure; for I have never run any further computer tests, primarily because of the difficulty in programming the behaviour of a "bad" player. The main characteristic of "bad" players is inconsistency, and I don't know how to *accurately* recreate that behaviour in a computerized "player." Sure, I could program "players" who never hit stiffs; or who always split tens; or who always double 12s; BUT this is NOT representative of an actual "bad" player -- it is too consistent. Thus, any results would be pretty much meaningless.
BUT, in actual table experience, I have found the effects of "bad" players to be as I discovered. The effects of the bad plays of players to your left pretty much even out -- they help you as much as they hurt you. But bad players to your RIGHT hurt you a lot more often than they help you.
This seems to fly in the face of logic, but it IS an observable (and usable) phenomenon.
If you can supply me with *valid* program specs which will *accurately* define the behaviour of a "bad" player, I would be happy to incorporate them in any game/table/rules scenario you want, and provide you with the results.
But in the meantime, I suggest you go to actual tables and observe this very observable phenomenon for yourself.