Well, first, this technique cannot be analyzed the same as successive single-pass riffles; but examining how it's similar and how it's different may lead to some insight on how to improve it.

A one-pass perfect riffle would insert one card between each card in the original deck, which would essentially create two half-shoes where each was composed of precisely 50% of each of the previous half-shoes. But that, in itself, is not a large problem, because we would expect a random shuffle to do the same thing, on average (and I'd say that's one property that makes riffling useful). But it does have other problems, one of which is that the mixing is not random. For example, *all* the cards in the first quarter of the original shoe are *guaranteed* to be in the first half of the shuffled shoe; and all the cards in the first 1/8th of the original shoe are guaranteed to be in the front half of both of the next two shoes, etc. The closer a card is to the front of a shoe, the more persistent it is in showing up at the front of successive shoes. A mirror effect occurs at the other end of the shoe, such that cards in the last 1/8th of the shoe have no chance at all of making it to the front half after the next two shuffles. That will certainly skew the results if we deal out only the front x% of the cards from each shoe and try to determine the player advantage: in effect, it reduces our sample size.
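To make the positional-persistence claim concrete, here's a minimal sketch of a one-pass perfect riffle (the function name and single-deck size are my choices for illustration, not anything from the original algorithm):

```python
def perfect_riffle(deck):
    """One-pass perfect riffle: interleave the two halves of the deck,
    with the top half landing at the even positions."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    out = []
    for a, b in zip(top, bottom):
        out.append(a)
        out.append(b)
    return out

deck = list(range(52))          # labels = original positions, 0 = front
once = perfect_riffle(deck)
twice = perfect_riffle(once)

# Every card from the original first quarter lands in the first half:
assert all(card in once[:26] for card in range(13))
# Cards from the first 1/8th stay in the front half for two shuffles:
assert all(card in twice[:26] for card in range(6))
```

A card at position k in the top half lands at position 2k, so it can drift backward only slowly; that's the deterministic persistence described above.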

But my algorithm does not really behave exactly like the single riffle. On the second "shuffle" (when the deal offset is 2) it effectively *removes* one card from between each card in the original deck (and in effect moves those to the end of the shoe). That's certainly not the same as a riffle, but it does have one similar characteristic: looking at it as two half-shoes, each is composed of 50% of each of the previous half-shoes. But there are a couple of differences. One is that with a one-pass perfect riffle, the ordinal card sequence 1, 2, 3, 4, ... becomes 1, x, 2, x, 3, x, 4, ... (or perhaps starting with an x), which is the same cards in the same relative order. With my algorithm, the 1, 2, 3, 4, ... sequence is followed by 2, 4, 6, 8, ..., i.e. with every other card removed from the sequence and moved elsewhere. (Of course, the 1, 3, 5, ... cards do reappear in the same relative order after half the cards have been dealt.) The process is still deterministic rather than random, but another difference (which I think is more important than the sequencing difference) is that only half the cards from the original first quarter-deck are guaranteed to be in the first half after the "shuffle", and only half the cards from the last quarter are assured of having no chance of making the first half after that "shuffle." That is, whether a card is *certainly* included in or excluded from the first half on the offset=2 pass is determined by both its position in the original deck *and* whether it's an even or an odd card. That's not totally random, obviously, but it produces a much better distribution than the single riffle -- **if** all we're looking at is the relative player advantage or disadvantage of the first x% of the original shoe. Moreover, any remaining position/probability correlation effects rapidly diminish beyond the offset=2 pass, much more so than with continued single-pass riffles.
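As I read the description, the offset=2 deal order can be sketched as follows -- the function name and the exact handling of the skipped cards are my assumptions, not the original code:

```python
def deal_order_offset2(n=52):
    """Deal order (1-based positions) for the offset=2 pass: every
    second card starting at position 2, then the skipped odd-position
    cards reappearing in their original relative order."""
    evens = list(range(2, n + 1, 2))   # 2, 4, 6, ..., n
    odds = list(range(1, n + 1, 2))    # 1, 3, 5, ... resume after half
    return evens + odds

order = deal_order_offset2()
assert order[:4] == [2, 4, 6, 8]
assert order[26:30] == [1, 3, 5, 7]    # skipped cards, original order
```

Note that which half a card lands in depends on its parity as well as its position, which is the distribution improvement over the riffle described above.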

But things start to change in another way starting with the offset=3 pass. We start from card 3 in the original deck and skip 2 cards each time, so we reach the end of the deck after dealing one third of the cards. At that point we *randomly* decide to proceed with either 1, 4, 7, ... or 2, 5, 8, ... So, starting with offset=3, we begin to introduce an effect similar to striping -- breaking up the original *relative* order of the cards.
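A hedged sketch of that offset=3 pass (the ordering of the final residue class is my assumption; the description only specifies the random choice between the first two):

```python
import random

def deal_order_offset3(n=52, rng=random.Random(0)):
    """Deal order for the offset=3 pass: 3, 6, 9, ... to the end of the
    deck, then a random choice of which remaining residue class
    (1, 4, 7, ... or 2, 5, 8, ...) comes next."""
    first = list(range(3, n + 1, 3))             # 3, 6, ..., one third
    a = list(range(1, n + 1, 3))                 # 1, 4, 7, ...
    b = list(range(2, n + 1, 3))                 # 2, 5, 8, ...
    second, third = (a, b) if rng.random() < 0.5 else (b, a)
    return first + second + third

order = deal_order_offset3()
assert order[:3] == [3, 6, 9]
assert sorted(order) == list(range(1, 53))       # still a permutation
```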

*"Suppose you have an unusually large number of low cards at the start of the deck. This will force up the count and the bet. You will bet far more during that shoe than usual. Now you 'shuffle.' But that shuffle keeps half of those low cards at the start of the shoe. You are likely to have two consecutive shoes with large betting. This would affect three CVData features: the Bankroll swing data, the sessioning logic and data, and the Betting Strategy by Bankroll feature."*

Yes, that is one effect of the algorithm, if 1, 2, 3,... is followed by 2, 4, 6,..., and it's still a noticeable effect if that's followed by 3, 6, 9,... The solution is fairly obvious: don't do that. When I said "similar algorithms," I meant algorithms that only shuffle periodically, then take several unique sequences from that shoe before shuffling again -- not necessarily algorithms that produce those unique sequences in a similar way. The problem with the algorithm presented is that it does that too methodically, and it's really that methodical approach that produces the shoe-to-shoe correlations that would affect the stats you mentioned.

Here's one possible solution: Assuming single-deck, suppose I shuffle a deck, but instead of dealing the cards in *any* orderly way, I create a second array initialized with the numbers 1 to 52, and then I also shuffle that array. Now instead of dealing cards deck[1], deck[2], deck[3], ..., I deal deck[index[1]], deck[index[2]], deck[index[3]],... This would still be a (pseudo-)random permutation of the deck, correct? Now, suppose I have 26 such arrays of indices, each of which is a shuffled copy of the previous. I use each of those index arrays once, then shuffle the original deck, then use each of those 26 index arrays again to deal. (And actually, there's no reason to stop with 26.)
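Here's a minimal sketch of that index-array scheme (the helper names are mine; indices are 0-based where the text uses 1-based):

```python
import random

def make_index_chain(n=52, count=26, rng=random.Random()):
    """Build `count` index arrays, each a fresh shuffle of the previous
    one, starting from the identity 0..n-1."""
    chain = []
    indices = list(range(n))
    for _ in range(count):
        rng.shuffle(indices)            # each array re-shuffles the last
        chain.append(indices.copy())
    return chain

def deal(deck, index):
    """Deal deck[index[0]], deck[index[1]], ... as one shoe."""
    return [deck[i] for i in index]

rng = random.Random(1)
deck = list(range(52))
rng.shuffle(deck)                       # shuffle the deck itself once
for index in make_index_chain(rng=rng): # then reuse it 26 times
    shoe = deal(deck, index)
    assert sorted(shoe) == list(range(52))   # each deal is a permutation
```

Each dealt sequence is a composition of two independent pseudo-random permutations, so every shoe is itself a pseudo-random permutation of the deck.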

The difference with this approach, of course, is that the sequences are now (pseudo-)random instead of deterministic, and even if those sequences were reused, the underlying deck would be (pseudo-)randomly shuffled before reuse. I believe that **all** possible shoe-to-shoe correlations (neglecting any possible flaws in the PRNG) have now disappeared.