If you generate a "shoe" of numbers as he suggested, then generate pairs of (starting point, stride) as he also suggested, and use those to produce a bunch of random numbers, then if there is a problem with them I'd expect something like "the poker test" to fail. If we deal N 5-card poker hands, we know how many of them ought to be a single pair, two pair, three of a kind, etc. This is the most common test that the RNGs I have seen students pull off the net seem to fail.
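To make that concrete, here is a minimal sketch of a poker-style test in C. The RNG under test is hidden behind a hypothetical next_card() returning a rank 0..12, and lumping two pair with three of a kind (both leave 3 distinct ranks) is a simplification of mine; a fuller test would separate those categories and finish with a chi-square comparison against the expected frequencies.

#include <stdio.h>

extern int next_card(void);    /* hypothetical RNG under test: returns a rank 0..12 */

#define HANDS 1000000L

int main(void)
{
    long counts[5] = {0};      /* indexed by 5 minus the number of distinct ranks:
                                  0 = no match, 1 = one pair, 2 = two pair or trips,
                                  3 = full house or quads, 4 = five alike */
    for (long h = 0; h < HANDS; h++) {
        int seen[13] = {0};
        int distinct = 0;
        for (int c = 0; c < 5; c++)
            if (seen[next_card()]++ == 0)
                distinct++;
        counts[5 - distinct]++;    /* fewer distinct ranks => more matching cards */
    }
    for (int i = 0; i < 5; i++)
        printf("%d matching cards beyond the first: %ld hands\n", i, counts[i]);
    return 0;
}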
One example from last semester: a student found the series 1,2,3,4,5 somewhere in the billion numbers and said "how can that possibly be random?" I asked "what's the probability of getting a straight flush (which that represents) in poker? Is it 0.0, or is it something a little higher?" They thought about it and said "ahhh... you mean that if I did _not_ see a 1,2,3,4,5 ever, the numbers would not be random?" And another "light" comes on. :)
In my testing, I have never tried to test a "single shoe", for the very reason you gave. To get around that, my randomness testing has usually used "penetration". That is, students generate a shoe of 6 decks, extract 260 cards leaving 52 in the shoe, then re-shuffle. Since we cut the shoe off with cards still left in it, we don't get perfect uniformity, but uniformity tests still tend to raise a red flag. I've typically used large numbers as well. A few years back I actually ran one trillion random numbers through a test on a Cray, which could deal with 64-bit values easily and could produce 64-bit random numbers with a huge period...
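The penetration idea looks roughly like this sketch, where shuffle_shoe() is just a placeholder for whatever shuffle is under test and the shoe count is arbitrary: shuffle a 6-deck (312 card) shoe, deal only the first 260 cards, reshuffle, and tally how often each distinct card shows up in the dealt portion. The tallies won't be perfectly uniform, but they are still worth eyeballing or feeding to a chi-square test.

#include <stdio.h>

#define SHOE_SIZE 312              /* 6 decks */
#define PENETRATION 260            /* cards dealt before reshuffling */
#define SHOES 100000               /* arbitrary; more is better */

extern void shuffle_shoe(int shoe[SHOE_SIZE]);   /* hypothetical shuffle under test */

int main(void)
{
    long seen[52] = {0};           /* how often each distinct card was dealt */
    int shoe[SHOE_SIZE];

    for (int s = 0; s < SHOES; s++) {
        for (int i = 0; i < SHOE_SIZE; i++)
            shoe[i] = i % 52;      /* 6 copies of each of the 52 cards */
        shuffle_shoe(shoe);
        for (int i = 0; i < PENETRATION; i++)
            seen[shoe[i]]++;
    }
    for (int c = 0; c < 52; c++)
        printf("card %2d dealt %ld times\n", c, seen[c]);
    return 0;
}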
The thing I don't like about "shoe[0] and shoe[rnd]" is that it can thrash the cache badly and run slower. The shoe array and the code fiddling with it have to map into cache without colliding, and the bigger the shoe[] array is, the more likely collisions with code lines become. Not that big a deal with today's fairly large L2 cache sizes, but it is an issue.
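For reference, this is how I read the shoe[0]/shoe[rnd] style of dealing (my reading only, the actual proposal may differ in detail): every card dealt touches a random spot in the full array, which is why the cache footprint of a big shoe[] matters.

extern int rnd_below(int n);       /* hypothetical RNG: uniform in 0..n-1 */

static int shoe[312];              /* 6-deck shoe, already filled */
static int next;                   /* index of the next undealt card */

int deal_one(void)
{
    int r = next + rnd_below(312 - next);   /* random undealt position */
    int tmp = shoe[next];                   /* swap it to the front of the */
    shoe[next] = shoe[r];                   /* undealt portion and deal it */
    shoe[r] = tmp;
    return shoe[next++];
}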
On a good 64-bit processor like the Opteron, each of the 10 card[n] values can be kept in registers (the Opteron has 8 extra, thank goodness), so no cache references are needed, and the simple approach I gave can actually run faster even though it seems to be doing a little extra work with the subtractions...
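Roughly, the idea is something like the following (a simplified sketch, with names and details of my own choosing rather than the exact code): ten per-rank counters small enough to live in registers, a random index into however many cards remain, and a walk down the counters subtracting until the index lands inside one of them. No big array, so no cache traffic for the shoe.

extern int rnd_below(int n);       /* hypothetical RNG: uniform in 0..n-1 */

static int card[10];               /* cards remaining of each rank; for a
                                      6-deck shoe: 24 of each, 96 ten-valued */
static int remaining;              /* total cards left in the shoe */

int deal_rank(void)
{
    int r = rnd_below(remaining);
    int rank = 0;
    while (r >= card[rank]) {      /* subtract each rank's count until the */
        r -= card[rank];           /* random index lands inside one of them */
        rank++;
    }
    card[rank]--;
    remaining--;
    return rank;                   /* 0 = ace, 1..8 = deuce..nine, 9 = ten-valued */
}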
But back to the current discussion: I simply feel a bit uneasy about generating a string of pseudo-random numbers and then using a random starting point and stride to access them. I wonder how strong the correlation will be between sets of numbers produced from the same starting "shoe". KISS (not the counting system) or Occam's razor seems to apply here, IMHO. When I feel a bit uneasy about something, I've learned to go with that feeling; it avoids a lot of wasted time and effort that eventually just confirms the feeling was right. :)
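Spelled out, the scheme as I understand it looks roughly like this (the names and table size are mine, not from the original proposal): fill a table once from some base RNG, then spin off "streams" by walking the table from a random starting point with a random stride. Two streams drawn from the same table necessarily share values, and that is exactly the correlation I'd want to see tested before trusting it.

#define TABLE_SIZE 1000003         /* arbitrary size for the pre-generated table */

extern unsigned long table[TABLE_SIZE];   /* filled once from the base RNG */

struct stream {
    unsigned long pos;             /* current index into the table */
    unsigned long stride;          /* step between successive values */
};

unsigned long next_value(struct stream *s)
{
    unsigned long v = table[s->pos];
    s->pos = (s->pos + s->stride) % TABLE_SIZE;
    return v;
}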
I've seen RNGs where the raw sequence produced looks perfectly random, but if you take every third number it suddenly fails some test. Proving something to be acceptably random is quite a chore... and worse, if you don't prove it, the results can be skewed without anyone knowing. The really good RNGs have been tested exhaustively, but the tests are done on the stream "as produced", not on every Nth value or whatever, which might well give a different result. Hence my preference to use the thing exactly as it has been tested...