This post contains a mathematical proof that the floating advantage exists if the so-called "bow" effect exists. It has been vetted by my wife, who has taught college math courses even though her graduate degrees are in physics and computer science. I believe the proof is accurate, but I would appreciate comments on any inaccuracies in the Theorems or in the steps of the proof. I will probably get a Clarke Rant which will not address the proof but will continue his practice of misstating math and other matters, but I will say at the start that I will not address any comments except those pertaining to the validity of the math in this proof.
The post is divided into specific sections so as to be more understandable. The first section addresses the problem of using standard mathematical notation in a post, explaining how I try to meet that problem and still have the math make sense, along with the way remarks are denoted in the text. Second, I will define the concepts I am trying to express in mathematical terms. Third, I will define the theorems, corollaries, and postulate used in the proof. Fourth will be the proof proper. And fifth will be some discussion of items which, I think, flow from the proof.
I. Terminology. I do not know how to use subscripts, so I will use the terminology f(sub i,n), for example, to show that i,n are subscripts of f. I will also use (note:) to include remarks not necessary to the proof but which I think will aid understanding of it. I will use NOTE to discuss things which I think flow from the proof. Also, the proof requires averaging certain expectations. Av(E(1),E(2)) denotes the arithmetical average of expectations 1 and 2 and does, of course, use the operation of adding the expectation of one sample to the expectation of the second sample and dividing by two, just as any average of two numbers is calculated.
II. Definitions.
f = frequency. The frequency as used in the proof is exactly the same as is normally used in blackjack: the fraction of the time, expressed as a decimal, that a specific true count arises. Of course, the total frequencies should sum to one if all possibilities are accounted for. Note that summing all frequencies multiplied by their expectations results in an average expectation, because multiplying frequencies by other terms creates an averaging process for those terms, and understanding this as an averaging process is important in understanding the proof. To differentiate between the different frequencies encountered in blackjack, I will use the term f(sub i,n) to describe the frequency associated with a true count of i and a number of cards remaining of n. The terminology i,n will be used similarly to describe true counts and numbers of cards in related instances.
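(note) To make the averaging role of frequencies concrete, here is a minimal Python sketch. The true counts, frequencies, and expectations in it are invented placeholders for illustration, not real blackjack figures.

```python
# Minimal sketch of the frequency-weighted averaging described above.
# The true counts, frequencies, and expectations below are invented
# placeholders, not real blackjack figures.

freqs = {-2: 0.15, -1: 0.20, 0: 0.30, 1: 0.20, 2: 0.15}   # f(sub i,n), sums to 1.0
expns = {-2: -0.02, -1: -0.01, 0: 0.00, 1: 0.01, 2: 0.02}  # hypothetical E(i,n)

assert abs(sum(freqs.values()) - 1.0) < 1e-12  # total frequencies sum to one

# Summing frequency * expectation over all true counts gives the overall
# average expectation -- the averaging process described in the text.
average_expectation = sum(freqs[i] * expns[i] for i in freqs)
print(average_expectation)  # about 0 for this symmetric toy table
```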
E(i,n) = the expectation of true count i at number of cards remaining n.
Floating advantage. A smaller stack of cards should have a higher blackjack expectation than a larger stack of cards if both stacks have the same true count. That is, if m < n, then E(i,m) > E(i,n).
Bow effect. The difference in expectation between adjoining true counts should be greater the smaller the true count. That is, (E(i+1,n) - E(i,n)) < (E(i,n) - E(i-1,n)) for all values of i. If (E(i+1,n) - E(i,n)) = k, a real, positive number, then (E(i,n) - E(i-1,n)) = k + r, where r is also a real, positive number. By the postulate below, this phenomenon remains true for all true counts, so the difference in expectation between adjoining true counts always increases as the true count decreases.
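(note) The bow effect definition can be expressed in a short Python sketch. The table of expectations below is hypothetical, chosen only so the "bow" shape is visible; the check simply tests the inequality (E(i+1) - E(i)) < (E(i) - E(i-1)) at every interior true count.

```python
# Sketch of the "bow" effect definition above, using an invented table of
# expectations -- the numbers are placeholders chosen so the bow shape is
# visible, not simulation output.

def has_bow_effect(E, counts):
    """Check (E(i+1) - E(i)) < (E(i) - E(i-1)) for all interior true counts i."""
    return all(
        (E[i + 1] - E[i]) < (E[i] - E[i - 1])
        for i in counts[1:-1]
    )

# Hypothetical E(i, m) for true counts -3..+3: a concave ("bowed") curve.
counts = list(range(-3, 4))
E_m = {i: 0.005 * i - 0.001 * i * i for i in counts}

print(has_bow_effect(E_m, counts))  # True for this concave toy curve
```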
III. Theorems, Corollaries, and Postulate.
Theorem I. Cutting a stack of cards of any arbitrary size does not change the blackjack expectation for basic strategy. The overall expectation at any cut level therefore remains the same.
Corollary to Theorem I. The basic strategy blackjack expectation for a stack of cards is identical to the total expectation of all subsets (not disjoint) of the stack of cards which have an arbitrary but constant number of cards, m, less than the number, n, in the original stack. That is, E(i,n) equals the total expectation of E(sub 1)(m), E(sub 2)(m), ..., E(sub p)(m), where the subscripts 1 through p index all the possible subsets of size m of the original set of size n. (note) The collection of all possible subsets has the same expectation as the original set because, taken together, the subsets include exactly the same cards as the original, ordered exactly as in the original.
Theorem II.
As cards are removed from a stack of cards, creating subsets of the original set of cards, all the possible subsets of a specific number of cards will demonstrate various true counts, either less than, identical to, or greater than the original true count, and the distribution of true counts is normal around the original true count. If we group the possible subsets by their true counts, combining those with the same true count and counting the frequency of appearance of each true count, then E(i,n) = f(sub -infinity,m)*E(-infinity,m) + ... + f(sub -j,m)*E(-j,m) + ... + f(sub i,m)*E(i,m) + ... + f(sub j,m)*E(j,m) + ... + f(sub +infinity,m)*E(+infinity,m). (note) This is a fundamental theorem of blackjack. As cards are removed, various different true counts will result in the remaining cards, because stacks of cards are not uniform, and the distribution of these true counts will be normal around the beginning true count. (note) This theorem is consistent with the corollary to Theorem I above because Theorem II does, in fact, take into account all possible subsets of size m; it recognizes that, however a subset is established, by removal or by sampling, all subsets must be considered if the expectation is to stay the same. (note) I fully realize infinite true counts are not possible, but the theorem is written as if they are to make the point that even large departures must be taken into account. Since the frequency of these large departures approaches zero, the results will not be changed.
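(note) The frequency table described in Theorem II can be illustrated with a toy enumeration. The sketch below assumes Hi-Lo tags and an arbitrary 12-card stack; it only tabulates the true-count frequencies f(sub j,m) over all m-card subsets and does not compute any blackjack expectations.

```python
# Toy enumeration of the frequency table in Theorem II, assuming Hi-Lo
# tags (low cards +1, 7-9 zero, tens/aces -1).  It tabulates the
# true-count frequencies f(sub j, m) of all m-card subsets of a small,
# arbitrary stack; it does not compute blackjack expectations.

from itertools import combinations
from collections import Counter
from fractions import Fraction

# A small, arbitrary 12-card stack (values only; 11 = ace).
stack = [2, 3, 5, 6, 7, 8, 9, 10, 10, 10, 11, 11]

def hi_lo_tag(card):
    if 2 <= card <= 6:
        return 1
    if card in (10, 11):
        return -1
    return 0

def true_count(cards):
    # Running count divided by decks remaining (52-card decks).
    running = sum(hi_lo_tag(c) for c in cards)
    return running / (len(cards) / 52)

m = 8
freq = Counter(round(true_count(sub), 1) for sub in combinations(stack, m))
total = sum(freq.values())

for tc in sorted(freq):
    print(tc, Fraction(freq[tc], total))   # f(sub j, m) for each observed true count
```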
Corollary to Theorem II.
As the number of cards in the subsets of the original stack becomes smaller, the true counts of the subsets, while still remaining normal in distribution, will exhibit larger departures from the mean. (note) This corollary is not required for the proof, but it is relevant to the NOTES following. Again, it is an accepted concept because the smaller the subset, the larger the departure of true counts (the smaller divisor (fewer cards remaining) produces a larger true count when running counts are the same).
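(note) Continuing the toy enumeration from the previous sketch, the following lines show the tendency this corollary describes: the spread of subset true counts grows as the subsets get smaller. Same assumed Hi-Lo tags and arbitrary 12-card stack as before.

```python
# Continuing the toy enumeration above: the spread of subset true counts
# grows as the subsets get smaller, which is the tendency this corollary
# describes.  Same assumed Hi-Lo tags and arbitrary 12-card stack.

from itertools import combinations
from statistics import pstdev

stack = [2, 3, 5, 6, 7, 8, 9, 10, 10, 10, 11, 11]

def hi_lo_tag(card):
    return 1 if 2 <= card <= 6 else (-1 if card in (10, 11) else 0)

def true_count(cards):
    return sum(hi_lo_tag(c) for c in cards) / (len(cards) / 52)

for m in (10, 8, 6, 4):
    spread = pstdev(true_count(sub) for sub in combinations(stack, m))
    print(m, round(spread, 2))   # smaller m -> larger spread of true counts
```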
Postulate.
The "bow" effect is an accurate description of the difference in expectations between two adjacent true counts. That is, (E(i+1,m) - E(i,m)) < (E(i,m) - E(i-1,m)). (note) This proposition is postulated for the purposes of this proof because there is no definitive literature on this topic as there is on the Theorems set out above. Griffin has made reference to the decreasing change in expectations between adjacent true counts as true counts rise, and a large number of sims I have done invariably support the proposition, but it is impossible IMHO to definitively prove it. Every possibility cannot be simmed and I see no way to show the effect theoretically. It is my strong belief the effect exists because there is no evidence to the contrary that I know of. (note) It should be stressed that, if there is such an effect, the aggregate average expectations associated with each true count obey the law even if some particular setups of true counts would not (replacing an eight by a two will not cause as large a change in expectation as will replacing an eight by a five), because, if the average expectations did not, the "bow" effect would not be observed. Support for this postulate is provided by my sims, because one can find that the same tendency exists for any arbitrary grouping of cards if one high card is removed at a time. (note) Experience also supports the "bow" effect proposition: more twenty pushes the higher the count, more chances to double 9, 10, 11 against stiffs at low counts and a higher loss rate, etc. (note) I will return to this proposition in my NOTES. (note) This exercise began as a response to Cant's nonmathematical "proof", so as a response to his posts this postulate has to be acceptable because he cites it (which, admittedly, means little), but after I began the exercise I saw some implications more important than answering a crank's posts, and I am posting the proof now in a larger context.
IV. Proof
1) By Theorem II, E(i,n) = f(sub -infinity,m)*E(-infinity,m) + ... + f(sub -j,m)*E(-j,m) + ... + f(sub i,m)*E(i,m) + ... + f(sub j,m)*E(j,m) + ... + f(sub +infinity,m)*E(+infinity,m), where n > m.
2) By the commutative law, which allows reordering of additive terms, E(i,n) = f(sub i,m)*E(i,m) + (f(sub i-1,m)*E(i-1,m) + f(sub i+1,m)*E(i+1,m)) + ... + (f(sub i-j,m)*E(i-j,m) + f(sub i+j,m)*E(i+j,m)) + ... + (f(sub -infinity,m)*E(-infinity,m) + f(sub +infinity,m)*E(+infinity,m)). This reordering creates terms consisting first of the term which has a true count identical to the starting true count, i, multiplied by its frequency, and then of paired terms consisting of the frequencies multiplied by the expectations of equal plus and minus departures, using i as the reference point for establishing the plus and minus counts.
3) Since the distribution of all plus and minus counts is normal (around i), the frequencies of each of the paired terms are identical, and the arithmetical average of each of these paired terms may be substituted for each of them while leaving the values unchanged. Substituting leaves us with the following expression:
E(i,n) = f(sub i,m)*E(i,m) + (f(sub i-1,m)*Av(E(i-1,m),E(i+1,m)) + f(sub i+1,m)*Av(E(i-1,m),E(i+1,m))) + ... + (f(sub i-j,m)*Av(E(i-j,m),E(i+j,m)) + f(sub i+j,m)*Av(E(i-j,m),E(i+j,m))) + ... + (f(sub -infinity,m)*Av(E(-infinity,m),E(+infinity,m)) + f(sub +infinity,m)*Av(E(-infinity,m),E(+infinity,m))). (note) Clearly, the same sum remains after this step. To see why this is true, one can look at tangible numbers; take one and two as examples. The average of 1 and 2 is 1.5, and 1.5 + 1.5 = 3, just as 1 + 2 = 3. Since the frequencies within each pair are identical, this will be true whatever the expectations.
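(note) Steps 2) and 3) can be checked numerically. The sketch below uses invented frequencies and expectations; because the two members of each plus/minus pair share the same frequency, replacing both of their expectations with the pair average leaves the weighted sum unchanged.

```python
# Numeric check of steps 2) and 3): with symmetric frequencies around the
# starting count i, replacing the expectations of each +/- pair by their
# arithmetic average leaves the weighted sum unchanged.  Frequencies and
# expectations are invented placeholders.

i = 1                      # starting true count (arbitrary)
offsets = [1, 2, 3]        # the paired offsets j

f = {0: 0.28, 1: 0.20, 2: 0.12, 3: 0.04}          # frequency for offset 0 and each |j|; pairs share a frequency
E = {i: 0.010, i - 1: 0.002, i + 1: 0.016, i - 2: -0.010,
     i + 2: 0.020, i - 3: -0.030, i + 3: 0.022}   # hypothetical E(i +/- j, m)

original = f[0] * E[i] + sum(f[j] * (E[i - j] + E[i + j]) for j in offsets)

averaged = f[0] * E[i] + sum(
    f[j] * 2 * ((E[i - j] + E[i + j]) / 2)        # both pair members replaced by the pair average
    for j in offsets
)

print(abs(original - averaged) < 1e-12)   # True: the substitution changes nothing
```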
4) (note) The next step is to show that each expectation appearing in the equation as now set out, other than E(i,m), is smaller than E(i,m). The induction method is proper for a proof in this instance because the method of induction states that if there is a series of similar terms, and the first term of the series meets a specific criterion (in this instance, the criterion that the expectation of the term is less than E(i,m)), and an arbitrary later term of the series meets the same criterion, one can conclude that all terms in the series meet the criterion. (note) In this step of the proof I am comparing only expectations, showing only that the expectation of each term in the equation other than E(i,m) is smaller than E(i,m). As will be shown in the next step of the proof, the fact that each expectation is multiplied by a frequency creates a weighted average of these terms when they are added together, so frequencies can be and are ignored in this step.
By induction.
Preliminarily, it should be pointed out that Av(E(i-1,m), E(i+1,m)) has been substituted for both E(i-1,m) and E(i+1,m), so both values are identical and only one need be examined. Similarly, the arbitrary jth terms, which are now both Av(E(i-j,m), E(i+j,m)), are identical and only one of those values need be examined.
a. (note) testing the first term.
By the "bow" effect, if E(i+1,m) - E(i,m) = k then E(i,m) - E(i-1,m) = k + r, where k and r are positive numbers. The values of the two adjacent expectations are then E(i+1,m) = E(i,m) + k and E(i-1,m) = E(i,m) - k - r. Adding the two gives E(i+1,m) + E(i-1,m) = 2E(i,m) - r. Dividing by two to take the average gives [E(i+1,m) + E(i-1,m)]/2 = Av(E(i-1,m),E(i+1,m)) = E(i,m) - r/2, where r is positive. Thus Av(E(i-1,m),E(i+1,m)) is less than E(i,m).
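(note) The algebra in this step can be verified numerically for arbitrary positive k and r; the sketch below uses a few made-up values of E(i,m), k, and r.

```python
# Numeric check of step 4a: if E(i+1,m) = E(i,m) + k and
# E(i-1,m) = E(i,m) - k - r with k, r > 0, then the pair average is
# E(i,m) - r/2, which is strictly less than E(i,m).  The particular
# values of E, k, and r below are arbitrary.

for E_i, k, r in [(0.01, 0.005, 0.002), (-0.02, 0.03, 0.001), (0.0, 0.004, 0.01)]:
    E_plus = E_i + k          # E(i+1, m)
    E_minus = E_i - k - r     # E(i-1, m)
    avg = (E_plus + E_minus) / 2
    assert abs(avg - (E_i - r / 2)) < 1e-12
    assert avg < E_i          # the average falls below E(i, m) by r/2
print("step 4a check passed")
```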
b. (note) testing the jth term.
I am going to forsake formality here because it is apparent that the average of E(i-j,m) and E(i+j,m) is also going to be smaller than E(i,m): by the bow effect, the differential between (E(i-j+1,m) - E(i-j,m)) and (E(i+j,m) - E(i+j-1,m)) must be even greater than the differential for E(i+1,m) and E(i-1,m). Therefore the corresponding r would be larger, and Av(E(i-j,m),E(i+j,m)) would be even smaller than E(i,m).
Therefore, by induction, all expectations in the equation other than E(i,m) are less than E(i,m).
5) Next, by the associative law, all terms other than f(sub i,m)*E(i,m) may be grouped into one term. By definition, f(sub i,m) and the combined frequency of all the other terms, denoted f(sub x,m) (x for "unknowns"), add to 1.0. The combined expectation associated with this frequency will be denoted E(x,m). Therefore:
E(i,n)=f(sub i,m)* E(i,m) + f(sub x,m)* E(x,m).
A familiarity with this type of equation, containing terms of different values of similar things (in this instance expectations) multiplied by frequencies of occurrence, shows the equation to be an averaging process. Further, it is also apparent that the second term must have been calculated as a weighted average of all the expectation terms it includes. Since a calculated average of terms cannot be larger than the largest term, and the largest expectation term contained in the averaging process used to establish E(x,m) is less than E(i,m), E(x,m) must be less than E(i,m). (note) These statements are mathematically true but might not be apparent to everyone reading this proof, so I will add more detail to explain the concepts further. We all know the averaging process. To establish an average we add all the terms in a series and divide by the number of terms in the series: (a(sub 1) + a(sub 2) + a(sub 3) + ... + a(sub n))/n establishes an average for the series. It is apparent, when the average is written this way, that the average cannot be greater than the largest number contained in the series. If one does not sum the values before dividing by n but, instead, divides each number by n first, the form of the equation becomes a(sub 1)/n + a(sub 2)/n + a(sub 3)/n + ... + a(sub n)/n, which is arithmetically identical because the final value is identical. When the equation is written this way, the value 1/n, the reciprocal of n, becomes the frequency. Reverting to numbers to simplify even further, we can compute the average of the four numbers 1, 2, 3, 4 the traditional way, 10/4 = 2.5, or write the equation as .25*1 + .25*2 + .25*3 + .25*4 = 2.5, where .25 is the frequency of occurrence of each number and is the reciprocal of 4. Another point needs to be made to dispel doubt as to the validity of this step. It is immaterial for averaging purposes whether the individual frequencies are equal, so long as they add up to 1.0. By the associative law, we can group the series any way we wish, so long as we average the values lumped together and weight each lump by its frequency. For example, for 1, 2, 3, 4 we can lump 1, 2, 3, which would have a frequency of .75, with 4, which has a frequency of .25. To preserve the final value, which is the average of all the values, each lump is replaced by its own average: (1+2+3)/3 = 6/3 = 2, and .75*2 + .25*4 = 1.5 + 1 = 2.5. However the values are grouped, the use of frequencies in an equation such as the one above establishes an averaging process, and the average of a series of numbers cannot be larger than the largest number in the series. (note) This observation is relevant to the next and final step of this proof.
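(note) The 1, 2, 3, 4 example above, written out as a few lines of Python for anyone who wants to verify the arithmetic:

```python
# The worked 1, 2, 3, 4 example from the paragraph above, written as code:
# the plain average, the frequency (reciprocal) form, and the "lumped"
# form all give the same value, and that value cannot exceed the largest term.

values = [1, 2, 3, 4]

plain = sum(values) / len(values)                       # 10 / 4 = 2.5
by_frequency = sum(0.25 * v for v in values)            # .25*1 + .25*2 + .25*3 + .25*4
lumped = 0.75 * (sum([1, 2, 3]) / 3) + 0.25 * 4         # .75*2 + .25*4

assert plain == by_frequency == lumped == 2.5
assert plain <= max(values)        # an average never exceeds the largest term
print(plain)
```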
So, repeating the conclusion of this step, E(i,n) = f(sub i,m)*E(i,m) + f(sub x,m)*E(x,m), and E(i,m) > E(x,m).
6) As stated in the step above, the equation remains an averaging equation, now having only two terms. Whatever the frequencies associated with each term, we know the expectation terms are not identical because E(i,m) > E(x,m), and the average of these terms equals E(i,n). Since all subsets are included in this analysis, the frequency of both terms must be greater than zero, and since an average of two unlike terms must fall between them, E(i,n) must be smaller than one of the terms and larger than the other. Since E(i,m) > E(x,m), E(i,m) > E(i,n). A floating advantage exists if the "bow" effect exists.
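(note) This final step can also be illustrated numerically. The frequencies and expectations below are invented placeholders satisfying the conditions established by the proof (both frequencies positive, E(i,m) > E(x,m)); the sketch shows that the two-term average then falls strictly between the two expectations.

```python
# Sketch of the final step: E(i,n) is a weighted average of E(i,m) and
# E(x,m) with both frequencies strictly positive, so it must lie strictly
# between them; since E(i,m) > E(x,m), it follows that E(i,m) > E(i,n).
# The frequencies and expectations are invented placeholders.

f_i, f_x = 0.35, 0.65              # both strictly positive, summing to 1.0
E_i_m, E_x_m = 0.012, 0.004        # hypothetical, with E(i,m) > E(x,m)

E_i_n = f_i * E_i_m + f_x * E_x_m  # the two-term averaging equation of step 6

assert E_x_m < E_i_n < E_i_m       # the average falls strictly between the two terms
print(E_i_n)
```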
NOTE. I have been intrigued for some time by Appendices A and B of Chapter 8 of TOB. In A, Griffin remarks that the initial two-card hands do not totally explain the differences in expectations for different numbers of decks, and in B he speculates, but does not assert definitively, that the probabilities of drawing the second card in different numbers of decks might be at least part of the difference.
Since "perfect" decks are a specific case of this general proof, it would follow that part of the difference is caused by the "bow" effect, if that effect is a true property of card counting. I think it is and, if it is, this proof might add some understanding as to why there are different expectations for different numbers of decks.
NOTE. As I said at the start of this post, I would welcome mathematical criticisms of the proof or any part of it. I will not respond to generalized statements which do not rise to mathematical rigor. If Cant can find inaccuracies in the Theorems or the method, I implore him, as I do others, to point out the inaccuracies, but for a criticism to be valid it must address the proof and not talk about a bunch of pseudomathematical concepts which would be irrelevant to the conclusion developed by the proof. Have at it, folks.