So it is posted on the yahoo.com CCC also. I apologise to the PBers for this, but it was simply too complex in nicer versions. The labels are more in line with ML's originals, and ML is still to be congratulated for finding a non-Graham-Stokes proof of the bow effect, even though his proof is flawed by the problems I give below:

I tried to develop a nicer version of this post and could not; it simply got too complex. Perhaps Ted Forester or T-Hopper can put it gently and still have it make sense. I either have to base this on ML's recent post on bj21.com and the evidence of clone prevarication, or rewrite everything.

First, the two significant Theorems of ML's post:

Theorem I

Basic Strategy expectation for a shuffled pack of a given original size (and given rules, etc.) is invariant with penetration. Proof: any penetration can be transformed into any other simply by cutting the pack.
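Theorem I's cutting argument is, in effect, an exchangeability statement: in a well-shuffled pack, the card at any depth has the same distribution, so expectation cannot depend on penetration. A quick Monte Carlo sketch of that invariance (my own illustration, not from ML's post):

```python
import random

def mean_card_at_depth(depth, trials=20000, seed=1):
    """Monte Carlo estimate of the expected value of the card sitting
    at a given depth in a shuffled single deck (A=1 ... K=13)."""
    rng = random.Random(seed)
    deck = [rank for rank in range(1, 14) for _ in range(4)]
    total = 0
    for _ in range(trials):
        rng.shuffle(deck)
        total += deck[depth]
    return total / trials

# The estimate hovers near 7 (the full-deck mean) at every depth,
# whether the "penetration" is the top card or the very last one.
print([round(mean_card_at_depth(d), 2) for d in (0, 13, 26, 51)])
```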

Theorem II

The mean for all possible subsets of a given pack is the same as the mean of that given pack.
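Theorem II can be checked by exact enumeration on a small pack (an illustrative sketch of my own; the pack values here are arbitrary):

```python
from itertools import combinations
from statistics import mean

# A small "pack" so we can enumerate every subset exactly.
pack = [1, 2, 3, 4, 5, 6, 7, 8]
pack_mean = mean(pack)

for k in range(1, len(pack) + 1):
    # Average the means of all k-card subsets of the pack.
    subset_means = [mean(s) for s in combinations(pack, k)]
    # By symmetry, this equals the mean of the whole pack.
    assert abs(mean(subset_means) - pack_mean) < 1e-9

print(f"pack mean = {pack_mean}; subset-average check passed for every k")
```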

ML took these two (and then, strangely, added a third in a reply to me that was not in his original post) and demonstrated the bow effect without needing to appeal to the Graham-Stokes formula. By that formula, the true count and the actual edge cover the same range of possible pack combinations, but the true count has less accuracy. The integral of their differences must thus involve equal areas of under- and over-predicted edge.

ML took a unique approach: stepwise summation of the edge differences.

In either case, the bow-effect curve of true count versus actual edge is exactly the curve required for the true count to underestimate edges at TCs near zero, to overestimate the edge at more extreme TCs, and to bend more with deeper penetration.

But ML's application of Theorem II is incorrect and even dishonest. Theorem II applies to the actual subsets, not to the perfect mean subsets of each True Count or other pack composition. ML did not apply this theorem continuously.

The proper equation for how such edges average out, with E(m,i) = edge for the perfectly mean pack(s) m and E(m,a) = edge for the actual packs, is:

E(m,a) = average over all subsets of E(m,a)

ML's version substitutes: E(m,i) = average over all subsets of E(m,a)

ML gives a very good non-Graham-Stokes proof that E(m,i) > average E(m,a), which points to dishonesty in the following (this is where I had to get blunt to stay simple and clear): substituting E(m,i) for E(m,a) is valid only for pack compositions where composition E(m,a) = composition E(m,i), which is very rare; indeed the original full pack is the only composition for which E(m,i) is certain to occur.
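The claimed gap between E(m,i) and the average of E(m,a) is, in effect, Jensen's inequality: when the edge is a nonlinear (here, concave) function of pack composition, the edge of the mean composition exceeds the mean of the actual edges. A sketch using a made-up concave edge function (toy_edge is purely hypothetical, not a real blackjack edge model):

```python
import random

def toy_edge(ten_density):
    """Hypothetical concave 'edge' as a function of ten-density.
    Purely illustrative -- not a real blackjack edge model."""
    return -0.005 + 0.6 * ten_density - 0.9 * ten_density ** 2

rng = random.Random(7)
deck = [10] * 16 + [0] * 36          # tens vs. non-tens in one deck

# Draw many 26-card subsets and record each one's ten-density.
densities = [rng.sample(deck, 26).count(10) / 26 for _ in range(20000)]

mean_density = sum(densities) / len(densities)
edge_of_mean = toy_edge(mean_density)          # E(m,i): edge at the mean pack
mean_of_edges = sum(toy_edge(d) for d in densities) / len(densities)  # avg E(m,a)

print(f"E(m,i)  (edge at mean composition): {edge_of_mean:.5f}")
print(f"avg E(m,a) (mean of actual edges):  {mean_of_edges:.5f}")
# For a concave edge function, E(m,i) > average E(m,a),
# which is the direction of the gap the post describes.
```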

The rarity of composition E(m,i) = composition E(m,a) follows a binomial distribution for the probability that a subset will have some perfect mean density. For blackjack this involves a reduction in the probability of E(m,i) by approximately 13!/10! at every level of penetration where a perfect mean is possible. Thus every possibility of a perfect mean is reduced in probability by 1/1716 at every such point.
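The arithmetic 13!/10! = 13 x 12 x 11 = 1716 checks out, and the rarity of a perfectly mean subset can be estimated directly by sampling (a Monte Carlo sketch of my own, not from the post):

```python
import math
import random

# The post's factor: 13!/10! = 13 * 12 * 11
print(math.factorial(13) // math.factorial(10))   # 1716

# How rare is a subset whose mean exactly equals the pack mean?
# Single deck, A=1 ... K=13: the mean card value is 7, so a 26-card
# subset is "perfectly mean" iff its values sum to 26 * 7 = 182.
rng = random.Random(3)
deck = [rank for rank in range(1, 14) for _ in range(4)]
trials = 100000
hits = sum(sum(rng.sample(deck, 26)) == 182 for _ in range(trials))
print(f"~{hits / trials:.4f} of random 26-card subsets have the full-deck mean")
```

Only a few percent of half-deck subsets land exactly on the mean, which is the sense in which the perfectly mean composition is rare, even at a penetration depth where it is possible at all.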

Don Schlesinger used a deep knowledge of the Graham-Stokes equation for years on Wall Street to weed out bad predictive models for market trading, presumably successfully. ML was able to demonstrate the bow effect without invoking G-S. They know what they are doing. It is simply too far-fetched for such skilled mathematicians to fail to apply Theorem II continuously. Every subset has to be treated the same as the original pack set, so the claim that average summa E(m,i) = average summa E(m,a) simply smells.

The distribution of subsets is a distribution derived from prior actual subsets; it cannot and does not involve substituting perfect mean subsets at the current penetration level.

Donald Schlesinger knows this. Seek out a publication from Morgan Stanley, variously titled over the years (two past titles are Where Your Portfolio Is and Where Do You Go From Now?), which was largely written by Don for private clients. That publication advised private clients on how to decide on changing their strategy goals, and to include past results only if those results came from using the same strategies. ML and DS violate these principles when they jump from current situations to ideal starting points for a distribution. It simply adds to the evidence that they know better than their current claims.

The ML post, despite its success in achieving a separate derivation of the bow effect, is thus flawed and dishonest in changing distributions this way.

Message 14891