If you run two simulations with EXACTLY identical settings for a given counting system, you will notice that the results are not the same. The statistics will differ slightly depending on the sample size (the number of rounds simulated). Standard error gives the user an idea of the magnitude of the error associated with a simulation. Another way to put it is that standard error is a means of quantifying the level of precision associated with a simulation.
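To see this concretely, here is a minimal sketch (the per-round results are hypothetical stand-ins for a real blackjack simulation, and the mean/SD values are invented for illustration). It runs the same toy "simulation" twice with different random seeds and computes the standard error of the mean as the sample SD divided by the square root of the number of rounds:

```python
import math
import random

def simulate_mean_and_se(n_rounds, seed):
    """Run a toy per-round simulation and report the mean result
    along with its standard error (sample SD / sqrt(n))."""
    rng = random.Random(seed)
    # Hypothetical per-round win/loss in dollars; a stand-in for a
    # real simulator's round-by-round results.
    results = [rng.gauss(0.5, 30.0) for _ in range(n_rounds)]
    mean = sum(results) / n_rounds
    var = sum((x - mean) ** 2 for x in results) / (n_rounds - 1)
    se = math.sqrt(var) / math.sqrt(n_rounds)
    return mean, se

# Two "identical" simulations differ only by random seed, yet their
# means disagree by roughly the size of the standard error.
m1, se1 = simulate_mean_and_se(100_000, seed=1)
m2, se2 = simulate_mean_and_se(100_000, seed=2)
```

The key point is that the disagreement between the two runs shrinks like 1/sqrt(n): quadrupling the number of rounds only halves the standard error.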
The concept of standard error is important for several reasons. When comparing two count systems with similar performance, it is possible for one system to outperform the other purely by chance (luck)! Let us look at a simple example.
Suppose we simulate 2 Billion Rounds on a simulator.
Spread, Rules, etc. are all identical. The only difference is the counting systems themselves.
Count system A: SCORE $51
Count system B: SCORE $50
Can we now conclude that system A is the stronger system? Well, maybe, maybe NOT!! You have to remember there will always be standard error associated with any simulation. The question we must ask ourselves is, "What is the standard error associated with the SCOREs for the aforementioned results?"
Suppose the theoretical mean SCORE for system A is $50 with a standard error of $1. Thus, in the simulation, system A's SCORE came in 1 standard deviation to the right of its mean (51 - 50 = 1).
Suppose the theoretical mean SCORE for system B is $51 with a standard error of $1. Hence, in the simulation, system B's SCORE came in 1 standard deviation to the left of its mean (51 - 50 = 1).
The consequence of all this is that the simulations yielded BOGUS results due to sampling error! Judging by the true mean SCOREs, system B actually outperforms system A by $1, yet the simulation ranked them the other way around!
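The reversal above can be written out in a few lines (the dollar figures are the same illustrative ones used in the example):

```python
# True (theoretical) mean SCOREs and a shared standard error.
true_score = {"A": 50.0, "B": 51.0}
se = 1.0

# The unlucky draw from the example: A lands 1 SD above its mean,
# B lands 1 SD below its mean.
sim_score = {
    "A": true_score["A"] + 1 * se,  # $51
    "B": true_score["B"] - 1 * se,  # $50
}

sim_winner = max(sim_score, key=sim_score.get)    # "A" -- the bogus result
true_winner = max(true_score, key=true_score.get) # "B" -- the real winner
```

A difference of one standard error in each direction is enough to flip the ranking whenever the true gap between systems is comparable to the standard error itself.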
How often can inaccurate results such as these occur? Well, the probability of landing at least 1 SD above (or below) the mean is about 15.87% for each simulation. So, 15.87% x 15.87% ≈ 2.52%. In other words, for every 100 paired simulations we run, roughly 2.5 of them will produce misleading SCOREs similar to the results given above!
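The 2.5% figure can be checked analytically. For a standard normal variable, P(Z > z) can be computed from the complementary error function, which is in Python's standard library:

```python
import math

def p_above(z):
    """P(Z > z) for a standard normal random variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Probability one simulation lands at least 1 SD on the wrong side
# of its mean (~15.87%), and the probability both do so at once,
# assuming the two simulations are independent (~2.52%).
p_one_sided = p_above(1.0)
p_reversal = p_one_sided ** 2
```

Note the squaring assumes the two simulations' errors are independent, which holds when the systems are simulated in separate runs.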
Standard error can also give us CONFIDENCE in our results. Suppose instead that system A's true mean SCORE is $51 and system B's is $50, and that each simulation has a standard error of only $0.10. Now, even if system A's result came in 3 SDs to the left of its mean, that gives a SCORE of $50.70. Similarly, if system B's result came in 3 SDs to the right of its mean, its SCORE would be $50.30. There is still such a wide gap between the results that we can be virtually assured the observed ordering is NOT due to chance.
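Here is that worst-case arithmetic spelled out (again using the illustrative dollar figures: true means of $51 and $50, standard error $0.10):

```python
true_a, true_b = 51.0, 50.0  # assumed true mean SCOREs
se = 0.10

# Worst case against the true ordering: A comes in 3 SDs low
# while B simultaneously comes in 3 SDs high.
worst_a = true_a - 3 * se  # $50.70
worst_b = true_b + 3 * se  # $50.30

# Even then, A's observed SCORE still beats B's.
ranking_survives = worst_a > worst_b
```

Because the true $1 gap is 10 standard errors wide, even simultaneous 3-SD errors in opposite directions cannot flip the ranking.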
In closing, I would urge all simulator writers to include the standard error for the various performance statistics, including SCORE, WR, and EV. Omitting this valuable information effectively renders the user blind, leaving him with no idea of the magnitude of error associated with the simulation. Simply saying, "just simulate a large # of rounds and hope for the best" is grossly inadequate. While some may argue my aforementioned examples are unrealistic, they were intended simply to MAKE A POINT, and there are many variations on the theme.