32E's Normal & Inverse Normal Distribution



▼
04-07-2012, 02:50 AM
Hello all. Does anyone know if the formulas used for the Q & Q^-1 functions used algorithms to generate a solution, or if they used the coefficient-based equations found in Abramowitz & Stegun's book of mathematical formulas? ▼
04-07-2012, 03:52 AM
Don't know for sure, but as you can see from A&S, certain series and CF expansions for P(x) and Q(x) (which equals 1 - P(x)) are quite elegant and easy to code in loops. I suspect that much less precious ROM is taken up than if the coefficients of a rational approximation need to be stored. As for the inverse normal, I would guess that a lower-degree rational approximation is used to generate an initial estimate that is then refined with an equation solver and the built-in routines for Q(x). I am interested in this too and look forward to the ensuing discussion. Les
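For reference, both A&S routes mentioned here are tiny loops. A rough Python sketch (an illustration of the formulas, not a claim about the 32E's actual microcode): the series 26.2.11 for small to moderate x, and the continued fraction 26.2.14 for the far tail.

```python
import math

def pdf(x):
    """Standard normal density Z(x)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def cdf_series(x, terms=200):
    """P(x) via A&S 26.2.11: P(x) = 1/2 + Z(x) * sum x^(2n+1)/(1*3*...*(2n+1)).
    Works well for small to moderate x."""
    term = x            # n = 0 term: x / 1
    total = 0.0
    for n in range(terms):
        total += term
        term *= x * x / (2 * n + 3)   # next term: multiply by x^2/(2n+3)
        if term < 1e-20 * total:
            break
    return 0.5 + pdf(x) * total

def q_contfrac(x, depth=100):
    """Q(x) via the continued fraction A&S 26.2.14:
    Q(x) = Z(x) / (x + 1/(x + 2/(x + 3/(...)))), good for large x."""
    t = 0.0
    for k in range(depth, 0, -1):     # evaluate bottom-up
        t = k / (x + t)
    return pdf(x) / (x + t)
```

In a ROM-constrained machine, both loops together need only a handful of constants, which fits the point above about rational approximations being comparatively expensive to store.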
04-07-2012, 03:56 AM
Peter Acklam's work is obviously much newer than the 32E, but if you want to see how a full precision inverse normal function can be achieved with a less precise initial estimate that is refined by a solver routine (in this case an extension of Newton known as Halley's method), look here. Edited: 7 Apr 2012, 6:33 p.m. after one or more responses were posted ▼
04-07-2012, 06:46 AM
You can also look at the pdf file on my site (click here) to view some empirical approximations for popular inverse prob. distribution functions that I have come up with.
Namir Edited: 7 Apr 2012, 6:49 a.m.
04-07-2012, 01:26 PM
Well, the Normal/Gauss CDF Q(x) can be evaluated easily with the usual well-known series expansions and continued fraction methods found in A+S (cf. 26.2.10 and 26.2.11 resp. 26.2.14) and other sources. The 34s project uses this approach. Both methods are used, depending on the value of x (small => series, large => cont. fraction).

The inverse (i.e. the quantile function) is another story. A+S list two very simple approximations from Hastings' work of the 1950s which even in the better case provide just three-digit accuracy. The current standard approximation (as found in several numeric packages) is algorithm AS241, published by M.J. Wichura in Applied Statistics, 1988. The original Fortran code uses 20-digit precision and provides results with a relative error less than 1E-16, i.e. correct double precision results over the full range of this data type (i.e. down to about 1E-308).

On the other hand, there is nothing magic about these approximations. Any person with some basic mathematical knowledge and Mathematica/Maple or even Excel is able to set up an individual rational approximation optimized for the desired interval (here: p = 1E-99...0.9999999999). Personally, I'm a big fan of this approach. Some time ago I designed such an approximation for the classic calculators (10 digits, range down to 1E-99). It requires about 20 numeric constants and returns 10 significant digits +/- 1 ULP (if evaluated with internal 13-digit precision).

I cannot say which approach was used in the 32E, but here's the method that's used in the 34s: provide a first rough guess and then use a common and effective root-solving algorithm. The 34s uses a Newton-Halley method which approximately triples the number of correct digits with each step. I am currently testing a similar approach with an initial guess with an absolute error not much more than 0,001 (requires only four 4-digit constants), and then one single correction step (cf. A+S p. 954, example 5), which then provides a result with an error as low as a few units in the 12th (!) significant digit. Which should be fine for a 10-digit machine. ;) OK, the calculation has to carry at least 12 internal digits. Maybe HP has used a similar approach as well.

I assume you own a 32E. Just out of curiosity: how long does it take to evaluate Q(x) and its inverse for x = 1, 5 and 20 resp. p = 0.1, 1E-9 and 1E-99? Dieter
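Dieter's two-step recipe (a crude guess, then a high-order correction in the spirit of the A&S example) can be sketched in Python. This is a reconstruction under stated assumptions, not his actual program: the guess below is built from the textbook asymptotic and series formulas with a switch at p = 0.1 rather than his fitted four-constant guess, and a few third-order correction steps are iterated for safety.

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def phi(x):                      # standard normal density
    return math.exp(-0.5 * x * x) / SQRT2PI

def Q(x):                        # upper-tail CDF, Q(x) = P(X > x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def rough_guess(p):              # crude estimate of Q^-1(p) for 0 < p <= 0.5
    if p < 0.1:                  # tail: invert the asymptotic Q(x) ~ phi(x)/x
        x = math.sqrt(-2.0 * math.log(p))
        for _ in range(2):       # two fixed-point sweeps sharpen the guess
            x = math.sqrt(-2.0 * math.log(p) - 2.0 * math.log(x)
                          - math.log(2.0 * math.pi))
        return x
    w = (0.5 - p) * SQRT2PI      # centre: first two terms of the series
    return w + w ** 3 / 6.0      # expansion of the quantile

def q_inv(p):                    # Q^-1(p) for 0 < p <= 0.5
    x = rough_guess(p)
    for _ in range(4):           # third-order correction: a single step
        e = (Q(x) - p) / phi(x)  # already gives ~12 digits from a good guess
        x += e + x * e * e / 2.0 + (2.0 * x * x + 1.0) * e ** 3 / 6.0
    return x
```

The correction line is the third-order Taylor inversion of Q around the current x; each step roughly triples or quadruples the number of correct digits, matching the behaviour Dieter describes.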
Edited: 8 Apr 2012, 6:24 a.m. after one or more responses were posted ▼
04-07-2012, 04:51 PM
Well, my original 32E I got in '78 had to be retired long ago. But, the good news is I found another one from Romania (Singapore made, it seems) and it works beautifully! Even though the radix/comma notations are European, it's still nice to have my first HP friend back in the family. So, as soon as the 32E comes out of its recharging nap, I'll run those samples by and then post how the 32E responds. Edited: 7 Apr 2012, 4:53 p.m. ▼
04-09-2012, 12:08 AM
Matt, I recommend checking out the relevant WP34S source code here. All of the material for the standard normal distribution (pdf, cdf, and inverse cdf) is from lines 1024 to 1151. The mnemonics for the functions and constants seem self-explanatory to me in most cases, and in other cases one can look for the terms in the relevant included files (namely decn.c and decn.h) to get oriented. It looks like the developers settled on a pretty elegant, self-contained approach for the inverse cdf. It looks like it makes an estimate with qf_Q_est() and then runs that result up to 10 times through a refinement process, breaking out of it early if double precision is achieved. I am going to try to translate this into readable pseudocode and see what I learn. Les
▼
04-09-2012, 12:33 AM
Have a look in doc/distributionformulas for a description of what the 34S does for the statistical distribution functions. Dieter and I had a lengthy discussion about how to best implement these.
▼
04-09-2012, 02:49 AM
Pauli, I did look at that doc file, and it seems a bit out of sync with what's actually going on in the code. At least with respect to what you do with the inverse normal CDF, the actual code in stats.c is somewhat simpler. In stats.c, initial estimates are generated in two different ways depending on whether 0 < p < .2 or .2 <= p < .5. (Probabilities greater than .5 are subtracted from unity and fed through the algorithm with the sign changed in the final result.) The initial estimates for the quantile seem really quite good when the probability is small, rough but in the ballpark in the .2 to .4 range, yet get better again as one moves closer to .5. This is noteworthy because the document describes a series expansion that is exact between .495 and .5, yet there is no trace of it in the final code. I can't tell yet if the refinement loop is Halley or straight Newton (I think it's the former), but it is important to me to note that the calculator's solver is not called; this refinement process requires one guess, not two, and is self-contained. I am really interested in the source of the formulas for the initial estimates on the two intervals noted. Les ▼
04-09-2012, 03:01 AM
The solver code used to be in charge for internal purposes but eventually got replaced by a special-purpose solver solution (Newton).
04-09-2012, 03:06 AM
The source of the formulas is Dieter, in general. Some are well known, though. The document I referenced contains a couple of iterations for the normal distribution, thus there is some fluff there still. My two primary concerns were accuracy and code space (they still are, BTW), thus things had to be modified to accommodate this. Hence the revisions and iterations. - Pauli
Edited: 9 Apr 2012, 3:11 a.m. ▼
04-09-2012, 05:41 AM
Thanks, Pauli. I wanted to share that I have been playing around with what I have extracted from the stats.c file, and it seems to me that the best cutoff to use in generating the inverse normal CDF initial estimate is .325, not .2. I did some fiddling with Mathematica:
If .2 is used as the cutoff, closer to the break the relative error can be as high as .4 (I don't show this), but on moving the cutoff up to .325, the estimating curve is continuous (for practical purposes) and max relative error is 15% or so. It just seems to me that the better the initial estimate, the fewer refinement iterations, and the faster things go. Any thoughts? ▼
04-09-2012, 06:40 AM
Nice work. I'm not really surprised there is a better threshold value. One constraint we're living under is to reuse constants as much as possible. 0.2 is available. 0.2214 and 0.25 are both present too, and I could use these instead. The next constant up is 0.4. Adding a new constant 0.325 would have consumed more bytes, which we didn't have at the time, and would be best to avoid consuming even now. Given the extremely rapid convergence of the follow-up optimisation steps, I doubt the actual threshold used will make much difference. At most a single iteration, and most likely no difference. However, that isn't the entire story. The standard normal guess is also used to provide an initial estimate for other distributions. Still, a separate verification of the methods we've used is nice, and improvements in the initial estimates are worthwhile. - Pauli ▼
04-09-2012, 03:58 PM
Ah, yes! Those constants in compile_consts.c must take up a heck of a lot of space, so the fewer the better. Switching the cutoff to .25 is a plus: the max relative error on the range drops to 30% from 40%. But you are right: it probably makes little difference in the number of refinement iterations needed.
▼
04-09-2012, 06:01 PM
Les, Quote: Please forgive me if I missed something, but the current implementation has a max. relative error near 2,2% (estimate 0,8229 vs. exact value 0,8416). Moving the switch point to p=0,25 would change this to 2,0%. Dieter
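Dieter's figures are easy to reproduce, assuming the central guess is w + w^3/6 with w = (0.5 - p)*sqrt(2*pi), the two-term series he mentions elsewhere in the thread; a quick Python check:

```python
import math

p = 0.2
w = (0.5 - p) * math.sqrt(2.0 * math.pi)
estimate = w + w ** 3 / 6.0               # central initial guess at p = 0.2
exact = 0.8416212335729143                # Q^-1(0.2), i.e. the 80th percentile
rel_err = abs(estimate - exact) / exact
print(round(estimate, 4), round(100.0 * rel_err, 1))   # 0.8229 2.2
```

This matches both numbers Dieter quotes: the estimate 0,8229 against the exact 0,8416, and the roughly 2,2% relative error at the switch point.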
04-09-2012, 05:31 PM
Les, thank you for your work and the error plot. However, it seems you used different formulas for the initial quantile guess than those that actually got implemented. ;) After a first approach with the solver that required two guesses (one a bit high, the other a bit low), the whole thing was changed to an algorithm that requires just *one* initial guess. This guess is different for 0<p<0,2 and 0,2<p<0,5. The details can be found here in the original discussion.
In VBA it looks like this:

const root2pi = 2.50662827463100

Edit: the actual implementation may differ slightly - the last term for small p may be 1/u^2 instead of the original 0,2/u. But that will not change much. The switch at 0,2 was chosen because at this point the absolute (!) errors match quite closely: they are 0,015 resp. 0,019, in both cases a bit low in absolute terms. You are right in that a switch at p=0,25 would be preferable in terms of the relative (!) error. And 0,22 looks best for both. ;) Here's a graph of the absolute error. Sorry for the quality, I had to squeeze it below 125 K.
This initial guess is then improved by a Newton-Halley iteration. When evaluated with sufficient precision, three iterations usually return 35...40-digit accuracy - please see the examples in the thread mentioned above. BTW, the guess for larger p actually uses the first two terms of the series expansion for the Normal quantile. That's why it gets better and finally exact as p approaches 0,5. ;)
Dieter Edited: 9 Apr 2012, 6:29 p.m. ▼
04-09-2012, 08:02 PM
Dieter, I thoroughly misread several DecNumber library commands in extracting the algorithm. Thanks for correcting me, especially given that the max relative error is more like 2%. Excellent work. Sorry I missed last year's discussion, and I am glad you pointed me to the thread. Les ▼
04-10-2012, 06:20 PM
No problem, Les. Finally, here's a better version of the error graph. The x-axis represents the exact quantile, and the red and blue lines show the absolute error (this time based on the 1/u^2 tweak that, I assume, was also implemented in the 34s).
The red line refers to the central estimate for p >= 0.2, and the blue line to the other one (p < 0.2). The vertical dotted line marks the break at p = 0.2 (resp. x = 0.8416). At this point the error changes from -0.02 to +0.02. From there it drops down to -0.02 again and finally approaches zero far out at the 34s working range limit. Dieter
Edited: 10 Apr 2012, 6:23 p.m.
04-10-2012, 09:25 AM
If HP were going to include a single pair of probability integral CDF and inverse CDF functions, I would have thought that providing this for the t-distribution would have been more useful. By taking one more argument from the stack representing the df, it would allow better small-sample statistics, as well as asymptotic approximations to z by setting a large df. This was my workaround for the HP 20b normal distribution bug: I would plug in df=9999 for t. Nick
Edited: 10 Apr 2012, 10:06 a.m. ▼
04-10-2012, 06:13 PM
I suspect the reason here was code space; most (all?) t quantile function implementations I've seen rely on the normal QF at some stage.
Distribution Parameter PDF CDF QF extras total

The extras column is for the incomplete beta function that the CDF requires. The t, F and binomial distributions all use this function, so it could be shared if multiple distributions are implemented. The standard normal doesn't need any special parameter handling; its parameters are always 0 and 1. The Student-t distribution, on the other hand, requires checking its parameter for being a positive integer. It also requires some special-case code for very small degrees of freedom (1 & 2). The analytically solvable 4 df case isn't handled specially but could (should) be. Additionally, the t distribution requires even more involved underlying functions: gamma and log(1+x), which weren't otherwise available on the 32s. The gamma implementation only needs to handle integers and half integers, so some shortcuts could be taken here. Finally, the t QF requires the normal QF as a starting guess in some cases.
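The low-df special cases mentioned above have textbook closed forms. A hedged Python sketch (standard results, not the 34s code; p is the lower-tail probability):

```python
import math

def t_quantile(p, nu):
    """Lower-tail Student-t quantile for the analytically solvable cases
    nu = 1, 2, 4 (standard closed forms, not the 34s implementation)."""
    if nu == 1:                        # Cauchy: F(x) = 1/2 + atan(x)/pi
        return math.tan(math.pi * (p - 0.5))
    if nu == 2:                        # F(x) = 1/2 + x / (2*sqrt(2 + x^2))
        return (2.0 * p - 1.0) / math.sqrt(2.0 * p * (1.0 - p))
    if nu == 4:                        # known trigonometric solution
        a = 4.0 * p * (1.0 - p)
        q = math.cos(math.acos(math.sqrt(a)) / 3.0) / math.sqrt(a)
        return math.copysign(2.0 * math.sqrt(q - 1.0), p - 0.5)
    raise ValueError("no closed form implemented for this nu")
```

For nu = 1 this is just the Cauchy quantile; the nu = 4 trigonometric form is the "analytically solvable" case referred to above.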
▼
04-10-2012, 10:37 PM
Pauli, I am wondering if the code could be further economized by using the regularized incomplete gamma function as the starting point for the Phi and Chi-Square CDFs, as well as erf and erfc? The error functions and the standard normal and chi-square CDFs are all special cases of the incomplete gamma function. If the issue is speed or the accumulation of inaccuracies, that could be why you prefer calculating those distributions directly... ▼
04-11-2012, 12:10 AM
Quote: I use the normal QF for the erfc function and the incomplete gamma function for erf. The reason is one of accuracy for small arguments. The 1 - z computation to reflect the error function causes problems - this follows through to the distributions as well. I originally did compute them all from the incomplete gamma and had to change once the inaccuracies became apparent.
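For context, the identities involved are erf(x) = P(1/2, x^2) and erfc(x) = 1 - P(1/2, x^2), with P the regularized lower incomplete gamma function. A plain-Python sketch of the incomplete-gamma route (the standard series, not the 34s decNumber code):

```python
import math

def reg_lower_gamma(a, x, terms=300):
    """Regularized lower incomplete gamma P(a, x) via the standard series
    x^a * e^-x / Gamma(a) * sum_{n>=0} x^n / (a * (a+1) * ... * (a+n))."""
    if x == 0.0:
        return 0.0
    term = 1.0 / a                    # n = 0 term
    total = term
    for n in range(1, terms):
        term *= x / (a + n)           # recurrence for the next term
        total += term
        if term < 1e-18 * total:
            break
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def erf_via_gamma(x):
    """erf(x) = P(1/2, x^2); accurate for small x, where 1 - erfc(x)
    would cancel."""
    return math.copysign(reg_lower_gamma(0.5, x * x), x)
```

Computing erfc this way would hit exactly the 1 - z cancellation Pauli mentions, which is why a direct tail computation is preferable there.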
04-11-2012, 01:50 AM
I was thinking more in terms of which function (along with its inverse) to offer, if all the keyboard area one had available was a single calculator button. Though it seems that the added error checking and implementation complexity you mention may have been too much to fit in a classic HP calculator. Nick ▼
04-11-2012, 03:48 AM
Nick, Quote: Now that goes into my area. I agree with you on the t distribution being more beneficial than the standard normal in real-world problems. See e.g. page 19 of the most recent edition of "the manual". Since OTOH even most engineers I know hardly went beyond the Gaussian distribution, we decided very early to offer the standard normal in the one and only location left on the keyboard in the statistics area. A second reason was that there is a sample program for the standard normal distribution in almost every manual of an HP scientific where this function isn't hard-coded. Depending on your area of interest there may be more useful distributions, but for them you need more keyboard space. You find [Phi] and its inverse below PROB now, where all the other distributions live. You may assign your favourite to one of the hotkeys. Enjoy!
04-10-2012, 10:29 PM
Hi there. Dieter,
Edited: 10 Apr 2012, 10:31 p.m. ▼
04-11-2012, 04:25 PM
Now, that's interesting. Your results do not reflect the right tail Q of the normal distribution, but the left one, which is usually designated P. Strange. Any ideas why HP did it this way? My own program BTW returns both P and Q in x and y. ;) Would you mind doing the same calculations again with negative x, i.e. -1, -5 and -20? And of course the quantile function for 0.1, 1E-9 and 1E-99. Pleeeease. 8) Just for comparison, here are the timings of my HP35s user program: P(1) requires 3 seconds, P(5) about 2, and P(20) just one second. So today a 35s user-code program runs faster than the respective internal (machine code) function from 30 years ago. Dieter
Edited: 11 Apr 2012, 4:28 p.m.
04-14-2012, 01:53 PM
My guess is that the 32E is using a series approximation irrespective of whether the value of the argument is large or small. So higher values result in longer computation times, since the bigger the argument, the more terms are required to achieve convergence. This strikes me as a bit strange, since I would expect that computing the normal CDF in a Spice would be a little more sophisticated. The solver and integrator of the 34C were cutting edge in their day. Les ▼
04-14-2012, 06:07 PM
Les, Quote: If (!) the 32E really used a series to determine the CDF, I wonder which one might have been used. At least it cannot be one of the "usual" two (i.e. A+S 26.2.10 and 26.2.11), since these actually evaluate the integral from 0 to x and finally add 0.5 to get the value from -infinity to x. In other words: the result loses valuable significant digits as soon as Q(x) drops below 0.1, let alone for smaller results. This, on the other hand, does not happen with the CF expansion the 34s uses for large x, which is another reason for the two different methods implemented there. To be precise, the switch should happen at x = 1.2815515655446... where Q(x) is 0.1, but actually it is somewhere near x = 3 (i.e. Q(x) around 0.001), so that two or three digits are lost. Which doesn't matter much with 39 digits working precision. So, if in fact a series approximation was used in the 32E, it cannot be one of the two well-known ones, simply because very small results like 1E-20 or even 1E-99 cannot be evaluated with the 32E's 13 internal digits. In cases like these a plain zero would be returned. Dieter
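Dieter's cancellation point is easy to demonstrate even in ordinary double precision: build Q(x) by subtracting the -infinity-to-x mass from 1 (here via erf, standing in for the series route) and compare with a direct tail computation (erfc). A small sketch:

```python
import math

x = 6.0
# direct tail computation: full relative accuracy
q_direct = 0.5 * math.erfc(x / math.sqrt(2.0))      # ~9.87e-10
# detour through the left tail, 1 - P(x), as the series route would do
q_lossy = 0.5 - 0.5 * math.erf(x / math.sqrt(2.0))
# the subtraction wipes out roughly one digit per decade that Q(x)
# lies below 0.5, so the relative error is far above machine epsilon
print(q_direct, q_lossy, abs(q_lossy - q_direct) / q_direct)
```

With 13 internal digits instead of 16, the same mechanism would return a plain zero well before x = 20, which is exactly Dieter's argument against a naive series in the 32E.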