Computing Reviews
Algorithm 908: online exact summation of floating-point streams
Zhu Y., Hayes W. ACM Transactions on Mathematical Software 37(3): 1-13, 2010. Type: Article
Date Reviewed: Dec 28 2010

At the innermost core of many algorithms in scientific computing lies the task of summing up many floating-point numbers. It is therefore important to have algorithms that handle this task both quickly and accurately. A key issue in this context is the propagation of rounding errors. Zhu and Hayes’ algorithm, OnlineExactSum, presents an impressive solution to this problem.
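To make the rounding-error issue concrete, here is a small illustration (not taken from the paper; the data and the use of Python’s math.fsum as a correctly rounding reference are choices made for this sketch):

```python
import math

# Illustration: 0.1 has no exact binary representation, so each naive
# addition may introduce a fresh rounding error that accumulates.
data = [0.1] * 10_000_000

naive = 0.0
for x in data:
    naive += x               # ordinary recursive summation

exact = math.fsum(data)      # correctly rounded value of the exact total

print(naive)   # drifts away from the true value in the trailing digits
print(exact)   # the correctly rounded value of the exact sum
```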

As its name suggests, the algorithm provably computes the correctly rounded value of the exact sum of the input data. Moreover, the input data stream can be arbitrarily long, while the algorithm needs to see only one input at a time. The only restriction imposed is that no intermediate result is allowed to overflow. Under this assumption, the authors prove the correctness of their method and demonstrate that it runs very fast; it requires only five floating-point operations per summand. Exploiting instruction-level parallelism then leads to a runtime that exceeds that of ordinary recursive summation (which requires just one floating-point operation per summand but may be highly inaccurate) by only a factor of two to three.

The method builds on HybridSum, the authors’ previous algorithm [1], which takes up an idea of Demmel and Hida [2], namely to use extra-precision accumulators to sum floating-point numbers with identical exponents, and adds a fast and accurate method (iFastSum) for computing a correctly rounded sum of these accumulators. The new idea is to double the number of accumulators, which removes the need to split the input data. The algorithm is not only fast; its memory requirements are also very modest and, in particular, independent of the length of the input stream.
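The following Python sketch illustrates the general exponent-indexed accumulator technique described above; it is not the authors’ published OnlineExactSum or HybridSum code. It assumes IEEE double precision, ignores overflow (as the paper does), splits each input with a Veltkamp-style split, and uses Python’s math.fsum in place of iFastSum for the final pass over the accumulators. Unlike the paper’s algorithm, it is only exact as long as no single accumulator receives more than about 2^27 parts:

```python
import math
from collections import defaultdict

SPLIT = 2.0**27 + 1.0  # Veltkamp splitting constant for 53-bit doubles

def split(x):
    """Split x exactly into hi + lo, each with at most 26 significant bits."""
    t = SPLIT * x
    hi = t - (t - x)
    return hi, x - hi

def exact_stream_sum(stream):
    """Correctly rounded sum of a stream, seen one element at a time."""
    acc = defaultdict(float)            # one accumulator per binary exponent
    for x in stream:
        for part in split(x):
            if part != 0.0:
                _, e = math.frexp(part)
                acc[e] += part          # error-free: same exponent, half width
    # Correctly rounded sum of the bounded number of accumulators;
    # math.fsum stands in for the authors' iFastSum here.
    return math.fsum(acc.values())

print(exact_stream_sum(0.1 for _ in range(10_000_000)))
```

Because every value routed to a given accumulator shares that accumulator’s exponent and carries a half-width significand, the inner additions incur no rounding error for up to roughly 2^27 summands per accumulator, and the final pass over the fixed-size accumulator table yields the correctly rounded total with memory independent of the stream length.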

A nice side effect of the fact that the method always produces the correct result is that it always produces the same result, no matter in which order the summands are processed. This matters, for example, in parallel computations such as matrix-vector multiplications or scalar products, where each processor handles a part of the vectors or matrices under consideration. Different runs of the program with identical input data may then, owing to varying processor workloads, deliver intermediate results in different orders; with less sophisticated algorithms, the rounding errors propagate in different ways and the results are no longer reproducible.
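A brief illustration of the reproducibility point (again illustrative Python, with math.fsum as the correctly rounding reference):

```python
import math
import random

# The same multiset of summands in two different orders: recursive summation
# generally returns different results, while a correctly rounded sum is,
# by definition, identical for every ordering.
data = [1e16, 1.0, -1e16] * 10_000
shuffled = data[:]
random.shuffle(shuffled)

def recursive_sum(xs):
    s = 0.0
    for x in xs:
        s += x
    return s

print(recursive_sum(data), recursive_sum(shuffled))   # typically differ
print(math.fsum(data), math.fsum(shuffled))           # both exactly 10000.0
```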

Readers will find that this algorithm has many applications in the field of scientific computing.

Reviewer: Kai Diethelm. Review #: CR138669 (1106-0645)
1) Zhu, Y.-K.; Hayes, W. B. Correct rounding and a hybrid approach to exact floating-point summation. SIAM Journal on Scientific Computing 31 (2009), 2981–3001.
2) Demmel, J.; Hida, Y. Accurate and efficient floating point summation. SIAM Journal on Scientific Computing 25 (2003), 1214–1248.
General (G.1.0)
 
