The paper deals with the well-known problem of converting a character string into the internal integer form. It shows that, unfortunately, most programs fail to detect integer overflow correctly. Many illustrative examples, taken from undergraduate computer science texts or from currently available compilers, support the author's claim. A method for correct character-to-integer conversion is presented, and a Euclid module that processes both positive and negative integers of the Ada-based variety is also given.
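
To make the pitfall concrete (the paper's own solution is a Euclid module, which this review does not reproduce), the following is a minimal C sketch of an overflow-safe decimal conversion, assuming a 32-bit two's-complement int; the function name str_to_int is hypothetical.

#include <ctype.h>
#include <limits.h>
#include <stdbool.h>

/* Convert a decimal string to int, reporting overflow instead of
 * silently wrapping.  The value is accumulated in the negative range,
 * because the magnitude of INT_MIN exceeds that of INT_MAX on
 * two's-complement machines.  Returns true on success. */
static bool str_to_int(const char *s, int *out)
{
    bool negative = false;
    if (*s == '+' || *s == '-') {
        negative = (*s == '-');
        s++;
    }
    if (!isdigit((unsigned char)*s))
        return false;                     /* no digits at all */

    int value = 0;                        /* kept non-positive throughout */
    for (; isdigit((unsigned char)*s); s++) {
        int digit = *s - '0';
        /* Check before stepping, so the overflow itself never happens. */
        if (value < (INT_MIN + digit) / 10)
            return false;                 /* would overflow */
        value = value * 10 - digit;
    }
    if (*s != '\0')
        return false;                     /* trailing garbage */

    if (!negative) {
        if (value < -INT_MAX)
            return false;                 /* e.g. "2147483648" does not fit */
        value = -value;
    }
    *out = value;
    return true;
}

Accumulating on the negative side sidesteps the classic error of treating the most negative value as a special case; many textbook versions accumulate a positive value and therefore cannot accept the string form of INT_MIN.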
The problems presented in this paper are of real interest to all computer scientists, and especially to programmers who implement conversion routines. The paper's brevity and clarity are also worth noting.
It would perhaps also be worth attending to the reverse problem: converting the internal integer form to a signed decimal representation. This case, too, is in my opinion handled incorrectly by some compilers currently in use. The problem was first raised by Nicolescu in a Letter to the Editor [1].
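
The reverse direction is delicate for the same reason: the common "negate, then emit digits" approach overflows on the most negative integer, whose magnitude has no positive counterpart. The following is a minimal C sketch, again assuming a 32-bit two's-complement int, with the hypothetical name int_to_str.

/* Write the signed decimal representation of n into buf, which must
 * hold at least 12 characters (sign + 10 digits + terminator for a
 * 32-bit int).  The digits are produced from the negative of the
 * value, so INT_MIN needs no special case. */
static void int_to_str(int n, char buf[])
{
    char tmp[12];
    int i = 0;
    int negative = (n < 0);

    if (!negative)
        n = -n;                        /* -INT_MAX..0 is always representable */

    do {
        tmp[i++] = (char)('0' - n % 10);   /* n % 10 lies in 0..-9 here */
        n /= 10;
    } while (n != 0);

    char *p = buf;
    if (negative)
        *p++ = '-';
    while (i > 0)
        *p++ = tmp[--i];
    *p = '\0';
}

A routine that first negates a negative argument and then divides it down will, on most machines, either trap or print a wrong result for the most negative value, which is presumably the kind of compiler misbehaviour alluded to above.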