ABSTRACT

If a scientist wishes to learn something about a situation, then he will use a mixture of empirical and theoretical procedures to increase his knowledge. In a very precise sense (which has relevance in atomic physics) we can say that only knowledge about interactions is gained, since we must interact with a system in order to learn about it. How far that knowledge can be translated into knowledge about the investigated system itself is still a matter of some debate, for example in connection with the different interpretations of quantum mechanics. For a mathematical problem we are usually more confident that a solution does exist in principle, and that we could approach it by running an appropriately designed program on an ever more accurate and speedy computer. (That ‘accuracy’ is a software concept as well as a hardware one is, of course, part of my theme in this book.) Real computers, however, produce rounding errors, overflow problems and so on, so that we are always looking at our problem with some dirt on the telescope lens, so to speak. One obvious point is that computers work internally with binary numbers, but have input and output in decimal form for the operator’s convenience. The quirks of the translation process mean that a machine such as a ZX-81 might appear to have 9- or 10-digit accuracy, depending on which part of the real number range we are using. A necessary prelude to a serious study of a numerical problem is therefore the task of calibrating the apparatus, so that we can be sure that later results refer to the mathematical problem and not to the computer’s own internal characteristics. In this chapter I outline a few typical ways of discovering (and correcting) the weak spots in a microcomputer’s armour, and then discuss how the correct analysis of a problem can help to improve speed and accuracy.
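To make the digit count concrete: a binary mantissa of m bits carries about m log10 2 decimal digits, and the ZX-81’s five-byte floating-point format is usually described as a one-byte exponent plus a 32-bit mantissa, giving 32 × 0.30103 ≈ 9.6 digits. The Python sketch below simulates such a mantissa and counts decimal round trips; the 32-bit width is an assumption about the hardware rather than something established here, and quantize and survives are helper names invented for the illustration.

    import math, random

    MANTISSA_BITS = 32   # assumed ZX-81-style width; an illustrative
                         # parameter, not a figure taken from the text

    def quantize(x, m=MANTISSA_BITS):
        """Round x to an m-bit binary mantissa, mimicking a short float."""
        if x == 0.0:
            return 0.0
        e = math.floor(math.log2(abs(x)))   # exponent of the leading bit
        scale = 2.0 ** (m - 1 - e)          # lowest kept bit has weight 1/scale
        return round(x * scale) / scale

    def survives(digits, trials=10000):
        """Fraction of random `digits`-digit decimals unchanged by a
        decimal -> binary -> decimal round trip."""
        ok = 0
        for _ in range(trials):
            n = random.randrange(10 ** (digits - 1), 10 ** digits)
            x = float(n)                    # the exact decimal value
            y = quantize(x)                 # what the short mantissa stores
            ok += (f"{y:.{digits}g}" == f"{x:.{digits}g}")
        return ok / trials

    print("9-digit survival :", survives(9))    # 1.0: 9 digits always fit
    print("10-digit survival:", survives(10))   # noticeably below 1.0

Every 9-digit decimal survives, while a 10-digit one survives roughly when its binary exponent is small enough, which is exactly the ‘sometimes 9, sometimes 10’ behaviour described above.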
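One classic calibration probe of the kind just mentioned is to ask the machine for its own arithmetic granularity, the so-called machine epsilon. What follows is a minimal sketch in Python, assuming IEEE double-precision arithmetic; on a 1980s microcomputer the same halving loop, transcribed into its BASIC, would instead expose that machine’s mantissa length.

    # Halve eps until adding it to 1.0 no longer changes the result;
    # the last value that did change it is the machine epsilon.
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    print("machine epsilon:", eps)   # 2**-52 for IEEE double precision

A result that disagrees with the machine’s advertised precision is exactly the sort of ‘dirt on the telescope lens’ that calibration is meant to detect.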