Number Representation, Significant Digits, Precision, Accuracy and Errors

Numerical calculations involve operations on numbers: addition, subtraction, multiplication, division, and so on. Numbers can be integers (e.g., 5, 10, -16), fractions (e.g., -5/6), or numbers with an infinite decimal expansion (e.g., π = 3.1415926535…).

In numerical analysis we deal with numerical values and calculations. Several concepts must be considered:

  1. Number representation
  2. Significant digits
  3. Precision and accuracy
  4. Errors
  5. Rate of Convergence

These concepts are discussed briefly in this section.

Number Representation

Numbers are represented in number systems. Any integer greater than one can serve as the base of a number system: the decimal number system has base 10, the octal number system has base 8, the binary number system has base 2, and so on.

The decimal number system is the most commonly used system for human communication. Digital computers use the binary system, in which a number is stored as a string of binary bits.

The number of bits in a binary number determines the precision with which it can represent a decimal number. The most common size is a 32-bit number, which can represent approximately seven significant decimal digits.

In many engineering and scientific calculations, 32-bit arithmetic is adequate. However, many other applications require 64-bit arithmetic; a 64-bit floating-point number (the IEEE 754 double-precision format) can represent approximately 15 to 16 significant decimal digits. In a few special situations, 128-bit arithmetic may be required.
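To make the 32-bit versus 64-bit comparison concrete, here is a minimal sketch (assuming Python with NumPy, which is not mentioned in the post) that stores π in single and double precision and prints the machine epsilon of each format:

```python
import numpy as np

# A reference value of pi with more digits than either format can hold
pi_ref = 3.14159265358979323846

pi_32 = np.float32(pi_ref)   # single precision: ~7 significant decimal digits
pi_64 = np.float64(pi_ref)   # double precision: ~15-16 significant decimal digits

print(f"32-bit: {float(pi_32):.20f}")   # digits beyond roughly the 7th are not reliable
print(f"64-bit: {float(pi_64):.20f}")   # digits beyond roughly the 16th are not reliable

# Machine epsilon: the gap between 1.0 and the next representable number
print("eps (32-bit):", np.finfo(np.float32).eps)   # about 1.19e-07
print("eps (64-bit):", np.finfo(np.float64).eps)   # about 2.22e-16
```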

Significant Digits

The significant digits, or significant figures, in a number are the digits of the number which are known to be correct.

Rules for Significant Digits

  • All non-zero digits are significant. For example, 1947 contains four significant digits.
  • All zeros that occur between any two non-zero digits are significant. For example, 205.00407 contains eight significant digits.
  • All zeros that are to the right of the decimal point but to the left of the first non-zero digit (leading zeros) are not significant. For example, 0.00786 contains three significant digits. Similarly, trailing zeros in a whole number written without a decimal point are generally not significant, so 100 contains one significant digit.
  • All zeros that are to the right of the decimal point and follow a non-zero digit (trailing zeros) are significant. For example, 500.00 contains five significant digits.
  • All zeros that are to the right of the last non-zero digit, after the decimal point, are significant. For example, 0.0078600 contains five significant digits (78600).
  • Exact numbers have an infinite number of significant digits. This rule applies to numbers that are definitions. For example, 1 meter = 1.00 meters = 1.000 meters =
    1.000000000000000 meters, etc.
  • All the zeros that are on the right of the last non-zero digit are significant if they come from a measurement. For example, 1080 m contains four significant digits.
  • If an overline is placed over a zero, that zero and all digits to its left are significant, while the trailing zeros to its right are not. For example, in 790\overline{0}0 the overlined zero is significant but the trailing zero is not, so the number has four significant digits.

When these numbers are processed through a numerical algorithm, it is important to be able to estimate how many significant digits are present in the final computed result.
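The rules above can be turned into a small helper. The following sketch is illustrative only (it covers the common cases but not overlined zeros or trailing zeros known to come from a measurement) and counts significant digits of a number written as a string, since significance depends on how the number is written:

```python
def count_sig_digits(s: str) -> int:
    """Count the significant digits of a number given as a string.

    Covers the common rules above; it does not handle overlined zeros
    or trailing zeros that are known to come from a measurement.
    """
    s = s.lstrip("+-")
    if "." in s:
        digits = s.replace(".", "").lstrip("0")  # leading zeros are not significant
        return len(digits)                       # trailing zeros after the point are significant
    stripped = s.strip("0")                      # whole number without a decimal point:
    return len(stripped) if stripped else 1      # trailing zeros taken as not significant

for example in ["1947", "205.00407", "0.00786", "100", "500.00", "0.0078600"]:
    print(example, "->", count_sig_digits(example))   # 4, 8, 3, 1, 5, 5
```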

Precision and Accuracy

Measurements and calculations can be characterized with regard to their accuracy and precision.

Accuracy refers to how closely a value agrees with the true value. For numerical methods, accuracy therefore refers to how closely a computed number agrees with the true value it represents.

[Figure: measured values at varying distances from the true value, illustrating accuracy]

In the above figure, values closer to the true value are more accurate, while values farther from the true value are less accurate.

Precision refers to how closely individual values agree with each other. For numerical methods, precision therefore refers to how exactly a number is specified, which depends on the number of significant digits it carries.

[Figure: values clustered tightly together but away from the true value, illustrating precision without accuracy]

In the above figure, the values are nearly the same and close to each other, so they are precise, but they are far from the true value, so they are not accurate.

The term error represents the imprecision and inaccuracy of a numerical computation.

Inaccuracy (also called bias) is a systematic deviation from the true value.

Imprecision (also called uncertainty) refers to a lack of exactness.

[Figure: values scattered and far from the true value, illustrating low accuracy and low precision]

In the above figure, both accuracy and precision are low because the values agree neither with each other nor with the true value. In other words, these values are inaccurate and imprecise.

[Figure: values clustered tightly around the true value, illustrating high accuracy and high precision]

In this figure, both accuracy and precision are high because the values agree with each other as well as with the true value.

We use the term error to represent both inaccuracy and imprecision in our results.
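As a rough numerical illustration (the measurement values below are made up for this example, not taken from the post), bias can be estimated as the deviation of the mean from the true value and imprecision as the spread of the values:

```python
import statistics

true_value = 10.0

# Hypothetical repeated measurements of the same quantity
precise_but_inaccurate = [10.81, 10.79, 10.80, 10.82, 10.80]
accurate_and_precise = [10.01, 9.99, 10.00, 10.02, 10.00]

for name, values in [("precise but inaccurate", precise_but_inaccurate),
                     ("accurate and precise", accurate_and_precise)]:
    bias = statistics.mean(values) - true_value   # inaccuracy: systematic deviation
    spread = statistics.stdev(values)             # imprecision: scatter of the values
    print(f"{name}: bias = {bias:+.3f}, spread = {spread:.3f}")
```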

Errors

The accuracy of a numerical calculation is measured by the error of the calculation. Several types of error can occur in numerical calculations. Some of them are explained briefly below.

  • Iteration errors
  • Approximate errors
  • Round-off errors

Iteration error is the error in an iterative method that approaches the exact solution of an exact problem asymptotically. Iteration errors must decrease toward zero as the iterative process progresses.

Approximate error Ea is defined as the difference between the present approximate value and the previous approximation (i.e., the change between the iterations).

approximate error (Ea) = present approximation − previous approximation
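For instance, in an iterative method such as Newton's iteration for √2 (used here purely as an illustration; the post does not single out a particular method), the approximate error is simply the change in the estimate from one iteration to the next, and it shrinks toward zero together with the iteration error:

```python
import math

# Newton's iteration for sqrt(2): x_{k+1} = (x_k + 2/x_k) / 2
x_old = 1.0
for k in range(1, 7):
    x_new = 0.5 * (x_old + 2.0 / x_old)
    Ea = x_new - x_old                    # approximate error: change between iterations
    true_err = x_new - math.sqrt(2.0)     # iteration error: distance from the exact answer
    print(f"iteration {k}: x = {x_new:.15f}, Ea = {Ea:+.2e}, true error = {true_err:+.2e}")
    x_old = x_new
```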

Every computer has a finite word length (a finite number of bits per stored number). Round-off error is the error caused by this finite word length in the calculations. Round-off error is especially important when small differences between large numbers are calculated.

Most computers have either a 32-bit or a 64-bit word length, corresponding to approximately 7 or 15 to 16 significant decimal digits, respectively.
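A quick way to see why small differences between nearly equal numbers are dangerous is to subtract two close values in both precisions (a minimal sketch assuming NumPy; the specific numbers are chosen only for illustration):

```python
import numpy as np

a, b = 1.23456789, 1.23456780             # true difference is 9.0e-8

diff_32 = np.float32(a) - np.float32(b)   # operands carry only ~7 correct digits
diff_64 = np.float64(a) - np.float64(b)   # operands carry ~15-16 correct digits

print("true difference  :", 9.0e-8)
print("32-bit difference:", diff_32)      # most or all significant digits are lost
print("64-bit difference:", diff_64)      # close to 9.0e-8
```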

Rate of Convergence

In numerical analysis, the order of convergence and the rate of convergence of a convergent sequence are quantities that describe how quickly the sequence approaches its limit.
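For reference, the standard textbook definition (not stated explicitly in this post) is: a sequence x_n converging to a limit L has order of convergence q ≥ 1 and rate (asymptotic error constant) μ if

$$\lim_{n \to \infty} \frac{\lvert x_{n+1} - L \rvert}{\lvert x_n - L \rvert^{\,q}} = \mu, \qquad 0 < \mu < \infty,$$

where q = 1 (with μ < 1) is called linear convergence and q = 2 is called quadratic convergence.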
