Computational methods are the subject of numerical analysis, the study of algorithms that use numerical approximation to solve the problems of mathematical analysis. Some common ideas and concepts of computational methods are explained in this article.
Numerical mathematics has been used for centuries in one form or another within many areas of science and industry, but modern scientific computing using electronic computers has its origins in research and development during the Second World War.
Around the 1950s, numerical analysis was established as a separate discipline of mathematics.
The new capability of performing billions of arithmetic operations in a reasonably short time has led to new classes of algorithms, which need careful analysis to ensure their accuracy and stability.
As a rule, applications lead to mathematical problems which in their complete form cannot be conveniently solved with exact formulas, unless one restricts oneself to special cases or simplified models.
When we present and analyse numerical methods, we study in detail special cases and simplified situations, with the aim of uncovering more generally applicable concepts and points of view which can guide us in more difficult problems.
It is important to keep in mind that the success of the methods presented depends on the smoothness properties of the functions involved.
In most numerical methods, one applies a small number of general and relatively simple ideas. These are then combined with one another in a creative way and with such knowledge of the given problem as one can obtain in other ways.
Common Ideas and Concepts of Computational Methods
One of the most general ideas in numerical calculations is iteration or successive approximation. In general, iteration means the repetition of a pattern of action or process. Iteration is used to improve previous results.
Consider the problem of solving a nonlinear equation of the form
x = F(x),
where F is assumed to be a differentiable function whose value can be computed for any given value of a real variable x, within a certain interval.
Using the method of iteration, one starts with an initial approximation x0, and computes the sequence
x1 = F(x0),
x2 = F(x1),
x3 = F(x2), . . .
Each computation of the type xn+1 = F(xn), n = 0, 1, 2, . . . , is called a fixed-point iteration. As n grows, we would like the numbers xn to be better and better estimates of the desired root.
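As a minimal sketch of this scheme (the function name, tolerance, and the example equation x = cos x are illustrative choices, not from the text):

```python
import math

def fixed_point(F, x0, tol=1e-10, max_iter=200):
    """Iterate x_{n+1} = F(x_n) until two successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# Solve x = cos(x); the iteration converges here because |F'(x)| < 1 near the root.
root = fixed_point(math.cos, 1.0)
```

Whether such an iteration converges at all depends on F; a sufficient condition is that |F'(x)| < 1 in a neighbourhood of the root containing x0.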
A faster alternative is Newton's method, which, however, requires the analytical computation of the derivative of f(x), and which may not always converge to the desired root. We can derive Newton's method graphically, or by a Taylor series. We again want to construct a sequence x0, x1, x2, . . . that converges to the root x = r.
Let x0 be an approximate root of the equation f(x) = 0, at which the value of f(x) is near zero. Then Newton's method tells us that a better approximation for the root is

x1 = x0 - f(x0)/f'(x0),

and in general, xn+1 = xn - f(xn)/f'(xn), n = 0, 1, 2, . . .
Starting Newton’s Method requires a guess for x0, hopefully close to the root x = r.
Newton’s method is based on linearization. This means that a complicated function is approximated with a linear function.
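Under the same assumptions (a differentiable f whose derivative we can evaluate analytically), Newton's method can be sketched as follows; the example root sqrt(2) is an illustrative choice:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeatedly replace x by the root of the tangent line at x."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)  # x_{n+1} = x_n - f(x_n)/f'(x_n)
    raise RuntimeError("Newton iteration did not converge")

# Find sqrt(2) as the positive root of f(x) = x^2 - 2, starting from x0 = 1.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Each step solves the linearized equation exactly, which is why the method converges so quickly once the iterate is close to the root.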
Linearization and Extrapolation
The secant approximation is generally used when one “reads between the lines” or interpolates in a table of numerical values. In this case the secant approximation is called linear interpolation.
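For example, linear interpolation in a table might be sketched like this (the tabulated values of sin x are an illustrative choice):

```python
import math

def linear_interp(x0, y0, x1, y1, x):
    """Value at x of the secant line through the points (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# "Reading between the lines" of a table of sin x at x = 0.5 and x = 0.6:
y = linear_interp(0.5, math.sin(0.5), 0.6, math.sin(0.6), 0.55)
# y can be compared with the true value math.sin(0.55)
```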
When the secant approximation is used in numerical integration, that is, in the approximate calculation of a definite integral, it is called the trapezoidal rule.
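A straightforward composite trapezoidal rule, as a sketch (the test integrand x^2 is an illustrative choice):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule for the integral of f over [a, b], n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# Approximate the integral of x^2 over [0, 1]; the exact value is 1/3.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```

The error of this rule decreases like h^2 as the step h = (b - a)/n is refined, a fact exploited by the extrapolation idea discussed next.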
Extrapolation means estimating a quantity outside the range of the given data. In numerical computation, a particularly useful form is extrapolation to the limit: results computed with several step sizes are combined to estimate the limit as the step size tends to zero.
The extrapolation to the limit can easily be applied to numerical integration with the trapezoidal rule.
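One way to sketch this: compute the trapezoidal approximation with step h and with step h/2, then combine the two to cancel the leading O(h^2) error term. The combination below is the standard one for the trapezoidal rule; the test integral of sin x is an illustrative choice.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

def extrapolated(f, a, b, n):
    """Extrapolation to the limit: T(h/2) + (T(h/2) - T(h))/3 cancels the O(h^2) term."""
    T_h = trapezoid(f, a, b, n)
    T_h2 = trapezoid(f, a, b, 2 * n)
    return T_h2 + (T_h2 - T_h) / 3

# Integral of sin x over [0, pi] is exactly 2.
plain = trapezoid(math.sin, 0.0, math.pi, 8)      # error about 2.6e-2
better = extrapolated(math.sin, 0.0, math.pi, 8)  # error about 2e-5
```

With almost no extra work beyond one further trapezoidal sum, the extrapolated value is several orders of magnitude more accurate.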
Finite Difference Approximations
The local approximation of a complicated function by a linear function leads to another frequently encountered idea in the construction of numerical methods, namely the approximation of a derivative by a difference quotient.
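For instance, forward and central difference quotients for f'(x) can be sketched as follows (the step size h is an illustrative choice; taking h too small loses accuracy to rounding error):

```python
import math

def forward_diff(f, x, h):
    """First-order approximation: f'(x) is approximately (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """Second-order approximation: f'(x) is approximately (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The derivative of sin at x = 1 is cos(1).
d1 = forward_diff(math.sin, 1.0, 1e-5)
d2 = central_diff(math.sin, 1.0, 1e-5)
```

The central quotient is markedly more accurate for the same h, since its error decreases like h^2 rather than h.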