Numerical Methods for Engineers [ENGR 391] (Lyes KADEM, 2007)

CHAPTER V

Interpolation and Regression

Topics

Interpolation: Direct Method; Newton’s Divided Difference; Lagrangian Interpolation; Spline Interpolation.

Regression: Linear and non-linear.

1. What is interpolation?

A function is often given only at discrete points such as $(x_0, y_0), (x_1, y_1), \ldots, (x_{n-1}, y_{n-1}), (x_n, y_n)$. How does one find the value of $y$ at any other value of $x$?

Well, a continuous function $f(x)$ may be used to represent the $n+1$ data values, with $f(x)$ passing through the $n+1$ points. Then we can find the value of $y$ at any other value of $x$. This is called interpolation. Of course, if $x$ falls outside the range of $x$ for which the data are given, it is no longer interpolation but extrapolation.

So what kind of function should we choose? A polynomial is a common choice for an interpolating function because polynomials are easy to

- Evaluate

- Differentiate, and

- Integrate

as opposed to other choices such as a sine or exponential series.

Polynomial interpolation involves finding a polynomial of order ‘n’ that passes through the ‘n+1’ points. One of the methods is called the direct method of interpolation. Other methods include Newton’s divided difference polynomial method and Lagrangian interpolation method.

1.2. Direct Method

The direct method of interpolation is based on the following principle. If we have $n+1$ data points, fit a polynomial of order $n$ as given below

$y = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \qquad (1)$

through the data, where $a_0, a_1, \ldots, a_n$ are $n+1$ real constants. Since $n+1$ values of $y$ are given at $n+1$ values of $x$, one can write $n+1$ equations. Then the $n+1$ constants $a_0, a_1, \ldots, a_n$ can be found by solving the $n+1$ simultaneous linear equations (Ahaaa! Do you remember the previous course?!). To find the value of $y$ at a given value of $x$, simply substitute that value of $x$ into the polynomial.

But it is not necessary to use all the data points. How does one then choose the order of the polynomial and which data points to use? This concept and the direct method of interpolation are best illustrated using an example.

1.2.1. Example

The upward velocity of a rocket is given as a function of time in Table 1.

Table 1. Velocity as a function of time

t [s] / v(t) [m/s]
0 / 0
10 / 227.04
15 / 362.78
20 / 517.35
22.5 / 602.97
30 / 901.67

1. Determine the value of the velocity at t = 16 s using the direct method and a first order polynomial.

2. Determine the value of the velocity at t = 16 s using the direct method and a third order polynomial.

Figure 5.2. Velocity vs. time data for the rocket example.
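As a sketch of what the direct method involves for question 2 (Python with NumPy assumed; the variable names are illustrative, not from the course), one picks the four points bracketing t = 16 s and solves the resulting 4x4 linear system:

    import numpy as np

    # Four points bracketing t = 16 s, taken from Table 1
    t = np.array([10.0, 15.0, 20.0, 22.5])
    v = np.array([227.04, 362.78, 517.35, 602.97])

    # Direct method: solve for a0..a3 in v(t) = a0 + a1*t + a2*t^2 + a3*t^3
    A = np.vander(t, 4, increasing=True)      # columns: 1, t, t^2, t^3
    a = np.linalg.solve(A, v)

    # Substitute t = 16 s into the polynomial
    v16 = sum(a[k] * 16.0**k for k in range(4))
    print(v16)                                # approx. 392.06 m/s

Note that the Vandermonde matrix built here becomes ill-conditioned as the number of points grows, which is one practical argument for the Newton and Lagrange forms presented next.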

1.3. Newton’s divided difference interpolation

To illustrate this method, we will start with linear and quadratic interpolation; then the general form of Newton’s divided difference polynomial method will be presented.

1.3.1. Linear interpolation

Given $(x_0, y_0)$ and $(x_1, y_1)$, fit a linear interpolant through the data. Note that $y_0 = f(x_0)$ and $y_1 = f(x_1)$; assuming a linear interpolant means

$f_1(x) = b_0 + b_1 (x - x_0)$

Since at $x = x_0$:

$f_1(x_0) = f(x_0) = b_0$

and at $x = x_1$:

$f_1(x_1) = f(x_1) = b_0 + b_1 (x_1 - x_0)$

Then

$f(x_1) = f(x_0) + b_1 (x_1 - x_0)$

so

$b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$

And the linear interpolant,

$f_1(x) = b_0 + b_1 (x - x_0)$

becomes

$f_1(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0} (x - x_0)$
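For instance, with the rocket data from Table 1, taking $(x_0, y_0) = (15, 362.78)$ and $(x_1, y_1) = (20, 517.35)$:

$f_1(16) = 362.78 + \frac{517.35 - 362.78}{20 - 15}(16 - 15) = 362.78 + 30.914 \approx 393.69 \ \text{m/s}$

which answers question 1 of the example in Section 1.2.1.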

1.3.2. Quadratic interpolation

Given $(x_0, y_0)$, $(x_1, y_1)$, and $(x_2, y_2)$, fit a quadratic interpolant through the data. Note that $y_i = f(x_i)$, and assume the quadratic interpolant given by

$f_2(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1)$

At $x = x_0$:

$f_2(x_0) = f(x_0) = b_0$

At $x = x_1$:

$f_2(x_1) = f(x_1) = b_0 + b_1 (x_1 - x_0)$

then

$b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$

At $x = x_2$:

$f_2(x_2) = f(x_2) = b_0 + b_1 (x_2 - x_0) + b_2 (x_2 - x_0)(x_2 - x_1)$

then

$b_2 = \frac{\frac{f(x_2) - f(x_1)}{x_2 - x_1} - \frac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}$

Hence the quadratic interpolant is given by

$f_2(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1)$


Figure 5.4. Quadratic interpolation
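As a worked illustration with the rocket data from Table 1, take $(x_0, y_0) = (10, 227.04)$, $(x_1, y_1) = (15, 362.78)$, and $(x_2, y_2) = (20, 517.35)$:

$b_0 = 227.04, \qquad b_1 = \frac{362.78 - 227.04}{15 - 10} = 27.148, \qquad b_2 = \frac{\frac{517.35 - 362.78}{20 - 15} - 27.148}{20 - 10} = 0.37660$

so that

$f_2(16) = 227.04 + 27.148(16 - 10) + 0.37660(16 - 10)(16 - 15) \approx 392.19 \ \text{m/s}$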

1.3.3. General Form of Newton’s Divided Difference Polynomial

In the two previous cases, we saw how linear and quadratic interpolants are derived by Newton’s divided difference polynomial method. Let us analyze the quadratic polynomial interpolant formula

$f_2(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1)$

where

$b_0 = f(x_0)$

$b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$

$b_2 = \frac{\frac{f(x_2) - f(x_1)}{x_2 - x_1} - \frac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}$

Note that $b_0$, $b_1$, and $b_2$ are the first, second, and third finite divided differences, respectively. Denoting the first divided difference by

$f[x_0] = f(x_0)$

the second divided difference by

$f[x_1, x_0] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$

and the third divided difference by

$f[x_2, x_1, x_0] = \frac{f[x_2, x_1] - f[x_1, x_0]}{x_2 - x_0}$

where $f[x_0]$, $f[x_1, x_0]$, and $f[x_2, x_1, x_0]$ are called bracketed functions of their variables enclosed in square brackets, we can write

$f_2(x) = f[x_0] + f[x_1, x_0](x - x_0) + f[x_2, x_1, x_0](x - x_0)(x - x_1)$

This leads to the general form of the Newton’s divided difference polynomial for $n+1$ data points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$, as

$f_n(x) = b_0 + b_1 (x - x_0) + \cdots + b_n (x - x_0)(x - x_1)\cdots(x - x_{n-1})$

where

$b_i = f[x_i, x_{i-1}, \ldots, x_0], \quad i = 0, 1, \ldots, n$

and the general definition of the divided difference is

$f[x_k, x_{k-1}, \ldots, x_{k-m}] = \frac{f[x_k, x_{k-1}, \ldots, x_{k-m+1}] - f[x_{k-1}, x_{k-2}, \ldots, x_{k-m}]}{x_k - x_{k-m}}$

From the above definition, it can be seen that the divided differences are calculated recursively.

For an example of a third order polynomial, given $(x_0, y_0)$, $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$,

$f_3(x) = f[x_0] + f[x_1, x_0](x - x_0) + f[x_2, x_1, x_0](x - x_0)(x - x_1) + f[x_3, x_2, x_1, x_0](x - x_0)(x - x_1)(x - x_2)$
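As a sketch in Python (NumPy assumed; the function name is illustrative), the divided differences can be built column by column and the polynomial evaluated in nested form:

    import numpy as np

    def newton_divided_difference(x, y, x_eval):
        """Evaluate the Newton divided difference polynomial at x_eval."""
        x = np.asarray(x, float)
        coef = np.array(y, dtype=float)            # becomes b0, b1, ..., bn
        n = len(x)
        for j in range(1, n):                      # j-th column of the table
            coef[j:] = (coef[j:] - coef[j-1:-1]) / (x[j:] - x[:-j])
        result = coef[-1]                          # nested (Horner-like) form
        for k in range(n - 2, -1, -1):
            result = result * (x_eval - x[k]) + coef[k]
        return result

    t = [10.0, 15.0, 20.0, 22.5]
    v = [227.04, 362.78, 517.35, 602.97]
    print(newton_divided_difference(t, v, 16.0))   # approx. 392.06 m/s

Adding one more data point only appends one new divided difference; the coefficients already computed do not change, which is a practical advantage of the Newton form.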

1.4. Lagrangian Interpolation

Polynomial interpolation involves finding a polynomial of order ‘n’ that passes through the ‘n+1’ points. One of the methods to find this polynomial is called Lagrangian Interpolation.

The Lagrangian interpolating polynomial is given by

$f_n(x) = \sum_{i=0}^{n} L_i(x) f(x_i)$

where $n$ in $f_n(x)$ stands for the $n$th order polynomial that approximates the function $y = f(x)$ given at $n+1$ data points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$, and

$L_i(x) = \prod_{\substack{j = 0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}$

$L_i(x)$ is a weighting function that includes a product of $n$ terms, with the term $j = i$ omitted.
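A direct transcription into Python (the function name is illustrative); since the interpolating polynomial through a given set of points is unique, it returns the same value as Newton’s method:

    def lagrange_interp(x, y, x_eval):
        """Evaluate the Lagrangian interpolating polynomial at x_eval."""
        total = 0.0
        n = len(x)
        for i in range(n):
            L = 1.0                       # weighting function L_i(x_eval)
            for j in range(n):
                if j != i:                # the term j = i is omitted
                    L *= (x_eval - x[j]) / (x[i] - x[j])
            total += L * y[i]
        return total

    t = [10.0, 15.0, 20.0, 22.5]
    v = [227.04, 362.78, 517.35, 602.97]
    print(lagrange_interp(t, v, 16.0))    # approx. 392.06 m/s, as before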

1.5. Spline Method of Interpolation

The spline method was introduced to address one of the drawbacks of polynomial interpolation: when the order $n$ becomes large, in many cases oscillations appear in the resulting polynomial. This was shown by Runge when he interpolated data based on the simple function

$y = \frac{1}{1 + 25 x^2}$

on the interval [-1, 1]. For example, take six equidistantly spaced points in [-1, 1] and find $y$ at these points, as given in Table 2.

Table 2: Six equidistantly spaced points in [-1, 1]

x / y = 1/(1 + 25x^2)
-1.0 / 0.038461
-0.6 / 0.1
-0.2 / 0.5
0.2 / 0.5
0.6 / 0.1
1.0 / 0.038461

Figure 5.5. Fifth order polynomial vs. the exact function.

Now, through these six points, we can pass the fifth order polynomial

$f_5(x) = 0.56731 - 1.7308 x^2 + 1.2019 x^4, \quad -1 \le x \le 1$

(the odd-power coefficients vanish because the data are symmetric about $x = 0$).

When plotting the fifth order polynomial and the original function, you can notice that the two do not match well. So maybe you will consider choosing more points in the interval [-1, 1] to get a better match, but it diverges even more (see figure below). In fact, Runge found that as the order of the polynomial becomes infinite, the polynomial diverges in the intervals $-1 \le x < -0.726$ and $0.726 < x \le 1$.

Figure 5.6. Higher order polynomial interpolation is a bad idea.
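This behaviour is easy to reproduce numerically; a minimal sketch (NumPy assumed; np.polyfit through n points with degree n-1 is exact interpolation, though it warns about poor conditioning at high degree):

    import numpy as np

    def runge(x):
        return 1.0 / (1.0 + 25.0 * x**2)

    x_dense = np.linspace(-1.0, 1.0, 401)
    for n_pts in (6, 11, 21):
        x = np.linspace(-1.0, 1.0, n_pts)
        p = np.polyfit(x, runge(x), n_pts - 1)   # interpolating polynomial
        err = np.max(np.abs(np.polyval(p, x_dense) - runge(x_dense)))
        print(n_pts, err)                        # the maximum error grows

The maximum error near the ends of the interval grows as points are added, exactly as Runge observed.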

1.5.1. Linear spline interpolation

Given $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$, fit linear splines to the data. This simply involves joining consecutive data points by straight lines. So if the above data are given in ascending order, the linear splines are given by ($y_i = f(x_i)$)

$f(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0} (x - x_0), \quad x_0 \le x \le x_1$

$f(x) = f(x_1) + \frac{f(x_2) - f(x_1)}{x_2 - x_1} (x - x_1), \quad x_1 \le x \le x_2$

$\vdots$

$f(x) = f(x_{n-1}) + \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}} (x - x_{n-1}), \quad x_{n-1} \le x \le x_n$

Figure 5.7. Linear splines.

Note that the terms

$\frac{f(x_i) - f(x_{i-1})}{x_i - x_{i-1}}$

in the above functions are simply the slopes between $x_{i-1}$ and $x_i$.
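Linear splines are what NumPy’s np.interp computes; a one-line check with the rocket data (abscissas assumed sorted in ascending order):

    import numpy as np

    t = [0.0, 10.0, 15.0, 20.0, 22.5, 30.0]
    v = [0.0, 227.04, 362.78, 517.35, 602.97, 901.67]

    # Piecewise-linear interpolation between consecutive data points
    print(np.interp(16.0, t, v))   # 362.78 + 30.914*(16 - 15) = 393.694 m/s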

1.5.2. Quadratic Splines

In these splines, a quadratic polynomial approximates the data between two consecutive data points. Given $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$, the splines are given by

$f(x) = a_1 x^2 + b_1 x + c_1, \quad x_0 \le x \le x_1$

$f(x) = a_2 x^2 + b_2 x + c_2, \quad x_1 \le x \le x_2$

$\vdots$

$f(x) = a_n x^2 + b_n x + c_n, \quad x_{n-1} \le x \le x_n$

Now, how do we find the coefficients of these quadratic splines? There are $3n$ such coefficients:

$a_i, \quad i = 1, 2, \ldots, n$

$b_i, \quad i = 1, 2, \ldots, n$

$c_i, \quad i = 1, 2, \ldots, n$

To find the 3n unknowns, we need 3n equations, which we then solve simultaneously. These 3n equations are found as follows.

1)  Each quadratic spline goes through two consecutive data points:

$a_1 x_0^2 + b_1 x_0 + c_1 = f(x_0)$

$a_1 x_1^2 + b_1 x_1 + c_1 = f(x_1)$

$\vdots$

$a_i x_{i-1}^2 + b_i x_{i-1} + c_i = f(x_{i-1})$

$a_i x_i^2 + b_i x_i + c_i = f(x_i)$

$\vdots$

$a_n x_{n-1}^2 + b_n x_{n-1} + c_n = f(x_{n-1})$

$a_n x_n^2 + b_n x_n + c_n = f(x_n)$

This condition gives 2n equations, as there are n quadratic splines, each going through two consecutive data points.

2)  The first derivatives of two quadratic splines are continuous at the interior points. For example, the derivative of the first spline

$a_1 x^2 + b_1 x + c_1$

is

$2 a_1 x + b_1$

The derivative of the second spline

$a_2 x^2 + b_2 x + c_2$

is

$2 a_2 x + b_2$

and the two are equal at $x = x_1$, giving

$2 a_1 x_1 + b_1 = 2 a_2 x_1 + b_2$

$2 a_1 x_1 + b_1 - 2 a_2 x_1 - b_2 = 0$

Similarly, at the other interior points,

$2 a_2 x_2 + b_2 - 2 a_3 x_2 - b_3 = 0$

$\vdots$

$2 a_i x_i + b_i - 2 a_{i+1} x_i - b_{i+1} = 0$

$\vdots$

$2 a_{n-1} x_{n-1} + b_{n-1} - 2 a_n x_{n-1} - b_n = 0$

Since there are $(n-1)$ interior points, we have $(n-1)$ such equations. Now the total number of equations is $2n + (n-1) = 3n - 1$. We still need one more equation.

We can assume that the first spline is linear, that is,

$a_1 = 0$

This gives us 3n equations and 3n unknowns. These can be solved by a number of techniques used to solve simultaneous linear equations.
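A sketch of assembling and solving this 3n-by-3n system in Python (NumPy assumed; the unknown vector is ordered a1, b1, c1, a2, b2, c2, ..., and the function name is illustrative):

    import numpy as np

    def quadratic_spline_coeffs(x, y):
        """Solve for (a_i, b_i, c_i), i = 1..n, of the quadratic splines."""
        n = len(x) - 1                        # number of splines
        A = np.zeros((3 * n, 3 * n))
        r = np.zeros(3 * n)
        row = 0
        # 1) Each spline passes through its two endpoints: 2n equations
        for i in range(n):
            for xv, yv in ((x[i], y[i]), (x[i + 1], y[i + 1])):
                A[row, 3*i : 3*i + 3] = [xv**2, xv, 1.0]
                r[row] = yv
                row += 1
        # 2) Derivative continuity at the n-1 interior points
        for i in range(n - 1):
            xi = x[i + 1]
            A[row, 3*i : 3*i + 2] = [2.0 * xi, 1.0]
            A[row, 3*(i + 1) : 3*(i + 1) + 2] = [-2.0 * xi, -1.0]
            row += 1
        # 3) The first spline is linear: a_1 = 0
        A[row, 0] = 1.0
        return np.linalg.solve(A, r).reshape(n, 3)

    t = [0.0, 10.0, 15.0, 20.0, 22.5, 30.0]
    v = [0.0, 227.04, 362.78, 517.35, 602.97, 901.67]
    a, b, c = quadratic_spline_coeffs(t, v)[2]   # spline on [15, 20]
    print(a * 16.0**2 + b * 16.0 + c)            # velocity estimate at t = 16 s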


2. Regression

2.2. What is regression?

Regression analysis gives information on the relationship between a response variable and one or more independent variables, to the extent that this information is contained in the data. The goal of regression analysis is to express the response variable as a function of the predictor variables. Quality of fit and accuracy of conclusions depend on the data used; non-representative or improperly compiled data result in poor fits and conclusions. Thus, for effective use of regression analysis one must

§  Investigate the data collection process,

§  Discover any limitations in data collected,

§  Restrict conclusions accordingly.

Once a regression relationship is obtained, it can be used to predict values of the response variable, identify variables that most affect the response, or verify hypothesized causal models of the response. The value of each predictor variable can be assessed through statistical tests on the estimated coefficients (multipliers) of the predictor variables.

2.3. Linear regression

Linear regression is the most popular regression model. In this model, we wish to predict the response to $n$ data points $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ by a regression model given by

$y = a_0 + a_1 x$

where $a_0$ and $a_1$ are the constants of the regression model.

A measure of goodness of fit, that is, how well $a_0 + a_1 x$ predicts the response variable $y$, is the magnitude of the residual at each of the $n$ data points:

$\varepsilon_i = y_i - (a_0 + a_1 x_i)$

Ideally, if all the residuals were zero, we would have found an equation on which all the data points lie exactly. Thus, minimization of the residuals is an objective of obtaining the regression coefficients.

The most popular method to minimize the residuals is the least squares method, where the estimates of the constants of the model are chosen such that the sum of the squared residuals is minimized; that is, minimize

$\sum_{i=1}^{n} \varepsilon_i^2$

Why minimize the sum of the square of the residuals? Why not, for instance, minimize the sum of the residual errors or the sum of the absolute values of the residuals?

Alternatively, constants of the model can be chosen such that the average residual is zero without making individual residuals small. For example, let us analyze the following table.

x / y
2.0 / 4.0
3.0 / 6.0
2.0 / 6.0
3.0 / 8.0

To explain these data by a straight line regression model

$y = a_0 + a_1 x$

and using the minimization of $\sum_{i=1}^{n} \varepsilon_i$ as a criterion to find $a_0$ and $a_1$, we find that the line (Figure 5.8)

$y = 4x - 4$

Figure 5.8. Regression curve y = 4x - 4 for the y vs. x data.

gives a sum of residuals $\sum_{i=1}^{4} \varepsilon_i = 0$, as shown in the table below.

x / y / ypredicted / ε = y - ypredicted
2.0 / 4.0 / 4.0 / 0.0
3.0 / 6.0 / 8.0 / -2.0
2.0 / 6.0 / 4.0 / 2.0
3.0 / 8.0 / 8.0 / 0.0

So does this give us the smallest error? It does, as $\sum_{i=1}^{4} \varepsilon_i = 0$. But it does not give unique values for the parameters of the model. The straight line model

$y = 6$

Figure 5.9. Regression curve y = 6 for the y vs. x data.

also makes $\sum_{i=1}^{4} \varepsilon_i = 0$, as shown in the table below.

x / y / ypredicted / ε = y - ypredicted
2.0 / 4.0 / 6.0 / -2.0
3.0 / 6.0 / 6.0 / 0.0
2.0 / 6.0 / 6.0 / 0.0
3.0 / 8.0 / 6.0 / 2.0

Since this criterion does not give a unique regression model, it cannot be used for finding the regression coefficients. Why? Because we want to minimize

$S = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)$

Differentiating this equation with respect to $a_0$ and $a_1$, we get

$\frac{\partial S}{\partial a_0} = -n, \qquad \frac{\partial S}{\partial a_1} = -\sum_{i=1}^{n} x_i$

Setting these equal to zero would give $n = 0$ and $\sum_{i=1}^{n} x_i = 0$, which is impossible. Therefore, unique values of $a_0$ and $a_1$ do not exist.

You may think that the reason this minimization criterion does not work is that negative residuals cancel positive residuals. So would minimizing $\sum_{i=1}^{n} |\varepsilon_i|$ be a better criterion? Let us look at the table below for the model $y = 4x - 4$. It makes $\sum_{i=1}^{4} |\varepsilon_i| = 4$, as shown in the following table.

x / y / ypredicted / |ε| = |y - ypredicted|
2.0 / 4.0 / 4.0 / 0.0
3.0 / 6.0 / 8.0 / 2.0
2.0 / 6.0 / 4.0 / 2.0
3.0 / 8.0 / 8.0 / 0.0

The value $\sum_{i=1}^{4} |\varepsilon_i| = 4$ also results from the straight line model $y = 6$, and no other straight line for these data gives a smaller value. Again, we find the regression coefficients are not unique, and hence this criterion also cannot be used for finding the regression model.

Let us use the least squares criterion, where we minimize

$S_r = \sum_{i=1}^{n} \varepsilon_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2$

$S_r$ is called the sum of the squares of the residuals.

Figure 5.10. Linear regression of y vs. x data showing the residual at a typical point, $x_i$.

To find $a_0$ and $a_1$, we minimize $S_r$ with respect to $a_0$ and $a_1$:

$\frac{\partial S_r}{\partial a_0} = -2 \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i) = 0$

$\frac{\partial S_r}{\partial a_1} = -2 \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i) x_i = 0$

giving

$\sum_{i=1}^{n} y_i = \sum_{i=1}^{n} a_0 + a_1 \sum_{i=1}^{n} x_i$

$\sum_{i=1}^{n} x_i y_i = a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2$

Noting that $\sum_{i=1}^{n} a_0 = n a_0$, these are two linear equations in $a_0$ and $a_1$:

$n a_0 + a_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i$

$a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i$

Solving the above equations gives

$a_1 = \frac{n \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}$

$a_0 = \bar{y} - a_1 \bar{x}$

Redefining

$S_{xy} = \sum_{i=1}^{n} x_i y_i - n \bar{x} \bar{y}, \qquad S_{xx} = \sum_{i=1}^{n} x_i^2 - n \bar{x}^2, \qquad \bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad \bar{y} = \frac{\sum_{i=1}^{n} y_i}{n}$

we can rewrite

$a_1 = \frac{S_{xy}}{S_{xx}}, \qquad a_0 = \bar{y} - a_1 \bar{x}$
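These closed-form expressions translate directly into code; a small sketch (NumPy assumed), checked on the four-point data set used above:

    import numpy as np

    def linear_regression(x, y):
        """Least-squares fit y = a0 + a1*x via the closed-form expressions."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(x)
        xbar, ybar = x.mean(), y.mean()
        Sxy = np.sum(x * y) - n * xbar * ybar
        Sxx = np.sum(x * x) - n * xbar**2
        a1 = Sxy / Sxx
        a0 = ybar - a1 * xbar
        return a0, a1

    x = [2.0, 3.0, 2.0, 3.0]
    y = [4.0, 6.0, 6.0, 8.0]
    print(linear_regression(x, y))   # (1.0, 2.0), i.e. y = 1 + 2x

Unlike the two rejected criteria, the least squares criterion yields a unique line, here $y = 1 + 2x$.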

2.4. Nonlinear models using least squares

2.4.1. Exponential model

Given $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, we can fit the exponential model

$y = a e^{bx}$

to the data. The variables $a$ and $b$ are the constants of the exponential model. The residual at each data point is

$\varepsilon_i = y_i - a e^{b x_i}$

The sum of the squares of the residuals is

$S_r = \sum_{i=1}^{n} \left( y_i - a e^{b x_i} \right)^2$

To find the constants $a$ and $b$ of the exponential model, we minimize $S_r$ by differentiating with respect to $a$ and $b$ and equating the resulting equations to zero:

$\frac{\partial S_r}{\partial a} = -2 \sum_{i=1}^{n} \left( y_i - a e^{b x_i} \right) e^{b x_i} = 0$

$\frac{\partial S_r}{\partial b} = -2 \sum_{i=1}^{n} \left( y_i - a e^{b x_i} \right) a x_i e^{b x_i} = 0$

or

$\sum_{i=1}^{n} y_i e^{b x_i} - a \sum_{i=1}^{n} e^{2 b x_i} = 0$

$\sum_{i=1}^{n} y_i x_i e^{b x_i} - a \sum_{i=1}^{n} x_i e^{2 b x_i} = 0$

These equations are nonlinear in $a$ and $b$ and, unlike the linear regression case, cannot be solved in closed form. In general, iterative methods must be used to find the values of $a$ and $b$.

However, in this case, $a$ can be written explicitly in terms of $b$ as

$a = \frac{\sum_{i=1}^{n} y_i e^{b x_i}}{\sum_{i=1}^{n} e^{2 b x_i}}$

Substituting this into the second equation gives

$\sum_{i=1}^{n} y_i x_i e^{b x_i} - \frac{\sum_{i=1}^{n} y_i e^{b x_i}}{\sum_{i=1}^{n} e^{2 b x_i}} \sum_{i=1}^{n} x_i e^{2 b x_i} = 0$

This equation is still nonlinear in $b$ and can be solved by numerical methods such as the bisection method or the secant method.
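A sketch of the whole procedure in Python (NumPy assumed; the function names and the sample data are made up for illustration): solve the last equation for b by bisection, then recover a from its explicit expression:

    import numpy as np

    def g(b, x, y):
        """Left-hand side of the nonlinear equation in b."""
        e = np.exp(b * x)
        return np.sum(y * x * e) - np.sum(y * e) / np.sum(e * e) * np.sum(x * e * e)

    def exp_fit(x, y, b_lo, b_hi, tol=1e-10):
        """Fit y = a*e^(b*x): bisection on g(b), then a in closed form."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        for _ in range(200):
            b_mid = 0.5 * (b_lo + b_hi)
            if g(b_lo, x, y) * g(b_mid, x, y) <= 0.0:
                b_hi = b_mid                 # root lies in [b_lo, b_mid]
            else:
                b_lo = b_mid                 # root lies in [b_mid, b_hi]
            if b_hi - b_lo < tol:
                break
        b = 0.5 * (b_lo + b_hi)
        e = np.exp(b * x)
        a = np.sum(y * e) / np.sum(e * e)    # explicit expression for a
        return a, b

    # Hypothetical data sampled from y = 2*e^(0.5*x), for illustration only
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = 2.0 * np.exp(0.5 * x)
    print(exp_fit(x, y, 0.1, 1.0))           # recovers approx. (2.0, 0.5)

The bracket [0.1, 1.0] must contain a sign change of g(b); in practice one would first plot or tabulate g(b) to locate such a bracket.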