College of Engineering and Computer Science
Mechanical Engineering Department
ME 692 – Computational Fluid Dynamics
Spring 2002 Ticket: 57541 Instructor: Larry Caretto

Engineering Building Room 2303   Mail Code: 8348   Phone: 818-677-6448

E-mail:   Fax: 818-677-7062


Characteristic Curves

Introduction

The notion of characteristic curves arises in the solution of compressible flow equations. Characteristic curves can be calculated for general differential equations, but only hyperbolic partial differential equations have real characteristic curves.

The simplest model equation that we can consider in CFD is a one-dimensional inviscid flow equation with a constant velocity, called the wave velocity, c.

∂u/∂t + c ∂u/∂x = 0        [1]

We can use a slightly more general form of this equation to discuss characteristics. First we use the following notation for partial derivatives:

u_t = ∂u/∂t        u_x = ∂u/∂x        [2]

We will work with the following, more general, form of equation [1].

a(t,x,u) u_t + b(t,x,u) u_x = f(t,x,u)        [3]

We can recover equation [1] from equation [3] by setting a(t,x,u) = 1, b(t,x,u) = c, and f(t,x,u) = 0.

The solution to equation [3] would be a surface in three-dimensional (t, x, u) space giving u = u(t,x). The normal to this surface would be a vector whose components are the gradient of the solution. That is, the t and x components of the normal would be u_t and u_x. The third component would be plus one if we wanted the normal in the plus u direction or minus one if we wanted the normal in the minus u direction. If we take the normal to the surface in the minus u direction, that normal vector can be expressed as [u_t  u_x  −1]. By definition, the normal vector is perpendicular (orthogonal) to any tangent vector. Thus, if we express the tangent vector as [a  b  f], the orthogonality condition requires that the dot product [a  b  f]·[u_t  u_x  −1] equal zero. This gives the same result as equation [3].

[a  b  f]·[u_t  u_x  −1] = a u_t + b u_x − f = 0        [4]

This gives us a geometric interpretation of equation [3]. Any solution of equation [3] defines a surface u = u(t,x) in (t, x, u) space that is tangent at every point to the vector [a  b  f], whose components are the coefficients in the differential equation.

Because u = u(t,x) we can write the total derivative, du, as follows.

du = u_t dt + u_x dx        [5]

If we compare equation [5] with equation [3], we see that we must have the following relationships.

dt/a = dx/b = du/f        [6]
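To see where equation [6] comes from (a brief expansion of the reasoning, using only quantities already defined above), divide equation [5] by dt and evaluate it along a curve in the t-x plane whose slope is dx/dt = b/a:

\[
\frac{du}{dt} = u_t + u_x\,\frac{dx}{dt} = u_t + \frac{b}{a}\,u_x
             = \frac{a\,u_t + b\,u_x}{a} = \frac{f}{a}
\quad\Longrightarrow\quad
\frac{dt}{a} = \frac{dx}{b} = \frac{du}{f}.
\]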

With this background on equation [3], we seek to prove the following theorem about its solution.

Theorem: The general solution of the equation a u_t + b u_x = f is given by G(v,w) = 0, where G(v,w) is an arbitrary function and where v(t,x,u) = c_1 and w(t,x,u) = c_2 form a solution of the equations dt/a = dx/b = du/f.

To begin the proof of this theorem we examine the proposed solution.

dt/a = dx/b = du/f        [7]

The equations in [7] can be rearranged into two different ordinary differential equations. For example, if we use dt/a = dx/b, we can write the following equation, where we explicitly write the dependence of a and b on t, x, and u.

dx/dt = b(t,x,u)/a(t,x,u)        [8]

This equation relates the two independent variables, and the curve whose slope is dx/dt is called a characteristic curve because it determines the characteristics of the solution as a curve in t-x space, that is, a curve in the space of the independent variables x and t. In equation [8], the most general value of dx/dt depends on u. If the coefficients on the right-hand side of this equation are functions of x and t only, i.e., if a = a(t, x) and b = b(t, x), we have a simple ordinary differential equation which can, in principle, be solved for x(t). When a and b are both constants, this equation reduces to the statement that the projection of the solution surface onto the x-t plane becomes a series of straight lines, i.e., lines of constant slope, dx/dt.
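As a quick worked example (added for illustration, using only the model equation [1]), take a = 1, b = c, and f = 0 in the theorem above. The subsidiary equations dt/a = dx/b = du/f become dt/1 = dx/c = du/0, where du/0 is shorthand for du = 0:

\[
\frac{dx}{dt} = c \;\Rightarrow\; v = x - ct = c_1,
\qquad
du = 0 \;\Rightarrow\; w = u = c_2 .
\]

The general solution G(v, w) = G(x − ct, u) = 0 therefore says that u depends only on x − ct, i.e., u(t,x) = u_0(x − ct): the initial profile is simply translated at the constant wave speed c along the straight characteristic lines x − ct = constant.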

We have already noted that equation [5] and equation [3] constitute two separate relationships between u_t and u_x. We can recap these equations below as two simultaneous linear algebraic equations.

a u_t + b u_x = f
u_t dt + u_x dx = du        [9]

The method of characteristics postulates that there are lines on the solution surface along which both equations in [9] are satisfied, but the values of the partial derivatives u_t and u_x do not have a unique solution. From linear algebra, we know that this occurs when the ratio of determinants in the Cramer's rule solution is indeterminate. Writing the indeterminate conditions for the Cramer's rule solutions for u_t and u_x gives the following results.

u_t = (f dx − b du) / (a dx − b dt) = 0/0        [10]

u_x = (a du − f dt) / (a dx − b dt) = 0/0        [11]

From the numerator and denominator of equation [10] we get the following equations.

a dx − b dt = 0  and  f dx − b du = 0,  so that  dx/dt = b/a  and  du/f = dx/b        [12]

This confirms the results shown in equation [6].

Another approach is to start with the two-dimensional wave equation shown below.

∂²u/∂t² = c² ∂²u/∂x²

This equation can be derived from equation [1] (equation [3] with a = 1, b = c, and f = 0) as shown below. The derivation uses equation [1] and the fact that you can reverse the order of differentiation for mixed second-order partial derivatives.
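The derivation can be written out as follows (filled in here for completeness; it uses only equation [1] and the equality of mixed second derivatives). Differentiate equation [1] once with respect to t and once with respect to x:

\[
\frac{\partial}{\partial t}\left(u_t + c\,u_x\right) = u_{tt} + c\,u_{xt} = 0,
\qquad
\frac{\partial}{\partial x}\left(u_t + c\,u_x\right) = u_{tx} + c\,u_{xx} = 0 .
\]

Since u_{xt} = u_{tx}, the second relation gives u_{xt} = −c u_{xx}; substituting this into the first relation gives u_{tt} = −c u_{xt} = c² u_{xx}, which is the wave equation above.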

Starting with this second-order equation, we can show that it has a general solution, known as the d'Alembert solution, that is written in terms of coordinates ξ and η defined as follows:

ξ = x + ct        η = x − ct

The d'Alembert solution to the wave equation is written in terms of two arbitrary functions, F(ξ) and G(η). This gives the following solution.

u(t,x) = F(ξ) + G(η) = F(x + ct) + G(x − ct)

To show that this is true, we have to rewrite the wave equation in terms of these two functions. To start the coordinate transformations we look at the first derivative terms, using the chain rule.

∂u/∂t = (∂u/∂ξ)(∂ξ/∂t) + (∂u/∂η)(∂η/∂t)

The d'Alembert solution gives a simple expression for the ξ derivative since F is a function of ξ only and G is a function of η only.

∂u/∂ξ = dF(ξ)/dξ = F'(ξ)

In this equation we have defined F’(ξ) as the first derivative of F; we will subsequently define a second derivative in a similar manner.

In a similar fashion we can write

∂u/∂η = dG(η)/dη = G'(η)

Finally, from the definitions of ξ = x + ct and η = x − ct, we can write the following partial derivatives with respect to time.

∂ξ/∂t = c        ∂η/∂t = −c

Combining these derivatives with the chain-rule expression for ∂u/∂t gives the following result.

∂u/∂t = c F'(ξ) − c G'(η)

We find the second time derivative by taking the time derivative of this first derivative. We can simplify the result using the equations above for the coordinate transformation, the d'Alembert solution u = F + G, and the definitions of F'' and G'' as ordinary second derivatives.

∂²u/∂t² = c² F''(ξ) + c² G''(η)

We can repeat this process for the x derivatives; the main difference is in the partial derivatives of the new coordinates with respect to x.

∂ξ/∂x = 1        ∂η/∂x = 1

With these relationships, we can obtain the first derivative with respect to x as follows.

∂u/∂x = F'(ξ) + G'(η)

The second derivative is obtained by taking the x derivative of this first derivative.

∂²u/∂x² = F''(ξ) + G''(η)

We see that the two expressions for the second derivatives above satisfy the original two-dimensional wave equation: ∂²u/∂t² = c²[F''(ξ) + G''(η)] = c² ∂²u/∂x².

Thus the d'Alembert solution satisfies the differential equation. In order to satisfy both the differential equation and the initial conditions, we need to modify the d'Alembert solution to satisfy the following initial conditions.

u(0,x) = f(x)        ∂u(0,x)/∂t = g(x)

We can show that the solution to this problem is given below.

u(t,x) = (1/2)[f(x + ct) + f(x − ct)] + (1/(2c)) ∫_{x−ct}^{x+ct} g(n) dn

In this equation, we have used the d'Alembert solution for the first two terms, where the arbitrary function in this case is the initial condition for u: u(t=0,x) = f(x). The final term is the integral of the initial derivative condition, g, over the dummy variable n. The first two terms have just been shown to satisfy the differential equation. In order to show that the integral term also satisfies the differential equation, we need to use the following formula for differentiation under the integral sign (the Leibniz rule).

(d/dt) ∫_{α(t)}^{β(t)} h(n,t) dn = h(β(t),t) (dβ/dt) − h(α(t),t) (dα/dt) + ∫_{α(t)}^{β(t)} [∂h(n,t)/∂t] dn

We can apply this general result to compute the space and time derivatives of the integral term in the solution. First find the time derivatives. Using the general rule above, with β = x + ct, α = x − ct, and h(n,t) = g(n), we find the following result.

(∂/∂t) [(1/(2c)) ∫_{x−ct}^{x+ct} g(n) dn] = (1/(2c)) [c g(x + ct) + c g(x − ct) + ∫_{x−ct}^{x+ct} (∂g(n)/∂t) dn] = (1/2)[g(x + ct) + g(x − ct)]

In the final step we set the derivative inside the integral equal to zero because n is a dummy variable, so g(n) does not depend on t. The second derivative of the integral is just the derivative of the first derivative, which becomes

(∂²/∂t²) [(1/(2c)) ∫_{x−ct}^{x+ct} g(n) dn] = (c/2)[g'(x + ct) − g'(x − ct)]

We have used the usual notation, g’, to indicate an ordinary first derivative.

In a similar fashion we can take the second-order space derivative of the integral by starting with the first derivative.

(∂/∂x) [(1/(2c)) ∫_{x−ct}^{x+ct} g(n) dn] = (1/(2c))[g(x + ct) − g(x − ct)]

We then take the second derivative from the result above.

(∂²/∂x²) [(1/(2c)) ∫_{x−ct}^{x+ct} g(n) dn] = (1/(2c))[g'(x + ct) − g'(x − ct)]

If we substitute the second time and space derivatives found above into the original two-dimensional wave equation, we see that ∂²/∂t² of the integral term equals c² times ∂²/∂x² of the integral term, so the integral term also satisfies the differential equation.

We can now show that the integral term is added to the solution to satisfy the initial conditions. First consider the condition that u(0,x) = f(x). Setting t = 0 in the proposed solution gives

u(0,x) = (1/2)[f(x) + f(x)] + (1/(2c)) ∫_{x}^{x} g(n) dn = f(x)

Here we use the result that the value of a definite integral is zero when both the lower and upper limits are the same. Taking the time derivative of the solution gives the following result.

∂u/∂t = (c/2)[f'(x + ct) − f'(x − ct)] + (1/2)[g(x + ct) + g(x − ct)]

Here the final term comes from the time derivative of the integral term found above. Setting t = 0 in this result gives ∂u(0,x)/∂t = (c/2)[f'(x) − f'(x)] + (1/2)[g(x) + g(x)] = g(x), so the solution also satisfies the initial condition on the time derivative.
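As an optional numerical check (not part of the original derivation), the short Python sketch below evaluates the d'Alembert solution for an assumed initial profile f(x) = exp(−x²) and zero initial velocity, g = 0, and verifies with finite differences that it satisfies the wave equation. The wave speed, sample points, and step sizes are arbitrary choices for illustration only.

import numpy as np

c = 2.0  # assumed wave speed for this illustration


def f(x):
    """Assumed initial displacement u(0, x)."""
    return np.exp(-x ** 2)


def u(t, x):
    """d'Alembert solution for zero initial velocity (g = 0):
    u(t, x) = [f(x + ct) + f(x - ct)] / 2."""
    return 0.5 * (f(x + c * t) + f(x - c * t))


# Verify u_tt = c^2 u_xx with second-order central differences.
dt = dx = 1.0e-4
for t0, x0 in [(0.3, 0.5), (1.0, -1.2), (2.0, 0.0)]:
    u_tt = (u(t0 + dt, x0) - 2.0 * u(t0, x0) + u(t0 - dt, x0)) / dt ** 2
    u_xx = (u(t0, x0 + dx) - 2.0 * u(t0, x0) + u(t0, x0 - dx)) / dx ** 2
    print(f"t = {t0:4.1f}, x = {x0:5.1f}: u_tt = {u_tt:9.5f}, c^2 u_xx = {c * c * u_xx:9.5f}")

The two printed columns agree to the accuracy of the finite-difference approximation, as the derivation above requires.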

Partial differential equations in two independent variables, with no derivatives greater than second order, can be written, in general, in the following form.

A ∂²u/∂t² + B ∂²u/∂t∂x + C ∂²u/∂x² + D ∂u/∂t + E ∂u/∂x + F u + G = 0        [4-2]

In this equation, the coefficients A to G may be zero, constant, functions of t and x only, or functions of t, x, and u. In addition, the coefficients A to E may be functions of ∂u/∂t or ∂u/∂x as well. Partial differential equations like [4-2] are classified as parabolic, elliptic, or hyperbolic depending on the following relationship among the coefficients:

If B² − 4AC > 0, the equation is hyperbolic.

If B² − 4AC = 0, the equation is parabolic. [4-3]

If B² − 4AC < 0, the equation is elliptic.

These partial differential equations (PDEs) have nothing to do with conic sections; it is just that the conditions for the different behaviors of the PDEs are the same as the conditions that determine the kind of curve that results from a plot of the equation At² + Bxt + Cx² + Dt + Ex + G = 0.

You can verify the classification of the various partial differential equations summarized in the lines below.

The wave equation, ∂²u/∂t² = c² ∂²u/∂x², is hyperbolic.

The conduction equation, ∂u/∂t = α ∂²u/∂x², is parabolic.

Poisson's equation, ∂²u/∂x² + ∂²u/∂y² = S(x,y), is elliptic.

Burgers' equation [4-1] is seen to be a parabolic equation.
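To see how the first three classifications follow from the criterion in [4-3] (a brief worked check added here; the source term symbol S is only a placeholder), compare each equation with the general form [4-2]:

\[
\begin{aligned}
&u_{tt} - c^2 u_{xx} = 0: && A = 1,\; B = 0,\; C = -c^2, && B^2 - 4AC = 4c^2 > 0 && \text{(hyperbolic)}\\
&u_t - \alpha\, u_{xx} = 0: && A = 0,\; B = 0,\; C = -\alpha, && B^2 - 4AC = 0 && \text{(parabolic)}\\
&u_{xx} + u_{yy} = S: && A = 1,\; B = 0,\; C = 1, && B^2 - 4AC = -4 < 0 && \text{(elliptic)}
\end{aligned}
\]

For Poisson's equation the two independent variables are the space coordinates x and y rather than t and x, but the discriminant test is applied in the same way.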

The different classes of equations have different requirements for their boundary or initial conditions. Elliptic equations require boundary conditions at a continuous closed boundary. Hyperbolic and parabolic equations have an open boundary; this is typically in the time variable where conditions specified at t = 0 are called initial conditions. Parabolic equations require one initial condition and two boundary conditions. Hyperbolic equations require two initial conditions and two boundary conditions.

Stability of Finite-Difference Formulations

In our review of finite-difference solutions for the conduction equation, we found that the explicit method was unstable for values of f = αΔt/(Δx)² greater than 0.5. We showed how this condition could be derived based on arguments about physical reality. We also saw that the Crank-Nicolson method, which we claimed to be a stable method, produced results that were not physically realistic. How can we determine, in general, if a finite-difference method is stable?

Recall the definition of stability: a finite-difference equation is stable if errors in the solution do not grow in time. How can we analyze this error growth and determine if it will grow or decay? The answer to this question is provided by a von Neumann stability analysis. In this analysis we look at the behavior of the error on our finite-difference grid. We use the explicit method for the conduction equation from the last set of notes as an example. Here we have a two-dimensional grid in space and time. We start with initial conditions at t = 0 for all x and with boundary conditions for all t at x = 0 and x = L. We ask what happens if we have a set of errors at some time in the solution. For convenience we will take the start of the stability analysis as t = 0 and write the error distribution at this point as ε(x,0). We assume that we can write the error at a future time, ε(x,t), by the following discrete Fourier series using an imaginary exponential.

ε(x,t) = Σ_m e^(a_m t) e^(i β_m x)

Here β_m is the wavenumber of Fourier mode m, and the coefficient a_m determines whether that mode grows or decays in time. The explicit solution for the conduction equation is given by equation [3-21]. This equation is copied below with k used in place of i as the grid index in the x direction.

T_k^(n+1) = f T_(k+1)^n + (1 − 2f) T_k^n + f T_(k−1)^n,  where f = αΔt/(Δx)²        [3-21]

We start by writing the computed solution, T, as the sum of a hypothetical exact solution to the difference equation D, plus an error term, ε.

T_k^n = D_k^n + ε_k^n        [4-4]
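Before continuing with the algebra, it may help to see the stability limit numerically. The short Python sketch below (an illustration added here, with arbitrary grid size, step count, and initial error) advances the explicit scheme [3-21] from a small interior error spike and reports the error magnitude for f = 0.4 and f = 0.6, one value below and one above the 0.5 limit.

import numpy as np

def explicit_step(T, f):
    """One step of the explicit conduction scheme,
    T_k^{n+1} = f T_{k+1}^n + (1 - 2f) T_k^n + f T_{k-1}^n,
    with fixed (zero) boundary values."""
    T_new = T.copy()
    T_new[1:-1] = f * T[2:] + (1.0 - 2.0 * f) * T[1:-1] + f * T[:-2]
    return T_new

nx, nsteps = 51, 200
for f in (0.4, 0.6):
    # Start from a small "error" distribution: a single interior spike.
    err = np.zeros(nx)
    err[nx // 2] = 1.0e-6
    for _ in range(nsteps):
        err = explicit_step(err, f)
    print(f"f = {f}: max |error| after {nsteps} steps = {np.max(np.abs(err)):.3e}")

For f = 0.4 the error decays toward zero, while for f = 0.6 it grows by many orders of magnitude, consistent with the stability limit quoted above.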