
CHAPTER 6

Special Methods for Simulating Linear Systems

6.1 Introduction

In the simulation of dynamic systems it is often found that at least portions of the system can be described by linear ordinary differential equations. For example, control systems, although generally nonlinear, will frequently contain linear subsystems, such as controllers and/or linear filters. Also, elastic modes in flexible structures are often modelled using normal or generalized coordinates, which are represented by lightly damped second-order linear systems. In such cases it is usually more computationally efficient to employ specialized techniques for simulating these linear subsystems instead of standard integration algorithms. One of the techniques which has been widely used is the state-transition method. Another technique is Tustin’s substitution method, which is simply trapezoidal integration. In this chapter we describe these techniques and analyze their dynamic accuracy in the simulation of linear systems. We will employ the same error measures used in the previous chapters, namely, the characteristic root and sinusoidal transfer function errors in the form of asymptotic formulas for small dimensionless integration step sizes.

6.2 The State-transition Method

The state-transition method is based on the analytic solution of the linear state equations over the integration step h in terms of the state-transition matrix*. By definition the method produces solutions that have characteristic roots which match exactly the roots of the linear system being simulated. This in turn means that the method permits any integration step size h without causing instability, if the system being simulated is itself stable. The state-transition method does, however, involve the evaluation of a definite integral which contains the input function and the state-transition matrix. The computer evaluation of this definite integral for arbitrary input functions represented as data sequences can only be approximate. This is where the state-transition method introduces dynamic errors into the simulation of linear systems. We will examine these dynamic errors in the frequency domain for various numerical methods of evaluating the definite integral. In particular, we will determine the transfer function gain and phase errors which each method produces as a function of the step size h, the input frequency ω, and the zeros and poles of the linear system being simulated. These gain and phase errors will then be compared with those produced by both conventional integration algorithms and other specialized algorithms.

The transfer function gain and phase error comparisons will show that in many cases the state-transition method is less accurate than conventional integration methods, e.g., when the input frequencies are well below the bandwidth of the linear system being simulated. We will also show that evaluation of the definite integral based on analytic approximations to the input function over the interval of integration can lead to transfer function gain and phase errors which are completely independent of the system being simulated and depend only on ωh, the product of the frequency and the step size.

* See, for example, Chen, C.T., Introduction to Linear System Theory, Holt, Rinehart and Winston, Inc., 1970.

We will assume that the linear system being simulated is of order k and has m scalar inputs. The system can be represented by the following state vector equation:

Ẋ = AX + BF(t)    (6.1)

Here X is the state vector with scalar components x1, x2, … , xk, A is the k×k state variable coefficient matrix, F(t) is the m-component input vector, and B is the k×m input coefficient matrix. The exact analytic solution for X(t) starting with the initial state X(t0), can be written as*

X(t) = W(t – t0)X(t0) + ∫_{t0}^{t} W(t – τ)B F(τ) dτ    (6.2)

where W(t) is the k×k state-transition matrix. Thus the element wij(t) of the matrix W represents the solution xi(t) that results from a unit initial condition at t = 0 on xj. To use Eq. (6.2) for digital simulation of the linear system we let t0 = nh and t = (n + 1)h, where n is an integer and h is the time step used for the numerical solution. Denoting X(nh) by Xn and W(h) by W, we obtain from Eq. (6.2) the following vector equation for Xn+1:

Xn+1 = W Xn + ∫_{0}^{h} W(h – τ)B F(τ + nh) dτ    (6.3)

For F = 0 and any initial condition X(0) = X0, the difference equation represented by Eq. (6.3) produces a digital solution X1, X2, X3, … which will be exactly correct at the discrete times h, 2h, 3h, … , respectively. This means that the characteristic roots of the digital solution will also be exact. Any dynamic errors exhibited by the state-transition method will result from the numerical method chosen to calculate the definite integral in Eq. (6.3).

To illustrate the use of the state-transition method we consider the first-order linear system represented by the equation

ẋ = λx + ƒ(t)    (6.4)

where λ is the characteristic root. Here the 1×1 state-transition matrix W is simply the scalar time function

W(t) = e^{λt}    (6.5)

Substituting Eq. (6.5) into Eq. (6.3), we obtain

xn+1 = e^{λh} xn + ∫_{0}^{h} e^{λ(h–τ)} ƒ(τ + nh) dτ    (6.6)

The simplest way to evaluate the definite integral is to use rectangular integration. Thus we let the integrand be equal to its value at τ = 0, in which case the integral becomes simply h e^{λh}ƒ(nh) = h e^{λh}ƒn, and Eq. (6.6) becomes

* Chen, op. cit.

 

xn+1 = e^{λh}(xn + hƒn)    (6.7)

Assuming a fixed step size h, we next use Z-transform theory to determine the transfer function for sinusoidal input data sequences. Taking the Z transform of the difference equation represented by Eq. (6.7), we have

H*(z) = h e^{λh}/(z – e^{λh})    (6.8)

where H*(z) is the Z transform of the digital system represented by the state-transition simulation of our first-order system.

The root z1 of the denominator of H*(z) is related to the equivalent characteristic root λ* of the digital system by the formula z1 = eλ*h. But from Eq. (6.8) z1 = eλh. Thus λ* = λ, i.e., the characteristic root of the digital system matches exactly that of the continuous system, which is inherent in the state-transition method, as noted earlier.
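The exact-root property is easy to check numerically. The following minimal sketch (variable and function names are ours, not the book's) uses the rectangular-integration update xn+1 = e^{λh}(xn + hƒn) derived above and propagates the zero-input solution, which should match the exact exponential at every sample time:

```python
import math

# Sketch of the scalar state-transition update with rectangular integration
# of the input term: x[n+1] = exp(lam*h) * (x[n] + h*f[n]).
def step_rect(x, f_n, lam, h):
    return math.exp(lam * h) * (x + h * f_n)

lam, h = -2.0, 0.1
x = 1.0
for n in range(50):
    x = step_rect(x, 0.0, lam, h)   # zero input: the digital roots are exact
exact = math.exp(lam * 50 * h)      # exact solution of xdot = lam*x, x(0) = 1
print(abs(x - exact))               # agrees to machine precision
```

With any nonzero input the only error comes from the approximate evaluation of the input integral, exactly as the text states.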

6.3 Determination of Transfer Function Errors

We recall from Chapter 2 that the transfer function for sinusoidal input data sequences is given by H*(ejωh), where H*(z) is the system Z transform. Thus the transfer function for the simulation of the first-order system with Z transform given by Eq. (6.8) becomes

H*(e^{jωh}) = h e^{λh}/(e^{jωh} – e^{λh})    (6.9)

From Eq. (6.4) we see that the transfer function of the continuous system is

H(jω) = 1/(jω – λ)    (6.10)

The fractional error in the digital transfer function, (H* – H)/H, can be determined by dividing Eq. (6.9) by Eq. (6.10) and subtracting 1. Expanding e^{jωh} and e^{λh} in the resulting ratio with power series and retaining terms to order h², we obtain the following approximate formula:

H*/H – 1 ≅ (λ – jω)h/2    (6.11)

For any simulation of reasonable accuracy the complex number represented by the right side of Eq. (6.11) will have a magnitude which is small compared with unity. Earlier, in Eqs. (1.53) and (1.54) in Chapter 1, we showed that the real part of this complex number, denoted by eM, is approximately equal to the fractional error in transfer function gain and the imaginary part, denoted by eA, is equal to the transfer function phase error. Thus we can write

eM ≅ λh/2    (6.12)

eA ≅ –ωh/2    (6.13)

Note that the gain and phase errors both vary as the first power of the step size h. This is to be expected, since we used rectangular integration to approximate the definite integral in Eq. (6.6). Note also that the phase error corresponds to a time delay of h/2, which again is representative of rectangular integration.
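These first-order error formulas can be checked numerically. The sketch below (our own check, assuming the rectangular-integration transfer function H*(z) = h e^{λh}/(z – e^{λh}) derived above) compares the exact fractional error H*/H – 1 with the asymptotic predictions λh/2 and –ωh/2:

```python
import cmath
import math

# Check eM ~ lam*h/2 and eA ~ -w*h/2 for the rectangular-integration
# state-transition simulation of a first-order system with root lam.
lam, w, h = -1.0, 0.5, 0.01
z = cmath.exp(1j * w * h)
Hstar = h * math.exp(lam * h) / (z - math.exp(lam * h))
H = 1.0 / (1j * w - lam)
err = Hstar / H - 1.0
print(err.real, lam * h / 2)   # gain error vs. asymptotic prediction
print(err.imag, -w * h / 2)    # phase error vs. asymptotic prediction
```

For h this small the order-h² corrections are negligible and the two columns agree to several digits.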

An alternative method of evaluating the definite integral in Eq. (6.6) is to approximate the input over the interval h by a constant ƒn. The resulting integral is then evaluated analytically to obtain the difference equation

xn+1 = e^{λh} xn + [(e^{λh} – 1)/λ]ƒn    (6.14)

This results in the following system Z transform:

H*(z) = (e^{λh} – 1)/[λ(z – e^{λh})]    (6.15)

With the same method used to obtain Eqs. (6.12) and (6.13) from (6.8), we obtain the following approximate formulas for the transfer function gain and phase errors in this case:

eM ≅ 0,   eA ≅ –ωh/2    (6.16)

Here the gain error eM is zero to order h, whereas the phase error eA is of order h and corresponds to a time delay of h/2. It is interesting to note here that the transfer function error of order h exhibits no dependence on the characteristic root λ of the first-order system being simulated. This is not unexpected when one recalls that we have evaluated the definite integral of Eq. (6.6) exactly once we have approximated ƒ(τ + nh) with ƒn over the integration interval h. In this case Eq. (6.6) is an exact solution for a continuous input function that is equivalent to zero-order extrapolation based on the sample ƒn of the continuous input ƒ(t). The only errors exhibited by the state-transition method, then, will be the errors due to the staircase-like approximation used for ƒ(t), which in the frequency domain is dominated by the phase error –ωh/2, as we have already seen in Chapter 5.
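The zero-order-hold interpretation can be verified directly: when the input really is held constant over each frame, the difference equation is exact. A small sketch (assuming the update xn+1 = e^{λh}xn + [(e^{λh} – 1)/λ]ƒn discussed above, with illustrative parameter values):

```python
import math

# ZOH state-transition update for xdot = lam*x + f; exact when f is held
# constant over each frame, checked here against the exact unit-step response.
def step_zoh(x, f_n, lam, h):
    e = math.exp(lam * h)
    return e * x + (e - 1.0) / lam * f_n

lam, h = -3.0, 0.2
x = 0.0
for n in range(25):
    x = step_zoh(x, 1.0, lam, h)                 # unit step input
exact = (math.exp(lam * 25 * h) - 1.0) / lam     # x(t) = (e^(lam t) - 1)/lam
print(abs(x - exact))                            # exact at the sample times
```

For a sinusoidal input, by contrast, the staircase approximation produces the –ωh/2 phase error quoted in Eq. (6.16).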

Since the transfer function for any order of linear system can be represented as a partial fraction expansion of first-order transfer functions of the type shown in Eq. (6.10), even in the case where λ is complex, we conclude that the results in Eq. (6.16) are equally valid for the approximate transfer function gain and phase errors in simulating any order of linear system when the state-transition method is used with ƒn as the approximation to the input ƒ(t) over the interval of integration. We have computed the gain and phase errors in Eq. (6.16) by determining to order h the fractional transfer function error, H*/H – 1. When H*/H is determined to order h², the following approximate formula is obtained:

H*/H ≅ 1 – (ωh)²/12 – j(ωh/2)(1 – λh/6)    (6.17)
The formulas for eM and eA follow directly. The gain error to order h² is obtained by taking the square root of the sum of the squares of the real and imaginary parts of the right side of Eq. (6.17) and subtracting unity. The phase error is simply the imaginary part of H*/H in Eq. (6.17). Thus we obtain

eM ≅ (ωh)²/24,   eA ≅ –(ωh/2)(1 – λh/6)    (6.18)

We note that the gain error of order h² does not depend on the system characteristic root λ. However, the h² term in the phase error does depend on λ. Thus our earlier conclusion that the gain and phase errors are independent of the system being simulated applies only for a sufficiently small integration step size h.

We now consider the performance of the state-transition method when using trapezoidal integration. In this case the definite integral in Eq. (6.6) is given by the formula (e^{λh}ƒn + ƒn+1)h/2 and the difference equation becomes

xn+1 = e^{λh} xn + (h/2)(e^{λh}ƒn + ƒn+1)    (6.19)

Based on this difference equation we obtain the following asymptotic formulas for the transfer function gain and phase errors:

eM ≅ (λ² – ω²)h²/12,   eA ≅ –λωh²/6    (6.20)

Note that the gain and phase errors are proportional to h² and depend on the characteristic root λ.
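This λ-dependence is visible numerically. The sketch below uses the trapezoidal update xn+1 = e^{λh}xn + (h/2)(e^{λh}ƒn + ƒn+1) and compares the exact fractional transfer function error with the order-h² expressions as reconstructed here, eM ≈ (λ² – ω²)h²/12 and eA ≈ –λωh²/6 (a consistency check, not an independent derivation):

```python
import cmath
import math

# Trapezoidal evaluation of the input integral for xdot = lam*x + f:
# x[n+1] = e^(lam h) x[n] + (h/2)(e^(lam h) f[n] + f[n+1]).
lam, w, h = -1.5, 0.8, 0.005
e = math.exp(lam * h)
z = cmath.exp(1j * w * h)
Hstar = (h / 2) * (e + z) / (z - e)
H = 1.0 / (1j * w - lam)
err = Hstar / H - 1.0
print(err.real, (lam**2 - w**2) * h**2 / 12)   # gain error and its h^2 estimate
print(err.imag, -lam * w * h**2 / 6)           # phase error and its h^2 estimate
```

Changing lam changes both errors, in contrast with the λ-independent results obtained below when the input itself is approximated analytically.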

We consider next a linear approximation to the input function ƒ(τ + nh) in Eq. (6.6) based on ƒn and ƒn+1. Thus we let

ƒ(τ + nh) ≅ ƒn + (ƒn+1 – ƒn)τ/h,   0 ≤ τ ≤ h    (6.21)

When the integral in Eq. (6.6) is evaluated in this case, the following difference equation is obtained:

xn+1 = e^{λh}xn + {[(λh – 1)e^{λh} + 1]/(λ²h)}ƒn + {(e^{λh} – 1 – λh)/(λ²h)}ƒn+1    (6.22)

From this difference equation we obtain the following approximate formulas for the transfer function gain and phase errors:

eM ≅ –(ωh)²/12,   eA ≅ 0    (6.23)

Here the gain error eM is of order h² and the phase error eA is zero to order h². Also in this case, where we have used an analytic representation of ƒ(t) in the evaluation of the definite integral, the dominant transfer function error for the state-transition method is independent of λ, i.e., independent of the system being simulated, and depends only on ωh. As noted earlier, this means that the transfer function errors given in Eq. (6.23) apply equally well when the state-transition method is used to simulate a linear system of any order and the input ƒ(t) is approximated as a linear function based on ƒn and ƒn+1.
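The linear-interpolation case can also be checked numerically. In the sketch below the two input-sample coefficients are obtained by integrating the linear interpolant against the kernel e^{λ(h – τ)} (our own derivation of those integrals); the resulting gain error should be close to –(ωh)²/12 and essentially independent of λ:

```python
import cmath
import math

# First-order-hold (f_n, f_{n+1}) evaluation of the input integral for
# xdot = lam*x + f.
lam, w, h = -1.0, 0.5, 0.02
e = math.exp(lam * h)
I0 = (e - 1.0) / lam                                  # weight of a constant input
I1 = I0 - (e * (lam * h - 1.0) + 1.0) / (lam**2 * h)  # weight of the tau/h ramp
b_n, b_np1 = I0 - I1, I1                              # coefficients of f_n, f_{n+1}
z = cmath.exp(1j * w * h)
Hstar = (b_n + b_np1 * z) / (z - e)
H = 1.0 / (1j * w - lam)
err = Hstar / H - 1.0
print(err.real, -(w * h)**2 / 12)   # gain error ~ -(wh)^2/12, independent of lam
print(abs(err.imag))                # phase error vanishes to order h^2
```

Note that in the limit λ → 0 these coefficients reduce to h/2 each, i.e., to ordinary trapezoidal integration of the input, which has exactly the –(ωh)²/12 integrator gain error.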

Table 6.1 summarizes a number of candidate formulas for approximating the input ƒ(t) over the interval h when using Eq. (6.6) to mechanize the simulation of a linear system. In each case the asymptotic formula for the transfer function gain and phase error is also listed. As noted earlier, use of a constant to approximate ƒ(t) results in a phase error of order h and a gain error which is zero to order h. From the table we see that use of a linear approximation for ƒ(t) results in a gain error proportional to h² and a phase error which is zero to order h². Finally, the use of a quadratic approximation for ƒ(t) results in a phase error proportional to h³ and a gain error which is zero to order h³. Note also that when ƒn+1 is used in the approximation formula for ƒ(t), the error coefficients are much smaller. If ƒ(t) is an external input in a real-time simulation, ƒn+1 is not available during the nth frame and the approximation for ƒ(t) must be based on ƒn and past values of ƒ. If both ƒ and ƒ̇ are state variables in the simulation, then ƒ̇n will be available at the start of the nth frame for use in the ƒ(t) approximation formulas shown in Table 6.1.

Table 6.1

Transfer Function Errors Using the State-transition Method

Comparison of the formulas in Table 6.1 for the transfer function errors with the individual integrator transfer function errors which are summarized in Table 3.1 of Section 3.12 shows that the formulas are identical for the same ƒ(t) approximation forms. For example, Table 3.1 shows that the AB-2 integrator transfer function is given by the following approximate formula:

H*I ≅ (1/jω)[1 + (5/12)(ωh)²]    (6.24)

Thus the AB-2 integrator has a gain error equal to (5/12)(ωh)². This is identical to the transfer function error shown in Table 6.1 when a linear approximation based on ƒn and ƒn-1 is used for the input ƒ(t). This result is not surprising, since the AB-2 algorithm also approximates the integrator input ƒ(t) over the interval from 0 to h with a linear representation based on ƒn and ƒn-1. Thus all of the single-pass formulas in Table 3.1 can be used to predict the transfer function errors when the state-transition method uses the same formula to approximate ƒ(t) in the definite integral of Eq. (6.6).

6.4 Application of the State-transition Method to a Linear Controller

We now consider an example application of the state-transition method. Assume that we wish to simulate a linear system given by the following transfer function:

(6.25)

Eq. (6.25) represents a typical bandwidth-limited proportional plus rate controller which would be used in a feedback control system. First we represent the system as the sum of two first-order systems. Thus we let

 

H(s) = A1/(1 + T1s) + A2/(1 + T2s)    (6.26)

Using Eq. (1.44), we obtain the following formulas for the constants A1 and A2:

(6.27)

Each of the first-order systems in Eq. (6.26) can be simulated by utilizing any of the state-transition formulations described in the previous section. The method which represents the input ƒ(t) as a linear time function based on ƒn and ƒn+1 will be used here. We let X1 and X2 be the state variables representing the outputs of linear systems with transfer functions given by A1/(1 + T1s) and A2/(1 + T2s), respectively. From Eq. (6.22) we see that the difference equations have the following form:

(6.28)

Noting that A/(1 + Ts) = –Aλ/(s – λ) when λ = -1/T, we obtain from Eqs. (6.22) and (6.27) the following formulas for the constants in Eq. (6.28):

(6.29)

From Eq. (6.26) it follows that the controller output X is simply the sum of the two state variables X1 and X2. Thus

Xn = X1n + X2n    (6.30)

Eqs. (6.28) and (6.30) are the difference equations that must be solved every integration step to simulate the controller represented by Eq. (6.25). From Eq. (6.23) we know that the approximate fractional error in transfer function gain for the resulting digital system is given by -(1/12)(ωh)², while the phase error to order h² is zero.
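The parallel (partial-fraction) mechanization can be sketched as follows. The code below is an illustration only: A1, T1, A2, T2 are made-up values, and the first-order-hold coefficients are obtained by integrating the linear input interpolant against each subsystem's kernel (our derivation). It checks that the combined frequency-response gain error is close to –(ωh)²/12:

```python
import cmath
import math

# Each first-order subsystem A/(1+T s) simulated with the state-transition
# method and a linear (f_n, f_{n+1}) input approximation.
def foh_first_order(A, T, w, h):
    lam = -1.0 / T
    e = math.exp(lam * h)
    I0 = (e - 1.0) / lam
    I1 = I0 - (e * (lam * h - 1.0) + 1.0) / (lam**2 * h)
    g = -A * lam                       # input gain, since A/(1+Ts) = -A*lam/(s - lam)
    z = cmath.exp(1j * w * h)
    return g * ((I0 - I1) + I1 * z) / (z - e)   # digital frequency response

A1, T1, A2, T2 = 2.0, 1.0, -1.0, 0.5   # hypothetical controller constants
w, h = 1.0, 0.002
Hd = foh_first_order(A1, T1, w, h) + foh_first_order(A2, T2, w, h)
Hc = A1 / (1 + 1j * w * T1) + A2 / (1 + 1j * w * T2)
err = Hd / Hc - 1.0
print(err.real, -(w * h)**2 / 12)   # combined gain error ~ -(wh)^2/12
```

Because each subsystem's order-h² error is the same λ-independent constant, the sum inherits it unchanged, which is why the parallel mechanization is as accurate as the direct second-order one.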

Next we consider the state-transition method applied directly to the second-order system represented by Eq. (6.25). In this case we introduce a transfer function H1(s) given by

H1(s) = 1/[(1 + T1s)(1 + T2s)]    (6.31)

Comparison with H(s) in Eq. (6.25) shows that

(6.32)

If we let ƒ(t) be the input and X1 be the output of the system represented by H1(s), then X1 obeys the differential equation

T1T2Ẍ1 + (T1 + T2)Ẋ1 + X1 = ƒ(t)    (6.33)

Letting the state variable X2 = Ẋ1, we can write from Eq. (6.32) the following equation for the controller output X:

(6.34)

For the second-order system considered here the state-transition matrix is a 2×2 matrix. To obtain the elements of the matrix, we must solve Eq. (6.33) with ƒ(t) = 0 and appropriate initial conditions. When ƒ(t) = 0, the resulting transient (complementary) solution has the form

X1(t) = C1e^{–t/T1} + C2e^{–t/T2}    (6.35)

Here C1 and C2 are constants determined by the initial conditions; λ1 and λ2 are the characteristic roots, i.e., the roots of the denominator of H1(s) in Eq. (6.31). It follows that

λ1 = –1/T1,   λ2 = –1/T2    (6.36)

Thus the general transient solution for X1 and X2 becomes

X1(t) = C1e^{–t/T1} + C2e^{–t/T2},   X2(t) = Ẋ1(t) = –(C1/T1)e^{–t/T1} – (C2/T2)e^{–t/T2}    (6.37)

First we obtain the solution for X1(0) = 1, X2(0) = 0. With these initial conditions, Eq. (6.37) gives us C1 + C2 = 1 and C1/T1 + C2/T2 = 0, from which C1 = T1/(T1 – T2) and C2 = T2/(T2 – T1). Thus Eq. (6.37) becomes

X1(t) = (T1e^{–t/T1} – T2e^{–t/T2})/(T1 – T2),   X2(t) = (e^{–t/T2} – e^{–t/T1})/(T1 – T2)    (6.38)

To obtain the solution for X1(0) = 0, X2(0) = 1, we follow the same procedure, which results in the following solutions:

X1(t) = T1T2(e^{–t/T1} – e^{–t/T2})/(T1 – T2),   X2(t) = (T1e^{–t/T2} – T2e^{–t/T1})/(T1 – T2)    (6.39)

In Eq. (6.38), X1(h) represents the response at t = h to a unit initial condition on X1. Thus X1(h) = w11 for the state-transition matrix W in the vector difference equation given by (6.3). Similarly, in Eq. (6.38) X2(h) = w21, the solution for X2 at t = h for a unit initial condition on X1. In the same way, in Eq. (6.39) X1(h) = w12 and X2(h) = w22. Thus the following formulas are obtained from Eqs. (6.38) and (6.39) for the elements of the 2×2 state-transition matrix of the linear system described by Eq. (6.33):

w11 = (T1e^{–h/T1} – T2e^{–h/T2})/(T1 – T2),   w12 = T1T2(e^{–h/T1} – e^{–h/T2})/(T1 – T2),
w21 = (e^{–h/T2} – e^{–h/T1})/(T1 – T2),   w22 = (T1e^{–h/T2} – T2e^{–h/T1})/(T1 – T2)    (6.40)

We now turn to the calculation of the definite integral in Eq. (6.3), where we will use the same analytic approximation to the input ƒ(t) that was employed earlier in this section. Thus we represent ƒ(t) with the linear formula in Eq. (6.21). This, along with the formulas in Eq. (6.40) for the state-transition matrix elements, permits us to determine analytically the definite integral in Eq. (6.3). Alternatively, based on Eq. (6.21), we see that the definite integral represents the response to a step input of amplitude ƒn and a ramp input of amplitude (ƒn+1 – ƒn)/h. If U(t) is the unit step response of the system and V(t) is the unit ramp response, then it follows from the superposition principle for linear systems that the definite integral is equal to U(h)ƒn + V(h)(ƒn+1 – ƒn)/h. The unit step response is in turn easily obtained from Eq. (6.33) by noting that the steady-state response to a unit step input, ƒ(t) = 1, is simply X1 = 1. Adding this to the transient solution of Eq. (6.35) and solving for the constants C1 and C2 using the initial conditions X1(0) = X2(0) = 0, we obtain the following solution for U(t):

U(t) = 1 – (T1e^{–t/T1} – T2e^{–t/T2})/(T1 – T2)    (6.41)

Since a unit ramp function is the time integral of a unit step function, the unit ramp response, V(t), will be the time integral of the unit step response, U(t). Integrating Eq. (6.41) and choosing the integration constant such that V(0) = 0, we obtain

V(t) = t – (T1 + T2) + (T1²e^{–t/T1} – T2²e^{–t/T2})/(T1 – T2)    (6.42)
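As a quick consistency check on these step and ramp responses (using the closed forms as reconstructed here, with arbitrary illustrative values of T1 and T2), one can verify numerically that V̇ = U and that U satisfies the second-order differential equation with a unit step input:

```python
import math

# Step response U and ramp response V of T1*T2*x'' + (T1 + T2)*x' + x = f.
T1, T2 = 1.0, 0.25

def U(t):
    return 1.0 - (T1 * math.exp(-t / T1) - T2 * math.exp(-t / T2)) / (T1 - T2)

def V(t):
    return t - (T1 + T2) + (T1**2 * math.exp(-t / T1)
                            - T2**2 * math.exp(-t / T2)) / (T1 - T2)

# central-difference checks at an interior point
t, d = 0.7, 1e-5
Vdot = (V(t + d) - V(t - d)) / (2 * d)
Udot = (U(t + d) - U(t - d)) / (2 * d)
Uddot = (U(t + d) - 2 * U(t) + U(t - d)) / d**2
res_ode = T1 * T2 * Uddot + (T1 + T2) * Udot + U(t) - 1.0   # should vanish (f = 1)
print(abs(Vdot - U(t)), abs(res_ode))
```

Both residuals are at finite-difference noise level, confirming that V is the time integral of U and that U is indeed the unit step response.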

For the second-order system represented by Eq. (6.33), with Eq. (6.21) used to approximate the input ƒ(t) over each frame, the state-transition equations take the form

X1,n+1 = w11X1n + w12X2n + b11ƒn + b12ƒn+1
X2,n+1 = w21X1n + w22X2n + b21ƒn + b22ƒn+1    (6.43)

We have noted above that the term b11ƒn + b12ƒn+1, which represents the definite integral in the first of the two state equations, can be expressed in terms of the unit step and unit ramp response functions with the following formula:

b11ƒn + b12ƒn+1 = U(h)ƒn + V(h)(ƒn+1 – ƒn)/h    (6.44)

Since X2 = Ẋ1, it follows from the second of the two state equations in (6.43) that

b21ƒn + b22ƒn+1 = U̇(h)ƒn + U(h)(ƒn+1 – ƒn)/h    (6.45)

Here we have replaced V̇ with U. Substituting Eqs. (6.41) and (6.42) into Eqs. (6.44) and (6.45), we obtain the following formulas for b11, b12, b21 and b22:

b11 = U(h) – V(h)/h    (6.46)

b12 = V(h)/h    (6.47)

b21 = U̇(h) – U(h)/h    (6.48)

b22 = U(h)/h    (6.49)

From Eq. (6.34) we see that the formula used to compute the final controller output X at frame n + 1 is given by

(6.50)

Eqs. (6.43) and (6.50) are the difference equations for simulating the controller when the state-transition method is applied directly to the second-order system. Comparison with Eqs. (6.28) and (6.30), which are based on representing the controller as the sum of two first-order systems, shows that three additional multiplications and two additional summations are required when Eqs. (6.43) and (6.50) are used. In the case of an Nth-order linear system with distinct real characteristic roots (no repeated roots), we can represent the system as the sum of N first-order systems, as noted in Eq. (1.43) in Chapter 1. When the individual first-order systems are simulated using the state-transition method, with the first-order system outputs summed to obtain the final output, the state-transition matrix will have only diagonal elements. In this case the computation of WXn in Eq. (6.3) requires N multiplications. On the other hand, if the Nth-order system is simulated directly, with the final output and its N – 1 derivatives as the state variables, the state-transition matrix will in general contain the full N² elements. Calculation of the term WXn in this case requires N² multiplications plus N² – N additions. The two mechanizations will yield identical results, yet the second mechanization will require a much longer execution time when simulating a high-order linear system.

If the system being simulated has any complex roots, the subsystem corresponding to each complex root pair will be second order. In this case the overall Nth-order system can be represented as the sum of S second-order subsystems and N – 2S first-order subsystems, where S is the number of complex root pairs. The state-transition matrix will have 2S off-diagonal elements and the calculation of WXn will require N + 2S multiplications and 2S additions. In the next section we derive the formulas for the state-transition coefficients for the case of complex characteristic roots.

It should be noted that an Nth-order linear system can also be represented as a serial combination of first and second-order subsystems. For example, the transfer function of Eq. (6.31) can be written as the product of two transfer functions, 1/(1 + T1s) and 1/(1 + T2s). Then the output of the first subsystem becomes the input to the second subsystem, and each first-order subsystem can be simulated using the state-transition method. This is another way to reduce the computational requirements associated with the simulation of high-order linear systems using the state-transition method. However, it has the disadvantage of exhibiting larger overall transfer function gain and phase errors, since the errors associated with each of the low-order subsystem simulations will be additive. Thus each of the first-order systems, when connected in series to represent the second-order transfer function given by Eq. (6.31), will have an asymptotic gain error equal to -(ωh)²/12 when the input ƒ(t) is represented by a linear approximation based on ƒn and ƒn+1. The net result will be an overall transfer function error of -(ωh)²/6, which is twice the error exhibited by the state-transition method when simulating the system as a single, second-order system, or as the sum of two first-order systems.

6.5 The State-transition Method in the Case of Complex Roots

In the controller example of the previous section we developed formulas for the difference-equation coefficients when applying the state-transition method to the simulation of a second-order system with real characteristic roots. In this section we determine the coefficients for a second-order system with complex roots. In terms of the undamped natural frequency ωn and the damping ratio ζ, the system differential equation can be written as

Ẍ + 2ζωnẊ + ωn²X = ωn²ƒ(t)    (6.51)

The characteristic roots for ζ < 1 are given by the formula

λ1, λ2 = –ζωn ± jωn√(1 – ζ²)    (6.52)

With ƒ(t) = 0 in Eq. (6.51), the transient solution takes the form

X(t) = e^{–ζωnt}(C1 cos ωdt + C2 sin ωdt),   where ωd = ωn√(1 – ζ²)    (6.53)

where C1 and C2 are constants. We let the state variables X1 and X2 be defined as X1 = X and X2 = Ẋ in Eq. (6.51). As in the previous section, we first determine C1 and C2 for the initial conditions X1(0) = 1, X2(0) = 0. The resulting solutions for X1(h) and X2(h) correspond to w11 and w21, respectively. In the same way, for initial conditions X1(0) = 0, X2(0) = 1, the solutions for X1(h) and X2(h) correspond to w12 and w22, respectively. In this manner we obtain the following formulas for the elements of the 2×2 state-transition matrix when ζ < 1:

w11 = e^{–ζωnh}[cos ωdh + (ζ/√(1 – ζ²)) sin ωdh]    (6.54)

w12 = (1/ωd) e^{–ζωnh} sin ωdh    (6.55)

w21 = –(ωn²/ωd) e^{–ζωnh} sin ωdh    (6.56)

w22 = e^{–ζωnh}[cos ωdh – (ζ/√(1 – ζ²)) sin ωdh],   where ωd = ωn√(1 – ζ²)    (6.57)

Next we determine the unit step and unit ramp response functions in order to be able to construct the definite integral formulas when first-order analytic approximations to the input ƒ(t) are utilized in the state-transition method. Consider first the unit step response, U(t). For ƒ(t) = 1 in Eq. (6.51), it is evident that the steady-state response X(t) = 1. Adding this to the transient solution of Eq. (6.53) and solving for the constants C1 and C2 with zero initial conditions, we obtain the following equation for the unit step response of the second-order system for ζ < 1:

U(t) = 1 – e^{–ζωnt}[cos ωdt + (ζ/√(1 – ζ²)) sin ωdt],   ωd = ωn√(1 – ζ²)    (6.58)

The unit ramp response will simply be the time integral of the unit step response. Alternatively, for a unit ramp input, ƒ(t) = t, it is easy to show by substitution into Eq. (6.51) that X(t) = t – 2ζ/ωn is the steady-state solution. Adding this to the transient solution of Eq. (6.53) and solving for the constants C1 and C2 with zero initial conditions, we obtain the following equation for the unit ramp response of the second-order system for ζ < 1:

V(t) = t – 2ζ/ωn + e^{–ζωnt}[(2ζ/ωn) cos ωdt + ((2ζ² – 1)/ωd) sin ωdt]    (6.59)

If we use the first-order analytic approximation to the input ƒ(t) based on ƒn and ƒn+1, as given in Eq. (6.21), then the difference equations for the state-transition method take the form shown in Eq. (6.43). Eqs. (6.44) and (6.45) represent the definite integral terms, from which we can write the following formulas for the coefficients of ƒn and ƒn+1:

b11 = U(h) – V(h)/h,   b12 = V(h)/h,   b21 = U̇(h) – U(h)/h,   b22 = U(h)/h    (6.60)

Here U̇(t) is obtained by differentiating Eq. (6.58) for U(t). Thus we have

U̇(t) = (ωn²/ωd) e^{–ζωnt} sin ωdt    (6.61)

From Eqs. (6.58) through (6.61) we can now determine the formulas for calculating the coefficients b11, b12, b21 and b22 for the underdamped second-order system considered in this section. Eqs. (6.54) through (6.57) give the formulas for the coefficients w11, w12, w21 and w22.
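For the complex-root case the transition elements can equivalently be written directly in terms of the roots λ1 and λ2; taking real parts recovers the damped cosine/sine forms above. The sketch below (our own formulation, with illustrative parameter values) builds the 2×2 matrix that way and verifies the defining semigroup property W(h)W(h) = W(2h):

```python
import cmath

# 2x2 state-transition elements for x'' + 2*z*wn*x' + wn^2*x = 0, written via
# the complex roots l1, l2; .real recovers the damped-sinusoid expressions.
def W(t, wn, z):
    l1 = -z * wn + 1j * wn * cmath.sqrt(1 - z * z)
    l2 = -z * wn - 1j * wn * cmath.sqrt(1 - z * z)
    e1, e2 = cmath.exp(l1 * t), cmath.exp(l2 * t)
    d = l1 - l2
    return [[((l1 * e2 - l2 * e1) / d).real, ((e1 - e2) / d).real],
            [(l1 * l2 * (e2 - e1) / d).real, ((l1 * e1 - l2 * e2) / d).real]]

wn, z, h = 2.0, 0.3, 0.1
Wh, W2h = W(h, wn, z), W(2 * h, wn, z)
# semigroup check: stepping twice with W(h) equals stepping once with W(2h)
prod = [[sum(Wh[i][k] * Wh[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(max(abs(prod[i][j] - W2h[i][j]) for i in range(2) for j in range(2)))
```

This is a convenient unit test for any hand-coded implementation of Eqs. (6.54) through (6.57), since the semigroup property holds exactly for the true matrix exponential.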

If our first-order linear approximation to ƒ(t) is based on ƒn and ƒn-1 instead of ƒn and ƒn+1, as would of necessity be the case when ƒ(t) represents an external input in a real-time simulation, then Table 6.1 shows that the following equation is used to approximate ƒ(t):

ƒ(τ + nh) ≅ ƒn + (ƒn – ƒn-1)τ/h    (6.62)

In this case the state-transition difference equation takes the form

X1,n+1 = w11X1n + w12X2n + b11ƒn + b12ƒn-1
X2,n+1 = w21X1n + w22X2n + b21ƒn + b22ƒn-1    (6.63)

From Eq. (6.62) it is easily seen that the formulas for the coefficients b11, b12, b21 and b22, in terms of the unit step and unit ramp response functions, in this case become

b11 = U(h) + V(h)/h,   b12 = –V(h)/h,   b21 = U̇(h) + U(h)/h,   b22 = –U(h)/h    (6.64)

If the input ƒ(t) is a state variable in the simulation, then the derivative ƒ̇n is available and another first-order approximation to the input is given by

ƒ(τ + nh) ≅ ƒn + ƒ̇nτ    (6.65)

For this case the state-transition equations have the form

X1,n+1 = w11X1n + w12X2n + b11ƒn + b12ƒ̇n
X2,n+1 = w21X1n + w22X2n + b21ƒn + b22ƒ̇n    (6.66)

In terms of unit step and ramp response functions, the coefficients b11, b12, b21 and b22 become

b11 = U(h),   b12 = V(h),   b21 = U̇(h),   b22 = U(h)    (6.67)

To develop the difference-equation coefficients for the quadratic approximation formulas for ƒ(t) given in Table 6.1 we need only add the derivation of the unit step acceleration response, A(t), to the formulas already derived for the unit step response, U(t), and the unit ramp response, V(t). For example, if we use a quadratic approximation to ƒ(t) based on ƒn, ƒn-1 and ƒn-2, the state-transition difference equations have the following form:

X1,n+1 = w11X1n + w12X2n + b11ƒn + b12ƒn-1 + b13ƒn-2
X2,n+1 = w21X1n + w22X2n + b21ƒn + b22ƒn-1 + b23ƒn-2    (6.68)

Based on the formula in Table 6.1 for the quadratic approximation to ƒ(t) that utilizes ƒn, ƒn-1 and ƒn-2 we obtain the following formulas for the coefficients b11 through b23:

b11 = U(h) + 3V(h)/(2h) + A(h)/h²,   b12 = –2V(h)/h – 2A(h)/h²,   b13 = V(h)/(2h) + A(h)/h²
b21 = U̇(h) + 3U(h)/(2h) + V(h)/h²,   b22 = –2U(h)/h – 2V(h)/h²,   b23 = U(h)/(2h) + V(h)/h²    (6.69)

In the second equation of (6.69) we have replaced V̇ with U and Ȧ with V. The unit step acceleration response can be determined by taking the time integral of the unit ramp response. Alternatively, we can derive the unit step acceleration response by noting that for ƒ(t) = t²/2, the steady-state solution, X(t) = (4ζ² – 1)/ωn² – (2ζ/ωn)t + t²/2, satisfies Eq. (6.51). Adding this to the transient solution of Eq. (6.53) and solving for the constants C1 and C2 with zero initial conditions, we obtain the following equation for the unit step acceleration response of the underdamped second-order system:

A(t) = t²/2 – (2ζ/ωn)t + (4ζ² – 1)/ωn² + e^{–ζωnt}[((1 – 4ζ²)/ωn²) cos ωdt + (ζ(3 – 4ζ²)/(ωnωd)) sin ωdt]    (6.70)

The above equation for A(t), along with U(t) in Eq. (6.58), V(t) in Eq. (6.59), and U̇(t) in Eq. (6.61), can be used in formulas similar to those in Eq. (6.69) to compute the appropriate coefficients for any of the other three quadratic ƒ(t) approximations listed in Table 6.1.

To obtain A(t) for the overdamped second-order system of the previous section we note from Eq. (6.33) that the steady-state response for a unit acceleration input is given by X(t) = T1² + T1T2 + T2² – (T1 + T2)t + t²/2. Adding this to the transient solution of Eq. (6.35) and solving for C1 and C2 with zero initial conditions, we obtain

A(t) = t²/2 – (T1 + T2)t + T1² + T1T2 + T2² – (T1³e^{–t/T1} – T2³e^{–t/T2})/(T1 – T2)    (6.71)

This equation for A(t), along with U(t) in Eq. (6.41) and V(t) in Eq. (6.42), can now be used to determine state-transition difference equation coefficients when using any of the quadratic formulas in Table 6.1 as an analytic approximation for ƒ(t).

6.6 The State-transition Method in the Case of Repeated Roots

In this section we consider a second-order system with repeated roots, i.e., one where both characteristic roots are the same. This corresponds to ζ = 1, in which case Eq. (6.52) shows that both characteristic roots are equal to –ωn. When this occurs, the transient solution takes the form

X = (C1 + C2t)e^{–ωnt}    (6.72)

As in the previous two sections, we determine the elements of the state-transition matrix by obtaining the solutions for X1(h) and X2(h) with initial conditions X1(0) = 1, X2(0) = 0, and X1(0) = 0, X2(0) = 1. In this way we obtain from Eq. (6.51) with ζ = 1 and ƒ(t) = 0 the following formulas:

w11 = (1 + ωnh)e^{–ωnh},   w12 = he^{–ωnh},   w21 = –ωn²he^{–ωnh},   w22 = (1 – ωnh)e^{–ωnh}    (6.73)

The above formulas can also be obtained by letting ζ approach 1 in Eqs. (6.54) through (6.57) for the underdamped second-order system. Equations for the unit impulse, step, ramp and acceleration responses can be derived using the same methods employed in the previous section. Or they can be obtained from Eqs. (6.58), (6.59), (6.61) and (6.70) by letting ζ approach 1. The following formulas are obtained:

U(t) = 1 – (1 + ωnt)e^{–ωnt}    (6.74)

V(t) = t – 2/ωn + (t + 2/ωn)e^{–ωnt}    (6.75)

U̇(t) = ωn²t e^{–ωnt}    (6.76)

A(t) = t²/2 – 2t/ωn + 3/ωn² – (t/ωn + 3/ωn²)e^{–ωnt}    (6.77)

With the formulas in Eqs. (6.73) through (6.77) we can determine the coefficients in the state-transition difference equations using analytic approximations up to second-order for the input ƒ(t). The procedure to do this for the repeated-root system considered in this section is the same as that presented in the previous section for the underdamped second-order system.
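The repeated-root transition elements can be checked the same way as in the underdamped case. The sketch below (our own check, with an arbitrary ωn) verifies by finite differences that the two columns of the transition matrix, which start from the unit initial conditions, satisfy the homogeneous equation ẍ + 2ωnẋ + ωn²x = 0:

```python
import math

# Repeated-root (zeta = 1) transition elements, here taken as
# w11 = (1 + wn t)e^(-wn t) and w12 = t e^(-wn t).
wn, t, d = 3.0, 0.4, 1e-5

def w11(t): return (1.0 + wn * t) * math.exp(-wn * t)
def w12(t): return t * math.exp(-wn * t)

def ode_residual(f, t):
    # residual of x'' + 2*wn*x' + wn^2*x = 0 via central differences
    fp = (f(t + d) - f(t - d)) / (2 * d)
    fpp = (f(t + d) - 2 * f(t) + f(t - d)) / d**2
    return fpp + 2 * wn * fp + wn * wn * f(t)

r1, r2 = ode_residual(w11, t), ode_residual(w12, t)
print(abs(r1), abs(r2))                # both at finite-difference noise level
print(w11(0.0), w12(0.0))              # unit initial conditions: 1 and 0
```

The remaining elements follow as w21 = ẇ11 and w22 = ẇ12, so the same residual check applies to them.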

In Section 6.4 we considered the simulation of a linear controller. The second-order controller transfer function in Eq. (6.25) was represented in Eq. (6.26) as the sum of two first-order transfer functions. The state-transition method was then used to simulate each first-order system. However, when the two controller time constants, T1 and T2, are equal, the second-order transfer function can no longer be represented as the sum of two first-order transfer functions. This is apparent in the formulas of Eq. (6.27). In this case the controller must be simulated directly as a second-order system. When T1 = T2, both characteristic roots are the same, and the formulas developed in this section for repeated roots when using the state-transition method can be used. Note also that if the two characteristic roots of the controller are complex, the controller must also be simulated directly as a second-order system. In this case the state-transition formulation developed in Section 6.5 applies.

After introducing and analyzing Tustin’s method in the next section, we will make some specific transfer function accuracy comparisons with the state-transition method. This will enable us to better understand some of the relative advantages and disadvantages of these two schemes in simulating linear systems.

6.7 Tustin’s Method

We now turn to the consideration of Tustin’s substitution method for the simulation of linear systems.* Actually, the method is simply trapezoidal integration, which we have already analyzed in Section 3.5 of Chapter 3 as one of the single-pass implicit integration algorithms. From Eq. (3.81) we have the following formula for the Z transform of the trapezoidal integrator:

H*I(z) = (h/2)(z + 1)/(z − 1)        (6.78)

As noted in Section 3.3, H*I(z) is the digital equivalent of 1/s in the transfer function for the continuous linear system. Thus 1/H*I(z) is equivalent to s. For single-pass integration algorithms, such as implicit trapezoidal integration, this leads directly to the substitution formula given earlier in Eq. (3.41), i.e.,

s ≅ 1/H*I(z) = (2/h)(z − 1)/(z + 1)        (6.79)

From Eq. (6.78) this results in Tustin’s substitution formula for determining the Z transform of the digital system used to simulate a linear system with transfer function H(s). Thus

H*(z) = H(s)|s = (2/h)(z − 1)/(z + 1)        (6.80)
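As a concrete illustration (not taken from the text), applying Tustin's substitution to a simple first-order lag H(s) = 1/(1 + Ts) gives the difference equation xn+1 = [(2T − h)xn + h(ƒn+1 + ƒn)]/(2T + h). A minimal Python sketch, checking the unit-step response against the exact solution 1 − e^(−t/T):

```python
import math

def tustin_first_order_lag(T, h, f):
    """Simulate x' = (f - x)/T, i.e. H(s) = 1/(1 + T*s), via Tustin's
    substitution s -> (2/h)(z - 1)/(z + 1), which yields the difference
    equation x[n+1] = ((2T - h)*x[n] + h*(f[n+1] + f[n])) / (2T + h)."""
    x = [0.0]
    for n in range(len(f) - 1):
        x.append(((2*T - h)*x[-1] + h*(f[n + 1] + f[n])) / (2*T + h))
    return x

# Unit-step response compared with the exact solution 1 - exp(-t/T).
T, h, N = 1.0, 0.05, 200
x = tustin_first_order_lag(T, h, [1.0]*(N + 1))
max_err = max(abs(xn - (1.0 - math.exp(-n*h/T))) for n, xn in enumerate(x))
print(max_err)
```

For h = 0.05T the maximum step-response error is on the order of 10⁻⁴, consistent with the second-order accuracy of trapezoidal integration.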

When we apply this formula to the controller transfer function given in Eq. (6.25), we obtain the

*See Smith, Jon M., Mathematical Modeling and Digital Simulation for Engineers and Scientists, John Wiley & Sons, 1977.

following Z transform:

(6.81)

Based on Eq. (6.81) the difference equation for the controller output Xn+1 takes the following form:

(6.82)

where the coefficients are given by the following formulas:

(6.83)

where

(6.84)

The difference equation in (6.82) requires 5 multiplications and 4 additions per integration step. When we simulate the controller as the sum of two first-order systems using the state-transition method, Eqs. (6.28) and (6.30) require 6 multiplications and 5 additions. When the controller is simulated as a single second-order system using the state-transition method, Eqs. (6.43) and (6.50) require 9 multiplications and 7 additions. With either mechanization it is clear that the state-transition method will require more processing time than Tustin’s method in simulating the controller.

Next we compare the dynamic accuracy of the two methods. For Tustin’s method (i.e., trapezoidal or second-order implicit integration) we see from Table 3.1 that the integrator error coefficient, eI, is equal to -1/12. Because the order k of the method is equal to 2, Eq. (3.45) shows that the fractional error in the characteristic root, when the root λ is real and equal to -1/T, is given by (h/T)2/12. Since the two roots of the controller, -1/T1 and -1/T2, both have the same form, we are able to determine the root errors as a function of the integration step size h. By definition, on the other hand, the state-transition method exhibits no error in its equivalent characteristic roots.
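The root-error result just quoted is easy to verify numerically. The sketch below (an illustration, not from the text) forms the Tustin root z = (1 + λh/2)/(1 − λh/2) for λ = −1/T, converts it to the equivalent continuous root ln(z)/h, and compares the fractional error with (h/T)²/12:

```python
import math

# Fractional error in the equivalent characteristic root when Tustin's
# method (trapezoidal integration) simulates a real root lam = -1/T.
# The discrete root is z = (1 + lam*h/2)/(1 - lam*h/2); the equivalent
# continuous root is ln(z)/h, and its fractional error should approach
# (h/T)**2 / 12 as h -> 0.
T = 1.0
lam = -1.0 / T
for h in (0.2, 0.1, 0.05):
    z = (1 + lam*h/2) / (1 - lam*h/2)
    lam_star = math.log(z) / h
    frac_err = (lam_star - lam) / lam
    print(h, frac_err, (h/T)**2 / 12)
```

As h decreases, the computed fractional error converges to the asymptotic value (h/T)²/12.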

But in simulating a controller, dynamic accuracy of the sinusoidal transfer function is probably a more significant error measure. When using the state-transition method with a first-order analytic approximation to the input ƒ(t) based on ƒn and ƒn+1, we see from Table 6.1 that the fractional gain error is given approximately by -(ωh)2/12, whereas to order h2 the phase error of the overall transfer function is zero. In the case of Tustin’s method, for which k = 2 and eI = -1/12, we see from Eq. (3.48) that the gain and phase errors in simulating a first-order system with the transfer function 1/(1 + Tqs) are given, respectively, by

eM ≅ −[(ωh)²/12][(ωTq)²/(1 + (ωTq)²)],    eA ≅ −[(ωh)²/12][ωTq/(1 + (ωTq)²)]        (6.85)

To obtain the gain and phase errors for the simulation of the overall controller transfer function represented by Eq. (6.25), we utilize Eqs. (3.69), (3.70) and (6.85) with Tq equal to T1, T2, and Ce for q = 1, 2 and 3, respectively. This results in the following formulas:

eM ≅ [(ωh)²/12][(ωCe)²/(1 + (ωCe)²) − (ωT1)²/(1 + (ωT1)²) − (ωT2)²/(1 + (ωT2)²)],
eA ≅ [(ωh)²/12][ωCe/(1 + (ωCe)²) − ωT1/(1 + (ωT1)²) − ωT2/(1 + (ωT2)²)]        (6.86)

In order to compare the above gain and phase errors for Tustin’s method with those for the state-transition method, we consider a specific case. Let T1 = T2 = 0.1Ce. For these parameters the transfer function H(jω) in Eq. (6.25) represents a bandwidth-limited controller which exhibits a maximum phase lead of 40.8 degrees when ωCe = 2.1. Figure 6.1 shows plots of the gain and phase errors when normalized through division by (ωh)2 versus the dimensionless frequency ωCe for Tustin’s method based on Eq. (6.86) and for the state-transition method. Note that the state-transition method has zero phase error to order h2, which is not the case for Tustin’s method.

Figure 6.1. Normalized gain and phase errors versus ωCe for controller transfer function

simulated by Tustin’s method and the state-transition method.

Figure 6.1 shows that the normalized gain error of -1/12 (= -0.0833) for the state-transition method represents a larger magnitude than the normalized gain error for Tustin’s method. This is especially true for the lower frequencies, i.e., ω < 1/Ce. It is only at high frequencies, ω ≫ 1/Ce, that the gain error coefficient for Tustin’s method approaches that of the state-transition method. Note that the normalized phase error for Tustin’s method approaches zero for very high frequencies.

The behavior of the gain and phase errors for Tustin’s method as a function of frequency can be easily understood by examining the terms in the asymptotic error formulas of Eq. (6.86). For very low frequencies, that is, ω ≪ 1/Ce in the first term and ω ≪ 1/T1, ω ≪ 1/T2 in the second and third terms on the right side, the gain error coefficient is proportional to ω2. Thus the lower the frequency, the smaller the normalized gain error, eM/(ωh)2, which is clearly evident in Figure 6.1. Over the same low-frequency regime, reference to Eq. (6.86) shows that the normalized phase error, eA/(ωh)2, is proportional to ω. Again this is confirmed in the figure. On the other hand, as noted above, the normalized gain error for the state-transition method is a constant, -1/12, independent of the frequency ω. We conclude that the gain error in the low-frequency regime when using Tustin’s method is significantly less than the gain error when using the state-transition method, more than enough to offset the small phase error associated with Tustin’s method.

The question naturally arises as to why the state-transition method, which is based on an exact analytic solution to the linear differential equation, yields poorer transfer function accuracy than Tustin’s method (trapezoidal integration) at lower frequencies. The answer lies in the examination of the general state-transition difference equation in Eq. (6.3), or its specific form in Eq. (6.6) for the simulation of a first-order system. When the frequency of the input sinusoid is well below the bandpass frequency of the system, the derivative of the state-variable output is low in magnitude, resulting from the small difference of two larger terms. Under these conditions, for example, the term λx on the right side of Eq. (6.4) almost cancels the input term ƒ(t) for input frequencies much lower than 1/|λ|. When we solve Eq. (6.4) numerically using a standard integration method, such as trapezoidal integration, the small difference is being integrated directly. Any integration truncation errors apply directly to this small difference. On the other hand, only the input ƒ(t) and not the state x appears in the integrand for the state-transition difference equation given by (6.6). In this case the truncation errors apply directly to the full input ƒ(t) , which results in a significantly larger integration truncation error in the final solution of the difference equation. Put simply, the state-transition method integrates first and then computes the small difference which is added to xn to produce xn+1, whereas conventional methods compute the small difference and then integrate.

We recall that the formulas given by Eq. (6.86) and the normalized error plots in Figure 6.1 are all based on the assumption that the integration step size is small enough to maintain fractional gain error and phase error magnitudes that are themselves small compared with unity. The exact fractional gain and phase errors for the transfer function can, of course, be computed for any frequency ω and step size h by evaluating H*(ejωh) based on the difference-equation Z transform, H*(z). When this is done, the results agree surprisingly well with the asymptotic formulas, even for ωh values approaching unity. Thus the approximate formulas can be used with some confidence for most practical simulations.
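This agreement can be checked for a single first-order lag H(s) = 1/(1 + Ts), an assumed example; the asymptotic forms used below follow from the Tustin frequency-warping relation ω′ = (2/h)tan(ωh/2). The sketch evaluates H*(e^jωh) exactly from the difference-equation Z transform and compares the resulting gain and phase errors with the asymptotic first-order-lag formulas, for ωh as large as unity:

```python
import cmath

# Exact vs. asymptotic transfer-function errors for Tustin simulation of
# an assumed first-order lag H(s) = 1/(1 + T*s).  The digital transfer
# function follows from the difference-equation Z transform:
#   H*(z) = h (z + 1) / (h (z + 1) + 2 T (z - 1)).
def errors(T, h, w):
    z = cmath.exp(1j * w * h)
    Hstar = h * (z + 1) / (h * (z + 1) + 2 * T * (z - 1))
    H = 1 / (1 + 1j * w * T)
    ratio = Hstar / H
    return abs(ratio) - 1, cmath.phase(ratio)   # gain error, phase error

T = 1.0
w = 2.0 / T
for h in (0.5, 0.25, 0.1):                      # w*h = 1.0, 0.5, 0.2
    eM, eA = errors(T, h, w)
    wT = w * T
    eM_asym = -(w*h)**2 / 12 * wT**2 / (1 + wT**2)
    eA_asym = -(w*h)**2 / 12 * wT / (1 + wT**2)
    print(h, eM, eM_asym, eA, eA_asym)
```

Even at ωh = 1 the exact and asymptotic errors agree to within a few percent, illustrating the point made above.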

It should be noted that Eq. (6.82) for simulation of our bandwidth-limited proportional-plus-rate controller utilizes a single difference equation to simulate the second-order system represented by the transfer function of Eq. (6.25). This is why Eq. (6.82) includes terms in Xn-1 and ƒn-1 in addition to terms in Xn and ƒn. In this case Xn and Xn-1 are the state variables at time tn. An alternative procedure is to utilize Eqs. (6.33) and (6.34), from which the state equations for the continuous system can be written as

(6.87)

The difference equations for trapezoidal integration (Tustin’s method) then become

(6.88)

(6.89)
Eq. (6.88) for X1n+1 can be substituted into Eq. (6.89) to obtain an explicit formula for X2n+1. In this way we obtain the following equations:

(6.90)

where

(6.91)

and

(6.92)

Eqs. (6.88), (6.90) and (6.92) constitute the difference equations for simulating our bandwidth-limited proportional-plus-rate controller using trapezoidal integration. They require a total of 6 additions and 5 multiplications per integration step, compared with the 4 additions and 5 multiplications needed to solve the single difference equation given earlier in the mechanization represented by Eq. (6.82). However, Eqs. (6.88), (6.90) and (6.92) only involve states and the input at time tn, which eliminates the startup problems associated with Eq. (6.82). In summary, we have seen that Tustin’s substitution method can be used to obtain a single difference equation for the simulation of a linear system using trapezoidal integration or, alternatively, explicit difference equations for each of the first-order state equations can be derived and utilized for the simulation, which removes any startup problems.
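The elimination step described above can be written compactly for any two-state linear system. The following sketch is generic (the specific coefficients of Eqs. (6.88) through (6.92) are not reproduced here): it performs one trapezoidal step of x′ = Ax + bƒ by solving the implicit relation with an explicit 2×2 inverse, and checks the result on a cascade of two unit lags with an assumed unit-step input:

```python
import math

# Generic trapezoidal (Tustin) update for a two-state linear system
# x' = A x + b f.  Solving the implicit trapezoidal relation
#   x[n+1] = x[n] + (h/2)(A x[n+1] + b f[n+1] + A x[n] + b f[n])
# for x[n+1] gives
#   (I - (h/2)A) x[n+1] = (I + (h/2)A) x[n] + (h/2) b (f[n] + f[n+1]).
def trapezoidal_step(A, b, x, fn, fnp1, h):
    # Right-hand side: (I + (h/2)A) x + (h/2) b (fn + fnp1)
    r0 = x[0] + h/2*(A[0][0]*x[0] + A[0][1]*x[1]) + h/2*b[0]*(fn + fnp1)
    r1 = x[1] + h/2*(A[1][0]*x[0] + A[1][1]*x[1]) + h/2*b[1]*(fn + fnp1)
    # Solve (I - (h/2)A) x_new = r with an explicit 2x2 inverse.
    m00, m01 = 1 - h/2*A[0][0], -h/2*A[0][1]
    m10, m11 = -h/2*A[1][0], 1 - h/2*A[1][1]
    det = m00*m11 - m01*m10
    return ((m11*r0 - m01*r1)/det, (-m10*r0 + m00*r1)/det)

# Example: x1' = -x1 + f, x2' = x1 - x2 (a cascade of two unit lags).
A = ((-1.0, 0.0), (1.0, -1.0))
b = (1.0, 0.0)
h, N = 0.01, 100
x = (0.0, 0.0)
for n in range(N):
    x = trapezoidal_step(A, b, x, 1.0, 1.0, h)   # unit-step input
t = N*h
exact = (1 - math.exp(-t), 1 - (1 + t)*math.exp(-t))  # exact step response
print(x, exact)
```

Because all quantities on the right-hand side are available at time tn (plus the input sample ƒn+1), no special startup values are needed.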

We should also recall that trapezoidal integration (Tustin’s method) requires both ƒn+1 and ƒn as inputs. This is why, for comparison purposes, we have also used the state-transition method based on ƒn+1 and ƒn for the results presented in Figure 6.1. As noted in an earlier section, if ƒ(t) is an external real-time input, then ƒn+1 is not available for calculations during the nth frame in a real-time simulation. For the state-transition method in this real-time case we can employ the algorithm in Table 6.1 which uses ƒn and ƒn-1 instead of ƒn+1 and ƒn. Note that this increases the transfer function error magnitude by a factor of 5. To utilize trapezoidal integration when ƒ(t) is a real-time input, however, we must replace ƒn+1 with an estimate, which we denote as ƒ̂n+1. For example, this estimate can be based on a linear extrapolation from ƒn and ƒn-1 using the formula

ƒ̂n+1 = 2ƒn − ƒn-1        (6.93)

This is simply the extrapolation formula in Table 4.1 with inputs rn and rn-1, and dimensionless extrapolation interval α = 1. The Tustin (trapezoidal) formula for integrating ẋ = ƒ can in this case be written as

xn+1 = xn + hƒ̂n+1/2        (6.94)

where ƒ̂n+1/2 is considered an estimate of the input at the midpoint of the integration step given by

ƒ̂n+1/2 = (ƒn + ƒn+1)/2        (6.95)

If we replace ƒn+1 in Eq. (6.95) with ƒ̂n+1 as given in Eq. (6.93), we obtain the following real-time estimate, ƒ̃n+1/2, for the input at the n + 1/2 frame:

ƒ̃n+1/2 = (3ƒn − ƒn-1)/2        (6.96)

This is in fact the extrapolation formula in Table 4.1 with inputs rn and rn-1, and dimensionless extrapolation interval α = 1/2. We also observe that Eq. (6.96), when combined with Eq. (6.94), represents nothing more than AB-2 integration. Table 4.2 shows that the extrapolation algorithm in Eq. (6.96) produces a gain error approximately equal to (3/8)(ωh)2 and a phase error of zero to order h2. Eq. (6.95) can be considered equivalent to extrapolation based on ƒn+1 and ƒn with α = -1/2. For this case Table 4.2 shows that the algorithm produces an approximate gain error given by (-1/8)(ωh)2. When the formula in Eq. (6.96) is substituted into Eq. (6.94), the net change in gain to order h2 will be (3/8)(ωh)2 – (-1/8)(ωh)2 = (1/2)(ωh)2. This means that the gain error when using trapezoidal integration to simulate the controller, but with Eq. (6.93) used to estimate ƒn+1, will be given by the formula for eM in Eq. (6.86) plus (ωh)2/2. This will result in a substantial increase in the transfer function gain error. To order h2 there will be no change in the phase error.
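The equivalence to AB-2 noted above can be confirmed directly: substituting the extrapolated estimate 2ƒn − ƒn-1 for ƒn+1 in the trapezoidal integrator reproduces the AB-2 update term for term. A quick identity check (illustrative Python, with an arbitrary sample sequence):

```python
# For a real-time input, replacing f[n+1] in the trapezoidal rule by the
# linear extrapolation 2 f[n] - f[n-1] turns the integrator
#   x[n+1] = x[n] + (h/2)(f[n+1] + f[n])
# into the AB-2 formula
#   x[n+1] = x[n] + (h/2)(3 f[n] - f[n-1]).
h = 0.1
f = [0.0, 0.3, 0.8, 1.0, 1.0, 0.9, 0.7]     # arbitrary sample sequence
xa = xb = 0.0
for n in range(1, len(f) - 1):
    f_hat = 2*f[n] - f[n-1]                  # extrapolated estimate of f[n+1]
    xa = xa + h/2*(f_hat + f[n])             # trapezoid with estimated input
    xb = xb + h/2*(3*f[n] - f[n-1])          # AB-2 integrator
print(xa, xb)
```

The two sequences are identical, which is why the real-time Tustin integrator for a pure integration inherits the AB-2 input error.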

By the same token, if we substitute Eq. (6.93) for ƒn+1 into the state-transition difference equations based on ƒn and ƒn+1, the gain error is increased by (ωh)2/2. This changes the gain error in Table 6.1 from (-1/12)(ωh)2 to (5/12)(ωh)2, i.e., precisely the gain error formula when using the first-order approximation of ƒ(t) based on ƒn and ƒn-1. As noted earlier, this represents an increase by a factor of 5 in the magnitude of the transfer function gain error in order to accommodate the real-time input requirement.

Based on the above results, one might ask why Tustin’s method should even be considered, since the first-order estimate for ƒn+1 in Eq. (6.93) adds (ωh)2/2 to the fractional error in transfer function gain. Why not simply use the AB-2 predictor algorithm, which will produce comparable gain accuracy? The answer to this question lies in the comparative stability of the two methods. Figure 3.2 shows the stability boundary in the λh plane when using AB-2 integration. For our controller example with T1 = T2 = 0.1Ce, the figure shows that the simulation will become unstable for h > T1. On the other hand, Figure 3.4 shows that the stability boundary for second-order implicit integration, i.e., Tustin’s method, is the imaginary axis in the λh plane, with the region of stability represented by the entire left-half plane. Thus the method is stable for any step size h as long as the linear system being simulated is stable. Stability of the algorithm is one of the principal reasons for using Tustin’s method. In our controller example we are more likely to be interested in the controller transfer function fidelity at frequencies in the neighborhood of 1/Ce. For T1 = 0.1Ce this corresponds to the frequency 0.1/T1. If we let h = T1 and ω = 0.1/T1, the gain term (ωh)2/2 is equal to 0.005 when using Eq. (6.93) to convert Tustin’s method into a real-time algorithm. If we use AB-2 integration for the same simulation, a step size h = T1 will cause the simulation to be neutrally stable (due to the extraneous AB-2 roots). As noted earlier, the question of algorithm stability does not arise when using the state-transition method, since by definition the algorithm matches exactly the characteristic roots of the system being simulated.
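The stability claims above can be checked numerically. For ẋ = λx, the AB-2 difference equation has the characteristic polynomial z² − (1 + 3λh/2)z + λh/2, while Tustin's method has the single root (1 + λh/2)/(1 − λh/2). The sketch below (an illustration, not from the text) computes the largest root magnitude for several values of λh on the negative real axis:

```python
import cmath

# Stability comparison for x' = lam*x.  The AB-2 difference equation
#   x[n+1] = x[n] + (lam*h/2)(3 x[n] - x[n-1])
# has characteristic equation z**2 - (1 + 3q/2) z + q/2 = 0 with q = lam*h,
# while Tustin's method has the single root z = (1 + q/2)/(1 - q/2).
def ab2_radius(q):
    b = -(1 + 1.5*q)
    c = 0.5*q
    d = cmath.sqrt(b*b - 4*c)
    return max(abs((-b + d)/2), abs((-b - d)/2))

def tustin_radius(q):
    return abs((1 + q/2)/(1 - q/2))

for q in (-0.5, -1.0, -1.1, -5.0):
    print(q, ab2_radius(q), tustin_radius(q))
```

AB-2 is neutrally stable at λh = −1 (an extraneous root reaches −1) and unstable beyond, whereas the Tustin root magnitude stays below unity for any negative λh.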

It should be noted that the difference equation for Tustin’s method in the case of real-time inputs cannot in general be obtained simply by using Eq. (6.93) to substitute ƒ̂n+1 for ƒn+1 in the original difference equation. Such a substitution procedure only works when simulating a first-order system. Thus replacing ƒn+1 in Eq. (6.82) with 2ƒn − ƒn-1 will not lead to the correct difference equation for simulating our second-order controller with a real-time input. The correct equation can be obtained by writing the difference equations for solving the original state equations represented by (6.33) and (6.34) using trapezoidal integration, but with ƒn+1 replaced by 2ƒn − ƒn-1. By taking the Z transform of these equations and solving for the overall system Z transform, H*(z) = X*(z)/F*(z), we can determine the final difference equation. In this way we obtain

(6.97)

Here the formulas for the coefficients a0 and a1 are the same as those given earlier in Eq. (6.83). The formulas for b0, b1 and b2 are

(6.98)

where D is given by the previous formula in Eq. (6.84).

On the other hand, if Eqs. (6.88), (6.90) and (6.92) are used for the trapezoidal-integration simulation of our bandwidth-limited controller, the correct difference equations for the case of real-time inputs can be obtained by simply replacing ƒn+1 in Eq. (6.89) by ƒ̂n+1, as given in Eq. (6.93).

6.8 Example of a Third-order Substitution Method

The substitution method for simulating linear systems using implicit integration is not limited to Tustin’s method. For example, we can use a third-order substitution based on the third-order implicit integration algorithm given in Eq. (3.84). From Eq. (3.85) we see that the integrator Z transform is given by

H*I(z) = (h/12)(5z² + 8z − 1)/(z² − z)        (6.99)

Substituting the reciprocal of H*I(z) for s in the controller transfer function H(s) in Eq. (6.25), we obtain H*(z) for the digital system Z transform when third-order implicit integration is used. The corresponding difference equation takes the form

(6.100)

Alternatively, we can write the difference equations for third-order implicit integration of the state equations given in (6.87) and obtain an explicit formula for X1n+1. In this way we obtain the following equations for simulating our bandwidth-limited controller:

(6.101)

(6.102)

(6.103)

where

(6.104)

The implementation of third-order implicit integration using Eqs. (6.101), (6.102) and (6.103) requires 11 additions and 11 multiplications versus the 8 additions and 9 multiplications needed to implement the single difference equation given by Eq. (6.100). On the other hand, Eqs. (6.101), (6.102) and (6.103) pose a less severe startup problem.

If we use the state-transition method with the analytic approximation for the input ƒ(t) based on ƒn+1, ƒn and ƒn-1, the difference equations take the form

(6.105)

where the final output Xn+1 is given by Eq. (6.103). For our specific controller example with T1 = T2 = 1/ωn, the coefficients w11, w12, w21 and w22 are given in Eq. (6.73). From the ƒ(t) approximation formula in Table 6.1 based on ƒn+1, ƒn and ƒn-1 we obtain the following formulas for b10 through b22 in terms of the unit impulse, step, ramp and acceleration responses of Eqs. (6.74) through (6.77):

(6.106)

From Eqs. (6.105) and (6.103) we see that the state-transition method requires 11 multiplications and 9 additions per integration step. For the input representation based on ƒn+1, ƒn and ƒn-1, Table 6.1 shows that the transfer function will have a phase error given approximately by -(ωh)3/24 and a gain error of zero to order h3.

The transfer function errors for the third-order substitution method are given by Eqs. (3.47), (3.69) and (3.70), where from Table 3.1 we see that eI = -1/24 and k = 3. This results in the following approximate gain and phase errors:

(6.107)

Comparison with the errors in Eq. (6.86) for Tustin’s method shows that the normalized error coefficient magnitudes for gain and phase here have simply reversed roles. For T1 = T2 = 0.1Ce, Figure 6.2 shows plots of the gain and phase errors, normalized through division by (ωh)3, versus dimensionless frequency ωCe. The errors are shown for both the third-order implicit substitution method introduced in this section and the third-order state-transition method considered here. Note that the phase error for the third-order implicit method is in general smaller in magnitude than the phase error for the state-transition method, especially at frequencies below 1/Ce. Only as the frequency becomes large compared with 1/Ce do the phase errors for the two methods become equal. Figure 6.2 also shows that the gain error for the third-order implicit method is comparable with the phase error at intermediate frequencies but becomes very much less at high frequencies. As noted earlier, the gain error for the third-order state-transition method is zero to order h3. We recall, of course, that all the formulas used here for transfer function gain and phase errors are approximate, based on the errors themselves being small. However, as noted in the previous section, the formulas work surprisingly well even when the error magnitudes are appreciable compared with unity.

The third-order implicit-substitution and third-order state-transition methods of this section both require ƒn+1 as well as ƒn and ƒn-1 as inputs for the nth integration frame. This means that neither of the two methods is suitable for a real-time simulation when ƒ(t) is a real-time input.

Figure 6.2. Normalized gain and phase errors versus ωCe for controller transfer function simulated

by third-order implicit substitution and third-order state-transition methods.

For the state-transition method we can shift to the analytic input approximation in Table 6.1 that is based on ƒn, ƒn-1 and ƒn-2. However, this increases the magnitude of the transfer function phase error from (1/24)(ωh)3 to (3/8)(ωh)3, i.e., by a factor of 9. For the third-order implicit substitution method our only choice is to compute an estimate for ƒn+1 based on ƒn and past values. To preserve third-order accuracy we use ƒn, ƒn-1 and ƒn-2. For quadratic extrapolation with α = 1 we obtain from Table 4.1 the following formula:

ƒ̂n+1 = 3ƒn − 3ƒn-1 + ƒn-2        (6.108)

The estimate ƒ̂n+1 then replaces ƒn+1 in Eq. (6.101) in order to implement the real-time algorithm. Note that this changes the term (5ƒn+1 + 8ƒn − ƒn-1) to (23ƒn − 16ƒn-1 + 5ƒn-2), which is equivalent to using the AB-3 method instead of the third-order implicit method to integrate the controller input ƒ(t). From Table 3.1 we see that the approximate phase error associated with AB-3 integration is (3/8)(ωh)3, whereas the approximate phase error associated with third-order implicit integration is (-1/24)(ωh)3. It follows that the net change in overall transfer-function phase shift when AB-3 replaces third-order implicit integration of the input becomes (3/8)(ωh)3 – (-1/24)(ωh)3 = (5/12)(ωh)3. This must be added to the phase error eA given in Eq. (6.107) in order to determine the overall transfer function phase error for this real-time version of the third-order substitution method. As pointed out for Tustin’s method in Section 6.7, the main advantage in using the third-order implicit method in this real-time case as opposed to standard AB-3 integration lies in the improved stability of the overall controller simulation. In particular, comparison of the stability boundary in Figure 3.4 for third-order implicit integration with the stability boundary in Figure 3.2 for AB-3 integration shows that the implicit method permits an order of magnitude larger step size h before the integration becomes unstable. This could represent a very important reason for using the third-order substitution method in simulating our example controller. Of course, the third-order state-transition method will be stable for any step size h.
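The coefficient bookkeeping above is easy to verify: substituting the quadratic extrapolation into the third-order implicit (Adams-Moulton) integrator reproduces the AB-3 formula exactly. A quick identity check (illustrative Python, with an arbitrary sample sequence):

```python
# Substituting the quadratic extrapolation f[n+1] ~ 3 f[n] - 3 f[n-1] + f[n-2]
# into the third-order implicit integrator
#   x[n+1] = x[n] + (h/12)(5 f[n+1] + 8 f[n] - f[n-1])
# reproduces the AB-3 integrator
#   x[n+1] = x[n] + (h/12)(23 f[n] - 16 f[n-1] + 5 f[n-2]).
h = 0.1
f = [0.0, 0.2, 0.7, 1.0, 1.1, 1.0, 0.8]      # arbitrary sample sequence
xa = xb = 0.0
for n in range(2, len(f) - 1):
    f_hat = 3*f[n] - 3*f[n-1] + f[n-2]        # quadratic extrapolation
    xa = xa + h/12*(5*f_hat + 8*f[n] - f[n-1])   # implicit with estimate
    xb = xb + h/12*(23*f[n] - 16*f[n-1] + 5*f[n-2])  # AB-3 integrator
print(xa, xb)
```

The two updates agree term for term, confirming that the real-time third-order substitution method integrates the input exactly as AB-3 does.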

6.9 Time-domain Response Errors for Step Inputs

Up to this point in the chapter we have examined the dynamic performance of the state-transition, Tustin, and third-order substitution methods in the frequency domain by analyzing the transfer-function gain and phase errors. In this section we examine the dynamic errors in the time domain by considering the response of the example linear controller to step input functions using each of the simulation methods. The transfer function of the bandwidth-limited controller is given by Eq. (6.25). As in Sections 6.7 and 6.8, we let T1 = T2 = 0.1Ce. This results in the following transfer function:

(6.109)

The state equations for the system can be written as

(6.110)

(6.111)

The two state equations for X1 and X2 represent a second-order system with repeated characteristic roots equal to –ωn, as discussed in Section 6.6. Thus the analytic solutions for X1 in the case of unit impulse, step, ramp and acceleration inputs are given, respectively, by Eqs. (6.74) through (6.77). It follows from Eqs. (6.74), (6.75) and (6.111) that the analytic formula for the unit step response of the overall controller is given by

(6.112)

For ωn = 10/Ce, this solution is plotted as a function of t/Ce in Figure 6.3.

Figure 6.3. Unit-step response of controller.

We consider first the controller simulation using the state-transition method with the continuous input ƒ(t) over the nth frame represented by a linear approximation based on ƒn and ƒn+1. The method employs Eqs. (6.34) and (6.43), with the coefficients given by Eqs. (6.60) and (6.73) through (6.76). We assume an ongoing simulation with the input ƒn = 0 for n < 0 and ƒn = 1 for n ≧ 0. The controller states X1 and X2, and hence the controller output X, are all assumed equal to zero for t < 0. For the frame beginning at t = – h it follows that X1-1 = X2-1 = 0, ƒ-1 = 0, and ƒ0 = 1 in Eq. (6.43). For an integration step size h = 0.025Ce, the resulting solution for Xn is shown in Figure 6.4. Also shown is the exact step response of the continuous system. Note that the state-transition solution appears to lead the ideal continuous solution, with the state-transition solution X0 at t = 0 equal to 1.07 instead of zero. To understand why this happens, we need to examine the nature of the numerical approximation to the step input. In Figure 6.5 is shown a unit step input ƒ(t) which occurs at t = 0, along with the linear approximation ƒ̂(t) based on ƒn and ƒn+1, as utilized here by the state-transition method. Although the ideal unit step input

Figure 6.4. Unit step response of controller when simulated by the state-transition

method based on ƒn and ƒn+1.

does not begin until t = 0, the linear approximation ƒ̂(t) extends from t = –h to t = 0 over the previous frame. This is because of our assumption that the simulation has already been running for negative t. Thus the area under ƒ̂(t) versus t is equivalent to the area under an ideal unit step that begins at t = –h/2, not t = 0. Actually, when the input data sequence is obtained by sampling a continuous input ƒ(t), the sample ƒ0 = 1 can result from a step input ƒ(t) which occurs anywhere over the interval –h < t ≤ 0, with t = –h/2 being the most likely time of step occurrence. Based on the nature of the input interpolation function ƒ̂(t), then, it is more appropriate to compare the state-transition solution with the continuous solution for a unit step applied at t = –h/2, not t = 0. This has been done in Figure 6.6. Note that the state-transition solution now provides a much

Figure 6.5. Linear interpolation function ƒ̂(t) for a unit step occurring at t = 0; the equivalent

step input based on the area under ƒ̂(t) versus t occurs at t = –h/2.

Figure 6.6. Comparison of state-transition method based on ƒn and ƒn+1 with the ideal

controller response to a unit step advanced in time by h/2 seconds.

better match to the ideal solution. In Figure 6.7 is shown the simulation error, i.e., the difference between the state-transition solution and the exact controller response to a unit step input applied at t = –h/2 = – 0.0125Ce. Also shown in Figure 6.7 is the simulation error for Tustin’s method using Eqs. (6.90), (6.91) and (6.92). Again, the error is based on the continuous solution for a

Figure 6.7. Controller simulation; comparison of unit step-response errors using Tustin’s

method with errors using the state-transition method based on ƒn and ƒn+1.

unit step applied at t = –h/2, since Tustin’s method, i.e., trapezoidal integration, utilizes the same linear approximation for the step input that was illustrated previously in Figure 6.5. Figure 6.7 shows that the state-transition method produces a more accurate simulation of the controller response to a unit step input. Since both methods are second-order, for any given time t the error for step sizes other than h = 0.025Ce will vary approximately as h2.

In the step-function response errors shown in Figure 6.7 for the state-transition method and Tustin’s method, both techniques required ƒn and ƒn+1 as input data points for the nth integration frame. As we have noted previously, this is not compatible with real-time inputs in a real-time simulation. In the real-time case Eqs. (6.63) and (6.64), which are based on linear extrapolation of the input using ƒn and ƒn-1, can be used for the state-transition method. For Tustin’s method Eq. (6.90) can be employed, with ƒn+1 replaced by ƒ̂n+1, as given in Eq. (6.93) in terms of ƒn and ƒn-1. For both cases Figure 6.8 shows ƒ̂(t), the approximation to the input based on linear extrapolation using ƒn and ƒn-1. As in Figure 6.5 we have assumed an ongoing simulation, with ƒn = 0 for n < 0 and ƒn = 1 for n ≥ 0. The incremental area, h/2, under the triangular extrapolation function from t = 0 to t = h in Figure 6.8 is equivalent to the incremental area that results when the unit step is initiated at t = –h/2 instead of t = 0. This is the same conclusion we reached in considering the unit-step interpolation function in Figure 6.5. Thus it is appropriate to compare the unit step response obtained by the real-time state-transition and Tustin methods with the ideal controller response to a unit step which occurs at t = –h/2. This is further verified by the results shown in Figure 6.9, where the real-time state-transition solution for h = 0.025Ce is compared with the exact controller step response and the exact step response advanced in time by h/2. After the initial startup errors associated with the use of first-order extrapolation with a step-input function, it is clear that the state-transition solution is a much better match with the exact solution when the exact solution is advanced in time by h/2. Similar results are obtained in the case of Tustin’s method.

For both the real-time state-transition and Tustin methods the errors in simulated controller step response, based on the exact response to a unit step input at t = –h/2, i.e., advanced in time by h/2, are plotted versus dimensionless time t/Ce in Figure 6.10. Comparison with the earlier results in Figure 6.7 shows that the errors in Figure 6.10 for real-time inputs using ƒn and ƒn-1 are significantly larger than those for the non real-time inputs using ƒn and ƒn+1. This is not only true for the initial transient errors, which are very large indeed, but also for the ongoing transient errors.

Figure 6.8. Linear extrapolation function ƒ̂(t) for a unit step occurring at t = 0;

the equivalent step input based on the area under ƒ̂(t) versus t occurs at t = –h/2.

Figure 6.9. Unit step response of controller when simulated by the state-transition method

based on ƒn and ƒn-1; comparison with exact response to step input at t = 0 and t = –h/2.

Figure 6.10. Controller simulation; comparison of unit step-response errors with both the

state- transition and Tustin’s method based on inputs ƒn and ƒn-1.

6.10 Time-domain Response Errors for Acceleration-limited Step Inputs

In Section 3.13 we noted that utilization of step-input functions for dynamic systems is not a very effective way to study the time-domain accuracy of different numerical simulation methods, unless one is specifically interested in the response to such step inputs. The reason is the non-existence of the time derivatives of step functions, which causes very large errors in any predictor-type integration algorithm. These errors can completely dominate the overall response error. We have seen this in the controller simulation examples presented in the section just concluded. Furthermore, true step inputs are unlikely to occur in most real-world dynamic systems. For this reason we consider again the same test input introduced in Section 3.13, namely, the acceleration-limited step input given in Eq. (3.176), illustrated originally in Figure 3.14, and shown again in Figure 6.11 with the time constant T in Eq. (3.176) equal to 0.1Ce. Also shown in Figure 6.11 is the exact controller response to the acceleration-limited step input, as well as the exact controller response to a true step input which is delayed in time so that it occurs at the midpoint of the transient buildup associated with the acceleration-limited step input. The figure shows that the response to the acceleration-limited step input is quite similar in shape to the response for a true step input, and therefore represents a valid test of various controller simulation algorithms when subjected to a step-like input, but one with no displacement or velocity discontinuity.

Figure 6.11. Controller response to an acceleration-limited step input.

For the same acceleration-limited step input, Figure 6.12 shows the output response error when the controller is simulated by Tustin’s method and by the state-transition method, where both methods use the input data points ƒn and ƒn+1, and an integration step size h = 0.025Ce. Thus Tustin’s method is based on Eqs. (6.90) through (6.92), and the state-transition method is based

Figure 6.12. Output error in simulated controller response for an acceleration-limited step input based on input data points ƒn and ƒn+1.

on Eqs. (6.34) and (6.43), along with (6.73) through (6.76). The figure shows that the maximum time-domain errors are quite comparable for both methods, which is also consistent with the transfer function errors shown earlier in Figure 6.1. The effect of the step changes in input acceleration that occur at t = 0, 0.1Ce and 0.2Ce is also evident in Figure 6.12, where both methods show substantial transients generated at each of these times.
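The structure of the two update formulas can be illustrated in miniature on a first-order lag rather than the book's second-order controller; the sketch below is our own hedged illustration, not an implementation of Eqs. (6.34), (6.43), or (6.90) through (6.92). Both routines advance τẊ + X = ƒ by one frame h, with the input taken as the straight line through the data points ƒn and ƒn+1.

```python
import math

def state_transition_step(x, f0, f1, h, tau):
    """State-transition update for tau*xdot + x = f: the analytic solution
    over [t_n, t_n + h] when f(t) is linearly interpolated between
    f0 = f_n and f1 = f_{n+1}.  The discrete root exp(-h/tau) matches the
    continuous characteristic root exactly, so the only error source is
    the input approximation."""
    a = math.exp(-h / tau)
    c1 = (tau / h) * (1.0 - a) - a    # weight on f_n
    c2 = 1.0 - (tau / h) * (1.0 - a)  # weight on f_{n+1}
    return a * x + c1 * f0 + c2 * f1

def tustin_step(x, f0, f1, h, tau):
    """Tustin's method (trapezoidal integration) for the same lag, with
    the implicit difference equation solved explicitly for x_{n+1}."""
    r = h / (2.0 * tau)
    return ((1.0 - r) * x + r * (f0 + f1)) / (1.0 + r)
```

For a constant input the state-transition coefficients sum to 1 − e^(−h/τ), so its response is exact at the frame times for any step size h, while Tustin's update remains stable for any h but accumulates a small root error of order (h/τ)³ per step.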

We consider next the controller response errors for an acceleration-limited step input when real-time versions of the state-transition and Tustin methods are used. In this case we employ Eqs. (6.63) and (6.64) for the state-transition method and Eq. (6.90) with ƒn+1 replaced by ƒ̂n+1, as given in Eq. (6.93), for Tustin’s method. For both methods the input approximation is based on linear extrapolation using ƒn and ƒn-1. Figure 6.13 shows the errors in simulated response, as well as the errors when AB-2 integration is used for the simulation. As before, the integration step size h = 0.025Ce. The errors for the state-transition and Tustin methods are almost identical, and they are much larger than the errors in Figure 6.12 for the non real-time case that uses the input data points ƒn and ƒn+1. Also, the maximum errors in Figure 6.13 for the AB-2 method are comparable to the maximum errors for the state-transition and Tustin methods. This suggests that the predominant cause of the errors lies in the input approximation ƒ̂(t), which for all three methods is based on linear extrapolation using ƒn and ƒn-1, as illustrated in Figure 6.14. Note that ƒ̂(t) falls below the exact input ƒ(t) for 0 ≤ t < 4h in Figure 6.14. This will cause a decrease in simulated controller response compared with the ideal response, which explains the initial negative errors that appear in Figure 6.13 for all three methods.

Figure 6.13. Output error in simulated controller response for an acceleration-limited step input based on input data points ƒn and ƒn-1.

Figure 6.14. Linear extrapolation function ƒ̂(t) based on ƒn and ƒn-1, as used to approximate the acceleration-limited step input function in the case of real-time inputs.

In contrast with Figure 6.14, Figure 6.15 shows that when the simulation input for the nth integration frame is based on linear interpolation between ƒn and ƒn+1, as in the state-transition and Tustin simulation results shown in Figure 6.12, the approximation ƒ̂(t) is a much better representation for the exact acceleration-limited step input function. This explains why the simulation errors in Figure 6.12 are much smaller than those in Figure 6.13.

Figure 6.15. Linear interpolation function ƒ̂(t) based on ƒn and ƒn+1, as used to approximate the acceleration-limited step input function for the state-transition and Tustin methods.
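The two input approximations compared above can be written down directly; the function names below are our own.

```python
def f_hat_extrapolated(t, tn, fn, fnm1, h):
    """Real-time approximation over the frame [tn, tn + h]: the straight
    line through the two most recent samples f_{n-1} and f_n, extended
    forward in time (linear extrapolation)."""
    return fn + (fn - fnm1) * (t - tn) / h

def f_hat_interpolated(t, tn, fn, fnp1, h):
    """Non-real-time approximation: the straight line between f_n and the
    future sample f_{n+1} (linear interpolation)."""
    return fn + (fnp1 - fn) * (t - tn) / h
```

For samples of the convex test input ƒ(t) = t² with h = 0.1 at tn = 0.2 (ƒn−1 = 0.01, ƒn = 0.04, ƒn+1 = 0.09), the extrapolated estimate at t = 0.3 is 0.07 against the true 0.09, whereas interpolation is exact at the frame endpoints. This undershoot during the rising portion of the input is the mechanism behind the initial negative errors in Figure 6.13.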

6.11 Alternative Definition of the Controller State Equations

Earlier in this chapter we utilized Eq. (6.87) to represent the first-order state equations describing the bandwidth-limited proportional-plus-rate controller, as defined by the transfer function of Eq. (6.25). In this representation the state variables are X1 and X2, with Ẋ1 = X2 and the controller output given by X = X1 + CeX2. Alternatively, we can define the controller output X as a state, in which case the state equations can be written as follows:

(6.113)

Differentiation of the above equation for Ẋ leads directly to the second-order differential equation represented by the controller transfer function of Eq. (6.25). The above form for the controller state equations avoids the extra calculation of X in terms of X1 and X2, which requires an extra multiplication and addition. On the other hand, the calculation of Ẋ in Eq. (6.113) takes an extra multiplication and addition, so that there is no net computational saving over Eq. (6.87). Furthermore, the state equations must be written in the form of Eq. (6.87) when using the state-transition method, where the input ƒ cannot be included in the definition of a state. Also, the derivation of the two explicit difference equations in the case of trapezoidal integration (Tustin's method) is simpler when Eq. (6.87) is used to define the controller state equations.

6.12 The State-transition Method Applied to the Simulation of Nonlinear Systems

The state-transition method, as presented in Section 6.2, is clearly limited to the simulation of time-invariant linear systems. It can also be applied to the simulation of time-varying linear systems, in which case the matrices become explicit functions of time. Indeed, if we are willing to let the matrices be functions of both the state and input vectors, the state-transition method can be applied to the simulation of nonlinear systems. In this case, however, it is more efficient to write the state-transition vector difference equation at the nth integration frame directly as a function of the state vector Xn and input vectors Fn, Fn-1, … . Thus we have

Xn+1 = G(Xn, Fn, Fn-1, … )                                  (6.114)

Here the number of past input vector data points Fj included in the function G depends on the selected order of the approximation for F(t). For example, a linear approximation for F(t) requires Fn and Fn-1.

Since an analytic solution to nonlinear differential equations is in general not available, the only way the nonlinear function G in Eq. (6.114) can be determined is through numerical simulation. This is accomplished off-line using conventional numerical integration methods with a step size h/N, much smaller than the step size h used in the difference equation represented by Eq. (6.114). Thus for a given initial state Xn and given values for the input data points Fn, Fn-1, … , the off-line simulation proceeds through N integration steps in order to calculate Xn+1. This process is repeated for enough different values of Xn, Fn, Fn-1, … to span the required space of X and F with sufficient resolution. In this way a multi-variable function table is generated to represent the function G. Table lookup and interpolation (probably linear) are then used to implement Eq. (6.114) in an on-line, real-time simulation. If the number of scalar components constituting the state vector X and the input vector F is large, then clearly the dimension of the multi-variable function table used to represent G will be high, and the memory needed to store the table may easily exceed practical limits. On the other hand, if the nonlinear system is of low order and the input vector consists of only one or two components, the memory required to store the multi-variable function table that represents G can fit comfortably within available memory.

For example, suppose the nonlinear system being simulated is second-order and has only one input, and assume that 10 data points provide sufficient resolution for each state variable and the input. Furthermore, assume that a linear representation of the input based on Fn and Fn-1 represents an adequate approximation for F(t). Then the required memory size for each of the two 4-variable function tables is 10⁴ or 10,000 words. Each frame of the real-time simulation is then accomplished using Eq. (6.114) instead of conventional integration. For very difficult nonlinearities, e.g., mechanical systems with different levels of running and sticking friction, the time required for table lookup and linear interpolation each integration frame may be much less than the time required when using conventional real-time integration methods, especially if a much smaller step size is needed to obtain comparable accuracy in the latter case. The nonlinear state-transition method can be especially effective in simulating stiff nonlinear subsystems. Also, in many cases the dimension of the function table can be reduced with no sacrifice in accuracy by utilizing Fn+1/2 as a single input data point each frame, instead of the two input data points Fn and Fn-1, as needed for a linear approximation for F(t).*
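The off-line table generation and on-line lookup described above can be sketched for a one-state system; everything here, including the cubic nonlinearity ẋ = u − x³ and the zero-order hold on the input (so that G depends only on xn and un), is our own illustrative choice rather than anything from the text.

```python
import bisect

def step_exact(x, u, h, n_sub=100):
    """Off-line reference: advance xdot = u - x**3 one frame h using
    n_sub small Euler sub-steps (step size h/N in the text's notation)."""
    dt = h / n_sub
    for _ in range(n_sub):
        x += dt * (u - x ** 3)
    return x

def build_table(x_grid, u_grid, h):
    """Tabulate G(x_n, u_n) = x_{n+1} over the grid by off-line
    integration; table[i][j] corresponds to (x_grid[i], u_grid[j])."""
    return [[step_exact(x, u, h) for u in u_grid] for x in x_grid]

def interp2(table, x_grid, u_grid, x, u):
    """On-line evaluation of G by table lookup with bilinear
    interpolation, clamped to the outermost grid cells."""
    i = min(max(bisect.bisect_right(x_grid, x) - 1, 0), len(x_grid) - 2)
    j = min(max(bisect.bisect_right(u_grid, u) - 1, 0), len(u_grid) - 2)
    tx = (x - x_grid[i]) / (x_grid[i + 1] - x_grid[i])
    tu = (u - u_grid[j]) / (u_grid[j + 1] - u_grid[j])
    g00, g01 = table[i][j], table[i][j + 1]
    g10, g11 = table[i + 1][j], table[i + 1][j + 1]
    return (1 - tx) * ((1 - tu) * g00 + tu * g01) + \
           tx * ((1 - tu) * g10 + tu * g11)
```

An 11-point grid per variable gives a 121-entry table here; with two states and the two input points Fn and Fn-1 of the text's example, each of the two 4-variable tables grows to the 10⁴ entries quoted above.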

As memory size and speed continue to increase, the function-generation method for simulating nonlinear subsystems of low order will become even more competitive with conventional integration, especially in applications where the time required for off-line calculation of the function-table entries is affordable because of the resulting improvement in on-line performance.

6.13 Summary of Results

In this chapter we have introduced the state-transition method and Tustin’s method for simulating linear systems. We have seen that the only errors for finite integration step size when

* See, for example, K.C. Lin and R.M. Howe, “Simulation Using Staggered Integration Steps, Part II: Implementation of Dual-speed Systems,” Transactions of the Society for Computer Simulation, Vol. 10, No. 4, December, 1993, pp 285-297. Also, R.M. Howe and K.C. Lin, “The Use of Function Generation in the Real-time Simulation of Stiff Systems,” Proc. of the 1990 AIAA Flight Simulation Technologies Conference, pp 217-224.

using the state-transition method are those associated with the algorithm used to approximate the input data sequence as a continuous function of time. In Table 6.1 a number of first-, second-, and third-order approximation algorithms are summarized, along with the corresponding transfer function errors. In the case of Tustin's method, which is simply trapezoidal integration, the dynamic errors are those already derived in Chapter 3 for second-order implicit (trapezoidal) integration. In Section 6.8 we introduced a third-order substitution method by utilizing the same procedure already employed to generate explicit difference equations for Tustin's method. Using a second-order proportional-plus-rate controller as an example, we compared the transfer function errors of both second- and third-order state-transition and implicit methods in Sections 6.7 and 6.8. We found that for input frequencies below the bandwidth of the controller, Tustin's method, as well as its third-order counterpart, produces smaller errors; for frequencies above the bandwidth of the controller, the state-transition method has a small edge. Both methods have very robust stability characteristics, which is one of their attractive features compared with conventional real-time integration methods such as AB-2 and AB-3.

When either method is used in a real-time simulation with real-time inputs, the algorithms must be selected so that input data points are not required before they are available in real time. For the state-transition method this restricts the choice of formulas used to provide a continuous approximation to the input function. In the case of Tustin’s method with an input data sequence {ƒn}, this requires that an estimate ƒ̂n+1 based on extrapolation be used to replace ƒn+1 in the derivation of the explicit difference equations for the nth frame. For both methods the real-time input requirement increases substantially the dynamic errors for a given integration step size h.

In Sections 6.9 and 6.10 we considered the time-domain errors when simulating the controller with the state-transition and Tustin methods. In the simulated controller response to a unit step input, we found that both methods produce solutions consistent with a step that occurs one-half integration frame ahead of the ideal step input. On this basis the time-domain errors of the second-order state-transition method are roughly half those for Tustin’s method when using input data points ƒn and ƒn+1. When real-time algorithms using input data points ƒn and ƒn-1 are used, both methods produce larger step response errors than in the non real-time case which uses ƒn and ƒn+1. The entire error in the simulated controller response when using the state-transition method is caused by the function ƒ̂(t) used to approximate the input. In the case of Tustin’s method some but not all of the error is caused by the input approximation associated with the interpolation or extrapolation that is inherent in the method.

Thus the effect of the discontinuity in an ideal step input tends to dominate the errors in simulated controller response. In an attempt to overcome this, we studied in Section 6.9 the simulated controller response to an acceleration-limited step input. We found that this did indeed reduce substantially the controller response errors in the non real-time case using input data points ƒn and ƒn+1. However, the errors actually increased in the real-time case. Again, this results from the substantial errors associated with the first-order extrapolation formula based on ƒn and ƒn-1.

We conclude by observing that the main advantage in using either the state-transition or Tustin method in simulating linear systems with real-time inputs lies in the increased stability of the methods when the integration step size is relatively large. When numerical stability is not an overriding issue, conventional real-time methods, such as AB-2 integration, are often just as effective.

In the next chapter we consider yet another special method for simulating linear systems, namely, the modified-Euler method. This method can also be used in the real-time simulation of some nonlinear systems, but is especially effective when combined with root-matching in the simulation of lightly-damped structural modes, as well as the simulation of high-order linear filters.