
and Γ = QP − ε²I is guaranteed to be nonsingular. It is also easy to verify that

C_e P_e + D_e B_eᵀ = 0    (1.6.32)

The fact that D_e D_eᵀ = ε²I, together with (1.6.30) through (1.6.32), yields E(s)E~(s) = ε²I.
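That these state-variable conditions force the scaled all-pass property is easy to confirm numerically. The sketch below (Python with NumPy/SciPy; the matrices and the value of ε are arbitrary illustrative choices, not the data of the construction) builds the output matrix from a gramian solving the Lyapunov equation and checks E(jω)E(jω)* = ε²I at a sample frequency:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

eps = 0.7                                  # illustrative scaling
A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # any stable A will do
B = np.array([[1.0], [1.0]])

# Lyapunov equation of (1.6.30) type: A Pe + Pe A' + B B' = 0
Pe = solve_continuous_lyapunov(A, -B @ B.T)

D = np.array([[eps]])                      # D D' = eps^2 I
C = -D @ B.T @ np.linalg.inv(Pe)           # enforces C Pe + D B' = 0, cf. (1.6.32)

w = 1.3                                    # any real frequency
E = D + C @ np.linalg.solve(1j * w * np.eye(2) - A, B)
print(np.allclose(E @ E.conj().T, eps**2 * np.eye(1)))   # True
```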
Next, let us argue that Â in the realization of Ĝ_a(s) has no purely imaginary eigenvalues. To obtain a contradiction, suppose that Âᵀx = jωx for some x ≠ 0 and some real ω ≥ 0. Consider the 2-2 block entry of (1.6.30), which is

Â P22 + P22 Âᵀ + B̂ B̂ᵀ = 0    (1.6.33)

Pre-multiplying and post-multiplying by x* and x yields B̂ᵀx = 0. Now the 1-2 block entry of (1.6.30) yields

A P12 + P12 Âᵀ + B_a B̂ᵀ = 0    (1.6.34)

and so A(P12 x) = −jω(P12 x). Since P12 is nonsingular, this exhibits an imaginary axis eigenvalue of A, which contradicts the fact that A is (asymptotically) stable.

Since both A and Â have no imaginary axis eigenvalues, neither does A_e. In the light of the Lemma of the previous subsection, the nonsingularity of P_e and the absence of imaginary axis eigenvalues of A_e imply that the number of eigenvalues of A_e in Re[s] < 0 equals the number of positive eigenvalues of P_e. The matrix P_e is easily seen to have the same inertia as the matrix

[ P    0                  ]
[ 0    P22 − P12ᵀ P⁻¹ P12 ]    (1.6.35)

and the choice of ε ensures that the Schur complement P22 − P12ᵀ P⁻¹ P12, whose eigenvalues take the signs of the quantities σ_i² − ε², has k positive and n − k negative eigenvalues. Thus P_e has n + k positive eigenvalues, so A_e has n + k eigenvalues in Re[s] < 0; since A accounts for n of these, it is then immediate that Â has k negative real part eigenvalues and n − k positive real part eigenvalues.
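The congruence step used in (1.6.35) is the standard inertia additivity of a symmetric block matrix: its inertia is that of the 1-1 block plus that of the Schur complement of that block. A small numerical confirmation (all matrices below are random placeholders, not the gramians of the construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
W = rng.standard_normal((n, n))
P = W @ W.T + n * np.eye(n)            # positive definite 1-1 block
Y = rng.standard_normal((n, n))
X = (Y + Y.T) / 2                      # symmetric, typically indefinite
R = rng.standard_normal((n, n))        # arbitrary coupling block

def inertia(S):
    lam = np.linalg.eigvalsh(S)
    return (int(np.sum(lam > 0)), int(np.sum(lam < 0)))

M = np.block([[P, R], [R.T, X]])
schur = X - R.T @ np.linalg.inv(P) @ R
print(inertia(M) == tuple(a + b for a, b in zip(inertia(P), inertia(schur))))  # True
```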
Let B̂ be partitioned as [B̂1  B̂2] in the same manner as B_a = [B  0]. Notice that when Ĝ + F is formed from Ĝ_a through deletion of the last columns and rows, B̂ will be replaced by B̂1, and similarly for Ĉ and D̂. We shall now argue that the left half plane eigenvalues of Â, which will be poles of Ĝ(s), are all controllable from B̂1. A corresponding observability conclusion follows in the same way.

To obtain a contradiction, suppose that Âᵀx = λx and B̂1ᵀx = 0 for some Re[λ] < 0 and x ≠ 0. Now the 1-2 block entry of (1.6.30) is

0 = A P12 + P12 Âᵀ + B_a B̂ᵀ = A P12 + P12 Âᵀ + B B̂1ᵀ    (1.6.36)

It follows that A(P12 x) = −λ(P12 x), so that A has an eigenvalue with positive real part, which contradicts the asymptotic stability of A.


In general, it cannot be shown that the right half plane eigenvalues of Â are all controllable. It follows that Ĝ(s) has degree k, while the unstable part of Ĝ_a(s) after truncation of the last rows and columns, namely F(s), has degree at most n − k, and may have degree less than n − k.

When ε exceeds σ_1, P22 is negative definite; the 2-2 block of (1.6.30) and the fact that Â has no imaginary axis eigenvalues then imply that (Â, B̂) is controllable. Similarly, (Â, Ĉ) is observable. Then F(s) has degree equal to n.
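The numbers σ_i(G) entering these degree statements are the Hankel singular values of G(s), the square roots of the eigenvalues of the product of the two gramians. A minimal numerical sketch (SciPy assumed; the example system is an arbitrary illustration) computes them, after which ε can be placed strictly between σ_{k+1}(G) and σ_k(G):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# An arbitrary stable example system.
A = np.array([[-1.0, 0.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

# Gramians: A P + P A' + B B' = 0 and A' Q + Q A + C' C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values, ordered sigma_1 >= sigma_2 >= ...
sigma = np.sqrt(np.sort(np.linalg.eigvals(P @ Q).real)[::-1])
print(sigma)   # pick eps strictly inside (sigma_{k+1}, sigma_k)
```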

Linear fractional transformations

In the previous subsection, we described how one particular suboptimal Hankel norm approximation can be found. Our aim now is to describe how all suboptimal approximations can be found. For this purpose, we need a device termed a linear fractional transformation, and in particular we shall appeal to a number of properties of such transformations. This subsection is devoted to summarizing those properties, and the next subsection uses them in describing the class of all suboptimal Hankel norm approximants.

Figure 1.6.1 A lower linear fractional transformation

Consider the arrangement of Figure 1.6.1. Here P(s) and K(s) are proper transfer function matrices of compatible dimensions and, in order that the loop be well defined, there holds

det[I − P22 K](∞) ≠ 0.    (1.6.37)

The closed-loop transfer function from w to z is

F = P11 + P12 K (I − P22 K)⁻¹ P21.    (1.6.38)

This is termed a lower linear fractional transformation (LFT), and one uses the notation

F = F_l[P, K]    (1.6.39)

(The subscript l stands for lower. Clearly, we can define an upper LFT also.)
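At any fixed frequency, (1.6.38) is just a matrix formula, and it is easily coded. The sketch below (NumPy; lower_lft is a hypothetical helper and the block dimensions are illustrative) evaluates F_l(P, K) for constant blocks and enforces the well-posedness condition (1.6.37):

```python
import numpy as np

def lower_lft(P11, P12, P21, P22, K):
    """F = P11 + P12 K (I - P22 K)^{-1} P21, cf. (1.6.38)."""
    M = np.eye(P22.shape[0]) - P22 @ K
    if abs(np.linalg.det(M)) < 1e-12:      # well-posedness, cf. (1.6.37)
        raise ValueError("ill-posed loop: det(I - P22 K) = 0")
    return P11 + P12 @ K @ np.linalg.solve(M, P21)

# With P11 = P22 = 0 and P12 = P21 = I, the LFT returns K itself.
I1 = np.eye(1)
print(lower_lft(0 * I1, I1, I1, 0 * I1, 0.5 * I1))   # [[0.5]]
```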

An alternative view is provided by network theory; see Figure 1.6.2. The matrix P is the scattering matrix of a network, and a second network with scattering matrix K is used to terminate the first network. The scattering matrix of the combination is then F.

Figure 1.6.2 Network view of LFT

In Green and Limebeer (1995), a number of properties of LFTs are established. These are of several kinds: properties dealing with the magnitude or norm of P, K, and F; an invertibility property; and properties dealing with the poles of F (given conditions on magnitudes).

We sum up the magnitude results in a theorem; see Section 4.3.2 of Green and Limebeer (1995). A number of the results will be no surprise, reflecting properties such as the fact that an interconnection of passive or lossless networks is again passive or lossless.

Theorem 1.6.2. Consider the LFT F = F_l(P, K) under the well-posedness condition (1.6.37). Then

1. ‖P‖∞ ≤ 1 and ‖K‖∞ ≤ 1 imply ‖F‖∞ ≤ 1
2. P~P = I and K~K = I imply F~F = I
3. P~P = I and ‖F‖∞ < 1 imply that P21(jω) has full column rank for all real ω
4. Suppose P~P = I and P21(jω) has full row rank for all real ω. Then
(a) ‖F‖∞ > 1 if and only if ‖K‖∞ > 1
(b) ‖F‖∞ = 1 if and only if ‖K‖∞ = 1
(c) ‖F‖∞ ≤ 1 if and only if ‖K‖∞ ≤ 1
(d) provided P21 is square, ‖F‖∞ < 1 if and only if ‖K‖∞ < 1
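Item 1, for instance, can be spot-checked frequency by frequency with random constant contractions (all data below are random placeholders; contract is a hypothetical helper that rescales to spectral norm at most one):

```python
import numpy as np

rng = np.random.default_rng(0)

def contract(M):
    """Scale M to have spectral norm at most 1."""
    return M / max(1.0, np.linalg.norm(M, 2))

P = contract(rng.standard_normal((2, 2)))    # contractive P, split into 1x1 blocks
K = contract(rng.standard_normal((1, 1)))    # contractive K
P11, P12, P21, P22 = P[:1, :1], P[:1, 1:], P[1:, :1], P[1:, 1:]

F = P11 + P12 @ K @ np.linalg.solve(np.eye(1) - P22 @ K, P21)
print(np.linalg.norm(F, 2) <= 1 + 1e-9)      # True: the LFT is again a contraction
```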

The invertibility question is one of guaranteeing that, for a given P, a K can be found yielding a prescribed F. The answer is as follows.

Theorem 1.6.3. Let transfer function matrices P and F be prescribed, with ‖P‖∞ and ‖F‖∞ finite. A sufficient condition for a K to exist such that F = F_l(P, K) and ‖K‖∞ is finite (given dimension compatibility) is that P12(jω) and P21(jω) are square and nonsingular for all real ω, and P22(∞) = 0.

This may be proven by solving (1.6.38) for K.
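Concretely, set R = P12⁻¹(F − P11)P21⁻¹, so that (1.6.38) becomes K(I − P22 K)⁻¹ = R; the choice

K = R(I + P22 R)⁻¹

then recovers F, since it gives I − P22 K = (I + P22 R)⁻¹. (This explicit inversion is a sketch of the argument; it presupposes that the indicated inverses exist.)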

Finally, we have the following result, Lemma 4.3.4 of Green and Limebeer (1995). In the statement, the phrase "K(s) has q poles in the region Ω", for a rational proper K(s), should be taken to mean that the A matrix in a minimal state-variable realization of K(s) has q eigenvalues in the region Ω.

Theorem 1.6.4. Suppose that

    [ A    B1    B2  ]
P = [ C1   0     D12 ]    (1.6.40)
    [ C2   D21   0   ]

where D12 = P12(∞) and D21 = P21(∞) are nonsingular. Suppose that A has exactly p eigenvalues in Re[s] < 0, and that A − B2 D12⁻¹ C1 and A − B1 D21⁻¹ C2 have all eigenvalues in Re[s] > 0. Suppose that ‖P22 K‖∞ < 1 for some square rational K(s) of appropriate dimensions. Then F = F_l(P, K) has exactly p + q poles in Re[s] < 0 if and only if K has exactly q poles in Re[s] < 0.

We remark that the condition ‖P22 K‖∞ < 1 is a type of small gain condition. Roughly speaking, it prevents changes to the open-loop pole distribution of P and K as a result of closing the loop. In open loop, of course, P and K together have exactly p + q stable poles.
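The pole count in Theorem 1.6.4 can be checked on examples by forming the closed-loop state matrix directly; note that for generators of the form (1.6.40) the 2-2 feedthrough is zero, so the loop closes without an algebraic loop. The sketch below (NumPy; all system matrices are hypothetical placeholders to be supplied) assembles the state matrix of F_l(P, K) and counts its stable eigenvalues:

```python
import numpy as np

def closed_loop_A(A, B2, C2, Ak, Bk, Ck, Dk):
    """State matrix of F_l(P, K) for state-space P and K when D22 = 0,
    closing u = K y around y = C2 x + D21 w."""
    return np.block([[A + B2 @ Dk @ C2, B2 @ Ck],
                     [Bk @ C2, Ak]])

def stable_pole_count(Acl):
    """Number of closed-loop poles in Re[s] < 0."""
    return int(np.sum(np.linalg.eigvals(Acl).real < 0))
```

Comparing stable_pole_count on concrete (P, K) data with p + q is a direct way to observe the invariance asserted by the theorem.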

The class of all suboptimal Hankel norm approximants

In discussing suboptimal Hankel norm approximation, we started with a G(s), added zero rows and columns to it to make G_a(s), and then found a square Ĝ_a(s), with realization {Â, B̂, Ĉ, D̂}, such that E = G_a − Ĝ_a is a scaled all-pass function satisfying EE~ = ε²I. Here,

ε ∈ (σ_{k+1}(G), σ_k(G)).

The determination of all approximants is now straightforward. Let 𝒢(k) denote the set of real rational transfer function matrices such that the A matrix of an associated minimal state-variable realization has exactly k eigenvalues in Re[s] < 0. We claim that the set of all Ĝ_a(s) ∈ 𝒢(k) that satisfy

‖G_a(s) − Ĝ_a(s)‖∞ < ε    (1.6.41)

is given by

Ĝ_a(s) = F_l(P, U),   U ∈ H∞⁻,   ‖U‖∞ < ε⁻¹    (1.6.42)


Of course, H∞⁻ is the set of real rational transfer function matrices with all poles in Re[s] > 0. The claim may be proved as follows. Suppose that Ĝ_a(s) satisfies (1.6.41), and recall that P has a realization of the form

    [ Â    B1    B2  ]
P = [ C1   0     D12 ]
    [ C2   D21   0   ]

Minor manipulation of the earlier expressions for Â, B̂, Ĉ, etc. yields two equations, (1.6.43) and (1.6.44), relating the blocks of this realization to the realization of G_a(s) and its gramians, from which it is clear that P12 and P21 are nonsingular on the imaginary axis and have all their zeros in Re[s] > 0, while also P22(∞) = 0. By Theorem 1.6.3 of the previous subsection, (1.6.42) may then be solved for U with ‖U‖∞ finite. Now observe that the zero blocks in P yield

G_a − Ĝ_a = G_a − F_l(P, U) = F_l(W, U)    (1.6.45)

where W is obtained from P by replacing P11 by G_a − P11 and P12 by −P12, and the construction of the previous subsection ensures that ε⁻¹W is all-pass. Equivalently,

ε⁻¹(G_a − Ĝ_a) = F_l(ε⁻¹W, εU)    (1.6.46)

Since ε⁻¹W is all-pass and (1.6.41) holds, Part 4(d) of Theorem 1.6.2 gives ‖εU‖∞ < 1, that is, ‖U‖∞ < ε⁻¹.

Last, consider Ĝ_a = F_l(P, U) in the light of Theorem 1.6.4. The zeros of P12 and P21 lie in Re[s] > 0, and Â, the state matrix of P, has precisely k eigenvalues in Re[s] < 0. The zero blocks in P mean that

[P]22 = [W]22 = ε[ε⁻¹W]22    (1.6.47)

and so, the 2-2 block of an all-pass function being contractive,

‖P22‖∞ ≤ ε    (1.6.48)

and

‖P22 U‖∞ ≤ ‖P22‖∞ ‖U‖∞ < 1.    (1.6.49)

Since U ∈ H∞⁻ has no poles in Re[s] < 0, Theorem 1.6.4 now shows that Ĝ_a = F_l(P, U) has exactly k poles in Re[s] < 0, and the claim is established.

Obtaining some optimal Hankel norm approximations of a square transfer function matrix

In earlier subsections, we have obtained all reduced-order models which satisfy a Hankel norm error bound exceeding the infimum attainable. The problem of optimal (as opposed to suboptimal) Hankel norm approximation is to find Ĝ(s) of McMillan degree at most k which achieves equality in (1.6.18), i.e. yields a Hankel norm for the error of

‖G − Ĝ‖_H = σ_{k+1}(G)    (1.6.50)

The construction given earlier for the suboptimal approximation problem cannot be used here, as it involves the inverse of a matrix which, in the optimal case ε = σ_{k+1}(G), is clearly singular. The broad approach is, however, similar.

We shall first consider square G(s), and find a limited family of optimal approximations. Then, in later subsections, we shall remove the squareness restriction and modestly expand the family of optimal approximations. Finally, we will consider the construction of all optimal approximations.
