53 min

Topics: Review Of Last Lecture: LTI Systems And Convolution, Comment On Time Invariant Discrete Systems, The Fourier Transform For LTI Systems; Complex Exponentials As Eigenfunctions, Discussion Of Sine And Cosine V. Complex Exponentials As Eigenfunctions (Generally They Are Not), Discrete Version (Discrete Complex Exponentials Are Eigenvectors), Discrete Results From A Matrix Perspective

http://www.youtube.com/watch?v=K5_YieHrSNk

Instructor (Brad Osgood): Hey. Jeez. Man, I just got back and I'm tired. And welcome back, everybody. Let's see if we can get our heads back in the game. It's not so easy, somehow. I'm sure I speak for all of you. Anyway, I hope everybody had a very good holiday, either here or elsewhere, and we can now sprint to the finish. All right.

I want to wrap up today the discussion of LTI systems. There are a lot of topics, a lot of little things to do, a lot of big things to do, and like many things in this class, it goes off in a lot of different directions and is often the subject of very specialized courses. I was going to do a little more material on filters, on digital filters, today, but I decided not to do that. It's discussed in the notes, and of course we have an entire course on digital filters. So you'll have plenty of opportunity to see it. If you don't see it there, you'll see it elsewhere, worked into other courses.

So I thought I'd do some fairly general things, do a couple of sample calculations, and talk about the relationship, the connection, between LTI (linear time invariant) systems and the Fourier transform, which is the sort of important, fundamental, foundational information that I think everybody should know.

So I want to remind you where we finished up last time. Last time we got a pretty satisfactory answer about the general structure of linear systems in terms of the impulse response. So if L is the linear system, and we introduce the impulse response, a function of two variables separately, h(x, y) = L(δ(x − y)), then the basic result — which in the theory of distributions is the Schwartz kernel theorem, but for us is something you probably heard about when you had your first course in signals and systems — is that the output of the system is given by integrating the impulse response against the input. All right.

So if w(x) = Lv(x), then you actually get w(x) = ∫ from −∞ to ∞ of h(x, y) v(y) dy. All right. So once again, the output of the system is obtained by taking the input of the system and integrating it against the impulse response. It's a very satisfactory result.

Now, in the special case where we have an LTI system, the integration reduces to convolution. So let me remind you what an LTI system is. You say that L is time invariant, or shift invariant, if the following happens: if w(x) = Lv(x), then w(x − y) = L(v(x − y)). I write it in symbols, but it's easier to say in words: L is time invariant if a delay of the input produces the corresponding delay of the output. All right. So v is the input and w is the output; v(x − y) is the delayed form of v, w(x − y) is the delayed form of w, and the delayed input corresponds to the delayed output. Okay. In this case, the impulse response for an LTI system is a little bit simpler. Let me get it right: h(x) is L applied to the unshifted delta function, h(x) = L(δ(x)), and then by time invariance, h(x − y) = L(δ(x − y)), so that is the impulse response. And the action of the system is given by w(x) = ∫ from −∞ to ∞ of h(x − y) v(y) dy. Same form, right — you get the output of the system by integrating the input against the impulse response — but now the impulse response has a special form. It doesn't depend on x and y separately; it depends only on the difference. And we recognize this as a convolution: w(x) = (h ∗ v)(x). And in fact, this characterizes time invariant systems. That is to say, a linear system is time invariant if, and only if, it's given by convolution. All right. That's where we finished up last time.
And it's a very satisfactory state of affairs, as far as the structure of linear systems goes. Any linear system is given by integration against the impulse response, and it is time invariant if, and only if, that integration reduces to a convolution. So it's another indication of how fundamental convolution is in the whole theory, all right, just as an operation.

We're going to see how the Fourier transform comes into this in just a second, because — and this should be one of the great lessons of this class — anytime anybody mentions convolution, bells should go off in your head suggesting that you take the Fourier transform. But wait! We'll do that in just a minute. All of this is in the context of continuous time systems, but I did want to comment — as a matter of fact, just write down a simple example — that the same sort of considerations hold when you have discrete systems. All right. Now, any discrete linear system, remember, is multiplication by a matrix. If W = L(V) — and I'm thinking of W and V as vectors here — then this is given by multiplication by a matrix. We talked about that before: any linear operator on a finite dimensional space is given by multiplication by a matrix. And the definition of shift invariance, of time invariance, is the same as before, except this time you're shifting a discrete variable instead of a continuous variable. And again, a system is time invariant, or shift invariant, if, and only if, it is given by convolution — in this case, vector convolution. So L is an LTI system if, and only if, w = h ∗ v. Okay?
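As a quick numerical sketch of this — my code, not the lecture's; the vectors h and v here are made up for illustration — you can check in NumPy that periodic convolution commutes with shifts, which is exactly the discrete statement of time invariance:

```python
import numpy as np

# Hypothetical impulse response and input, chosen just for illustration.
h = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([5.0, 6.0, 7.0, 8.0])

def circ_conv(h, v):
    """Circular (periodic) convolution: (h * v)[m] = sum_n h[(m - n) mod N] v[n]."""
    N = len(h)
    return np.array([sum(h[(m - n) % N] * v[n] for n in range(N)) for m in range(N)])

def shift(v, k):
    """Delay v by k samples, periodically: (shift_k v)[m] = v[(m - k) mod N]."""
    return np.roll(v, k)

# Time invariance: delaying the input delays the output by the same amount.
w = circ_conv(h, v)
assert np.allclose(circ_conv(h, shift(v, 1)), shift(w, 1))
```

The assertion is the discrete version of "if v produces w, then the delayed v produces the delayed w."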

Now, in this case, h is the impulse response, v is the input vector, and w is the output vector. So again, h = L(δ), L applied to the unshifted delta function, and for the shifted version, L(δ_n)(m) = h(m − n), where δ_n is the delta function shifted to n. Now, it's interesting — I just wanted to share this example. I want to do a couple of calculations today so you'll feel comfortable with how these things work out. The matrix A that realizes the linear system has a special form in the case of a time invariant system. It's cute, and actually, it's very important in a lot of numerical calculations. So again, the operator is given by matrix multiplication. If we write the system as a matrix multiplication, say w = Av, where A is a matrix, then A has a special form for time invariant systems. Rather than try to give you the general theory here, let me just do an example so you see how it works.

All right. So let's take, e.g., a four by four system. I'm going to take h to be the vector (1, 2, 3, 4) — just a random example that I happened to work out in detail before I got here so I wouldn't make any mistakes. So if w = Av, which is also given by h convolved with v, the question is: what is A? All right. I'm telling you the system is given by convolution, h ∗ v, where h is this vector. So even though the system is given by matrix multiplication, the question is: what is the matrix? Is it clear what I'm asking here? Well, how do you find the matrix A? How do you find any matrix? You find the images of the basis vectors. All right. The columns of A are given by A applied to the basis vectors. The first basis vector is what we're calling, in fancy language, δ0; δ0 is (1, 0, 0, 0). So A(δ0) is the first column. The second column is A(δ1), where δ1 is (0, 1, 0, 0), if I use the language of delta functions instead of the language of linear algebra. The next column is A(δ2), where δ2 is (0, 0, 1, 0). And remember, there's always this issue when you're working in the context of discrete linear systems, the DFT, etc., that the index usually runs from zero to N minus one instead of one to N. So for δ0 the one sits in the zero slot, and the slots are numbered 0, 1, 2, 3, and so on. And the final column is A(δ3), where δ3 is the last basis vector, (0, 0, 0, 1).

All right, so how do I compute all these things? Well, I compute them by convolution, because by definition the system is given to you as convolution with the vector h. So A(δ0) is h convolved with δ0. Now, what is h convolved with δ0? Wait! Don't tell me, I know — it's h. Convolving with the unshifted delta function doesn't do anything to the vector. So that's (1, 2, 3, 4), written as a column. Okay. What about A(δ1)? It's h convolved with δ1. Now, what do you get if you convolve h with the shifted delta function? A shifted h: (h ∗ δ1)(m) = h(m − 1). Just like the continuous case. So what is that? Well, here's where you have to say something a little extra. If h is the vector (1, 2, 3, 4), what is h shifted by one? Now you have to use an assumption about h. Any time convolution comes into the picture — we haven't brought the DFT in yet, although we will, but any time any of that sort of stuff comes in — you always have to assume that your discrete signals are extended to be periodic. All right. So that it makes sense to consider h for values other than the indices 0, 1, 2, 3 — it makes sense to consider h defined for all integers, and you just keep repeating the pattern. So what is h convolved with δ1 as a vector — he looks at his notes to make sure — it's shifted by one, right? It's (4, 1, 2, 3). All right, make sure you see this, okay?

Again, you have to assume that h is extended to be periodic, and it's shifted down by one. So if it's shifted down by one, the four goes up top. Or you can think about it this way: the zeroth component here is h(−1). Right? δ1 convolved with h, at zero, is h(−1). But h(−1) is the same thing as h(3) because of the periodicity. And h(3) is the third component — remember, we're indexing 0, 1, 2, 3 — so that's four, and so on. Okay? What about the rest of them? Now you see what the pattern is. What is A(δ2)? That's h convolved with δ2. So that's h shifted by two, or this thing shifted one more. So what would this be? I ask. Pardon me?

Student:[Inaudible].

Instructor (Brad Osgood):Be bold.

Student:[Inaudible].

Instructor (Brad Osgood): Thank you! All right. Shift it down again: (3, 4, 1, 2). And finally, A(δ3) is h convolved with δ3; that's just h shifted by three, and that is (2, 3, 4, 1), right? Yeah. All right.

Now, again, those are the four columns of the matrix A. So what is the matrix A? What is the matrix A? Or simply: what is the matrix? Neo. The first column of A is (1, 2, 3, 4). The second column is (4, 1, 2, 3). The third column is (3, 4, 1, 2) — it's a four by four matrix. And the fourth column is (2, 3, 4, 1). All right.

So again, you can check that this is a different description of the system. The system is given by convolution, but it's also given by matrix multiplication, w = Av. That is, multiplying by the matrix A is the same thing as convolving with h.
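A small sketch of the same computation in NumPy — my code, not the lecture's, and the test vector v is made up. It builds the columns of A by convolving h against the shifted deltas, checks the result against the matrix written on the board, and confirms that multiplying by A is the same as convolving with h:

```python
import numpy as np

h = np.array([1, 2, 3, 4])
N = len(h)

def circ_conv(h, v):
    # Periodic convolution, indices taken mod N.
    return np.array([sum(h[(m - n) % N] * v[n] for n in range(N)) for m in range(N)])

# delta_k is the k-th standard basis vector; h * delta_k is h shifted down by k.
A = np.column_stack([circ_conv(h, np.eye(N, dtype=int)[k]) for k in range(N)])

# The matrix from the lecture: columns (1,2,3,4), (4,1,2,3), (3,4,1,2), (2,3,4,1).
expected = np.array([[1, 4, 3, 2],
                     [2, 1, 4, 3],
                     [3, 2, 1, 4],
                     [4, 3, 2, 1]])
assert np.array_equal(A, expected)

# Matrix multiplication by A is the same as convolving with h.
v = np.array([1, -1, 2, 0])
assert np.array_equal(A @ v, circ_conv(h, v))
```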

Now, this is a kind of cool matrix. If you look at this matrix, it's what's called a circulant matrix, in the biz. I think I actually mentioned this once before — someone asked a little bit about it. Circulant matrix. This is a special case of more general matrices called Toeplitz matrices. They come up in a lot of different applications in discrete systems. A circulant matrix is constant on the diagonals, and the columns are periodic, as they are in this case — the pattern just repeats, it cycles around, and each column is obtained from the previous column by a shift. Consequently, it's constant on the diagonals: all ones on the main diagonal, fours here, threes here, twos, twos, threes, fours, and so on. Okay. And it's called circulant. That sort of property, being constant on the diagonals — I'm a little hesitant to give you the general definition here, because I don't want to get it wrong, but there is standard terminology for the standard matrices that come up a lot in various applications. Typically it's Toeplitz matrices and circulant matrices, and circulant matrices are like Toeplitz matrices except they have the additional property that the columns are periodic, each one obtained from the previous one by a shift. Okay. Bob Gray, in our department, has a whole book on — well, he has whole books on a lot of things, actually, but there's a whole book in particular on Toeplitz matrices and their applications. So they come up a lot. We'll come back to this matrix a little bit later. It's kind of cool. And it's the sort of calculation you should be able to do: take the result in the continuous case, bring it over to the discrete case, realize what form it takes, and realize that it's not so different from what you're doing in the continuous case.
We set up the formalism this way just so the symbols and everything else would look, as much as possible, like the continuous case. It's nice. Okay.

Now, at long last, let's bring the Fourier transform back in for LTI systems. Okay. LTI systems mean convolution; bells should go off in your head, buzzers should go off in your pocket, who knows what else should happen, but whatever happens, you should think of the Fourier transform. Bring in the Fourier transform. All right, so with an LTI system we have convolution. So w = h ∗ v, where v is the input, w is the output, and h is the impulse response; h is fixed and v varies over different inputs. And if you take the Fourier transform, you get, of course, via the convolution theorem, that the Fourier transform of w is the Fourier transform of h times the Fourier transform of v, or as it is universally written in uppercase letters, W(s) = H(s)V(s). And in this context, the terminology — again, I'm sure you have heard it — is that H(s) is called the transfer function. Little h is called the impulse response; capital H is called the transfer function. You have to be a little careful here about whether you call the impulse response h(x) or h(x − y), I guess, but that's not important in this case.

Anyway, the standard terminology, which I am sure you have heard — I think we even used it back when we first started talking about filters and convolution — is that capital H is called the transfer function. It's also sometimes called the system function. Any other terms for this that anybody knows, just out of curiosity? Either the system function or the transfer function. Now, I want to point out something here, again in the spirit of the beautiful structure of linear systems. When we started talking about linear systems, I made the bold statement that the most basic example of a linear system is the relationship of direct proportionality: the output is directly proportional to the input. And I said, boldly, that any linear system is somehow a generalization of that — somehow you can trace the idea of direct proportionality into any linear system. And for LTI systems, this is staring you in the face. Because what this says is that for an LTI system, in the frequency domain, the system is exactly described by multiplication — exactly described by direct proportionality. Okay. In the frequency domain, the system really is given by the relationship of direct proportion. In the time domain, it's a little more complicated; in the time domain, it involves convolution. But in the frequency domain, the relationship between the input and the output really is given by direct proportion, the most basic relationship that underlies linearity. All right.

And of course, again, part of the point of this course is that the time domain and the frequency domain are in some sense equivalent. You can pass back and forth between the two. They're different pictures of the same thing, different views of the same phenomena, and you can use one to study the other. So I just want to point this out because I think it's another example of how beautifully unified and coherent this subject is: when you talk about linearity, time invariance, convolution, this whole idea of direct proportion comes out very strongly. It's not just almost there, it's there! It's right in front of you. Now, the importance of bringing the Fourier transform to LTI systems is a fact that would not be obvious if you didn't use the Fourier transform: complex exponentials are eigenfunctions of all LTI systems. Let me write that down and then I'll explain what I mean. This is the last general fact we're going to talk about for linear systems, and LTI systems in particular. So, the last great fact on LTI systems: complex exponentials are eigenfunctions. Now, this is actually an extremely important result, but we are not going to take it anywhere, I've got to say, because again, it goes off in a lot of different directions, and you see this more in special applications — I would be surprised if you didn't see it in special applications. But for us, I just want to make sure you understand where it comes from, and why, and how it happens, and what the basic definition is. We're not going to do any particular application with this, because to do one application is to do dozens of applications, probably, and you'll see these more in other courses. It comes up in quantum mechanics and in a lot of various aspects of signal processing, but I just want to make sure you see the basic fact.
So here's what I mean. L(v) is given by h ∗ v — it's a time invariant system — and let's call the output w. W is the output, v is the input, and if I take the Fourier transform I get W(s) = H(s)V(s). Now, what happens if I input a complex exponential into the system? Input v(x) = e^{2πiνx}, for any ν. Okay. And the question is, what is L(v)(x)? I think sometimes people call this the frequency response, because you're inputting a pure frequency, a pure harmonic, but I tend not to use that term.

Anyway, what is it? Well, first of all, what is the Fourier transform of e^{2πiνx}? Ladies and gentlemen, let's work in the frequency domain — that is, work with the relationship between the Fourier transform of the output, the Fourier transform of the input, and the transfer function. All right, so the Fourier transform of e^{2πiνx} is δ(s − ν). Okay. It gives you the shifted delta function. And so the output in the frequency domain is W(s) = H(s)δ(s − ν). But now there's the fundamental sampling property of the delta function: H(s)δ(s − ν) = H(ν)δ(s − ν), a constant times δ(s − ν). All right. Now take the inverse Fourier transform on both sides — go back to the time domain. If I go back to the time domain, then H(ν) just comes out; it's along for the ride because it's a constant, and the inverse Fourier transform of the shifted delta function is a complex exponential again, so I get w(x) = H(ν)e^{2πiνx}. All right. In other words, remember, the input was e^{2πiνx}, and the output is a multiple of that — namely, the value of the transfer function at ν. So, i.e., L(e^{2πiνx}) = H(ν)e^{2πiνx}. That says exactly that the complex exponential e^{2πiνx} is an eigenfunction with eigenvalue H(ν). That's exactly what that statement means. Okay. This says exactly that e^{2πiνx} is an eigenfunction of any LTI system. Now, it doesn't always have the same eigenvalue, because the eigenvalue depends on the particular transfer function. The eigenvalue, for a given LTI system, is the value of the transfer function at the frequency ν.
The eigenvalue is H(ν), the value of the transfer function at ν. All right. That's a fundamental fact. And again, some people interpret this — I think I probably even put this in the notes — as a further indication of how natural and important convolution and time invariance are; or, from the fact that complex exponentials are eigenfunctions, either a further statement of how important shift invariant, linear time invariant systems are, or of how important complex exponentials are. The fact that they enter into the theory of linear systems in such an important way — and it is very important, as I'm sure you've seen in various classes, when you're analyzing a linear system, to know the possible eigenvalues and eigenvectors, and to know that there are eigenvalues and eigenvectors. I say eigenfunction here, instead of eigenvector, because I'm thinking of functions instead of vectors — that is, functions of a continuous variable. We'll do a discrete version shortly. Same idea — it's the same terminology as in linear algebra, just applied to the continuous case. Okay.

So any LTI system has the complex exponentials as — actually, as it turns out — a basis of eigenfunctions, and that turns out to be very important. That allows you to diagonalize the operators associated with LTI systems and understand how they operate in a much more natural way. But again, as I say, to do one application is to do a lot of applications, so I would rather let that wait for other occasions. But it's also important to realize that, you know, we often talk about how you can work with complex exponentials even when you're really thinking about real signals — you take the real part or the imaginary part, properties of complex exponentials carry over to properties of sines and cosines, and so on and so on. But this is a case where that's not true. All right. That is, it's the complex exponentials that are eigenfunctions of linear time invariant systems, not the sine and cosine separately. So let me show you that. It's not true without additional assumptions — there are some cases where it's true, but generally it's not true that sine and cosine are themselves eigenfunctions of an LTI system. Watch what happens here. Let's take cosine, for example. So, e.g., take v(x) to be cos(2πνx). And the question is, what is L(v)(x)? If v is an eigenfunction, L(v)(x) has to be a multiple of v(x). All right. Well, is it or not? We can calculate this by expressing the cosine in terms of complex exponentials. So L(cos(2πνx)) = L(½(e^{2πiνx} + e^{−2πiνx})). L applies to the sum: the ½ comes out, and L is linear, so L applied to the sum of the complex exponentials is the sum of L applied to the complex exponentials. And we know what happens in that case. So this is ½(L(e^{2πiνx}) + L(e^{−2πiνx})).
All right. And the complex exponentials are eigenfunctions. So this is ½(H(ν)e^{2πiνx} + H(−ν)e^{−2πiνx}) — the exponential at frequency −ν, that is e^{−2πiνx}, picks up the eigenvalue H(−ν). And now, you're stuck. All right. You are stuck. Because unless you have additional assumptions, you can't combine these terms. You have H(ν) and H(−ν), and without further assumptions, they don't have anything to do with each other. Okay. So you are now stuck without further assumptions. So it's not an eigenfunction, without further assumptions.
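Here's a quick numerical illustration of getting stuck — a sketch with a made-up impulse response, not anything from the lecture. For a discrete system, the cosine input comes back as something that is not a constant multiple of itself:

```python
import numpy as np

N = 8
n = np.arange(N)
# Hypothetical real impulse response, chosen so that H(nu) has a nonzero phase.
h = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0])

def circ_conv(h, v):
    # Periodic convolution, indices mod N.
    return np.array([sum(h[(m - k) % N] * v[k] for k in range(N)) for m in range(N)])

c = np.cos(2 * np.pi * n / N)   # input: cos(2*pi*nu*x) sampled, with nu = 1
w = circ_conv(h, c)

# If c were an eigenvector, w would be a constant multiple of c; it is not.
assert not np.allclose(w, (w[0] / c[0]) * c)
```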

Now, there are assumptions often made. Actually, one of the most natural assumptions, which almost gets you there, but not quite, is to assume that h, the impulse response, is real — which is a natural assumption, as it will be real in most cases. All right. If h is real, then what symmetry does the Fourier transform have? If h is real, then H has the symmetry H(−ν) = H(ν)‾, the complex conjugate. That's the basic symmetry of the Fourier transform of a real signal. Okay. So let's go with that assumption. If I plug this in, then L(cos(2πνx)) = ½(H(ν)e^{2πiνx} + H(ν)‾ · (e^{2πiνx})‾) — note that e^{−2πiνx} is the conjugate of e^{2πiνx}. So that is the real part: Re(H(ν)e^{2πiνx}). Now, you're still not quite there. Right? You're still stuck. If H(ν) were real, then you'd be okay. Right? But I'm not assuming H(ν) is real, just that H satisfies the symmetry property. You're still stuck in the sense that it's not an eigenfunction — it's just not, so don't say that it is. Don't make me mad. All right? There's a little bit more you can do, though, and it's the common thing to do: write H in terms of its polar form, H(ν) = |H(ν)|e^{iφ}, the magnitude times a phase. Then H(ν)e^{2πiνx} = |H(ν)|e^{iφ}e^{2πiνx} = |H(ν)|e^{i(2πνx + φ)}. Okay.

So: e to the i times (2πνx plus φ) — I'll get it, I'll get it, don't panic — here we go. Okay. So the real part of this does give you a cosine, but it's a phase-shifted cosine. The real part of H(ν)e^{2πiνx} is going to be |H(ν)| times a cosine with a phase shift: |H(ν)|cos(2πνx + φ). Okay. Phase shifts always throw people off. All right. So it's still not an eigenfunction, right, but it's as close as you're going to get. This says L(cos(2πνx)) = |H(ν)|cos(2πνx + φ). All right, the cosine is not an eigenfunction, but it's close. Okay? Yeah.
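The phase-shifted-cosine formula is easy to check numerically in the discrete setting. This is a sketch with an assumed real impulse response h (my choice, not the lecture's); the claim being tested is that convolving with a real h sends cos(2πνn/N) to |H(ν)|cos(2πνn/N + φ), where φ = arg H(ν):

```python
import numpy as np

N = 8
n = np.arange(N)
nu = 1
h = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0])  # real, hypothetical

def circ_conv(h, v):
    # Periodic convolution, indices mod N.
    return np.array([sum(h[(m - k) % N] * v[k] for k in range(N)) for m in range(N)])

Hnu = np.fft.fft(h)[nu]          # transfer function value H(nu)
mag, phi = np.abs(Hnu), np.angle(Hnu)

w = circ_conv(h, np.cos(2 * np.pi * nu * n / N))
# Output is the same cosine, scaled by |H(nu)| and phase-shifted by phi.
assert np.allclose(w, mag * np.cos(2 * np.pi * nu * n / N + phi))
```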

Student:[Inaudible] even.

Instructor (Brad Osgood): Well, then you're in business. The more assumptions you can put on this, the better, because then you know you'll be okay. All right?

There are extra assumptions you can put on here, but all I'm saying is that if you don't put those extra assumptions on, it's just not the case that sines and cosines are separately eigenfunctions for LTI systems. Only the complex exponentials — that's really interesting. I mean, it's almost, sort of, the fundamental difference between the complex world and the real world: for any LTI system, the complex exponentials are eigenfunctions, but the real and imaginary parts are not eigenfunctions without extra assumptions. Okay.

All right, let's finish up. Let's do the discrete case, the discrete version of this. Again, the same considerations hold. The discrete case is w = L(v) = h ∗ v, but everything here now is a discrete signal. Again, the Fourier transform of w is the Fourier transform of h times the Fourier transform of v — everything is discrete here. Okay. And again, discrete complex exponentials are eigenfunctions. Maybe in this case I should call them eigenvectors, I suppose. All right.

So, for example, what if I input v = ω^k, for any k — the discrete complex exponential, the vector we've used many times. Well, the Fourier transform of ω^k is — if you recall; I don't recall, so I have to write it down — N times δ_k. All right. There's that extra damn factor of N in there that comes in. What are you going to do? It just is; it's a pain in the neck, but there it is.

So again, what is L(v)? To find L(ω^k), look in the frequency domain — same argument as before. And I get W = H times N δ_k — let me write it like this, without the indices. So that's equal to H(k) times N δ_k, because of the sampling property of the discrete delta function. Same property, same damn thing — it's the same damn thing over and over again. It's the same argument.

So back in the time domain, it's the same thing: w = H(k)ω^k. Okay. That is to say, i.e., L(ω^k) = H(k)ω^k — L applied to the discrete complex exponential is H(k) times the discrete complex exponential.
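The discrete eigenvector statement can be verified directly. A sketch (the random impulse response is my choice, just for illustration): for every k, convolving h with the vector ω^k returns H(k) times ω^k, where H is the DFT of h:

```python
import numpy as np

N = 8
n = np.arange(N)
h = np.random.default_rng(0).standard_normal(N)  # arbitrary impulse response

def circ_conv(h, v):
    # Periodic convolution: w[m] = sum_j h[(m - j) mod N] v[j].
    return np.array([np.sum(h[(m - n) % N] * v) for m in range(N)])

H = np.fft.fft(h)  # transfer function values H(k)
for k in range(N):
    omega_k = np.exp(2j * np.pi * k * n / N)   # discrete complex exponential
    # Eigenvector relation: L(omega^k) = H(k) * omega^k.
    assert np.allclose(circ_conv(h, omega_k), H[k] * omega_k)
```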

Now, an interesting thing happens here. Whereas in the continuous case you had an infinite family of complex exponentials — e^{2πiνx}, where ν can be anything, a continuous variable between minus infinity and plus infinity — here the powers cycle, right: ω^0, ω^1, and so on up to ω^{N−1}, and then they start repeating. All right. So what you have — and this is maybe a difference, a special feature of the discrete case — is that 1, ω, ω², up to ω^{N−1} form a basis of eigenvectors for any linear time invariant system. They're independent, they're [inaudible], they're each eigenvectors, and they actually form a basis — almost an orthonormal basis. They're not quite orthonormal, right, because the lengths are √N, not one. Damn. All right. They form a basis of eigenvectors for any LTI system. Now, I want to make sure you understand, again, what this says and what this doesn't say. For any LTI system, these discrete complex exponentials — 1, that's the vector with all ones in it, then ω, ω², up to ω^{N−1} — are eigenvectors, and they form a basis of eigenvectors. The eigenvalues are different; the eigenvalues depend on the system, because the eigenvalues are the values of the transfer function at the index k, and that's going to be different for different systems — H is going to be different for different systems. To define a discrete LTI system is to give the h, and that gives the eigenvalues in these terms. But the vectors themselves — 1, ω, ω², up to ω^{N−1} — are eigenvectors for any LTI system. Another way of putting this is: any LTI system — a system given by convolution, in the discrete case — is diagonalized by the complex exponentials. All right. This is another important property; makes for good quals questions. All right.
It’s an important property of discrete complex exponentials that they form a basis of eigenvectors for any LTI system.
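The diagonalization statement can also be checked numerically. This is a sketch under assumed names (the random h and the matrix F are illustration, not lecture notation): collect the discrete complex exponentials as the columns of a matrix F; then F inverse times A times F should be diagonal, with the transfer function values on the diagonal, for any circulant A.

```python
import numpy as np

# Assumed setup for illustration: a random impulse response h, and its
# circulant system matrix A[m, j] = h[(m - j) mod N].
N = 4
rng = np.random.default_rng(1)
h = rng.standard_normal(N)

A = np.array([[h[(m - j) % N] for j in range(N)] for m in range(N)])

# column k of F is the discrete complex exponential omega^k = (omega^{kn})_n
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N)

# transfer function values H(k) = sum_n h[n] omega^{-kn}
H = np.array([np.sum(h * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

# F^{-1} A F should be the diagonal matrix diag(H(0), ..., H(N-1))
D = np.linalg.inv(F) @ A @ F
```

Note that the columns of F are orthogonal with length square root of N, matching the “almost orthonormal” remark above; dividing F by root N would make it unitary.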

All right, now, I’ll do one more calculation for you. Let’s take that system we had before. Let’s do a 263 problem in a slightly different way. All right. Let’s take, again, w is equal to h convolved with v, in the discrete case, where h was the vector one, two, three, four. All right. And we found that this was given by matrix multiplication, w is equal to A times v, where A was this matrix, right. A was the matrix — I don’t have to write it down again. One, two, three, four, that’s the first column, one, two, three, four, and I start shifting: four, one, two, three; then three, four, one, two; he said, looking at his notes desperately, then two, three, four, one. All right. That’s the matrix. All right. Now, eigenvectors of the system, therefore, are eigenvectors of A. I should say eigenvectors and eigenvalues of the system are eigenvectors and eigenvalues of the matrix A. All right. Let’s find them. Now, you know how to do that by matrix methods, where you look at A minus lambda times the identity, figure out the determinant, and, you know, figure out the roots of the characteristic polynomial. No, no, that’s the thing where you just plug it into MATLAB and let MATLAB chug away, and so on, and so on. Right. So you can do this. But let’s solve this problem by using what we know. Okay. So let’s do this via the theory of LTI systems. The eigenvalues are given by values of the transfer function. All right. So we need the transfer function. The eigenvalues are H of zero, H of one — remember I’m indexing from zero — H of two, and H of three. Those are the eigenvalues. And, actually, I already know what the eigenvectors are. The eigenvectors, of course, are the complex exponentials. All right.

So how do I find those numbers? Well, H is the Fourier transform of little h, of course. All right. So I can calculate this directly. The Fourier transform of little h is the sum from k equals zero to three of the values h of k times omega to the minus k. All right. That’s the discrete Fourier transform of h. All right. Now, h is easy. h is given explicitly. This is the sum from k equals zero to three of k plus one times omega to the minus k. Right, h of zero is one, h of one is two, then three, four, so it’s k plus one times omega to the minus k. All right. Write this out. Write this out in terms of vectors. I will do it. Okay. It’s a sum of vectors, k plus one times omega to the minus k — omega to the zero is all ones, then omega to the minus one, omega to the minus two, and so on — and here’s what you get. Very easily, very quickly, you get this equals, I’ll write it out for you, ten, minus two plus two i, minus two, and minus two minus two i. All right. Just by evaluating the sum, all right, very, very easy, you can do it by hand. Okay. That’s what you get. And that tells you exactly the eigenvalues. The eigenvalues of the matrix A are given by ten, minus two plus two i, minus two, and minus two minus two i. Okay. It drops like a piece of ripe fruit. And in fact, I wasn’t so sure my — well, I was sure of myself, of course, but I decided to check this, and if you put this into MATLAB and ask for the eigenvalues of that matrix, sure enough, this is what you get. All right. So it works like a charm. Once again, how does it work? An LTI system given by convolution — the convolution is also realized by matrix multiplication, this is the matrix, therefore eigenvectors of the system are eigenvectors of the matrix.
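Here is the lecture’s example checked numerically in Python (the code itself is a sketch, not part of the lecture): compute H of k directly from the definition of the discrete Fourier transform, build the circulant matrix, and compare with a general-purpose eigenvalue routine.

```python
import numpy as np

# The lecture's example: h = (1, 2, 3, 4), N = 4, omega = e^{2 pi i/4} = i.
h = np.array([1.0, 2.0, 3.0, 4.0])
N = 4
n = np.arange(N)

# transfer function H(k) = sum_n h[n] omega^{-kn}
H = np.array([np.sum(h * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])
# H is (up to roundoff) 10, -2+2i, -2, -2-2i

# the circulant matrix from the lecture: columns (1,2,3,4), (4,1,2,3),
# (3,4,1,2), (2,3,4,1), i.e. A[m, j] = h[(m - j) mod 4]
A = np.array([[h[(m - j) % N] for j in range(N)] for m in range(N)])
eigs = np.linalg.eigvals(A)
# eigs is the same set of four numbers, possibly in a different order
```

No characteristic polynomial needed: four inner products against the complex exponentials give the eigenvalues of the four-by-four matrix.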
But I know the matrix corresponds to an LTI system, so the eigenvectors are the powers of the complex exponential: one, omega, omega squared, omega cubed, all right. And the eigenvalues are the values of the corresponding transfer function at the indices zero, one, two, three, all right.

So all I have to do is find the transfer function to find the eigenvalues of the matrix. And to find the transfer function, I calculate the discrete Fourier transform of h directly from the definition of the discrete Fourier transform. All right. It’s just the components of h times omega to the minus k; add up those vectors, and you get this vector, and the entries here happen to be the eigenvalues of the system, okay, of the matrix. It’s cute. All right. It’s pretty cute. All right. We will now leave the theory of linear systems, as much more as there is to do, and there’s plenty more to do. I want to finish up the course with a discussion of how two dimensional Fourier transforms work, so we’ll start on that on Wednesday. As always, please sort of read around and read ahead in the section. Again, our pursuit is going to be to try to make it look as much like the one dimensional case as possible. And that will mean more to you if you sort of read ahead a little bit and familiarize yourself with how the formulas look, so I can jump back and forth more easily. Okay. See you on Wednesday.

[End of Audio]

Duration: 53 minutes

Source: http://see.stanford.edu/materials/lsoftaee261/transcripts/TheFourierTransformAndItsApplications-Lecture25.html Labels: Linear Systems and Optimization, The Fourier Transform and its Applications
