1 hr 10 min

Topics: Reachability, Controllable System, Least-Norm Input For Reachability, Minimum Energy Over Infinite Horizon, Continuous-Time Reachability, Impulsive Inputs, Least-Norm Input For Reachability

http://www.youtube.com/watch?v=l_orrP7AaOU

Instructor (Stephen Boyd):– email – for example, if you don't live in the Bay area, you should email us to let us know when you want the final emailed to you. That's the first announcement. And I guess, even for people in the Bay area, sometimes traffic is a big pain or something, in which case this is an easier option. Second announcement is homework nine – we'll post the solutions Thursday, so Thursday evening, after homework nine is due. And I think we've now responded to maybe 10, and growing, inquiries. I guess there is a problem involving – the title is something like time compression equalizer, does this ring a bell? Vaguely. You look worn out. No? Okay. It's just early. Okay. All right. So we fielded a bunch of questions about the convolution; we didn't put the limits in the sum, in the convolution, but you're to interpret, I think it's W and C, as 0 when you index outside the range. So a bunch of – maybe 10 people pointed this out to us or something like that. An important announcement: sadly, I have to leave tomorrow morning to go to Austin. I don't like doing that, but I have to go. So I'm off to Austin, and that means that Thursday's lecture, which is the last lecture for this class, will actually be given this afternoon. And I think it's Skilling Auditorium at 4:15 this afternoon, but whatever the website says, that's what it is. And that's on the first page, the announcements page. So that's where. If you are around this afternoon and want to come, please do come. You should know that it is every professor's worst nightmare – maybe second or third worst, but it's way up there on the list – that you give a tape-ahead and no one comes. This would cause you to give a lecture to no one. It's never happened, but it could. So at least, statistically, some of you should come. My guess is someone will come. We've had long discussions about this.
Several colleagues have suggested that we should do tape-aheads from wherever we are, sort of like a Nova show or something like that. So you could say, hi, I'm here in Rio and we're gonna talk about the singular value decomposition, or just something like that, but we haven't actually approached SCPD to see if they can pull that off. But I do want to do that sometime. Anyway, this afternoon is a tape-ahead. Please come, statistically. So as long as some of you come. My guess is that some people will come anyway. All right. Any questions about last time or administrative stuff? Oh, I have to say that one of the problems is, because I'm actually in between this lecture and then Thursday's lecture, which is this afternoon, I also have to give a talk at NASA Ames, so I'm gonna have to leave my office hours early today, around noon. I have to be walking out the door by noon. So I feel quite bad about that. In fact, I'll even be gone when you get your final. That might be a good thing. But I'll be back Saturday morning. I'll be on email and I'll be in contact, let's put it that way. And I'll be back Saturday. And we have a couple of beta testers taking it; I think one in about an hour and a half. So someone is gonna debug it for you. It's already been debugged pretty well. Okay. Any questions? Then we'll continue on reachability. So last time we looked at this idea of just reachability. Reachability is the following state transfer problem. You start from zero and the question is, where can you go? So it's a special state transfer problem. You start from zero and you want to hit some point in state space at time T. And we said that R sub T is the reachable subspace. This is a subspace. If you can hit a point in T seconds, or epochs, you can certainly hit twice the point, and it's a subspace: if you can hit one point or another, you can hit the sum. So it's a subspace. And it's a growing family of subspaces. So we'll know exactly what the family is.
Actually, we already know for discrete time. For discrete time it's interesting, but it's just nothing but an application of the material in the course. It's basically this. R sub T is the range of this matrix, CT; this is the controllability matrix at time T. I think I mentioned last time that this matrix you will see in other courses. I mean, it comes up in, for example, scientific computing, in which case RT is actually called a Krylov subspace. I may have mentioned that last time, but [inaudible] you will see that this matrix doesn't come up in just this context. It comes up in lots of others. So this matrix here, and I think we discussed it last time, as you increase T it gets fatter and fatter; in fact, every time you increment time, the matrix gets fatter by the width of B. That's the number of inputs, which is M, is what we're using here. So what happens is you have a matrix: you start with B – that's where you can get, the range of B, in one step – then the range of B and AB is where you can get in two steps, and that was phrased very carefully, and I guess I shouldn't have said it so quickly. When I said the range of B and AB, it means the range of the matrix [B AB]. So it's the linear combinations of columns of B plus columns of AB. That's where you can get in two steps. Okay. Now we noted, by the Cayley-Hamilton theorem, once you get to N steps, A to the N is a linear combination of I, A, A squared, up to A to the N minus 1, and so the rank of CT, or the range, does not increase once you hit above N. So for example, the range of C N plus 1 is also the range of CN. So it doesn't grow. Okay. Now that means we have a complete analysis of where a discrete-time system can get starting from zero in T epochs. The answer is just this. You can get to the range of CT for T less than N, and then after that, once you hit N, it's the range of C. And C is just CN. That's called the controllability matrix. And the system is called controllable if CN is onto.
So in other words, if its range is all of R N. So that's the idea. And so you can say – you get something that's not totally obvious, it's this – you have the following. In a discrete-time system, any state you can reach in any number of steps can be reached in T equals N steps. Now, that doesn't mean that's a good idea. We will see why very shortly, but nevertheless, as a mathematical fact, it says that if you can't reach a state in N steps then you can't reach it ever. So giving you more time to hit the state is not gonna help at all. Okay. And the reachable set, that's the set of points you can hit with no limit on time, is simply the range of C. It's the range of this matrix. Okay. Now a system is called controllable or reachable – now, unfortunately there are people who distinguish between reachable and controllable, sadly, so sometimes controllable means something slightly different, but don't worry about it for now. A system is controllable if you can reach any state in N steps or fewer, and that's if and only if this matrix C is full rank. So that's the condition. And we'll just do a little stupid example here, it's this. You have X of T plus 1 is this matrix, 0 1 1 0, times X of T, plus 1 1 times U of T. Now, we can just look at this and know immediately what it does. It does absolutely nothing but swap the states. That's the swap matrix, I mean, if you ask me to describe it in English, that's a swap matrix. It simply swaps X1 and X2. The input, and this is the important part, acts on both states the same way. So the point is there's a symmetry in the system. It's just a stupid simple example. There's a symmetry in the system, and it basically says that whatever you can do to one state – and I'm arguing very roughly now – it will do the same thing to the other. So that's a hint right there that there's gonna be some things you can't get to. We'll wait and see what they are. The controllability matrix is B, that's AB, and sure enough, [B AB] is not onto. It's singular.
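As a quick numerical check of the blackboard example – this is just a sketch in Python/NumPy, my own rendering and not part of the course materials – you can build the controllability matrix for the swap system and confirm it is singular:

```python
import numpy as np

# The swap-matrix example: A swaps x1 and x2,
# and the input enters both states identically.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[1.0],
              [1.0]])

# Controllability matrix C = [B, AB] for n = 2 states.
C = np.hstack([B, A @ B])

# Rank 1 < n = 2, so the system is not controllable; the reachable
# set is the span of (1, 1), exactly as the symmetry argument predicts.
print(np.linalg.matrix_rank(C))  # 1
```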
And the reachable set is all states where X1 is equal to X2. So no matter what you do here, no matter how you wiggle, you will never reach a state that doesn't have the form of a number times the vector 1 1. It just can't happen. And it's obvious here; you certainly didn't need controllability analysis to see this here. And to be blunt about it, that's often the case in almost all examples. I mean, sometimes you don't know, you actually have to check, there'll be something, and in fact, not only that, but most lack of controllability comes down to symmetries like this. They can be much more sophisticated in large mechanical systems and things like that, or after the fact you'll realize that something in your actuator configuration is symmetric – and of course, you only see it after the fact. We'll see actually there's a much more interesting notion of controllability that we're gonna get to, a quantitative one. Okay. Now let's look at general state transfers. So general state transfers, that's a general problem. We're gonna transfer from an initial to a final time, from an initial state to a final state, and of course this is the formula that relates the final state to the initial state, and of course, this is completely clear: that's simply the dynamics propagating the initial state forward in time. That's nothing else. So this is in fact what would happen if you did nothing, if U were zero over the interval. This is the effect: I stack my inputs in a big M times (TF minus TI) vector and I multiply it by this controllability matrix here. And this gives you the effect of the input, how it changes your final state. Okay. So what this says is this equation holds if and only if – I'll take X desired to be the state you want X of TF to be – X desired minus this is in the range of that, and there's your answer. So it actually makes a lot of sense. It's actually quite beautiful. It basically says something like this.
If you want to know if you can transfer from an initial state to a desired state, then it's really the same as the reachability problem, except that what you want to reach is an interesting state. You don't want to reach X desired. You want to reach X desired minus what would happen if your initial state were propagated forward in time. That's what it comes down to. Okay. So this is simple, but it's quite interesting. So I guess another way of saying it is something like this. If you want to transfer from X of T initial to some X desired, it says don't aim at X desired. What you do is pretend you're starting from zero and aim for this point, which takes into account the drift dynamics. Okay. So that's kind of what you want to do. Okay. So general state transfer reduces to a reachability problem. And now, I believe last time somebody asked the following question. We talked about reachability and your ability to get from one state to another, let's say over some fixed time interval. And the question is, if we made the time interval longer, can you get to more points? Certainly if the initial state is zero, that's true. If the initial state is not zero, that's false. It's just wrong. So it is entirely possible in general reachability to be able to hit a state from one initial state in four steps, but then in five steps to be unable to hit it. Okay. That's entirely possible. It does happen, and so that's entirely possible. Now, there's a very important special case. Some people think of it as the dual of reachability, and sometimes people call this controlling – I mean, if you distinguish between reaching and controlling – that is, driving a state to zero. So sometimes the problem of taking a state that's non-zero and finding an input that manipulates the state to zero is called regulation, and sometimes it's just called controlling. I can tell you the background there.
The basic idea in regulation is that X represents some kind of – what we call X here represents an error. It's an error from some operating condition. So you have some chemical plant, you have a vehicle, you have whatever you like; X equals zero means you're back in some state that you want to be in, in some target state, or bias point in a circuit, or trim for an aircraft or something like that, and then regulating or controlling means there's been a wind gust or something's happened, you're not in that state and you want to move it back to this standard state, which is zero. This equilibrium position, which is zero. So that's why it's called the regulation problem or control problem or something like that. And here you can work out exactly what that is; here it turns out this is just zero, so it depends on whether or not – and of course, that's a subspace, so I can remove the minus sign here. If I give you a non-zero state, let's just even just check that. So how would we do the following? I give you a system, I give you A and B and I give you a non-zero state and I ask, "What is the minimum number of steps required to achieve X of T equals zero?" That's the minimum-time control problem or whatever you want to call it. How do you solve that? So this is what you're given. I'm gonna give you A, I'm gonna give you B and I'm gonna give you this, X zero. How do we do it? How do I minimize T for which X of T is zero? Let's handle a simple case. If X zero is zero, then we're already done before we started, and the answer is T equals zero in that case. Okay. How can you do it in one step? What do you do?

Student:[Inaudible]

Instructor (Stephen Boyd):It's interesting. What you want to do here is the following. You want to check whether A to the T times X0 is in the range of B, AB, up to A to the T minus 1 B. That's it. I think. Make sense? This is what you need to check, and you simply increment T now to check. You try T equals 0, we just did that. You try T equals 1, so you hit A X0; you want to check if that's in the range of this. Okay. Now, if you test this and you get out to T equals N and the answer is still no, what do you say?
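In code, the incremental test just described might look like this – a hypothetical helper in Python/NumPy, not from the lecture; it grows the controllability matrix one block at a time and checks whether A to the T times x0 has entered its range:

```python
import numpy as np

def min_time_to_zero(A, B, x0):
    """Smallest T with x(T) = 0, i.e. A^T x0 in range([B, AB, ..., A^(T-1) B]).
    Returns None if impossible (no need to test past T = n, by Cayley-Hamilton)."""
    n = A.shape[0]
    if np.allclose(x0, 0):
        return 0                      # already there: T = 0
    Ct = np.zeros((n, 0))             # controllability matrix, grows each step
    v = np.asarray(x0, dtype=float)
    for T in range(1, n + 1):
        Ct = np.hstack([B, A @ Ct])   # now Ct = [B, AB, ..., A^(T-1) B]
        v = A @ v                     # now v = A^T x0
        # v is in range(Ct) iff appending it does not raise the rank
        if np.linalg.matrix_rank(np.hstack([Ct, v[:, None]])) == np.linalg.matrix_rank(Ct):
            return T
    return None                       # cannot be driven to zero, ever

# On the swap example: x0 = (2, 2) can be zeroed in one step,
# while x0 = (1, -1) can never be zeroed.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[1.0], [1.0]])
print(min_time_to_zero(A, B, np.array([2.0, 2.0])))   # 1
print(min_time_to_zero(A, B, np.array([1.0, -1.0])))  # None
```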

Student:[Inaudible]

Instructor (Stephen Boyd):That cannot be done. Actually, because of this term, that actually requires a little bit of argument, but that's correct. So that's the basic idea. We have a homework problem that's actually a more sophisticated version of this. I think. Good. Okay. All right. Okay. Now, again, just applying all the stuff we know, because this is nothing but applied linear algebra. There's nothing interesting here. Let's look at the least-norm input for reachability. That's actually much more interesting. So let's assume the system is reachable – although, now that you know about the SVD it wouldn't matter if it weren't, but let's assume it is. And let's steer X of 0 to an X desired at time T with inputs U of 0 through U of T minus 1. I'll stack them in reverse time. That's just so I can use CT this way. So I stack them in reverse time and I get X desired is this matrix – that's a fat matrix – times this, which is my control, my controls stacked, or you could actually call this a control trajectory. That's a good name for that vector. I want to point out one thing about that vector. It runs backwards in time. That's just indexing. I could've run them forward in time, too, but then I would've had to turn CT around to start with A to the T minus 1 B, A to the T minus 2 B, down to B. But everyone writes this as B, AB, A squared B. So time runs backwards in this vector. Okay. Now, in this case C is square or fat and it's full rank, so it's onto, and we want to find the least-norm solution of that. The norm of this, by the way, is the sum of the squares of the norms of the components. That's true actually for any vector. If I take a big vector and I chunk it up, if I divide it up any way I like, the sum of the norms squared of the partitioned elements is the norm squared of the original vector. So that's what this is, and you just want to get the one that minimizes this. This makes a lot of sense. Some people would call this the minimum energy transfer. That would be one name. That's, generally speaking, a lie.
It generally has nothing to do with that. It's extremely rare to find a real problem where the actual goal is to minimize the sum of the squares of something. They do come up, but they're very rare. Okay. Well, this is nothing. We know how to do this. So that's called the least-norm or the minimum energy input that effects the given state transfer. And if you write it out in terms of what CT is, you get something very interesting. CT of course is B, AB, A squared B and so on, and when you write out CT transpose, you get B transpose on top of B transpose A transpose and so on, and when you put all the terms together you get a formula that just looks like that. There it is. So that's the formula. And again, there's nothing here. You're just applying least-norm from week three in the class. That's nothing else. But it's really interesting. First of all, notice that it's just a closed-form formula for the minimum energy input that steers you from zero to a desired point in T epochs, and it just looks like that. And everything's here. The only thing in here is a matrix inverse, and you might ask, "Why do you know that that matrix is invertible?" What makes that matrix invertible? This matrix in here is nothing but CT CT transpose. It's a fat matrix multiplied by its transpose. That is non-singular if and only if CT is full rank. And in that case, it corresponds to controllability. But in the case where it is controllable, CT dagger is in fact this whole big thing here. By the way, it's really interesting to see what some of these parts are. Let's see what they are. There's actually one very interesting thing: you see something like this. There's sort of a transpose here, and the really interesting part is that it's running backwards in time. So we don't have any more time left in the class, so I'm not going to go into more detail here, but it's just an interesting observation.
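The closed-form formula can be exercised numerically. Here's a sketch in Python/NumPy (the example system is my own choice, not the one from the lecture) that computes u_ln = CT transpose times (CT CT transpose) inverse times x_des, and verifies by simulation that it hits the target:

```python
import numpy as np

def least_norm_input(A, B, x_des, T):
    """Least-norm input steering x(0) = 0 to x_des at time T.
    Assumes the system is reachable in T steps, so CT @ CT.T is invertible."""
    n, m = B.shape
    # CT = [B, AB, ..., A^(T-1) B]; the stacked input runs backwards in time
    CT = np.hstack([np.linalg.matrix_power(A, t) @ B for t in range(T)])
    u_stack = CT.T @ np.linalg.solve(CT @ CT.T, x_des)  # CT^T (CT CT^T)^{-1} x_des
    # First m entries of u_stack are u(T-1), last m are u(0); flip to forward time
    return u_stack.reshape(T, m)[::-1]

def simulate(A, B, us):
    x = np.zeros(A.shape[0])
    for u in us:                      # us[t] is u(t), forward in time
        x = A @ x + B @ u
    return x

# A double-integrator-like system (my choice, purely for illustration)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
x_des = np.array([1.0, 1.0])
us = least_norm_input(A, B, x_des, T=2)
print(simulate(A, B, us))  # hits [1. 1.]
```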
By the way, this is related to things you may have seen in other contexts; in filtering, in signal processing, you may have seen matched filters, which is basically where the optimum receiver is sort of the same as the original signal but running backwards in time. If you've seen that, this is the same thing. It's identical. So this is not exactly unheard of. Okay. Now, this is the minimum input. By the way, these are the things that I showed on the first day; as I recall, you were completely unimpressed. So this is where we're just making inputs to some, I don't know, 16-state mechanical system to take it from one state to another in a certain amount of time. They were pretty impressive. We're just using this formula. Absolutely nothing else. Just this. And all I was doing was varying T to see what the input would look like. To see what it would require to take you to a certain state. This is much more interesting. We can actually work out the energy, the actual 2-norm squared, of this least-norm input. Now, if you work out what that is – I mean, in general, the energy of the least-norm input is actually going to be a quadratic form. And the quadratic form is very simple. It turns out when all the smoke clears – I'll just go through all this. When the smoke clears, it's this. It's a quadratic form. This makes perfect sense – let me explain what this is. This is the minimum energy, defined as the sum of the squares of the inputs. By the way, this is the minimum energy. So this is the energy if you apply the input to hit that target state if you do the right thing. You are welcome to use inputs that use more energy than this, and many exist. Well, actually, unless C is square, in which case if you hit it, there's only one way to hit it – and, oh, I'm sorry, C being square means there's a single input and T equals N. If C is square there's only one way to hit it, so all inputs are minimum energy.
But if C is fat – and real simple – there's lots: you can go on a joyride and burn up a lot of energy and still arrive at X desired. That's it. This is the minimum. It's a quadratic form. And that quadratic form looks like this, and it's actually quite pretty. Inside here it's a sum of positive semi-definite matrices. Now, I know they're positive semi-definite because each term looks like this. It's A to the tau B times A to the tau B transpose, because this part is just that. But whenever you take a matrix and multiply it by its transpose, you get a positive semi-definite matrix. That's what you get. So it's a sum of positive semi-definite matrices. Well, sums of positive semi-definite matrices are positive semi-definite. And in fact, you can even say this, and as a matrix fact, it's correct: when you increment T you add one more positive semi-definite term to this matrix – positive definite once T is bigger than N, or at some point – and that makes the matrix bigger. And I mean now in the matrix sense. So this is a matrix here, which is getting bigger with T, and I mean in the matrix sense. That means, by the way, the inverse is getting smaller. The inverse is getting smaller. That means that the minimum energy required to hit a target in T seconds, as a function of T, can only go down. Well, it could be the same in there. It could be the same. Actually, normally it goes down. All right. So it's actually quite interesting here. It says that we now have a quantitative measure of how controllable a system is, or reachable. Reachability is sort of this platonic view that says, "Can you get there at all," and this one is much more subtle. It's less clean, but it says basically this. It says, oh, I can get to that state, no problem.
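The minimum-energy quadratic form just described can be computed directly; a sketch in Python/NumPy (hypothetical example system, my own choice) that also checks the monotonicity argument – each extra step adds a positive semi-definite term, so the matrix grows, its inverse shrinks, and the minimum energy is non-increasing in T:

```python
import numpy as np

def min_energy(A, B, x_des, T):
    """Minimum input energy to reach x_des from 0 in T steps:
    x_des^T ( sum_{tau=0}^{T-1} A^tau B (A^tau B)^T )^{-1} x_des."""
    n = A.shape[0]
    W = np.zeros((n, n))
    M = np.eye(n)                     # M = A^tau, updated each iteration
    for _ in range(T):
        W += M @ B @ B.T @ M.T        # add one PSD term A^tau B (A^tau B)^T
        M = A @ M
    return float(x_des @ np.linalg.solve(W, x_des))

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # my example, not the lecture's
B = np.array([[0.0], [1.0]])
x_des = np.array([1.0, 1.0])
energies = [min_energy(A, B, x_des, T) for T in range(2, 8)]
# Minimum energy can only go down (or stay the same) as T increases
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:]))
print(energies)
```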
I can get there, but what it'll do is tell you if, for example, getting there is something that takes a huge amount of input – a very large input is required to get there – and for all practical purposes, you can say, "I can't get there." So that's the idea. Then we can do beautiful things. I can ask you things like this. I can give a target state and I could say that the energy budget is 10, and I can say, "What is the minimum number of steps required to hit this target and stay within my input energy budget?" I could ask you that question, and you could answer it by incrementing T until this goes below 10. One possibility is this will never go below 10. In which case, you announce – well, you can announce several things. You can announce that that is too little energy for me to get there, no matter how long you let the journey be. So that's one option there. You can actually solve a lot of very sophisticated problems. So what this does is give you a quantitative measure of reachability, because it tells you how hard it is. It also allows you to say things like, "What points or directions in state space are expensive to hit," and expensive means they require a lot of control. Cheap means you can get there with very little control. And it's actually quite interesting. These are ellipsoids, of course, and they basically show the set of points in state space that are reachable at time T with one unit of energy, if that's a one. Actually, let's go through the math first and then I'll say a little bit about how this works. So as I said before, if I have T bigger than S, then this matrix – that's a matrix inequality – is bigger than that one, because the difference between the two is the sum of a bunch of terms of the form A to the tau B times its transpose, for tau between S and T. So that's what happens here. Now, you know that if one matrix is bigger than another, the inverse actually switches them. So the inverse is less than the inverse here.
Now we're done, because if this matrix is less than that, then anytime you put Z transpose on the left and Z on the right, this inequality becomes valid. It's an ordinary scalar inequality and it works. And that says it takes less energy to get somewhere more leisurely. So that's the basic idea. It all makes perfect sense. Now, I should mention something here: for general state transfer, the analog is false. Absolutely – or is it? Ooh. Wow, and I put that intensifier up in front, didn't I. Well, I think it's false. But all of a sudden I had this panic that – I think it's false. Let's just say that. That's what I think. I think it's false. I retract my intensifier at the beginning. It's probably false. There we go. We'll leave it that way. So I think with general state transfer, it's false. Okay. All right. I'm gonna have to think about that one for a minute. I'm pretty sure it's false. Okay. Let's just look at an example. So here's an example. It's a 2 x 2 example, because that's the only state space I can draw anyway, so here's a 2 x 2 example. And here's some system. It increments like this. There's an input, and I want to hit this target state 1 1. I just made it up. There's no significance to any of this. It's all just made up. And what this shows is the minimum energy required to hit the target point 1 1 as a function of time. And you see a lot of interesting things here. You can see that if you hit it in two samples it costs you an energy of over nine. If you take three, you can get there with almost half the energy. I guess it's half the energy if you double – if you say, instead of two steps, do it in four – and so on, and you can see. And it goes down. Now, what's interesting is it appears to be going to an asymptote here, which means that to get to that point, with infinite leisure, it still costs energy. Now, I can explain that. That's actually reasonably easy to explain. If a system is stable – does someone have a laptop open?
So anyway, no, never mind, you don't even need a laptop. Can someone work out the eigenvalues of this for me? I need a volunteer. Can you do it? Do you have a pen? So he's working on the eigenvalues, which he'll get back to us on in a minute. I put him on the spot. We'll let you work on that for a bit and then – it's just because you have to write out a quadratic or something like that. So the conjecture is that this is actually – well, no. Cancel the eigenvalue thing. What I was going to say is, if this is stable – if a system is stable and you have to get somewhere, you actually have to fight the dynamics to take it out to some place, because if you take your hands off the controls – this is very rough – if you do nothing, the state will just decay back to zero. So you're swimming upstream when you're doing reachability for a system that is stable. Okay. Now, if it's unstable, let's talk about reachability. Let's say a system is violently unstable, so basically, all of the eigenvalues for a discrete-time system have magnitude bigger than one. So what that means basically is, if you do nothing, the state is gonna grow step by step anyway. Now, let's talk about what happens when I give you more and more time to hit a state. What's gonna happen? If I give you, like, a hundred steps and you have a system that's highly unstable, or just unstable – if I give you a hundred steps to hit somewhere, what happens is all you have to do is push X of 0 away from the origin. All you do is you push X away from the origin the tiniest bit and then take your hands off the controls and you let the drift, which is the unstable dynamics, bring the system out to where you want to go. Does this make sense? So you kind of work with the drift – there, you're not fighting the stream, it's actually on your side for reachability. Does everybody see what I'm saying?
So what that suggests is that for an unstable system, as you give more and more time to hit a target, the energy is gonna go down; in fact, it's gonna go down to zero. So we'll get to that now. It is very hard to hit a target point like that. It is very easy to hit a target point like that. It's very cheap to hit this one and very expensive to hit that one. So the controllability properties are not isotropic in this case. Okay, so let's examine this business of the energy going to zero. As a function of T, that is a sequence of increasing positive definite matrices. And I mean increasing in the matrix order. So this is a sequence of positive definite matrices which is getting smaller. Now, a sequence of positive definite matrices that are getting smaller at each step converges, just the way a sequence of non-negative numbers that are monotone and decreasing converges. This converges to a matrix. That matrix has a beautiful interpretation. It's called P here; that's actually called the controllability Gramian, this matrix – and actually it's the inverse of the Gramian, but it doesn't matter what it's called. So this matrix comes up, and actually it's beautiful. It's a quadratic form that tells you how hard it is to hit any point in state space with infinite leisure. That's what this matrix tells you. And by the way, if the system is violently unstable, P can be 0. That's extremely interesting. So it takes, basically, zero energy to hit anywhere in a system that is violently unstable. Let me just do a simple example. Let's take B to be I, and let's let A be 1.01 times the identity. It's a very simple system. U just adds to the state. The dynamics is you just times-equal the state at each step by 1.01. So basically it says, "If you do nothing, the state just grows by 1 percent each step." That's all that happens. It's a violently unstable system. All the eigenvalues are outside the unit disc.
They're all equal to 1.01, and now it's completely obvious: the longer you take – you name any point you want to hit, and what you do is, if you take T samples, you go back by 1.01: you find out what input is required to hit that point, and you take that point and divide it by 1.01 to the T, and that's the U that you apply as the first input. That's a sequence of inputs that just kicks it out and then lets the dynamics take it there. As T gets longer and longer, the energy of those inputs will go to zero. And – by the way, if P is zero, it does not mean that you can hit any point with zero energy. The only point you can hit with zero energy is the zero state. So when you interpret Z transpose P Z, you'd say that that's the energy required to hit it with infinite leisure. It's really a limit. When this is zero, it basically says that you can hit that point, not with zero energy, but with arbitrarily small energy, by taking a longer and longer time interval. That's what it really means. Okay. Now, it turns out that if A is stable then this matrix is positive definite. That follows from up here. If a matrix is stable, well, what it means is its powers, that's A to the tau, are going to zero geometrically. In fact, they go to zero at least as fast as the spectral radius – the largest magnitude of an eigenvalue of A – to the T. So that means this is a converging series. This thing converges to some positive definite matrix. The inverse of a positive definite matrix is positive definite, and you have this. So if A is stable, you can't get anywhere for free. But if A is not stable, then P can have a non-zero null space. A non-zero null space means just what we were just talking about: you can get to a point in the null space of P using inputs with energy as small as you like. So that's it. And all you do is just kick it a little bit and let the natural dynamics take you out where you want to go.
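The B = I, A = 1.01 times identity example can be checked numerically; here's a sketch (Python/NumPy; the target point is my own arbitrary choice) showing the minimum energy to hit a fixed target decaying toward zero as T grows:

```python
import numpy as np

n = 2
A = 1.01 * np.eye(n)        # violently unstable: all eigenvalues outside the unit disc
B = np.eye(n)
z = np.array([3.0, -4.0])   # an arbitrary target point (my choice)

def min_energy(T):
    # W = sum_{tau=0}^{T-1} A^tau B (A^tau B)^T; here A^tau B = 1.01^tau I, so
    # W = (sum of 1.01^(2 tau)) I and min energy = |z|^2 / sum of 1.01^(2 tau)
    W = sum(np.linalg.matrix_power(A, t) @ B @ B.T @ np.linalg.matrix_power(A, t).T
            for t in range(T))
    return float(z @ np.linalg.solve(W, z))

for T in (1, 10, 100, 500):
    print(T, min_energy(T))
# The energy decreases with T; since the geometric sum diverges, it goes
# to zero: you can hit the target with arbitrarily little energy.
```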
You have to be careful doing this, obviously, but this is the way it works. So this is actually used in a lot of things. For example, it's used in a lot of what people call statically unstable aircraft. So if you look at various sorts of modern fighter aircraft, some of the really bizarre ones will actually have the wings swept forward slightly and it just doesn't look right. It just looks like it's flying backwards, actually, and it just doesn't look right, and sure enough, it's not right, because it's open-loop unstable. That's what they mean by statically unstable. Most other ones are stable. Commercial ones are, at least so far, stable. I think they're probably gonna stay that way, but who knows. So with forward-swept wings, or a statically unstable aircraft, you might ask why anyone would build an airplane which, basically, sitting at a trim position in some flight condition, is unstable. So let's think about what this means. It means things like: your nose goes up, and instead of there being a force or moment that pushes your nose down, when your nose goes up, actually, there's an up torque and your nose goes up faster. First of all, why on earth would you ever do this? That is the first question. And this is just for fun. Someone give me a guess. By the way, I made a guess and it was totally wrong when I talked to someone who knew what they were doing.

Student:

[Inaudible]

Instructor (Stephen Boyd):Yes, that’s the idea. You want to get a nice snappy ride. Okay. And you do. You get a very – as you can imagine you do. Right. You pop your elevator down a little bit or whatever it is and your nose is now going to go very fast. So is the idea that you can just do it with a small U so it’s efficient? Okay. So what’s the objective? Well, I assumed it was – I don’t know. I actually finally talked to someone who knew what they were talking about, at least on this topic, and they told me in fact why you do this. The main reason, actually, has nothing to do with efficiency or anything like that. Obviously. You want small control surfaces for smaller radar cross sections. So the reason you want small control surfaces, obviously if you’re flying at Mach two or something like that, you’re not really worried about energy efficiency or anything like that. What you want is a small control surface because control surfaces reflect radar stuff. So that’s the real reason. And I actually found out how they work. They have, like, five backup control systems because, let’s remember, you flip up, but you better be very careful with this, right, and you flip up with a tiny, very small, little subtle control surface that just goes like that. You flip up, and when you get to where you want, you better have just the right input to make you stabilize there and all that kind of stuff because if you lose it, I guess in this case, it’s all over in three seconds. In under three seconds, whether the pilot likes it or not, the explosive bolts go and you’re out. So that’s the way it works. And the way it works is I think that there were four redundant control systems. So I guess if the first one fails, the second one is all ready to go, if the fourth one fails, you’re out the top whether you push the button or not. And that’s the way this is and they actually do this. And actually now there’s a move to do this for some chemical processes, too.
By the way, there’s a name for a chemical process that’s statically unstable. What would be the common name for it?

Student:[Inaudible]

Instructor (Stephen Boyd):Yes, it’s called an explosive. Yes, that’s correct. So I don’t know if these things are good or bad or whatever, but that’s the – and people are doing it. They just said, no, we operate this process at an unstable equilibrium point because it’s more efficient in terms of the overall operation. So that’s it. All of these obviously require active control to make sure everything’s okay. Right. Everything will become – that’s the whole point of an unstable system. Things will become not okay very quickly. There was a question back there.

Student:No.

Instructor (Stephen Boyd):Maybe no? Just stretching. Okay. All right. So. Okay. Let’s look at the continuous time case and see how that works. It’s a little bit different but there’s nothing here you wouldn’t expect. And in fact, this allows me to kind of say something that I should’ve said earlier but that’s good. Now I get the excuse to say it. To make a connection between the conditions – there is a question.

Student:[Inaudible]

Instructor (Stephen Boyd):Right.

Student:[Inaudible]

Instructor (Stephen Boyd):Really. It’s a homework. I can’t do the homework, generally, just like that. I had a discussion once. Some people came to my office and I started explaining something, 10 minutes, dead end. I tried again, dead end. And then after 25 minutes they said, “Do you think it’s fair to assign homework that you can’t do?” And I said, “Yes, absolutely, because at one point, clearly, I could do it, and at that point, it obviously was trivial.” So all right. So let’s answer your question. What was it? I can try, but I’m just – I can’t do it. I’m not embarrassed in the slightest, but go on.

Student:[Inaudible]

Instructor (Stephen Boyd):That’s a good problem. I wonder who made it up. No, I’m kidding. All right. Okay. So you’re given an initial state and you want to steer it, not to the origin, but to within some norm of the origin with what, with a –

Student:Minimum amount of input.

Instructor (Stephen Boyd):– with a minimum amount of input. That’s a great problem. Is it continuous time?

Student:[Inaudible]

Instructor (Stephen Boyd):Okay. Fine. All right. So I don’t know. Can you solve that? I guess the answer is no. That was a rhetorical question. Let’s talk about it. Right. It’s safer for me in case I can’t solve it. So what happens is you want to – let’s fix a time period. Okay. So then it’s a linear problem. Right. As to where you can get. So I guess it’s sounding, to me, like a bi-objective problem. Am I not wrong? It’s sounding to me like one. Right. So the final state is what? Let’s just say if you go T seconds, it’s T epochs, it’s A to the T X0 plus and then something like C T times – I’ll call it U, but everyone needs to understand U is really a stack of the inputs in reverse time. Is that cool? This is actually a sequence of U. The whole trajectory. Right. That’s what you got and then what did you want to do? The condition is that this should be less than some number. What was the number I gave?

Student:.1.

Instructor (Stephen Boyd):.1. Good. A nice number. There we go. So we have that. And what did you want to do? You wanted to minimize the norm of U. And then your point is that we never did this, right? Is that your point?

Student:[Inaudible]

Instructor (Stephen Boyd):It seems to be. So we didn’t do this. That’s true. You can look through the notes and you won’t find this anywhere. Any comments?

Student:[Inaudible]

Instructor (Stephen Boyd):What?

Student:[Inaudible]

Instructor (Stephen Boyd):Yes, thank you. Okay. So yeah. We didn’t do this. Absolutely true. This is a bi-objective problem. This is a perfect example of how these things go down in practice, right, because basically, you go back and look at like week four, it was all clean. It was, like, “Yes, let’s minimize AX minus Y with small X and then we drew beautiful plots and all this kind of stuff, right?” Here, it’s clouded by the horrendous notation of the practical application. In this case, the practical notation is steering something from here to there so it doesn’t look as clean. But it is the same. So you make a plot here trading off – I don’t remember how we did it before, but you would trade off these two things like that and there’s an optimal trade off curve here. There we go. I know one thing to do, you could set U equals zero, there, I got one. You could do nothing and run up a very small bill here. So how do you solve this? How do you solve this? Anyway, I’ve already said enough. Are we okay now? So now what happens is you make the trade off curve here and then on this plot what do you look for? I find the point here, which is 0.1 and I go up here and I’m looking for that point and that will solve it, right? Are you convinced?
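The trade-off-curve recipe just described can be sketched as a small computation. Everything below is hypothetical (a random A, B, and X0, not from the homework problem): we sweep a regularization weight mu over the regularized least-squares solutions to trace the trade-off curve between final-state norm and input norm, then pick the smallest-norm input whose final state lands within 0.1 of the origin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: steer x(0)=x0 toward the origin in T steps.
# Final state is x(T) = A^T x0 + C u, where C = [B, AB, ..., A^{T-1}B]
# and u stacks the inputs in reverse time, as in the lecture.
n, m, T = 4, 1, 10
A = rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, m))
x0 = rng.standard_normal(n)

C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(T)])
b = np.linalg.matrix_power(A, T) @ x0   # the free response A^T x0

# Sweep mu from large to small: each mu gives one point on the
# optimal trade-off curve between ||x(T)|| and ||u||.  The first
# (largest) mu that meets ||x(T)|| <= 0.1 yields the smallest ||u||.
best_u = None
for mu in np.logspace(6, -6, 200):
    u = -np.linalg.solve(C.T @ C + mu * np.eye(m * T), C.T @ b)
    if np.linalg.norm(b + C @ u) <= 0.1:
        best_u = u
        break

residual = np.linalg.norm(b + C @ best_u)
```

This is exactly the week-four machinery — regularized least squares tracing a bi-objective trade-off — dressed up in the steering notation.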

Student:

Yeah.

Instructor (Stephen Boyd):Okay. So that’s it. All right. So it’s true. You didn’t do that before. But we did things that allowed you do it. So. Okay. Are you happy now? Okay. Good. Okay. Let’s do continuous-time reachability. So how does this work? Well, it’s actually in some ways trickier and in some ways it’s actually much simpler. It’s gonna be interesting, actually. So here’s the way it works. Actually, in some ways it’s gonna be uninteresting. That’s the interesting part about controllability in the continuous time case. Okay. So we have X dot is AX plus BU and the reachable set at time T is actually now an integral and this, it’s parameterized by an infinite dimensional set. It’s the set of all possible input trajectories you could apply over the time period zero to T. Absolutely infinite dimensional. Okay. Now, it turns out that this subspace is super simple. It’s just this. It’s actually much simpler than the discrete time case. In a discrete time case you can get weird things like this state you can hit it in five steps, but not four. This state you can hit in seven, but not three. You can get all sorts of weird stuff. I mean, all the weirdness stops. Once you hit N steps, anything you’re ever gonna be able to hit, you can hit. That’s starting from zero in the discrete time case. In the continuous time case, it just bumps up to anything you’re ever gonna be able to hit, you can hit. You can hit anywhere, you can hit it in one nanosecond, at least according to the model. So it’s basically this. You form the matrix B, AB, up to A to the N minus 1 B, that’s the controllability matrix. And it basically says if this matrix is full rank, this set is all of RN for any positive T. And in continuous time, it says any place you can hit, any point you can reach in any amount of time, you can actually reach infinitely fast. That’s what it says. And this makes perfect sense. You have to have your input act over a smaller, and smaller time.
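The rank test just described can be sketched in a few lines. The double-integrator system below is a made-up example, not one from the lecture:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack B, AB, ..., A^{n-1}B into the controllability matrix."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Hypothetical example: a chain of two integrators driven by one input
# (x1 is position, x2 is velocity, u is acceleration).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Full rank means the reachable set is all of R^2 for any positive T.
C = controllability_matrix(A, B)
rank = np.linalg.matrix_rank(C)
```

Here C = [B, AB] has rank two, so this system can reach any state — and, per the continuous-time result, arbitrarily fast, at the cost of arbitrarily large inputs.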
And it really couldn’t have been otherwise. I mean, it would’ve been really weird if there was a state here you could reach in three seconds, but not two. That would’ve been kind of weird because you’d think, “Well, like, what exactly happened?” And in fact, because that’s a subspace, its dimension is an integer, so had this other thing happened, it’d be, like, you know, at T equals 2.237 the dimension of the reachable set would’ve jumped to three or four. And you think now, “What on earth would allow you, all of a sudden, at some time instance to manipulate the state into some other dimension?” I mean, it makes no sense at all. So in fact, it kind of had to be this way. So this is it. So that’s the result. And we’ll show it a couple of different ways. Actually, there’s a bunch of ways to connect it up here to the discrete time case and see how it works. Now, one way to see that you’re always in the range of C is simple. Let’s start from zero. E to the TA is a power series, but I can use Cayley-Hamilton to back-substitute: for powers of A starting at N, N plus 1 and so on, I can back-substitute smaller powers of A. And I’ll end up with this, it says that basically E to the TA, for sure, for any T, is a polynomial in I, A, up to A to the N minus 1, period. A polynomial in A of degree less than N. Okay. Now, X of T is just this integral, but now I’m gonna plug that in and I get this thing and now I switch the integral and the sum and I get the following. It’s the sum from I equals one to N of this. But that is just a number. You could actually work out how these are exactly, but it doesn’t really matter for us because that’s a number and that’s our friend the controllability matrix. So what this says is if you have a continuous time system, no matter what you do with the input, and you start from zero, you will never leave the range of the controllability matrix. Ever.
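The Cayley-Hamilton claim above can be checked numerically: e to the TA should lie in the span of I, A, up to A to the N minus 1. A sketch with a hypothetical random A (the coefficients are found by least squares, so the residual should be essentially zero):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Hypothetical random A; by Cayley-Hamilton, e^{tA} is a polynomial
# in I, A, ..., A^{n-1} for every t (with t-dependent coefficients).
n = 4
A = rng.standard_normal((n, n))
t = 0.7

# Express vec(e^{tA}) in the basis {vec(A^0), ..., vec(A^{n-1})}.
basis = np.column_stack(
    [np.linalg.matrix_power(A, k).ravel() for k in range(n)])
target = expm(t * A).ravel()
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)

# If the claim holds, the representation is exact up to roundoff.
residual = np.linalg.norm(basis @ coeffs - target)
```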
Now, we’re gonna have to show the converse which is that any point in the range of the controllability matrix can be reached. First we’ll cheat a little bit and we’ll do that with impulsive inputs. If we’re gonna use impulsive inputs we have to distinguish between zero minus and zero – well, T minus and T plus whenever T is a time when there’s an impulse input. So let’s just say before the impulse, we’re at zero, and we apply an impulse, which is distributed across the inputs by a constant vector F, that’s F1 through FM, multiplied by this K-times-differentiated delta function. That’s what it is. And here, the Laplace transform of that is S to the K times F. The Laplace transform of the state is SI minus A inverse B times S to the K F. I’ll do a series expansion on this, I think that’s called a Laurent expansion. Did I say that at the time? I don’t think I did. No, I didn’t think I did, but that’s what it is. I think we used it to do the exponential. So if I expand this, I take out the powers that are going to multiply the S to the K and I get things like this. A bunch of them look like this and let’s look at this very, very carefully. When I take the inverse Laplace transform these correspond to violent impulses in X of T. This S inverse is gonna be the first one. That’s sort of like a step term. This is all the stuff that happens between zero minus and zero plus. This is what happens right after zero plus. It makes perfect sense. It says that if you apply an input differentiated K times, it has an immediate effect on the state and the effect is to move it to A to the K B F. But now, you know how to transfer the state to anything in the range of C because if I make an input that looks like this, it’s a delta function times F0 up to a delta function differentiated N minus 1 times, times F N minus 1, and if I apply this, then X of 0 plus is C times this vector and now we’re done.
Now, this says that at least using impulsive inputs, I can reach anything in zero time. That’s what this says. So that’s the picture there. And the question is can you maneuver the state anywhere starting from X equals zero. Is the system reachable? If not, where can you get it? Well, you can kind of figure out what it is, but to kind of do some of the calculations we can actually work out what it is. You work out the controllability matrix. It’s B, AB, A squared B and you get this matrix here and you look at it for a little bit and you’ll quickly realize it’s rank two. All right. Let’s move on to a much more important topic, which is least-norm reachability in the continuous case. It’s gonna be very similar, except it’s gonna be kind of interesting now because it’s gonna be that we’ll have this possibility of actually effecting a state transfer infinitely fast. And that’s gonna come out of this. Let’s see how that works. That’s your minimum energy input. If you have X dot is AX plus BU and you seek an input that steers X from 0 to X desired and minimizes this integral here. Now, this is not anything we did before. In fact, this has got a norm. People would call this, by the way, the two norm – just the norm squared of U. Okay. But this is not anything you’ve seen before and when this was discrete time, U was sort of a stacked version and it was big, possibly, but it was finite dimensional. That’s an integral; we’re in the infinite dimensional case here. Actually, it’s not anything you need to be afraid of. Some of you, depending on the field you’re in, will have to deal with infinite dimensional things. It might even just be in continuous time or something like that. My claim is if you actually understand all the material from 263, none of the infinite dimensional stuff has any surprises whatsoever. Absolutely none. I mean, a few details here and there, some technical details, everything we did has an analog. And a simple, elementary one.
Now, people dress it up and make it look very fancy to justify, I don’t know, just to make it look fancy, right, but you’ll see the concepts translated, for example: instead of calling something symmetric you’ll have a self-adjoint operator. That’s the other thing. You’re then welcome to call a linear transformation an operator, which sounds fancy by the way. Or some people think of it as fancy. So you can talk about a linear operator and you can find out, for example, that a symmetric one can be diagonalized. There are some things that get more complicated, but if the operator is what’s called compact, then it’s gonna be exactly the same. It’s gonna look exactly the same. Something like the SVD also works, at least for compact operators. I’m just mentioning this because some of you will go on – if you ever have to do that, I mean, it should be avoided of course, dealing with these things, but if you find you’ve already chosen or are too deep into a field where these infinite dimensional things do appear, don’t worry because I claim if you understand 263 you can understand all of that just with some translations. There are a few additional things that come up that you don’t – you’ll have continuous spectrum and things like that, but otherwise it’s fine. Has anyone actually already encountered these things? I think there’s a lot of areas in physics where you bump into these things, so okay. All right. This is your first foray into that. So let’s just discretize the system with an interval T over N. Okay. And later we’re gonna let N go to infinity so that’s what we’re gonna do. So we’re actually not gonna look at first over all possible input signals. We’re gonna look at input signals that are constant over consecutive periods of length H which is T over N. So that’s what we’re gonna do. So we’re not solving the problem. So we’ll let them be constant and we’ll just apply our various formulas from various things.
It turns out to be exactly what we had before. Now, it’s finite dimensional and this is now the controllability matrix of the discretized system. And remember, these have formulas, like, AD is E to the H A and BD is this integral here. Okay. And the least-norm input – now, this is all finite dimensional so there’s no hand waving, nothing. It’s week four of the class. The discrete least-norm input is given by this expression here. Now, if I go back and express this in terms of A using these powers of these things, after all, AD is an exponential and powers of exponentials is just the same as multiplying the thing by that, you get something kind of interesting. What happens is BD turns into T over N times B, so you get the following. That’s this expression here. That’s this first expression here. As N gets big, that converges to something that looks like that. Now, the sum is nothing but a Riemann sum for an integral and the integral is that. Now, you put these together, in other words, you take this thing and then multiply by the inverse of that. Notice that the N conveniently drops out. That just goes away. So does the T for that matter. And I get a formula, and this is in fact different, it’s this, it’s B transposed times this matrix exponential times the inverse of that integral. By the way, if you compare this to the discrete time case you will see that it is essentially the same, well, you have to change sums to integrals and things like that. Now, what’s really cool about this thing is the following. Now that it’s completely and horribly marked up and no one can read any of it, but imagining that you could read it, the cool part is this matrix is non singular as long as T is positive. I can make T 10 to the minus nine and this matrix will be non singular. By the way, it’s gonna be non singular, but if you integrate something – again, you have to assume some reasonable time scale and things like that, if I integrate something from zero to 10 to the minus 9, that integral is gonna be very small.
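The limiting least-norm input can be sketched numerically. The system, target, and time horizon below are all made up; the code approximates the reachability Gramian Q(T) = ∫₀ᵀ e^(τA) B Bᵀ e^(τAᵀ) dτ by a midpoint Riemann sum, applies the input u(t) = Bᵀ e^((T−t)Aᵀ) Q(T)⁻¹ x_des, and checks that it actually lands on the target.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical lightly damped oscillator and target state.
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
x_des = np.array([1.0, -1.0])
T, N = 2.0, 500
h = T / N
taus = (np.arange(N) + 0.5) * h   # midpoint quadrature grid

# Reachability Gramian by numerical quadrature.
Q = sum(expm(tau * A) @ B @ B.T @ expm(tau * A).T * h for tau in taus)
Qinv_xdes = np.linalg.solve(Q, x_des)

# Simulate x(T) = int_0^T e^{(T-tau)A} B u(tau) dtau with the
# least-norm input; it should land on x_des.
xT = np.zeros(2)
for tau in taus:
    u = B.T @ expm((T - tau) * A).T @ Qinv_xdes
    xT += expm((T - tau) * A) @ B @ u * h
```

Because the same quadrature grid is used for the Gramian and the simulation, the final state matches x_des essentially exactly; with a different integrator it would match to within discretization error.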
So that says that this inverse is going to be absolutely huge. And so what this says is oh, I can steer the input, I can steer the state from zero to a desired state in any number of steps. Sorry. In any amount of time I can do it very, very quickly, but it’s gonna take a huge input. That’s what this says. It all makes perfect sense. It all goes together and it makes absolute perfect sense here. Now, in the discrete time case, you might want to know why it breaks down and what breaks down is real simple and it’s for a simple reason. Let’s see if I can say this and not sound like an idiot. The problem in the discrete time case is the time is discrete. This is the problem. Here, time is continuous. I can make it as small as I like. But here, what happens is I’ll decrease T. When T equals N, I’m still safe by Cayley-Hamilton, but the minute I drop T below N, then there will be – I can take T down and at some point, this matrix can become singular, in which case, the inverse doesn’t work. By the way, if I replace the inverse with a dagger, and make that a pseudo inverse, you get something very interestingly related to our famous homework problem. If I put a dagger in here, I’ll get something really interesting. I’m gonna get the least-norm input that will get you as close as you possibly can get to the desired target. Did this make sense? So that’s what C dagger will do. And that’s not the dagger from lecture four. That’s not CC transpose C inverse C. Sorry. C trans – help me with this one. C transposed – whichever it is. C transposed quantity CC transposed inverse. Yes, that was it. It’s not that dagger. It’s the general dagger that requires the SVD. So that’s what happens. Okay. Now, the energy required to hit a state is given by this integral. This integral from zero to T. And the cool thing about the integral is no matter how small T is, Q is positive definite. It’s invertible. And I’m not gonna go over a lot of that, but that’s sort of the basic idea.
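The blow-up as T shrinks can be illustrated with a hypothetical double integrator: Q(T) stays positive definite for every positive T, but the minimum energy x_desᵀ Q(T)⁻¹ x_des grows rapidly as T goes to zero.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical double integrator: u is acceleration, x1 is position.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
x_des = np.array([1.0, 0.0])   # move one unit, end at rest

def gramian(T, N=400):
    """Midpoint-rule approximation of Q(T) = int_0^T e^{tA} B B^T e^{tA^T} dt."""
    h = T / N
    taus = (np.arange(N) + 0.5) * h
    return sum(expm(t * A) @ B @ B.T @ expm(t * A).T * h for t in taus)

# Minimum energy x_des^T Q(T)^{-1} x_des for shrinking horizons.
energies = [x_des @ np.linalg.solve(gramian(T), x_des)
            for T in (2.0, 1.0, 0.1)]
```

For this system the Gramian works out to Q(T) = [[T³/3, T²/2], [T²/2, T]], so the minimum energy to move one unit of position is 12/T³: finite for any positive T, but exploding as the allowed time shrinks — exactly the "very fast, but huge input" story above.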
Let’s see. And I’ll just make the connection to the minimum energy over infinite horizon. The same story happens. I have an integral, a positive semi-definite matrix here. If I increase the time T that you’re allowed to use to hit a target, this matrix goes up, this one goes down, and that’s the quadratic form that gives you the minimum energy so you have the same result again. Okay. Let’s quit for today. For those of you who just came in, I think I announced at the beginning of the class there’s a tape ahead. It’s today. It’s today, 4:15, Skilling Auditorium, but as usual, you cannot trust me. Whatever it says on the website is what it really is. And statistically, some of you should come because otherwise I’d be put in the terribly awkward position of giving a lecture to no one. It’s never happened. Hopefully, this afternoon won’t be a first. Okay. We’ll quit here.

[End of Audio]

Duration: 74 minutes

Source: http://see.stanford.edu/materials/lsoeldsee263/transcripts/IntroToLinearDynamicalSystems-Lecture19.html Labels: Introduction to Linear Dynamical Systems, Linear Systems and Optimization
