
Topics: Shift Theorem In Higher Dimensions, Shift Theorem: Result, Stretch Theorem Derivation, Stretch Theorem Result, Special Case: Scaling, Special Case: Rotation, What Reciprocal Means In Higher Dimensions (Inverse Transpose), Deltas In Higher Dimensions (Basic Properties, Scaling)

http://www.youtube.com/watch?v=7tDT5kYe2Qw

Instructor (Brad Osgood):All right. Let me remind you of the exam information as I said last time. I also sent out an announcement to the class this morning via the website. So the exam is a week from Thursday, the 13th crack of dawn, 8:30 - 11:30 in the morning and it is in Dinkelspiel Auditorium. It seats 700 or something like that, so it shouldn't be any problem. And as before, as I said last time, it is open books, open notes and all the rest of that jazz. And I will provide you with the formula sheet that you already have from the mid-term exam and is already posted on the class website. Okay. Also, as I sent out the announcement this morning by email, I changed the due date of the last problem set from Wednesday to Friday, so it's due on Friday instead of due on Wednesday. Okay. Wanted to have a chance to have a couple more groups to make sure we finish the relevant material and give you a couple extra days. Okay. What a guy. All right. Any questions about anything? Anything that makes money? Okay. Wish we had one more day, I'm sure you're saying the same thing. I wish we had one more day of this class, but we don't. So it may be a little bit of a rush to try to get to the stuff on medical imaging. I wanted to finish up with that. We'll have to see if we can.

I won’t be able to do as much of that as I wanted to, so I’m trying to arrange the material that we can do as much of that as I can because I think it’s such a nice application. And out of a lot of the ideas that we’ve seen, certainly in the higher dimension, two-dimensional case, but to get there, we’ve got to do a few more foundational results on the higher dimensional Fourier Transform. So again, this is under the general heading of getting to know your higher dimensional Fourier Transform and again, the fact is you already know a lot about your higher dimensional Fourier Transform because you know the one dimensional version very well. Today, I want to do two other examples of that. At least two other examples of that. Maybe a little bit more. One where the situation is very much the same. The formula looks very much the same as the one dimensional case, and one where there’s a difference where you can see traces, certainly, of the one dimensional formula, but there are [inaudible] phenomena in higher dimensions, already in two dimensions, that show up in the formula and it allows you to make a new maxim or a new aphorism that explains a little bit that has to do with the relationship between the time domain and the frequency domain and reciprocality in higher dimensions. All right. So I want to do a couple examples of that. The first one that I want to talk about looks very similar to the one dimensional case and that is the so-called shift theorem that we’ve seen and made much use of. Shift theorem. So again, I’m gonna remind you of the one dimensional version. A shift in time is reflected by a phase shift in the frequency domain. That’s a way of saying it in words, and in formulas, it says if F of T corresponds to [inaudible] capital F of S. This is the signal in the time domain; this is the Fourier Transform and then what happens if you make a shift.

If you make a shift by B then that corresponds to E to the minus 2 pi I SB, that's the phase shift, times the Fourier Transform of the original function. All right. That's an easy result. That's one of the very first results that we proved when we were talking about general properties of the Fourier Transform and it follows, like many other formulas, just by making a change of variable in the integral that defines it. All right. The integral defines the Fourier Transform. Well, what does the situation look like in higher dimensions? As a matter of fact, let's look at it in two dimensions because already there you can see what the general pattern is and the argument in higher dimensions is exactly the same. What's a 2D shift? When I say a 2D shift, I mean a function of 2 variables, and what does it mean to shift it? You can shift each variable independently. So X1 can get shifted to X1 minus B1 and X2 can get shifted to X2 minus B2 and so the function F of X1, X2 can get shifted to F of X1 minus B1 and X2 minus B2. All right. And the question is what happens to the Fourier Transform if you make that shift? Well, we have no recourse other than to the definition. So the definition of the Fourier Transform of the shifted function is, and again, I'm just doing the two dimensional case so I have to write this out in coordinates. It goes from minus infinity to infinity, minus infinity to infinity, E to the minus 2 pi I X1 times C1 plus X2 times C2 times F of X1 minus B1, X2 minus B2, DX1 and DX2. Man, is it tough to do higher dimensional Fourier Transforms on the board. All right. Now, there's really only one thing to do here and that is make a change of variable. The technique in a lot of these results is making a change of variable in the integral and that's where some tricky things can come in, and some not so tricky things.
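For reference, the one-dimensional shift theorem he just recalled can be put in symbols (a transcription aid, writing s for the frequency variable):

```latex
f(t) \;\longleftrightarrow\; F(s)
\quad\Longrightarrow\quad
f(t - b) \;\longleftrightarrow\; e^{-2\pi i s b}\, F(s)
```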

This is a not-so-tricky case where it is easy to make a change of variable. So you can make a change of variable, an easy change: U1 equals X1 minus B1 and U2 equals X2 minus B2. And in this case, the area element DX1 DX2 changes to DU1 DU2. It's the change in the area element, not a change in DX1 and DX2 separately. That's one of the things that's a complication in higher dimensions, but for such a simple case, it's just a direct change. So DX1 DX2 is equal to DU1 DU2 and the integral becomes – the limits of integration stay the same, again, going from minus infinity to infinity, minus infinity to infinity. So that's E to the minus 2 pi I, so X1 becomes U1 plus B1, times C1, plus U2 plus B2, times C2, F of U1, U2, DU1, DU2. All right. Now, this is not hard to manipulate [inaudible]. I could write this as minus infinity to infinity, minus infinity to infinity, E to the minus – what I would do is I want to split up the complex exponential. All right. So E to the minus 2 pi I – let me put it together here. Let me do this. B1 times C1 plus B2 times C2, times E to the minus 2 pi I, U1 times C1 plus U2 times C2, times F of U1, U2, DU1, DU2. I split it up that way to make sure I got everything right there. I took all the exponents into account. And then what remains is just the Fourier Transform. I did make a mistake. There's a minus sign there. Okay. So that's E to the minus 2 pi I, B1 C1 plus B2 C2, times the double integral, minus infinity to infinity, minus infinity to infinity, and what remains is just the Fourier Transform, but with U1 and U2 replacing X1 and X2, so U1 times C1 plus U2 times C2, F of U1, U2, DU1, DU2, so what is the answer? [Inaudible] the answer is – oh, well, nag, nag. Minus 2 pi I. What? What? What? Took my argument. Minus 2 pi I. Is that all? All right. So what's left? E to the minus 2 pi I, C1 times B1 plus C2 times B2, times the Fourier Transform of F at C1, C2. All right.
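Written out in symbols (a transcription aid; here ξ1, ξ2 denote the frequency variables spoken as "C1, C2"), the board computation with the substitution u1 = x1 − b1, u2 = x2 − b2 runs:

```latex
\begin{aligned}
\mathcal{F}\{f(x_1 - b_1,\, x_2 - b_2)\}(\xi_1,\xi_2)
&= \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
   e^{-2\pi i (x_1\xi_1 + x_2\xi_2)}\, f(x_1 - b_1,\, x_2 - b_2)\, dx_1\, dx_2 \\
&= \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
   e^{-2\pi i ((u_1+b_1)\xi_1 + (u_2+b_2)\xi_2)}\, f(u_1, u_2)\, du_1\, du_2 \\
&= e^{-2\pi i (b_1\xi_1 + b_2\xi_2)}
   \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
   e^{-2\pi i (u_1\xi_1 + u_2\xi_2)}\, f(u_1, u_2)\, du_1\, du_2 \\
&= e^{-2\pi i (b_1\xi_1 + b_2\xi_2)}\, \hat{f}(\xi_1, \xi_2).
\end{aligned}
```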

Now, there is, after all this work, a much easier way of writing it or a more compact way of writing it that shows the shift theorem in higher dimensions looks exactly the same really as the shift theorem in one dimension. All right. Once again, here's the phase shift term out in front, but you can see that this exponent here, what's in the exponential, is the inner product of the vector C1, C2 with the vector B1, B2. All right. So if I write B as B1, B2 and of course C as before is C1, C2 then F of X1 minus B1, X2 minus B2 is just, in vector form, I'd write that as just F of X minus B. You can make it look as much like the one dimensional case as possible. And that formula holds in any dimension. All right. Any number of dimensions. It's pretty. All right. It's the same formula in higher dimensions as it is in one dimension because you're writing things in vector notation. The vector notation allows you a very compact and very easy way of writing it and a very easy way of remembering it. All right. I think it's much easier, I think, to remember this than it is to remember this. Whether or not you can remember your 2 pi I's. Okay. All right. So this is an example. I wanted to go through the derivation just to show you how it worked. The main technique, the only technique involved, was a change of variable in the multiple integral. And I wanted to go through this example so you could see how that worked and as an example of how something looks very much the same in two dimensions, or even higher dimensions, as it does in one dimension. Now, let me look at one of the formulas where a new phenomenon comes in. And that's the stretch theorem. So there's two basic theorems that we've used a lot. One is the shift theorem that I've just derived, and the other is the stretch theorem, when the variables are scaled.
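As a quick numerical sanity check (a sketch, not from the lecture), the discrete analogue of the 2D shift theorem can be verified with numpy: circularly shifting an array by (b1, b2) samples multiplies its 2D DFT by the corresponding phase factor.

```python
import numpy as np

# Discrete analogue of the 2D shift theorem: shifting f by (b1, b2) samples
# multiplies its 2D DFT by exp(-2*pi*i*(k1*b1/N1 + k2*b2/N2)).
rng = np.random.default_rng(0)
N1, N2 = 32, 48
b1, b2 = 5, 7
f = rng.standard_normal((N1, N2))

F = np.fft.fft2(f)
F_shifted = np.fft.fft2(np.roll(f, (b1, b2), axis=(0, 1)))  # f(x - b), circularly

k1 = np.arange(N1)[:, None]   # frequency index along axis 0
k2 = np.arange(N2)[None, :]   # frequency index along axis 1
phase = np.exp(-2j * np.pi * (k1 * b1 / N1 + k2 * b2 / N2))

print(np.allclose(F_shifted, phase * F))  # True
```

Here `np.roll` plays the role of the continuous translation, and the circular (periodic) shift is what makes the discrete identity exact.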

Okay. And again, we've used that many times in many different contexts. And the interesting, sort of physical aspect of the theorem, or the thing that comes up often, is there's sort of a reciprocal nature between stretching in one domain and shrinking in the other domain. All right. That is the reciprocal relationship here between what happens in the time domain scaling by A and the frequency domain scaling by one over A. So this sets up this reciprocal relationship between scaling in the two domains. I don't know if that's a good sense or not, but you know what I mean, scaling in the two domains. All right. Two domains. Long quarter, huh. Two domains. All right. Scaling by A in the time domain means scaling by one over A in the frequency domain. And again, the proof of this was a change of variable in the integration. There's only really sort of two techniques that ever come up. One is a change of variable and the other is integration by parts. Those are the two techniques that come up all the time. Now, what about in the higher dimensional case? So again, let me look in the two dimensional case and I'll state one formula that is directly analogous to the one-dimensional formula and then I want to do something that's a little bit more general. All right. Let's look at the 2D case. And again, we can scale the variables independently. So we scale – so we have a function of X1 and X2 and we can scale X1 and X2 independently. All right. And the question is what happens to the Fourier Transform? All right. I'm not going to go through the derivation. But I'll give you the result. The derivation is a change of variable, a simple change of variable because you can make a change of variable independently, so it's not difficult to change the variables in the double integral that computes the Fourier Transform of F of A1 X1, A2 X2.
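For reference, the one-dimensional stretch theorem he's recalling, in symbols:

```latex
f(t) \;\longleftrightarrow\; F(s)
\quad\Longrightarrow\quad
f(at) \;\longleftrightarrow\; \frac{1}{|a|}\, F\!\left(\frac{s}{a}\right)
```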

Here's what you get. Let me give you the special case first. It looks like this. Again, if F of X1, X2 transforms to F of C1, C2, all right, the spatial domain and the frequency domain, then F of A1 X1, A2 X2 transforms to one over absolute value of A1 times one over absolute value of A2 times F of C1 over A1, C2 over A2. Just in pretty close correspondence to the one dimensional case. Okay. If the spatial variables X1 and X2 scale separately, then so do the frequency variables, C1 and C2, and it's a reciprocal relationship again. Okay. So again you have reciprocal relationships. There is a more general notion of scaling in higher dimensions and it requires a little bit more complicated derivation and a little bit more complicated looking formula, but it's a very important phenomenon that comes up when you do this, and it's very similar to what we were talking about when we talked about linear systems. All right. The basic result of linear systems is direct proportionality. That's scaling. All right. But the more general way of looking at linear systems is, say, multiplication by a matrix. And likewise in this case, a more general notion of scaling or stretching is not just stretching X1 and X2 independently but allowing it to mix up what happens to X1 and X2 by multiplying by a matrix. And here's where two dimensions and higher dimensions are richer than one dimension. There's more degrees of freedom, a more general notion of scaling. All right. Instead of just X1 going to A1 X1 and X2 going to A2 times X2 independently, you could have something like X1 goes into a combination. To write that a little more compactly, [inaudible] could go to the matrix ABCD times X1, X2. There's more degrees of freedom in higher dimensions than there are in one dimension. That's the richness, and in some cases, the complication of the field, but there's not much of a difference between two dimensions and higher dimensions here.
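In symbols (again writing ξ1, ξ2 for the frequency variables spoken as "C1, C2"), the special case of independent scalings reads:

```latex
f(x_1, x_2) \;\longleftrightarrow\; F(\xi_1, \xi_2)
\quad\Longrightarrow\quad
f(a_1 x_1,\, a_2 x_2) \;\longleftrightarrow\;
\frac{1}{|a_1|}\,\frac{1}{|a_2|}\,
F\!\left(\frac{\xi_1}{a_1},\, \frac{\xi_2}{a_2}\right)
```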

So in 2D and up, a general scaling is multiplication by a matrix A, say, and I want to assume that A is nonsingular, all right. So there's no collapsing here. We'll assume the determinant of A is not zero. So in other words, F of X gets scaled to F of A times X. All right. And the question is what happens to the Fourier Transform if you do this kind of scaling? Now, there is actually a very attractive formula and I want to show it to you. And a very important formula. Again, we have no recourse here other than to appeal to the definition of the Fourier Transform and to see what we can do. What's more complicated here is – and the technique is pretty much the same, to make a change of variable in the integral, but the change of variable is a little bit more complicated. All right. So the Fourier Transform of F of AX – I'm gonna do this in higher dimensions just for the thrill of it. I'm gonna do this in RN. All right. The Fourier Transform of F of AX is the integral over all of RN, E to the minus 2 pi I X.C, F of AX, DX. That's just the definition of the Fourier Transform where F is scaled by A. Okay. Now, I want to make a change of variable [inaudible] U equals AX. All right. That's fine. But it introduces a number of complications, so the variables are not changing independently here. What I mean is they're all coupled. Okay. Now, what you have to know, and I'm certainly not gonna derive this and I don't know your experience in dealing with changes of variables in multiple integrals, so I'm gonna give you the formula and if you would ever like to have a quiet conversation about how to change variables in multiple integrals, say, over a drink sometime, which is what it takes, I'd be happy to do that. All right. But it looks like this. What happens to the volume element is DU becomes the absolute value of the determinant of A times DX.

When you make a linear change of coordinates, when you make a linear change of variables, the volume scales by the absolute value of the determinant. All right. So if you think of DU as the volume element DU1, DU2 up to DUN, and DX as the volume element in the X variables DX1, DX2 and so on, then the volume scales by multiplication by the absolute value of the determinant of A. I wouldn't be surprised if you've seen that as a basic geometric fact about linear transformations – what do linear transformations do, they scale the volume by the determinant. Okay. There's something else that goes on here. Okay. There's something else. So the other thing that happens is in the exponential there. So if I want to make a change of variables, what happens to the exponential? What about the complex exponential? So I have X.C, that's the thing I'm worried about there, what happens to X.C? Well, again, I'm assuming the matrix is nonsingular; the determinant of A is different from zero, so if U is equal to AX then X is equal to A inverse U. I want to express everything in terms of U. Everything in the variable has to be expressed in terms of U. So if U is equal to AX then X is equal to A inverse U, that's fine, then I have to look at X.C, which is equal to A inverse U.C. All right. Now, something more actually can be done with this. All right. There is a general relationship, and this is very important, there's a general relationship between matrix multiplication and dot products. And it says this. Let me just give you the general statement. It says [inaudible], but there's a general relationship that's going on here. So it says that if I have a matrix B then BX.Y is the same thing as X.B transpose Y. All right. You can shift the matrix from one variable to the other and in so doing, the transpose comes in. All right.
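The identity he's using, Bx · y = x · Bᵀy, is easy to spot-check numerically (a sketch with a random matrix, not from the lecture):

```python
import numpy as np

# Spot-check of the identity Bx . y = x . (B^T y): moving a matrix across
# a dot product brings in the transpose.
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(3)

lhs = np.dot(B @ x, y)        # (Bx) . y
rhs = np.dot(x, B.T @ y)      # x . (B^T y)
print(np.isclose(lhs, rhs))   # True
```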

Now, I want to plug this into the formula for the Fourier Transform. Okay. Plug all this into the formula for the Fourier Transform. Okay. So where are we? It's an integral still over all space because you're making a change of variable by a nonsingular matrix – space goes to space and not to a lower dimensional space. [Inaudible] E to the minus – so I'm making a change of variable here in this integral. And everything should be written in terms of U. So it's an integral over all of RN, E to the minus 2 pi I X.C becomes E to the minus 2 pi I U.A inverse transpose of C. All right. F of AX becomes F of U. That was the whole point of making the change of variable. All right. That becomes F of U. All right. And DX, the volume element in the X coordinates, becomes one over the determinant of A [inaudible] of the U coordinates, DU. One over determinant of A, DU. All right. Now, you see that integral is nothing but the Fourier Transform of F evaluated at this variable. Not at the value C, but evaluated at A inverse transpose of C. All right. So this is one over the determinant of A times the Fourier Transform of F evaluated at A inverse transpose of C. That is the formula. What is the formula? Let me summarize it for you. So the general stretch theorem is very attractive really, if you like this sort of thing.
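Collecting the steps in symbols (a transcription aid, with ξ for the frequency variable), the derivation he just sketched is:

```latex
\begin{aligned}
\mathcal{F}\{f(A\mathbf{x})\}(\boldsymbol{\xi})
&= \int_{\mathbb{R}^n} e^{-2\pi i\, \mathbf{x}\cdot\boldsymbol{\xi}}\, f(A\mathbf{x})\, d\mathbf{x}
&&\text{(definition)} \\
&= \int_{\mathbb{R}^n} e^{-2\pi i\, (A^{-1}\mathbf{u})\cdot\boldsymbol{\xi}}\, f(\mathbf{u})\,
   \frac{d\mathbf{u}}{|\det A|}
&&(\mathbf{u} = A\mathbf{x},\; d\mathbf{u} = |\det A|\, d\mathbf{x}) \\
&= \frac{1}{|\det A|} \int_{\mathbb{R}^n}
   e^{-2\pi i\, \mathbf{u}\cdot A^{-\mathsf{T}}\boldsymbol{\xi}}\, f(\mathbf{u})\, d\mathbf{u}
&&(B\mathbf{x}\cdot\mathbf{y} = \mathbf{x}\cdot B^{\mathsf{T}}\mathbf{y}) \\
&= \frac{1}{|\det A|}\, \hat{f}\!\left(A^{-\mathsf{T}}\boldsymbol{\xi}\right).
\end{aligned}
```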

But here's the stretch theorem. All right. So F of X corresponds to, say, capital F of C, all right, so this is the spatial function, this is its Fourier Transform. Then F of AX, I want to write it so it looks like the one dimensional case as much as possible, corresponds to one over the determinant of A times the Fourier Transform of F evaluated at A inverse transpose of C. A inverse transpose of C. That's the formula. All right. That looks a little different, right, than the classical stretch theorem, but it includes the one dimensional stretch theorem. It also includes the theorem that I had first. Let me say a couple things about this. All right. That's how the Fourier Transform changes, deal with it. All right. So now there's particular cases. This scaling is given by the diagonal matrix A1, 0, 0, A2 times X1, X2: that's A1 times X1 plus zero, and zero times X1 plus A2 times X2. All right. And of course, for this A the determinant of A [inaudible]. All right. What is A inverse? Well, for a diagonal matrix, A inverse – and I have to assume that A1 and A2 are different from zero here. I'm assuming the determinant is different from zero. So for a diagonal matrix, A inverse is equal to one over A1, 0, 0, one over A2, and it's diagonal again, so its transpose is equal to itself, A inverse transpose is equal to the same thing, one over A1, 0, 0, one over A2, and if you unpack all this, you will find that applying the general stretch theorem, in this special case, leads exactly to the theorem as I had stated it before. Now, there's another important special case. We haven't lost anything. And we've gained something, but in particular, we've included the formula that I had before.
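Unpacking the diagonal case in symbols:

```latex
% Diagonal case: A = diag(a_1, a_2) recovers the earlier special case.
A = \begin{pmatrix} a_1 & 0 \\ 0 & a_2 \end{pmatrix},
\qquad |\det A| = |a_1 a_2|,
\qquad A^{-\mathsf{T}} = A^{-1} = \begin{pmatrix} 1/a_1 & 0 \\ 0 & 1/a_2 \end{pmatrix}
\;\Longrightarrow\;
f(a_1 x_1,\, a_2 x_2) \;\longleftrightarrow\;
\frac{1}{|a_1|\,|a_2|}\, F\!\left(\frac{\xi_1}{a_1},\, \frac{\xi_2}{a_2}\right)
```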

Another special case, let's just do it in two dimensions, is when coordinates are rotated. That is a two dimensional rotation matrix. 2D rotation. That is A equals – it's usually written this way – cosine of theta, minus sine theta, sine theta, cosine theta. All right. That is a rotation through an angle theta counterclockwise, I think. I always have to think about that for a couple minutes, but I don't have a couple minutes. All right. That's a rotation by an angle theta. Now, this matrix has a special property. This matrix is orthogonal in the following sense. So rigid motion, it just rotates. To say it's orthogonal is to say that A transpose times A is the same as A A transpose is the 2 by 2 identity matrix. Okay. So there's a couple of consequences of that. One consequence is the determinant is absolute value 1 because the determinant of A transpose A, on the one hand, is equal to the determinant of the identity, which is 1. On the other hand, the determinant of A transpose A is the determinant of A transpose times the determinant of A, which is the determinant of A squared. The geometric meaning of that is that A preserves volumes. And it does. It's just a rotation. It's a rigid motion. So it is not distorting volumes. Volumes are distorted under a linear transformation by the absolute value of the determinant. But in this case, the absolute value of the determinant is one. The further consequence is [inaudible] A transpose times A is equal to the identity. That says that A transpose is the same thing as A inverse because a matrix times its inverse is the identity, so that identifies A transpose as the inverse, or the other way of putting that is A inverse transpose is equal to A. That is, A inverse transpose is equal to A. Okay. All right. Plug that into the formula in the case of a rotation matrix. In the case of a matrix like this, plug this into this general formula.
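The three facts about the rotation matrix can be checked numerically in a couple of lines (a sketch, not from the lecture; the angle is arbitrary):

```python
import numpy as np

# Numerical check that a 2D rotation matrix is orthogonal:
# A^T A = I, det A = 1, and A inverse transpose = A.
theta = 0.7  # arbitrary angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A.T @ A, np.eye(2)))      # True
print(np.isclose(np.linalg.det(A), 1.0))    # True
print(np.allclose(np.linalg.inv(A).T, A))   # True
```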

Plug in the formula. If you plug into the formula, F of AX goes to one over the determinant of A times the Fourier Transform at A inverse transpose of C. All right. The determinant of A is one, A inverse transpose is A, so this is the Fourier Transform of F at A applied to C. That is, F of AX goes to the Fourier Transform of F evaluated at AC. All right. In words, a rotation of the spatial domain corresponds to the same rotation of the frequency domain. This is a rotation of the spatial coordinates. This is a rotation of the frequency coordinates. So this says rotation of the spatial domain corresponds to the same rotation of the frequency domain. If you rotate an image, that's the same thing as rotating its spectrum. Okay. You take the image, you take its spectrum, you rotate the image, you rotate the spectrum by the same amount. All right. So this is an important result in imaging. All right. It tells you that the spectrum somehow is rotated along with the original image. Okay. Very nice. Very consistent. Finally, there's a new aphorism that goes with this theorem. It's also very important. And this is something new. All right. This is something that you probably have not said to yourself as you've been drifting off to sleep at night. Once again, if F of X corresponds to F of C then F of AX corresponds to one over the determinant of A times F of A inverse transpose C. Now, if you believe that scaling in the one domain has to do with – there's a reciprocal relationship between scaling in the one domain and scaling in the other domain. All right.
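In symbols, the rotation case of the general stretch theorem collapses to:

```latex
A^{\mathsf{T}}A = I
\;\Longrightarrow\;
|\det A| = 1,\quad A^{-\mathsf{T}} = A
\;\Longrightarrow\;
f(A\mathbf{x}) \;\longleftrightarrow\; F(A\boldsymbol{\xi})
```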

If you believe that from the one dimensional case and if you want to maintain that intuition that there's a reciprocal relationship between the two domains, then what you have to say is that in higher dimensions, reciprocal relationship means A inverse transpose. That's something new that you have to say to yourself. That's a new interpretation of the word reciprocal. All right. You buy the [inaudible], you buy the gag as they say in show biz, and if you believe in this formula – you have to believe in the formula because I just derived it, but if you believe that there ought to be a reciprocal relationship between scaling in one domain and scaling in the other domain then what you have to say to yourself is, in higher dimensions, as tough as it is for me, I have to believe that reciprocal means, somehow, inverse transpose. Okay. Now, let me just say a few words about delta functions. Good news. First of all, we don't have to deal with rapidly decreasing functions, [inaudible] functions, distributions or any of that stuff – that all carries over. All right. So I don't really feel the need, and you don't have to be afraid of, a whole secondary treatment of rapidly decreasing functions and the theory of distributions. So we're gonna use these things pretty much as before, and delta functions also go through as before. [Inaudible] okay, delta functions, deltas et al. So again, delta is defined the same way – you can think of this in terms of distributions. Delta operates on a test function – you can write it out in terms of the integral if you wish – but the basic definition or the basic properties of delta functions are as before. As in 1D. So for example, delta paired [inaudible] a higher dimensional delta function paired with phi is phi of zero. So phi is a function of N variables and zero is just the origin. So delta operating on phi just pulls out the value at the origin. And I can look at a shifted delta function.
I can write it as delta X minus B or I can write it as delta sub B if I want. So B is a vector here, so it's shifted to another point in RN, and the basic relationship is the same as in one dimension. That is, delta sub B paired with a function phi is phi evaluated at B. Okay. So B here is B1 up through BN. Okay. [Inaudible] and it's also true, good news, that the Fourier Transform behaves the same way. You have the same formulas for the Fourier Transform. That is to say the Fourier Transform of the delta function is 1, the constant function 1, and the Fourier Transform of the shifted delta function, delta sub B, is E to the minus 2 pi I B.C. All right. So if you use the vector notation, it looks the same as the one dimensional case.
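In symbols, the higher-dimensional delta properties he just listed (with φ a test function of n variables and ξ the frequency variable):

```latex
\langle \delta, \varphi \rangle = \varphi(0), \qquad
\langle \delta_{\mathbf{b}}, \varphi \rangle = \varphi(\mathbf{b}), \qquad
\mathcal{F}\delta = 1, \qquad
\mathcal{F}\delta_{\mathbf{b}}(\boldsymbol{\xi}) = e^{-2\pi i\, \mathbf{b}\cdot\boldsymbol{\xi}}
```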

Okay. It's a complex exponential. That's all great. And again, it's also true that if I multiply delta times – I'm sorry, I should've mentioned this before – if I multiply delta times a function, it just samples the function. Multiplication – I should've mentioned this when I was talking about the basic pairing here – is if I take a function F times delta, that's F of 0 times delta – and F of 0, this is a function of N variables now – and likewise, if I take F times delta sub B, that's the same thing as F of B times delta sub B. Same results as before. Okay. Everything is the same. All right. Now, where are things different? Well, again, when you start to bring matrices in, actually, things become a little bit different. When you start to talk about scaling. And I think this will be the last thing I say today. I probably won't derive it. I'm gonna use it more next time. Okay. But let me mention now the scaling properties. Scaling. The scaling properties of delta – so here I actually have to write the formula in terms of a variable. The basic phenomenon is – well, again, what was the one dimensional case?
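The sampling property, in symbols, exactly as in 1D:

```latex
f\,\delta = f(0)\,\delta, \qquad
f\,\delta_{\mathbf{b}} = f(\mathbf{b})\,\delta_{\mathbf{b}}
```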

The one dimensional case was if I take delta and scale it, then delta of AX is 1 over absolute value of A times delta of X – that is the way I allow myself to write the variable, delta of X, instead of thinking of it as a distribution. Okay. That's a result we used in various contexts. All right. The tricky thing here is that there's no scaling on the inside here. All right. It's not the stretch formula for the Fourier Transform, it's an independent fact about how delta scales. Okay. And what do you suppose the result is in higher dimensions? If I scale the variable inside by A – there are ways of making this precise without bringing the variable in, but I'm not gonna do that. It takes a little more effort. So if A is a matrix here, then this is 1 over the absolute value of the determinant of A – it's really analogous to the one dimensional case – times delta of X. That's how delta scales. We're gonna need this result. The derivation is actually given in the notes, but I'm not gonna derive it. We are gonna use it however. I won't try to push ahead today. We're gonna use this next time when we talk about higher dimensional Shah functions. That's where this sort of lattice and reciprocal lattice comes in – there's actually higher dimensional sampling theory, it all comes in. And we'll have a chance to do a little bit of it, so we'll see this in connection with higher dimensional Shahs.
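He defers the derivation to the notes; informally, it comes straight from pairing with a test function φ and the same change of variables u = Ax used for the stretch theorem:

```latex
\langle \delta(A\mathbf{x}), \varphi \rangle
= \int_{\mathbb{R}^n} \delta(A\mathbf{x})\, \varphi(\mathbf{x})\, d\mathbf{x}
= \frac{1}{|\det A|} \int_{\mathbb{R}^n} \delta(\mathbf{u})\, \varphi(A^{-1}\mathbf{u})\, d\mathbf{u}
= \frac{\varphi(0)}{|\det A|}
\;\Longrightarrow\;
\delta(A\mathbf{x}) = \frac{1}{|\det A|}\, \delta(\mathbf{x})
```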

It's really cool. All right. So read through that material if you would. You don't have to read through the sampling part of it because I don't think we're gonna have a chance to do that although it's my favorite stuff, but I do want to talk about the crystallography, the application of the Shah stuff to crystallography, because that's where you see this wonderful fact about reciprocal means inverse transpose come in in a very important setting. All right. So we'll do that next time and I hope to actually push forward a little bit and start talking about medical imaging. All right. See you all then.

[End of Audio]

Duration: 50 minutes

Source: http://see.stanford.edu/materials/lsoftaee261/transcripts/TheFourierTransformAndItsApplications-Lecture28.html Labels: Linear Systems and Optimization, The Fourier Transform and its Applications
