5. Positive Definite and Semidefinite Matrices



The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

GILBERT STRANG: OK, let me make a start. On the left, you see the topic for today. We're doing pretty well. This completes my review of the highlights of linear algebra, so that's five lectures. I'll follow up on those five points, because the neat part is it really ties together the whole subject. Eigenvalues, energy, A transpose A, determinants, pivots– they all come together. Each one gives a test for positive definite matrices. That's where I'm going.

Claire is hoping to come in for a little bit of the class to ask if anybody has started on the homework, and got Julia rolling, and got a yes from the autograder. Is anybody like– no. You're taking a chance, right? Julia, in principle, works, but in practice, it's always an adventure the first time. So we chose this lab on convolution, because it was the first lab last year, and it doesn't ask for much math at all. Really, you're just creating a matrix and getting the autograder to say, yes, that's the right matrix. And we'll see that matrix. We'll see this idea of convolution at the right time, which is not that far off. It's signal processing, and it's early in part three of the book. If Claire comes in, she'll answer questions. Otherwise, I guess it would be emailing questions to– I realize that the deadline is not on top of you, and you've got a whole weekend to make Julia fly.

I'll start on the math then. We had symmetric– eigenvalues of matrices, and especially symmetric matrices, and those have real eigenvalues, and I'll quickly show why. And orthogonal eigenvectors, and I'll quickly show why. But I want to move to the new idea– positive definite matrices. These are the best of the symmetric matrices. They are symmetric matrices that have positive eigenvalues. That's the easy way to remember positive definite matrices. They have positive eigenvalues, but it's certainly not the easy way to test. If I give you a matrix like that [the matrix S = [3 4; 4 5] on the board], that's only two by two. We could actually find the eigenvalues, but we would like to have other tests, easier tests, which would be equivalent to positive eigenvalues. Every one of those five tests– any one of those five tests is all you need. Let me start with that example and ask you to look, and then I'm going to discuss those five separate points.

My question is, is that matrix S– it's obviously symmetric– is it positive definite or not? You could compute its eigenvalues since it's two by two. Its energy– I'll come back to that, because that's the most important one. Number two is really fundamental. Number three would ask you to factor that. Well, you don't want to take time with that. Well, what do you think? Is it positive definite or not? I see an expert in the front row saying no. Why is it no? The answer is no. That's not a positive definite matrix. Where does it let us down? It's got all positive numbers, but that's not what we're asking. We're asking positive eigenvalues, positive determinants, positive pivots. How does it let us down?
Which is the easy test to see that it fails?

AUDIENCE: Maybe determinant?

GILBERT STRANG: Determinant. The determinant is 15 minus 16, so negative. So how is the determinant connected to the eigenvalues? Everybody? Yep.

AUDIENCE: [INAUDIBLE]

GILBERT STRANG: It's the product. So the two eigenvalues of S– they're real, of course– multiply to give the determinant, which is minus 1. So one of them is negative, and one of them is positive. This matrix is an indefinite matrix– indefinite.

So how could I make it positive definite? OK. We can just play with an example, and then we see these things happening. Let's see. OK, what shall I put in place of the 5, for example? I could lower the 4, or I can up the 5, or up the 3. I can change the diagonal entries. If I add stuff to the main diagonal, I'm making it more positive. So that's the straightforward way. So what number in there would be safe?

AUDIENCE: 6.

GILBERT STRANG: 6. OK. 6 would be safe. If I go up from 5 to 6, I've got a de– so when I say here “leading determinants,” what does that mean? That word leading means something. It means that I take that 1 by 1 determinant– it would have to pass that. Just the determinant itself would not do it. Let me give you an example. No. For– let me take minus 3 and minus 6. That would have the same determinant. The determinant would still be 18 minus 16– 2. But it fails the test on the 1 by 1. And this passes. This passes the 1 by 1 test and the 2 by 2 test. So that's what this means here. Leading determinants are from the upper left. You have to check n things because you've got n eigenvalues. And those are the n tests.

And have you noticed the connection to pivots? So let's just remember that small item, because we didn't take a long time on elimination. What would be the pivots for that matrix, 3-4-4-6? Well, what's the first pivot? 3, sitting there– the 1-1 entry would be the first pivot. So the first pivot would be 3, and what's the second pivot? Well, maybe to see it clearly you want me to take that elimination step. Why don't I do it just so you'll see it here? So elimination would subtract some multiple of row 1 from row 2. I would leave row 1 alone. I would subtract some multiple to get a 0 there. And what's the multiple? What's the multiplier?

AUDIENCE: In that much–

GILBERT STRANG: 4/3. 4/3 times row 1, away from row 2, would produce that 0. But 4/3 times the 4, that would be 16/3. And we're subtracting it from 18/3. I think we've got 2/3 left. So the pivots, in elimination, are the 3 and the 2/3. And of course, they're positive. And actually, you see the immediate connection. This pivot is the 2 by 2 determinant divided by the 1 by 1 determinant. The 2 by 2 determinant, we figured out– 18 minus 16 was 2. The 1 by 1 determinant is 3. And sure enough, that second pivot is 2/3. So by example, I'm illustrating what these different tests– and again, each test is all you need. If it passes one test, it passes them all. And we haven't found the eigenvalues.

Let me do the energy. Can I do energy here? OK. So what's this– I am saying that this is really the great test. That, for me, is the definition of a positive definite matrix. And the word “energy” comes in because it's quadratic, [INAUDIBLE] kinetic energy or potential energy. So that's the energy in the vector x for this matrix. So let me compute it, x transpose Sx. So let me put in S here, the original S. And let me put in any vector x– so, say, xy or x1– maybe– do you like x– xy is easier. So that's our vector x transposed. This is our matrix S. And here's our vector x. So it's a function of x and y. It's a pure quadratic function.

Do you know what I get when I multiply that out? I get a very simple, important type of function. Shall we multiply it out? Let's see. Shall I multiply that by that first, so I get 3x plus 4y? And 4x plus 6y is what I'm getting from these two. And now I'm hitting that with the xy. And now I'm going to see the energy. And you'll see the pattern. That's always what math is about. What's the pattern? So I've x times 3x– 3x squared. And I have y times 6y. That's 6y squared. And I have x times 4y. That's 4xy. And I have y times 4x. That's 4 more xy. So I've got all those terms. Every term, every number in the matrix gives me a piece of the energy. And you see that the diagonal numbers, 3 and 6, those give me the diagonal pieces, 3x squared and 6y squared. And then the cross terms– or maybe I call them the cross terms– those give me 4xy and 4xy– so, really, 8xy. So you could call this thing 8xy.

So that's my function. That's my quadratic. That's my energy. And I believe that is greater than 0. Let me graph the thing. Let me graph that energy. OK. So here's a graph of my function, f of x and y. Here is x, and here's y. And of course, that's on the graph, 0-0. At x equals 0, y equals 0, the function is clearly 0. Everybody's got his eye– let me write that function again here– 3x squared, 6y squared, 8xy. Actually, you can see– this is how I think about that function. So 3x squared is obviously carrying me upwards. It will never go negative. 6y squared will never go negative. 8xy can go negative, right? If x and y have opposite signs, that'll go negative. But the question is, do these positive pieces overwhelm it and make the graph go up like a bowl? And the answer is yes, for a positive definite matrix. So this is a graph of a positive definite matrix, of positive energy, the energy of a positive definite matrix. So this is the energy x transpose Sx that I'm graphing. And there it is. This is important. This is important.
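[Editor's note: here is a short numerical sketch of the tests just illustrated– not from the lecture, and assuming NumPy and SciPy– run on the two matrices from the board. The pivots are read off the diagonal of the LDL^T factorization.]

```python
import numpy as np
from scipy.linalg import ldl

def run_tests(S):
    """Eigenvalue, leading-determinant, and pivot tests for a symmetric S."""
    n = S.shape[0]
    eigenvalues = np.linalg.eigvalsh(S)
    # leading determinants: the determinants of the upper-left k by k blocks
    leading = [np.linalg.det(S[:k, :k]) for k in range(1, n + 1)]
    # pivots: the diagonal of D in S = L D L^T (no row exchanges needed here)
    _, D, _ = ldl(S)
    pivots = np.diag(D)
    print("eigenvalues:", np.round(eigenvalues, 4))
    print("leading determinants:", np.round(leading, 4))
    print("pivots:", np.round(pivots, 4))

run_tests(np.array([[3.0, 4.0], [4.0, 5.0]]))  # det = 15 - 16 = -1: indefinite
run_tests(np.array([[3.0, 4.0], [4.0, 6.0]]))  # pivots 3 and 2/3: positive definite
```

For the second matrix all three lists come out positive, and each pivot is a ratio of consecutive leading determinants, exactly as in the lecture.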
This is the kind of function we like, x transpose Sx, where S is positive definite, so the function goes up like that. This is what deep learning is about. This could be a loss function that you minimize. It could depend on 100,000 variables or more. And it could come from the error– the difference between training data and the number you get. The loss would be some expression like that. Well, I'll make sense of those words as soon as I can. What I want to say is deep learning, neural nets, machine learning– the big computation is to minimize an energy. Now of course, I made the minimum easy to find because I have pure squares. Well, that doesn't happen in practice, of course. In practice, we have linear terms, x transpose b, or nonlinear. Yeah, the loss function doesn't have to be a [INAUDIBLE]– cross entropy, all kinds of things. There is a whole dictionary of possible loss functions. But this is the model. And I'll make it the perfect model by just focusing on that part.

Well, by the way, what would happen if that was in there? I shouldn't have X'd it out so quickly since I just put it up there. Let me put it back up. I thought better of it. OK. This is a kind of least squares problem with some data, b. Minimize that. So what would be the graph of this guy? Can I just draw the same sort of picture for that function? Will it be a bowl? Yes. If I have this term, all that does is move it off center here, at x equals 0. Well, I still get 0. Sorry. I still go through that point. If x is the 0 vector, I'm still getting 0. But this will bring it below. That would produce a bowl like that. Actually, it would just be the same bowl. The bowl would just be shifted. I could write that to show how that happens. So the minimum is now below 0. That's the solution we're after, the one that tells us the weights in the neural network. I'm just using these words, but we'll soon have a meaning to them. I want to find that minimum, in other words. And I want to find it for much more complicated functions than that.
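[Editor's note: a minimal sketch of the shifted bowl, not from the lecture– the data vector b is made up, and the factor 2 on the linear term is a convenience so that the minimum lands exactly at the solution of Sx = b.]

```python
import numpy as np

S = np.array([[3.0, 4.0], [4.0, 6.0]])  # the positive definite example from the board
b = np.array([1.0, 2.0])                # hypothetical data vector, for illustration

def energy(v):
    return v @ S @ v                    # x^T S x = 3x^2 + 8xy + 6y^2 for this S

def loss(v):
    return v @ S @ v - 2 * b @ v        # the bowl plus a linear term

print(energy(np.zeros(2)))              # 0.0: the pure bowl bottoms out at the origin
x_star = np.linalg.solve(S, b)          # setting the gradient 2Sx - 2b to zero
print(x_star, loss(x_star))             # the shifted bowl dips below zero
rng = np.random.default_rng(0)
for v in rng.standard_normal((4, 2)):   # any other point gives a larger loss
    assert loss(v) > loss(x_star)
```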

Of course, if I minimize the quadratic, that means setting derivatives to 0. I just have linear equations. Probably, I could write everything down for that thing. So let's put in some nonlinear stuff, which wiggles the bowl and makes it not so easy.

Can I look a month ahead? How do you find– so this is a big part of mathematics– applied math, optimization, minimization of a complicated function of 100,000 variables. That's the biggest computation. That's the reason machine learning on big problems takes a week on a GPU or multiple GPUs– because you have so many unknowns. More than 100,000 would be quite normal. In general, let's just have the pleasure of looking ahead for one minute, and then I'll come back to real life here, linear algebra. I can't resist thinking aloud: how do you find the minimum? By the way, these functions, both of them, are convex. So that is convex. So I want to connect convex functions, f– and what does convex mean? It means, well, that the graph is like that. [LAUGHTER] Not perfect, it could– but if it's a quadratic, then convex means positive definite, or maybe, in the extreme, positive semidefinite. I'll have to mention that. But convex means it goes up. But it could have wiggles. It doesn't have to be just perfect squares and linear terms, but general things. And for deep learning, it will include non– it will go far beyond quadratics. Well, it may not be convex. I guess that's also true. Yeah. So deep learning has got serious problems because those functions– they may look like this, but then over here they could go non-convex. They could dip down a little more. And you're looking for this point or for this point.

Still, I'm determined to tell you how to find it, or a start on how you find it. So you're at some point. Start there, somewhere on the surface. Some x, some vector x is your start, x0– starting point. And we're going to just take a step, hopefully down the bowl. Well, of course, it would be fantastic to get there in one step, but that's not going to happen. That would be solving a big linear system– very expensive– and a big nonlinear system. So really, that's what we're trying to solve– a big nonlinear system. And I should be on this picture because here we can see where the minimum is. But they just shift. So what would you do if you had a starting point and you wanted to go look for the minimum? What's the natural idea? Compute derivatives. You've got calculus on your side. Compute the first derivatives. So the first derivatives with respect to x– so I would compute the derivative with respect to x, and the derivative of f with respect to y, and 100,000 more. And that takes a little while. And now I've got the derivatives. What do I do?

AUDIENCE: [INAUDIBLE]

GILBERT STRANG: I go– that tells me the steepest direction. That tells me, at that point, which way is the fastest way down. So I would do a gradient descent. I would follow that gradient. This is called the gradient– all the first derivatives. It's called the gradient of f– the gradient. Gradient vector– it's a vector, of course, because f is a function of lots of variables. I would start down in that direction. And how far to go, that's the million dollar question in deep learning. Is it going to hit 0? Nope. It's not. It's not.
So basically, you go down until it– so you're traveling here in the x's, along the gradient. And you're not going to hit 0. You're all going here in some direction. So you keep going down this thing until it– oh, I'm not Rembrandt here. Your path down– think of yourself on a mountain. You're trying to go downhill. So you take– as fast as you can. So you take the steepest route down, until– but you have blinkers. Once you decide on a direction, you go in that direction. Of course– so what will happen? You'll go down for a while, and then it will turn up again when you get to, maybe, close to the bottom, or maybe not. You're not going to hit here. It's going to miss that and come up. Maybe I should draw it over here, whatever. So it's called a line search– to decide how far to go there. And then say, OK, stop. And you can invest a lot of time or a little time to decide on that first stopping point.

And now just tell me, what do you do next? So now you're here. What now? Recalculate the gradient. Find the steepest way down from that point, follow it until it turns up, or approximately, then you're at a new point. So this is gradient descent. That's gradient descent– the big algorithm of deep learning, of neural nets, of machine learning– of optimization, you could say. Notice that we didn't compute second derivatives. If we computed second derivatives, we could have a fancier formula that could account for the curve here. But to compute second derivatives when you've got hundreds of thousands of variables is not a lot of fun. So most effectively, machine learning is limited to first derivatives– the gradient.

OK. So that's the general idea. But there are lots and lots of decisions, and– why doesn't that– how well does that work, maybe, is a good question to ask. Does this work pretty well, or do we have to add more ideas? Well, it doesn't always work well. Let me tell you what the trouble is. I'm way off– this is March or something. But anyway, I'll finish this sentence. So what's the problem with this gradient descent idea? It turns out, if you're going down a narrow valley– I don't know if you can sort of imagine a narrow valley toward the bottom. So here's the bottom. Here's your starting point. And this is– you have to think of this as a bowl. So the bowl is– or the two eigenvalues, you could say– are 1 and a very small number. The bowl is long and thin. Are you with me? Imagine a long, thin bowl. Then what happens for that case? You take the steepest descent. But you cross the valley, and very soon, you're climbing again. So you take very, very small steps, just staggering back and forth across this and getting slowly, but too slowly, toward the bottom. So that's why things have got to be improved. If you have a very small eigenvalue and a very large eigenvalue, those tell you the shape of the bowl, of course. And many cases will be like that– have a small and a large eigenvalue. And then you're spending all your time– you're quickly going up the other side, down, up, down, up, down. And you need a new idea.

OK, so that's really– so this is one major reason why positive definite is so important, because positive definite gives pictures like that. But then we have this question of, are the eigenvalues sort of the same size? Of course, if the eigenvalues are all equal, what's my bowl like? Suppose I have the identity. So then x squared plus y squared is my function. Then it's a perfectly circular bowl. What will happen? Can you imagine a perfectly circular– like any bowl in the kitchen is probably, most likely, circular. And suppose I do gradient descent there. I start at some point on this perfectly circular bowl. I start down. And where do I stop in that case? Do I hit bottom? I do, by symmetry. So if I take x squared plus y squared as my function and I start somewhere, I figure out the gradient. Yeah. The answer is I'll go right through the center. So really, positive eigenvalues– positive definite matrices– give us a bowl. But if the eigenvalues are far apart, that's when we have problems.

OK. I'm going back to my job, which is this– because this is so nice. Right. Could you– well, the homework that's maybe going out this minute for the middle of next week gives you some exercises with this. Let me do a couple of things– a couple of exercises here. For example, suppose I have a positive definite matrix, S, and a positive definite matrix, T. If I add those matrices, is the result positive definite? So there is a perfect math question, and we hope to answer it. So S and T– positive definite. What about S plus T? Is that matrix positive definite?

OK. How do I answer such a question? I look at my five tests and I think, can I use it? Which one will be good? And one that won't tell me much is the eigenvalues, because the eigenvalues of S plus T are not immediately clear from the eigenvalues of S and T separately. I don't want to use that test. This is my favorite test, so I'm going to use that. What about the energy? So look at the energy. So I look at x transpose, S plus T, x. And what's my question in my mind here? Is that a positive number or not, for every x? And how am I going to answer that question? Just separate it into two pieces, right? It's there in front of me. It's x transpose Sx plus x transpose Tx. And both of those are positive, so the answer is yes, it is positive definite. Yes. You see how the energy was right. I don't want to compute the pivots or any determinants. That would be a nightmare, trying to find the determinants for S plus T. But this one just does it immediately.
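[Editor's note: a minimal sketch of the gradient descent story above– mine, not the lecture's– using exact line search on f(x) = x transpose Sx. The starting point on the thin bowl is chosen to show the worst-case zigzag.]

```python
import numpy as np

def steps_to_bottom(S, x, tol=1e-8, max_steps=100_000):
    """Gradient descent with exact line search on f(x) = x^T S x."""
    count = 0
    while x @ S @ x > tol and count < max_steps:
        g = 2.0 * S @ x                      # the gradient of x^T S x
        alpha = (g @ g) / (2.0 * g @ S @ g)  # exact minimizer of f along -g
        x = x - alpha * g
        count += 1
    return count

# circular bowl: eigenvalues 1 and 1, one step straight through the center
print(steps_to_bottom(np.eye(2), np.array([1.0, 1.0])))
# long, thin bowl: eigenvalues 1 and 1/1000, starting across the valley --
# the iterates stagger back and forth and take thousands of tiny steps
print(steps_to_bottom(np.diag([1.0, 0.001]), np.array([0.001, 1.0])))
```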
What else would be a good example to start with? What about S inverse? Is that positive definite? So let me ask: S positive definite, and I want to ask about its inverse. So its inverse is a symmetric matrix. And is it positive definite? And the answer– yes. Yes. I've got five tests, a 20% chance of picking the right one. Determinants is not good. The first one is great. The first one is the good one for this question, because of the eigenvalues. So the answer is yes. So what are the eigenvalues of S inverse? 1 over lambda. Those are positive, so– yes, positive definite, positive definite.

Yep. What about– let me ask you just one more question of the same sort. Suppose I have a matrix, S, and suppose I multiply it by another matrix. Oh, well. OK. Suppose– do I want to ask you this? Suppose I asked you about S times another matrix, M. Would that be positive definite or not? Now I'm going to tell you the answer is that the question wasn't any good, because that matrix is probably not symmetric, and I'm only dealing with symmetric matrices. Matrices have to be symmetric before I know they have real eigenvalues and I can ask these questions. So that's not good. But I could– oh, let's see. Let me put in an orthogonal guy, Q. Well, still that's not symmetric. But if I put its transpose over there, then I made it symmetric: Q transpose SQ. Oh, dear, I may be getting myself in trouble here. So I'm starting with a positive definite S. I'm hitting it with an orthogonal matrix and its transpose. And my instinct carried me here, because I know that that's still symmetric. Right? Everybody sees that? If I transpose this, Q transpose will come here, S, Q will go there. It'll be symmetric.

Now, is that positive definite? Ah, yes. We can answer that. Can we? Is that positive definite? So remember that this is an orthogonal matrix, so also, if you wanted me to write it that way, I could write Q inverse for Q transpose. And what about positive-definiteness of that thing? Answer, I think, is yes. Do you agree? It is positive definite? Give me a reason, though. Why is this positive definite? So that word similar– this is a similar matrix to S.

Do you remember what similar means from last time? It means that some M and its inverse are there– which they are, since Q transpose is Q inverse. And so what's the consequence of being similar? What do I know about a matrix that's similar to S? It has–

AUDIENCE: Same [INAUDIBLE]

GILBERT STRANG: Same eigenvalues. And therefore, we're good. Right? Or I could go this way. I like energy, so let me try that one. x transpose, Q transpose, SQx– that would be the energy. And what am I trying to show? I'm trying to show it's positive. So, of course, as soon as I see that, it's just waiting for me to let Qx be something called y, maybe. And then what will this be?

AUDIENCE: y [INAUDIBLE]

GILBERT STRANG: y transpose. So this energy would be the same as y transpose Sy. And what do I know about that? It's positive, because that's an energy in the y, for the y vector. So one way or another, we get the answer yes to that question.

OK. OK. Let me introduce the idea of semidefinite. Semidefinite is the borderline. So what did we have? We had 3, 4, 4. And then when it was 5, you told me indefinite– a negative eigenvalue. When it was 6, you told me 2 positive eigenvalues– definite. What's the borderline? What's the borderline there? It's not going to be an integer. What do I mean? What am I looking for, the borderline? So tell me again?

AUDIENCE: 16 over–

GILBERT STRANG: 16/3, that sounds right. Why is that the borderline?

AUDIENCE: [INAUDIBLE]

GILBERT STRANG: Because now the determinant is–

AUDIENCE: 0.

GILBERT STRANG: 0. It's singular. It has a 0 eigenvalue. There's a 0 eigenvalue. So that's what semidefinite means: a lambda is equal to 0. Wait a minute. That has a 0 eigenvalue because its determinant is 0. How do I know that the other eigenvalue is positive? Could it be that the other ei– so this is the semidefinite case, we hope. But we'd better finish that reasoning. How do I know that the other eigenvalue is positive?

AUDIENCE: Trace.

GILBERT STRANG: The trace, because adding 3 plus 16/3, whatever the heck that might give, it certainly gives a positive number. And that will be lambda 1 plus lambda 2. That's the trace. But lambda 2 is 0. We know from this it's singular. So we know lambda 2 is 0. So lambda 1 must be 3 plus 5 and 1/3– the lambdas must be 8 and 1/3 and 0. So that's positive semidefinite.

If you think of the positive definite matrices as some clump in matrix space, then the positive semidefinite ones are sort of the edge of that clump. They're the boundary of the clump– the ones that are not quite inside but not outside either. They're lying right on the edge of positive definite matrices.

Let me just take a– so what about a matrix of all 1s? What's the story on that one– positive definite, all the numbers are positive, or positive semidefinite, or indefinite? What do you think here? 1-1, all 1s.

AUDIENCE: Semi–

GILBERT STRANG: Semidefinite sounds like a good guess. Do you know what the eigenvalues of this matrix would be?

AUDIENCE: 0 [INAUDIBLE]

GILBERT STRANG: 3, 0, and 0– why did you say that?

AUDIENCE: Because 2 [INAUDIBLE]

GILBERT STRANG: Because we only have– the rank is?

AUDIENCE: 1.

GILBERT STRANG: Yeah, we introduced that key idea where the rank is 1. So there's only one nonzero eigenvalue. And then the trace tells me that number is 3. So this is a positive semidefinite matrix.
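[Editor's note: the borderline numbers are easy to confirm– here is a quick NumPy check, not from the lecture, of both semidefinite examples.]

```python
import numpy as np

borderline = np.array([[3.0, 4.0], [4.0, 16.0 / 3.0]])
print(np.linalg.det(borderline))       # 3 * 16/3 - 16 = 0 (up to roundoff): singular
print(np.linalg.eigvalsh(borderline))  # 0 and 25/3 = 8.333...: positive semidefinite
print(np.trace(borderline))            # 3 + 16/3 = 8 1/3, the sum of the eigenvalues

ones = np.ones((3, 3))                 # the all-1s matrix
print(np.linalg.matrix_rank(ones))     # rank 1: only one nonzero eigenvalue
print(np.linalg.eigvalsh(ones))        # 0, 0, 3: positive semidefinite
```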
So all these tests change a little for semidefinite. The eigenvalues are greater than or equal to 0. The energy is greater than or equal to 0. The A transpose A– but now I don't require– oh, I didn't discuss this. But semidefinite would allow dependent columns in A. By the way, you've got to do this for me: write that matrix as A transpose times A, just to see that it's semidefinite. So write that as A transpose A. Yeah. If it's a rank 1 matrix, you know what it must look like. A transpose A– how many terms am I going to have in this? And now I'm thinking back to the very beginning of this course, if I pulled off the pieces. In general, this is lambda 1 times the first eigenvector, times the first eigenvector transposed.

AUDIENCE: Would it just be a vector of three 1s?

GILBERT STRANG: Yeah, it would just be a vector of three 1s. Yeah. So this would be the usual picture. This is the same as the Q lambda Q transpose. This is the big fact for any symmetric matrix. And this is symmetric, but its rank is only 1, so lambda 2 is 0 for that matrix. Lambda 3 is 0 for that matrix. And the one eigenvector is the vector 1-1-1. So I was going to do 3 times 1-1-1, times 1-1-1 transposed. But that gives me 3-3-3. That's not right.

AUDIENCE: Normalize them.

GILBERT STRANG: I have to normalize them. That's right. Yeah. So that's a vector whose length is the square root of 3. So I have to divide by that, and divide by it. And then the 3 cancels the square roots of 3, and I'm just left with 1-1-1 times 1-1-1 transposed. Yeah.

AUDIENCE: [INAUDIBLE]

GILBERT STRANG: So there is a matrix– one of our building-block type matrices, because it only has one nonzero eigenvalue. Its rank is 1, so it could not be positive definite. It's singular. But it is positive semidefinite, because that eigenvalue is positive.

OK. So you've got the idea of positive definite matrices. Again, any one of those five tests is enough to show that it's positive definite. And so what's my goal next week? It's the singular value decomposition and all that that leads us to. We're there now, ready for the SVD. OK. Have a good weekend, and see you– oh, I see you on Tuesday, I guess. Right– not Monday but Tuesday next week.
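[Editor's note: the “write that matrix as A transpose A” exercise can be checked in a few lines– a sketch, not from the lecture. Take A to be a single row of 1s, and compare with lambda 1 times the normalized eigenvector times its transpose.]

```python
import numpy as np

A = np.ones((1, 3))           # A = [1 1 1]: one row, so A^T A has rank 1 and dependent columns
print(A.T @ A)                # A^T A is the all-1s matrix: semidefinite by construction

q = np.ones(3) / np.sqrt(3)   # the eigenvector 1-1-1, divided by its length sqrt(3)
print(3 * np.outer(q, q))     # lambda_1 q q^T: the 3 cancels the square roots of 3

# the full symmetric picture S = Q Lambda Q^T recovers the same matrix
lam, Q = np.linalg.eigh(np.ones((3, 3)))
print(np.allclose(Q @ np.diag(lam) @ Q.T, np.ones((3, 3))))  # True
```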

