Singular Value Decomposition (the SVD)

MIT OpenCourseWare
MIT RES.18-009 Learn Differential Equations: Up Close with Gilbert Strang and Cleve Moler, Fall 2015...
Video Transcript:
Okay. The previous video was about positive definite matrices. This video is also linear algebra: a very interesting way to break up a matrix, called the singular value decomposition, and everybody says SVD for singular value decomposition. What is that factoring? What are the three pieces of the SVD? The fact is: every matrix, every rectangular matrix, factors into three pieces, A = UΣVᵀ. People use those letters for the three factors. The factor U is an orthogonal matrix. The factor Σ in the middle is a diagonal matrix. The factor Vᵀ on the right is also an orthogonal matrix. So I have orthogonal, diagonal, orthogonal, or physically: rotation, stretching, rotation.

Now, we have seen three factors for a matrix before: VΛV⁻¹. What's the difference between this SVD and that VΛV⁻¹ for diagonalizing other matrices? The Λ is diagonal and the Σ is diagonal, but they're different. The key point is that I now have two different matrices, not just V and V⁻¹, and the great advantage is that both of them are orthogonal matrices. And I can do it for rectangular matrices also; eigenvalues really only worked for square matrices. Now we have an input space and an output space, and those spaces can have different dimensions m and n. By allowing two separate bases, we get rectangular matrices, and we get orthogonal factors with, again, a diagonal in the middle. These numbers σ, instead of eigenvalues, are called singular values.
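The three-piece factorization and the orthogonality of U and V can be checked numerically. A minimal NumPy sketch, using an arbitrary 3×2 matrix to show that the rectangular case works:

```python
# A minimal check of the factorization A = U Σ Vᵀ for a rectangular
# matrix (the 3x2 matrix here is an arbitrary illustrative choice).
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])          # 3x2: input and output spaces differ

U, s, Vt = np.linalg.svd(A)         # s holds the singular values, descending

# U (3x3) and V (2x2) are orthogonal: Uᵀ U = I and Vᵀ V = I
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(2))

# Rebuild A: place the singular values on the diagonal of a 3x2 Σ
Sigma = np.zeros_like(A)
np.fill_diagonal(Sigma, s)
assert np.allclose(U @ Sigma @ Vt, A)
```

Note that Σ has the same rectangular shape as A; NumPy returns only the diagonal entries, so the sketch rebuilds the full Σ explicitly.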
These are the singular values, and the columns of U and V are the singular vectors: the right singular vectors (the columns of V) and the left singular vectors (the columns of U). That's the statement of the factorization, but we have to think a little: what are those factors, and can we see why this works? Let me look at AᵀA; I like AᵀA. Aᵀ will be the transpose of UΣVᵀ, which is VΣᵀUᵀ, and I multiply by A = UΣVᵀ. What do I have? Well, I've got six matrices, but UᵀU in there is the identity, because U is an orthogonal matrix. So I really have just the V on one side, a ΣᵀΣ, which is diagonal, and a Vᵀ on the right. Oh, this I recognize: a V, a diagonal matrix, a Vᵀ. What we've reached is the diagonalization; the usual eigenvalues are in the diagonal factor and the eigenvectors are in V, but the matrix is AᵀA. Once again, A was rectangular, completely general, and we couldn't expect perfect results. But when we went to AᵀA, that gave us a positive semidefinite matrix, symmetric for sure. Its eigenvectors will be orthogonal; that's how I know this V matrix. The eigenvectors of this symmetric matrix are orthogonal, and the eigenvalues are nonnegative: they're the squares of the singular values. So this is telling me that the λ's for AᵀA are the σ²'s for A itself: λ for AᵀA is σ² for the matrix A.
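That relation, λ(AᵀA) = σ², is easy to check numerically. A small sketch (the 2×2 matrix here is an arbitrary choice):

```python
# Checking the claim that the eigenvalues of AᵀA are the squared
# singular values of A.
import numpy as np

A = np.array([[2.0, 2.0],
              [1.0, 1.0]])

s = np.linalg.svd(A, compute_uv=False)      # singular values, descending
lam = np.linalg.eigvalsh(A.T @ A)[::-1]     # eigenvalues of AᵀA, descending

assert np.allclose(lam, s**2)               # λ = σ² for each pair
```

`eigvalsh` is the right call here because AᵀA is symmetric; it returns eigenvalues in ascending order, hence the reversal before comparing.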
Well, that tells me V, and that tells me Σ. U disappeared here, because UᵀU was the identity; it just went away. How would I get hold of U? Here's one way to see it: I multiply A times Aᵀ, in that order. So now I have UΣVᵀ times its transpose, which is VΣᵀUᵀ. I'm having a lot of fun here with transposes, but VᵀV is now the identity in the middle. So what do I learn here? I learn that U is the eigenvector matrix for AAᵀ. And these have the same eigenvalues: AB has the same eigenvalues as BA, so AᵀA and AAᵀ have the same eigenvalues. One has eigenvectors V, the other has eigenvectors U, and those are the V and the U in the singular value decomposition.

Well, I have to show you an example, and an application, and that's it. So here's an example. Suppose I make A a square matrix, [2 2; 1 1]: not symmetric, and certainly not positive definite; I wouldn't use that word, because the matrix is not symmetric. But it's got an SVD, three factors, and I work them out. Here U is the orthogonal matrix [2 −1; 1 2], where I have to divide by √5 to make the columns unit vectors. The two columns are orthogonal, so that's a perfectly good U. Then for Σ: I did want 1, 1 in that second row, so I have a singular matrix, determinant zero. So one of my eigenvalues, and singular values, will be zero, and it turns out √10 is the other singular value for this matrix. Now I'll put in the Vᵀ matrix, which is [1 1; 1 −1], and those rows have length √2, which I have to divide by. Well, I didn't do that so smoothly, but the result is clear: A = UΣVᵀ, and the singular values of this matrix are √10 and 0.
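The worked example can be verified numerically. A sketch, taking the example matrix as I read it from the lecture, A = [2 2; 1 1] (singular, determinant zero), with the hand-computed factors above:

```python
# The lecture's example checked numerically: A = [[2, 2], [1, 1]] is
# singular, so one singular value is 0 and the other is √10.
import numpy as np

A = np.array([[2.0, 2.0],
              [1.0, 1.0]])

s = np.linalg.svd(A, compute_uv=False)
# s ≈ [3.1622..., 0.0], i.e. [√10, 0]

# Hand-computed factors (columns of U scaled by 1/√5, rows of Vᵀ
# scaled by 1/√2) reproduce A exactly:
U_hand  = np.array([[2.0, -1.0], [1.0, 2.0]]) / np.sqrt(5)
Sigma   = np.diag([np.sqrt(10), 0.0])
Vt_hand = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
assert np.allclose(U_hand @ Sigma @ Vt_hand, A)
```

NumPy's own U and Vᵀ may differ from the hand-computed ones by sign flips of paired columns; the product UΣVᵀ is the same either way.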
The 0 is there because it's a singular matrix. And the singular vectors of the matrix are the left singular vectors and the right singular vectors. Okay, that looks good to me.

And now an application, to finish. A first application, very important all the time in this century: we're getting matrices with data in them. Maybe in the life sciences we test a bunch of samples, people, for genes. My data comes, somehow, as a gene expression matrix. I have samples, people 1, 2, 3, in the columns, and in the rows, let me say 4 rows, I have genes, gene expressions. That would be completely normal: a rectangular matrix, because the number of people and the number of genes is not the same, and in reality those are both very, very big numbers. So I have a large matrix, and each number in the matrix is telling me how much that gene is expressed by that person. We may be searching for genes causing some disease, so we take several people, some well, some with the disease, we check on the genes, we get a big matrix, and we look to understand something out of it.

What can we understand? What are we looking for? We're looking for the correlation, the connection, between some combination of genes and some combination of people: a gene–people connection. But it's not going to be person number one. We're not looking for one person; we're looking to find a mixture of those people. So we're going to have sort of an eigensample, an eigenperson. "Eigenpeople", oh, that's terrible; "eigenperson" would be better. I think I'm seeing an eigenperson.

Here's a main point. Just as in this example, it's the first vectors u₁ and v₁ and the biggest σ that are all-important. Well, in that example the other σ was zero, nothing, but in this application I'll probably have several different σ's. It's the largest σ, with the first u₁ and the first v₁, that combination that I want: u₁σ₁v₁ᵀ, the first eigenvector of AᵀA and of AAᵀ, with the biggest singular value. That's the information. That's the best sort of put-together person, the eigenperson combination of these people, and the best combination of genes. In statistics I would say it has the greatest variance; in ordinary English I would say the most information. The most information in this big matrix is in this very special matrix with only rank 1: a single column, repeated, a single row, repeated, and a number σ₁. It's that number σ₁ that tells me how much, because remember u₁ is a unit vector and v₁ is a unit vector; so it's that unit vector, times that key number, times that unit vector.

What I'm talking about here is principal component analysis. I'm looking for the principal component, this part u₁σ₁v₁ᵀ. It's a big application in applied statistics; in large-scale drug tests, statisticians really have a central place, and this is on the research side: to get the information out of a big sample. So u₁ is sort of a combination of the people, v₁ is a combination of the genes, and σ₁ is the biggest number I can get. That's PCA, all coming from the singular value decomposition. Thank you.
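The rank-1 idea can be sketched in code: the leading term σ₁u₁v₁ᵀ is the best rank-1 approximation of the matrix (this is the Eckart–Young theorem), and the leftover error has spectral norm σ₂. The 4×3 genes-by-people matrix below is made-up illustrative data, not from the lecture:

```python
# Sketch of the PCA idea: the top singular triplet σ1 u1 v1ᵀ is the
# best rank-1 approximation of the data matrix (Eckart–Young).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))      # 4 "genes" x 3 "people", hypothetical

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-1 piece: "eigen-gene" u1, times σ1, times "eigen-person" v1ᵀ
rank1 = s[0] * np.outer(U[:, 0], Vt[0, :])

# The remaining error has spectral norm σ2, the next singular value:
err = np.linalg.norm(A - rank1, ord=2)
assert np.isclose(err, s[1])
```

In practice PCA centers each row (or column) of the data before the SVD, so that the singular directions measure variance about the mean; the sketch skips that step to stay close to the lecture's description.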