
Chapter 15: A quick trick for computing eigenvalues

This is a video for anyone who already knows what eigenvalues and eigenvectors are, and who might enjoy a quick way to compute them in the case of 2x2 matrices. If you're unfamiliar with eigenvalues, take a look at the previous chapter which introduces them.

You can skip ahead if you just want to see the trick, but if possible we'd like you to rediscover it for yourself, so let's lay down a little background.

As a quick reminder, if the effect of a linear transformation on a given vector is to scale it by some constant, we call that vector an eigenvector of the transformation, and we call the relevant scaling factor the corresponding eigenvalue, often denoted with the letter lambda, $\lambda$.

When you write this as an equation and rearrange a little, what you see is that if a number $\lambda$ is an eigenvalue of a matrix $A$, then the matrix $(A - \lambda I)$ must send some nonzero vector, namely the corresponding eigenvector, to the zero vector, which in turn means the determinant of this modified matrix must be $0$.

$$\begin{aligned} A \vec{\mathbf{v}} & =\lambda I \vec{\mathbf{v}} \\ A \vec{\mathbf{v}}-\lambda I \vec{\mathbf{v}} & =\vec{\mathbf{0}} \\ (A-\lambda I) \vec{\mathbf{v}} & =\vec{\mathbf{0}} \\ \operatorname{det}(A-\lambda I) &= 0 \end{aligned}$$

That's a bit of a mouthful, but again, we're assuming this is review for anyone reading.

The usual way to compute eigenvalues, and how most students are taught to carry it out, is to subtract a variable lambda off the diagonal of a matrix and solve for when the determinant equals $0$. For example, when finding the eigenvalues of the matrix $\left[\begin{array}{ll} 3 & 1 \\ 4 & 1 \end{array}\right]$, this looks like:

$$\operatorname{det}\left(\left[\begin{array}{cc} 3-\lambda & 1 \\ 4 & 1-\lambda \end{array}\right]\right)=(3-\lambda)(1-\lambda)-(1)(4) = 0$$

Expanding and simplifying this always takes a few steps, leaving you with a clean quadratic known as the “characteristic polynomial” of the matrix. The eigenvalues are the roots of this polynomial, so to find them you apply the quadratic formula, which typically requires one or two more steps of simplification.

$$\begin{aligned} \operatorname{det}\left(\left[\begin{array}{cc} 3-\lambda & 1 \\ 4 & 1-\lambda \end{array}\right]\right) & =(3-\lambda)(1-\lambda)-(1)(4) \\ & =\left(3-4 \lambda+\lambda^2\right)-4 \\ & =\lambda^2-4 \lambda-1=0 \\ \lambda_1, \lambda_2 & = \frac{4 \pm \sqrt{4^2-4(1)(-1)}}{2} = 2 \pm \sqrt{5} \end{aligned}$$
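
If you like to see such procedures as code, here's a minimal sketch of this traditional route in Python. The function name `char_poly_eigenvalues` is ours, purely for illustration, and it assumes real eigenvalues:

```python
import math

def char_poly_eigenvalues(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial.

    det([[a - t, b], [c, d - t]]) expands to t^2 - (a + d)t + (ad - bc).
    Assumes a nonnegative discriminant, i.e. real eigenvalues.
    """
    trace = a + d
    det = a * d - b * c
    root = math.sqrt(trace**2 - 4 * det)  # the quadratic formula's square root
    return (trace + root) / 2, (trace - root) / 2

print(char_poly_eigenvalues(3, 1, 4, 1))  # (4.236..., -0.236...), i.e. 2 ± √5
```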

This process isn't terrible, but at least for 2x2 matrices, there's a much more direct way to get at this answer. If you want to rediscover this trick, there are three relevant facts you'll need to know, each of which is worth knowing in its own right and can help with other problem-solving.

  1. The trace of a matrix, which is the sum of its two diagonal entries, is equal to the sum of the eigenvalues.

    $$\operatorname{tr}\left(\left[\begin{array}{cc}a & b \\ c & d\end{array}\right]\right)=a+d=\lambda_1+\lambda_2$$

    Or another way to phrase it, more useful for our purposes, is that the mean of the two eigenvalues is the same as the mean of the two diagonal entries.

    $$\frac{1}{2} \operatorname{tr}\left(\left[\begin{array}{cc}a & b \\ c & d\end{array}\right]\right)=\frac{a+d}{2}=\frac{\lambda_1+\lambda_2}{2}$$

  2. The determinant of a matrix, our usual $ad-bc$ formula, equals the product of the two eigenvalues.

    $$\operatorname{det}\left(\left[\begin{array}{ll}a & b \\ c & d\end{array}\right]\right)=a d-b c=\lambda_1 \lambda_2$$

    This should make sense if you understand that an eigenvalue describes how much an operator stretches space in a particular direction and that the determinant describes how much an operator scales areas (or volumes). (A quick numeric check of these first two facts follows this list.)

  3. (We'll get to this...)
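
Here is that numeric check: a small NumPy sketch confirming the first two facts on the matrix from the earlier example.

```python
import numpy as np

A = np.array([[3.0, 1.0], [4.0, 1.0]])
eigenvalues = np.linalg.eigvals(A)

# Fact 1: the trace equals the sum of the eigenvalues (both print 4.0).
print(np.trace(A), eigenvalues.sum())

# Fact 2: the determinant equals their product (both print -1.0, up to rounding).
print(np.linalg.det(A), eigenvalues.prod())
```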

Before getting to the third fact, notice how you can essentially read these first two values out of the matrix. Take this matrix $\left[\begin{array}{ll}8 & 4 \\ 2 & 6\end{array}\right]$ as an example. Straight away you can know that the mean of the eigenvalues is the same as the mean of $8$ and $6$, which is $7$.

$$m = \frac{\lambda_1 + \lambda_2}{2} = 7$$

Likewise, most linear algebra students are well-practiced at finding the determinant, which in this case is $8 \cdot 6 - 4 \cdot 2 = 48 - 8$, so you know the product of our two eigenvalues is $40$.

$$p = \lambda_1 \lambda_2 = 40$$

Take a moment to see how you can derive what will be our third relevant fact, which is how to recover two numbers when you know their mean and product.

Focus on this example. You know the two values are evenly spaced around $7$, so they look like $7$ plus or minus something; let's call it $d$ for distance.

You also know that the product of these two numbers is $40$.

$$40 = (7+d)(7-d)$$

To find $d$, notice how this product expands nicely as a difference of squares. This lets you cleanly solve for $d$:

$$\begin{aligned} 40 & = (7+d)(7-d) \\ 40 & = 7^2-d^2 \\ d^2 & = 7^2-40 \\ d^2 & = 9 \\ d & = 3 \end{aligned}$$

In other words, the two values for this very specific example work out to be $4$ and $10$.

But our goal is a quick trick, and you wouldn't want to think through this each time, so let's wrap up what we just did in a general formula.

For any mean $m$ and product $p$, the distance squared is always going to be $m^2 - p$. This gives the third key fact: when two numbers have mean $m$ and product $p$, you can recover those two numbers as $m \pm \sqrt{m^2 - p}$.
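
In code, this third fact is a one-liner. Here's a minimal sketch; the helper name `mean_product_roots` is our own, and it assumes the two numbers are real:

```python
import math

def mean_product_roots(m, p):
    """Two numbers with mean m and product p, namely m ± sqrt(m^2 - p).

    Assumes m*m - p >= 0, so that the two numbers are real.
    """
    d = math.sqrt(m * m - p)
    return m + d, m - d

print(mean_product_roots(7, 40))  # (10.0, 4.0), matching the example above
```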

This is decently fast to rederive on the fly if you ever forget it, and it's essentially just a rephrasing of the difference-of-squares formula, but even still, it's a fact worth memorizing. In fact, Tim from Acapella Science wrote us a quick jingle to make it a little more memorable.

Examples

Let me show you how this works, say for the matrix $\left[\begin{array}{cc}3 & 1 \\ 4 & 1\end{array}\right]$. You start by thinking of the formula, stating it all in your head.

But as you write it down, you fill in the appropriate values of $m$ and $p$ as you go. Here, the mean of the eigenvalues is the same as the mean of $3$ and $1$, which is $2$, so you start by writing:

$$\lambda_1, \lambda_2 = 2 \pm \sqrt{2^2 - \dots}$$

The product of the eigenvalues is the determinant, which in this example is $3 \cdot 1 - 1 \cdot 4 = -1$, so that's the final thing you fill in.

$$\lambda_1, \lambda_2 = 2 \pm \sqrt{2^2 - (-1)}$$

So the eigenvalues are $2 \pm \sqrt{5}$. You may have noticed this is the same matrix we were using at the start; notice how much more directly we can get at the answer compared to the characteristic polynomial route.
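
In the same sketched Python style as before, the entire computation for this matrix amounts to:

```python
import math

m = (3 + 1) / 2      # mean of the diagonal entries: 2
p = 3 * 1 - 1 * 4    # determinant: -1
d = math.sqrt(m * m - p)
print(m + d, m - d)  # 4.236... and -0.236..., i.e. 2 ± √5
```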

Here, let's try another one using the matrix $\left[\begin{array}{ll}2 & 7 \\ 1 & 8\end{array}\right]$. This time the mean of the eigenvalues is the same as the mean of $2$ and $8$, which is $5$. So again, start writing out the formula, but writing $5$ in place of $m$:

$$\lambda_1, \lambda_2 = 5 \pm \sqrt{5^2 - \dots}$$

The determinant is $2 \cdot 8 - 7 \cdot 1 = 9$. So in this example, the eigenvalues look like $5 \pm \sqrt{16}$, which gives us $9$ and $1$.

$$\lambda_1, \lambda_2 = 5 \pm \sqrt{5^2 - 9} = 9, 1$$

You see what we mean about how you can basically just write down the eigenvalues while staring at the matrix? It's typically just the tiniest bit of simplifying at the end.

What are the eigenvalue(s) of the matrix $\left[\begin{array}{ll}2 & 3 \\ 2 & 4\end{array}\right]$?

This trick is especially useful when you need to read off the eigenvalues from small examples without losing the main line of thought by getting bogged down in calculations.

For more practice, let's try this out on a common set of matrices which pop up in quantum mechanics, known as the Pauli spin matrices.
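
For reference, these three matrices are:

$$\sigma_x=\left[\begin{array}{ll}0 & 1 \\ 1 & 0\end{array}\right], \qquad \sigma_y=\left[\begin{array}{cc}0 & -i \\ i & 0\end{array}\right], \qquad \sigma_z=\left[\begin{array}{cc}1 & 0 \\ 0 & -1\end{array}\right]$$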

If you know quantum mechanics, you'll know the eigenvalues of these are highly relevant to the physics they describe, and if you don't, then let this just be a little glimpse of how these computations are actually relevant to real applications.

The mean of the diagonal in all three cases is $0$, so the mean of the eigenvalues in all cases is $0$, making our formula look especially simple.

What about the products of the eigenvalues, the determinants? For the first one, it's $0 - 1$, or $-1$. The second also looks like $0 - 1$, though it takes a moment more to see because of the complex numbers. And the final one looks like $-1 - 0$. So in all three cases, the eigenvalues are $\pm 1$.
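
If you'd like a numerical double-check of all three, here's a short NumPy sketch:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

for name, sigma in [("x", sigma_x), ("y", sigma_y), ("z", sigma_z)]:
    # Each line prints eigenvalues numerically equal to +1 and -1.
    print(name, np.linalg.eigvals(sigma))
```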

Although in this case you don't even really need the formula to find two values evenly spaced around zero whose product is $-1$.

If you're curious, in the context of quantum mechanics, these matrices correspond with observations you might make about the spin of a particle in the $x$, $y$, or $z$ direction, and the fact that these eigenvalues are $\pm 1$ corresponds with the idea that the values for the spin you would observe would be entirely in one direction or entirely in another, as opposed to something continuously ranging in between.

Maybe you'd wonder how exactly this works, and why you'd use 2x2 matrices with complex numbers to describe spin in three dimensions. Those would be valid questions, just beyond the scope of what we're talking about here.

You know, it's funny: this section is supposed to be about a case where 2x2 matrices are not just toy examples or homework problems, but actually come up in practice, and quantum mechanics is great for that. However, the example kind of undercuts the whole point we're trying to make. For these specific matrices, if you use the traditional method with characteristic polynomials, it's essentially just as fast, and might actually be faster.

For the first matrix, the relevant determinant directly gives you a characteristic polynomial of $\lambda^2 - 1$, which clearly has roots of plus and minus $1$. Same answer for the second matrix. And for the last one, forget about doing any computations, traditional or otherwise; it's already a diagonal matrix, so those diagonal entries are the eigenvalues!

However, the example is not totally lost on our cause. Where you would actually feel the speedup is the more general case, where you take a linear combination of these three matrices and then try to compute the eigenvalues.

We might write this as $a$ times the first one, plus $b$ times the second, plus $c$ times the third. In physics, this would describe spin observations in the direction of a vector $\left[\begin{array}{c} a \\ b \\ c \end{array}\right]$.

More specifically, you should assume this vector is normalized, meaning $a^2 + b^2 + c^2 = 1$. When you look at this new matrix, it's immediately clear that the mean of the eigenvalues is still zero, and you may enjoy pausing for a brief moment to confirm that the product of those eigenvalues is still $-1$, and from there concluding what the eigenvalues must be.
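
If you want to check your work after that pause, the combined matrix and its determinant expand as:

$$a \sigma_x + b \sigma_y + c \sigma_z = \left[\begin{array}{cc} c & a-bi \\ a+bi & -c \end{array}\right]$$

$$\operatorname{det}\left(a \sigma_x + b \sigma_y + c \sigma_z\right) = -c^2 - (a-bi)(a+bi) = -\left(a^2+b^2+c^2\right) = -1$$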

The characteristic polynomial approach, on the other hand, is now actually more cumbersome to do in your head.

Relation to the quadratic formula

To be clear, using the mean-product formula is the same thing as finding roots of the characteristic polynomial; it has to be. In fact, this formula is a nice way to think about solving quadratics in general and some viewers of the channel may recognize this.

If you're trying to find the roots of a quadratic given its coefficients, you can think of that as a puzzle where you know the sum of two values, and you know their product, and you're trying to recover the original two values.

Specifically, if the polynomial is normalized so that the leading coefficient is $1$, then the mean of the roots is $-1/2$ times the linear coefficient; for a polynomial like $x^2 - 10x + 9$, that would be $5$. The product of the roots is even easier: it's just the constant term, $9$ in that example. From there, you'd apply the mean-product formula to find the roots.
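
Here's that recipe as a quick Python sketch, again assuming real roots; the helper name `quadratic_roots` is our own:

```python
import math

def quadratic_roots(b, c):
    """Roots of x^2 + b*x + c, via the mean-product framing.

    Mean of the roots: -b / 2. Product of the roots: c.
    Assumes the roots are real, i.e. (b/2)^2 - c >= 0.
    """
    m = -b / 2
    d = math.sqrt(m * m - c)
    return m + d, m - d

print(quadratic_roots(-10, 9))  # (9.0, 1.0), the roots of x^2 - 10x + 9
```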

Now, you could think of the mean-product formula as being a lighter-weight reframing of the traditional quadratic formula. But the real advantage is that the terms have more meaning to them.

The whole point of this eigenvalue trick is that because you can read out the mean and product directly from the matrix, you can jump straight to writing down the roots without thinking about what the characteristic polynomial looks like. But to do that, we need a version of the quadratic formula where the terms carry some kind of meaning.

What are the eigenvalue(s) of the matrix $\left[\begin{array}{ll}3 & 1 \\ 5 & 7\end{array}\right]$?

What are the eigenvalue(s) of the matrix $\left[\begin{array}{ll}8 & 4 \\ 2 & 6\end{array}\right]$?

Last thoughts

The hope is that it's not just one more thing to memorize, but that the framing reinforces other nice facts worth knowing, like how the trace and determinant relate to eigenvalues. If you want to prove these facts, by the way, take a moment to expand out the characteristic polynomial for a general matrix, and think hard about the meaning of each coefficient.
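
If you'd like something to check your expansion against, it works out to:

$$\operatorname{det}\left(\left[\begin{array}{cc} a-\lambda & b \\ c & d-\lambda \end{array}\right]\right) = \lambda^2 - \underbrace{(a+d)}_{\operatorname{tr}(A)} \lambda + \underbrace{(ad-bc)}_{\operatorname{det}(A)}$$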

Many thanks to Tim for ensuring that the mean-product formula will stay stuck in all of our heads for at least a few months.

If you don't know about his channel, do check it out. The Molecular Shape of You, in particular, is one of the greatest things on the internet.
