3Blue1Brown

Chapter 6: The determinant

"The purpose of computation is insight, not numbers."

— Richard Hamming

Scaling Area

One thing that turns out to be surprisingly useful for understanding a linear transformation is to measure exactly how much it stretches and squishes things. More specifically, to measure the factor by which the area of a given region increases or decreases.

For example, the matrix \begin{bmatrix}3&0\\0&2\end{bmatrix} scales \hat i by a factor of 3 and scales \hat j by a factor of 2. How can we tell the factor by which space gets stretched? Focus your attention on the 1\times 1 square whose bottom sits on \hat i and whose left side sits on \hat j. After the transformation, this turns into a 2\times 3 rectangle.

Since this region started with area 1 and ended up with area 2\times 3=6, we can say the linear transformation has scaled its area by a factor of 6.

Or consider a shear, whose matrix is \begin{bmatrix}1&1\\0&1\end{bmatrix}. The unit square gets slanted into a parallelogram, but since that parallelogram still has base 1 and height 1, its area is still 1. So even though this transformation smooshes things about, it seems to leave areas unchanged, at least in the case of that one unit square. Actually, though, if you know how much the area of that one unit square changes, it tells you how the area of any region in space changes. For starters, notice that whatever happens to one square in the grid happens to any other square in the grid, no matter the size of that other square. This is because grid lines remain parallel and evenly spaced.

Then, any shape that’s not a grid square can be approximated by grid squares, with arbitrarily good approximations if you use small enough grid squares. So, since the areas of all those tiny grid squares are being scaled by a certain amount, the area of the shape as a whole will be scaled by that same amount.

This special scaling factor, the factor by which a linear transformation changes areas, is called the “determinant” of that transformation.
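
If you'd like to check those two examples numerically, here is a minimal sketch using NumPy: the scaling matrix changes areas by a factor of 6, while the shear leaves them unchanged.

```python
import numpy as np

# The matrix that scales i-hat by 3 and j-hat by 2: areas grow by a factor of 6.
scale = np.array([[3.0, 0.0],
                  [0.0, 2.0]])

# The shear: the unit square slants into a parallelogram, but its area stays 1.
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

print(np.linalg.det(scale))  # 6.0
print(np.linalg.det(shear))  # 1.0
```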

I’ll show how to compute the determinant of a transformation using its matrix later in this chapter, but understanding what it represents is much more important than the computation.

For example, the determinant of a transformation would be 3 if that transformation increases the area of a region by a factor of 3.

The determinant of a transformation is \frac12 if it squishes down all areas by a factor of 1/2.


The determinant of a 2D transformation is 0 if it squishes all of space onto a line, or even onto a single point, since the area of every region would then become 0.

That last one is especially important; checking if the determinant of a given matrix is 0 gives a way of telling whether or not the transformation associated with that matrix squishes everything into a smaller dimension. You’ll see in the next few chapters why this is a useful thing to think about. For now I just want to lay down the visual intuition, which in and of itself is a beautiful thing to think about.
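
For a concrete sketch of that zero-determinant case, take a matrix whose second column is just half of its first column; every output lands on a single line, and NumPy reports a determinant of 0:

```python
import numpy as np

# j-hat lands on half of where i-hat lands, so every output vector
# lies on the single line through (2, 4): the plane collapses onto a line.
M = np.array([[2.0, 1.0],
              [4.0, 2.0]])

print(np.linalg.det(M))  # 0.0 -- all areas get squished down to zero
```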

Negative determinant

Actually, what I’ve said so far is not quite right. The full concept of the determinant allows for negative values, but what does the idea of scaling an area by a negative amount mean?

\det\left( \begin{bmatrix} \color{green}1 & \color{red}2 \\ \color{green}3 & \color{red}4 \end{bmatrix} \right) = \color{orange}-2

This has to do with the idea of orientation. For example, notice how normally \hat j is to the left of \hat i.

However, after this transformation, L(\hat j) is to the right of L(\hat i).

If you thought of 2D space as a sheet of paper, a transformation like that one seems to turn over that sheet to the other side. Any transformations that do this are said to “invert the orientation of space”. Whenever this happens, the determinant will be negative. The absolute value of the determinant still tells you the factor by which areas have been scaled.

\det\left( \begin{bmatrix} 1.5 & 1 \\ 2 & -2 \end{bmatrix} \right) = -5

What does the determinant of the matrix \begin{bmatrix}1&2\\3&4\end{bmatrix} tell us about the space?

Our answer:

Since \det\left(\begin{bmatrix}1 & 2 \\ 3 & 4\end{bmatrix}\right) = -2, we know this transformation flips the orientation of space and doubles areas.

Why would the idea of negative area be a natural way to describe orientation flipping? Well, think about the series of transformations you get by slowly letting \hat i get closer and closer to \hat j. As \hat i gets closer, all of the areas in space are getting squished more and more, meaning the determinant approaches 0. Once \hat i lines up with \hat j, the determinant is 0. Then, if \hat i continues the way it was going, doesn’t it feel kind of natural for the determinant to keep decreasing into negative numbers?
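
Here is a quick sketch of that experiment, assuming we keep \hat j fixed at (0, 1) and rotate \hat i toward it and past it; the determinant works out to \cos\theta, which shrinks to 0 and then turns negative:

```python
import numpy as np

# Keep j-hat fixed at (0, 1) and rotate i-hat from (1, 0) toward and past it.
j_hat = np.array([0.0, 1.0])
for degrees in (0, 30, 60, 90, 120, 150):
    theta = np.radians(degrees)
    i_hat = np.array([np.cos(theta), np.sin(theta)])

    # The columns of the matrix are wherever the basis vectors land.
    M = np.column_stack([i_hat, j_hat])
    print(degrees, round(np.linalg.det(M), 3))

# The determinant is cos(theta): positive at first, 0 at 90 degrees
# when i-hat lines up with j-hat, then negative once orientation flips.
```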

In Three Dimensions

Okay, so that’s the understanding of the determinant in two dimensions. What do you think it should mean in three dimensions? It also tells you how much the transformation scales things, but this time it tells you how much volumes get scaled.

Just as in two dimensions it was easiest to think about this by focusing on one particular square with area 1 and watching only what happens to it, in three dimensions it helps to focus your attention on the specific 1\times 1\times 1 cube whose edges are resting on the basis vectors \hat i, \hat j and \hat k.

After the transformation, that cube might get warped into a slanty cube. That shape has the best name ever: parallelepiped. A name made even more delightful when your professor has a thick Russian accent.

Since this cube starts with a volume of 1, and the determinant gives the factor by which any volume is scaled, you can think of the determinant as simply being the volume of the parallelepiped that this cube turns into.

\det\left( \begin{bmatrix} \color{green}1.0 & \color{red}0.0 & \color{blue}0.05 \\ \color{green}0.15 & \color{red}1.0 & \color{blue}0.05 \\ \color{green}0.0 & \color{red}0.25 & \color{blue}1.0 \end{bmatrix} \right) = \begin{matrix} \text{Volume of this} \\ \text{parallelepiped} \end{matrix}

A determinant of zero would mean that all of space is squished onto something with zero volume, meaning a flat plane, a line, or in the most extreme case into a single point at the origin.

For example, the matrix below transforms the unit cube into a flat parallelepiped, also known as a parallelogram.

\det\left( \begin{bmatrix} \color{green}1.0 & \color{red}0.0 & \color{blue}1.0 \\ \color{green}0.5 & \color{red}1.0 & \color{blue}1.5 \\ \color{green}1.0 & \color{red}0.0 & \color{blue}1.0 \end{bmatrix} \right) = 0

Those who watched chapter 2 will recognize this as meaning the columns of the matrix are linearly dependent.

Describe why the columns of this matrix are linearly dependent.

Our answer:

Any one of these columns can be written as a linear combination of the other two. For example:

\color{blue}\begin{bmatrix} 1.0\\1.5\\1.0 \end{bmatrix} \color{black}- \color{green}\begin{bmatrix} 1.0\\0.5\\1.0 \end{bmatrix} \color{black}= \color{red}\begin{bmatrix} 0.0\\1.0\\0.0 \end{bmatrix}
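
You can also verify this numerically; a quick NumPy sketch shows the determinant is 0, the third column is the sum of the first two, and the columns only span a plane:

```python
import numpy as np

M = np.array([[1.0, 0.0, 1.0],
              [0.5, 1.0, 1.5],
              [1.0, 0.0, 1.0]])

print(np.linalg.det(M))              # 0 (up to floating-point error)
print(M[:, 0] + M[:, 1] == M[:, 2])  # [ True  True  True]: third column = first + second
print(np.linalg.matrix_rank(M))      # 2 -- the columns only span a plane
```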

Negative determinants in 3D

What about negative determinants in three dimensions? What would that mean?

One way to describe orientation in three dimensions is with the right-hand rule: Point the forefinger of your right hand in the direction of \hat i, stick out your middle finger in the direction of \hat j, and notice how when you point up your thumb, it is in the direction of \hat k.

If you can still do this after the transformation, orientation has not changed, and the determinant is positive. Otherwise, if after the transformation it only makes sense to do that with your left hand, orientation has been flipped, and the determinant is negative.
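
As a small illustrative sketch (with matrices of my choosing), a mirror across the xy-plane flips orientation and has determinant -1, while a rotation keeps the right-hand rule intact and has determinant 1:

```python
import numpy as np

# A mirror across the xy-plane sends k-hat to -k-hat; afterwards only your
# left hand can match forefinger, middle finger, and thumb to the new basis.
mirror = np.diag([1.0, 1.0, -1.0])
print(np.linalg.det(mirror))    # -1.0: orientation flipped, volumes unchanged

# A rotation about the z-axis moves the basis around but keeps the
# right-hand rule working, so orientation is preserved.
theta = np.radians(40)
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])
print(np.linalg.det(rotation))  # 1.0 (up to rounding): orientation preserved
```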


Computation

How do you actually compute one of these determinants?

Two Dimensions

This is the formula for the determinant of a 2\times 2 matrix:

\det\left( \begin{bmatrix} \color{green}a&\color{red}b \\ \color{green}c&\color{red}d \end{bmatrix} \right) = \color{green}a\color{red}d \color{black}- \color{red}b\color{green}c

Here’s part of an intuition for this formula. Let’s say the terms \color{red}b \color{black}and \color{green}c \color{black}were zero. Then the term \color{green}a \color{black}tells you how much \hat i is stretched in the x-direction, and the term \color{red}d \color{black}tells you how much \hat j is stretched in the y-direction. So, with those other terms zero, it should make sense that \color{green}a\color{black}\cdot\color{red}d \color{black}gives the area of the rectangle that our favorite unit square turns into, like in the \begin{bmatrix}3&0\\0&2\end{bmatrix} example from earlier.

Even if only one of b or c is zero, you’ll have a parallelogram with base a and height d, so the area will still be \color{green}a\color{black}\cdot\color{red}d\color{black}.

Loosely speaking, if both b and c are nonzero, the term \color{red}b\color{black}\cdot\color{green}c \color{black}tells you how much this rectangle is stretched or squished in the diagonal direction. It shows how much \hat i is changed in the y-direction, and how much \hat j is changed in the x-direction. Here is a more precise justification of the \color{red}b\color{black}\cdot\color{green}c \color{black}term, shown as a helpful diagram.

If you feel like computing determinants by hand is something you need to know, then the only way to get it down is to just practice with a few matrices; there’s not much I can say or show you that will drill in the computation.
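
If you do want that practice, here is a tiny helper, just a sketch of the a\cdot d - b\cdot c formula checked against NumPy on the matrices from this chapter, so you can verify your by-hand work:

```python
import numpy as np

def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]] via the a*d - b*c formula."""
    return a * d - b * c

# A few practice matrices from this chapter, checked against NumPy.
for a, b, c, d in [(3, 0, 0, 2), (1, 2, 3, 4), (1.5, 1, 2, -2)]:
    assert np.isclose(det2(a, b, c, d), np.linalg.det([[a, b], [c, d]]))
    print(f"det([[{a}, {b}], [{c}, {d}]]) = {det2(a, b, c, d)}")
```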

Three Dimensions

This is all triply true for the three-dimensional determinant. There is a formula, and if you feel like that’s something you need to know how to do, you should just practice on a few matrices, or, you know, go watch Sal Khan work through a few.

\begin{align*} \det\left( \begin{bmatrix} \color{green}a & \color{red}b & \color{blue}c \\ \color{green}d & \color{red}e & \color{blue}f \\ \color{green}g & \color{red}h & \color{blue}i \end{bmatrix} \right) &= \color{green}a \color{black}\det\left( \begin{bmatrix} \color{red}e&\color{blue}f \\ \color{red}h&\color{blue}i \end{bmatrix} \right) \\ \color{black}&- \color{red}b \color{black}\det\left( \begin{bmatrix} \color{green}d&\color{blue}f \\ \color{green}g&\color{blue}i \end{bmatrix} \right) \\ \color{black}&+ \color{blue}c \color{black}\det\left( \begin{bmatrix} \color{green}d&\color{red}e \\ \color{green}g&\color{red}h \end{bmatrix} \right) \end{align*}
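
Here is the same cofactor expansion written out as a short code sketch, checked against NumPy on the flat-parallelepiped matrix from earlier:

```python
import numpy as np

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * (e * i - f * h)
            - b * (d * i - f * g)
            + c * (d * h - e * g))

# The flat-parallelepiped matrix from earlier: both methods give 0.
M = [[1.0, 0.0, 1.0],
     [0.5, 1.0, 1.5],
     [1.0, 0.0, 1.0]]
print(det3(M), np.linalg.det(M))
```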

Honestly, I don’t think those computations fall into the essence of linear algebra, but I definitely think that understanding what the determinant represents falls within that essence.

Matrix Multiplication

Here’s a kind of fun question to think about: If you multiply two matrices together, the determinant of the resulting matrix is the same as the product of the determinants of the original two matrices.

\det( \color{blue}M_1 \color{purple}M_2 \color{black})=\det( \color{blue}M_1 \color{black})\det( \color{purple}M_2 \color{black})

If you tried to justify this numerically, it would be a horrible mess. But can you explain why in just one sentence?

Our answer:

The determinant measures the factor by which a matrix's linear transformation scales areas.

Since applying two matrices in sequence is itself a single linear transformation, the scaling factors must match: applying M_2 scales an area A to \det(M_2)\cdot A, then applying M_1 scales it to \det(M_1)\cdot\det(M_2)\cdot A, and this must agree with applying M_1 M_2 all at once, which scales it to \det(M_1 M_2)\cdot A.
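
Here is a quick numerical sanity check of that product rule, a sketch using two randomly chosen matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
M1 = rng.standard_normal((2, 2))
M2 = rng.standard_normal((2, 2))

# Applying M2 and then M1 scales areas by det(M2), then by det(M1),
# so the composition M1 @ M2 must scale areas by the product.
print(np.linalg.det(M1 @ M2))
print(np.linalg.det(M1) * np.linalg.det(M2))  # same value, up to rounding
```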
