
Chapter 5: Three-dimensional linear transformations

Lisa: Well, where's my dad?

Frink: Well, it should be obvious to even the most dimwitted individual who holds an advanced degree in hyperbolic topology that Homer Simpson has stumbled into... dramatic pause... the third dimension.

In the last two chapters, we talked about linear transformations and matrices, but only showed the specific case of transformations that take two-dimensional vectors to other two-dimensional vectors. Throughout this series we will work mainly in two dimensions, mostly because it's easier to actually see on the screen and wrap your mind around. More importantly, once you get all the core ideas in two dimensions, they carry over pretty seamlessly to higher dimensions.

Nevertheless, it's good to peek our heads outside of flatland now and then to see what it means to apply these ideas in more dimensions.

Three dimensions

For example, consider a linear transformation with three-dimensional vectors as inputs and three-dimensional vectors as outputs.

Just like in two dimensions, we can visualize the transformation as the input vector moving to the output vector.

To understand the transformation as a whole, we imagine every possible vector moving to its corresponding output vector.

Thinking about all the vectors as arrows at once gets very crowded. Instead, we can get a sense for the behavior of the function by transforming the grid in a way that keeps grid lines parallel and evenly spaced.

Just as with two dimensions, one of these transformations is completely determined by where the basis vectors go. But now, there are three basis vectors: the unit vector in the $x$-direction, $\hat{\imath}$; the unit vector in the $y$-direction, $\hat{\jmath}$; and the unit vector in the $z$-direction, called $\hat{k}$.

In fact it's easier to think about these transformations by only following the basis vectors, since the full 3d grid representing all points can get messy. By leaving a copy of the original axes in the background, we can think about the coordinates where each of the three basis vectors lands.

Record the coordinates of these three resulting vectors as the columns of a 3x3 matrix. This gives a matrix that completely describes your transformation using nine numbers.
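For example, if $\hat{\imath}$ were to land on coordinates $(a, d, g)$, $\hat{\jmath}$ on $(b, e, h)$, and $\hat{k}$ on $(c, f, i)$ (placeholder letters here, just to show the pattern), those landing spots would fill out the columns:

M = \left[\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \end{array}\right]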

To see where a vector with coordinates $x$, $y$ and $z$ lands, the reasoning is almost identical to what it was for two dimensions: each of those coordinates can be thought of as instructions for how to scale each basis vector, so that they add together to get your vector.
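In symbols, that scaling-and-adding process reads:

\left[\begin{array}{c} x \\ y \\ z \end{array}\right] = x \hat{\imath} + y \hat{\jmath} + z \hat{k}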

The important part, just like the 2d case, is that this scaling and adding process works both before and after the transformation. Note how the axes are tilted in the following image:

To see where your vector lands, you multiply those coordinates by the corresponding column of the matrix, and add together the three results.
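With the same placeholder entries as before, the whole computation looks like this:

\left[\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \end{array}\right] \left[\begin{array}{c} x \\ y \\ z \end{array}\right] = x \left[\begin{array}{c} a \\ d \\ g \end{array}\right] + y \left[\begin{array}{c} b \\ e \\ h \end{array}\right] + z \left[\begin{array}{c} c \\ f \\ i \end{array}\right]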

Given the transformation function defined by the matrix $\left[\begin{array}{ccc} 0 & 0.5 & -0.5 \\ 0 & 0.5 & 1 \\ 1 & 0 & 0.5 \end{array}\right]$ and the vector $\left[\begin{array}{c} 1 \\ 0 \\ -2 \end{array}\right]$ as input, what is the resulting output vector? For reference, here is the corresponding visualization of where the basis vectors land.

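To check your reasoning, here is the computation written out as a scaled sum of the matrix's columns:

1 \left[\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right] + 0 \left[\begin{array}{c} 0.5 \\ 0.5 \\ 0 \end{array}\right] - 2 \left[\begin{array}{c} -0.5 \\ 1 \\ 0.5 \end{array}\right] = \left[\begin{array}{c} 1 \\ -2 \\ 0 \end{array}\right]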

Examples

Consider the transformation here that rotates space $90$ degrees around the $y$-axis.

It takes $\hat{\imath}$ to the coordinates $\left[\begin{array}{c} 0 \\ 0 \\ -1 \end{array}\right]$, so that's the first column of our matrix. It doesn't move $\hat{\jmath}$, so the second column is $\left[\begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right]$. And it moves $\hat{k}$ onto the $x$-axis at $\left[\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right]$, so that becomes the third column of our matrix.

\hat{\imath} \rightarrow \left[\begin{array}{c} 0 \\ 0 \\ -1 \end{array}\right] \qquad \hat{\jmath} \rightarrow \left[\begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right] \qquad \hat{k} \rightarrow \left[\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right]
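Placing those three columns side by side gives the full matrix for this rotation:

\left[\begin{array}{ccc} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{array}\right]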

What is the transformation matrix that rotates space $90$ degrees counterclockwise around the $z$-axis? To differentiate clockwise from counterclockwise, imagine a clock lying flat in the $xy$-plane, facing the positive $z$ direction.
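If you want to check your answer: viewed from the positive $z$ direction, a counterclockwise quarter turn sends $\hat{\imath}$ to $\left[\begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right]$ and $\hat{\jmath}$ to $\left[\begin{array}{c} -1 \\ 0 \\ 0 \end{array}\right]$, while $\hat{k}$ stays put, so the matrix is:

\left[\begin{array}{ccc} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right]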

Combining Transformations

Multiplying two matrices is also similar to what we did for two dimensions: Whenever you see two 3x3 matrices getting multiplied together, you should imagine first applying the transformation encoded by the right one, then applying the transformation encoded by the left one.

The resulting transformation is the combination of applying the two matrices one after the other.

Three-dimensional matrix multiplication turns out to be very important for computer graphics and robotics, since things like rotations in three dimensions can be very hard to describe, but are easier to wrap your mind around if you break them down as the composition of different transformations.

Performing this matrix multiplication numerically is, once again, very analogous to the two-dimensional case. In fact, a good way to test your understanding of the last chapter would be to reason through what specifically it should look like, thinking closely about how it relates to the idea of applying two successive transformations of space.

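As a sketch of how that reasoning plays out, write the product as $M_2 M_1$, with $M_1$ applied first (the names $M_1$, $M_2$ and $\vec{v}_i$ are just placeholders here). Each column $\vec{v}_i$ of $M_1$ records where a basis vector lands after the first transformation, and multiplying that column by $M_2$ tracks where it goes next:

M_2 M_1 = M_2 \left[\begin{array}{ccc} | & | & | \\ \vec{v}_1 & \vec{v}_2 & \vec{v}_3 \\ | & | & | \end{array}\right] = \left[\begin{array}{ccc} | & | & | \\ M_2 \vec{v}_1 & M_2 \vec{v}_2 & M_2 \vec{v}_3 \\ | & | & | \end{array}\right]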

Puzzle

Here's another puzzle for you: it's also meaningful to talk about a linear transformation from two-dimensional space to three-dimensional space, or from three dimensions down to two. Can you visualize such transformations? Can you represent them with matrices? How many rows and columns would each one have? When is it meaningful to talk about multiplying such matrices, and why?

Our answer:

A simple example of a transformation from 3d to 2d is the shadow cast by a 3d object onto a 2d plane. For the transformation to be linear, though, the light rays should be considered parallel to each other. If you think of the light source as something very far away, like the sun, this is a reasonable simplification.

You can represent these transformations with matrices. The number of columns corresponds to the dimension of the input and the number of rows corresponds to the dimension of the output. For example, a matrix that maps coordinates on a sphere to a plane would have three columns and two rows.

A = \left[\begin{array}{ccc} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \end{array}\right]

It's meaningful to talk about multiplying these matrices when the number of columns of the left matrix equals the number of rows of the right matrix. That way, when you apply these matrices to a vector, reading right to left, the dimensions of the input and output match up.

\left[\begin{array}{cc} b_1 & b_2 \\ b_3 & b_4 \end{array}\right] \left[\begin{array}{ccc} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \end{array}\right]
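Carrying out this multiplication gives a $2 \times 3$ matrix, matching the composite transformation: three-dimensional inputs go in on the right, and two-dimensional outputs come out on the left:

\left[\begin{array}{cc} b_1 & b_2 \\ b_3 & b_4 \end{array}\right] \left[\begin{array}{ccc} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \end{array}\right] = \left[\begin{array}{ccc} b_1 a_1 + b_2 a_4 & b_1 a_2 + b_2 a_5 & b_1 a_3 + b_2 a_6 \\ b_3 a_1 + b_4 a_4 & b_3 a_2 + b_4 a_5 & b_3 a_3 + b_4 a_6 \end{array}\right]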

We'll cover nonsquare matrices in more detail in a later chapter.

In the next chapter, we'll get into the determinant.
