Matrix multiplication as composition | Essence of linear algebra, chapter 4

October 9, 2019


"It is my experience that proofs involving matrices can be shortened by 50% if one throws matrices out."
— Emil Artin

Hey everyone! Where we last left off, I showed what linear transformations look like and how to represent them using matrices. This is worth a quick recap, because it's just really important. But of course, if this feels like more than just a recap, go back and watch the full video.

Technically speaking, linear transformations are functions, with vectors as inputs and vectors as outputs. But I showed last time how we can think about them visually as smooshing around space in such a way that the grid lines stay parallel and evenly spaced, and so that the origin remains fixed. The key takeaway was that a linear transformation is completely determined by where it takes the basis vectors of the space, which, for two dimensions, means i-hat and j-hat. This is because any other vector can be described as a linear combination of those basis vectors: a vector with coordinates (x, y) is x times i-hat plus y times j-hat.

After going through the transformation, this property, that grid lines remain parallel and evenly spaced, has a wonderful consequence: the place where your vector lands will be x times the transformed version of i-hat plus y times the transformed version of j-hat. This means that if you keep a record of the coordinates where i-hat lands and the coordinates where j-hat lands, you can compute that a vector which starts at (x, y) must land on x times the new coordinates of i-hat plus y times the new coordinates of j-hat. The convention is to record the coordinates of where i-hat and j-hat land as the columns of a matrix, and to define the sum of those columns scaled by x and y to be matrix-vector multiplication. In this way, a matrix represents a specific linear transformation, and multiplying a matrix by a vector is what it means, computationally, to apply that transformation to that vector.

Alright, recap over. Onto the new stuff.
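To make that recap concrete, here's a minimal NumPy sketch (my own illustration, not something from the video) showing that multiplying a matrix by a vector is the same as scaling and adding its columns:

```python
import numpy as np

# Suppose i-hat lands on (1, -2) and j-hat lands on (3, 0) (made-up numbers).
i_hat_lands = np.array([1, -2])
j_hat_lands = np.array([3, 0])
M = np.column_stack([i_hat_lands, j_hat_lands])

v = np.array([2, 5])  # the vector with coordinates (x, y) = (2, 5)

# x times the transformed i-hat plus y times the transformed j-hat...
by_columns = v[0] * i_hat_lands + v[1] * j_hat_lands

# ...is exactly what matrix-vector multiplication computes.
assert np.array_equal(M @ v, by_columns)
print(M @ v)  # [17 -4]
```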
Oftentimes you find yourself wanting to describe the effect of applying one transformation and then another. For example, maybe you want to describe what happens when you first rotate the plane 90° counterclockwise, then apply a shear. The overall effect here, from start to finish, is another linear transformation, distinct from the rotation and the shear. This new linear transformation is commonly called the "composition" of the two separate transformations we applied. And like any linear transformation, it can be described with a matrix all of its own, by following i-hat and j-hat.

In this example, the ultimate landing spot for i-hat after both transformations is (1, 1), so let's make that the first column of the matrix. Likewise, j-hat ultimately ends up at the location (-1, 0), so we make that the second column of the matrix. This new matrix captures the overall effect of applying a rotation then a shear, but as one single action, rather than two successive ones.

Here's one way to think about that new matrix: if you were to take some vector and pump it through the rotation then the shear, the long way to compute where it ends up is to, first, multiply it on the left by the rotation matrix; then, take whatever you get and multiply that on the left by the shear matrix. This is, numerically speaking, what it means to apply a rotation then a shear to a given vector. But whatever you get should be the same as just applying this new composition matrix that we just found to that same vector, no matter what vector you chose, since this new matrix is supposed to capture the same overall effect as the rotation-then-shear action. Based on how things are written down here, I think it's reasonable to call this new matrix the "product" of the original two matrices. Don't you?
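As a quick numerical sanity check (my own sketch, not from the video), here's the rotation-then-shear composition in NumPy; note that "rotation first" means the rotation matrix sits on the right:

```python
import numpy as np

rotation = np.array([[0, -1],
                     [1,  0]])  # rotate the plane 90° counterclockwise
shear = np.array([[1, 1],
                  [0, 1]])      # shear: i-hat fixed, j-hat pushed right

# Rotation is applied first, so it goes on the right.
composition = shear @ rotation
print(composition)  # columns (1, 1) and (-1, 0), as read off above

v = np.array([3, 2])  # any test vector
assert np.array_equal(composition @ v, shear @ (rotation @ v))
```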
We can think about how to compute that product more generally in just a moment, but it's way too easy to get lost in the forest of numbers. Always remember that multiplying two matrices like this has the geometric meaning of applying one transformation, then another.

One thing that's kind of weird here is that this means reading from right to left: you first apply the transformation represented by the matrix on the right, then you apply the transformation represented by the matrix on the left. This stems from function notation, since we write functions to the left of variables, so every time you compose two functions, you always have to read it right to left. Good news for the Hebrew readers, bad news for the rest of us.
Let's look at another example. Take the matrix with columns (1, 1) and (-2, 0), whose transformation looks like this, and let's call it M1. Next, take the matrix with columns (0, 1) and (2, 0), whose transformation looks like this, and let's call that guy M2. The total effect of applying M1 then M2 gives us a new transformation, so let's find its matrix. But this time, let's see if we can do it without watching the animations, and instead just using the numerical entries in each matrix.

First, we need to figure out where i-hat goes. After applying M1, the new coordinates of i-hat, by definition, are given by the first column of M1, namely, (1, 1). To see what happens after applying M2, multiply the matrix for M2 by that vector (1, 1). Working it out the way I described last video, you'll get the vector (2, 1). This will be the first column of the composition matrix. Likewise, to follow j-hat, the second column of M1 tells us that it first lands on (-2, 0); then, when we apply M2 to that vector, you can work out the matrix-vector product to get (0, -2), which becomes the second column of our composition matrix.
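Here's that same column-by-column computation as a NumPy sketch (mine, not the video's), using the convention that M2 @ M1 means "apply M1 first":

```python
import numpy as np

M1 = np.array([[1, -2],
               [1,  0]])  # columns (1, 1) and (-2, 0)
M2 = np.array([[0, 2],
               [1, 0]])   # columns (0, 1) and (2, 0)

# i-hat lands on M1's first column; push it through M2.
first_col = M2 @ M1[:, 0]   # gives (2, 1)
# j-hat lands on M1's second column; push it through M2.
second_col = M2 @ M1[:, 1]  # gives (0, -2)

composition = np.column_stack([first_col, second_col])
assert np.array_equal(composition, M2 @ M1)
```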
Let me talk through that same process again, but this time I'll show variable entries in each matrix, just to show that the same line of reasoning works for any matrices. This is more symbol-heavy and will require some more room, but it should be pretty satisfying for anyone who has previously been taught matrix multiplication the more rote way.

To follow where i-hat goes, start by looking at the first column of the matrix on the right, since this is where i-hat initially lands. Multiplying that column by the matrix on the left is how you can tell where the intermediate version of i-hat ends up after applying the second transformation. So the first column of the composition matrix will always equal the left matrix times the first column of the right matrix. Likewise, j-hat will always initially land on the second column of the right matrix, so multiplying the left matrix by this second column will give its final location, and hence that's the second column of the composition matrix.
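Written out symbolically, that reasoning gives the standard 2-by-2 product formula (the letter names here are my own choice):

$$
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} e & f \\ g & h \end{pmatrix}
=
\begin{pmatrix} ae + bg & af + bh \\ ce + dg & cf + dh \end{pmatrix}
$$

The first column is the left matrix applied to (e, g), where i-hat initially lands; the second column is the left matrix applied to (f, h).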
Notice there are a lot of symbols here, and it's common to be taught this formula as something to memorize, along with a certain algorithmic process to help remember it. But I really do think that before memorizing that process, you should get in the habit of thinking about what matrix multiplication really represents: applying one transformation after another. Trust me, this will give you a much better conceptual framework that makes the properties of matrix multiplication much easier to understand.

For example, here's a question: does it matter what order we put the two matrices in when we multiply them? Well, let's think through a simple example, like the one from earlier. Take a shear, which fixes i-hat and smooshes j-hat over to the right, and a 90° rotation. If you first do the shear, then rotate, we can see that i-hat ends up at (0, 1) and j-hat ends up at (-1, 1); both are generally pointing close together. If you first rotate, then do the shear, i-hat ends up over at (1, 1), and j-hat is off in a different direction at (-1, 0), and they're pointing, you know, farther apart. The overall effect here is clearly different, so, evidently, order totally does matter.
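In code (same matrices as above, my own sketch), the two orders give visibly different products:

```python
import numpy as np

shear = np.array([[1, 1],
                  [0, 1]])
rotation = np.array([[0, -1],
                     [1,  0]])

print(rotation @ shear)  # shear first, then rotate: columns (0, 1), (-1, 1)
print(shear @ rotation)  # rotate first, then shear: columns (1, 1), (-1, 0)

# The products differ, so matrix multiplication is not commutative.
assert not np.array_equal(rotation @ shear, shear @ rotation)
```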
Notice, by thinking in terms of transformations, that's the kind of thing you can do in your head, by visualizing. No matrix multiplication necessary.

I remember when I first took linear algebra, there was this one homework problem that asked us to prove that matrix multiplication is associative. This means that if you have three matrices A, B and C, and you multiply them all together, it shouldn't matter if you first compute A times B, then multiply the result by C, or if you first multiply B times C, then multiply that result by A on the left. In other words, it doesn't matter where you put the parentheses. Now, if you try to work through this numerically, like I did back then, it's horrible, just horrible, and unenlightening for that matter. But when you think about matrix multiplication as applying one transformation after another, this property is just trivial. Can you see why? What it's saying is that if you first apply C then B, then A, it's the same as applying C, then B, then A. I mean, there's nothing to prove; you're just applying the same three things one after the other, all in the same order. This might feel like cheating. But it's not! This is an honest-to-goodness proof that matrix multiplication is associative, and even better than that, it's a good explanation for why that property should be true.
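And if you'd still like the numerical reassurance, here's a quick check (a sketch of mine, not from the video) that the parentheses don't change the result on randomly chosen matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))

# Either way, the product applies C, then B, then A.
assert np.allclose((A @ B) @ C, A @ (B @ C))
```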
I really do encourage you to play around more with this idea: imagining two different transformations, thinking about what happens when you apply one after the other, and then working out the matrix product numerically. Trust me, this is the kind of play time that really makes the idea sink in. In the next video, I'll start talking about extending these ideas beyond just two dimensions. See you then!


  1. If everyone had taught maths in primary school the way you do, we would have had herds of mathematicians…

  2. 20 years after first being exposed to matrices, and getting a computer engineering degree, I finally understand matrix multiplication.

  3. I'm about to go into middle school and I still watch this high-level math stuff because it's really somewhat addicting to watch.

  4. Wrong logic for proving the associative property through visualization of transformations. (AB)C cannot be thought of as C transforming, then B, then A. But logically, it means C transformed by the resultant matrix from AB.

  5. OMG!!! These kinds of lessons really make me understand linear algebra.
    Thanks for taking the time to do this 🙂

  6. These are all 2×2 matrix examples. What if the matrix is 2×3 or 4×4? What does that mean graphically?

  7. Crazy! I didn't even need this video, because it could all be deduced from the last one, but this was great to confirm the deduction and consolidate memory, thanks!

  8. I'm not sure if the associativity is as trivial as you say. with the brackets like this: (AB)C aren't we supposed to apply A on B first and then use the result and apply it on C? The way I see it it's not trivial and you just applied what results from associativity, you didn't prove it holds. But maybe I'm wrong.

  9. I was never taught the reason behind matrix multiplication and how all of it is how it is. Thank you so much for posting high quality content, absolutely love it!

  10. At 7:57, when the shear is applied to the rotated basis vectors, why does i-hat move to (1, 1) while j-hat remains unchanged?

  11. I really liked the idea of how you can imagine a matrix after watching this video. These videos are really mind-blowing. They certainly don't teach us from this perspective or in such detail in school. The idea of imagining 2D planes in a 3D space and whatnot! I mean, the highest I was taught in school was matrix transformation and multiplication, but just for the sake of it 😂.

  12. If a matrix can be visualised as an entity for transformation of space, then how can a tensor be visualised?
    Thanks in advance

  13. At 3:39, after the rotation we have: green arrow – transformed i; red arrow – transformed j. Right?
    But applying the shear matrix geometrically to the new transformed grid, I'm getting a different result than in the video at 3:40.
    I think the shear matrix (purple color) should be:
    1 0
    -1 1
    Am I right? Or, better to say, why am I not right?

  14. To prove the associativity of matrices, it'd be great if the transformations considered are simple. Think of each of A, B and C as a 90° anticlockwise rotation, and it will be quite clear how it works! 😄💎

  15. Sir, in the rotation-then-shear transformation, shouldn't the j-cap (red) be shifted by 45° rather than the i-cap?

  16. I clicked on one video and now I'm not able to stop!! Now if you ask me what Mathematics is and what Music is, the answer would be the same!!!

  17. Why does the shear affect j-hat the first time, but affect i-hat the second time?? 7:36
    If the shear affected i-hat or j-hat both times, then there would be no difference between M1*M2 and M2*M1!!

  18. Doesn't (AB)C imply that he's transforming B then A, then that is transformed by C? Does the order of operations not apply? Because otherwise you would have to transform B then A prior to C.

  19. This is a fantastic complement to more math proof oriented books, it's important to grasp both approaches! Thanks a lot.

  20. Now that there is visual understanding, the whole thing is understood on such a better level! Man! I so wished this side of internet existed when I was STRUGGLING to understand. Now it looks beautiful and surprisingly easy.

  21. I wish he would take the time to go over the last proof again. I think he totally missed the boat on a fundamental point – that because matrices are not just transformations, but LINEAR transformations, they have associativity. In other words, because multiplication is associative, and because matrices transform through multiplication, they are thus associative.

    Instead, this video makes it seem like the whole concept of order of operations is totally trivial and you can just think of operations willy nilly.

    If he wanted to stick to visual interpretations, he could have repeated his earlier demonstration showing how the bases i and j are transformed successively, but it would not prove anything formally.

  22. I'm thinking so many issues I once had with matrices are from them being organized top to bottom rather than left to right

  23. Truly inspiring … an absolutely amazing series … keep going! I have two masters degrees and I feel like I'm actually beginning to understand & really learn when I watch these videos.

  24. Savvateev's lecture course on group theory: there the associativity is clearly visible.

  25. I think after I finish this and go to understand what Calculus is really about in your other videos, I will be dangerous.

  26. Towards the end of this video you demonstrated the associativity of linear transformations (aka matrix multiplication). I was a little confused until I figured out that what you were saying was that the semantics of the expression resulted in the same sequence of transformations regardless of how the parentheses were placed. That's all well and good, but I was curious whether the associativity held if the actual sequence of operations was altered in accordance with the parentheses. So I did as you advised and played around with it a little bit. I just used the matrix multiplication algorithm mechanically, without regard to the transformation concept, and did it in both possible sequences to see if it got the same result. It was, as you alluded to from your earlier school experience, messy. But I am pleased to report that it did. Associativity holds regardless of the sequence of operations. Relying on that semantic work-around is not necessary!

    If you're curious what it looks like, I uploaded a rendering of the worksheet showing this result to this link:
    https://www.dropbox.com/s/1zt118iz7kk670s/Matrix%20Multiplication%20Demo.PNG?dl=0

  27. WOW! I always wondered why the heck we multiply matrices the way we do, just what all this is! Yet here you are, clearing another query of mine. Thank you so much! 😃

  28. I think there is a mistake in the explanation of what the associative property is: 1) for A(BC) you said, "first apply C, then B, then A"; 2) for (AB)C you said, "apply C, then B, then A", but probably you should say something like this: "Apply C, then multiply B times A as T, and then apply T."

  29. Great video. Difficult topic, especially the end. One thing I found helpful is remembering that, unlike regular multiplication, the order in which matrices are multiplied matters. This is why the rotation after the shear and the shear after the rotation have different results.

    Part of the reason for the confusion was that the associative property was discussed right after and seemed to contradict what was said in terms of the order of transformation mattering. I think it might have been easier to explain the property if instead you said the property holds because the result of taking your initial vector, rotating it and then shearing it has the same effect as determining the matrix that results from rotating and shearing the base vectors and then multiplying your initial vector by that matrix.

  30. I started learning linear algebra using this series. And I wonder what is going to happen when I see traditional ways in college. Thanks for this great series and videos!

  31. The two transformations can be any, right? So after the basis vectors are moved to a new position, when we apply another transformation, is it according to the previous basis vectors or the new ones so formed? Help me, as I am trying to plot the multiplication in my notebook rather than multiplying it. I want to get the answer by a graphical approach.

  32. I wish YouTube 10 years ago were like it is nowadays, when I scratched my head over and over again but couldn't figure out the intuition of linear algebra.

  33. I tried to prove whether it's associative or not and my conclusion was that it wasn't based on this finding:
    I decided to frame this in terms of directions: North, South, and West.
    If you go 1 unit N then S, then W it is the same as if you went S, N, W.
    HOWEVER, that is only because I switched the grouping of two directions.
    If you go W, N, S, switching all three, you actually get a different result. And for this reason I thought that matrix multiplication was non-associative. I don't know where I went wrong.

  34. These are fantastic videos! My only criticism is the axis colours. x=red, y=green, z=blue is the standard everywhere I've ever seen, except here, where x=green and y=red. It means I constantly have to remember to invert the standard while processing the video.

  35. Every exact-science student needs to see this series. I can honestly say that my linear algebra teacher was brilliant, with extensive understanding of the topic, but was limited in transferring that knowledge by an inability to show what lin-alg looks like millisecond after millisecond of transformations.
    Well done!
