Daily summaries for Math 229 (section 1): Matrices and Linear Equations


Thursday, November 13

Several people mentioned that they were unaware this page existed, so I haven't been in too much of a rush to keep it up lately....

In any case, we spent the last month or so covering each section of Chapter 3. In particular, we talked about minimal polynomials (what they are, some properties of them, and how to compute them), eigenvalues and eigenvectors, and a bunch of applications (discrete dynamical systems, principal component analysis, etc.), among other related topics. We are following the book very directly, so it shouldn't be much of a chore to check out what we have been up to....

Exam III is next Monday, and both the practice test and its solutions can be found on the course webpage. For the last three weeks of class, we'll cover the four sections of Chapter 4 (plus one day of review for the final and one day for teacher-course evaluations).


Thursday, October 16

The exam generally went well. Some people improved on last time, others went the other way, but the average stayed about the same. Today, we covered section 3.1, about eigenvectors and eigenvalues. Given one, we discussed how to compute the other. It turns out that eigenvectors correspond to invariant subspaces, while eigenvalues correspond to "stretch factors" on those invariant subspaces.
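
For anyone who wants to experiment outside of class, here is a quick sketch in Python using numpy (not a course tool - just an illustration, with a matrix I made up) that computes eigenvalues and eigenvectors and checks the defining equation Av = (lambda)v:

    import numpy as np

    # A small symmetric matrix with easy eigenvalues (3 and 1).
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # eig returns the eigenvalues and a matrix whose columns are eigenvectors.
    eigenvalues, eigenvectors = np.linalg.eig(A)

    for lam, v in zip(eigenvalues, eigenvectors.T):
        # Each eigenvector spans an invariant line: A just stretches v by lam.
        print(np.allclose(A @ v, lam * v))   # True, for each pair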


Tuesday, October 14

Review day. We covered the practice test, and that's it. The exam is tomorrow!!!


Thursday, October 9

Beware: There is an exam next Wednesday!

We talked about changing coordinates today, section 2.6. In 2-dimensional space, any change of coordinates is just a translation, followed by a rotation. Each such transformation can be expressed as a matrix (which is just a product of two matrices with a special structure). So, we talked about these two special matrices (one for translation, one for rotation), and we talked about how to determine the coordinates of a point after a change of coordinates. Think of it this way - coordinates (coordinate systems, a.k.a. axes) just give a convenient way of naming points. Changing coordinate systems is tantamount to changing the names of points - that's all. The matrix representation for coordinate changes that we cooked up today just gives us an easy way to figure out the new name, given the old one.
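
If you would like to see this in action, here is a small Python/numpy sketch. Fair warning: I am using homogeneous coordinates (tacking a 1 onto each point) so that the translation can be written as a matrix, and my sign and angle conventions may differ from the book's, so treat this as an illustration rather than the official recipe:

    import numpy as np

    theta = np.pi / 2           # rotate the axes by 90 degrees
    a, b = 3.0, 1.0             # move the origin to the point (3, 1)

    # Translation and rotation as matrices, in homogeneous coordinates.
    T = np.array([[1.0, 0.0, -a],
                  [0.0, 1.0, -b],
                  [0.0, 0.0, 1.0]])
    R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                  [-np.sin(theta), np.cos(theta), 0.0],
                  [ 0.0,           0.0,           1.0]])

    M = R @ T                   # translate first, then rotate

    p_old = np.array([4.0, 1.0, 1.0])   # the point (4, 1), old coordinates
    p_new = M @ p_old
    print(p_new[:2])            # its new name: (0, -1), up to rounding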


Tuesday, October 7

We covered 2.5 today, about so-called least squares fits. Given a bunch of points (too many of them, in some sense), how can we find a linear function (geometrically, a line) that cuts through them "best"? Interpolation works if you don't have too many points. Otherwise, we do a least squares fit, which involves projection in a so-called sample space. As usual, the bottom line is a tidy little formula for computing the coefficients of the polynomial that you seek (of any given degree, actually - not just linear functions).
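
For the curious, here is a Python/numpy sketch of a degree-1 fit with made-up data. I am using the standard normal equations here, which should match the tidy little formula from class up to notation:

    import numpy as np

    # Five data points: more than enough to overdetermine a line y = c0 + c1*x.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

    # Vandermonde-style matrix for a degree-1 fit; add columns for higher degrees.
    V = np.column_stack([np.ones_like(x), x])

    # The normal equations: (V^T V) c = V^T y.
    c = np.linalg.solve(V.T @ V, V.T @ y)
    print(c)    # roughly [1.0, 1.0]: the intercept and the slope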


Thursday, October 2

Today, we discussed section 2.4, about projection. This will be very important in the coming sections. Given some vector and some subspace (like a plane in 3-dimensional real space), how do we determine which vector in the subspace most closely resembles the vector we are interested in? We project (using a symmetric projector, which is a matrix given by a certain formula). What if we also want to project our vector onto some subspace orthogonal to our first subspace? There's a projector for that, too - the complementary projector. It turns out that these two projectors allow you to decompose any given vector into two orthogonal pieces. Anyway, we'll see a cool application next time.
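
Here is a quick Python/numpy illustration. The projector below is the standard symmetric one, P = A (A^T A)^(-1) A^T, where the columns of A span the subspace - I believe this matches the formula from class, but check your notes:

    import numpy as np

    # A plane in R^3, spanned by the columns of A.
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])

    # The symmetric projector onto the column space of A.
    P = A @ np.linalg.inv(A.T @ A) @ A.T
    Q = np.eye(3) - P            # the complementary projector

    v = np.array([1.0, 2.0, 3.0])
    print(P @ v + Q @ v)         # recovers v: the two pieces add back up
    print((P @ v) @ (Q @ v))     # essentially 0: the pieces are orthogonal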


Tuesday, September 30

We covered section 2.3 today, about matrix inverses and the evaluation of f(A)v, where f(x) is a polynomial, A is a matrix, and v is a vector. There is a slick way of doing this, and it will be important to be familiar with it when we get to the minimal polynomial algorithm later in the semester. Also, we talked about how to invert Vandermonde matrices. The bottom line is that the columns of the inverse of a Vandermonde matrix are just certain Lagrange interpolants, which are easily constructed with a neat little recipe. This construction may feel as though it is coming from left field, but, hopefully, at the end of the semester, we will be able to go over what we discussed along the way so that it all (eventually) fits together....
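
To give a flavor of the slick evaluation, here is a Python/numpy sketch using Horner's scheme with matrix-vector products only (this may or may not be exactly the book's recipe). The polynomial below happens to be the minimal polynomial of the matrix I chose, so the answer comes out to the zero vector:

    import numpy as np

    def poly_at_matrix_times_vector(coeffs, A, v):
        """Compute f(A) @ v, where f(x) = coeffs[0] + coeffs[1]*x + ...,
        via Horner's scheme: only matrix-vector products, never A @ A."""
        result = coeffs[-1] * v
        for c in reversed(coeffs[:-1]):
            result = c * v + A @ result
        return result

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    v = np.array([1.0, 1.0])

    # f(x) = x^2 - 5x + 6 = (x - 2)(x - 3) annihilates A (eigenvalues 2 and 3).
    print(poly_at_matrix_times_vector([6.0, -5.0, 1.0], A, v))   # [0, 0]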


Thursday, September 25

We went through section 2.2 today, about matrix multiplication. We also went over the concept of a digraph. Digraphs are cute and interesting but are not really part of the main body of linear algebra.
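
One cute digraph fact worth recording (standard, though I am stating it from memory rather than from the book): if A is the adjacency matrix of a digraph, then entry (i, j) of A^k counts the walks of length k from vertex i to vertex j. A tiny Python/numpy check:

    import numpy as np

    # Adjacency matrix of a digraph on 3 vertices:
    # edges 0->1, 0->2, 1->2, and 2->0.
    A = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]])

    # Entry (i, j) of A @ A counts the walks of length 2 from i to j.
    print(A @ A)    # e.g., row 0 records the walks 0->1->2 and 0->2->0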

Also, I bumped the homework due today to next Tuesday since so many people had trouble with it. Thus, be sure to have BOTH homework assignments with you next Tuesday....


Tuesday, September 23

We covered the complex numbers today (section 2.1). There is a lot to be said about the complex numbers, so we went very quickly and covered a number of useful formulas. The main points are that the complex numbers actually do form a number system and that there are multiple ways of understanding them (Cartesian coordinates, polar coordinates, etc.). These will come up only occasionally as we move forward, so, for now, please do not worry too much if you do not fully understand everything there is to know about the complex numbers....
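
If you want to play with the formulas, Python has complex numbers built in (the j below is the engineers' name for i). A quick sketch, just for illustration:

    import numpy as np

    z = 1 + 1j                       # Cartesian form: a + bi

    r = abs(z)                       # modulus: sqrt(2)
    theta = np.angle(z)              # argument: pi/4

    # The polar form r*e^(i*theta) names the same number.
    print(np.isclose(z, r * np.exp(1j * theta)))   # True

    # Multiplication is easy in polar form: moduli multiply, angles add.
    print(z * z)                     # 2i, i.e., modulus 2 at angle pi/2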


Thursday, September 18

We spent the day going over the practice test. The actual test will be very similar to the practice test, so be sure that you know how to solve the problems on the practice test. My solutions to the practice test are now posted on the course website. As for the exam, remember that it is on Monday (not a class day!).

Next week, we will dive into chapter 2, starting with complex numbers on Tuesday.


Tuesday, September 16

The pace was a bit more relaxed today. We covered the basics of interpolation, i.e., how to find equations of univariate polynomials through a given set of points. This is a cute application of linear algebra (rather than a mainstream part of the field). Given the general form of a polynomial (of a specified degree), you can plug in the specified points to produce a system of linear equations, the variables of which correspond to the coefficients of the polynomial. Solving that system (via the so-called Vandermonde matrix), you can find a polynomial (or even infinitely many polynomials, depending on the case) that interpolates the points.
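
Here is a small Python/numpy version of today's computation (an illustration with made-up points, not a required tool):

    import numpy as np

    # Three points determine a unique polynomial of degree at most 2.
    x = np.array([0.0, 1.0, 2.0])
    y = np.array([1.0, 2.0, 5.0])

    # The Vandermonde matrix: row i is [1, x_i, x_i^2].
    V = np.column_stack([x**0, x**1, x**2])

    # Solve V c = y for the coefficients of p(t) = c0 + c1*t + c2*t^2.
    c = np.linalg.solve(V, y)
    print(c)    # [1, 0, 1], i.e., p(t) = 1 + t^2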

Next time, we will go over the practice test (available on the course website) and generally gear up for Exam 1, held next Monday (NOT A CLASS DAY!!!).


Thursday, September 11

Today was another busy day! We covered much of section 1.6 (the parts you need to know, at least) at a frenzied pace. We talked about the geometry underlying vectors and vector arithmetic. We also talked a bit about linear transformations (functions given by multiplication by a matrix) and their effect on (sets of) vectors. All told, it was busy, but you should have in your toolbox all the tools that you will need for the homework.
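
For a concrete picture, here is a tiny Python/numpy sketch (with a shear matrix of my choosing, not an example from class) showing a linear transformation acting on a whole set of vectors at once:

    import numpy as np

    # A shear: a linear transformation given by multiplication by a matrix.
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

    # The corners of the unit square, one vector per column.
    square = np.array([[0.0, 1.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0, 1.0]])

    # Multiplying moves every corner at once: the square becomes a parallelogram.
    print(A @ square)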

Next time, we will cover a fun application - interpolation. Then, next Thursday, we'll gear up for the first exam, which is the following Monday (NOT A CLASS DAY!).


Tuesday, September 9

Today, we flew through Section 1.5. There is a lot of information in that section, so we went very quickly. We talked about matrix-vector multiplication and various properties of matrix and vector arithmetic. In the end, most things that you would expect to be legal are legal (except matrix multiplication, which we haven't defined yet). We also talked about linear combinations and linear (in)dependence. Finally, we discussed how to break a general solution of a linear system into the sum of a so-called distinguished solution and a linear combination of BSAHS (basic solutions of the associated homogeneous system). This concept is generally important in linear algebra, though we only had time to discuss enough for the homework....
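
Here is a numerical check of that decomposition in Python (numpy plus scipy, neither of which is a course tool; lstsq and null_space are just convenient stand-ins for the hand computations from class):

    import numpy as np
    from scipy.linalg import null_space

    # An underdetermined system A x = b: 2 equations, 3 unknowns.
    A = np.array([[1.0, 2.0, 1.0],
                  [2.0, 4.0, 0.0]])
    b = np.array([4.0, 6.0])

    x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
    N = null_space(A)                            # basis for the homogeneous solutions

    # x_p plus any multiple of a homogeneous solution still solves the system.
    t = 7.0
    print(np.allclose(A @ (x_p + t * N[:, 0]), b))   # True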


Thursday, September 4

We spent about the first 20-25 minutes of class today talking about RREF and Gaussian elimination. In particular, there were questions about whether you can really put every matrix into RREF (using Gaussian elimination) and whether it is really unique. We went through an example to look at what can happen, and hopefully all of the questions were adequately answered. PLEASE feel free to bring questions like that to class or to me during office hours - I will try to make time to answer those types of questions in class, and I will certainly have time during my office hours.

After that, we covered section 1.4. We talked about how to write down general solutions of linear systems (which you can read directly from the RREF form of the augmented matrix of your linear system). We also talked about how to tell whether there are 0, 1, or infinitely many solutions, based on that RREF form.
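
If you want to check your row reduction, the sympy library (not a course tool) can compute RREF for you. In the made-up example below, there is no pivot in the last column but there is a free variable, so the system has infinitely many solutions:

    from sympy import Matrix

    # Augmented matrix of: x + 2y = 3, 2x + 4y = 6 (the second row is redundant).
    M = Matrix([[1, 2, 3],
                [2, 4, 6]])

    R, pivots = M.rref()
    print(R)        # [[1, 2, 3], [0, 0, 0]]: one pivot, and y is free
    print(pivots)   # (0,): no pivot in the last column, so solutions exist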


Tuesday, September 2

We covered 1.3 in class - most of it, at least. You can put any matrix into RREF (reduced row-echelon form). This is a very particular way of standardizing the way a matrix looks, and one of the benefits of going through the trouble to do this is that it is (fairly) easy to read the solutions of a linear system off of the RREF version of the corresponding augmented matrix. To transform a matrix into an equivalent matrix in RREF, we use (variants of) Gaussian elimination. To be more specific, Gaussian elimination will get you the (slightly weaker) row-echelon form (REF) while Gauss-Jordan elimination will get you the full RREF. Later on, we'll want to put things into REF, but for now, RREF may be a bit easier to work with....
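
For those who like to see algorithms spelled out, here is a bare-bones Gauss-Jordan elimination in Python/numpy. It is a sketch of the idea from class, not production code (real implementations choose pivots more carefully for numerical stability):

    import numpy as np

    def rref(M, tol=1e-12):
        """Gauss-Jordan elimination: return the RREF of M."""
        M = M.astype(float).copy()
        rows, cols = M.shape
        r = 0                                 # the next pivot row
        for c in range(cols):
            # Find a row at or below r with a nonzero entry in column c.
            pivot = next((i for i in range(r, rows) if abs(M[i, c]) > tol), None)
            if pivot is None:
                continue                      # no pivot in this column
            M[[r, pivot]] = M[[pivot, r]]     # swap it up
            M[r] = M[r] / M[r, c]             # scale the pivot to 1
            for i in range(rows):             # clear the rest of the column
                if i != r:
                    M[i] = M[i] - M[i, c] * M[r]
            r += 1
            if r == rows:
                break
        return M

    print(rref(np.array([[2.0, 4.0, 6.0],
                         [1.0, 3.0, 5.0]])))   # [[1, 0, -1], [0, 1, 2]]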

We ran a little short on time today, so I very quickly went over RREF for nonnumerical matrices and generalized augmented matrices. Check out the homework tips page for some notes that I typed up about these two topics....

Next time, we'll talk about the general form of solutions of linear systems, as understood by looking at the RREF version of the corresponding augmented matrix.


Thursday, August 28

The first homework assignment was due today - please make sure that you get these turned in on time. Points add up over time....

We covered section 1.2 in class today. In particular, we talked about the augmented matrix form of a system of linear equations and various operations that you can perform on such matrices. Next time, we'll see how to piece together these operations to solve linear systems.
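
As a preview of where this is headed, here is a tiny Python/numpy sketch (with a system I made up) of an augmented matrix and one elementary row operation:

    import numpy as np

    # The system x + 2y = 5, 3x + 4y = 6 as an augmented matrix [A | b].
    M = np.array([[1.0, 2.0, 5.0],
                  [3.0, 4.0, 6.0]])

    # One elementary row operation: replace row 2 by (row 2) - 3*(row 1).
    M[1] = M[1] - 3 * M[0]
    print(M)    # [[1, 2, 5], [0, -2, -9]]: the x in the second equation is gone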


Tuesday, August 26

Today, we went over the syllabus a bit, and I spent about 35 minutes lecturing on section 1.1. As for the syllabus, please let me know if there are any questions. Your homework will be due each class day (T, H), and I'll have office hours the afternoon before each class day (M, W). Exams are outside of class - please let me know if that is a problem. Also, 229 is still some sort of requirement for majors and minors in math (please see my email for details), but it is no longer a prerequisite for 369. That might have an effect on your game plan if you are neither a math major nor a math minor.

As for 1.1, we talked about linear systems today. In particular, we defined them and went through a bunch of examples (and non-examples). There are several strange cases (3=6, trig identities, etc.) to watch out for, but hopefully it is fairly clear what is meant by a linear system.

We also talked about solutions of linear systems (still in 1.1). To see if a given n-tuple is a solution, just plug it into each equation. If they all check out, then it's a solution. If at least one fails, it is not a solution. Thus, to prove that an n-tuple is not a solution, you need only produce one equation that it fails to satisfy.
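
This check is easy to mechanize. Here is a Python sketch (with a made-up system) in which the candidate pair fails the second equation, so it is not a solution:

    # Check whether the pair (x, y) = (1, 2) solves the system
    #   x + y = 3
    #   2x - y = 1
    x, y = 1, 2

    equations = [
        x + y == 3,       # the first equation checks out
        2*x - y == 1,     # but 2*1 - 2 = 0, not 1, so this one fails
    ]

    # A solution must satisfy every equation; one failure rules it out.
    print(all(equations))   # False: (1, 2) is not a solution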

There is homework due Thursday (and just about every class day after that) - follow the link on the course webpage to see it. Please let me know if you have any trouble.


Page maintained by Dan Bates