Monthly Archives: November 2007

A comment on the previous post

It occurred to me that the previous post might have given the wrong impression, for example, that you need to understand the article on the Chinese remainder theorem for the exam. This was far from my intention. I hope it’s obvious that they are actually recommended either for your general amusement, or for those of you who wish to go farther than the constraints of the courses themselves allow.

MK


Three pedagogical articles and one more

Three articles available on my website discuss topics that may be of interest to my current students. But they’re a bit hidden, so I’m adding links here.

Comments on the Chinese remainder theorem

Some matrix groups

Why everyone should know number theory

All three are a bit dated and many sentiments expressed there (especially in the third one) don’t look quite right anymore, but maybe someone will find them amusing, nevertheless.

I’ve now received several inquiries about my actual research. Perhaps I’ll attempt sometime to make at least the main ideas more accessible by writing something expository. In the meantime, here is the presentation I gave at the London-Paris Number Theory Seminar this fall. You can throw it a casual glance if it seems worth the bother.

MK

Serge Lang

While conversing with Acyr Locatelli about the textbook on linear algebra, I ended up elaborating a bit on the author Serge Lang, who was mentioned earlier as the supervisor for my Ph.D. thesis. I thought therefore to provide links to two articles that appeared in the Notices of the American Mathematical Society a short while after his death. One concentrates on the personality, and the other, on mathematics. My contribution appears in the latter, which, unfortunately, is a bit involved. The former, on the other hand, presents a vivid portrait of an exceedingly colorful mathematician. So it may be of interest, especially to students thinking seriously about an academic career. In any case, it’s well-known that familiarity with the author can make books come alive, even when (especially when, I like to think) it’s about fundamental mathematics.

MK

Matrix representation of quadratic forms

Hi Sir

I was doing question 2 of Sheet 6 (2201) and I am confused about how to find the matrix corresponding to the basis B = {(1,1,1), (1,1,-1), (1,0,-1)}. I read the article on quadratic forms on your webpage, but that did not clear it up.

Could I write the basis as v={(x+y+z, x+y, x-y-z)} and transpose this to find the matrix? i.e.

Q(v)= (v^T)(A)(v),

and set this answer equal to each equation?

I have tried to do this but the algebra is very tricky, could you explain a simpler way?

I looked at the example given in the article and the (1,2)th entry was given by:

[q(1,1,0)-q(1,0,0)-q(0,1,0)]/2 = (17-3-6)/2 = 4

How was the value 17 calculated?

Thank you for taking the time to read this

Kind Regards

Reply:

To answer the last question first, the 17 is exactly what you would expect from that line, namely, the value q(1,1,0).

As to your general question, first of all, you shouldn’t be writing something like

v={(x+y+z, x+y, x-y-z)}

when referring to a basis. Since we are in R^3, a basis should be a collection of three specific three-component vectors, such as the standard basis {(1,0,0), (0,1,0), (0,0,1)} or the basis

B={(1,1,1),(1,1,-1), (1,0,-1)}

given. I will write B={b_1,b_2,b_3} for ease of notation, in the order displayed. Then for any bilinear form f, the matrix [f]_B will have (i,j) entry equal to f(b_i,b_j), by definition. The difficulty arises here, because we are just given q(v)=f(v,v), and we need to figure out the bilinear function f(v,w) for v and w different, just from that information. We are saved by the identity

f(v,w)=(1/2)[q(v+w)-q(v)-q(w)]

for symmetric f. So, for example, to find the (1,2) entry, you would calculate

(1/2)[q(b_1+b_2)-q(b_1)-q(b_2)] = (1/2)[q(2,2,0)-q(1,1,1)-q(1,1,-1)]

I’ll leave it to you at this point to figure out these and other necessary values for each of the quadratic forms on the worksheet.
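
If you like to check such a computation with the computer, here is a small sketch in Python. The quadratic form q in it is made up purely for illustration and is not one of the forms from the worksheet, so treat it only as a template.

import numpy as np

# A made-up quadratic form on R^3, standing in for the one on the sheet.
def q(v):
    x, y, z = v
    return 3*x**2 + 2*y**2 + z**2 + 4*x*y - 2*y*z

# The basis from the question, in the order b_1, b_2, b_3.
B = [np.array([1, 1, 1]), np.array([1, 1, -1]), np.array([1, 0, -1])]

# Recover the symmetric bilinear form by polarization:
# f(v, w) = (1/2)[q(v + w) - q(v) - q(w)].
def f(v, w):
    return 0.5 * (q(v + w) - q(v) - q(w))

# The matrix [f]_B has (i, j) entry f(b_i, b_j).
M = np.array([[f(bi, bj) for bj in B] for bi in B])
print(M)

Of course, on the worksheet you should carry this out by hand, but such a check can catch arithmetic slips.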

MK

Quadratic forms

I just added a small ‘Remark on quadratic forms’ to the course webpage, which I hope you’ll read. It was prompted by a question from one of you concerning the reason for looking at the quadratic form associated to a symmetric bilinear form. I gave some kind of an answer, and then realized that the direction of application had actually been reversed. So take a look! I’m aware of your general reluctance to read such things :), so I’ll mention that the note might even help you with the coursework this week.

MK

Diagonalization

Hi Sir,

I need a little help on diagonalising matrices. What do you do if you only have one eigenvalue? For example, on question 1 of this week’s homework, the result of det(XI-A) is (X-2)^2=0, and (A-2I)v=0 gives x=y, so my eigenvector is (1,1)^t.
How do I continue from here?
Thanks.

Reply:

Think about the following:

When is an nxn matrix with entries from a field F (say, Q, R, C) diagonalizable over F?

(1) If and only if it has a basis of eigenvectors in F^n.

(2) If and only if the minimal polynomial has all roots in F and has no multiple roots.

If you consider the exercise from either of these perspectives, the answer should be clear.
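
If you want to double-check a hand computation, here is one possible sanity check in Python using sympy. The 2x2 matrix A below is only a placeholder with a repeated eigenvalue, not necessarily the matrix from the homework.

import sympy as sp

# Placeholder 2x2 matrix with the single eigenvalue 2; substitute the
# matrix from question 1 of the homework.
A = sp.Matrix([[2, 1],
               [0, 2]])
n = A.shape[0]

# Criterion (1): diagonalizable over F iff there is a basis of eigenvectors,
# i.e. the geometric multiplicities add up to n.
geometric = sum(len(vects) for _, _, vects in A.eigenvects())
print(geometric == n)          # False: only one independent eigenvector

# sympy can also answer directly.
print(A.is_diagonalizable())   # False

Either criterion tells the same story for this placeholder: a single eigenvalue with only one independent eigenvector means the matrix cannot be diagonalized.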

MK

More about this blog

It seems reasonable at this point to extend an invitation to others who may have mathematical questions suitable for discussion on this blog. Students in my tutorial groups, for example, are strongly encouraged to use this medium. However, even if you’re enrolled in some other mathematics course at UCL, or are, for that matter, a random student who found out about this site and wishes to pose a question, feel free to send it to me using the submission form or by email, and I’ll do my best to reply. The email address to use is

myfirstname.mylastname@ucl.ac.uk

The email option is useful if your question is complicated and you need to attach a file.

Best,

Minhyong Kim

Textbook

Hi Prof Kim,

I’m wondering whether there are any recommended books that I can buy for 2201? Thank you.

Mi Sun

Reply:

The book by Serge Lang listed on the syllabus is a rather nice classic. I have some sentimental attachment to it because it was written by my late Ph.D. supervisor. Another clear exposition is the textbook by Gilbert Strang. Incidentally, you should check out the MIT on-line course delivered by Strang. You’ll find many other resources on that page as well.

Finally, a good computationally intensive book with many examples is ‘Linear Algebra in Action’ by Harry Dym.

MK

Minimal polynomials and generalized eigenspaces

Hi Professor Kim,
Having read various notes and definitions on the internet, I still don’t understand how to calculate the minimal polynomial in your Linear Algebra course.

If I take the Homework 4 sheet as an example, I have no problem working out the characteristic polynomials in questions 2 and 3, which I find to be (X-1)(X-2)(X-1), (X-2)(X-2)(X-1) [Q.2] and (X-6)^4 [Q.3]. Could you tell me how to work out m_A(X) from this?

Also, in the notes it says that V1(lambda) is contained in V2(lambda) and so on, which agrees with my answer to question 2, but in question 3 I have worked out V1, V2 and V3 and they come out as

V1 = (0,-2,-1,1 )^t, V2 = (0,1,1,0 )^t and (0,-1,0,1 )^t V3= (1,0,0)^t and (0,1,0 )^t and (0,0,1)^t

(and the last result means there’s no point in carrying on). As you can see, none of the previous vectors are contained in the next set.

Could you tell me where I’m going wrong? The only thing I can think of is that if b=1 (so that m_A(X) = (X-6)^1 and we only work out V1), then the containment issue is void.

Thanks in advance for any help.

Reply:

I will make a few comments based on the computation you give, with no guarantee of their correctness. Most importantly, the V_t(lambda) are subspaces, so you should write something like

V_1 = Span{(0,-2,-1,1)^t}, V_2 = Span{(0,1,1,0)^t, (0,-1,0,1)^t}, and so on.

This resolves part of your difficulty. That is, the assertion is that

V_1\subset V_2\subset V_3, …

as subspaces. The containment isn’t about any particular bases you may have chosen. For example, note that

(0,-2,-1,1)^t = (0,-1,0,1)^t - (0,1,1,0)^t

so that definitely V_1 \subset V_2. Try working it out again with this in mind.

Having said this, I should point out that what you wrote for V_3 doesn’t make sense, since your vectors should have four components. Read some of the previous posts for some comments on the minimal polynomial.
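
If, after redoing the calculation, you would like to check the containments with the computer, here is one way to set it up in Python with sympy. The 4x4 matrix A below is only a stand-in with the single eigenvalue 6, not the actual matrix from Homework 4, so use it as a template.

import sympy as sp

# Stand-in 4x4 matrix with the single eigenvalue 6; replace it with the
# matrix from question 3 of Homework 4.
A = sp.Matrix([[6, 1, 0, 0],
               [0, 6, 1, 0],
               [0, 0, 6, 1],
               [0, 0, 0, 6]])
lam = 6
N = A - lam * sp.eye(4)

# V_t(lambda) is the null space of (A - lambda*I)^t; these spaces grow with t.
for t in (1, 2, 3, 4):
    basis = (N**t).nullspace()
    print(f"dim V_{t} =", len(basis))        # 1, 2, 3, 4 for this stand-in A

# A basis vector of V_1 automatically lies in V_2: (A - 6I)^2 kills it.
v = N.nullspace()[0]
print((N**2) * v == sp.zeros(4, 1))          # True

# Since 6 is the only eigenvalue here, m_A(X) = (X-6)^b where b is the
# smallest power with (A - 6I)^b = 0; for this stand-in, b = 4.
b = next(t for t in range(1, 5) if N**t == sp.zeros(4, 4))
print(b)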

MK

Logic question

Dear Prof. Kim,

Sorry to disturb your work. I am one of your students. Since we do not have a tutorial during this reading week, I have a problem understanding one of the questions from the homework. Other students have the same problem as well. The question is as follows.
Q: Show that every truth table in 2 variables is the truth table of some formula of Propositional Calculus. (Hint: 8=16/2)

I would really appreciate it if you could explain it to me.

Thanks,
Austin

Reply:

Call the variables A and B. Then given any assignment of truth values for A and B, you should be able to cook up a formula that’s true for that assignment and none else. This is very simple. For A=F and B=T, for example, you can use

(not A) and B

Make sure you understand this elementary construction. Now start experimenting with some truth tables:

A B | P
--------
T T |
T F |
F T |
F F |

That is, start with some sample fill-ins for the third column, and see if you can combine the elementary formulas above to come up with a P that works. If you experiment with a few, you may see a pattern. Alternatively, since there are only sixteen truth tables possible, you could just do them all separately in an ad hoc way 🙂
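
For those who like to experiment with the computer, here is a small Python sketch of one standard construction (the disjunctive normal form): take the elementary formula for each row where P is true, and join these with ‘or’. The sample table and the helper name formula are just mine for illustration, and this is not necessarily the argument the hint is pointing at.

from itertools import product

# One sample truth table for P, indexed by the values of (A, B);
# any of the sixteen possible tables can go here.
table = {(True, True): False,
         (True, False): True,
         (False, True): True,
         (False, False): False}   # this particular P is "A xor B"

def formula(table):
    """Build a formula (as a string) whose truth table is `table`."""
    clauses = []
    for (a, b), p in table.items():
        if p:  # pick out exactly this row with a conjunction of literals
            clauses.append(f"({'A' if a else 'not A'}) and ({'B' if b else 'not B'})")
    # If P is never true, any contradiction works, e.g. "A and (not A)".
    return " or ".join(clauses) if clauses else "A and (not A)"

P = formula(table)
print(P)

# Check that the formula really reproduces the table.
for A, B in product([True, False], repeat=2):
    assert eval(P) == table[(A, B)]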