## More Jordan bases and canonical forms

Dear Professor,

I have a couple of questions about the Jordan canonical form section of the course if you get a spare moment.
1. I understand that if we are considering linear maps from a vector space over the complex numbers to another vector space over the complex numbers, then eigenvalues always exist. When we find a Jordan basis, we use these eigenvalues. Does this mean we are assuming that the field in question is the complex numbers? Surely if not, then these eigenvalues aren’t guaranteed to exist, so we can’t always find a Jordan basis?

2. After finding a pre-Jordan basis, we have to replace some basis vectors so as to satisfy the condition that if $b_i$ belongs to $V_i(\lambda)$, then $(L -\lambda)b_i$ belongs to $V_{i-1}(\lambda)$ for $i$ greater than 1. I have noticed that sometimes there is a choice as to which basis vector in $V_{i-1}(\lambda)$ to replace. Does it matter? Is there a convention? Similarly, when extending our basis

$B_1\cup B_2\cup \cdots \cup B_{i-1}$

for

$V_{i-1}(\lambda)$

to be a basis for $V_i(\lambda)$, there may be a choice as to which basis vector we choose to add. Does it matter? Is there a convention? My questions above lead me to believe that there are actually infinitely many Jordan bases for any one linear map / matrix, but that their Jordan canonical form is unique up to permutation of the Jordan blocks. Am I correct in thinking this?

3. Finally, say we have a linear map with two eigenvalues, $\alpha$ and $\beta.$ Say the characteristic polynomial is

$(X - \alpha)^a (X - \beta)^b$

and the minimal polynomial is

$(X - \alpha)^c (X - \beta)^d$

with, obviously, $c$ less than or equal to $a$ and $d$ less than or equal to $b$. I don’t understand how we know without proof that $\dim V_c(\alpha) = a$. I have chosen a two-eigenvalue case for simplicity; obviously I’m interested in the cases with several eigenvalues too.

Thanks very much!

1. You are right that the eigenvalues need to be contained in the field of definition for a linear map to admit a Jordan canonical form. For example, the rotation matrix

$\left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)$

will not have a Jordan canonical form over $R$. Over $C$, however, it is diagonalizable, and the diagonal form *is* the Jordan canonical form. (What is it?)
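As a quick check, here is a minimal sketch using sympy (an assumed tool, not part of the course); note that running it reveals the answer to the parenthetical question, so try it by hand first.

```python
from sympy import Matrix, I, symbols

A = Matrix([[0, -1], [1, 0]])  # the rotation matrix above
X = symbols('X')

# The characteristic polynomial is X^2 + 1, which has no real roots,
# so there is no Jordan canonical form over R.
assert A.charpoly(X).as_expr() == X**2 + 1

# Over C, sympy finds A = P * J * P^{-1} with J diagonal.
P, J = A.jordan_form()
assert J[0, 1] == 0 and J[1, 0] == 0   # J is diagonal
assert {J[0, 0], J[1, 1]} == {I, -I}   # the eigenvalues are +-i
```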

2. There is an error in the condition you write, which could be minor or serious. To be a Jordan basis for a linear map $L$ with one eigenvalue $\lambda$, the requirement is that

$(L-\lambda)B_i\subset B_{i-1}$

for $i>1$. Look carefully to see and understand the difference from what you wrote. (Actually, from the overall understanding reflected in your message, I suspect your mistake was just a misprint. But I wrote the above for the general reader.)
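To see the corrected condition in action, consider a single $3\times 3$ Jordan block, for which the standard basis with $B_1=\{e_1\}$, $B_2=\{e_2\}$, $B_3=\{e_3\}$ is a Jordan basis. A minimal sketch, assuming sympy is available:

```python
from sympy import Matrix, zeros

lam = 5  # any eigenvalue works; the structure is what matters
J = Matrix([[lam, 1,   0],
            [0,   lam, 1],
            [0,   0,   lam]])
e1, e2, e3 = Matrix([1, 0, 0]), Matrix([0, 1, 0]), Matrix([0, 0, 1])

N = J - lam * Matrix.eye(3)
# (L - lam) sends each basis vector to the PREVIOUS basis vector itself,
# not merely to some vector lying in the span of the earlier ones.
assert N * e3 == e2
assert N * e2 == e1
assert N * e1 == zeros(3, 1)
```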

As to your question, you are right that there are many choices involved. This is a point people often find confusing, not just in this topic but in many basic mathematical problems: situations where there is not a unique solution. We just have to understand the material well enough to feel relaxed about the choices. There is no general convention I can think of regarding 'good' choices, other than obvious demands of economy like simple numbers and as many zero entries as possible.

All of your remaining observations are correct. I wouldn’t be too surprised if a study of the *space of all possible Jordan bases* would yield some insight on good choices, at least in some natural special situations.
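As a small illustration of the non-uniqueness (a sketch assuming sympy; the matrices here are my own choices, not from the course), two different Jordan bases for the same nilpotent map produce the same canonical form:

```python
from sympy import Matrix

A = Matrix([[0, 1], [0, 0]])   # a single nilpotent Jordan block

# Two different Jordan bases, written as the columns of P1 and P2:
P1 = Matrix([[1, 0], [0, 1]])  # basis e1, e2
P2 = Matrix([[1, 1], [0, 1]])  # basis e1, e1 + e2 (a different valid choice)

J1 = P1.inv() * A * P1
J2 = P2.inv() * A * P2
assert J1 == J2 == A           # the canonical form itself is unchanged
```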

3. The general discussion is an obvious generalization of the case with two eigenvalues, so let’s stick to your question as it stands.

For $v \in V_c(\alpha)$, we have

$(L-\alpha )v \in V_{c-1}(\alpha)\subset V_c(\alpha)$

and hence,

$Lv\in \langle v \rangle+V_c(\alpha)=V_c(\alpha).$

Therefore, $V_c(\alpha)$ is stabilized (that is, taken to itself) by $L$. Similarly, $V_d(\beta)$ is stabilized by $L$. By considering the shape of the Jordan canonical form for

$L|V_c(\alpha)$

(the one eigenvalue case) we see that

$ch_{L|V_c(\alpha)}(X)=(X-\alpha)^s$

for $s=\dim V_c(\alpha)$. Similarly,

$ch_{L|V_d(\beta)}(X)=(X-\beta)^t$

for $t=\dim V_d(\beta)$. But we have

$V=V_c(\alpha)\oplus V_d(\beta)$

by the primary decomposition theorem. So

$ch_L(X)=ch_{L|V_c(\alpha)}(X)ch_{L|V_d(\beta)}(X)$

or

$(X-\alpha)^a(X-\beta)^b=(X-\alpha)^s(X-\beta)^t.$

Therefore,

$a=s=\dim V_c(\alpha)$

and

$b=t=\dim V_d(\beta)$.
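To make the conclusion concrete, here is a hedged numerical check (sympy assumed; the matrix is my own example, not from the course). Take $\alpha=1$ with $a=3$, $c=2$ (one $2$-block plus one $1$-block) and $\beta=2$ with $b=d=1$; then $\dim V_c(\alpha)$ should come out to $a=3$:

```python
from sympy import Matrix, eye

# Block-diagonal: a 2-block and a 1-block for eigenvalue 1, a 1-block for 2.
A = Matrix([[1, 1, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 2]])

# V_c(alpha) = ker (A - alpha*I)^c; its dimension should be a = 3.
Vc = ((A - 1 * eye(4))**2).nullspace()
assert len(Vc) == 3

# V_d(beta) = ker (A - beta*I)^d; its dimension should be b = 1.
Vd = ((A - 2 * eye(4))**1).nullspace()
assert len(Vd) == 1
```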