Dear Prof. Kim,

The definition of an adjoint map given in the lectures relied on an orthonormal basis. Can we still define an adjoint map on a vector space that does not have an orthonormal basis by using the (immediate) lemma in the lectures (definition 5.3.132)? (Though I do realise the proof of existence and uniqueness in the online notes still assumed an orthonormal basis.)

First of all, I would very much like to laud the intent of this question. Original inquiry almost always begins by examining an existing result or notion and asking whether it could be done differently, or what exactly the necessary ingredients of a proof are. Unfortunately, exams that dwell excessively on reproducing proofs tend not to encourage this kind of constructive thinking. My friend Graeme Segal, who taught at Cambridge for many years, complained to me that students in his courses absolutely hated it when he gave two proofs of one theorem in order to illustrate different perspectives on an important result. Such a reaction is a very sorry state of affairs, to which I hope fewer people will wish to contribute. Within this context, your question is indeed very welcome.

Now regarding adjoints: As you say, the definition is as stated in the online notes. It is in showing that the definition makes sense, i.e., that such a linear map indeed exists, that one needs the basis.
Let’s elaborate on this a bit.

Let $(V, \langle \cdot, \cdot \rangle)$ be an inner product space. A basic fact is that a vector $v \in V$ is completely determined by its inner product with other vectors in $V$. That is, if

$\langle v_1, w\rangle =\langle v_2, w\rangle$

for all $w \in V$, then $v_1=v_2$. Prove this! (Easy.)

This fact suffices to show the uniqueness of the adjoint (again, prove this). It is in proving the existence of a linear map $A^*$ such that

$\langle Av, w\rangle =\langle v, A^*w\rangle$

(equivalently,

$\langle A^*v, w\rangle =\langle v, Aw\rangle )$

that one needs to pause and think about bases. But at this point, it is actually very natural to use an orthonormal basis. Let me explain this important point. Suppose $B=\{b_1,\ldots, b_n\}$ is any basis at all. Then a vector $v$ can be specified by simply giving its coefficients with respect to the basis $B$. That is, if we are given a collection $c_1,\ldots, c_n$ of numbers, this determines a vector

$v=c_1b_1+c_2b_2+\cdots +c_nb_n.$

Now here is the point: If $B$ happens to be an orthonormal basis, then giving the $c_i$ is exactly the same as specifying the inner products $\langle v, b_i \rangle$. This is why the equations
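To spell this out (a one-line computation, using the convention from the notes that the inner product is linear in its first argument): if $v = c_1 b_1 + \cdots + c_n b_n$ and $B$ is orthonormal, then

```latex
\langle v, b_j \rangle
  \;=\; \Big\langle \sum_{i=1}^{n} c_i b_i,\; b_j \Big\rangle
  \;=\; \sum_{i=1}^{n} c_i \langle b_i, b_j \rangle
  \;=\; \sum_{i=1}^{n} c_i \,\delta_{ij}
  \;=\; c_j ,
```

so knowing the inner products $\langle v, b_j \rangle$ is literally the same as knowing the coefficients $c_j$. For a basis that is not orthonormal, the middle sum does not collapse, and the two pieces of data are related only through the Gram matrix $\langle b_i, b_j \rangle$.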

$\langle u, b_i\rangle =\langle v, Ab_i\rangle$

determine a unique vector $u$ for each given vector $v$. So we can define $A^*v$ to be this $u$. The fact that the assignment

$v \mapsto A^*v$

is linear follows easily from the uniqueness and the linearity of $A$.
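If it helps to see this concretely, here is a small numerical sketch (my own illustration, not from the lectures): on $\mathbb{C}^n$ with the standard inner product and the standard orthonormal basis, the adjoint of a matrix $A$ is its conjugate transpose, and one can check the defining identity numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(v, w):
    # Standard complex inner product, linear in the FIRST argument:
    # <v, w> = sum_i v_i * conj(w_i).
    # np.vdot conjugates its first argument, hence the order (w, v).
    return np.vdot(w, v)

n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Adjoint with respect to the standard orthonormal basis:
# the conjugate transpose of A.
A_star = A.conj().T

v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The defining identity <Av, w> = <v, A* w>, up to rounding error.
assert np.isclose(inner(A @ v, w), inner(v, A_star @ w))
```

Of course this is only the coordinate shadow of the abstract statement: the point of the construction above is that it works in any finite-dimensional inner product space, once an orthonormal basis is chosen.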

To give a proof that doesn’t refer to an orthonormal basis at all, the most natural way is to define the dual vector space $V^*$ to a vector space $V$. I won’t go through the details here, but will just throw out the words for you to look up. We didn’t discuss it in our course, but the notion of a dual space is actually critically important in mathematics. In regard to adjoints, the key point is that the inner product defines a linear or conjugate-linear bijection from $V$ to $V^*$. As I said, look up these notions and use them to give an ‘abstract’ proof yourself of the existence of $A^*$.
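As a hint for how the pieces can fit together (with all details deliberately left to you): write $\Phi : V \to V^*$ for the map $\Phi(w) = \langle \,\cdot\,, w \rangle$ induced by the inner product, and $A^{t} : V^* \to V^*$ for the dual map of $A$, defined by $(A^{t}f)(v) = f(Av)$. Then one can take

```latex
A^* \;=\; \Phi^{-1} \circ A^{t} \circ \Phi,
\qquad\text{since}\qquad
\big(A^{t}\Phi(w)\big)(v) \;=\; \langle Av, w \rangle
\;=\; \langle v, A^* w \rangle \;=\; \Phi(A^* w)(v).
```

The work, which I leave to you, is in proving that $\Phi$ really is a bijection in finite dimensions; note that this step uses the nondegeneracy of the inner product, but no basis at all.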