Determinants

Many properties of a matrix are based on its ``determinant''. To review how they are calculated, let's start with a simple $3\times 3$ matrix

\begin{displaymath}
A =
\left(
\begin{array}{rrr}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{array}\right)
\end{displaymath}

We learn in high school that the rule for calculating its determinant is to start by multiplying along the main diagonal: $a_{11} a_{22} a_{33}$, then along the parallel superdiagonal (wrapping around): $a_{12} a_{23} a_{31}$, then along the parallel subdiagonal (again wrapping around): $a_{21} a_{32} a_{13}$. We add these three terms. Then we switch to the (let's call it) ``antidiagonal'': $a_{13} a_{22} a_{31}$ and its parallel super- and sub-antidiagonals (wrapping around): $a_{12} a_{21} a_{33}$ and $a_{11} a_{23} a_{32}$. These last three products are subtracted from the sum of the first three, so the full result is

\begin{displaymath}
\det\vert A\vert = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32}
 - a_{13} a_{22} a_{31} - a_{12} a_{21} a_{33} - a_{11} a_{23} a_{32} \; .
\end{displaymath}
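
As a quick check of the rule, take the matrix with rows $(1,2,3)$, $(4,5,6)$, and $(7,8,10)$. The recipe gives

\begin{displaymath}
1\cdot 5\cdot 10 + 2\cdot 6\cdot 7 + 3\cdot 4\cdot 8
 - 3\cdot 5\cdot 7 - 2\cdot 4\cdot 10 - 1\cdot 6\cdot 8
 = 50 + 84 + 96 - 105 - 80 - 48 = -3 .
\end{displaymath}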

This method works only for $3\times 3$ matrices. But we can generalize it by recognizing that it has the compact form

\begin{displaymath}
\det\vert A\vert = \sum_{P} (-)^P a_{1,P1} a_{2,P2} a_{3,P3}
\end{displaymath}

where the sum is over all permutations $P$ of the column labels 123. There are six such permutations: 123, 231, 312, 321, 213, 132, which matches the number of terms in our standard form. We use the shorthand notation $P1,P2,P3$ to specify one of these six permutations. Any permutation can be achieved by swapping enough pairs of members. The first three permutations in this list are called ``even'', because they are produced by an even number (including 0) of pairwise exchanges, and the last three are called ``odd'', because they require an odd number. For example, 231 is even, since $123 \to 213 \to 231$ takes two exchanges, while 321 is odd, since a single exchange of the first and last members produces it from 123. The shorthand notation $(-)^P$ means plus for an even and minus for an odd permutation.

Expressing the determinant in terms of permutations allows us to generalize to any size matrix:

\begin{displaymath}
\det\vert A\vert = \sum_{P} (-)^P a_{1,P1} a_{2,P2} a_{3,P3} \ldots{} a_{n,Pn}
\end{displaymath}

The number of terms in the sum is $n!$.
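
To make the sum over permutations concrete, here is a minimal Python sketch (the function names are ours, chosen for this example): it visits all $n!$ permutations, determines the sign of each by counting inversions, and accumulates the signed products.

\begin{verbatim}
import itertools

def perm_sign(p):
    # Parity of a permutation equals the parity of its inversion count.
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_permutations(a):
    # det A = sum over permutations P of (-)^P a[0][P0] ... a[n-1][P(n-1)]
    n = len(a)
    total = 0.0
    for p in itertools.permutations(range(n)):
        term = float(perm_sign(p))
        for i in range(n):
            term *= a[i][p[i]]
        total += term
    return total
\end{verbatim}

Applied to the $3\times 3$ example above, \texttt{det\_permutations} returns $-3$.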

There are other ways you may have learned to calculate a determinant. One is the ``cofactor'' method: Pick a row of the matrix, say the first row $a_{1,k}$. Work your way across the row, visiting each column once. At each step, construct an $(n-1) \times (n-1)$ matrix by deleting the row you are working with and the column you are visiting at that step. Calculate the determinant of the smaller matrix and call it $A_{1,k}$. (Strictly speaking, this determinant is the ``minor''; the cofactor includes the alternating sign, which we keep explicit in the sum below.) Proceed to the end of the row. At that point you have one such determinant for each of the elements in the row. Then the determinant is given by the rule

\begin{displaymath}
\det\vert A\vert = \sum_k (-)^{k-1} a_{1,k} A_{1,k}
\end{displaymath}

You can use any row you like. If you use row $j$, then you get

\begin{displaymath}
\det\vert A\vert = \sum_k (-)^{k+j} a_{j,k} A_{j,k}
\end{displaymath}

You can also do it by columns.

The cofactor method is, in fact, algebraically the same as the method of summing over permutations. It is just another way of rearranging the terms in the sum.
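
Here is the same idea as a short recursive Python sketch (again with names of our own choosing), expanding along the first row. With zero-based indexing, the sign $(-)^{k-1}$ of our one-based formula becomes $(-1)^k$.

\begin{verbatim}
def det_cofactor(a):
    # Laplace (cofactor) expansion along the first row.
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0.0
    for k in range(n):
        # Minor: delete row 0 and column k.
        minor = [row[:k] + row[k+1:] for row in a[1:]]
        total += (-1) ** k * a[0][k] * det_cofactor(minor)
    return total
\end{verbatim}

Like the permutation sum it is equivalent to, this recursion touches $n!$ products, so it is useful for understanding, not for large matrices.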

We list some important properties of determinants without proof:
\begin{itemize}
\item $\det\vert A^T\vert = \det\vert A\vert$, so anything true of rows is also true of columns.
\item Interchanging two rows (or two columns) changes the sign of the determinant.
\item If $A$ is triangular, $\det\vert A\vert$ is the product of its diagonal elements.
\item $\det\vert AB\vert = \det\vert A\vert \, \det\vert B\vert$.
\item $\det\vert A\vert \ne 0$ exactly when $A$ is invertible.
\end{itemize}

What is an efficient way to calculate a determinant? Here ``efficient'' means using the fewest floating point operations, since those dominate the computing cost. The sum over permutations (or the equivalent cofactor method) is terribly inefficient, because it requires computing the product of $n$ factors in each of $n!$ terms. Including the summation, the number of floating point operations scales as $n\times n!$, which grows extremely rapidly with increasing $n$.
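
To get a feeling for the numbers: at $n = 10$ the permutation sum already costs roughly $10 \times 10! \approx 3.6\times 10^{7}$ operations, and at $n = 20$ roughly $20 \times 20! \approx 4.9\times 10^{19}$, out of reach of any computer, while the $n^3$ method described next needs only on the order of $20^3 = 8000$.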

A more efficient way to calculate the determinant is to factor the matrix into the product of a lower triangular matrix $L$ and an upper triangular matrix $U$, so that $A = LU$. Then

\begin{displaymath}
\det\vert A\vert = \det\vert L\vert\det\vert U\vert
\end{displaymath}

which is just the product of the diagonal elements of each factor. The cost of doing this with the Crout reduction method is the same as the cost of Gaussian elimination: it grows as $n^3$ with the matrix size. This is still costly, but for large $n$ it is vastly cheaper than the sum over permutations.
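
As an illustration, here is a bare-bones Python sketch of the idea, using elimination in the Doolittle form, where $L$ carries ones on its diagonal so the determinant is just the product of the pivots of $U$. For simplicity it omits pivoting; a serious implementation (Crout's included) would pivot and flip the sign of the result once for each row interchange.

\begin{verbatim}
def det_lu(a):
    # Determinant via LU factorization (Doolittle: L has unit diagonal,
    # so det A = det U = product of the pivots). No pivoting here.
    n = len(a)
    u = [row[:] for row in a]        # working copy, reduced to U in place
    det = 1.0
    for k in range(n):
        if u[k][k] == 0.0:
            return 0.0               # zero pivot: singular, or needs pivoting
        det *= u[k][k]
        for i in range(k + 1, n):
            m = u[i][k] / u[k][k]    # multiplier; this is the entry L[i][k]
            for j in range(k, n):
                u[i][j] -= m * u[k][j]
    return det
\end{verbatim}

On the $3\times 3$ example used earlier this again gives $-3$, now with $O(n^3)$ work.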