Many properties of a matrix are based on its ``determinant''. To
review how they are calculated, let's start with a simple $3\times 3$
matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\
                      a_{21} & a_{22} & a_{23} \\
                      a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
We learn in high school that the rule for calculating its determinant
is to start by multiplying along the main diagonal:
$a_{11}a_{22}a_{33}$, then along the parallel superdiagonal (wrapping
around): $a_{12}a_{23}a_{31}$, then along the parallel subdiagonal
(again wrapping around): $a_{13}a_{21}a_{32}$. We add these three
terms. Then we switch to the (let's call it) ``antidiagonal'':
$a_{13}a_{22}a_{31}$, and its parallel super and sub antidiagonals:
$a_{11}a_{23}a_{32}$ and $a_{12}a_{21}a_{33}$. These last three products are
subtracted from the sum of the first three. So the full result is
$$\det A = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32}
         - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}.$$
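This six-term recipe is often called the rule of Sarrus. As a quick
sketch in Python (the matrix is stored as a list of rows; the function
name is our own, not from the text):

```python
def det3_sarrus(a):
    """Determinant of a 3x3 matrix a (a list of rows) by the rule of Sarrus."""
    return (a[0][0] * a[1][1] * a[2][2]     # main diagonal
          + a[0][1] * a[1][2] * a[2][0]     # superdiagonal, wrapped around
          + a[0][2] * a[1][0] * a[2][1]     # subdiagonal, wrapped around
          - a[0][2] * a[1][1] * a[2][0]     # antidiagonal
          - a[0][0] * a[1][2] * a[2][1]     # parallel antidiagonal, wrapped
          - a[0][1] * a[1][0] * a[2][2])    # parallel antidiagonal, wrapped

print(det3_sarrus([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -> -3
```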
This method works only for $3\times 3$ matrices. But we can
generalize it by recognizing that it has the compact form
$$\det A = \sum_p \epsilon(p)\, a_{1p_1} a_{2p_2} a_{3p_3},$$
where the sum is over all permutations of the columns 123. We
note that there are six such permutations: 123, 231, 312, 321, 213,
132, which matches the number of terms in our standard form. We use
the shorthand notation $p = (p_1 p_2 p_3)$ to specify one of these six
permutations. Any permutation can be achieved by swapping enough
pairs of members. The first three permutations in this list are
called ``even'', because they are produced by an even number
(including 0) of pairwise exchanges, and the last three are called
``odd'', because they require an odd number. The shorthand notation
$\epsilon(p)$ means plus for an even and minus for an odd permutation.
Expressing the determinant in terms of permutations allows us to
generalize to any size matrix:
$$\det A = \sum_p \epsilon(p)\, a_{1p_1} a_{2p_2} \cdots a_{np_n}.$$
The number of terms in the sum is $n!$.
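The permutation sum translates directly into code. Here is a sketch in
Python (the helper names are our own); the parity of each permutation
is determined by counting inversions, which gives the same sign as
counting pairwise exchanges:

```python
from itertools import permutations

def parity(p):
    """+1 for an even permutation, -1 for an odd one (via inversion count)."""
    n = len(p)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_perm(a):
    """Determinant of a (list of rows) via the sum over all n! permutations."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = parity(p)
        for i in range(n):
            term *= a[i][p[i]]  # one factor from each row, column p[i]
        total += term
    return total

print(det_perm([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -> -3
```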
There are other ways you may have learned to calculate the determinant
of a matrix. We won't go into details here. One way is to use the
``cofactor'' method: Pick a row of the matrix. Let's do it with the
first row $i = 1$. Work your way across the row, visiting each
column $j$ once. At each step, construct an $(n-1)\times(n-1)$ matrix
by eliminating the row you are working with and the column that you
are visiting at that step. Calculate the determinant of the smaller
matrix and multiply it by the sign $(-1)^{i+j}$. This is called the
cofactor. Call it $C_{ij}$. Proceed to
the end of the row. At that point you have a cofactor for each of the
elements in the row. Then the determinant is given by the rule
$$\det A = \sum_{j=1}^n a_{1j} C_{1j}.$$
You can use any row you like. If you use row $i$, then you get
$$\det A = \sum_{j=1}^n a_{ij} C_{ij}.$$
You can also do it by columns.
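A recursive sketch of the expansion along the first row (Python; the
function name is our own) makes the bookkeeping concrete:

```python
def det_cofactor(a):
    """Determinant of a (list of rows) by cofactor expansion along row 1."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        sign = -1 if j % 2 else 1   # (-1)**(i+j) with i = j_column = 1-based
        total += sign * a[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -> -3
```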
The cofactor method is, in fact, algebraically the same as the method
of summing over permutations. It is just another way of rearranging
the terms in the sum.
We list some important properties of determinants without proof:
- The determinant of a product of matrices is the product of the determinants.
- If the inverse of a matrix exists, its determinant is the inverse
of the determinant of the original matrix.
- Taking the transpose does not change the determinant.
- The determinant of a triangular matrix is the product of its diagonal elements.
The rule is the same for upper and lower triangular matrices.
This one is easy to show using the permutation sum. If a permutation
puts a zero matrix element in the product of terms, then that
permutation doesn't contribute anything to the determinant. The only
permutation that doesn't involve at least one
zero matrix element is the identity permutation $p = (1\,2\,\cdots\,n)$. And that
identity permutation gives you the product of the diagonal elements.
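These properties are easy to spot-check numerically. A quick sketch,
assuming NumPy is available (the random matrices here are only test
inputs, not part of the development above):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))
b = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(a @ b), np.linalg.det(a) * np.linalg.det(b))
# det(A^{-1}) = 1 / det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(a)), 1.0 / np.linalg.det(a))
# det(A^T) = det(A)
assert np.isclose(np.linalg.det(a.T), np.linalg.det(a))
# Triangular matrix: determinant = product of the diagonal elements.
t = np.triu(a)
assert np.isclose(np.linalg.det(t), np.prod(np.diag(t)))
```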
What is an efficient way to calculate a determinant? Efficient means
calculating it with the least number of floating point operations,
since that is what usually costs computing effort. The sum over
permutations method (or the equivalent cofactor method) is terribly
inefficient, because it requires computing the product of $n$ factors
in each of $n!$ terms. Including the summation, the number of
floating point operations is $O(n \cdot n!)$, which grows extremely
rapidly with increasing $n$.
A more efficient way to calculate the determinant is to factor the
matrix into a product of a lower triangular and an upper triangular
matrix, $A = LU$. Then $\det A = \det L \, \det U$,
which is just the product of the diagonal elements of each factor.
The cost of doing this using the Crout reduction method is the same as
the cost of doing Gaussian elimination. It grows as $n^3$ as the
matrix size grows. This is still costly, but for large $n$ it is
vastly cheaper than the sum over permutations.
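As a sketch of the idea in Python (a Doolittle-style elimination, a
close cousin of Crout's; no pivoting, so it assumes no zero pivot is
encountered, whereas a production code would add partial pivoting and
flip the determinant's sign at each row swap):

```python
def det_lu(a):
    """Determinant of a (list of rows) via triangular factorization.

    With unit diagonal on L, det(A) is the product of the pivots on
    the diagonal of U.  Sketch only: no pivoting.
    """
    n = len(a)
    u = [[float(x) for x in row] for row in a]  # reduced in place to U
    det = 1.0
    for k in range(n):
        det *= u[k][k]                   # pivot k is final once rows above are done
        for i in range(k + 1, n):
            m = u[i][k] / u[k][k]        # multiplier; this is entry L[i][k]
            for j in range(k, n):
                u[i][j] -= m * u[k][j]
    return det

print(det_lu([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -> -3.0
```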