<BODY > <HR><center> <table border=1> <td> <A NAME="tex2html26" HREF="node1.html"><img height=36 width=36 alt="PREVIOUS" src="../../../icons/latex2html/previous.gif"></A></td> <td> <A NAME="tex2html32" HREF="linalg.html"><img height=36 width=36 alt="UP" src="../../../icons/latex2html/up.gif"></A></td> <td> <A NAME="tex2html34" HREF="node3.html"><img height=36 width=36 alt="NEXT" src="../../../icons/latex2html/next.gif"></A></td> <td><a href="contents-node2.html"><img height=36 width=36 alt="CONTENTS" src="../../../icons/latex2html/toc.gif"></a></td> </table> </center> <HR> <!--End of Navigation Panel--> <H1><A NAME="SECTION00020000000000000000"> Determinants</A> </H1> <P> Many properties of a matrix are based on its ``determinant''. To review how it is calculated, let's start with a simple <IMG WIDTH="40" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img35.png" ALT="$3\times 3$"> matrix <BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} A = \left( \begin{array}{rrr} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right) \end{displaymath} --> <IMG WIDTH="169" HEIGHT="64" BORDER="0" SRC="img36.png" ALT="\begin{displaymath} A = \left( \begin{array}{rrr} a_{11} &amp; a_{12} &amp; a_{13} ... 
...22} &amp; a_{23} \\ a_{31} &amp; a_{32} &amp; a_{33} \end{array}\right) \end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> We learn in high school that the rule for calculating its determinant is to start by multiplying along the main diagonal: <!-- MATH $a_{11} a_{22} a_{33}$ --> <IMG WIDTH="70" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img37.png" ALT="$a_{11} a_{22} a_{33}$">, then along the parallel super diagonal (wrapping around): <!-- MATH $a_{12} a_{23} a_{31}$ --> <IMG WIDTH="70" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img38.png" ALT="$a_{12} a_{23} a_{31}$">, then along the parallel sub diagonal (again wrapping around): <!-- MATH $a_{21} a_{32} a_{13}$ --> <IMG WIDTH="70" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img39.png" ALT="$a_{21} a_{32} a_{13}$">. We add these three terms. Then we switch to the (let's call it) ``antidiagonal'': <!-- MATH $a_{13} a_{22} a_{31}$ --> <IMG WIDTH="70" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img40.png" ALT="$a_{13} a_{22} a_{31}$"> and its parallel super and sub antidiagonals: <!-- MATH $a_{12} a_{21} a_{33}$ --> <IMG WIDTH="70" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img41.png" ALT="$a_{12} a_{21} a_{33}$"> and <!-- MATH $a_{11} a_{23} a_{32}$ --> <IMG WIDTH="70" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img42.png" ALT="$a_{11} a_{23} a_{32}$">. These last three products are subtracted from the sum of the first three. So the full result is <BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} \det|A| = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31} - a_{12} a_{21} a_{33} - a_{11} a_{23} a_{32} \, . \end{displaymath} --> <IMG WIDTH="554" HEIGHT="28" BORDER="0" SRC="img43.png" ALT="\begin{displaymath} \det\vert A\vert = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{... ...{22} a_{31} - a_{12} a_{21} a_{33} - a_{11} a_{23} a_{32} . 
\end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> <P> This method works only for <IMG WIDTH="40" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img35.png" ALT="$3\times 3$"> matrices. But we can generalize it by recognizing that it has the compact form <BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} \det|A| = \sum_{P} (-)^P a_{1,P1} a_{2,P2} a_{3,P3} \end{displaymath} --> <IMG WIDTH="230" HEIGHT="45" BORDER="0" SRC="img44.png" ALT="\begin{displaymath} \det\vert A\vert = \sum_{P} (-)^P a_{1,P1} a_{2,P2} a_{3,P3} \end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> where the sum is over all permutations <IMG WIDTH="17" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img45.png" ALT="$P$"> of the columns 123. We note that there are six such permutations: 123, 231, 312, 321, 213, 132, which matches the number of terms in the formula above. We use the shorthand notation <IMG WIDTH="80" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img46.png" ALT="$P1,P2,P3$"> to specify one of these six permutations. Any permutation can be achieved by swapping enough pairs of members. The first three permutations in this list are called ``even'', because they are produced by an even number (including 0) of pairwise exchanges, and the last three are called ``odd'', because they require an odd number. The shorthand notation <IMG WIDTH="40" HEIGHT="35" ALIGN="MIDDLE" BORDER="0" SRC="img47.png" ALT="$(-)^P$"> means plus for an even and minus for an odd permutation. 
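<P> The permutation sum translates almost directly into code. The following Python sketch (not from the text; the function names are our own, for illustration) finds the sign of each permutation by counting the pairwise swaps needed to sort it, then accumulates the signed products of matrix elements:

```python
# Illustrative sketch of the permutation-sum definition of the determinant.
# The matrix is a list of rows; indices run 0..n-1 rather than 1..n.
from itertools import permutations

def parity(perm):
    """Return +1 for an even permutation, -1 for an odd one,
    counting the pairwise swaps needed to sort it into the identity."""
    perm = list(perm)
    sign = 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]  # one pairwise exchange
            sign = -sign
    return sign

def det_permutation_sum(a):
    """Determinant of an n x n matrix via the sum over all n! permutations."""
    n = len(a)
    total = 0.0
    for p in permutations(range(n)):
        term = parity(p)           # the (-)^P factor
        for i in range(n):
            term *= a[i][p[i]]     # a_{1,P1} a_{2,P2} ... a_{n,Pn}
        total += term
    return total
```

For a <IMG WIDTH="40" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img35.png" ALT="$3\times 3$"> matrix this reproduces the six-term rule above, one signed product per permutation; the same loop handles any size.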
<P> Expressing the determinant in terms of permutations allows us to generalize to any size matrix: <BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} \det|A| = \sum_{P} (-)^P a_{1,P1} a_{2,P2} a_{3,P3} \ldots{} a_{n,Pn} \end{displaymath} --> <IMG WIDTH="293" HEIGHT="45" BORDER="0" SRC="img48.png" ALT="\begin{displaymath} \det\vert A\vert = \sum_{P} (-)^P a_{1,P1} a_{2,P2} a_{3,P3} \ldots{} a_{n,Pn} \end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> The number of terms in the sum is <IMG WIDTH="18" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img49.png" ALT="$n!$">. <P> There are other ways you may have learned to calculate the determinant of a matrix. We won't go into details here. One way is to use the ``cofactor'' method: Pick a row of the matrix. Let's do it with the first row <IMG WIDTH="31" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img50.png" ALT="$a_{1,k}$">. Work your way across the row, visiting each column once. At each step, construct an <!-- MATH $(n-1) \times (n-1)$ --> <IMG WIDTH="122" HEIGHT="32" ALIGN="MIDDLE" BORDER="0" SRC="img51.png" ALT="$(n-1) \times (n-1)$"> matrix by eliminating the row you are working with and the column that you are visiting at that step. Calculate the determinant of the smaller matrix. This is called the cofactor. Call it <IMG WIDTH="34" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img52.png" ALT="$A_{1,k}$">. Proceed to the end of the row. At that point you have a cofactor for each of the elements in the row. Then the determinant is given by the rule <BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} \det|A| = \sum_k (-)^{k-1} a_{1,k} A_{1,k} \end{displaymath} --> <IMG WIDTH="194" HEIGHT="45" BORDER="0" SRC="img53.png" ALT="\begin{displaymath} \det\vert A\vert = \sum_k (-)^{k-1} a_{1,k} A_{1,k} \end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> <P> You can use any row you like. 
If you use row <IMG WIDTH="12" HEIGHT="29" ALIGN="MIDDLE" BORDER="0" SRC="img54.png" ALT="$j$">, then you get <BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} \det|A| = \sum_k (-)^{k+j} a_{j,k} A_{j,k} \end{displaymath} --> <IMG WIDTH="190" HEIGHT="45" BORDER="0" SRC="img55.png" ALT="\begin{displaymath} \det\vert A\vert = \sum_k (-)^{k+j} a_{j,k} A_{j,k} \end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> You can also do it by columns. <P> The cofactor method is, in fact, algebraically the same as the method of summing over permutations. It is just another way of rearranging the terms in the sum. <P> We list some important properties of determinants without proof: <UL> <LI>The determinant of a product of matrices is the product of the determinants. <BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} \det(AB) = \det(A)\det(B) \end{displaymath} --> <IMG WIDTH="176" HEIGHT="28" BORDER="0" SRC="img56.png" ALT="\begin{displaymath} \det(AB) = \det(A)\det(B) \end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> </LI> <LI>If the inverse of a matrix exists, its determinant is the inverse of the determinant of the original matrix. <BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} \det(A^{-1}) = 1/\det(A) \end{displaymath} --> <IMG WIDTH="148" HEIGHT="28" BORDER="0" SRC="img57.png" ALT="\begin{displaymath} \det(A^{-1}) = 1/\det(A) \end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> </LI> <LI>Taking the transpose does not change the determinant. <BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} \det(\tilde A) = \det(A) \end{displaymath} --> <IMG WIDTH="112" HEIGHT="28" BORDER="0" SRC="img58.png" ALT="\begin{displaymath} \det(\tilde A) = \det(A) \end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> </LI> <LI>The determinant of a triangular matrix is the product of its diagonal elements. The rule is the same for upper and lower triangular matrices. 
<BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} \det(U) = u_{11} u_{22} \ldots{} u_{nn} \end{displaymath} --> <IMG WIDTH="162" HEIGHT="28" BORDER="0" SRC="img59.png" ALT="\begin{displaymath} \det(U) = u_{11} u_{22} \ldots{} u_{nn} \end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> This one is easy to show using the permutation sum. If a permutation puts a zero matrix element in the product of terms, then that permutation doesn't contribute anything to the determinant. The only permutation <!-- MATH $P1, P2, \ldots{}, Pn$ --> <IMG WIDTH="110" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img60.png" ALT="$P1, P2, \ldots{}, Pn$"> that doesn't involve at least one zero matrix element is the identity permutation <IMG WIDTH="54" HEIGHT="13" ALIGN="BOTTOM" BORDER="0" SRC="img61.png" ALT="$12\ldots{} n$">. And that identity permutation gives you the product of the diagonal elements. </LI> </UL> <P> What is an efficient way to calculate a determinant? Efficient means calculating it with the fewest floating point operations, since that is what usually costs computing effort. The sum over permutations method (or the equivalent cofactor method) is terribly inefficient, because it requires computing the product of <IMG WIDTH="14" HEIGHT="13" ALIGN="BOTTOM" BORDER="0" SRC="img62.png" ALT="$n$"> factors in each of <IMG WIDTH="18" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img49.png" ALT="$n!$"> terms. Including the summation, the number of floating point operations is <IMG WIDTH="47" HEIGHT="30" ALIGN="MIDDLE" BORDER="0" SRC="img63.png" ALT="$n\times n!$">, which grows extremely rapidly with increasing <IMG WIDTH="14" HEIGHT="13" ALIGN="BOTTOM" BORDER="0" SRC="img62.png" ALT="$n$">. <P> A more efficient way to calculate the determinant is to factor the matrix into a product of a lower triangular and an upper triangular matrix. 
<BR><P></P> <DIV ALIGN="CENTER"> <!-- MATH \begin{displaymath} \det|A| = \det|L|\det|U| \end{displaymath} --> <IMG WIDTH="158" HEIGHT="28" BORDER="0" SRC="img64.png" ALT="\begin{displaymath} \det\vert A\vert = \det\vert L\vert\det\vert U\vert \end{displaymath}"> </DIV> <BR CLEAR="ALL"> <P></P> which is just the product of the diagonal elements of each factor. The cost of doing this using the Crout reduction method is the same as the cost of doing Gaussian elimination. It grows as <IMG WIDTH="21" HEIGHT="16" ALIGN="BOTTOM" BORDER="0" SRC="img65.png" ALT="$n^3$"> as the matrix size grows. This is still costly, but for large <IMG WIDTH="14" HEIGHT="13" ALIGN="BOTTOM" BORDER="0" SRC="img62.png" ALT="$n$"> it is vastly cheaper than the sum over permutations. <P> <HR> </BODY>