For the definition that follows we assume that we are given a particular field K. The scalars to be used are elements of K.

DEFINITION. A vector space is a set V of elements called vectors satisfying the following axioms.

(A) To every pair x and y of vectors in V corresponds a vector x + y, called the sum of x and y, in such a way that:
(1) addition is commutative, x + y = y + x,

(2) addition is associative, x + (y + z) = (x + y) + z,

(3) there exists in V a unique vector 0 (called the origin) such that x + 0 = x for every vector x, and

(4) to every vector x in V there corresponds a unique vector −x such that x + (−x) = 0.
(B) To every pair α and x, where α is a scalar and x is a vector in V, there corresponds a vector αx in V, called the product of α and x, in such a way that:

(1) multiplication by scalars is associative, α(βx) = (αβ)x, and

(2) 1x = x for every vector x.

(C) (1) multiplication by scalars is distributive with respect to vector addition, α(x + y) = αx + αy, and

(2) multiplication by vectors is distributive with respect to scalar addition, (α + β)x = αx + βx.
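For instance, R^3 over R, with componentwise addition and scalar multiplication, satisfies all of these axioms. As a minimal illustrative sketch (assuming Python with numpy, and bearing in mind that a finite numerical check is no substitute for a proof), one can spot-check each axiom on random vectors and scalars:

```python
# Spot-checking the vector space axioms for R^3 over R on random data.
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))   # three random vectors in R^3
a, b = rng.standard_normal(2)           # two random scalars

assert np.allclose(x + y, y + x)                  # (A1) commutativity
assert np.allclose(x + (y + z), (x + y) + z)      # (A2) associativity
assert np.allclose(x + np.zeros(3), x)            # (A3) origin
assert np.allclose(x + (-x), np.zeros(3))         # (A4) additive inverse
assert np.allclose(a * (b * x), (a * b) * x)      # (B1) scalar associativity
assert np.allclose(1 * x, x)                      # (B2) unit scalar
assert np.allclose(a * (x + y), a * x + a * y)    # (C1) distributivity
assert np.allclose((a + b) * x, a * x + b * x)    # (C2) distributivity
```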
<P ><FONT size=3><FONT face="Times New Roman">The relation between a vector space V and the underlying field K is usually described by saying that V is a vector space over K . The associated field of scalars is usually either the real numbers R or the complex numbers C . If V is linear space and M</FONT>真包含于<FONT face="Times New Roman">V , and if </FONT>α<FONT face="Times New Roman"> u -v belong to M for every u and v in M and every </FONT>α∈<FONT face="Times New Roman"> K , then M is linear subspace of V . If U = { u 1,u 2,</FONT>…<FONT face="Times New Roman">} is a collection of points in a linear space V , then the (linear) span of the set U is the set of all points o the form </FONT>∑<FONT face="Times New Roman"> c <SUB>i</SUB> u <SUB>i</SUB> , where c <SUB>i</SUB></FONT>∈<FONT face="Times New Roman"> K ,and all but a finite number of the scalars c<SUB>i</SUB> are 0.The span of U is always a linear subspace of V.</FONT></FONT></P>
<P ><FONT size=3><FONT face="Times New Roman"> A key concept in linear algebra is independence. A finite set { u <SUB>1</SUB>,u <SUB>2</SUB>,</FONT>…<FONT face="Times New Roman">, u <SUB>k </SUB>} is said to be linearly independent in V if the only way to write 0 = </FONT>∑<FONT face="Times New Roman"> c <SUB>i</SUB> u <SUB>i </SUB> is by choosing all the c <SUB>i</SUB> = 0 . An infinite set is linearly independent if every finite set is independent . If a set is not independent, it is linearly dependent, and in this case, some point in the set can be written as a linear combination of other points in the set. A basis for a linear space M is an independent set that spans M . A space M is finite-dimensional if it can be spanned by a finite set; it can then be shown that every spanning set contains a basis, and every basis for M has the same number of points in it. This common number is called the dimension of M .</FONT></FONT></P>
<P ><FONT size=3><FONT face="Times New Roman">Another key concept is that of linear transformation. If V and W are linear spaces with the same scalar field K , a mapping L from V into W is called linear if L (u + v ) = L( u ) + L ( v ) and L ( </FONT>α<FONT face="Times New Roman">u ) = </FONT>α<FONT face="Times New Roman"> L ( u ) for every u and v in V and </FONT>α<FONT face="Times New Roman"> in K . With any I , are associated two special linear spaces:</FONT></FONT></P>
ker(L) = null space of L = L⁻¹(0) = { all x ∈ V such that L(x) = 0 },

Im(L) = image of L = L(V) = { all L(x) for x ∈ V }.
<P ><FONT face="Times New Roman" size=3>Then r = dimension of Im ( L ) is called the rank of L. If W also has dimension n, then the following useful criterion results: L is 1-to-1 if and only if L is onto.In particular, if L is a linear map of V into itself, and the only solution of L( x ) = 0 is 0, then L IS onto and is therefore an isomorphism of V onto V , and has an inverse L <SUP>-1</SUP> . Such a transformation V is also said to be nonsingular.</FONT></P>
<P ><FONT size=3><FONT face="Times New Roman">Suppose now that L is a linear transformation from V into W where dim ( V ) = n and dim ( W ) = m . Choose a basis {</FONT>υ<FONT face="Times New Roman"><SUB>1 </SUB>,</FONT>υ<SUB><FONT face="Times New Roman">2 ,</FONT></SUB>…<FONT face="Times New Roman">,</FONT>υ<FONT face="Times New Roman"><SUB>n</SUB>} for V and a basis {w <SUB>1 </SUB>,w<SUB>2 </SUB>,</FONT>…<FONT face="Times New Roman">,w <SUB>m</SUB>} for W . Then these define isomorphisms of V onto K<SUP>n </SUP>and W onto K<SUP>m</SUP> , respectively, and these in turn induce a linear transformation A between these. Any linear transformation ( such as A ) between K<SUP>n </SUP>and K<SUP>m </SUP> is described by means of a matrix ( a<SUB>ij </SUB>), according to the formula A ( x ) = y , where x = { x<SUB>1</SUB> , x <SUB>2</SUB>,</FONT>…<FONT face="Times New Roman">, x<SUB>n</SUB> } y = { y<SUB>1</SUB> , y <SUB>2</SUB>,</FONT>…<FONT face="Times New Roman">, y <SUB>m</SUB>} and </FONT></FONT></P>
<P ><FONT face="Times New Roman" size=3> </FONT></P>
<P ><FONT size=3><FONT face="Times New Roman"> Y <SUB>j</SUB> =</FONT>Σ<FONT face="Times New Roman"><SUP>n</SUP><SUB>j=i</SUB> a<SUB>ij</SUB> x<SUB>i </SUB> I=1,2,</FONT>…<FONT face="Times New Roman">,m.</FONT></FONT></P>
<P ><FONT face="Times New Roman" size=3>The matrix A is said to represent the transformation L and to be the representation induced by the particular basis chosen for V and W .</FONT></P>
<P ><FONT face="Times New Roman" size=3>If S and T are linear transformations of V into itself, so is the compositic transformation ST . If we choose a basis in V , and use this to obtain matrix representations for these, with A representing S and B representing T , then ST must have a matrix representation C . This is defined to be the product AB of the matrixes A and B , and leads to the standard formula for matrix multiplication.</FONT></P>
<P ><FONT size=3><FONT face="Times New Roman">The least satisfactory aspect of linear algebra is still the theory of determinants even though this is the most ancient portion of the theory, dating back to Leibniz if not to early China. One standard approach to determinants is to regard an n -by- n matrix as an ordered array of vectors( u <SUB>1 </SUB>, u <SUB>2</SUB> ,</FONT>…<FONT face="Times New Roman">, u <SUB>n</SUB> ) and then its determinant det ( A ) as a function F( u <SUB>1 </SUB>, u <SUB>2 </SUB>,</FONT>…<FONT face="Times New Roman">, u <SUB>n</SUB> ) of these n vectors which obeys certain rules.</FONT></FONT></P>
<P ><FONT size=3><FONT face="Times New Roman">The determinant of such an array A turns out to be a convenient criterion for characterizing the nonsingularity of the associated linear transformation, since det ( A ) = F ( u <SUB>1</SUB> , u <SUB>2</SUB> ,</FONT>…<FONT face="Times New Roman">, u <SUB>n</SUB> ) = 0 if and only if the set of vectors u<SUB>i </SUB> are linearly dependent. There are many other useful and elegant properties of determinants, most of which will be found in any classic book on linear algebra. Thus, det ( AB ) = det ( A ) det ( B ), and det ( A ) = det ( A') ,where A' is the transpose of A , obtained by the formula A' =( a <SUB>ji </SUB>), thereby rotating the array about the main diagonal. If a square matrix is triangular, meaning that all its entries above the main diagonal are 0,then det ( A ) turns out to be exactly the product of the diagonal entries.</FONT></FONT></P>
<P ><FONT size=3><FONT face="Times New Roman">Another useful concept is that of eigenvalue. A scalar is said to be an eigenvalue for a transformation T if there is a nonzero vector </FONT>υ<FONT face="Times New Roman"> with T (</FONT>υ<FONT face="Times New Roman">) </FONT>λυ<FONT face="Times New Roman"> . It is then clear that the eigenvalues will be those numbers </FONT>λ∈<FONT face="Times New Roman"> K such that T -</FONT>λ<FONT face="Times New Roman"> I is a singular transformation. Any vector in the null space of T -</FONT>λ<FONT face="Times New Roman"> I is called an eigenvector of T associated with eigenvalue </FONT>λ<FONT face="Times New Roman">, and their span the eigenspace, E </FONT><SUB>λ<FONT face="Times New Roman">.</FONT></SUB><FONT face="Times New Roman"> It is invariant under the action of T , meaning that T carries E</FONT><SUB>λ</SUB><FONT face="Times New Roman"> into itself. The eigenvalues of T are then exactly the set of roots of the polynomial p(</FONT>λ<FONT face="Times New Roman">) =det ( T -</FONT>λ<FONT face="Times New Roman"> I ).If A is a matrix representing T ,then one has p (</FONT>λ<FONT face="Times New Roman">) det ( A -</FONT>λ<FONT face="Times New Roman">I ), which permits one to find the eigenvalues of T easily if the dimension of V is not too large, or if the matrix A is simple enough. The eigenvalues and eigenspaces of T provide a means by which the nature and structure of the linear transformation T can be examined in detail.</FONT></FONT></P> |