Diagonal Matrices and Diagonalization in Linear Algebra


Dive into the world of diagonal matrices - learn what they are, how they affect vectors, and explore diagonalization using eigenvalues and eigenvectors. Discover the significance of diagonalizable matrices and their properties.

  • Linear Algebra
  • Diagonal Matrices
  • Diagonalization
  • Eigenvalues
  • Eigenvectors




Presentation Transcript


  1. Mathematical Fundamentals: Linear Algebra Jothi Ramalingam jothiram@nitk.edu.in

  2. Introduction: Diagonal Matrices Before beginning this topic, we must first clarify the definition of a Diagonal Matrix. A Diagonal Matrix is an n by n Matrix whose non-diagonal entries are all zero.

  3. Introduction: Diagonal Matrices In this presentation, all Diagonal Matrices will be denoted as diag(d11, d22, ..., dnn), where dnn is the entry at the n-th row and the n-th column of the Diagonal Matrix.

  4. Introduction: Diagonal Matrices For example, the previously given Matrix, with diagonal entries 5, 4, 1 and 9 and zeros elsewhere, can be written in the form diag(5, 4, 1, 9).

  5. Introduction: Diagonal Matrices The Effects of a Diagonal Matrix The Identity Matrix is an example of a Diagonal Matrix which has the effect of maintaining the properties of a Vector within a given System: Iv = v for any Vector v.

  6. Introduction: Diagonal Matrices The Effects of a Diagonal Matrix However, any other Diagonal Matrix will have the effect of scaling a Vector along the given axes. For example, the Diagonal Matrix diag(2, -1, 3) has the effect of stretching a Vector by a Scale Factor of 2 in the x-Axis, 3 in the z-Axis, and reflecting the Vector in the y-Axis.
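A minimal NumPy sketch of this effect, assuming the Diagonal Matrix diag(2, -1, 3) implied by the description above (the slide's own Matrix is not reproduced in this transcript):

```python
import numpy as np

# Diagonal matrix implied by the text: scale x by 2, reflect y, scale z by 3.
D = np.diag([2, -1, 3])

v = np.array([1.0, 1.0, 1.0])   # an arbitrary test vector

print(D @ v)          # [ 2. -1.  3.] -> each axis is scaled independently
print(np.eye(3) @ v)  # [ 1.  1.  1.] -> the Identity Matrix leaves v unchanged
```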

  7. The Goal By the end of this PowerPoint, we should be able to understand and apply the idea of Diagonalisation, using Eigenvalues and Eigenvectors.

  8. The Goal The Matrix Point of View By the end, we should be able to understand how, given an n by n Matrix, A, we can say that A is Diagonalisable if and only if there is an Invertible Matrix, P, such that the Matrix P⁻¹AP is Diagonal, and why this knowledge is significant.

  9. The Points of View The Square Matrix, A, may be seen as a Linear Operator, F, defined by F(X) = AX, where X is a Column Vector. Linear means that A(x + y) = Ax + Ay and A(cx) = cAx for any Scalar c.
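A quick numerical check of this linearity property, using an arbitrary Matrix and Vectors chosen purely for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # any square matrix will do here
x = np.array([1.0, -2.0])
y = np.array([4.0, 0.5])

# F(X) = AX is linear: A(x + y) == Ax + Ay, and A(c*x) == c*(A x).
print(np.allclose(A @ (x + y), A @ x + A @ y))   # True
print(np.allclose(A @ (3 * x), 3 * (A @ x)))     # True
```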

  10. The Points of View Furthermore, P⁻¹AP represents the Linear Operator, F, relative to the Basis, or Coordinate System, S, whose Elements are the Columns of P.

  11. The Effects of a Coordinate System If we are given A, an n by n Matrix of any kind, then it is possible to interpret it as a Linear Transformation in a given Coordinate System of n Dimensions. For example, the Matrix [[cos 45°, −sin 45°], [sin 45°, cos 45°]] has the effect of a 45-degree Anticlockwise Rotation, in this case applied to the standard basis given by the Columns of the Identity Matrix.
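As a sketch, the standard 45-degree anticlockwise rotation Matrix described above can be built and applied in NumPy:

```python
import numpy as np

theta = np.pi / 4                                # 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # anticlockwise rotation matrix

e1 = np.array([1.0, 0.0])                        # standard basis vector along the x-axis
print(R @ e1)                                    # [0.7071 0.7071] -> rotated 45 degrees anticlockwise
```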

  12. The Effects of a Coordinate System However, for many Matrices it is possible to represent the same Linear Transformation as a Diagonal Matrix within another, different Coordinate System. We define the effect upon a given Vector in this new Coordinate System as follows: there is a scalar multiplication of the Vector relative to all the axes by an unknown Scale Factor, without affecting its direction or other properties.

  13. The Effects of a Coordinate System This process can be summarised by the following definition: Av = λv, where the left-hand side is expressed in the current Coordinate System and the right-hand side in the new Coordinate System. Here: A is the Transformation Matrix; v is a non-Zero Vector to be Transformed; λ is a Scalar in this new Coordinate System that has the same effect on v as A.

  14. The Effects of a Coordinate System In the definition Av = λv: Av is the Linear Matrix Transformation upon the Vector v (current Coordinate System); λ is the Scalar which results in the same Transformation on v as A, giving λv (new Coordinate System).

  15. The Effects of a Coordinate System This can be applied in the following example, using a specific Matrix A and a chosen Vector v:

  16. The Effects of a Coordinate System Continuing the example: when Av = 2v, A is equivalent to the Diagonal Matrix 2I, which, as discussed previously, has the effect of enlarging the Vector by a Scale Factor of 2 in each Dimension.

  17. Definitions Thus, if Av = λv is true, we call v an Eigenvector of A, and λ the Eigenvalue of A corresponding to v.

  18. Exceptions and Additions We do not count v = 0 as an Eigenvector, as A0 = λ0 for all values of λ. However, λ = 0 is allowed as an accepted Eigenvalue. If v is a known Eigenvector of a Matrix, then so is cv for all non-Zero values of the Scalar c. If two Vectors are both Eigenvectors of a given Matrix, and both have the same resultant Eigenvalue, then their sum will also be an Eigenvector with that Eigenvalue (provided the sum is non-Zero).
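These closure properties are easy to verify numerically. The sketch below uses a hypothetical Matrix (not the slide's example) with a repeated eigenvalue, so that two distinct Eigenvectors share it:

```python
import numpy as np

# Hypothetical matrix with eigenvalue 2 of multiplicity two, plus eigenvalue 5.
A = np.diag([2.0, 2.0, 5.0])

u = np.array([1.0, 0.0, 0.0])   # an eigenvector for eigenvalue 2
v = np.array([0.0, 1.0, 0.0])   # another eigenvector for the same eigenvalue

# A non-zero scalar multiple of an eigenvector is still an eigenvector.
print(np.allclose(A @ (7 * u), 2 * (7 * u)))   # True

# The sum of two eigenvectors sharing an eigenvalue is also an eigenvector.
print(np.allclose(A @ (u + v), 2 * (u + v)))   # True
```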

  19. Characteristic Polynomials Establishing the Essentials Suppose λ is an Eigenvalue for the Matrix A, relative to the Eigenvector v, and I is the Identity Matrix of the same Dimensions as A. Thus: Av = λv, so Av − λIv = 0, which gives (A − λI)v = 0.

  20. Characteristic Polynomials Application of the Knowledge What this essentially leads to is the finding of all Eigenvalues and Eigenvectors of a specific Matrix. This is done by considering the Matrix, A, in addition to the Identity Matrix, I. We then multiply the Identity Matrix by the unknown quantity, λ.

  21. Characteristic Polynomials Application of the Knowledge Following this, we subtract λ lots of the Identity Matrix from the Matrix A. We then take the Determinant of the result, which ends up as a Polynomial equation in λ, in order to find the possible values of λ, the Eigenvalues. This can be exemplified by the following example.
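A short sketch of this procedure in NumPy, using a small hypothetical Matrix (the slide's own Matrix A is not shown in this transcript):

```python
import numpy as np

# Hypothetical 2x2 matrix, used only to illustrate the procedure.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam = 2.0                               # a trial value of lambda
M = A - lam * np.eye(2)                 # form A - lambda*I
print(np.linalg.det(M))                 # ~0, so lambda = 2 is an eigenvalue of this A

# Coefficients of the characteristic polynomial det(lambda*I - A), highest power first:
print(np.poly(A))                       # [ 1. -7. 10.] -> lambda^2 - 7*lambda + 10
```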

  22. Characteristic Polynomials Calculating Eigenvalues from a Matrix To find the Eigenvalues of A, we must consider the Determinant of A − λI:

  23. Characteristic Polynomials Calculating Eigenvalues from a Matrix Then det(A − λI) = 0 gives a Polynomial which factorises to (4 − λ)(2 − λ)(6 − λ) = 0. Therefore, the Eigenvalues of the Matrix are λ = 4, 2 and 6.
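The original 3-by-3 Matrix is not reproduced in this transcript, so the sketch below uses a hypothetical upper-triangular stand-in whose Eigenvalues are also 4, 2 and 6, just to show how they can be recovered numerically:

```python
import numpy as np

# Hypothetical stand-in with eigenvalues 4, 2 and 6 on its diagonal.
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 6.0]])

print(np.linalg.eigvals(A))   # [4. 2. 6.] (order may vary)
print(np.poly(A))             # [  1. -12.  44. -48.] -> (lambda - 4)(lambda - 2)(lambda - 6)
```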

  24. Characteristic Polynomials Calculating Eigenvectors from the Values With the Eigenvalues known, we need to solve (A − λI)v = 0 for all given values of λ. This is done by solving a Homogeneous System of Linear Equations. In other words, we must turn A − λI into Echelon Form and solve for the components of v, the unknowns of the System.

  25. Characteristic Polynomials Calculating Eigenvectors from the Values For example, we will take the Eigenvalue λ = 4 from the previous example:

  26. Characteristic Polynomials

  27. Characteristic Polynomials Calculating Eigenvectors from the Values Continuing with λ = 4 from the previous example, reducing A − 4I to Echelon Form gives the result that:

  28. Characteristic Polynomials Calculating Eigenvectors from the Values From the previous example, the resulting family of Vectors is the set of general Eigenvectors for the Eigenvalue of 4.
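Since the slide's Matrix is not shown here, the sketch below reuses the hypothetical stand-in from the previous sketch and extracts an Eigenvector for the Eigenvalue 4 by solving the Homogeneous System (A − 4I)v = 0:

```python
import numpy as np

# Hypothetical stand-in matrix from the earlier sketch (eigenvalues 4, 2, 6).
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 6.0]])

lam = 4.0
M = A - lam * np.eye(3)             # the homogeneous system (A - 4I) v = 0

# The null space of M is spanned by the right-singular vectors whose
# singular values are (numerically) zero.
_, s, vt = np.linalg.svd(M)
v = vt[np.isclose(s, 0.0)].T        # columns spanning the null space

print(v.ravel())                    # an eigenvector for eigenvalue 4, up to scaling
print(np.allclose(A @ v, lam * v))  # True: A v = 4 v
```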

  29. Diagonalisation Mentioned earlier was the ultimate goal of Diagonalisation; that is to say, finding a Matrix, P, such that P⁻¹AP can be formed for a given Matrix, A, where the result is a Diagonal Matrix.

  30. Diagonalisation There are a few rules that can be derived from this: Firstly, P must be an Invertible Matrix, as the Inverse P⁻¹ is necessary to the calculation. Secondly, the Eigenvectors of A must necessarily be Linearly Independent for this to work. Linear Independence will be covered later.

  31. Diagonalisation Eigenvectors, Eigenvalues & Diagonalisation It turns out that the Columns of the Matrix P are the Eigenvectors of the Matrix A. This is why they must be Linearly Independent, as the Matrix P must be Invertible. Furthermore, the Diagonal Entries of the resultant Matrix P⁻¹AP are the Eigenvalues associated with the corresponding Columns of Eigenvectors.

  32. Diagonalisation Eigenvectors, Eigenvalues & Diagonalisation For example, in the previous example, we can create a Matrix P from the Eigenvectors corresponding to the Eigenvalues 4, 2 and 6, respectively. It is as follows:

  33. Diagonalisation Eigenvectors, Eigenvalues & Diagonalisation Furthermore, we can calculate its Inverse, P⁻¹:

  34. Diagonalisation Eigenvectors, Eigenvalues & Diagonalisation Thus, the Diagonalisation of A can be created by computing P⁻¹AP. Solving this gives the Diagonal Matrix diag(4, 2, 6): the Eigenvalues, in the Order given!

  35. Diagonalisation Eigenvectors, Eigenvalues & Diagonalisation Thus, the Diagonalisation of A can be written as P⁻¹AP = diag(4, 2, 6).
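A sketch of the whole procedure in NumPy, once again applied to the hypothetical stand-in Matrix rather than the slide's own A:

```python
import numpy as np

# Hypothetical stand-in matrix with eigenvalues 4, 2 and 6.
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 6.0]])

eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors of A

D = np.linalg.inv(P) @ A @ P        # P^{-1} A P
print(np.round(D, 10))              # diagonal matrix of the eigenvalues
print(eigenvalues)                  # the eigenvalues, in the order of the columns of P
```

Because the three Eigenvalues here are distinct, the Eigenvectors are automatically Linearly Independent, so P is Invertible.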

  36. Linear Independence Introduction This will be a brief section on Linear Independence, to reinforce that the Eigenvectors of A must be Linearly Independent for Diagonalisation to be implemented.

  37. Linear Independence Linear Independency in x-Dimensions The Vectors v1, v2, ..., vn are classified as a Linearly Independent set of Vectors if the following rule applies: the only values of the Scalars c1, c2, ..., cn which make the equation c1v1 + c2v2 + ... + cnvn = 0 true are ci = 0 for all instances of i.

  38. Linear Independence Linear Independency in x-Dimensions If there are any non-zero values of ci at any instance of i within the equation, then this set of Vectors is considered Linearly Dependent. It is to be noted that only one instance of a non-zero ci is needed to make the set Dependent.

  39. Linear Independence Linear Independency in x-Dimensions Therefore, if, say, the value of ck were non-zero for some k, then the Vector set is Linearly Dependent. But if vk were to be omitted from the set, given that all other instances of ci were zero, then the set would, therefore, become Linearly Independent.

  40. Linear Independence Implications of Linear Independence If the set of Vectors is Linearly Independent, then it is not possible to write any of the Vectors in the set in terms of the other Vectors within the same set. Conversely, if a set of Vectors is Linearly Dependent, then it is possible to write at least one Vector in terms of at least one other Vector.

  41. Linear Independence Implications of Linear Independence For example, the following Vector set is Linearly Dependent, as one of its Vectors can be written in terms of the others:

  42. Linear Independence Implications of Linear Independence Returning to the same example Vector set: we can say, however, that this Vector set may be considered Linearly Independent if the dependent Vector were omitted from the set.

  43. Linear Independence Finding Linear Independency The previous equation can be more usefully written in Matrix form. More significant, additionally, is the idea that this can be translated into a Homogeneous System of x Linear Equations, where x is the number of Dimensions of the System.

  44. Linear Independence Finding Linear Independency Therefore, the Matrix of Coefficients is an x by n Matrix, where n is the number of Vectors in the System and x is the number of Dimensions of the System. The Columns of this Matrix are equivalent to the Vectors of the System, as the sketch below shows.
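A small sketch of building this coefficient Matrix in NumPy; the Vectors here are placeholders, since the slide's own set is not reproduced:

```python
import numpy as np

# Placeholder vectors in x = 3 dimensions; there are n = 3 of them.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([0.0, 0.0, 1.0])

# One column per vector, so the matrix has x rows and n columns.
M = np.column_stack([v1, v2, v3])
print(M.shape)   # (3, 3)
```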

  45. Linear Independence Finding Linear Independency To observe whether the set is Linearly Independent or not, we need to put the Matrix into Echelon Form. If, when in Echelon Form, we can observe that each Column of Unknowns has a Leading Entry, then the set of Vectors is Linearly Independent.

  46. Linear Independence Finding Linear Independency If not, then the set of Vectors is Linearly Dependent. To find the Coefficients, we can put the Matrix into Reduced Echelon Form and consider the general solutions.

  47. Linear Independence Finding Linear Independency: Example Let us consider whether the following set of Vectors is Linearly Independent:

  48. Linear Independence Finding Linear Independency: Example These Vectors can be written in the following form:

  49. Linear Independence Finding Linear Independency: Example The following Elementary Row Operations (EROs) put this Matrix into Echelon Form: As the resulting Matrix has a Leading Entry for every Column, we can conclude that the set of Vectors is Linearly Independent.
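NumPy has no built-in Reduced Echelon Form routine, but the "Leading Entry in every Column" test is equivalent to the Matrix having full column rank, which can be checked directly. The sketch below uses the placeholder Vectors from the earlier sketch (not the slide's set), plus a dependent set for contrast:

```python
import numpy as np

# Placeholder vectors from the earlier sketch (not the slide's own set).
M = np.column_stack([[1.0, 0.0, 2.0],
                     [0.0, 1.0, 1.0],
                     [0.0, 0.0, 1.0]])

# A leading entry in every column of the echelon form <=> rank equals the number of columns.
print(np.linalg.matrix_rank(M) == M.shape[1])   # True  -> linearly independent

# A dependent set for contrast: the third vector is the sum of the first two.
N = np.column_stack([[1.0, 0.0, 2.0],
                     [0.0, 1.0, 1.0],
                     [1.0, 1.0, 3.0]])
print(np.linalg.matrix_rank(N) == N.shape[1])   # False -> linearly dependent
```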

  50. Summary Thus, to conclude: Av = λv is the formula for Eigenvectors and Eigenvalues. A is a Matrix whose Eigenvectors and Eigenvalues are to be calculated. v is an Eigenvector of A. λ is an Eigenvalue of A, corresponding to v.
