The eigenvector is one of the most important concepts in matrix theory and has a wide range of applications. Mathematically, an eigenvector of a linear transformation is a nonzero vector whose direction is unchanged by the transformation. The factor by which the transformation scales this vector is called its eigenvalue.
A linear transformation can usually be completely described by its eigenvalues and eigenvectors. An eigenspace is the set of eigenvectors sharing the same eigenvalue. The term "eigen" comes from German. Hilbert first used the word in this sense in 1904, and Helmholtz had used it in a related sense even earlier. The word eigen can be translated as "own", "specific to", "characteristic", or "individual", which indicates how important eigenvalues are for characterizing a particular linear transformation.
An eigenspace is the space formed by all eigenvectors with the same eigenvalue, together with the zero vector; note, however, that the zero vector itself is not an eigenvector[1].
The principal eigenvector of a linear transformation is the eigenvector corresponding to the largest eigenvalue.
The geometric multiplicity of an eigenvalue is the dimension of the corresponding eigenspace.
The spectrum of a linear transformation on a finite-dimensional vector space is the set of all its eigenvalues.
For example, an eigenvector of a rotation in three-dimensional space is a vector lying along the rotation axis; the corresponding eigenvalue is 1, and the corresponding eigenspace contains all vectors parallel to the axis. This eigenspace is one-dimensional, so the geometric multiplicity of the eigenvalue 1 is 1. The eigenvalue 1 is the only real eigenvalue in the spectrum of a (nontrivial) rotation.
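As a quick numerical check of this example (a minimal sketch, using an assumed rotation of 45 degrees about the z-axis):

```python
import numpy as np

# Hypothetical example: rotation by 45 degrees about the z-axis.
theta = np.pi / 4
R = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

eigvals, eigvecs = np.linalg.eig(R)

# The only real eigenvalue is 1; its eigenvector lies along the rotation axis.
idx = int(np.argmin(np.abs(eigvals - 1.0)))
axis = eigvecs[:, idx].real
axis = axis / np.linalg.norm(axis)
```

The other two eigenvalues form the complex pair e^(±iθ), so 1 is indeed the only real eigenvalue in the spectrum.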
Examples
[Figure: spatial distribution of the eigenvectors of maximum and minimum summer temperatures]
As the earth rotates, every arrow pointing outward from the geocenter also rotates, except for those arrows lying on the axis. Consider the transformation given by one hour of the earth's rotation: the arrow from the geocenter to the geographic south pole is an eigenvector of this transformation, but an arrow from the geocenter to any point on the equator is not. Because the arrow pointing to the pole is not stretched by the rotation, its eigenvalue is 1.
Another example: a thin metal sheet expands uniformly about a fixed point, so that the distance from every point on the sheet to that fixed point doubles. This stretch is a transformation with eigenvalue 2. Every vector from the fixed point to a point on the sheet is an eigenvector, and the corresponding eigenspace is the set of all such vectors.
Three-dimensional geometry is not the only vector space, however. For example, consider a taut rope fixed at both ends, like the vibrating string of a stringed instrument. The signed distances of the atoms of the vibrating string from their rest positions can be regarded as the components of a vector in a space whose dimension equals the number of atoms on the string.
If we consider the transformation of the string over time, its eigenvectors, or eigenfunctions (if we model the string as a continuous medium), are its standing waves: the vibrations that, transmitted through the air, make up the sound of a plucked guitar or a bowed string. A standing wave corresponds to a particular oscillation of the string in which the string's shape is scaled by a factor (the eigenvalue) as time passes. Each component of the vector associated with the string is multiplied by a time-dependent factor. If damping is taken into account, the amplitude of the standing wave decays over time. One can then associate a lifetime with each eigenvector and relate the concept of an eigenvector to the concept of resonance[1].
Equation
Mathematically, if a vector v and a transformation A satisfy Av = λv, then v is an eigenvector of A and λ is the corresponding eigenvalue. This equation is called the "eigenvalue equation".
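A minimal sketch of the eigenvalue equation, using an assumed diagonal matrix purely for illustration:

```python
import numpy as np

# Assumed 2x2 example matrix; (1, 0) is an eigenvector with eigenvalue 2.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])
lam = 2.0

lhs = A @ v        # A v
rhs = lam * v      # lambda v  -- the two sides agree
```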
Suppose T is a linear transformation on an n-dimensional vector space; then v can be expressed in a basis e₁, …, eₙ of that space as
v = v₁e₁ + v₂e₂ + … + vₙeₙ,
where the vᵢ are the projections (that is, the coordinates) of v onto the basis vectors. Since the space is n-dimensional, v can be represented directly by its coordinate vector. Using the basis, the linear transformation can also be represented by simple matrix multiplication, and the eigenvalue equation above becomes Av = λv, with A now an n × n matrix acting on the coordinate vector of v.
Sometimes, however, writing the eigenvalue equation in matrix form is unnatural or even impossible; for example, when the vector space is infinite-dimensional, as in the string example above. Depending on the nature of the transformation and the space it acts on, it may be better to express the eigenvalue equation as a set of differential equations. If A is a differential operator, its eigenvectors are usually called eigenfunctions of that operator. For example, differentiation itself is a linear transformation, because (for differentiable functions M and N and constants a and b)
d/dt (aM + bN) = a dM/dt + b dN/dt.
Consider differentiation with respect to time t. Its eigenfunctions satisfy the eigenvalue equation
dN/dt = λN,
where λ is the eigenvalue associated with the function. Such a function of time stays constant if λ = 0, grows proportionally to itself if λ is positive, and decays proportionally if λ is negative. For example, an idealized rabbit population breeds faster the more rabbits there are, and thus satisfies this equation with a positive λ.
One solution of this eigenvalue equation is N = exp(λt), the exponential function; it is the eigenfunction of the differential operator d/dt with eigenvalue λ. If λ is negative, we call the evolution of N exponential decay; if λ is positive, exponential growth. λ can be any complex number, so the spectrum of d/dt is the entire complex plane. In this example the space on which d/dt acts is the space of differentiable functions of one variable. This space is infinite-dimensional (not every differentiable function can be expressed as a linear combination of finitely many basis functions), but the eigenspace associated with each eigenvalue λ is one-dimensional: it is the set of all functions of the form N = N₀exp(λt), where N₀ is an arbitrary constant, the initial amount at t = 0.
Theorems
In the finite-dimensional case, the spectral theorem sharpens the notion of diagonalization: it states that a matrix can be unitarily diagonalized if and only if it is a normal matrix[3]. Note that this includes self-adjoint (Hermitian) matrices. This is very useful because for a diagonal matrix T the notion of a function f(T) of the matrix (for a Borel function f, say) is unambiguous. The power of the spectral theorem becomes even clearer with more general matrix functions: if f is analytic, its power series, with T substituted for x, converges absolutely in the Banach-space sense. The spectral theorem also makes it easy to define the unique square root of a positive operator.
The spectral theorem can be generalized to bounded normal operators on Hilbert space, and to unbounded self-adjoint operators.
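A sketch of how the spectral theorem yields matrix functions such as the unique positive square root, using an assumed real symmetric (hence normal) positive-definite matrix:

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # assumed Hermitian positive-definite example

# Unitary diagonalization T = U diag(w) U^*; eigh is for Hermitian matrices.
w, U = np.linalg.eigh(T)

# Define f(T) through the spectrum: apply f to the eigenvalues.
sqrtT = U @ np.diag(np.sqrt(w)) @ U.T
```

Multiplying sqrtT by itself recovers T, which is the defining property of the positive square root.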
Overview of eigenvectors
Brief introduction
Computing the eigenvalues and eigenvectors of matrices
Suppose we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. For large matrices, however, this is usually infeasible, and we must resort to numerical methods.
Finding eigenvalues
An important tool for describing the eigenvalues of a square matrix is the characteristic polynomial: λ is an eigenvalue of A if and only if the linear system (A − λI)v = 0 (where I is the identity matrix) has a nonzero solution v (an eigenvector), which in turn is equivalent to the determinant condition |A − λI| = 0[1].
The function p(λ) = det(A − λI) is a polynomial in λ, because the determinant is defined as a sum of products of matrix entries; it is the characteristic polynomial of A. The eigenvalues of a matrix are the zeros of its characteristic polynomial.
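A sketch comparing the roots of the characteristic polynomial with directly computed eigenvalues (the example matrix is an assumption for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly(A) returns the coefficients of the monic characteristic
# polynomial det(lambda I - A), which has the same zeros as det(A - lambda I).
coeffs = np.poly(A)
char_roots = np.roots(coeffs)

eigvals = np.linalg.eigvals(A)
```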
The eigenvalues of a matrix A can thus be obtained by solving the equation p_A(λ) = 0. If A is an n × n matrix, then p_A has degree n, so A has at most n eigenvalues. Conversely, the fundamental theorem of algebra says that this equation has exactly n roots if repeated roots are counted with multiplicity. Every odd-degree polynomial with real coefficients has at least one real root, so for odd n every real matrix has at least one real eigenvalue. For a real matrix, whether n is even or odd, the non-real eigenvalues occur in complex-conjugate pairs.
Finding eigenvectors
Once an eigenvalue λ is found, the corresponding eigenvectors can be obtained by solving the equation (A − λI)v = 0, where v is the eigenvector sought and I is the identity matrix.
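One way to solve (A − λI)v = 0 numerically is to take the null space of A − λI via the SVD; a sketch with an assumed matrix whose eigenvalues are 1 and 3:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0                           # a known eigenvalue of this A

M = A - lam * np.eye(2)

# Right singular vectors belonging to (near-)zero singular values
# span the null space of M, i.e. the eigenspace for lam.
_, s, Vt = np.linalg.svd(M)
v = Vt[-1]                          # s[-1] is the smallest singular value
```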
An example of a matrix with no real eigenvalues is a rotation by 90 degrees clockwise.
Numerical computation
In practice, the eigenvalues of large matrices are not computed from the characteristic polynomial; doing so is expensive, and the exact roots of a "symbolic" high-degree polynomial are hard to compute and express: the Abel–Ruffini theorem shows that the roots of polynomials of degree 5 or higher cannot in general be expressed in terms of nth roots. There are effective algorithms for estimating the roots of polynomials, but small errors in the eigenvalues can lead to large errors in the eigenvectors. General algorithms for finding the zeros of the characteristic polynomial, i.e., the eigenvalues, are therefore iterative. The simplest is the power method: take a random vector v and compute the sequence of unit vectors
v, Av/‖Av‖, A²v/‖A²v‖, A³v/‖A³v‖, …
This sequence almost always converges to an eigenvector corresponding to the eigenvalue of largest absolute value. The algorithm is simple but of limited use by itself; however, more powerful algorithms, such as the QR algorithm, build on it.
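A minimal sketch of the power method described above (the matrix and iteration count are assumptions for illustration):

```python
import numpy as np

def power_method(A, iters=500, seed=0):
    """Power iteration: approximate the dominant eigenpair of A."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v = v / np.linalg.norm(v)   # keep the iterate a unit vector
    lam = v @ A @ v                 # Rayleigh quotient estimate
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 1 and 3
lam, v = power_method(A)
```

The sequence of unit vectors converges to an eigenvector for the eigenvalue 3, the eigenvalue of largest absolute value.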
Secondary properties
Algebraic multiplicity
The algebraic multiplicity of an eigenvalue λ of A is its multiplicity as a zero of the characteristic polynomial of A; in other words, it is the number of times the factor (t − λ) appears in the factorization of the characteristic polynomial. An n × n matrix has n eigenvalues when they are counted with algebraic multiplicity, because its characteristic polynomial has degree n.
An eigenvalue of algebraic multiplicity 1 is called a simple eigenvalue.
In articles on matrix theory one may encounter propositions such as
"the eigenvalues of the matrix A are 4, 4, 3, 3, 3, 2, 2, 1",
meaning that the algebraic multiplicity of 4 is two, of 3 is three, of 2 is two, and of 1 is one. This style is used because algebraic multiplicity is important and widely used in many proofs in matrix theory.
Recall that the geometric multiplicity of an eigenvalue was defined above as the dimension of the associated eigenspace, i.e., the null space of λI − A. Algebraic multiplicity can also be viewed as a dimension: it is the dimension of the associated generalized eigenspace (first sense), i.e., the null space of (λI − A)^k for any sufficiently large k. In other words, it is the dimension of the space of generalized eigenvectors (first sense), where a generalized eigenvector is any vector that eventually becomes 0 when λI − A is applied to it enough times. Every eigenvector is a generalized eigenvector, so every eigenspace is contained in the corresponding generalized eigenspace. This gives a simple proof that the geometric multiplicity never exceeds the algebraic multiplicity. This "first sense" should not be confused with the generalized eigenvalue problem discussed below.
For example, the 2 × 2 matrix with rows (1, 1) and (0, 1) has only one eigenvalue, namely λ = 1. Its characteristic polynomial is (λ − 1)², so the algebraic multiplicity of this eigenvalue is 2. However, the associated eigenspace is the axis usually called the x-axis, spanned by the vector (1, 0), so the geometric multiplicity is only 1.
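The multiplicities in this example can be checked numerically; a minimal sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0

# Geometric multiplicity: dimension of the null space of A - lam*I.
geo_mult = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))

# Algebraic multiplicity: how often lam occurs among the eigenvalues.
alg_mult = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))
```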
Generalized eigenvectors can be used to compute the Jordan normal form of a matrix. The fact that the non-diagonal part of a Jordan block is nilpotent is directly related to the distinction between eigenvectors and generalized eigenvectors.
Decomposition theorems
As noted above, the spectral theorem shows that a square matrix can be unitarily diagonalized if and only if it is normal. For more general matrices, which need not be normal, there are analogous results; naturally, some requirements must be relaxed, such as unitary equivalence or the final matrix being diagonal. All these results exploit eigenvalues and eigenvectors to some extent. Some of them are listed below.
The Schur triangular form states that every matrix is unitarily equivalent to an upper triangular matrix.
The singular value decomposition theorem: A = UΣV*, where Σ is a diagonal matrix and U and V are unitary matrices. The entries on the diagonal of Σ are non-negative; the positive ones are called the singular values of A. This holds for non-square matrices as well[2].
The Jordan normal form: A = UΛU⁻¹, where Λ is not diagonal but block diagonal, and U is an invertible matrix. The sizes and number of the Jordan blocks are determined by the geometric and algebraic multiplicities of the eigenvalues. The Jordan decomposition is a fundamental result: from it one sees immediately that a square matrix is described completely, up to similarity, by its eigenvalues together with their multiplicities. This means that, mathematically, eigenvalues play an extremely important role in the study of matrices.
As a direct consequence of the Jordan decomposition, every matrix A can be written uniquely as A = S + N, where S is diagonalizable, N is nilpotent (i.e., N^q = 0 for some q), and S and N commute (SN = NS).
Every invertible matrix A can be written uniquely as A = SJ, where S is diagonalizable and J is unipotent (i.e., its characteristic polynomial is a power of (λ − 1)), and S and J commute.
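Of the decompositions listed above, the singular value decomposition is easy to verify numerically; a sketch with an assumed non-square matrix:

```python
import numpy as np

# A 2x3 example, illustrating that the SVD holds for non-square matrices.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values are non-negative and A = U diag(s) V^*.
reconstructed = U @ np.diag(s) @ Vt
```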
Other properties
The spectrum is invariant under similarity transformations: the matrices A and P⁻¹AP have the same eigenvalues, for any square matrix A and any invertible matrix P. The spectrum is also invariant under transposition: A and Aᵀ have the same eigenvalues[2].
Some further consequences of the Jordan decomposition are as follows:
A matrix A is similar to a diagonal matrix if and only if the algebraic multiplicity of each of its eigenvalues equals its geometric multiplicity. In particular, an n × n matrix with n distinct eigenvalues is always diagonalizable.
The vector space on which the matrix acts can be viewed as the direct sum of its invariant subspaces of generalized eigenvectors. Each block on the diagonal of the Jordan form corresponds to one subspace in this direct sum. If a block is diagonal, its invariant subspace is an eigenspace; otherwise it is a generalized eigenspace, as defined above.
Because the trace, i.e., the sum of the entries on the main diagonal of a matrix, is invariant under similarity, the Jordan normal form shows that it equals the sum of all the eigenvalues.
Similarly, because the eigenvalues of a triangular matrix are its main-diagonal entries, the determinant equals the product of the eigenvalues (counted according to algebraic multiplicity).
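A quick numerical check of the trace and determinant statements (the example matrix is an assumption):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2
w = np.linalg.eigvals(A)

trace_sum = np.trace(A)             # should equal the sum of the eigenvalues
det_prod = np.linalg.det(A)         # should equal their product
```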
The location of the spectrum for some subclasses of normal matrices:
All eigenvalues of a Hermitian matrix (A = A*) are real. Moreover, all eigenvalues of a positive-definite matrix (v*Av > 0 for all nonzero vectors v) are positive.
All eigenvalues of a unitary matrix (A⁻¹ = A*) have absolute value 1.
Suppose A is an m × n matrix with m ≤ n, and B is an n × m matrix. Then BA has the same eigenvalues as AB, plus n − m eigenvalues equal to 0.
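This AB versus BA relation can be checked on random matrices; a sketch with assumed sizes m = 2, n = 3:

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 2, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

w_ab = np.linalg.eigvals(A @ B)     # m eigenvalues
w_ba = np.linalg.eigvals(B @ A)     # n eigenvalues: the same, plus n - m zeros
```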
Every matrix can be assigned an operator norm. The spectral radius of a matrix is the supremum of the moduli of its eigenvalues; it is directly related to the power method for computing the eigenvalue of largest modulus. For a normal matrix, the operator norm (with respect to the Euclidean norm on its domain) equals the largest modulus of its eigenvalues, i.e., its spectral radius.
Conjugate eigenvector
A conjugate eigenvector (or coneigenvector) is a vector that, under the transformation, becomes its complex conjugate multiplied by a scalar; the scalar is called the conjugate eigenvalue (or coneigenvalue) of the linear transformation. Conjugate eigenvectors and conjugate eigenvalues represent essentially the same information as ordinary eigenvectors and eigenvalues, but arise when an alternative coordinate system is used.
For example, in the theory of coherent electromagnetic scattering, the linear transformation A represents the action of the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics the coordinate system is defined from the wave's point of view, known as the forward scattering alignment (FSA), which leads to the ordinary eigenvalue equation; in radar the coordinate system is defined from the radar's point of view, known as the backscattering alignment (BSA), which yields the conjugate eigenvalue equation.
Generalized eigenvalue problem
A generalized eigenvalue problem (second sense) has the form
Av = λBv,
where A and B are matrices. The generalized eigenvalues (second sense) λ can be found by solving the equation
det(A − λB) = 0.
The set of matrices of the form A − λB, where λ is a complex number, is called a pencil. If B is invertible, the original problem can be rewritten as the standard eigenvalue problem B⁻¹Av = λv. In many cases, however, performing the inversion is undesirable, and the generalized eigenvalue problem should be solved in its original form.
If A and B are real symmetric matrices, the eigenvalues are all real. This is not obvious from the second, equivalent formulation above, because the matrix B⁻¹A is generally not symmetric.
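A sketch of solving the generalized problem without inverting B, using SciPy's symmetric solver (the matrices are assumed examples, with B positive definite):

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # real symmetric
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])          # real symmetric, positive definite

# eigh(A, B) solves A v = lambda B v directly and returns real eigenvalues.
w, V = eigh(A, B)
```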
Entries from a ring
For a square matrix A with entries in a ring, λ is called a right eigenvalue if there is a nonzero column vector x such that Ax = λx, and a left eigenvalue if there is a nonzero row vector y such that yA = yλ.
If the ring is commutative, the left and right eigenvalues coincide and are simply called eigenvalues. Otherwise, for instance when the ring is the quaternions, they may differ.
If the vector space is infinite-dimensional, the notion of eigenvalue can be generalized to the notion of spectrum. The spectrum is the set of scalars λ for which (T − λ)⁻¹ is not defined, that is, for which T − λ has no bounded inverse.
Clearly, if λ is an eigenvalue of T, then λ lies in the spectrum of T. In general the converse is not true: on a Hilbert space or a Banach space there are operators with no eigenvectors at all. For example, on the Hilbert space of all square-summable scalar sequences, the shift operator has no eigenvectors but does have spectral values.
In infinite-dimensional spaces, the spectrum of a bounded operator is always nonempty; this also holds for unbounded self-adjoint operators. Via its spectral measure, the spectrum of any bounded or unbounded self-adjoint operator can be decomposed into absolutely continuous, point, and singular parts. Exponential growth or decay provides an example of a continuous spectrum, and the standing waves of a vibrating string an example of a discrete spectrum. The hydrogen atom exhibits both kinds: the bound states of the hydrogen atom correspond to the discrete part of the spectrum, while the ionized states are described by the continuous spectrum. Figure 3 illustrates this with the chlorine atom.
Applications
Schrödinger equation
An example of an eigenvalue equation in which the transformation is a differential operator is the time-independent Schrödinger equation of quantum mechanics:
HΨ_E = EΨ_E,
where H, the Hamiltonian, is a second-order differential operator, and Ψ_E, the wave function, is an eigenfunction corresponding to the eigenvalue E, interpreted as the energy.
The wave functions associated with the bound states of an electron in a hydrogen atom can be regarded as eigenvectors of the hydrogen-atom Hamiltonian, and also of the angular momentum operator. They correspond to eigenvalues interpreted as their energies (increasing with n = 1, 2, 3, …) and angular momentum (increasing across s, p, d, …). One often plots the square of the absolute value of the wave function: brighter regions correspond to higher probability density for a position measurement, and the center of each such picture is the atomic nucleus, a proton. If, as is often done in quantum chemistry, we look only for bound-state solutions of the Schrödinger equation, we seek Ψ_E among the square-integrable functions. Since this space is a Hilbert space with a well-defined scalar product, we can introduce a basis set in which Ψ_E and H are represented, respectively, by a one-dimensional array (a vector) and a matrix. This allows the Schrödinger equation to be written in matrix form.
Dirac notation is often used in this context to emphasize the difference between the state vector |Ψ_E⟩ and its representation, the function Ψ_E. In this notation the Schrödinger equation is written
H|Ψ_E⟩ = E|Ψ_E⟩,
and |Ψ_E⟩ is called an eigenstate of H (sometimes written Ĥ in introductory textbooks), with H regarded as a transformation (see observable) rather than as one particular representation of it as a differential operator. In the equation above, H|Ψ_E⟩ is understood as the vector obtained by applying H to |Ψ_E⟩.
Molecular orbitals
In quantum mechanics, and in particular in atomic and molecular physics, within Hartree–Fock theory the atomic and molecular orbitals can be defined as eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted, via Koopmans' theorem, as ionization potentials. In this case the term eigenvector is used in a somewhat broader sense, because the Fock operator depends explicitly on the orbitals and their eigenvalues; to emphasize this, one speaks of an implicit eigenvalue equation. Such equations are usually solved by an iterative procedure, called here the self-consistent field method. In quantum chemistry the Hartree–Fock equations are often expressed in a non-orthogonal basis set; this particular representation is a generalized eigenvalue problem called the Roothaan equations.
Factor analysis
In factor analysis, the eigenvectors of a covariance matrix correspond to the factors and the eigenvalues to the factor loadings. Factor analysis is a statistical technique used in the social sciences, market analysis, product management, operations research, and other applied sciences that handle large amounts of data. Its goal is to explain the variation in a number of observable random variables by a small number of unobservable random variables called factors. The observable random variables are modeled as linear combinations of the factors plus a residual term.
[Figure: eigenfaces are examples of eigenvectors]
In image processing, an image of a face can be treated as a vector whose components are the gray levels of the pixels. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix of a large set of normalized face images are called eigenfaces. They are very useful because any face image can be expressed as a linear combination of them. Eigenfaces provide a means of data compression: in this application one generally keeps only the eigenfaces corresponding to the largest eigenvalues.
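A toy sketch of the eigenface idea: take the eigenvectors of the covariance matrix of some stand-in "images" (random data here, purely an assumption) and keep those with the largest eigenvalues:

```python
import numpy as np

# Stand-in data: 6 hypothetical "images" of 4 pixels each.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
X = X - X.mean(axis=0)              # normalize (center) the images

cov = X.T @ X / (X.shape[0] - 1)    # covariance matrix of the pixels
w, V = np.linalg.eigh(cov)

order = np.argsort(w)[::-1]         # largest eigenvalue first
components = V[:, order]            # the "eigenfaces"

k = 2                               # keep only the top-k eigenfaces
compressed = X @ components[:, :k]  # the data compression step
```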
Inertia tensor
In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The inertia tensor is the key quantity that determines how a rigid body rotates about its center of mass.
Stress tensor
In solid mechanics, the stress tensor is symmetric and can therefore be decomposed into a diagonal tensor, with the eigenvalues on the diagonal and the eigenvectors as the basis. Because it is then diagonal, in this orientation the stress tensor has no shear components; its only components are the principal components.
Eigenvalues of graphs
In spectral graph theory, the eigenvalues of a graph are defined as the eigenvalues of its adjacency matrix A, or (more often) of its Laplacian matrix I − T^(−1/2)AT^(−1/2), where T is the diagonal matrix of vertex degrees and, in T^(−1/2), 0^(−1/2) is taken to be 0. The principal eigenvector of a graph is used to measure the centrality of its vertices; Google's PageRank algorithm is an example. The components of the principal eigenvector of a modified adjacency matrix of the World Wide Web graph give the page ranks.
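A sketch of principal-eigenvector centrality on a small assumed graph, using the power iteration described earlier:

```python
import numpy as np

# Adjacency matrix of a small undirected graph: vertex 0 links to all others.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

# Principal eigenvector by power iteration; its components measure centrality.
v = np.ones(A.shape[0])
for _ in range(200):
    v = A @ v
    v = v / np.linalg.norm(v)

centrality = v / v.sum()            # normalize to sum to 1
```

By the Perron–Frobenius theorem the principal eigenvector of a connected graph is positive, and here the best-connected vertex (vertex 0) receives the highest centrality score.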