Eigenvector

One of the important concepts in matrix theory
The eigenvector of a matrix is one of the important concepts in matrix theory, with a wide range of applications. Mathematically, an eigenvector of a linear transformation is a nonzero vector whose direction is unchanged by the transformation. The factor by which the vector is scaled under the transformation is called its eigenvalue.
A linear transformation can usually be described completely by its eigenvalues and eigenvectors. An eigenspace is the set of all eigenvectors that share the same eigenvalue. The word "eigen" is German; Hilbert first used it in this sense in 1904, and Helmholtz had used it earlier in a related sense. It can be translated as "own", "specific to", "characteristic", or "individual", which underlines how important eigenvalues are for characterizing a particular linear transformation.
Chinese name
特征向量
Foreign name
Eigenvector
Alias
Eigenvector
Related concept
Eigenvalue
Discipline
Linear algebra, matrix theory
Field of application
Mathematical sciences

Properties (1)

The eigenvalue of an eigenvector is the scaling factor by which the vector is multiplied under the transformation.
An eigenspace is the space consisting of all eigenvectors with the same eigenvalue, together with the zero vector; note, however, that the zero vector itself is not an eigenvector [1].
The principal eigenvector of a linear transformation is the eigenvector corresponding to the eigenvalue of largest magnitude.
The geometric multiplicity of an eigenvalue is the dimension of the corresponding eigenspace.
The spectrum of a linear transformation on a finite-dimensional vector space is the set of all its eigenvalues.
For example, an eigenvector of a rotation in three-dimensional space is a vector lying along the axis of rotation; the corresponding eigenvalue is 1, and the corresponding eigenspace contains all vectors parallel to the axis. This eigenspace is one-dimensional, so the geometric multiplicity of the eigenvalue 1 is 1. Moreover, 1 is the only real eigenvalue in the spectrum of the rotation.
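A minimal numerical illustration of this rotation example, assuming numpy is available (the 45-degree angle is an arbitrary illustrative choice):

```python
import numpy as np

# Rotation by 45 degrees about the z-axis.
theta = np.pi / 4
R = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

vals, vecs = np.linalg.eig(R)
print(vals)  # one eigenvalue is 1; the others are the pair exp(+i*theta), exp(-i*theta)

# The eigenvector for eigenvalue 1 is parallel to the rotation axis (0, 0, 1).
i = np.argmin(np.abs(vals - 1.0))
print(vecs[:, i].real)
```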

Examples

[Figure: spatial distribution of the eigenvectors of summer maximum and minimum temperatures]
As the Earth rotates, every arrow pointing outward from the Earth's center rotates with it, except for those arrows that lie on the axis of rotation. Consider the transformation effected by one hour of the Earth's rotation: the arrow pointing from the Earth's center to the geographic South Pole is an eigenvector of this transformation, but an arrow pointing from the center to any point on the equator is not. Since the arrow pointing at the pole is not stretched by the rotation, its eigenvalue is 1.
For another example, suppose a thin metal sheet expands uniformly about a fixed point, so that the distance from every point of the sheet to the fixed point doubles. This stretch is a transformation with eigenvalue 2. Every vector from the fixed point to a point on the sheet is an eigenvector, and the corresponding eigenspace is the set of all such vectors.
Three-dimensional geometric space is not the only vector space, however. Consider, for example, a taut rope fixed at both ends, like the vibrating string of a stringed instrument. The signed distances of the atoms of the vibrating string from their rest positions can be regarded as the components of a vector in a space whose dimension is the number of atoms on the string.
If we consider the transformation of the rope over time, its eigenvectors, or eigenfunctions (if the rope is modeled as a continuous medium), are its standing waves: the vibrations that, transmitted through the air, let us hear the sound of a plucked bowstring or guitar string. A standing wave corresponds to a particular oscillation of the string in which the shape of the string is scaled by a factor (the eigenvalue) as time passes. Each component of the vector associated with the string is multiplied by this time-dependent factor. If damping is taken into account, the amplitude of the standing wave (the eigenvalue) decays with time. One can then associate a lifetime with each eigenvector, which connects the concept of an eigenvector with the concept of resonance [1].

Eigenvalue equation

Mathematically, if a nonzero vector v and a transformation A satisfy Av = λv, then v is an eigenvector of the transformation A and λ is the corresponding eigenvalue. This equation is called the "eigenvalue equation".
Suppose A is a linear transformation. Then v can be represented, in terms of a basis {ξ1, ..., ξn} of its vector space, as

v = v1 ξ1 + v2 ξ2 + ... + vn ξn,

where vi is the projection (i.e., the coordinate) of v on the basis vector ξi; the vector space is assumed to be n-dimensional. Thus v can be represented directly by its coordinate vector, and with respect to the basis the linear transformation can likewise be represented by a simple matrix multiplication. The eigenvalue equation above can then be written as

Av = λv,

where A is now an n × n matrix and v its coordinate vector.
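As a quick sanity check of this matrix form of the eigenvalue equation, here is a small sketch with numpy; the particular 2 × 2 matrix is only an illustrative choice:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)  # columns of vecs are eigenvectors of A
for k in range(len(vals)):
    v = vecs[:, k]
    # A v equals lambda v up to floating-point rounding
    assert np.allclose(A @ v, vals[k] * v)
print(vals)  # the eigenvalues, 3 and 1 (order may vary)
```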
However, it is sometimes unnatural, or even impossible, to write the eigenvalue equation in matrix form. This happens, for example, when the vector space is infinite-dimensional, as in the string example above. Depending on the nature of the transformation and of the space it acts on, it may be better to express the eigenvalue equation as a set of differential equations. If A is a differential operator, its eigenvectors are usually called eigenfunctions of the operator. For example, differentiation itself is a linear transformation, since (if M and N are differentiable functions and a and b are constants)

d(aM + bN)/dt = a dM/dt + b dN/dt.
Consider differentiation with respect to time t. Its eigenfunctions satisfy the eigenvalue equation

dN/dt = λN,

where λ is the eigenvalue associated with the function N. If λ = 0, N remains constant in time; if λ is positive, N grows proportionally; if λ is negative, N decays proportionally. For example, an idealized population of rabbits breeds faster the more rabbits there are, and thus satisfies this equation with a positive λ.
One solution of this eigenvalue equation is N = exp(λt), the exponential function; it is thus an eigenfunction of the differential operator d/dt with eigenvalue λ. If λ is negative, we call the evolution of N exponential decay; if λ is positive, exponential growth. The value of λ can be any complex number, so the spectrum of d/dt is the whole complex plane. In this example, the space on which d/dt acts is the space of differentiable functions of one variable. This space is infinite-dimensional (because not every differentiable function can be expressed as a linear combination of finitely many basis functions). Nevertheless, the eigenspace associated with each eigenvalue λ is one-dimensional: it is the set of all functions of the form N = N0 exp(λt), where N0 is an arbitrary constant, the initial quantity at t = 0.
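The eigenfunction property of the exponential can also be checked numerically; the following sketch (with an arbitrarily chosen λ = -0.5) compares a numerical derivative of N = exp(λt) with λN:

```python
import numpy as np

lam = -0.5                     # an arbitrary eigenvalue; negative means exponential decay
t = np.linspace(0.0, 4.0, 1001)
N = np.exp(lam * t)            # the candidate eigenfunction N(t) = exp(lam * t)

dN = np.gradient(N, t)         # numerical approximation of dN/dt
# dN/dt should equal lam * N everywhere, up to discretization error
print(np.max(np.abs(dN - lam * N)))  # close to 0
```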

Spectral theorem

In the finite-dimensional case, the spectral theorem characterizes diagonalization: it states that a matrix can be unitarily diagonalized [3] if and only if it is a normal matrix. Note that this includes Hermitian (self-adjoint) matrices. This is very useful because for a diagonal matrix T the notion of a function f(T) of the matrix (for example a Borel function f) is clear. The power of the spectral theorem becomes even more apparent with general matrix functions: for example, if f is analytic, with formal power series f(x) = a0 + a1 x + a2 x^2 + ..., then substituting the matrix T for x yields an absolutely convergent series in the Banach space of matrices. The spectral theorem also makes it easy to define the unique square root of a positive operator.
The spectral theorem can be generalized to bounded normal operators on Hilbert spaces, and to unbounded self-adjoint operators.

Computing eigenvalues and eigenvectors


Introduction

[Figure: calculating the eigenvalues and eigenvectors of a matrix]
Suppose we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically via the characteristic polynomial. For large matrices this is usually infeasible, and numerical methods must be used instead.

Finding eigenvalues

An important tool for describing the eigenvalues of a square matrix A is its characteristic polynomial: λ is an eigenvalue of A if and only if the linear system (A − λI)v = 0 (where I is the identity matrix) has a nonzero solution v (an eigenvector), which in turn is equivalent to the determinant condition |A − λI| = 0 [1].
The function p(λ) = det(A − λI) is a polynomial in λ, since the determinant is defined as a sum of products; it is the characteristic polynomial of A. The eigenvalues of a matrix are the zeros of its characteristic polynomial.
The eigenvalues of a matrix A can therefore be found by solving the equation pA(λ) = 0. If A is an n × n matrix, then pA has degree n, so A has at most n eigenvalues. Conversely, the fundamental theorem of algebra says that this equation has exactly n roots, provided repeated roots are counted with multiplicity. Every polynomial of odd degree with real coefficients has at least one real root, so for odd n every real matrix has at least one real eigenvalue. For a real matrix, whether n is even or odd, the non-real eigenvalues occur in conjugate pairs.
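The following sketch, assuming numpy, finds the eigenvalues of an illustrative 2 × 2 matrix as the zeros of its characteristic polynomial:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.poly(A) returns the coefficients of det(lambda*I - A),
# the characteristic polynomial, highest degree first.
p = np.poly(A)        # [1., -7., 10.], i.e. lambda^2 - 7*lambda + 10
roots = np.roots(p)   # its zeros are the eigenvalues: 5 and 2
print(p, roots)

# Same values as a direct eigenvalue computation.
assert np.allclose(sorted(roots), sorted(np.linalg.eigvals(A)))
```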

Finding eigenvectors

Once an eigenvalue λ has been found, the corresponding eigenvectors can be obtained by solving the characteristic equation (A − λI)v = 0, where v is the eigenvector to be determined and I is the identity matrix.
An example of a matrix with no real eigenvalues is a rotation by 90 degrees clockwise.
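Continuing the sketch above, the eigenvectors for a known eigenvalue can be read off the null space of A − λI; scipy's null_space helper is assumed to be available:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = 5.0                       # an eigenvalue of A found above

# Solve (A - lam*I) v = 0: the eigenvectors span the null space.
v = null_space(A - lam * np.eye(2))[:, 0]
print(v)                        # proportional to (1, 1)
assert np.allclose(A @ v, lam * v)
```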

Numerical computation

In practice, the eigenvalues of large matrices are not computed from the characteristic polynomial; the computation is expensive, and exact "symbolic" roots of a high-degree polynomial are hard to compute and to express: the Abel-Ruffini theorem shows that the roots of polynomials of degree 5 or higher cannot in general be expressed in terms of n-th roots. Effective algorithms for estimating the roots of polynomials exist, but small errors in the eigenvalues can lead to large errors in the eigenvectors. General algorithms for finding the zeros of the characteristic polynomial, i.e., the eigenvalues, are therefore iterative. The simplest is the power method: take a random vector v and compute the sequence of unit vectors

v, Av/|Av|, A^2 v/|A^2 v|, A^3 v/|A^3 v|, ...

This sequence almost always converges to an eigenvector corresponding to the eigenvalue of largest absolute value. The algorithm is simple, but by itself not very useful; however, algorithms such as the QR algorithm build on it.
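A minimal sketch of the power method as just described, assuming numpy (the matrix, iteration count, and Rayleigh-quotient estimate are illustrative choices):

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Approximate the dominant eigenpair of A by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])  # random starting vector
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)        # the sequence v, Av/|Av|, A^2 v/|A^2 v|, ...
    lam = v @ A @ v                      # Rayleigh quotient estimate of the eigenvalue
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
print(lam)                               # close to the largest eigenvalue, (5 + sqrt(5)) / 2
assert np.allclose(A @ v, lam * v, atol=1e-6)
```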

Properties (2)


Algebraic multiplicity

The algebraic multiplicity of an eigenvalue λ of A is its multiplicity as a zero of the characteristic polynomial of A; in other words, if λ is a root, it is the number of times the factor (t − λ) occurs in the factorization of the characteristic polynomial. An n × n matrix has n eigenvalues when they are counted with algebraic multiplicity, because its characteristic polynomial has degree n.
An eigenvalue of algebraic multiplicity 1 is called a "simple eigenvalue".
In texts on matrix theory one may encounter statements such as:
"the eigenvalues of the matrix A are 4, 4, 3, 3, 3, 2, 2, 1,"
meaning that the algebraic multiplicity of 4 is two, that of 3 is three, that of 2 is two, and that of 1 is one. This style is used because algebraic multiplicity is essential to many mathematical proofs in matrix theory and is widely used.
Recall that the geometric multiplicity of an eigenvalue was defined above as the dimension of the associated eigenspace, i.e., of the null space of λI − A. The algebraic multiplicity can also be viewed as a dimension: it is the dimension of the associated generalized eigenspace (first sense), i.e., the null space of the matrix (λI − A)^k for any sufficiently large k. In other words, it is the space of "generalized eigenvectors" (first sense), where a generalized eigenvector is any vector that eventually becomes 0 when λI − A is applied to it enough times. Every eigenvector is a generalized eigenvector, so every eigenspace is contained in the corresponding generalized eigenspace; this gives a simple proof that the geometric multiplicity never exceeds the algebraic multiplicity. This first sense should not be confused with the generalized eigenvalue problem discussed below.

Example

Consider, for example, the matrix

A = (1 1; 0 1).

It has only one eigenvalue, λ = 1. Its characteristic polynomial is (λ − 1)^2, so the algebraic multiplicity of this eigenvalue is 2. However, the associated eigenspace is the x-axis (the number line), spanned by the vector (1, 0), so the geometric multiplicity is only 1.
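The gap between the two multiplicities for this matrix can be verified numerically; a sketch assuming numpy and scipy:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

print(np.linalg.eigvals(A))           # [1., 1.]: algebraic multiplicity 2

E = null_space(A - np.eye(2))         # the eigenspace of lambda = 1
print(E.shape[1])                     # 1: the geometric multiplicity is only 1

# The generalized eigenspace, the null space of (A - I)^2, has the full
# dimension 2, matching the algebraic multiplicity.
G = null_space((A - np.eye(2)) @ (A - np.eye(2)))
print(G.shape[1])                     # 2
```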
Generalized eigenvectors can be used to compute the Jordan canonical form of a matrix. The fact that a Jordan block is in general not diagonal but nilpotent off its diagonal is directly related to the distinction between eigenvectors and generalized eigenvectors.

Decomposition theorems

As mentioned above, the spectral theorem shows that a square matrix can be diagonalized if and only if it is normal. For more general matrices, which need not be normal, there are analogous results; naturally, some requirements must be relaxed, for example unitary equivalence, or the final matrix being merely block-diagonal or triangular rather than diagonal. All of these results use eigenvalues and eigenvectors to some extent. Some of them are listed below:
The Schur triangular form states that any matrix is unitarily equivalent to an upper triangular matrix;
The singular value decomposition theorem: A = UΣV*, where Σ is diagonal and U and V are unitary matrices. The diagonal entries of Σ are nonnegative, and the positive ones are called the singular values of A. This holds for non-square matrices as well [2];
The Jordan normal form: A = UΛU^{-1}, where Λ is not diagonal but block-diagonal, and U is not necessarily unitary. The sizes and number of the blocks are determined by the geometric and algebraic multiplicities of the eigenvalues. The Jordan decomposition is a fundamental result; from it one sees immediately that a square matrix is described completely by its eigenvalues, including multiplicities, up to a similarity transformation. This shows mathematically the extremely important role eigenvalues play in the study of matrices;
As a direct consequence of the Jordan decomposition, any matrix A can be written uniquely as A = S + N, where S is diagonalizable, N is nilpotent (i.e., N^q = 0 for some q), and S and N commute (SN = NS);
Any invertible matrix A can be written uniquely as A = SJ, where S is diagonalizable and J is unipotent (i.e., its characteristic polynomial is a power of (λ − 1)), and S and J commute.
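Two of the decompositions listed above are available in scipy and can be checked directly; a minimal sketch (the 2 × 2 matrix is an arbitrary example):

```python
import numpy as np
from scipy.linalg import schur, svd

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Schur form: A = Z T Z*, with T upper triangular and Z unitary.
T, Z = schur(A)
assert np.allclose(Z @ T @ Z.conj().T, A)
print(np.diag(T))        # the eigenvalues of A appear on the diagonal of T

# Singular value decomposition: A = U diag(s) V*, with s nonnegative.
U, s, Vh = svd(A)
assert np.allclose(U @ np.diag(s) @ Vh, A)
print(s)                 # the singular values of A
```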

Other properties

The spectrum is invariant under similarity transformations: the matrices A and P^{-1}AP have the same eigenvalues, for any square matrix A and any invertible matrix P. The spectrum is also invariant under transposition: A and A^T have the same eigenvalues [2].
Since a linear transformation on a finite-dimensional space is bijective if and only if it is injective, a matrix is invertible if and only if none of its eigenvalues is 0.
Some further consequences of the Jordan decomposition are the following:
A matrix A is similar to a diagonal matrix if and only if, for every eigenvalue of A, the algebraic multiplicity equals the geometric multiplicity. In particular, an n × n matrix with n distinct eigenvalues can always be diagonalized;
The vector space on which the matrix acts can be viewed as the direct sum of invariant subspaces spanned by its generalized eigenvectors. Each block on the diagonal of the Jordan form corresponds to one subspace of this direct sum. If a block is diagonal, its invariant subspace is an eigenspace; otherwise it is a generalized eigenspace, as defined above;
Since the trace, i.e., the sum of the elements on the main diagonal of a matrix, is invariant under unitary equivalence, the Jordan normal form shows that it equals the sum of all the eigenvalues;
Similarly, since the eigenvalues of a triangular matrix are the entries on its main diagonal, the determinant equals the product of the eigenvalues (counted according to algebraic multiplicity). Both identities are checked numerically in the sketch below.
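A numerical check of the trace and determinant identities, with an arbitrary illustrative matrix, assuming numpy:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

vals = np.linalg.eigvals(A)
assert np.isclose(np.trace(A), vals.sum())          # trace = sum of eigenvalues
assert np.isclose(np.linalg.det(A), vals.prod())    # det = product of eigenvalues
print(vals.sum(), vals.prod())
```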
The locations of the spectra of some subclasses of normal matrices are:
All eigenvalues of a Hermitian matrix (A = A*) are real. Furthermore, all eigenvalues of a positive-definite matrix (v*Av > 0 for all nonzero vectors v) are positive;
All eigenvalues of a skew-Hermitian matrix (A = −A*) are purely imaginary;
All eigenvalues of a unitary matrix (A^{-1} = A*) have absolute value 1.
Suppose A is an m × n matrix with m ≤ n, and B is an n × m matrix. Then BA has the same eigenvalues as AB, plus n − m eigenvalues equal to 0 (see the sketch below).
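A quick check of this AB-versus-BA property with random matrices, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))   # m x n with m = 2 <= n = 3
B = rng.standard_normal((3, 2))   # n x m

print(np.sort_complex(np.linalg.eigvals(A @ B)))  # 2 eigenvalues
print(np.sort_complex(np.linalg.eigvals(B @ A)))  # the same 2, plus one eigenvalue ~0
```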
Every matrix can be assigned an operator norm. The operator norm is an upper bound for the moduli of the eigenvalues, and hence for the spectral radius. This norm is directly related to the power method for computing the eigenvalue of largest modulus. When a matrix is normal, its operator norm equals the largest modulus of its eigenvalues, i.e., its spectral radius, and this value does not depend on the norm chosen on its domain.

Conjugate eigenvector

A conjugate eigenvector, or coneigenvector, is a vector that under a transformation is sent to a scalar multiple of its conjugate; the scalar is called the conjugate eigenvalue, or coneigenvalue, of the linear transformation. Conjugate eigenvectors and conjugate eigenvalues represent essentially the same information and meaning as ordinary eigenvectors and eigenvalues, but arise when an alternative coordinate system is used.
For example, in the theory of coherent electromagnetic scattering, the linear transformation A represents the action of the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the point of view of the wave, known as the Forward Scattering Alignment (FSA), which yields the ordinary eigenvalue equation; in radar, the coordinate system is defined from the point of view of the radar, known as the Backscattering Alignment (BSA), which yields the conjugate eigenvalue equation.

Generalized eigenvalue problem

A generalized eigenvalue problem (second sense) has the form

Av = λBv,

where A and B are matrices. The generalized eigenvalues (second sense) λ can be found by solving the equation

det(A − λB) = 0.
The set of matrices of the form A − λB, where λ is a complex number, is called a "pencil". If B is invertible, the original problem can be rewritten as the standard eigenvalue problem B^{-1}Av = λv. In many situations, however, performing the inversion is undesirable, and the generalized eigenvalue problem should be solved in its original form.
If A and B are real symmetric matrices, the eigenvalues are all real. This is not obvious from the second, equivalent formulation above, because the matrix B^{-1}A is in general not symmetric.
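For a symmetric pencil, scipy can solve the generalized problem directly, without forming B^{-1}A; a minimal sketch with illustrative matrices (B is taken positive definite):

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # real symmetric
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # symmetric positive definite

# Solve A v = lambda B v directly.
vals, vecs = eigh(A, B)
print(vals)                  # all real, as stated above
v = vecs[:, 0]
assert np.allclose(A @ v, vals[0] * (B @ v))
```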

Entries from a ring

For a square matrix A with entries in a ring, λ is called a right eigenvalue if there is a nonzero column vector x such that Ax = λx, and a left eigenvalue if there is a nonzero row vector y such that yA = yλ.
If the ring is commutative, the left and right eigenvalues coincide and are simply called eigenvalues. Otherwise, for instance when the ring is the set of quaternions, they may differ.
If the vector space is infinite-dimensional, the notion of eigenvalue can be generalized to the notion of spectrum. The spectrum is the set of scalars λ for which (T − λ)^{-1} is not defined, that is, for which T − λ has no bounded inverse.
Clearly, if λ is an eigenvalue of T, then λ lies in the spectrum of T. In general the converse is not true: there are operators on Hilbert or Banach spaces that have no eigenvectors at all. This can be seen in the following example: the shift operator on the Hilbert space l^2 (the space of all scalar sequences for which the sum of the squared moduli converges) has no eigenvectors, but it does have spectral values.
In infinite-dimensional spaces, the spectrum of a bounded operator is always nonempty; this also holds for unbounded self-adjoint operators. Via its spectral measure, the spectrum of any bounded or unbounded self-adjoint operator can be decomposed into absolutely continuous, discrete, and singular parts. Exponential growth or decay provides an example of a continuous spectrum, and the standing waves of a vibrating string provide an example of a discrete spectrum. The hydrogen atom exhibits both kinds of spectrum: its bound states correspond to the discrete part, while the ionization states are described by the continuous spectrum.
[Figure 3: an illustration using the chlorine atom]

Applications


Schrödinger equation

An eigenvalue equation in which the transformation is represented by a differential operator is the time-independent Schrödinger equation of quantum mechanics:

HΨE = EΨE

where H, the Hamiltonian, is a second-order differential operator and ΨE, the wave function, is an eigenfunction corresponding to the eigenvalue E, interpreted as the energy.
The wave functions of the bound states of an electron in a hydrogen atom can be regarded as eigenvectors of the hydrogen-atom Hamiltonian, and also of the angular momentum operator. They correspond to eigenvalues interpreted as the energies (increasing with n = 1, 2, 3, ...) and the angular momentum (increasing with s, p, d, ...). Illustrations of these states usually plot the square of the absolute value of the wave function: brighter regions correspond to a higher probability density for a position measurement, and the center of each picture is the atomic nucleus, a proton. In this case, however, we look only for the bound-state solutions of the Schrödinger equation, as is often done in quantum chemistry: we seek ΨE among the square-integrable functions. Since this space is a Hilbert space with a well-defined scalar product, we can introduce a basis set in which ΨE and H can be represented as a one-dimensional array (a vector) and a matrix, respectively. This allows the Schrödinger equation to be written in matrix form.
Dirac notation is often used in this context to emphasize the difference between the state vector |ΨE⟩ and its representation, the function ΨE. In this notation the Schrödinger equation is written

H|ΨE⟩ = E|ΨE⟩,

and |ΨE⟩ is called an eigenstate of H (in introductory textbooks H is sometimes written Ĥ), where H is regarded as a transformation (an observable) rather than as one particular representation of it, such as a differential operator. In the equation above, H|ΨE⟩ is understood as the vector obtained by applying H to |ΨE⟩.

Molecular orbitals

In quantum mechanics, and in particular in atomic and molecular physics, the atomic and molecular orbitals of Hartree-Fock theory can be defined as the eigenvectors of the Fock operator. The corresponding eigenvalues can be interpreted as ionization potentials via Koopmans' theorem. Here the term eigenvector is used in a somewhat broader sense, since the Fock operator depends explicitly on the orbitals and their eigenvalues; when this feature needs to be emphasized, one speaks of an implicit eigenvalue equation. Such equations are usually solved by an iterative procedure, called in this case the self-consistent field method. In quantum chemistry, the Hartree-Fock equations are often expressed in a non-orthogonal basis set; this particular representation is a generalized eigenvalue problem known as the Roothaan equations.

Factor analysis

In factor analysis, the eigenvectors of a covariance matrix correspond to the factors and the eigenvalues to the factor loadings. Factor analysis is a statistical technique used in the social sciences, in market analysis, product management, operations research, and other applied sciences that handle large amounts of data. Its goal is to explain the variation among several observable random variables in terms of a small number of unobservable random variables called factors. The observable random variables are modeled as linear combinations of the factors, plus a residual term.
[Figure: eigenfaces are examples of eigenvectors]
In image processing, an image of a face can be regarded as a vector whose components are the gray levels of the individual pixels; the dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix of a large set of normalized face images are called eigenfaces. They are very useful for expressing an arbitrary face image as a linear combination of eigenfaces, and thus provide a means of data compression. In this application, one generally keeps only the eigenfaces corresponding to the largest eigenvalues.
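A minimal PCA-style sketch of the eigenface idea, assuming numpy; random data stands in for a real set of face images, so the sizes and names are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))      # 100 fake "images" of 64 pixels each

Xc = X - X.mean(axis=0)                 # center each pixel across the data set
C = np.cov(Xc, rowvar=False)            # 64 x 64 covariance matrix

vals, vecs = np.linalg.eigh(C)          # C is symmetric, so eigh applies
order = np.argsort(vals)[::-1]          # sort by decreasing eigenvalue
eigenfaces = vecs[:, order[:10]]        # keep the 10 leading "eigenfaces"

# Compression: represent each image by 10 coefficients instead of 64 pixels.
codes = Xc @ eigenfaces
approx = codes @ eigenfaces.T + X.mean(axis=0)
print(codes.shape, approx.shape)        # (100, 10) (100, 64)
```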

Inertia tensor

In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The inertia tensor is the key quantity that determines the rotation of a rigid body about its center of mass.

Stress tensor

In solid mechanics, the stress tensor is symmetric and can therefore be decomposed into a diagonal tensor, with the eigenvalues on the diagonal and the eigenvectors as a basis. In this orientation the stress tensor, being diagonal, has no shear components; the components it does have are the principal components.
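A sketch of this diagonalization for an illustrative symmetric stress tensor (the values are arbitrary), assuming numpy:

```python
import numpy as np

# A symmetric stress tensor; values are illustrative only.
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, -20.0, 0.0],
                  [ 0.0,   0.0, 10.0]])

# For a symmetric matrix, eigh gives the eigenvalues (principal stresses)
# and eigenvectors (principal directions).
principal_stresses, principal_axes = np.linalg.eigh(sigma)
print(principal_stresses)

# In the principal-axis basis the tensor is diagonal: no shear components.
D = principal_axes.T @ sigma @ principal_axes
assert np.allclose(D, np.diag(principal_stresses))
```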

Eigenvalues of graphs

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly often) of the graph's Laplacian matrix I − T^{-1/2} A T^{-1/2}, where T is the diagonal matrix of vertex degrees and, in T^{-1/2}, 0^{-1/2} is replaced by 0. The principal eigenvector of a graph is used to measure the centrality of its vertices. Google's PageRank algorithm is an example: the components of the principal eigenvector of a modified adjacency matrix of the web graph give the page ranks.
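A toy sketch of the PageRank idea, assuming numpy; the 4-page link graph and the damping factor 0.85 are illustrative, and the "modified adjacency matrix" here is the usual damped, column-normalized link matrix:

```python
import numpy as np

# adj[i, j] = 1 if page j links to page i (a tiny 4-page web).
adj = np.array([[0, 0, 1, 1],
                [1, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 1, 0, 0]], dtype=float)

M = adj / adj.sum(axis=0)                  # column-normalized link matrix
d = 0.85                                   # damping factor
n = M.shape[0]
G = d * M + (1 - d) / n * np.ones((n, n))  # modified adjacency ("Google") matrix

# The page scores form the principal eigenvector, found by power iteration.
r = np.ones(n) / n
for _ in range(100):
    r = G @ r
    r /= r.sum()
print(r)                                   # PageRank scores; the associated eigenvalue is 1
```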