matrix

[jǔ zhèn]
Mathematical terminology
Matrix is a mathematical term. In mathematics, a matrix is a rectangular array of complex or real numbers [1]. The concept originally arose from the square arrays formed by the coefficients and constants of systems of linear equations, and was first proposed by the British mathematician Arthur Cayley in the 19th century.
The matrix is a common tool in advanced algebra, and is also frequently used in applied mathematics fields such as statistical analysis [2]. In physics, matrices have applications in circuit theory, mechanics, optics and quantum physics; in computer science, matrices are needed for three-dimensional animation. Matrix computation is an important topic in the field of numerical analysis. Decomposing a matrix into combinations of simpler matrices can simplify matrix computation both in theory and in practical applications. Matrices of wide application and special form, such as sparse matrices and quasi-diagonal matrices, have dedicated fast algorithms. For the development and application of matrix theory, see matrix theory. In astrophysics, quantum mechanics and other fields, infinite-dimensional matrices appear as a generalization of matrices.
A main branch of numerical analysis is devoted to developing effective algorithms for matrix computation, a subject that is centuries old and remains an expanding field of research. Matrix decomposition methods simplify both theoretical and practical calculation. Algorithms tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, speed up computations in the finite element method and other applications. Infinite matrices occur in planetary theory and atomic theory. A simple example of an infinite matrix is the matrix representing the derivative operator acting on the Taylor series of a function [3].
Chinese name: 矩阵
Foreign name: Matrix
Alias: rectangular array (crossbar array)
Expression: A_mn
Proposer: Arthur Cayley
Proposed time: 19th century
Applicable fields: astrophysics, circuit theory, mechanics, computer science, etc.
Applied discipline: linear algebra
Founder: Arthur Cayley
Pinyin: jǔ zhèn
Interpretation: a two-dimensional table of data arranged in rows and columns
Type: mathematical terminology

history

The study of matrices has a long history; Latin squares and magic squares were studied as early as ancient times.
Arthur Cayley, founder of matrix theory [4]
In mathematics, a matrix is a rectangular array of complex or real numbers [1]. The concept originally arose from the square arrays formed by the coefficients and constants of systems of linear equations, and was first proposed by the British mathematician Arthur Cayley in the 19th century. As a tool for solving linear equations, the matrix has a long history. The Nine Chapters on the Mathematical Art, compiled no later than the early Eastern Han Dynasty, represented the coefficients of a system of linear equations as a separate array, effectively obtaining its augmented matrix. Its elimination procedure, which multiplies a row by a nonzero real number and subtracts one row from another, is equivalent to the elementary row transformations of a matrix. At that time, however, there was no matrix concept as understood today; although the form was the same as the modern matrix, it served only as a standard way of representing and handling systems of linear equations.
Matrices formally appeared as objects of mathematical study only after the development of determinant theory. Logically, the concept of the matrix precedes that of the determinant, but the actual historical order was the reverse. The Japanese mathematician Seki Takakazu (1683) and Gottfried Wilhelm Leibniz (1693), one of the discoverers of calculus, established the theory of determinants independently and almost simultaneously. Determinants were then gradually developed as a tool for solving systems of linear equations. In 1750, Gabriel Cramer discovered Cramer's rule [5].
James Joseph Sylvester
The concept of the matrix emerged in the 19th century. Around the early 1800s, Gauss and Wilhelm Jordan established the Gauss-Jordan elimination method. In 1844, the German mathematician Ferdinand Eisenstein discussed "transformations" (matrices) and their products. In 1850, the British mathematician James Joseph Sylvester first used the word "matrix".
The British mathematician Arthur Cayley is generally recognized as the founder of matrix theory, for he was the first to study the matrix as an independent mathematical object. Many properties of matrices had already been found in the study of determinants, which led Cayley to regard the introduction of matrices as very natural. He said: "I certainly did not obtain the concept of the matrix through quaternions; it came either directly from the concept of the determinant, or as a convenient way of expressing systems of linear equations." From 1858 onward he published a series of papers on matrices, including "A Memoir on the Theory of Matrices", studying their laws of operation, inverses, transposes and characteristic polynomials. Cayley also proposed the Cayley-Hamilton theorem, verified the 3 × 3 case, and asserted that further proof was unnecessary. Hamilton proved the 4 × 4 case, and the proof of the general case was given by the German mathematician Frobenius (F. G. Frobenius) in 1898 [5].
In 1854, the French mathematician Hermite (C. Hermite) used the term "orthogonal matrix", but its formal definition was not published until 1878, by Frobenius. In 1879, Frobenius introduced the concept of the rank of a matrix. By then, the system of matrix theory had been essentially established.
The study of infinite-dimensional matrices began in 1884, when Poincaré, in two articles, used the theory of infinite-dimensional matrices and determinants in a loose way. In 1906, Hilbert introduced infinite quadratic forms (equivalent to infinite-dimensional matrices) to study integral equations, which greatly promoted the study of infinite-dimensional matrices. On this basis, Schmidt, Hellinger and Toeplitz developed operator theory, and the infinite-dimensional matrix became a powerful tool for studying operators on function spaces [6].
The concept of the matrix first appeared in Chinese in 1922, when Cheng Tingxi, in an introductory article, translated "matrix" as "vertical-and-horizontal array". In 1925, the computational terminology review group of the Scientific Terminology Review Association, in the list of approved terms published in Volume 10, Issue 4 of Science (科学), translated "matrix" as 矩阵 and "square matrix" as 方阵, while the "matrix" in compound terms such as "orthogonal matrix" and "adjoint matrix" was also rendered as 方阵. In 1935, after review by the Chinese Mathematical Society, the Ministry of Education of the Republic of China approved "Mathematical Terms" (and "instructed all colleges and universities across the country to follow it"), in which 矩阵 appeared for the first time as the translated name. In 1938, Cao Huiqun, entrusted by the Scientific Terminology Review Conference to revise the mathematical terms, proposed the translation "rectangular array" in the revised Glossary of Arithmetic Terms. In the Chinese edition of Mathematical Terms compiled after the founding of the People's Republic of China, the translation 矩阵 was used. In 1993, the Chinese Natural Science Terminology Commission published Mathematical Terms, in which 矩阵 was officially adopted as the translation, and it has been in use ever since.

definition

An array of m × n numbers a_ij arranged in m rows and n columns is called an m-row, n-column matrix, or m × n matrix for short, written as:
The m × n numbers are called the elements (entries) of the matrix A; the number a_ij is called the (i, j) element of A. A matrix with (i, j) element a_ij may be written as (a_ij) or (a_ij)_{m×n}, and an m × n matrix A may also be written as A_mn.
A matrix whose elements are real numbers is called a real matrix; a matrix whose elements are complex numbers is called a complex matrix. A matrix with an equal number n of rows and columns is called an n-order matrix or n × n square matrix [7].
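As a concrete illustration, the definition can be sketched in NumPy (the library and sample values are illustrative, not part of the original definition; note that NumPy indexes from 0 while the a_ij notation above is 1-based):

```python
import numpy as np

# A 2 x 3 matrix: 2 rows, 3 columns
A = np.array([[1, 2, 3],
              [4, 5, 6]])

m, n = A.shape        # number of rows and columns
a_12 = A[0, 1]        # the (1, 2) element in the 1-based notation of the text
```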

Basic operations

Matrix operations are very important in scientific computing [8]. The basic operations on matrices include matrix addition, subtraction, scalar multiplication, transposition, conjugation and conjugate transposition [1] [9].

addition

Matrix addition satisfies the following laws (A, B, C are matrices of the same size):
Note that only matrices of the same size can be added [10].
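A minimal NumPy sketch of these laws (the sample values are illustrative assumptions):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[1, 1], [1, 1]])

# Commutativity and associativity of matrix addition
assert np.array_equal(A + B, B + A)
assert np.array_equal((A + B) + C, A + (B + C))

# Only matrices of the same shape can be added
D = np.array([[1, 2, 3]])    # shape (1, 3), incompatible with (2, 2)
try:
    A + D
except ValueError:
    print("shapes differ: addition undefined")
```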

subtraction

Scalar multiplication

Scalar multiplication of matrices satisfies the following laws:
Matrix addition, subtraction and scalar multiplication are together called the linear operations on matrices [7].

Transposition

The matrix obtained by interchanging the rows and columns of a matrix A is called the transpose of A, denoted A^T [8]; this operation is called matrix transposition.
Matrix transposition satisfies the following laws:

conjugate

The conjugate of a matrix is obtained by taking the complex conjugate of each element. For a 2 × 2 complex matrix, the real part of each element is unchanged and the imaginary part is negated [11].

Conjugate transpose

The conjugate transpose of a matrix A is the conjugate of its transpose, commonly denoted A^H or A*. For a 2 × 2 complex matrix, the conjugate transpose is obtained by transposing the matrix and conjugating each element.
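The conjugate and the conjugate transpose can be sketched together in NumPy (the example matrix is an illustrative assumption):

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

conj_A = np.conj(A)    # entrywise conjugate: real parts kept, imaginary parts negated
A_H = A.conj().T       # conjugate transpose A^H: conjugate, then transpose

# The conjugate transpose equals the transpose of the conjugate
assert np.array_equal(A_H, np.conj(A).T)
```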

multiplication

Announce
edit
The product of two matrices is defined only when the number of columns of the first matrix A equals the number of rows of the second matrix B. If A is an m × n matrix and B is an n × p matrix, their product C = AB is an m × p matrix whose (i, j) element is the sum of the products of the elements of row i of A with the corresponding elements of column j of B: c_ij = a_i1·b_1j + a_i2·b_2j + … + a_in·b_nj [8].
For example:
Matrix multiplication satisfies the following laws:
Associative law:
Left distributive law:
Right distributive law:
Matrix multiplication does not, in general, satisfy the commutative law.
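A short NumPy sketch of the product rule and of non-commutativity (all matrix values are illustrative assumptions):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])        # 2 x 2
B = np.array([[5, 6, 7],
              [8, 9, 10]])    # 2 x 3

C = A @ B                     # defined: columns of A (2) == rows of B (2); C is 2 x 3

# c_ij is the dot product of row i of A with column j of B
assert C[0, 0] == 1 * 5 + 2 * 8

# Multiplication is generally not commutative
X = np.array([[0, 1], [0, 0]])
Y = np.array([[0, 0], [1, 0]])
assert not np.array_equal(X @ Y, Y @ X)
```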

determinant

Main entry: determinant
The determinant of an n × n square matrix A is denoted det(A) or |A|. The determinant of a 2 × 2 matrix is [12]: det(A) = a_11·a_22 − a_12·a_21. The determinant of an n × n matrix equals the sum, along any row (or column), of the products of its elements with their corresponding algebraic cofactors: det(A) = a_i1·A_i1 + a_i2·A_i2 + … + a_in·A_in for any fixed row i, where A_ij denotes the algebraic cofactor of a_ij.
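The 2 × 2 formula and the cofactor expansion can be checked numerically; the sketch below (matrices and the helper `minor` are illustrative assumptions) compares an explicit first-row expansion with NumPy's determinant:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# 2 x 2 determinant: a11*a22 - a12*a21
assert np.isclose(np.linalg.det(A), 1 * 4 - 2 * 3)

B = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])

def minor(M, i, j):
    """Delete row i and column j of M (0-based indices)."""
    return np.delete(np.delete(M, i, axis=0), j, axis=1)

# Cofactor expansion along the first row
expansion = sum((-1) ** j * B[0, j] * np.linalg.det(minor(B, 0, j))
                for j in range(3))
assert np.isclose(expansion, np.linalg.det(B))
```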

Eigenvalues and eigenvectors

A scalar λ and a nonzero vector x satisfying Ax = λx are called an eigenvalue and a corresponding eigenvector of the n × n square matrix A [12]; here x is the eigenvector and λ is the eigenvalue.
The set of all eigenvalues of A is called the spectrum of A [13], denoted σ(A).
The eigenvalues and eigenvectors of a matrix can reveal deep characteristics of the linear transformation it represents [9].
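The defining relation Ax = λx can be verified directly in NumPy (the example matrix is an illustrative assumption):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors
vals, vecs = np.linalg.eig(A)

# Each pair satisfies A x = lambda x
for lam, x in zip(vals, vecs.T):
    assert np.allclose(A @ x, lam * x)
```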

Trace of matrix

Main entry: Trace of matrix
The sum of the diagonal elements of a matrix A is called the trace of A, denoted tr(A) [14], i.e. tr(A) = a_11 + a_22 + … + a_nn.
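A short NumPy sketch (example values are illustrative); it also checks the standard identity, not stated above, that the trace equals the sum of the eigenvalues:

```python
import numpy as np

A = np.array([[1.0, 5.0],
              [2.0, 4.0]])

tr = np.trace(A)                 # a_11 + a_22
assert tr == 1.0 + 4.0

# Standard identity: the trace equals the sum of the eigenvalues
assert np.isclose(tr, np.sum(np.linalg.eig(A)[0]))
```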

Positivity

Let A be an n × n real symmetric matrix, and consider the corresponding quadratic form f(x) = x^T A x over nonzero vectors x. If f(x) > 0 for every nonzero x, A is called a positive definite matrix; if f(x) < 0 for every nonzero x, A is a negative definite matrix; if f(x) ≥ 0 for every x, A is a positive semidefinite matrix; and if A is neither positive semidefinite nor negative semidefinite, A is an indefinite matrix [15]. The positive definiteness of a symmetric matrix is closely related to its eigenvalues: a matrix is positive definite if and only if all of its eigenvalues are positive [1].

Decomposition of matrix

Matrix decomposition expresses a matrix as the sum or product of several matrices that are simpler or have special structure [13]. Common matrix decomposition methods include triangular decomposition, spectral decomposition, singular value decomposition and full-rank decomposition.

Triangular decomposition

Let A be a nonsingular matrix. Then A can be uniquely decomposed as A = U1·R, where U1 is a unitary matrix and R is an upper triangular complex matrix with positive diagonal elements; alternatively, A can be uniquely decomposed as A = L·U2, where L is a lower triangular complex matrix with positive diagonal elements and U2 is a unitary matrix [11].

Spectral decomposition

Spectral decomposition (eigendecomposition) expresses a matrix as a product of matrices built from its eigenvalues and eigenvectors. Note that only diagonalizable matrices admit such a decomposition [16].

singular value decomposition

Suppose M is an m × n matrix whose elements all belong to the field K, where K is either the field of real numbers or the field of complex numbers. Then there exists a decomposition
M = UΣV*, where U is an m × m unitary matrix, Σ is an m × n diagonal matrix with nonnegative real entries, and V*, the conjugate transpose of V, is an n × n unitary matrix. Such a decomposition is called a singular value decomposition of M [17]. The elements Σ_{i,i} on the diagonal of Σ are the singular values of M. The common convention is to arrange the singular values in descending order; Σ is then uniquely determined by M.
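The decomposition M = UΣV* can be computed and verified with NumPy (the example matrix is an illustrative assumption):

```python
import numpy as np

M = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])          # a 2 x 3 real matrix

# full_matrices=True gives U (2x2), the singular values, and V^H (3x3)
U, s, Vh = np.linalg.svd(M, full_matrices=True)

# Singular values are returned in descending order
assert np.all(s[:-1] >= s[1:])

# Rebuild M = U @ Sigma @ V^H, with Sigma the 2 x 3 diagonal matrix of singular values
Sigma = np.zeros_like(M)
Sigma[:2, :2] = np.diag(s)
assert np.allclose(U @ Sigma @ Vh, M)
```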

Full rank decomposition

Let A be an m × n matrix of rank r > 0. If there exist an m × r matrix F and an r × n matrix G, both of rank r, such that A = FG, this is called a full-rank decomposition of A [18].

LUP decomposition

The idea of LUP decomposition is to find three n × n matrices L, U, P satisfying PA = LU, where L is a unit lower triangular matrix, U is an upper triangular matrix, and P is a permutation matrix. Any matrices L, U, P satisfying this condition are called an LUP decomposition of the matrix A [19].
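A minimal sketch of LUP decomposition via Gaussian elimination with partial pivoting, assuming a nonsingular square input (the function name `lup` and the example matrix are illustrative, not from the original article):

```python
import numpy as np

def lup(A):
    """LUP decomposition with partial pivoting: returns P, L, U with P @ A = L @ U.
    L is unit lower triangular, U upper triangular, P a permutation matrix.
    Assumes A is square and nonsingular."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # pivot: the largest |entry| in column k at or below the diagonal
        p = k + np.argmax(np.abs(U[k:, k]))
        U[[k, p], k:] = U[[p, k], k:]
        L[[k, p], :k] = L[[p, k], :k]
        P[[k, p], :] = P[[p, k], :]
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lup(A)
assert np.allclose(P @ A, L @ U)
```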

Special types


symmetric matrix

In linear algebra, a symmetric matrix is a square matrix that equals its own transpose [8], i.e. A^T = A.
For example:

Hermitian matrix

A square complex matrix A is called a Hermitian matrix if A = A^H, that is, if its elements satisfy a_ij = conj(a_ji). In other words, a Hermitian matrix equals its own conjugate transpose [1].
For real matrices, being Hermitian is equivalent to being symmetric.

Orthogonal matrix

A real square matrix Q is called an orthogonal matrix if Q^T·Q = Q·Q^T = I.

Unitary matrix

A complex square matrix U is called a unitary matrix if U^H·U = U·U^H = I.

Banded matrix

A matrix A is called a banded matrix if its elements satisfy a_ij = 0 whenever |i − j| > k [20].

Triangular matrix

In linear algebra, a triangular matrix is a special kind of square matrix whose nonzero elements form a triangle. Triangular matrices are divided into upper triangular and lower triangular matrices. A matrix with a_ij = 0 for all i > j is called an upper triangular matrix [8]; a matrix with a_ij = 0 for all i < j is called a lower triangular matrix [8]. Triangular matrices can be regarded as a simplified special case of general square matrices.

Similarity matrix

In linear algebra, similar matrices are matrices related by a similarity relation, which is an equivalence relation. Two n × n matrices A and B are similar if and only if there exists an n × n invertible matrix P such that P⁻¹AP = B (equivalently, A = PBP⁻¹).

Congruent matrix

Let C be a nonsingular matrix; then the matrix C^H·A·C is called a congruent matrix of A, and the corresponding linear transformation is called a congruence transformation [1].

Vandermonde matrix

The Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix whose rows (or columns) form geometric progressions [1].
For example:
or, writing the relationship for the element in row i, column j:
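NumPy can build Vandermonde matrices directly; the sketch below (sample values are illustrative) shows each row as a geometric progression:

```python
import numpy as np

x = np.array([1, 2, 3])

# NumPy's default puts the highest power in the first column;
# increasing=True gives columns 1, x, x^2, x^3 as in the geometric-progression form
V = np.vander(x, N=4, increasing=True)

# Row i is the geometric progression 1, x_i, x_i^2, x_i^3
assert np.array_equal(V[1], [1, 2, 4, 8])
```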

Hadamard matrix

A Hadamard matrix, named after Jacques Hadamard, is a square matrix each of whose elements is +1 or −1 and whose rows are mutually orthogonal [17].
An n × n Hadamard matrix H satisfies H·H^T = n·I_n, where I_n is the n × n identity matrix.

Diagonal matrix

For an m × m matrix, if a_ij = 0 whenever i ≠ j, then all elements off the main diagonal are 0 [8]; such a matrix is called a diagonal matrix.

Block matrix

A block matrix is obtained by partitioning a matrix into smaller matrices, which are called blocks (sub-blocks) [21]. For example:
the matrix can be partitioned into four 2 × 2 matrices:
The partitioned matrix can then be written as:

Jacobian matrix

The Jacobian matrix is the matrix of first-order partial derivatives of a vector-valued function, arranged in a fixed pattern.
It can be expressed as follows:

Rotation matrix

A rotation matrix is a matrix which, when multiplied by a vector, changes the direction of the vector but not its magnitude. Rotation matrices exclude reflections, which would turn a right-handed coordinate system into a left-handed one or vice versa. All rotations together with the reflections form the set of orthogonal matrices.
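The defining properties (direction changes, length does not, no reflection) can be checked for the standard 2-D rotation matrix; the helper name `rotation_2d` and the sample angle are illustrative:

```python
import numpy as np

def rotation_2d(theta):
    """2-D rotation matrix; multiplying a vector rotates it by theta radians
    counterclockwise without changing its length."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s, c]])

R = rotation_2d(np.pi / 2)           # quarter turn
v = np.array([1.0, 0.0])
w = R @ v

assert np.allclose(w, [0.0, 1.0])                         # direction changed
assert np.isclose(np.linalg.norm(w), np.linalg.norm(v))   # length preserved

# A rotation matrix is orthogonal with determinant +1 (no reflection)
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)
```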
The term "rotation matrix" is also used for lottery wheeling systems studied by the lottery expert and Australian mathematician Dietrov. Such a wheel is claimed to help a player lock in chosen numbers and improve the chance of winning: the player selects some numbers and fills them into the positions prescribed by a particular rotation matrix; if enough of the chosen numbers match the drawn numbers, a prize at some tier is guaranteed. The wheel aims to obtain this guarantee at minimum cost, far below the cost of full combination betting.
Mathematically, the principle behind such rotation matrices is related to a class of combinatorial designs: covering designs. Covering designs, packing designs, Steiner systems and t-designs are combinatorial optimization problems in discrete mathematics; they address how to combine the elements of a set so as to satisfy specific requirements.

norm

Main entry: norm
Matrix norms fall mainly into three types: induced norms, entrywise (element-form) norms and Schatten norms [13].
If a mapping from the matrix space to the nonnegative real numbers satisfies the defining conditions of a norm (positivity, absolute homogeneity and the triangle inequality), the mapping is called a matrix norm on that space.

Induced norm

The induced norm is also called the operator norm on the matrix space, defined as [22]:
The commonly used induced norms are the p-norms. The p-norm is also known as the Minkowski p-norm or the ℓp norm; in particular, when p = 1, 2 or ∞, the corresponding induced norms are the maximum absolute column sum, the spectral norm (the largest singular value) and the maximum absolute row sum, respectively [23].

Element formal norm

Arrange the m × n matrix as a single vector of length m·n, and then apply a vector norm definition to obtain an entrywise (element-form) norm of the matrix [24]; the formula is as follows:
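The induced and entrywise norms above can be compared numerically; NumPy's `ord` conventions match the descriptions (the example matrix is an illustrative assumption):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0, 4.0]])

# Induced (operator) norms: ord=1 is the max absolute column sum,
# ord=np.inf the max absolute row sum, ord=2 the largest singular value
assert np.isclose(np.linalg.norm(A, ord=1), 6.0)        # max(1+3, 2+4)
assert np.isclose(np.linalg.norm(A, ord=np.inf), 7.0)   # max(1+2, 3+4)
assert np.isclose(np.linalg.norm(A, ord=2), np.linalg.svd(A)[1].max())

# Entrywise norm: stack the entries into one vector, then take a vector norm;
# with the vector 2-norm this is the Frobenius norm
fro = np.linalg.norm(A, ord='fro')
assert np.isclose(fro, np.sqrt(1 + 4 + 9 + 16))
```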

Schatten norm

The Schatten p-norm is a norm defined through the singular values of a matrix: it is the vector p-norm of the sequence of singular values of the corresponding matrix [25].

application


image processing

In image processing, an affine transformation of an image can generally be expressed as the product of an affine matrix and the original image [26]. For example,
this represents a linear transformation followed by a translation.
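In homogeneous coordinates, "a linear transformation followed by a translation" becomes a single matrix product; the sketch below (the specific angle and translation are illustrative assumptions) applies it to one point:

```python
import numpy as np

# Affine transform in homogeneous coordinates: rotate a point by 90 degrees,
# then translate by (tx, ty) = (5, 1)
theta, tx, ty = np.pi / 2, 5.0, 1.0
M = np.array([[np.cos(theta), -np.sin(theta), tx],
              [np.sin(theta),  np.cos(theta), ty],
              [0.0,            0.0,           1.0]])

p = np.array([1.0, 0.0, 1.0])      # the point (1, 0) with homogeneous coordinate 1
q = M @ p
assert np.allclose(q[:2], [5.0, 2.0])   # rotated to (0, 1), then shifted by (5, 1)
```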

Linear transformation and symmetry

Linear transformations and their associated symmetries play an important role in modern physics. For example, in quantum field theory, elementary particles are classified by representations of the Lorentz group of special relativity, specifically by their behavior under the spinor group. Concrete representations involving the Pauli matrices and the more general Dirac matrices are an indispensable part of the physical description of fermions, whose behavior can be expressed by spinors. Describing the three lightest quarks requires the special unitary group SU(3); in calculations physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of quantum chromodynamics, the modern description of the strong nuclear force. Another example is the Cabibbo-Kobayashi-Maskawa matrix (CKM matrix): the quark states basic to weak interactions differ from the quark states of definite mass, but the two sets of states are linearly related, and this relationship is expressed by the CKM matrix.

Linear combination of quantum states

In 1925, Heisenberg proposed the first model of quantum mechanics, in which infinite-dimensional matrices represent the operators acting on quantum states; this approach is known as matrix mechanics. For example, the density matrix is used to describe "mixed" quantum states as linear combinations of "pure" quantum states in a quantum system [27].
Another matrix serves as an important tool for describing the scattering experiments that form the cornerstone of experimental particle physics. When particles moving at high speed in an accelerator enter the interaction region of other particles, previously non-interacting particles collide and their momenta change, producing a series of new particles. The collision can be interpreted through the scalar products of linear combinations of the outgoing particle states with the incoming particle states. These linear combinations form a matrix, called the S-matrix, which records all possible interactions between the particles [28].

Normal modes

Another general application of matrices in physics is the description of linearly coupled harmonic systems. The equations of motion of such systems can be expressed in matrix form: a mass matrix multiplying a generalized velocity gives the kinetic term, and a force (stiffness) matrix multiplying a displacement vector describes the interactions. The best way to solve such a system is to find the eigenvectors of the matrix (by diagonalization), called the system's normal modes. This approach is essential when studying the internal dynamics of molecules: the vibrations of atoms bound by chemical bonds can be expressed as superpositions of normal vibration modes [29]. Normal modes are also needed when describing mechanical vibrations or circuit oscillations [30].
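A minimal sketch of this procedure for two equal masses coupled by springs (the values of m and k, and the reduction to an ordinary eigenproblem, are illustrative assumptions):

```python
import numpy as np

# Two equal masses m, attached to walls and to each other by springs of
# stiffness k: M x'' = -K x. Normal modes solve K v = w^2 M v.
m, k = 1.0, 1.0
M = m * np.eye(2)                        # mass matrix
K = k * np.array([[2.0, -1.0],
                  [-1.0, 2.0]])          # stiffness (force) matrix

# With M proportional to the identity this reduces to an ordinary symmetric
# eigenproblem; eigh returns eigenvalues in ascending order
w2, modes = np.linalg.eigh(K / m)

assert np.allclose(w2, [1.0, 3.0])       # squared mode frequencies k/m and 3k/m
# In-phase mode: both masses move together with equal amplitude
assert np.isclose(abs(modes[0, 0]), abs(modes[1, 0]))
```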

geometrical optics

Many uses of matrices can be found in geometrical optics. Geometrical optics is an approximation to the wave theory of light in which light is treated as geometric rays. Under the paraxial approximation, if the angle between a ray and the optical axis is small, the effect of a lens or reflective element on the ray can be expressed as the product of a 2 × 2 matrix and a vector. The two components of the vector are the geometric properties of the ray: its slope and its perpendicular distance from the optical axis at the principal plane. This matrix is called the ray transfer matrix; its entries encode the properties of the optical element. It comes in two basic types: a "refraction matrix" describes the refraction of a ray at a lens, and a "translation matrix" describes the translation of a ray propagating from one principal plane to another.
For an optical system composed of a series of lenses or reflective elements, the ray propagation path can be described simply by the product of the corresponding matrices [31].
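The "product of the corresponding matrices" can be sketched with the standard paraxial ABCD matrices for free space and a thin lens (the helper names and the focal length and distance values are illustrative assumptions):

```python
import numpy as np

# Paraxial ray transfer (ABCD) matrices acting on the vector [height, angle].
def free_space(d):
    """Propagation over a distance d."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# System matrix: the rightmost factor acts first. A ray parallel to the axis
# passes a lens of f = 2, then travels a distance 2, and crosses the axis:
system = free_space(2.0) @ thin_lens(2.0)
ray_in = np.array([1.0, 0.0])        # height 1, angle 0
ray_out = system @ ray_in
assert np.isclose(ray_out[0], 0.0)   # height 0 at the focal plane
```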

electronics

In electronics, traditional mesh analysis or nodal analysis leads to a system of linear equations, which can be expressed and solved using matrices.
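A minimal nodal-analysis sketch for a small resistive circuit (the topology and component values are illustrative assumptions, not from the original article):

```python
import numpy as np

# Nodal analysis written as G v = i: G is the conductance matrix, v the
# unknown node voltages, i the injected currents.
# Node 1 -- R1 (1 ohm) -- ground, Node 1 -- R2 (2 ohm) -- Node 2,
# Node 2 -- R3 (2 ohm) -- ground, with 1 A injected into node 1.
G = np.array([[1/1 + 1/2, -1/2],
              [-1/2,       1/2 + 1/2]])
i = np.array([1.0, 0.0])

v = np.linalg.solve(G, i)
# Kirchhoff's current law must hold at every node
assert np.allclose(G @ v, i)
```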