High performance MySQL
Overall architecture

- Database installation and server startup principles
- Database connection principles and performance optimization
- The overall server architecture and the execution flow of a query statement
- The execution flow of an SQL update statement and how logs are written
- What happens under the hood when we create databases and tables
Index and query optimization

- The underlying data structure of database indexes: the B+ tree
- How different types of database indexes maintain their B+ trees
- Extras: a summary of B+ tree indexes and quickly filling a table with test data
- How indexes are used in full-value matching queries, and the underlying principle of the leftmost-prefix rule
- Index usage in range matching queries and how to read the fields of an EXPLAIN result
- How to use indexes to optimize queries with pagination, sorting, and grouped statistics
- The underlying principles of inner and outer join queries and how to use indexes to optimize their performance
- The underlying execution strategies of subqueries and how to use indexes to optimize their performance
- Efficiently fetching randomly ordered rows from a table in combination with PHP business code
Query tip: for COUNT aggregate queries, some students may wonder whether count(*) or count(id) performs better. For the InnoDB engine, MySQL has specifically optimized count(*), whereas count(id) scans the full table and accumulates row by row, so count(*) is recommended. Some may also wonder why InnoDB does not simply record the total number of rows in the table the way MyISAM does: because InnoDB supports transactions, and transactions rely on the MVCC mechanism (described in detail in the transaction chapter below), each record may have multiple versions at the same time, so the exact row count is indeterminate. In addition, for fields that frequently need to be counted, such as article view counts, video play counts, or the number of items purchased, we can follow a denormalized design and store a redundant field in the table, or use a cache system; both approaches improve query performance.
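A minimal sketch of the redundant-counter approach described above, assuming hypothetical `articles` and `article_views` tables:

```sql
-- Preferred for full-table counts: let InnoDB use its optimized COUNT(*) path.
SELECT COUNT(*) FROM articles;

-- Denormalized design: keep a redundant view_count column on the
-- article row instead of counting a detail table on every read.
ALTER TABLE articles ADD COLUMN view_count INT UNSIGNED NOT NULL DEFAULT 0;

-- On each view, record the detail row and bump the counter in one
-- transaction so the redundant field stays consistent.
BEGIN;
INSERT INTO article_views (article_id, viewed_at) VALUES (42, NOW());
UPDATE articles SET view_count = view_count + 1 WHERE id = 42;
COMMIT;

-- Reading the count is now a single indexed row lookup.
SELECT view_count FROM articles WHERE id = 42;
```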
Database Transactions

Note: the following transaction tutorials apply only to the InnoDB engine.

- Buffer Pool
- Introduction to MySQL database transactions and the ACID properties
- Ensuring the durability of database transactions through redo logs
- Ensuring the atomicity of database transactions through undo logs
- Problems with concurrent transactions and MySQL transaction isolation levels
- Ensuring the consistency of database transactions through the MVCC mechanism
- Global locks, table locks, and row locks in MySQL (shared locks, exclusive locks, intent locks, and deadlocks)
- Implementing pessimistic locks, optimistic locks, and database transaction isolation
How MySQL transactions solve phantom reads at the REPEATABLE READ level: we know that InnoDB supports row locks, which can lock a row before updating (modifying/deleting) it. For an insert, however, the row to be inserted does not exist yet, so no row lock can be placed on it. MySQL therefore introduces the gap lock, which locks the gaps between existing rows (with (-∞, MIN) and (MAX, +∞) at the two ends). A gap lock combined with the adjacent row lock forms a next-key lock, an interval that is open on the left and closed on the right (gap lock + row lock). The next-key lock is the basic unit of locking in MySQL, and only objects accessed during the query are locked. If the condition of a locking SQL statement is an equality query on a unique index (including the primary key), the lock degenerates into a row lock. Because next-key locks block other transactions from modifying the locked rows or inserting into the locked gaps, the phantom read problem is avoided.
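A minimal sketch of this behavior, assuming a hypothetical table `t(id INT PRIMARY KEY)` that currently contains rows with id 5 and 10:

```sql
-- Session A: under REPEATABLE READ, take a locking read over a range.
BEGIN;
SELECT * FROM t WHERE id > 5 AND id < 10 FOR UPDATE;
-- InnoDB places a next-key lock covering the gap between the existing
-- rows, so the interval (5, 10] is now locked.

-- Session B: this insert falls into the locked gap and blocks until
-- session A commits or rolls back, so no phantom row can appear
-- between A's repeated reads.
INSERT INTO t (id) VALUES (7);

-- Session A:
COMMIT;  -- session B's blocked insert can now proceed
```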
Database high availability

- Solving a sudden online database performance problem caused by slow queries
- Solving a sudden online database performance problem caused by high load, and a look into PHP database long connections
- The binlog write mechanism and performance optimization for high-concurrency write transactions
- The principle of MySQL master-slave replication & building a database cluster based on Docker
- Viewing binlog logs, an introduction to binlog formats, and how to choose the best one
- Configuring GTID-based master-slave replication and implementing database read/write splitting in a Laravel project
- How to solve master-slave delay (Part 1): causes of master-slave delay and optimization plans
- How to solve master-slave delay (Part 2): delay under read/write splitting and its solutions
- Ensuring high availability of a MySQL database cluster through primary/standby switchover
- Using Docker to orchestrate the Mycat middleware to implement read/write splitting and primary/standby hot switchover (demonstrated with a Laravel project)
Operations and maintenance tip: deleting data doesn't have to mean running away. Binlog logs can be used not only to build a highly available database cluster, but also to recover data that was deleted by mistake. If a row of data is deleted by mistake, a Flashback tool can recover it from a binlog written in ROW format; if a database or table is dropped by mistake, the data can be recovered from a full backup (a scheduled backup of the entire database) plus incremental backups (the binlog); if the entire database instance is deleted by mistake (for example via a disk-level command such as rm) and a binlog-based database cluster has already been built, there is nothing to worry about: simply remove that node and resynchronize the data from the other nodes.
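A sketch of the preconditions for this kind of recovery; the file name and position in the comment are placeholders for illustration, not real values:

```sql
-- Confirm the binlog is enabled and written in ROW format,
-- which is what flashback-style row recovery relies on.
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';

-- List the binlog files available for point-in-time recovery.
SHOW BINARY LOGS;

-- After restoring the latest full backup, the incremental changes are
-- replayed from the binlog; with the mysqlbinlog CLI this looks like:
--   mysqlbinlog --start-position=<pos> binlog.000042 | mysql -u root -p
```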
Practical Optimization (Free)

Note: the following practical optimization chapters use Laravel model-class database operations for the demonstrations.

- Measuring database performance metrics (memory usage and query time)
- Getting started with association query performance optimization (indexes, eager loading, specifying query fields)
- Aggregate query performance optimization (reducing the number of queries)
- Optimizing association query performance through subqueries (creating dynamic associations via subqueries)
- Optimizing fuzzy-match queries through functional indexes and virtual generated columns
- Fuzzy matching through subqueries and union queries combined with association queries
- Implementation and performance optimization of sorting by association (Part 1): one-to-one and belongs-to associations
- Implementation and performance optimization of sorting by association (Part 2): one-to-many associations
A suggestion for day-to-day database optimization: keep SQL query statements simple and easy to optimize through sound up-front database design, instead of piling up complex, hard-to-optimize SQL statements to fetch the data.