
Linux kernel memory management - SLUB

This is the sixth article in the "Linux kernel memory management" series.

Part I: A quick overview of the kernel memory-management process

Part II: The kernel's memory-management data structures

Part III: Memory handling from the first line of boot code to the jump into C code

Part IV: An overview of memory handling in the initialization C code

Part V (first half) and Part V (second half): The memblock and buddy system allocators

To avoid getting bogged down, this article and the following ones mainly use diagrams plus text and try to avoid quoting too much code. The focus will be on:

  1. Background
  2. Architecture and design ideas
  3. Process flow
  4. Special handling and the reasons for it

Preface

There are many articles online that cover SLAB/SLUB in detail. This article is based on the current kernel version (5.14.x) and introduces the widely used SLUB allocator, aiming to be as detailed and easy to understand as possible. For further reading, see the linked references; no additional references are cited in this article.

kmalloc/kfree are probably the most commonly used memory allocation and release functions in the kernel, and the implementation behind them is the SLAB allocator. SLUB is one implementation of the SLAB allocator; the other two are SLAB and SLOB. As the naming suggests, SLAB is the ancestor, and SLOB and SLUB evolved from it as the kernel developed.

  • The SLOB allocator is designed for the special memory-management requirements of embedded devices.
  • SLUB is based on SLAB; it serves the needs of a wide range of platforms, uses memory more efficiently, and improves debuggability.

What problems does the SLAB allocator solve? The question can be put another way: why do we need a SLAB allocator when we already have the Buddy System? The explanation is as follows:

  • The buddy system manages memory in pages, and a page is generally 4096 bytes. When kernel code requests memory, the request is rarely a multiple of the page size. If we allocated a whole page for every small request, system memory would soon be exhausted.
  • For this reason, memory must also be managed in smaller units. This has to take into account the fragmentation caused by frequent allocation and release, make effective use of the CPU caches, and avoid contention when different CPUs access the same memory area.

Why are all three called SLAB allocators? Because the three use the same data structure names and the same memory allocation/release API (note that only the names are the same). For example, the management structure in each of them is called struct kmem_cache.

Kernel Configuration

As mentioned in the preface, SLAB/SLOB/SLUB share the same API and structure names, so they must be mutually exclusive. This can also be seen from the kernel's Kconfig definition:

    choice
    	prompt "Choose SLAB allocator"
    	default SLUB
    	help
    	   This option allows to select a slab allocator.

    config SLAB
    	bool "SLAB"
    	select HAVE_HARDENED_USERCOPY_ALLOCATOR
    	help
    	  The regular slab allocator that is established and known to work
    	  well in all environments. It organizes cache hot objects in
    	  per cpu and per node queues.

    config SLUB
    	bool "SLUB (Unqueued Allocator)"
    	select HAVE_HARDENED_USERCOPY_ALLOCATOR
    	help
    	   SLUB is a slab allocator that minimizes cache line usage
    	   instead of managing queues of cached objects (SLAB approach).
    	   Per cpu caching is realized using slabs of objects instead
    	   of queues of objects. SLUB can use memory efficiently
    	   and has enhanced diagnostics. SLUB is the default choice for
    	   a slab allocator.

    config SLOB
    	depends on EXPERT
    	bool "SLOB (Simple Allocator)"
    	help
    	   SLOB replaces the stock allocator with a drastically simpler
    	   allocator. SLOB is generally more space efficient but
    	   does not perform as well on large systems.

    endchoice

The default option is SLUB.

For background on Kconfig itself, see the Kconfig Language documentation.

Architecture

The position of SLAB in the system (hereafter "SLAB" also stands for SLUB unless noted otherwise) is shown in Figure 1:

 Figure 1: Location of SLUB in the memory management system

The brief description is as follows:

  • The largest management unit of the memory-management system is the node, which is divided into multiple memory zones (ToDo: add supplementary figures to the Buddy System chapter and Part II).
  • During page allocation (remember page allocation? See Part V, first and second halves, on the Buddy System), the memory zone to allocate from is chosen according to the parameters passed in.
  • Allocating from a slab requires a kmem_cache management structure, and the memory for these management structures itself comes from a kmem_cache. Here the kernel makes a very clever design:
    • The initial management objects used to create the kmem_cache slabs are boot_kmem_cache and boot_kmem_cache_node. They carry the __initdata attribute, which means they are placed in the .init.data section and freed in the later stage of kernel initialization.
    • During SLAB system initialization, the global SLAB objects kmem_cache and kmem_cache_node are allocated from the slabs managed by boot_kmem_cache and boot_kmem_cache_node.
    • The contents of boot_kmem_cache and boot_kmem_cache_node are then copied into kmem_cache and kmem_cache_node.
    • From this point on, the global objects kmem_cache and kmem_cache_node are used for SLUB management.
  • The kmalloc caches are also created early in kernel initialization. In essence, they are SLAB objects with sizes of 8, 16, 32, ... bytes.

Source Files

The following table describes the kernel source files related to SLAB and SLUB:

File            Description
slab.c          Implementation of the SLAB allocator (one of the three allocators)
slab.h          Header definitions shared by all SLAB allocators
slob.c          Implementation of the SLOB allocator
slub.c          Implementation of the SLUB allocator
slab_common.c   Allocator-independent functions common to all three allocators; most of them dispatch to the selected allocator.

Data Structures

SLAB has three important data structures; their contents and relationships are shown in the following figure:

 SLUB data structure

Including:

  • kmem_cache represents one SLAB object (a slab cache).
  • kmem_cache_cpu stores the SLAB object's CPU-local resources. The __percpu annotation indicates that it is a per-CPU object (each CPU has its own copy).
  • kmem_cache_node is an array; each member represents the SLAB object's memory resources on one memory node.

Management Scheme

The SLUB management scheme is shown below:

 SLUB management

The brief description is as follows:

  • Each SLUB management structure has multiple CPU-local slabs and node slabs.
  • When a SLUB cache is first created, only its management structure exists; no pages have been allocated yet.
  • When SLUB allocates memory:
    • If the cpu-local slab has free objects, memory is allocated from it.
    • If the current kmem_cache_cpu has no free objects (its freelist is empty and its partial slabs are used up), memory is allocated from the partial list of kmem_cache_node.
    • If no page is available at all, pages are allocated from the buddy system, attached to the cpu-local slab, and one object is returned.

This ordering ensures that allocation always tries the CPU-local cache area first, improving access speed.

  • Freeing memory: the object is first returned to the page (slab) it was allocated from. Depending on the resulting state, further handling occurs:
    • If, before the free, all objects on the page were in use, and the total number of free objects on the per-cpu partial list > kmem_cache.cpu_partial: first move the pages on the kmem_cache_cpu partial list to the per-node partial list, then put this page on the kmem_cache_cpu partial list.
    • If, before the free, all objects on the page were in use, and the total number of free objects on the per-cpu partial list <= kmem_cache.cpu_partial: put this page on the kmem_cache_cpu partial list.
    • If the page is on the per-node partial list, the page has no allocated objects after the free, and kmem_cache_node.nr_partial > kmem_cache.min_partial: return the page to the buddy system.
    • Otherwise: no special handling.

The main purpose of these thresholds is to keep SLAB from holding on to too many memory pages, which could leave other parts of the system unable to get memory when they need it.

Summary

This article introduced the SLAB memory allocator, which plays an important role in the operation of the whole system. It covered:

  • The classification of SLAB allocators
  • The architecture of the SLUB allocator
  • The working principles of the SLUB allocator

I hope it is helpful for you to analyze the kernel code.