A detailed explanation of Android CPU governors and I/O schedulers

The CPU frequency of an Android device is not fixed; it is adjusted to match the demands of the software that is running. In general, the governor raises or lowers the frequency according to the CPU load percentage, re-checking at a set sampling interval whether to step up or down.

I/O is short for input/output. The I/O scheduler decides how data read and write operations are ordered, and how requests from different processes are prioritized.

 

In a ROM's performance settings, or in third-party kernel tuning apps, you will find settings for the CPU governor and the I/O scheduler. The governor is usually offered as [ondemand], [interactive], [conservative] and similar options, while the I/O scheduler offers [noop], [deadline], [cfq] and others. The following introduces the function, advantages and disadvantages of each option.
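
Before changing anything, it helps to see what the device is currently using. Here is a minimal Kotlin sketch that reads the active governor and I/O scheduler from sysfs; the paths shown are the common ones, but the block-device name (mmcblk0 is assumed here) and read permissions vary by device, and some kernels only expose these files to root.

import java.io.File

// Read the current CPU governor and I/O scheduler from sysfs.
// Paths and readability vary by device/kernel; some require root.
fun main() {
    val governor = File("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")
    val available = File("/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors")
    val ioScheduler = File("/sys/block/mmcblk0/queue/scheduler")   // assumed device name

    println("Current governor : " + governor.takeIf { it.canRead() }?.readText()?.trim())
    println("Available        : " + available.takeIf { it.canRead() }?.readText()?.trim())
    // The active scheduler is the one shown in square brackets, e.g. "noop deadline [cfq]"
    println("I/O scheduler    : " + ioScheduler.takeIf { it.canRead() }?.readText()?.trim())
}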

 

CPU Governor Options————————————————————————

 

[ondemand] On demand mode:

Adjusts the CPU frequency as needed. When the phone is untouched it stays at the lowest frequency; when you scroll the screen or open an app it quickly jumps to the highest frequency, and once the phone goes idle again it quickly drops back down. Performance is fairly stable, but because the frequency swings so widely the power saving is only average. It is the default governor on many devices and aims for a balance between battery life and performance, although for smartphones ondemand feels slightly lacking in responsiveness.

Advantages: well balanced overall, with good performance and good battery life

Disadvantages: constantly re-adjusting the CPU frequency itself costs some power, so its balance is more ideal than real. It also tends to overshoot when ramping up (for example, when 500 MHz would be enough it may jump to 650 MHz).
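
To make the "jump straight up, scale back down" behaviour concrete, here is a toy Kotlin model of the ondemand decision rule; the threshold and the proportional step are illustrative values, not the kernel's actual tunables.

// Toy model: sample the load periodically; above the up-threshold jump straight
// to the maximum frequency, otherwise pick a frequency roughly proportional to load.
fun ondemandNextFreq(loadPercent: Int, minFreq: Int, maxFreq: Int, upThreshold: Int = 80): Int {
    return if (loadPercent > upThreshold) {
        maxFreq                                      // busy: go straight to the top
    } else {
        maxOf(minFreq, maxFreq * loadPercent / 100)  // light load: scale back down
    }
}

fun main() {
    // A 300-1500 MHz CPU: 50% load -> 750 MHz, but 85% load -> 1500 MHz,
    // which is the overshoot described above (more than the load strictly needs).
    println(ondemandNextFreq(50, 300, 1500))
    println(ondemandNextFreq(85, 300, 1500))
}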

 

[interactive] Interactive mode:

Similar to ondemand, the rule is "fast increase and slow decrease", focusing on response speed and performance. When there is high demand, it quickly jumps to the high frequency, and when there is low demand, it gradually reduces the frequency. Compared with ondemand, which consumes electricity, compared with conservative, it quickly increases the frequency and slowly reduces the frequency.

Advantages: slightly stronger performance than Ondemand and faster response speed

Disadvantages: it holds a high frequency when it is no longer needed, so it consumes more power than ondemand

 

[conservative] Conservative mode:

Similar to ondemand, the rule is "slow rise and fast fall", focusing on power saving. When there is high demand, the frequency will gradually increase, and when there is low demand, it will quickly jump to low frequency. In contrast to Interactive, the conservative mediation scheme slowly increases the frequency and rapidly decreases the frequency.

Advantages: slightly lower power consumption than ondemand; the frequency is not raised until it is really needed

Disadvantages: raising the frequency slowly means larger apps open more slowly, and waking the device from standby feels more sluggish
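
For contrast with the ondemand sketch above, here is the same kind of toy model for conservative's "climb in small steps, fall quickly" rule; again the thresholds and step size are illustrative, not the kernel defaults.

// Toy model: under load, climb one small step per sample instead of jumping to
// the top; when idle, fall quickly toward a frequency proportional to the load.
fun conservativeNextFreq(
    loadPercent: Int, currentFreq: Int, minFreq: Int, maxFreq: Int,
    upThreshold: Int = 80, downThreshold: Int = 20, stepPercent: Int = 5
): Int {
    val step = maxFreq * stepPercent / 100
    return when {
        loadPercent > upThreshold   -> minOf(maxFreq, currentFreq + step)
        loadPercent < downThreshold -> maxOf(minFreq, maxFreq * loadPercent / 100)
        else                        -> currentFreq
    }
}

fun main() {
    // Under sustained 100% load a 300-1500 MHz CPU climbs only 75 MHz per sample,
    // which is why heavy apps take longer to reach full speed with conservative.
    var f = 300
    repeat(5) { f = conservativeNextFreq(100, f, 300, 1500); println(f) }
}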

 

[powersave] Power saving mode:

Forces the CPU to run at the configured minimum frequency at all times. It has little value for daily use unless it is combined with a SetCPU profile, for example applying it only while the screen is off during sleep. It saves power, but the system responds slowly.

Advantages: lowest power consumption, longest battery life, best heat control

Disadvantages: poor performance, sluggish operation, lag and stuttering

 

[userspace] User mode:

Not a governor with a built-in scaling policy: it hands frequency control to user space, so the user (or an app) sets the frequency directly instead of letting the kernel decide. In practice, since tools such as SetCPU appeared, there is little reason to use it on its own.

Either way, the CPU is kept within the frequency range you configure, and any power-saving rules in that configuration are applied. Lowering the maximum frequency this way extends standby time, but it also slows down how quickly the device wakes up. This option is generally not recommended.

Advantages and disadvantages will not be evaluated.
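
For completeness, this is roughly what a tool like SetCPU does when the userspace governor is active: it writes the desired frequency (in kHz) into scaling_setspeed. The sketch below is only an illustration; it assumes a rooted device with an su binary, and the exact paths and accepted values differ between kernels.

import java.io.DataOutputStream

// Sketch only: select the userspace governor on cpu0 and pin it to a fixed
// frequency. Requires root; paths and valid frequencies depend on the device.
fun setUserspaceFrequency(khz: Int) {
    val su = Runtime.getRuntime().exec("su")
    DataOutputStream(su.outputStream).use { out ->
        out.writeBytes("echo userspace > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor\n")
        out.writeBytes("echo $khz > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed\n")
        out.writeBytes("exit\n")
        out.flush()
    }
    su.waitFor()
}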

 

[performance] High performance mode:

High-performance mode: the CPU runs at the highest frequency in the range you set, even when the system load is very low. Performance is excellent, partly because the CPU wastes no effort deciding when to change frequency, but power drains quickly and temperatures run higher, since the CPU is forced to stay at its maximum frequency at all times.

Advantages: good performance and speed

Disadvantages: high power consumption and poor battery life, noticeable heating of the phone, and long-term use may shorten the hardware's lifespan
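
Switching between the governors described above is the same operation in every case: write the governor's name into each core's scaling_governor file. A hedged Kotlin sketch, assuming root and the usual cpufreq sysfs layout:

import java.io.DataOutputStream
import java.io.File

// Sketch only (requires root): apply a governor to every CPU core.
// Governor names and sysfs paths vary by kernel.
fun setGovernorOnAllCores(governor: String) {
    val cores = File("/sys/devices/system/cpu")
        .listFiles { f -> f.name.matches(Regex("cpu[0-9]+")) } ?: return
    val su = Runtime.getRuntime().exec("su")
    DataOutputStream(su.outputStream).use { out ->
        for (core in cores) {
            out.writeBytes("echo $governor > ${core.path}/cpufreq/scaling_governor\n")
        }
        out.writeBytes("exit\n")
        out.flush()
    }
    su.waitFor()
}

// e.g. setGovernorOnAllCores("performance") before a benchmark,
//      setGovernorOnAllCores("ondemand") to return to the balanced default.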

 

 

I/O Scheduler Configuration Options————————————————————————

 

[noop] This scheduler simply merges all data requests into one first-in, first-out queue. It is not suited to storage with moving mechanical parts, because without reordering the extra seek time adds up; on flash storage that is not an issue. It is the simplest scheduler and ignores the priority and complexity of I/O operations, so efficiency drops when there are many concurrent reads and writes.

 

[deadline] As the name implies, every I/O request is given an expiry deadline and requests are ordered so that none waits past its deadline; read requests get shorter deadlines than writes, so reads are served ahead of writes. It is a good general-purpose scheduler.
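
If deadline is the active scheduler, its expiry times are exposed as sysfs tunables. A small read-only sketch, assuming the common mmcblk0 device name and a readable iosched directory (both vary by device):

import java.io.File

// Print the deadline scheduler's expiry tunables (values are in milliseconds).
fun main() {
    val dir = File("/sys/block/mmcblk0/queue/iosched")   // assumed device name
    for (name in listOf("read_expire", "write_expire", "writes_starved")) {
        val f = File(dir, name)
        println("$name = " + (if (f.canRead()) f.readText().trim() else "n/a"))
    }
}

On typical kernels read_expire defaults to a much shorter value than write_expire, which is exactly the reads-before-writes bias described above.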

 

[cfq] Completely Fair Queuing is the successor to the anticipatory scheduler. It does little predictive scheduling and instead orders operations according to each process's I/O priority. It works well on desktop Linux, but it is not necessarily the best fit for Android: it emphasizes fairness so heavily that sequential read/write performance suffers.
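
Because cfq orders requests by per-process I/O priority, you can see its effect by lowering the priority of a background process. The sketch below shells out to an ionice binary with util-linux-style options; whether such a binary exists on the device (for example via busybox) and whether you may change another process's priority (normally root only) varies, so treat it purely as an illustration.

// Sketch only: put a process into the "idle" I/O class so cfq serves its
// requests only when no other process needs the disk. Assumes an ionice
// binary with util-linux-style options and root access.
fun setIdleIoPriority(pid: Int) {
    Runtime.getRuntime().exec(arrayOf("su", "-c", "ionice -c 3 -p $pid")).waitFor()
}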

 

[bfq]

Similar to cfq, but with an anticipation-like mechanism; it trades some I/O accesses and throughput for longer eMMC lifespan and battery life. BFQ stands for Budget Fair Queueing. As the name suggests, it is fair to every I/O request and does not suffer from the noop problem described above. Fairness here means that each process's I/O requests are answered as soon as possible rather than left on hold for a long time; because system resources are limited, it can only guarantee a prompt response, not prompt completion. BFQ is therefore well suited to many processes issuing I/O requests at the same time, since unlike noop it does not neglect later requests. In practice the system stays responsive when many apps are open: it offers the best random access and the lowest latency.

 

[sio]

Based on deadline, but like noop it does not sort I/O operations, so it keeps noop-like access speed without doing much extra optimization of requests. If you do not like noop, this is a reasonable alternative.

 

[anticipatory] This is somewhat similar to the NCQ feature of PC hard disks. Predictive scheduling sounds like it should improve efficiency, but because its anticipation mechanism starts preparing for the next request just as a process is finishing a read/write, it disrupts the system's normal continuous I/O scheduling and hurts random access performance. Few users recommend it.

 

[vr]

It sorts operations in a way similar to deadline. It reaches the highest peak read/write speeds, but performance is relatively unstable: it may post the highest benchmark scores, but it may also post the lowest.

 

[fiops]

Fair IOPS. Like cfq it aims for fairness, but it was redesigned from the ground up for flash storage and performs well across the board.

 

[row]

As the name implies, ROW = Read Over Write. This scheduler can be summarized as: minimize I/O response time, reorder pending operations, and give read requests the highest priority. A mobile device does not run as many parallel I/O threads as a desktop; usually a single thread, or at most two, is reading or writing at a time, so favouring reads greatly reduces the latency that writes would otherwise impose on reads. It feels smoother than deadline in everyday use, but with too many threads it can cause momentary stutters.
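
Whichever scheduler you prefer, switching to it is the same operation: write its name into the block device's queue/scheduler file. A sketch assuming root and the mmcblk0 device name (newer UFS-based phones often expose sda instead):

import java.io.DataOutputStream
import java.io.File

// Sketch only (requires root): show the available schedulers, then switch.
fun setIoScheduler(device: String, scheduler: String) {
    val queue = "/sys/block/$device/queue/scheduler"
    println("Available: " + File(queue).readText().trim())   // active one is in [brackets]
    val su = Runtime.getRuntime().exec("su")
    DataOutputStream(su.outputStream).use { out ->
        out.writeBytes("echo $scheduler > $queue\n")
        out.writeBytes("exit\n")
        out.flush()
    }
    su.waitFor()
}

// e.g. setIoScheduler("mmcblk0", "deadline")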

Replies
  1. Before changing the CPU governor with Kernel Adiutor or other kernel tuning and debugging apps, be sure to disable thermal control first. Turn off every thermal-control mode, and turn off hotplugging if the kernel has it.
    The thermal-control settings may conflict with the governor you set.

    •  Shallow summer

      I have now switched to sence 2.0.0, which I find better than Kernel Adiutor. Also, on the Snapdragon 625 and Snapdragon 630, whether it is eight identical cores or four big cores + four small cores, it manages things better (how much CPU the user's background apps take, how much the system background takes, and so on).

    • Kernel Adiutor applies its settings on each boot and they can be restored, so you rarely end up with a phone that will not start; it is not easy to brick.

    •  Shallow summer

      Although sence is powerful, if you get the settings wrong you may have to re-flash the base firmware.
      Indeed, my old Nexus 7 II died that way: it was running a non-stock system, I enabled a low-memory optimization, and it hung.

    • Yes, without a backup phone it is better to play it safe.

  2. In my case, the maintainer of the Pixel Experience ROM for my Moto X4 stopped releasing updates, so I was stuck on an old build. On that build the phone ran hot a lot in June and July, and the kernel could not be tuned. A few days ago I tried sence instead: I pinned the user's background apps to the small cores, capped the small cores at about 1000 MHz, and kept the big cores at 2200 MHz. This has no noticeable impact on performance, and the heating problem is solved perfectly.

    • With these settings, don't games stutter?

    •  Shallow summer

      Weibo and WeChat, Zhihu and Taobao, SMZDM and online banking, Alipay, none of them under Black Domain any more. With this setting Alipay opens in about 3 seconds; with the phone's default CPU scheduling it takes about 5 seconds. I have not tried games, but I did make one optimization: one of the four big cores goes to the system background, one of the four small cores goes to the user background, and the remaining six cores go to the foreground app, so whatever is running in the foreground executes very efficiently.

    • The key is to block wakeups. Some domestic rogue apps like to wake each other up in the background, and that is the real culprit behind a laggy phone.

  3. In December 2019 the Pixel Experience 10.0 ROM was updated, and heat is much better than on 9.0; with a SIM card inserted, standby drain is about 1% per hour. Recently I read the Zhihu articles by "Red Giant" and the bilibili videos by "xin333c", and roughly understood Qualcomm's progression: big + little cores (poor power control), then eight identical cores (reasonable power control), then big/medium/little cores (reasonable power control via DynamIQ). My Snapdragon 630 is of the eight-identical-core kind: LITTLE 4 x A53 + little 4 x A53. I also noticed that in daily use the LITTLE cluster has the higher utilization, so I simply blocked the two highest frequencies of the little cluster. The 630 runs warm across the board anyway, so I figure lowering the little cluster's frequencies will not be a problem; the little cores exist for efficiency at low load, so a lower frequency suits them better.

    • You can use the kernel tuning APP to adjust

    • The MIUI 12 closed beta has been updated now; I think it is decent, with lots of new features.
