An Android CPU's frequency is not fixed; it is adjusted to match the needs of the running programs. Typically, the kernel samples CPU load at a regular interval and raises or lowers the frequency depending on the load percentage, then re-checks at the next sampling interval.
I/O is short for input/output. An I/O scheduler decides the order in which read/write requests are serviced and how requests from different processes are prioritized.
In performance settings or third-party kernel tuners, you will find settings for the CPU governor and the I/O scheduler. The governor is usually set to [ondemand], [interactive], [conservative], or similar, while the I/O scheduler offers [noop], [deadline], [cfq], and other options. Below we introduce the function, advantages, and disadvantages of each option.
CPU Governor Options————————————————————————
[ondemand] On demand mode:
Adjusts the CPU frequency on demand. When the phone is idle, the CPU is held at its lowest frequency; when you swipe the screen or open an app, it quickly ramps up to the highest frequency, then drops back quickly once the phone is idle again. Performance is fairly stable, but because the frequency swings are large, the power savings are only average. It is a common default governor that aims to balance battery life and performance, though on smartphones ondemand can feel slightly underpowered.
Advantages: well balanced overall, with good performance and good battery life
Disadvantages: constantly adjusting the CPU frequency itself consumes some power, which keeps it from being the "ideal" governor. It also tends to overshoot when raising the frequency (for example, if 500 MHz would suffice, it may jump to 650 MHz)
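As a rough illustration of the sampling logic described above (a toy sketch, not kernel code; the frequencies and threshold are invented example values): if the sampled load exceeds an up-threshold, ondemand jumps straight to the maximum frequency, otherwise it scales the frequency down toward what the load actually needs.

```python
# Toy sketch of ondemand-style frequency scaling (illustration only).
# Frequencies (MHz) and the threshold are made-up example values.

FREQ_MIN, FREQ_MAX = 300, 1800   # hypothetical frequency range, in MHz
UP_THRESHOLD = 80                # % load that triggers a jump to max

def ondemand_next_freq(load_pct, cur_freq):
    """Return the next frequency for one sampled load percentage."""
    if load_pct > UP_THRESHOLD:
        return FREQ_MAX          # "fast up": jump straight to the top
    # Otherwise pick a lower frequency proportional to the current load.
    target = cur_freq * load_pct / 100
    return max(FREQ_MIN, min(FREQ_MAX, int(target)))

# Idle phone: load stays low, so the frequency sinks to the minimum.
freq = FREQ_MAX
for load in (10, 5, 5):
    freq = ondemand_next_freq(load, freq)
print(freq)                          # 300

# A screen swipe spikes the load: one sample is enough to hit the max.
print(ondemand_next_freq(95, freq))  # 1800
```

Note how a single high-load sample is enough to reach the top frequency, which is exactly why ondemand can overshoot the frequency actually needed.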
[interactive] Interactive mode:
Similar to ondemand, but its rule is "fast up, slow down," favoring responsiveness and performance. Under high demand it jumps straight to a high frequency; under low demand it steps the frequency down gradually. Compared with ondemand it uses more power; compared with conservative, it raises the frequency quickly and lowers it slowly.
Advantages: slightly stronger performance than ondemand and faster response
Disadvantages: it holds a high frequency even when it is no longer needed, so it consumes more power than ondemand
[conservative] Conservative mode:
Similar to ondemand, but its rule is "slow up, fast down," favoring power saving. Under high demand the frequency rises gradually; under low demand it drops straight to a low frequency. It is the mirror image of interactive: conservative raises the frequency slowly and lowers it quickly.
Advantages: slightly lower power consumption than ondemand; the frequency only rises when it is genuinely needed
Disadvantages: the slow ramp-up means larger apps open more slowly and the device is slower to wake from standby
[powersave] Power saving mode:
Runs at the configured minimum frequency at all times. It has little everyday value on its own; it is mainly useful paired with a profile tool such as SetCPU, applied while the screen is off during sleep. It saves power, but the system responds slowly, because the CPU is forced to stay at its minimum frequency.
Advantages: lowest power consumption, longest battery life, best heat control
Disadvantages: poor performance; operation is not smooth, with lag and stuttering
[userspace] User mode:
User-controlled mode: not a governor with a built-in scaling policy. Instead, it hands frequency control to userspace, letting user programs rather than the kernel set the frequency. Since tools such as SetCPU appeared, it has become largely redundant.
In any case, the CPU is kept within whatever frequency range the user configures, along with any power-saving settings the user adds. Lowering the maximum frequency this way can extend standby time, but it also slows the device's wake-up. This option is not recommended.
Advantages and disadvantages will not be evaluated.
[performance] High performance mode:
Runs at the highest frequency in the configured range at all times, even when system load is very low. Performance is excellent, partly because the CPU spends no resources adjusting its frequency, but power drains quickly and temperatures run high, because the CPU is forced to stay at its maximum frequency.
Advantages: excellent performance and speed
Disadvantages: high power consumption and poor battery life; the phone heats up severely, and long-term use may cause physical damage to the hardware
I/O Scheduler Configuration Options————————————————————————
[noop] Merges all data requests into a single simple FIFO queue. It is unsuitable for storage with moving mechanical parts, because the lack of reordering adds extra seek time there; on flash storage that cost does not exist. As the simplest scheduler, it ignores the priority and complexity of I/O operations, so efficiency drops when there are many concurrent reads and writes.
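Noop's behavior can be sketched in a few lines: requests go into one FIFO queue, and a new request is merged only when it is contiguous with the request in front of it; nothing is ever reordered (a toy model; the sector numbers are invented):

```python
# Toy noop queue: FIFO order, merging only back-to-back contiguous
# requests. Requests are (start_sector, length) tuples; values invented.

def noop_add(queue, start, length):
    if queue:
        last_start, last_len = queue[-1]
        if last_start + last_len == start:   # contiguous: merge in place
            queue[-1] = (last_start, last_len + length)
            return
    queue.append((start, length))            # otherwise plain FIFO append

q = []
noop_add(q, 100, 8)
noop_add(q, 108, 8)   # contiguous with the previous request -> merged
noop_add(q, 500, 4)   # far away -> new entry, and no reordering happens
print(q)              # [(100, 16), (500, 4)]
```

On a spinning disk the (500, 4) request would pay a full seek no matter what came before it; on flash the position simply doesn't matter, which is why noop suits flash.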
[deadline] As the name implies, each I/O request is stamped with an expiration deadline, and requests are reordered so that none waits past its deadline; reads are given shorter deadlines than writes, so reads effectively take priority. Overall a solid scheduler.
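The expiry idea can be sketched as follows: reads get a shorter deadline than writes, and the dispatcher serves whichever pending request has the nearest deadline (a toy model; the real scheduler also keeps sector-sorted queues, and the deadline values here are invented time units):

```python
# Toy deadline dispatcher (illustration only; deadline values invented).
READ_EXPIRE, WRITE_EXPIRE = 5, 50   # reads expire sooner -> priority

def submit(queue, now, op, sector):
    expire = READ_EXPIRE if op == "read" else WRITE_EXPIRE
    queue.append({"op": op, "sector": sector, "deadline": now + expire})

def dispatch(queue):
    """Serve the request whose deadline is nearest (expired ones first)."""
    req = min(queue, key=lambda r: r["deadline"])
    queue.remove(req)
    return req

q = []
submit(q, now=0, op="write", sector=10)
submit(q, now=1, op="read", sector=900)
# The read arrived later, but its much shorter deadline wins.
print(dispatch(q)["op"])   # read
```

This is the mechanism behind "reads take priority over writes": the write is not skipped forever, it simply waits until its own (longer) deadline comes due.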
[cfq] Completely Fair Queuing, a replacement for the anticipatory scheduler. It does little predictive scheduling; instead it allocates service among processes directly according to their I/O priority. It works well on desktop Linux, but it may not be the best fit for Android: its heavy emphasis on fairness can reduce sequential read/write performance.
[bfq]
Similar to cfq, but built around per-process I/O "budgets": bfq stands for Budget Fair Queueing. As the name suggests, it is fair to all I/O requests and avoids the starvation problem noted for noop above. Fairness here means that each process's I/O requests are responded to as soon as possible rather than being shelved for a long time; since system resources are limited, it can only guarantee a prompt response, not prompt completion. It is therefore well suited to many processes issuing I/O requests at once: in practice, the system stays responsive with many apps open, giving the best random-access behavior and low latency. It trades some raw throughput for fewer wasted I/O accesses, which also helps eMMC lifespan and battery life.
[sio]
Based on deadline, but like noop it does not sort I/O operations, so it offers noop-like access speed without over-optimizing the request order. If you don't like noop, this is an alternative worth trying.
[anticipatory] Somewhat similar to the NCQ feature of PC hard disks. Predictive scheduling sounds like it should improve efficiency, but because its prediction mechanism starts preparing the next operation just as a process is finishing a read/write, it disrupts the system's normal continuous I/O scheduling and reduces random-access efficiency. Not recommended.
[vr]
Uses a request-sorting mechanism similar to deadline. It delivers the highest peak read/write speeds, but performance is unstable: it may produce the best benchmark scores, or the worst.
[fiops]
Fair IOPS. Like cfq it aims for fairness across processes, but it was redesigned around flash-storage devices and performs well across the board.
[row]
As the name implies, ROW = Read Over Write. In short: it minimizes I/O response time by reordering operations so that read requests get the highest priority. Mobile devices rarely run as many parallel I/O threads as desktops; typically a single thread, or at most two, is reading and writing at a time, so letting reads overtake writes greatly reduces read latency. It is more practical than deadline, but with too many threads it can cause momentary stutters.
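ROW's priority rule can be sketched as two queues where a pending read is always dispatched before any queued write (a toy model; the request names are invented):

```python
# Toy ROW dispatcher: reads always jump ahead of queued writes.
from collections import deque

reads, writes = deque(), deque()

def submit(op, name):
    (reads if op == "read" else writes).append(name)

def dispatch():
    """Serve reads first; writes run only when no read is pending."""
    if reads:
        return reads.popleft()
    return writes.popleft() if writes else None

submit("write", "w1")
submit("read", "r1")
submit("write", "w2")
submit("read", "r2")
order = [dispatch() for _ in range(4)]
print(order)   # ['r1', 'r2', 'w1', 'w2'] - both reads overtake writes
```

With one or two I/O threads this read-first rule is almost always a win; the stutter risk mentioned above comes from many threads submitting reads at once, which can hold writes back long enough to be felt.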