A Simple Overview of iOS Locks


iOS Locks

What is a lock

In computer science, a lock, or mutex (from "mutual exclusion"), is a synchronization mechanism that enforces limits on access to a resource in an environment with many threads of execution. Locks are designed to enforce a mutual exclusion concurrency control policy.

In plain terms: a lock prevents unexpected results when multiple threads access the same resource (a variable, a file, etc.) at the same time.

It is like shopping at a store with a single cashier: one customer at the register is fine, but if two or three people check out at the same time, the cashier may cope, or may make mistakes (give the wrong change).

 

How many locks are available in iOS

  • OSSpinLock: spin lock (deprecated; prone to priority inversion)
  • os_unfair_lock: unfair mutex, the replacement for OSSpinLock
  • pthread_mutex: mutex (recursive when configured with PTHREAD_MUTEX_RECURSIVE)
  • pthread_mutex + pthread_cond: condition lock
  • dispatch_semaphore: semaphore
  • dispatch_queue(DISPATCH_QUEUE_SERIAL): serial queue
  • NSLock
  • NSRecursiveLock
  • NSCondition
  • NSConditionLock
  • @synchronized
  • dispatch_barrier_async: barrier
  • dispatch_group: dispatch group

     

What's the difference between them

pthread (POSIX) locks

Implemented in C; cross-platform.

Pthreads defines a set of C types, functions, and constants, declared in the pthread.h header file, together with a thread library.

There are roughly 100 function calls in the pthreads API, all prefixed with "pthread_". They fall into four categories:

  • Thread management: creating threads, joining threads, querying thread status, etc.
  • Mutexes (mutex): create, destroy, lock, unlock, set attributes, and other operations
  • Condition variables (condition variable): create, destroy, wait, notify, set and query attributes
  • Synchronization management between threads that use mutexes

Usage in C:

 #include <stdio.h>
 #include <stdlib.h>
 #include <time.h>
 #include <pthread.h>

 static void wait(void)
 {
     time_t start_time = time(NULL);
     while (time(NULL) == start_time) {
         /* do nothing except chew CPU slices for up to one second */
     }
 }

 static void *thread_func(void *vptr_args)
 {
     int i;
     for (i = 0; i < 20; i++) {
         fputs("  b\n", stderr);
         wait();
     }
     return NULL;
 }

 int main(void)
 {
     int i;
     pthread_t thread;

     if (pthread_create(&thread, NULL, thread_func, NULL) != 0) {
         return EXIT_FAILURE;
     }
     for (i = 0; i < 20; i++) {
         puts("a");
         wait();
     }
     if (pthread_join(thread, NULL) != 0) {
         return EXIT_FAILURE;
     }
     return EXIT_SUCCESS;
 }

Usage in Objective-C:

 - (NSString *)debugDescription
 {
     NSMutableString *s = [NSMutableString stringWithFormat:@"<%@:%p", NSStringFromClass([self class]), self];

     // lock
     pthread_mutex_lock(&_mutex);
     NSMutableArray *infoDescriptions = [NSMutableArray arrayWithCapacity:_infos.count];
     for (_FBKVOInfo *info in _infos) {
         [infoDescriptions addObject:info.debugDescription];
     }
     [s appendFormat:@" contexts:%@", infoDescriptions];
     // unlock
     pthread_mutex_unlock(&_mutex);

     [s appendString:@">"];
     return s;
 }

 

More details:

https://zh.wikipedia.org/wiki/POSIX%E7%BA%BF%E7%A8%8B

GCD locks

There is plenty of material online, so I won't repeat it all. Below is a well-written article; to guard against link rot, I have copied it here. The original link is more readable:

http://www.mwpush.com/content/d04bd655.html

1、 Dispatch Semaphore
A semaphore can be used to control concurrency and act as a thread lock. It controls access by incrementing and decrementing a count. When the count is 0, callers wait; when the count is 1 or more, it is decremented by 1 and execution continues.

dispatch_semaphore_create creates a semaphore with an initial value. It takes one parameter of type long, the initial value, which must be greater than or equal to 0, and returns a semaphore of type dispatch_semaphore_t.

dispatch_semaphore_signal signals (increments) the semaphore. It takes one parameter, a semaphore of type dispatch_semaphore_t.

dispatch_semaphore_wait waits on (decrements) the semaphore. It takes two parameters: a semaphore of type dispatch_semaphore_t, and a timeout of type dispatch_time_t. The return value is of type long: 0 means the wait succeeded before the timeout; nonzero means the timeout expired.

Calls to dispatch_semaphore_signal and dispatch_semaphore_wait should be balanced.

dispatch_queue_t c_queue = dispatch_queue_create("com.mwpush", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 10; i++) {
    dispatch_async(c_queue, ^{
        NSLog(@"Execution %d", i);
    });
}

The code above dispatches tasks asynchronously to a concurrent queue; the output is as follows:

Execution 0
Execution 1
Execution 4
Execution 2
Execution 5
Execution 3
Execution 6
Execution 7
Execution 9
Execution 8
The output is unordered. We can get ordered output with a serial queue, or we can control it with a semaphore, as follows:

dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
dispatch_queue_t c_queue = dispatch_queue_create("com.mwpush", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 10; i++) {
    dispatch_async(c_queue, ^{
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        NSLog(@"Execution %d", i);
        dispatch_semaphore_signal(semaphore);
    });
}
The output results are as follows:

Execution 0
Execution 1
Execution 2
Execution 3
Execution 4
Execution 5
Execution 7
Execution 6
Execution 8
Execution 9
We created the semaphore with an initial value of 1. When dispatch_semaphore_wait executes, the count drops by 1 to 0, so any other thread reaching the same call must wait because the count is 0. When dispatch_semaphore_signal executes, the count rises by 1, at which point one waiting dispatch_semaphore_wait decrements it again and execution continues, back and forth in turn.

As you can see, semaphores coordinate access among multiple threads through the signal count. We can cap the maximum concurrency by choosing the count: as long as the count is greater than 0, threads proceed concurrently without waiting.

2、 Dispatch Source
A dispatch source is an object that monitors low-level system events. When an event occurs, it automatically runs a specified handler on a specified dispatch queue.

1. User defined scheduling source
dispatch_source_create creates a dispatch source to monitor low-level system events. It takes four parameters. The first, of type dispatch_source_type_t, is a constant structure pointer that defines the type of the dispatch source, i.e. which kind of low-level system object it monitors; it also determines how the second and third parameters are interpreted. The second, of type uintptr_t, is the underlying system handle to monitor (an unsigned long); depending on the first parameter it is treated as a file descriptor, a Mach port, a signal number, or a process identifier. The third is an unsigned long mask of flags for the event; its meaning is likewise determined by the first parameter, and the system provides fixed constants for it. The fourth is the queue to which the handler is submitted when the monitored event fires. The return value is a dispatch source of type dispatch_source_t.

A dispatch source is created in an inactive state. The previous article explained that dispatch_resume and dispatch_suspend must be used in balanced pairs, or a crash results. There is one exception: the official documentation for dispatch_resume says that a newly created dispatch source has a suspend count of 1 and must be resumed before events are delivered, while the documentation for dispatch_source_create says that, for backward-compatibility reasons, dispatch_resume acts like dispatch_activate on an inactive (rather than suspended) source. This is why it is called the "inactive state"; for dispatch sources, using dispatch_activate is recommended.

 

dispatch_source_merge_data merges data into a dispatch source of type DISPATCH_SOURCE_TYPE_DATA_ADD or DISPATCH_SOURCE_TYPE_DATA_OR and submits its event handler to the target queue. It takes two parameters: the dispatch source, and an unsigned long value to submit. The value must not be 0 (a value of 0 does not trigger the handler block), and it must not be negative.

dispatch_source_set_event_handler sets the event handler block for a given dispatch source. It takes two parameters: the target dispatch source and the block to execute. The block is submitted to the queue specified when the source was created, and only one handler block runs at a time. If another event is submitted while the current block is still executing, the data is accumulated and merged in the specified way (the ADD or OR chosen at creation), which effectively relieves the pressure of handling frequently submitted events.

dispatch_source_get_data returns the pending data of the dispatch source. It takes one parameter, the dispatch source. It should be called inside the event handler block (i.e., within dispatch_source_set_event_handler); the result of calling it outside the handler is undefined. It returns the source's current value and resets the data to zero.

dispatch_queue_t c_queue = dispatch_queue_create("com.mwpush", DISPATCH_QUEUE_CONCURRENT);
dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, c_queue);
dispatch_source_set_event_handler(source, ^{
    NSLog(@"%lu", dispatch_source_get_data(source));
});
dispatch_activate(source);
dispatch_apply(5, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t index) {
    dispatch_source_merge_data(source, 1);
});
The output results are as follows:

5
The code above defines a concurrent queue c_queue as the dispatch source's queue, sets the handler block with dispatch_source_set_event_handler, and then activates the source. It then submits the value 1 to the source in five iterations. Because the submissions occur in quick succession while the handler block has not yet run, the data is merged by addition (ADD) and the total is printed. If the block has already finished when a submission arrives, no ADD merging takes place.

The following code verifies this:

@interface ViewController ()
@property (nonatomic, strong) dispatch_source_t source;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    dispatch_queue_t c_queue = dispatch_queue_create("com.mwpush", DISPATCH_QUEUE_CONCURRENT);
    self.source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, c_queue);

    dispatch_source_set_event_handler(self.source, ^{
        NSLog(@"%lu", dispatch_source_get_data(self.source));
    });
    dispatch_activate(self.source);
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    dispatch_source_merge_data(self.source, 1);
}

@end
Now, tapping the screen once outputs:

1
To verify merging before the handler finishes, modify the code as follows:

dispatch_source_set_event_handler(self.source, ^{
    sleep(2); // simulate time-consuming work
    NSLog(@"%lu", dispatch_source_get_data(self.source));
});
Now tap the screen 5 times in quick succession; the output is:

1
4
Because the first handler invocation takes 2 seconds and the taps all happen within a short time, the first event has not finished when the later taps arrive, so the system coalesces the subsequent submissions.

2. Timer source
dispatch_source_set_timer sets the start time, interval, and leeway of a timer source. It takes four parameters: the dispatch source; a start time of type dispatch_time_t; the interval, in nanoseconds; and the leeway (timing precision), in nanoseconds. If you want the timer to be as precise as possible (there is no absolute precision), set the leeway to 0; even then the timer has some latency. If precision is not required, set the leeway to an acceptable delay: the lower the precision, the more flexibility the system has in scheduling. Within the allowed delay it can align the timer's firing with other pending events and wake up once for all of them, instead of waking separately just for the timer.

The timer source has a mask value DISPATCH_TIMER_STRICT, but it is not recommended; normally 0 suffices unless very high timing accuracy is required. This mask tells the system to honor the leeway parameter of dispatch_source_set_timer as strictly as possible. With the flag set, the system applies the minimum delay, which has side effects such as higher power consumption that defeat power-saving techniques. Use it carefully, and only when absolutely necessary.

dispatch_queue_t c_queue = dispatch_queue_create("com.mwpush", DISPATCH_QUEUE_CONCURRENT);
dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, c_queue);
dispatch_time_t start = dispatch_time(DISPATCH_TIME_NOW, 2ull * NSEC_PER_SEC);
uint64_t interval = 1ull * NSEC_PER_SEC;
dispatch_source_set_timer(source, start, interval, 0);
dispatch_source_set_event_handler(source, ^{
    NSLog(@"%lu", dispatch_source_get_data(source));
});
dispatch_activate(source); // dispatch_resume(source);
The code above starts after a 2-second delay and then calls the handler block every second. If repetition is not needed, set the interval to DISPATCH_TIME_FOREVER. The queue is your choice; this is just an example. We can pause the timer with dispatch_suspend and resume it with the matching dispatch_resume, or cancel it with the following method.

dispatch_source_cancel asynchronously cancels the dispatch source, preventing further invocations of its event handler block. It takes one parameter, the dispatch source. Note that only blocks that have not started can be cancelled; a block already executing runs to completion (and already-submitted blocks may still execute).

dispatch_source_set_cancel_handler sets the cancellation handler block of the target dispatch source. The handler is invoked after dispatch_source_cancel has been called, all references to the source's underlying handle have been released by the system, and all pending handler blocks have completed.

3. Other methods
dispatch_source_get_mask returns the event mask monitored by the dispatch source.

dispatch_source_testcancel tests whether the target dispatch source has been cancelled.

dispatch_source_get_handle returns the underlying system handle associated with the specified dispatch source.

dispatch_source_set_registration_handler sets the registration handler block for a given dispatch source.

Dispatch Source official documentation: https://developer.apple.com/library/archive/documentation/General/Conceptual/ConcurrencyProgrammingGuide/GCDWorkQueues/GCDWorkQueues.html#//apple_ref/doc/uid/TP40008091-CH103-SW13

3、 Dispatch I/O and Dispatch Data Introduction
With Dispatch I/O and Dispatch Data, a large file can be split into smaller chunks for reading and then merged back together.

dispatch_io_create creates an I/O channel and associates it with the specified file descriptor.

dispatch_io_create_with_path creates a dispatch I/O channel with an associated path.

dispatch_io_read schedules an asynchronous read operation on the specified channel.

dispatch_io_write schedules an asynchronous write operation on the specified channel.

dispatch_io_close closes the specified channel to new read/write operations.

dispatch_io_set_high_water sets the maximum number of bytes to process before enqueueing a handler block.

dispatch_io_set_low_water sets the minimum number of bytes to process before enqueueing a handler block.

dispatch_io_set_interval sets the interval, in nanoseconds, at which the channel's I/O handlers are invoked.

dispatch_read schedules an asynchronous read operation using the specified file descriptor.

dispatch_write schedules an asynchronous write operation using the specified file descriptor.

The dispatch data object provides an interface for managing memory-based data buffers. Clients see the buffer as a contiguous block of memory, but internally it may consist of multiple discontiguous blocks.

dispatch_data_create creates a new dispatch data object with the specified memory buffer.

dispatch_data_get_size returns the logical size of the memory managed by a dispatch data object.

dispatch_data_create_map returns a new dispatch data object containing a contiguous representation of the specified object's memory.

dispatch_data_create_concat returns a new dispatch data object consisting of the concatenated data of two other data objects.

dispatch_data_create_subrange returns a new dispatch data object whose contents are a portion of another object's memory region.

dispatch_data_apply traverses the memory of a dispatch data object and runs custom code on each region.

dispatch_data_copy_region returns a data object containing a portion of the data in another data object.

NSLock

NSLock objects can be used to mediate access to an application's global data or to protect a critical section of code, allowing it to run atomically.

The NSLock class uses POSIX threads to implement its locking behavior. When you send an unlock message to an NSLock object, you must ensure that the message is sent from the same thread that sent the initial lock message. Unlocking a lock from another thread may result in undefined behavior.

You should not use this class to implement a recursive lock. Calling the lock method twice on the same thread will deadlock the thread permanently; use the NSRecursiveLock class for recursive locking. Unlocking a lock that is not locked is considered a programmer error and should be fixed in the code. The NSLock class reports such errors by printing an error message to the console when they occur.

 @protocol NSLocking
 - (void)lock;   // lock
 - (void)unlock; // unlock
 @end

 @interface NSLock : NSObject <NSLocking> {
 @private
     void *_priv;
 }

 // Attempts to acquire the lock without blocking the thread. Returns YES if
 // the lock was acquired and NO if not; either way the method returns
 // immediately, even while another thread holds the lock.
 - (BOOL)tryLock;

 // Attempts to acquire the lock, blocking the thread until the given NSDate
 // at most. Returns YES if the lock was acquired within the time limit,
 // NO otherwise.
 - (BOOL)lockBeforeDate:(NSDate *)limit;

 // A name used for identification.
 @property (nullable, copy) NSString *name API_AVAILABLE(macos(10.5), ios(2.0), watchos(2.0), tvos(9.0));

 @end

NSRecursiveLock

A lock can be acquired multiple times by the same thread without causing deadlock.

NSRecursiveLock defines a lock that the same thread can acquire multiple times without deadlocking, i.e. without the thread blocking permanently while waiting for itself to relinquish the lock. While the locking thread holds one or more locks, all other threads are kept out of the code the lock protects.

Note that its performance is lower than NSLock's.

NSCondition

A condition variable whose semantics follow the semantics used for POSIX style conditions.

A condition object acts as both a lock and a checkpoint in a given thread. The lock protects your code while it tests the condition and performs the task the condition triggers. The checkpoint behavior requires the condition to be true before the thread proceeds; while the condition is false, the thread blocks, and it stays blocked until another thread signals the condition object. The semantics for using an NSCondition object are as follows:

1. Lock the condition object.

2. Test a Boolean predicate. (This predicate is a Boolean flag or other variable in your code indicating whether it is safe to perform the task protected by the condition.)

3. If the predicate is false, call the condition object's wait or waitUntilDate: method to block the thread. Upon returning from these methods, go to step 2 to retest the predicate. (Keep waiting and retesting until it is true.)

4. If the predicate is true, perform the task.

5. Optionally update any predicates (or signal any conditions) affected by the task.

6. When the task is done, unlock the condition object.

NSConditionLock

Locks that can be associated with specific user-defined conditions.

Using the NSConditionLock object, you can ensure that threads can acquire locks only when certain conditions are met. Once the lock is obtained and the key part of the code is executed, the thread can abandon the lock and set the associated condition as a new condition. The conditions themselves are arbitrary: you can define them according to the needs of the application.

1. Initialize: self.condition = [[NSConditionLock alloc] initWithCondition:0];

2. Acquire the lock: [self.condition lockWhenCondition:1];

3. Unlock: [self.condition unlockWithCondition:1];

@synchronized

Used frequently; no elaboration needed here.

 

dispatch_barrier_async

Commit the barrier block for asynchronous execution and return immediately.

What dispatch_barrier_sync and dispatch_barrier_async have in common:

1. Both wait for the tasks inserted into the queue ahead of them (1, 2, 3) to finish first.

2. Both make the tasks that follow (4, 5, 6) wait until the barrier task itself has finished.

How they differ (in how subsequent tasks are appended):

1. dispatch_barrier_sync, after inserting its own task (0) into the queue, waits for that task to finish before the program continues and inserts the tasks written after it (4, 5, 6), which then execute.

2. dispatch_barrier_async inserts its own task (0) into the queue and, without waiting for it to finish, continues inserting the following tasks (4, 5, 6) into the queue; those tasks still wait for the barrier task to finish before executing.

So the non-waiting (asynchronous) nature of dispatch_barrier_async shows in how tasks are inserted into the queue, while its waiting nature shows in the actual execution of the tasks.

Reprinted from: https://blog.csdn.net/ivolcano/article/details/78012385

dispatch_group

This one is used quite often. Here is the code directly:

 dispatch_group_t group = dispatch_group_create();
 dispatch_queue_t queue = dispatch_queue_create("com.gcd-group.www", DISPATCH_QUEUE_CONCURRENT);
 dispatch_group_async(group, queue, ^{
     for (int i = 0; i < 1000; i++) {
         if (i == 999) {
             NSLog(@"11111111");
         }
     }
 });
 dispatch_group_async(group, queue, ^{
     NSLog(@"22222222");
 });
 dispatch_group_async(group, queue, ^{
     NSLog(@"33333333");
 });
 dispatch_group_notify(group, queue, ^{
     NSLog(@"done");
 });

dispatch_group_notify is called once all tasks in the group have completed.

 
