Finding concurrency bugs in Objective-C

Introduction to concurrency in Objective-C

We can introduce concurrency to a process by creating threads. Within a process, threads share the same memory space and the same program code. Each thread has its own position within that code (its program counter), its own register state, and its own call stack. At any moment, a thread is in one of three main states: running, ready, or blocked.
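
As a concrete illustration, here is a minimal sketch of creating a thread directly with NSThread; the Worker class and its doWork: method are made up for this example.

#import <Foundation/Foundation.h>

@interface Worker : NSObject
- (void)doWork:(id)argument;
@end

@implementation Worker
- (void)doWork:(id)argument {
    // This runs on its own thread, with its own call stack,
    // but shares the process's memory with every other thread.
    NSLog(@"working on %@", argument);
}
@end

int main(int argc, char *argv[]) {
    @autoreleasepool {
        Worker *worker = [[Worker alloc] init];
        // Spawn a new thread that runs -doWork: and return immediately.
        [NSThread detachNewThreadSelector:@selector(doWork:)
                                 toTarget:worker
                               withObject:@"some data"];
        // Give the detached thread a moment to finish before main() exits.
        [NSThread sleepForTimeInterval:1.0];
    }
    return 0;
}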

In OS X 10.6, Apple introduced Grand Central Dispatch (GCD), a thread pool technology that simplifies the use of threads. GCD introduces the concepts of queues and tasks. A process can have multiple queues, and each queue holds zero or more blocks. A block is a task to be executed on that queue; from a programming languages perspective, a block is a closure (a function plus a binding environment). As a user of the queue API, you simply dispatch blocks onto a queue, and GCD manages running them.
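
Here is a minimal sketch of creating a queue and dispatching a block onto it; the queue label is arbitrary.

#import <Foundation/Foundation.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        // Create a serial queue; the label is only used by debugging tools.
        dispatch_queue_t queue =
            dispatch_queue_create("com.example.demo", DISPATCH_QUEUE_SERIAL);

        // The block is a closure: it captures `message` from this scope.
        NSString *message = @"hello from a block";
        dispatch_async(queue, ^{
            NSLog(@"%@", message);
        });

        // Block until everything already on the queue has run, so the
        // process doesn't exit before the async block executes.
        dispatch_sync(queue, ^{ });
    }
    return 0;
}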

Behind the scenes, GCD will create and destroy threads as needed in order to handle the workload of the queues the user has created.

On a serial queue, one block runs at a time. On a concurrent queue, GCD may run several of the queue's blocks at the same time, on different threads.
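
A quick sketch of the difference, assuming it runs inside main() or a method with Foundation imported; the queue labels are arbitrary.

// On a serial queue, these blocks run strictly one after the other.
dispatch_queue_t serialQueue =
    dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(serialQueue, ^{ NSLog(@"serial: first"); });
dispatch_async(serialQueue, ^{ NSLog(@"serial: second"); });

// On a concurrent queue, GCD is free to run these blocks at the same
// time on different threads.
dispatch_queue_t concurrentQueue =
    dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(concurrentQueue, ^{ NSLog(@"concurrent: A"); });
dispatch_async(concurrentQueue, ^{ NSLog(@"concurrent: B"); });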

There are two ways to place a block on a queue; the sketch after this list contrasts them.

  • dispatch_sync places a block on a queue and waits until that block completes
  • dispatch_async places a block on a queue and continues immediately
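
A short sketch of the contrast, assuming `queue` is a serial queue created as in the earlier example:

dispatch_async(queue, ^{
    NSLog(@"async block");   // may still be pending when the next line runs
});
NSLog(@"dispatch_async returned immediately");

dispatch_sync(queue, ^{
    NSLog(@"sync block");    // guaranteed to finish before the next line
});
NSLog(@"dispatch_sync returned only after its block completed");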

Finding concurrency bugs in Objective-C

In code that uses multiple queues, one common concurrency bug is deadlock. Deadlock can happen when there is contention between two queues.

One example of deadlock is when blocks on two separate queues make dispatch_sync calls to each other.
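
Stripped of any application logic, the pattern looks like the sketch below (the queue names are made up). Once both outer blocks are running, each dispatch_sync waits on a queue that can never become free.

dispatch_queue_t queueA =
    dispatch_queue_create("com.example.queueA", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queueB =
    dispatch_queue_create("com.example.queueB", DISPATCH_QUEUE_SERIAL);

dispatch_async(queueA, ^{
    // The block on queue A waits for queue B...
    dispatch_sync(queueB, ^{
        NSLog(@"work on queue B");
    });
});

dispatch_async(queueB, ^{
    // ...while the block on queue B waits for queue A.
    dispatch_sync(queueA, ^{
        NSLog(@"work on queue A");
    });
});

// If both outer blocks are running when they reach their dispatch_sync
// calls, each waits for a queue occupied by the other: deadlock.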

Let’s say we have two queues, a sending queue and a manager queue. The object that owns the sending queue has a method called beginTransaction, and the manager object, which oversees both sending and receiving, has methods for managing crypto keys, including one called updateCryptoKey.

beginTransaction looks like this:

// make sure the transaction is valid
dispatch_async(sendingQueue, ^{
    // ensure no other transactions are in progress
    // get crypto keys for this transaction
    // put the transaction in our data structure
    // start the transaction
});

updateCryptoKey looks like this:

// make sure the key being submitted is valid
dispatch_sync(managerQueue, ^{
    // is there an existing key?
    // if not, add the key
    // if yes, remove the existing key and add the new key
});

There is a potential deadlock between the sending queue and the manager queue. Each block has a dependency on the other queue. For the block in beginTransaction, the step of getting the crypto keys does a dispatch_sync to the manager queue. Meanwhile, for the block in updateCryptoKey, the step of removing the existing key does a dispatch_sync to the sending queue as part of cancelling any existing transactions under the old key.

These queue dependencies aren’t immediately obvious, because they are hidden behind function calls.
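
To make the hidden dependency concrete, here is a sketch of what those two helper steps might look like; the class layout, property names, and method names are invented for illustration, not taken from the actual code.

// Called on the sending queue, from the block in beginTransaction.
- (NSDictionary *)cryptoKeysForTransaction:(Transaction *)transaction {
    __block NSDictionary *keys = nil;
    // Hidden hop: synchronously hands control to the manager queue.
    dispatch_sync(self.managerQueue, ^{
        keys = [self.keyStore keysForTransaction:transaction];
    });
    return keys;
}

// Called on the manager queue, from the block in updateCryptoKey,
// when an existing key is being removed.
- (void)cancelTransactionsUsingKey:(CryptoKey *)oldKey {
    // Hidden hop in the other direction: waits on the sending queue.
    dispatch_sync(self.sendingQueue, ^{
        [self cancelActiveTransactionsWithKey:oldKey];
    });
}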

When looking for deadlock between queues, it doesn’t matter how a block was placed on its queue. Here we have one block that was placed using dispatch_async and one that was placed using dispatch_sync; what matters is what happens once each block is running. Since the two blocks are on separate queues, they can be running simultaneously. Then one of them does a dispatch_sync to the other queue and waits, because that queue isn’t free. Then the other does a dispatch_sync back to the first queue and waits too, because that queue isn’t free either. This is a deadlock.
