Monday, June 22, 2009

C# Multithreading: Blocking Similar Requests

It's an abuse of Moore's Law: as computers get faster, software gets slower.  And while there are many reasons why software feeds on additional resources – a more intuitive user interface, coding for security, or simple feature bloat – it's largely the fault of software developers' assumption that disk, memory and CPU will continue to increase in capacity and power while cost decreases.

With the advent of multi-core processors, it's evident that software cannot afford to assume that execution will occur on a single thread.  We need to embrace multi-threaded techniques to allow our software to scale with the platform.  Developing multi-threaded applications requires special consideration.

I want to share a simple code snippet that I've frequently used to block individual threads from executing the same code twice.  It's especially handy when dealing with process-intensive or time-sensitive requests where race conditions can occur.  For example, a recent project needed to dynamically generate an image and cache it as a file – if requests were allowed to overwrite the file while it was being served by another thread, the results would be disastrous.

The concept uses the "double-checked locking" pattern, which is commonly used for synchronized lazy-initialization, such as creating singletons.  Unlike the singleton pattern, this example assumes there are multiple requests for different objects, so the technique is modified slightly.

A “Double-Checked Lock” Singleton

For those who aren't familiar with the "double-checked locking" pattern, I'll outline it here using a Singleton example.  In short, double-checked locking is "Check twice before doing any work, with a lock in the middle."

Note: Some may claim that the thread-safety of the double-checked lock pattern simply cannot be guaranteed.  This is true for environments where the processor or runtime memory model offers weaker guarantees (older versions of Java, C++, etc.).  However, the CLR's memory model does make the pattern safe, provided the instance field is declared volatile as it is below.

This example outlines the double-checked lock being used to guarantee that only one singleton is created:

public class MySingleton
{
    private static volatile MySingleton _instance;
    private static readonly object _lock = new object();

    // private constructor prevents callers from creating their own instances
    private MySingleton() { }

    public static MySingleton Instance
    {
        get
        {
            // 1: first check
            if (_instance == null)
            {
                // 2: lock a common object
                //    all other threads must wait until the lock is released
                lock (_lock)
                {
                    // 3: second check
                    if (_instance == null)
                    {
                        // 4: thread-sensitive work
                        _instance = new MySingleton();
                    }
                } // 5: release the lock
            }
            return _instance;
        }
    }

    // other methods
}

If you've never seen this type of pattern before, it may at first seem awkward (“why are you checking for null so much?”).  Here's a quick rundown of how this plays out in a multi-threaded environment:

  1. Thread #1 enters the MySingleton Instance property getter.
  2. Thread #2 enters the MySingleton Instance property getter at the exact same time as Thread #1.
  3. Thread #1 determines that the instance is null and proceeds to the next statement, which puts a lock on a common object that is available to all threads.  This lock will block all other requests until it is released.
  4. Thread #2 attempts to acquire the lock on the _lock object but cannot because it is currently held.  Execution pauses at this statement.
  5. Thread #1, now inside a sensitive code region that can only be executed by a single thread, determines that the instance is still null and proceeds to the next statement, which performs the thread-sensitive work of creating the singleton.
  6. Thread #1 releases the lock.
  7. Thread #2 is no longer blocked and acquires the lock on the _lock object, ensuring that no other thread can enter this region.
  8. Thread #2 determines that the instance is no longer null.  No additional work is performed and it releases the lock.
  9. Thread #1 returns the instance.
  10. Thread #2 returns the instance that Thread #1 created.
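To see this play out in code, here's a minimal test harness (a hypothetical addition, not part of the original example) that starts ten threads and confirms they all receive the same instance:

using System;
using System.Threading;

public static class SingletonDemo
{
    public static void Main()
    {
        MySingleton[] results = new MySingleton[10];
        Thread[] threads = new Thread[10];

        // start ten threads that all request the singleton at roughly the same time
        for (int i = 0; i < threads.Length; i++)
        {
            int index = i;
            threads[i] = new Thread(() => results[index] = MySingleton.Instance);
            threads[i].Start();
        }

        // wait for every thread to finish
        foreach (Thread t in threads)
        {
            t.Join();
        }

        // every slot should hold a reference to the exact same object
        bool allSame = true;
        foreach (MySingleton s in results)
        {
            allSame &= ReferenceEquals(s, results[0]);
        }
        Console.WriteLine("All threads received the same instance: " + allSame);
    }
}

No matter how the threads interleave, only one of them ever reaches step 4 and constructs the object.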

Using Synchronized-Keys for Similar Requests

As mentioned previously, the pattern above ensures that only a single instance of our object is created.  However, this technique doesn't work well for methods that vary their response based on input, because we're locking a single shared object to create the sensitive region.  By locking only a single object, we create a nasty side-effect: every request is serialized through that one lock, even requests that have nothing to do with each other.
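To illustrate, here's a hypothetical sketch of that shared-lock approach.  The class and helper names are invented for this sketch, chosen to mirror the example that appears later in the post:

using System.Collections.Generic;
using System.Threading;

public class MyObjectWithSharedLock
{
    // one lock shared by every request, regardless of input
    private static readonly object _sharedLock = new object();
    private static readonly Dictionary<string, string> _cache = new Dictionary<string, string>();

    public string SensitiveMethod(string inputParameter)
    {
        string key = GetKey(inputParameter);

        // first check
        string result = GetItemFromCache(key);
        if (result == null)
        {
            // every cache miss waits here, even when it is
            // building a completely different key
            lock (_sharedLock)
            {
                // second check
                result = GetItemFromCache(key);
                if (result == null)
                {
                    result = BuildResult(key);
                    CacheResult(key, result);
                }
            }
        }
        return result;
    }

    private static string GetKey(string input)
    {
        return input.ToLowerInvariant();
    }

    private static string GetItemFromCache(string key)
    {
        lock (_cache)
        {
            string value;
            return _cache.TryGetValue(key, out value) ? value : null;
        }
    }

    private static string BuildResult(string key)
    {
        // stand-in for expensive work (generating an image, building a report, etc.)
        Thread.Sleep(250);
        return "result for " + key;
    }

    private static void CacheResult(string key, string value)
    {
        lock (_cache)
        {
            _cache[key] = value;
        }
    }
}

Two threads building results for unrelated keys still queue behind one another here, which is exactly the side-effect we want to avoid.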

To solve this problem, we need to put our lock on a different object so that we are only blocking similar requests.  We need a key that is synchronized for requests with the same input.

This is where the “SyncKey” comes in:

using System;
using System.Collections.Generic;

public sealed class SyncKey : IDisposable 
{ 
    private static readonly Dictionary<string, SyncKey> _inner = new Dictionary<string, SyncKey>(); 

    public static SyncKey Get(string key) 
    { 
        // lock the table to ensure that only one thread can access 
        //    the dictionary at a time 
        lock(_inner) 
        { 
            // if this is the first request for this key, it will not be present 
            //    in the dictionary.  create and store it. 
            if (!_inner.ContainsKey(key)) 
            { 
                SyncKey item = new SyncKey(key); 
                _inner.Add(key, item); 
            } 
            // return the synchronized key 
            return _inner[key]; 
        } 
    }
 
    private static void Remove(SyncKey item) 
    { 
        // lock the table to ensure that only one thread can access 
        //    the dictionary at a time 
        lock(_inner) 
        { 
            // for the request that first instantiated the key, 
            //    the sensitive work is complete and the key can be safely 
            //    removed from the dictionary 
            // for subsequent requests, although the key was used to prevent 
            //    a duplicate request, it no longer exists in the table 
            //    and can be safely ignored. 
            if (_inner.ContainsKey(item.Key)) 
            { 
                _inner.Remove(item.Key); 
            } 
        } 
    } 

    private readonly string _key; 

    private SyncKey(string key) 
    { 
        _key = key; 
    } 

    public string Key 
    { 
        get { return _key; } 
    }

    public void Dispose() 
    { 
        Remove(this); 
    } 
}

An example using this strategy:

public class MyObject 
{ 
    public string SensitiveMethod(string inputParameter) 
    { 
        string key = GetKey(inputParameter); 
        // 1: first check 
        string result = GetItemFromCache(key); 
        if (result == null) 
        { 
            // create a sync key for this request 
            using(SyncKey sync = SyncKey.Get(key)) 
            { 
                // 2: lock the request-specific sync object 
                //    (the SyncKey instance itself, not the underlying string) 
                lock(sync) 
                { 
                    // 3: second check 
                    result = GetItemFromCache(key); 
                    if (result == null) 
                    { 
                        // 4: thread sensitive work 
                        result = BuildResult(key); 
                        CacheResult(key,result); 
                    } 
                } // 5: release lock 
            } // dispose sync key 
        } 

        return result; 
    } 
} 
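The helper methods (GetKey, GetItemFromCache, BuildResult and CacheResult) are assumed to be implemented elsewhere.  As a usage sketch, several threads can call SensitiveMethod at once; only the threads asking for the same input block one another:

using System;
using System.Threading;

public static class SyncKeyDemo
{
    public static void Main()
    {
        MyObject target = new MyObject();
        string[] inputs = { "alpha", "beta", "alpha", "gamma", "beta", "alpha" };
        Thread[] threads = new Thread[inputs.Length];

        for (int i = 0; i < inputs.Length; i++)
        {
            string input = inputs[i];
            // requests for "alpha" queue behind one another,
            // but never behind requests for "beta" or "gamma"
            threads[i] = new Thread(() => Console.WriteLine(target.SensitiveMethod(input)));
            threads[i].Start();
        }

        foreach (Thread t in threads)
        {
            t.Join();
        }
    }
}

Each result should only be built once; the duplicate requests simply wait for the first one and then pick it up from the cache.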

Results

I built a small contrived example that pushes 1000 threads through a fixed set of inputs using the following three strategies:

  • No locking – all requests are executed without blocking.
  • Lock – Locking on a shared lock object.
  • Sync Lock – Locking occurs on a request-specific key.

[Chart: total execution time for the No Locking, Lock and Sync Lock strategies]
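The benchmark code itself isn't included in the post; a rough, hypothetical driver along these lines could produce the comparison.  It starts a fixed number of threads over a repeating set of inputs and times how long they take to complete:

using System;
using System.Diagnostics;
using System.Threading;

public static class LockBenchmark
{
    // times how long it takes for 'threadCount' threads to each run 'work'
    // against a fixed, repeating set of inputs
    public static TimeSpan Run(Func<string, string> work, int threadCount)
    {
        string[] inputs = { "alpha", "beta", "gamma", "delta" };
        Thread[] threads = new Thread[threadCount];
        Stopwatch timer = Stopwatch.StartNew();

        for (int i = 0; i < threadCount; i++)
        {
            string input = inputs[i % inputs.Length];
            threads[i] = new Thread(() => work(input));
            threads[i].Start();
        }

        foreach (Thread t in threads)
        {
            t.Join();
        }

        timer.Stop();
        return timer.Elapsed;
    }
}

Pointing the same driver at the no-lock, shared-lock and SyncKey variants of SensitiveMethod (for example, LockBenchmark.Run(new MyObject().SensitiveMethod, 1000)) gives the relative timings summarized in the chart.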

Some notes on the findings:

  • No locking – the example doesn't reflect the CPU and resource starvation that would occur by flooding a real system with simultaneous duplicate requests.  If it did, it's likely that the execution time would be several orders of magnitude longer.
  • Lock – duplicate requests are avoided, but the total execution time is longer because every request is serialized through the single shared lock, regardless of its input.
  • Sync Lock – duplicate requests are avoided, and execution time is shorter because only similar requests block one another, allowing requests for other inputs to be processed in parallel.
