
Locking Options for Synchronized<T>

This is a follow-up to my previous post, where I promised to explore a way to allow different locking mechanisms for my Swift Synchronized<T> wrapper.

Because different use-cases can have different locking needs, I thought it would be neat to let the user of a Synchronized<T> wrapper choose the locking mechanism that works best for that situation.

We'll accomplish this by using a Lockable protocol.

Here's the protocol.

One goal of a flexible locking scheme is to allow multiple concurrent readers. So, we'll have separate methods for locking for reading versus locking for writing.

And, we make each method take the block to perform, because, as we'll see, that makes it easier to use a serial DispatchQueue as one of the locking options.

protocol Lockable {
    func performWithReadLock<T>(_ block: () throws -> T) rethrows -> T
    func performWithWriteLock<T>(_ block: () throws -> T) rethrows -> T
}

Now we'll update Synchronized to use that.

This is very similar to the previous post, but uses the above protocol, with an instance of the locking strategy passed to the init().
class Synchronized<T> {
    private var value: T
    private let lock: Lockable

    /// - parameter lock: Lets you choose the type of lock you want.
    init(_ value: T, lock: Lockable) {
        self.value = value
        self.lock = lock
    }

    /// Get read-write access to the synchronized resource
    func update(block: (inout T) throws -> Void) rethrows {
        try lock.performWithWriteLock {
            try block(&value)
        }
    }

    /// Get read-only access to the synchronized resource
    func use<R>(block: (T) throws -> R) rethrows -> R {
        return try lock.performWithReadLock {
            return try block(value)
        }
    }

    /// Get access to the resource without any synchronization
    func unsafeGet() -> T {
        // Deliberately skips the lock. Only use this when you know no
        // other thread could be writing at the same time.
        return value
    }

}


Next, let's make some Lockable implementations.

This is equivalent to the use of DispatchSemaphore from before.
/// Extend a DispatchSemaphore to be Lockable
extension DispatchSemaphore: Lockable {

    func performWithReadLock<T>(_ block: () throws -> T) rethrows -> T {
        wait()
        defer { signal() }
        return try block()
    }

    func performWithWriteLock<T>(_ block: () throws -> T) rethrows -> T {
        wait()
        defer { signal() }
        return try block()
    }

}
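
Here's a rough sketch of how the pieces fit together so far, using a binary semaphore as the lock (the counter is just an example value):
// A counter protected by a binary semaphore (value: 1 makes it behave like a mutex).
let counter = Synchronized(0, lock: DispatchSemaphore(value: 1))

// Writes go through update(), which takes the lock around the block...
counter.update { $0 += 1 }

// ...and reads go through use(), which does the same.
let current = counter.use { $0 }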

We can also extend DispatchQueue to serve as a Lockable.

I'm not sure if there is a way to enforce that you only use a serial queue for this, but that would be nice to do.

/// Extend a DispatchQueue to be Lockable
/// - note: You *MUST* use a serial queue for this.  Don't use a global/concurrent queue!
extension DispatchQueue: Lockable {

    func performWithReadLock<T>(_ block: () throws -> T) rethrows -> T {
        return try sync(execute: block)
    }

    func performWithWriteLock<T>(_ block: () throws -> T) rethrows -> T {
        return try sync(execute: block)
    }

}
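
And here's a sketch of choosing a serial queue as the lock instead (the queue label is just an example):
// DispatchQueue(label:) creates a serial queue by default, which is what we need here.
let queue = DispatchQueue(label: "com.example.synchronized")
let names = Synchronized([String](), lock: queue)

names.update { $0.append("Alice") }
let count = names.use { $0.count }

One thing to watch out for with the queue-based version: because it uses sync, calling update() or use() from code that is already running on that same queue will deadlock.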

Again, you should read this Twitter thread to get some ideas about the trade-offs of each. One downside of DispatchSemaphore is that it can suffer from priority inversion.

Also, both of these still use the same lock for both reads and writes. But, the protocol's methods were set up to differentiate between the two, so that we could allow multiple concurrent readers, as long as nothing is writing.

A pthread_rwlock_t supports this separation, so let's try using that. To make it easier to use from Swift, we can wrap it, like they have done in Perfect.

Here is the basic setup.
public final class RWLock: Lockable {

    private var lock = pthread_rwlock_t()

    public init?() {
        let res = pthread_rwlock_init(&lock, nil)
        if res != 0 {
            assertionFailure("rwlock init failed")
            return nil
        }
    }

    deinit {
        let res = pthread_rwlock_destroy(&lock)
        assert(res == 0, "rwlock destroy failed")
    }

Here are the primitive locking methods.
    public func lockForReading() {
        pthread_rwlock_rdlock(&lock)
    }

    public func lockForWriting() {
        pthread_rwlock_wrlock(&lock)
    }

    public func unlock() {
        pthread_rwlock_unlock(&lock)
    }

And the methods required by Lockable.
    public func performWithReadLock<T>(_ block: () throws -> T) rethrows -> T {
        lockForReading()
        defer { unlock() }
        return try block()
    }

    public func performWithWriteLock<T>(_ block: () throws -> T) rethrows -> T {
        lockForWriting()
        defer { unlock() }
        return try block()
    }
}
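
Here's a sketch of using it (the names are just for illustration). Since the init is failable, you have to unwrap the RWLock before handing it to Synchronized:
guard let rwLock = RWLock() else { fatalError("couldn't create RWLock") }

let scores = Synchronized([String: Int](), lock: rwLock)

// Writes take the write lock exclusively...
scores.update { $0["alice"] = 10 }

// ...while any number of readers can hold the read lock at the same time.
let aliceScore = scores.use { $0["alice"] }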

There are even more details about other locking options in this post by the always-great Mike Ash. I'll leave it as an exercise to the reader to make them conform to Lockable.
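
To give a flavor of what those conformances could look like, here's a sketch for Foundation's NSLock (which, like DispatchSemaphore, is a plain mutex that doesn't distinguish readers from writers):
import Foundation

/// Extend NSLock to be Lockable
/// - note: This uses the same exclusive lock for reads and writes.
extension NSLock: Lockable {

    func performWithReadLock<T>(_ block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }

    func performWithWriteLock<T>(_ block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }

}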

Next Steps

I've published this project on GitHub, with some tests and a Playground (following my own advice) so you can try it out yourself. But it could use some more testing. I think it would be especially interesting to use XCTest performance tests to compare the speed of the various Lockable implementations.
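
A performance test along those lines might look something like this sketch (the module and test names here are hypothetical):
import XCTest
import Foundation
// @testable import Synchronized  // hypothetical module name

class LockablePerformanceTests: XCTestCase {

    func testSemaphoreLockedUpdates() {
        let counter = Synchronized(0, lock: DispatchSemaphore(value: 1))
        measure {
            for _ in 0..<100_000 {
                counter.update { $0 += 1 }
            }
        }
    }

    // Similar tests could swap in a serial DispatchQueue or an RWLock,
    // and then the measure() results can be compared across runs.
}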

And, while I've been using this in some small projects and find it useful, I think it would be good to do some more testing and experimentation before I'd really recommend using it in a serious project.

Thanks, as usual, to Jacob for his feedback on this post and the Synchronized implementation.