Swift Concurrency

Posted by Den Ree on December 22, 2024 · 47 mins read

Comprehensive Guide

  1. Start with async/await syntax
  2. Structured Concurrency with Tasks & Task Groups
  3. Actors for State Protection
  4. AsyncSequence for Async Streams
  5. Migrating Old Projects
  6. Advanced Techniques
  7. Testing & Debugging
  8. Resources

Async/Await

The async/await syntax is a cornerstone of Swift Concurrency, allowing you to write asynchronous code that looks and behaves like synchronous code.

An async function is a function that can perform asynchronous work. It allows the function to suspend its execution until an asynchronous operation completes.

The await keyword is used within an async function to pause execution until the asynchronous task completes without blocking the thread on which it’s called. This makes asynchronous workflows appear sequential, avoiding the complexity of nested callbacks.
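
Declaring and Calling an async Function

A minimal sketch of declaring an async throwing function and awaiting it; the User type and the URL are placeholder assumptions used only for illustration:

import Foundation

struct User: Decodable {
    let id: Int
    let name: String
}

// Declaring an async, throwing function: callers must use `try await`.
func fetchUser() async throws -> User {
    let (data, _) = try await URLSession.shared.data(from: URL(string: "https://example.com/user")!)
    return try JSONDecoder().decode(User.self, from: data)
}

// Calling it from another async context: execution suspends at `await` without blocking the thread.
func loadProfile() async throws {
    let user = try await fetchUser()
    print("Loaded \(user.name)")
}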


Best Practices

  • Use async let to run independent tasks concurrently, but avoid excessive parallelism. If you have a dynamic number of tasks, use a Task Group instead.
  • Mark async functions that can fail with throws and handle errors with do-catch around try await calls for clean error propagation.

Examples of Async/Await

Sequential Execution

let user = try await fetchUser()
let posts = try await fetchPosts(for: user.id)
return UserData(user: user, posts: posts)

Parallel Execution

async let first = fetchData1() // begins execution immediately in a child task.
async let second = fetchData2()
let results = await [first, second] // wait until both tasks have finished

Error Handling with Async/Await

func fetchData() async throws -> Data {
    do {
        return try await URLSession.shared.data(from: URL(string: "https://example.com")!).0
    } catch {
        // Handle the error and propagate a custom error with more specific information
        throw CustomError.networkError
    }
}

Use MainActor for UI Updates

Task {
    let data = await fetchData() 
    await MainActor.run {
        updateUI(with: data) // data passed across actors must be Sendable
    }
}

Task { @MainActor in
    updateUI()
}

Tasks and Task Groups

Swift provides Tasks and Task Groups to enable structured and unstructured concurrent operations. These tools allow developers to execute multiple tasks in parallel, manage their lifetimes, handle errors gracefully, and use system resources efficiently.


Tasks

A Task represents an isolated, concurrent unit of work. It is lightweight compared to threads and provides structured handling of asynchronous operations.

  • Child Tasks are automatically linked to the parent task and inherit its context (priority, cancellation).
  • Detached Tasks operate independently of any parent task, useful for heavy computations or isolated work.
  • Tasks suspend at await points, freeing threads to perform other work.
  • Use Task.yield() to voluntarily pause execution, allowing other tasks to progress.
  • Pass lightweight state to tasks within the same hierarchy using @TaskLocal.

Task priorities:

  • .high: Tasks that must execute immediately and for critical user interactions (UI animations, handling immediate touch gestures).
  • .medium: Tasks that are important but that the user isn’t actively waiting for (loading visible images, data needed soon but not immediately).
  • .low: Non-urgent tasks that can wait if higher-priority tasks are queued (prefetching data that may be used later, preparing non-critical updates or UI elements).
  • .background: Long-running or non-visible tasks that can operate in the background without impacting the user experience (backing up data, synchronizing large datasets, indexing or archiving files).
  • .default: General-purpose tasks that do not require specific prioritization (fetching data for standard use, performing common computations).

Task Groups

A TaskGroup enables managing multiple concurrent tasks within a single scope. It provides structured concurrency by ensuring all tasks complete before proceeding.

  • TaskGroup: For tasks with results that don’t throw errors. Use when you need results from multiple tasks and none can throw errors.
  • ThrowingTaskGroup: For tasks that can throw errors, propagating them to the parent context. Use when tasks can throw errors and you need to handle or propagate those errors.
  • DiscardingTaskGroup: For tasks whose results are discarded after execution. Use for side-effect tasks where results are unnecessary (e.g., sending notifications).
  • ThrowingDiscardingTaskGroup: For tasks that can throw errors, with results discarded after execution. Use for side-effect tasks that can throw errors (e.g., long-running cleanup operations).

Best Practices for Tasks and Task Groups

  • Regularly check Task.isCancelled to terminate unnecessary work and stop early.
  • Use Task.detached sparingly to prevent loss of parent context (e.g., task-local values, cancellation).
  • Assign appropriate priorities to ensure critical tasks execute promptly.
  • Use @TaskLocal for lightweight, scoped state-sharing.
  • Replace Thread.sleep with Task.sleep for non-blocking delays (see the example below).
  • Move heavy computations into detached tasks or dedicated async functions.
  • Use ThrowingTaskGroup or ThrowingDiscardingTaskGroup for tasks that might fail, and catch errors where needed.
  • Use try await to wait for a task’s result, handling any potential errors that might be thrown.

Examples of Tasks & Task Group Usage

Basic Task Creation

Task {
    print("This is a basic task.")
}

Task(priority: .high) {
    print("High-priority task running.")
}

Task.detached {
    print("Detached task running.")
}
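
Non-Blocking Delays with Task.sleep

As suggested in the best practices above, Task.sleep suspends the current task instead of blocking a thread. A minimal sketch; the one-second delay is arbitrary:

Task {
    print("Waiting…")
    try? await Task.sleep(nanoseconds: 1_000_000_000) // suspends for ~1 second without blocking a thread
    print("Done waiting.")
}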

Checking for Cancellation

Task {
    for i in 0..<10 {
        if Task.isCancelled {
            print("Task cancelled.")
            return
        }
        print("Processing \(i)")
    }
}
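
Yielding with Task.yield

As noted in the Tasks overview, Task.yield() voluntarily suspends the current task so other pending tasks can make progress. A minimal sketch; the squared-number loop is placeholder work:

Task {
    for i in 0..<1_000 {
        _ = i * i // placeholder for a unit of CPU-bound work
        if i % 100 == 0 {
            await Task.yield() // give other tasks a chance to run
        }
    }
}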

Processing Results in Task Group with for await

await withTaskGroup(of: Int.self) { group in
    for i in 1...5 {
        group.addTask { i * i }
    }
    for await result in group {
        print("Result: \(result)")
    }
}

Handling Errors in a ThrowingTaskGroup

try await withThrowingTaskGroup(of: String.self) { group in
    group.addTask { try await fetchData(from: "https://example.com") }
    group.addTask { throw URLError(.badURL) }

    do {
        for try await data in group {
            print(data)
        }
    } catch {
        print("Error: \(error)")
    }
}

Side-Effect Operations with a DiscardingTaskGroup

await withDiscardingTaskGroup { group in
    for _ in 1...5 {
        group.addTask {
            print("Side effect performed.")
        }
    }
}

Parallel Data Fetching

try await withThrowingTaskGroup(of: Data.self) { group in
    for url in urls {
        group.addTask {
            try await fetchData(from: url)
        }
    }
    for try await data in group {
        print(data)
    }
}

Batch Processing

await withTaskGroup(of: String.self) { group in
    for file in files {
        group.addTask {
            processFile(file)
        }
    }
    for await result in group {
        print("Processed: \(result)")
    }
}

Real-Time Updates

await withTaskGroup(of: Void.self) { group in
    for stream in streams {
        group.addTask {
            await handleStream(stream)
        }
    }
}

Processing Tasks One by One

try await withThrowingTaskGroup(of: String.self) { group in
    group.addTask { try await fetchData(from: "https://example1.com") }
    group.addTask { try await fetchData(from: "https://example2.com") }

    while let result = try await group.next() {
        print("Fetched result: \(result)")
    }
}

Continuing Execution When a Task in the Group Fails

try await withThrowingTaskGroup(of: String.self) { group in
    group.addTask { try await fetchData(from: "https://example1.com") }
    group.addTask { throw URLError(.badURL) } // Simulate an error

    do {
        for try await result in group {
            print("Fetched result: \(result)")
        }
    } catch {
        print("Error: \(error)")
        while let result = try await group.next() {
            print("Processing remaining task result: \(result)")
        }
    }
}

Actors and Sendable

Actors are designed to ensure thread-safe access to shared mutable state. Distributed actors extend this concept to enable communication between processes or devices, making them ideal for distributed systems.

An actor is a reference type that isolates its state, ensuring that only one task can access its mutable state at a time. Actors simplify concurrency by handling synchronization automatically.

  • Prevents data races by isolating state within the actor.
  • Mutable state is only accessible from within the actor.
  • Ensures only one task interacts with the actor’s state at a time.
  • Simplifies thread-safe programming by avoiding manual synchronization.
  • Tasks must use await to interact with an actor’s state from outside.
  • You usually don’t need [weak self], because Swift Concurrency ensures safe, isolated access.
  • [weak self] is only required in special cases such as detached tasks or long-running tasks calling non-isolated methods.

The Sendable protocol ensures that values passed between concurrency domains are thread-safe.

  1. Prefer structures over classes.
  2. If a struct, enum, or collection contains only Sendable values, it is also Sendable.
  3. Use @unchecked Sendable only when you manually guarantee safe read/write access (e.g., via a lock).
  4. Conformance is enforced at compile time, providing strong guarantees.

Type conformance requirements:

  • Structures/Enums: All members and associated values must be Sendable. Conformance is implicit if the type is frozen, or if it is not public and not @usableFromInline.
  • Actors: Conform implicitly.
  • Classes: Must be final, have only immutable and Sendable stored properties, and have no superclass (or only NSObject as a superclass). Classes marked @MainActor are implicitly Sendable.
  • Functions/Closures: Mark with @Sendable. Captured values must be Sendable and captured by value. The attribute is implicit in contexts like Task.detached. Use @Sendable in type annotations or before closure parameters.
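
A minimal sketch illustrating these conformance rules; the types and values here are hypothetical examples, not part of any API:

// Struct whose members are all Sendable: conformance could also be implicit.
struct UserInfo: Sendable {
    let id: Int
    let name: String
}

// Final class with only immutable, Sendable stored properties and no superclass.
final class Configuration: Sendable {
    let endpoint: String
    init(endpoint: String) { self.endpoint = endpoint }
}

// A @Sendable closure may only capture Sendable values, and captures them by value.
let info = UserInfo(id: 1, name: "Lily")
let logInfo: @Sendable () -> Void = {
    print("User: \(info.name)")
}

Task.detached {
    logInfo()
}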

Best Practices for Actors

  • Use actors over manual synchronization methods like locks or semaphores.
  • Use nonisolated for properties or methods that don’t require actor isolation or don’t mutate state.
  • Design types and closures to satisfy Sendable implicitly whenever possible.
  • Manually verify thread safety when marking types as @unchecked Sendable to avoid concurrency errors.
  • Always use @Sendable for closures passed to detached tasks or other concurrency operations.
  • Use @MainActor attribute to ensure that actor methods or properties run on the main thread for UI updates.

Examples Of Actor usage

Thread-Safe Data Access

actor BankAccount {
    private var balance: Double = 0.0

    func deposit(amount: Double) {
        balance += amount
    }

    func withdraw(amount: Double) throws {
        guard balance >= amount else { throw BankError.insufficientFunds }
        balance -= amount
    }

    func getBalance() -> Double {
        return balance
    }
}

let account = BankAccount()

Task {
    await account.deposit(amount: 100.0)
    let balance = await account.getBalance()
    print("Balance: \(balance)")
}

@Sendable

import Foundation

// @unchecked Sendable can be used, but be careful: you must guarantee thread safety yourself (here via the lock).
final class ConcurrentCache<Key: Hashable & Sendable, Value: Sendable>: @unchecked Sendable {
    private let lock = NSLock()
    private var storage: [Key: Value] = [:]
}

// Closures passed to Task.detached are implicitly @Sendable; the attribute can also be written explicitly.
let lily = Chicken(name: "Lily") // Chicken is a placeholder Sendable type
Task.detached { @Sendable in
    lily.feed()
}

Non-isolated

struct Weather { let temperature: Double } // placeholder type so the snippet compiles

actor WeatherService {
    // nonisolated: immutable state can be read without await.
    nonisolated let apiEndpoint = "https://api.weather.com"

    func fetchWeather() async -> Weather {
        // Fetch weather data (placeholder implementation).
        return Weather(temperature: 21.0)
    }
}

UI Synchronization:

import UIKit

// Use @MainActor on a class rather than an actor for main-thread-bound UI types.
@MainActor
final class UIManager {
    private let label = UILabel()

    func updateLabel(_ text: String) {
        label.text = text
    }
}

AsyncSequence for Async Streams

AsyncSequence is a protocol introduced in Swift Concurrency to handle asynchronous streams of values. It is analogous to the Sequence protocol but works asynchronously, enabling developers to process data that arrives over time, such as network streams or real-time updates.

It produces a sequence of values asynchronously. Consumers use the for await loop to retrieve these values one at a time, pausing execution until the next value is available.

  • Asynchronous Iteration
  • Values are generated only when requested.
  • Automatically handles slower consumers without overloading resources.

Declaring an AsyncSequence

You can create your own AsyncSequence by conforming to the protocol and implementing the AsyncIterator type.


AsyncStream is a built-in utility for creating and consuming asynchronous sequences. It is especially useful for bridging asynchronous data sources into the AsyncSequence paradigm.


Best Practices for AsyncSequence

  • Prefer AsyncStream when handling event-based or dynamic data (updates for real-time data)
  • Implement proper error propagation in custom sequences.
  • Use appropriate buffering policies to avoid excessive memory usage.
  • Always ensure tasks and streams respect cancellation signals.

Example of AsyncSequence & AsyncStream:

Consuming an AsyncSequence

let counter = Counter()

for await number in counter {
    print("Number: \(number)")
}

Creating an AsyncStream

let stream = AsyncStream(Int.self) { continuation in
    for i in 1...10 {
        continuation.yield(i)
    }
    continuation.finish()
}

// Consuming an AsyncStream
for await value in stream {
    print("Value: \(value)")
}

Creating a custom AsyncSequence

struct Counter: AsyncSequence {
    typealias Element = Int

    struct AsyncIterator: AsyncIteratorProtocol {
        var current = 0

        mutating func next() async -> Int? {
            guard current < 10 else { return nil }
            defer { current += 1 }
            return current
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        return AsyncIterator()
    }
}

Error Handling:

struct FaultyCounter: AsyncSequence {
    typealias Element = Int

    struct AsyncIterator: AsyncIteratorProtocol {
        var current = 0

        mutating func next() async throws -> Int? {
            guard current < 5 else { throw CustomError.limitReached }
            defer { current += 1 }
            return current
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        return AsyncIterator()
    }
}

Buffered AsyncStream:

let bufferedStream = AsyncStream(Int.self, bufferingPolicy: .bufferingOldest(5)) { continuation in
    for i in 1...100 {
        continuation.yield(i)
    }
    continuation.finish()
}

Cancellation

Task {
    for await value in stream {
       print("Processing \(value)")
        if value == 5 { break }
    }
}

Real-Time Updates

func fetchLivePrices() -> AsyncStream<Double> {
    AsyncStream { continuation in
        let timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
            continuation.yield(Double.random(in: 100...200))
        }
        // Stop the timer when the consumer cancels or stops iterating.
        continuation.onTermination = { _ in timer.invalidate() }
    }
}

Migration Guide

Migrating to Swift Concurrency involves transitioning existing asynchronous codebases to use modern features like async/await, actors, and task groups.

Best Practices for Migration

  • Migrate one module or feature at a time.
  • Use @MainActor for UI-related concurrency.
  • Provide examples of the new concurrency approach for your team.
  • Replace legacy patterns gradually to minimize disruptions.
  • Replace locks and manual synchronization (e.g., DispatchQueue.sync) with actor declarations.
  • Replace manual thread handling (DispatchQueue) with Task.
  • Test migrated functions with XCTest using async/await.
  • Use Instruments to identify bottlenecks and optimize task execution.

1. Identify Asynchronous Workflows

Start by identifying parts of your codebase that rely on:

  • Completion handlers.
  • Dispatch queues (DispatchQueue).
  • Legacy threading models like OperationQueue or pthread.

2. Replace Completion Handlers with Async/Await

  • Convert legacy completion handler-based code to use async/await:

Before

func fetchData(completion: @escaping (Result<Data, Error>) -> Void) {
    URLSession.shared.dataTask(with: URL(string: "https://example.com")!) { data, _, error in
        if let error = error {
            completion(.failure(error))
        } else if let data = data {
            completion(.success(data))
        }
    }.resume()
}

After

func fetchData() async throws -> Data {
    try await withCheckedThrowingContinuation { continuation in
        legacyFetchData { data, error in
            if let error = error {
                continuation.resume(throwing: error) // Resume with an error.
            } else if let data = data {
                continuation.resume(returning: data) // Resume with the result.
            } else {
                continuation.resume(throwing: NSError(domain: "UnknownError", code: 0)) // Handle unexpected cases.
            }
        }
    }
}

func legacyFetchData(completion: @escaping (Data?, Error?) -> Void) {
    DispatchQueue.global().async {
        // Simulate some data fetching
        let success = Bool.random()
        if success {
            completion(Data("Fetched data".utf8), nil)
        } else {
            completion(nil, NSError(domain: "ExampleError", code: 1, userInfo: nil))
        }
    }
}

3. Introduce Actors for Shared State

  • Replace locks, semaphores, and other manual synchronization mechanisms with actors.

Before

class SharedCounter {
    private var lock = NSLock()
    private var count = 0

    func increment() {
        lock.lock()
        count += 1
        lock.unlock()
    }

    func getCount() -> Int {
        lock.lock()
        defer { lock.unlock() }
        return count
    }
}

After

actor Counter {
    private var count = 0

    func increment() {
        count += 1
    }

    func getCount() -> Int {
        return count
    }
}

4. Adopt Task Groups for Parallel Work

  • Task groups are ideal for transitioning batch-processing logic from dispatch queues.

Before

DispatchQueue.global().async {
    let group = DispatchGroup()

    for i in 1...5 {
        group.enter()
        processItem(i) {
            group.leave()
        }
    }

    group.notify(queue: .main) {
        print("All tasks completed")
    }
}

After

await withTaskGroup(of: Void.self) { group in
    for i in 1...5 {
        group.addTask {
            await processItem(i)
        }
    }
}
print("All tasks completed")

5. Migrate to AsyncSequence for Streams

  • Transform existing observable patterns into AsyncSequence.

Before

class DataStream {
    var callback: ((Int) -> Void)?

    func start() {
        DispatchQueue.global().async {
            for i in 1...5 {
                self.callback?(i)
            }
        }
    }
}

After

let stream = AsyncStream(Int.self) { continuation in
    for i in 1...5 {
        continuation.yield(i)
    }
    continuation.finish()
}

for await value in stream {
    print(value)
}

6. Handle Compatibility Issues

  • Older APIs that rely on completion handlers can be bridged using async alternatives.

func fetchLegacyData() async throws -> Data {
    try await withCheckedThrowingContinuation { continuation in
        legacyFetchData { data, error in
            if let error = error {
                continuation.resume(throwing: error)
            } else if let data = data {
                continuation.resume(returning: data)
            } else {
                // The continuation must be resumed exactly once, even in unexpected cases.
                continuation.resume(throwing: NSError(domain: "UnknownError", code: 0))
            }
        }
    }
}

Advanced Techniques

Swift Concurrency provides advanced techniques to fine-tune and extend concurrency for specialized use cases. These include detached tasks, task-local values, atomic-style values, the Sendable protocol, and advanced synchronization patterns.


Best Practices

  • Avoid overusing detached tasks as they lack the parent task’s context.
  • Store lightweight, task-specific data.
  • Ensure all shared data conforms to Sendable.
  • Regularly check for cancellation in long-running tasks.

Detached Tasks

Detached tasks are independent units of work that do not inherit context like priority or task-local values from their parent.

  • Runs independently of the task hierarchy.
  • Ideal for background work that does not need context from the current task.

Task.detached {
    let result = await performBackgroundComputation()
    print(result)
}

Use Cases

  • Offloading computationally intensive tasks to the background.
  • Performing work that does not depend on the current task’s context.

Task-Local Values

Task-local values provide a way to store lightweight state that can be accessed by the current task and its child tasks.

// Task-local values must be declared as static properties; an enum works well as a namespace.
enum Session {
    @TaskLocal static var userID: String?
}

Task {
    // Bind the task-local value for this scope; child tasks created inside inherit it.
    Session.$userID.withValue("12345") {
        print(Session.userID ?? "No user ID")
    }
}

Benefits

  • Pass context-sensitive values like authentication tokens or user IDs across task hierarchies.

Atomic Values

In Swift Concurrency, actors fill the role of atomic values, ensuring thread-safe updates to shared state without using locks.

actor Counter {
    private var value = 0

    func increment() {
        value += 1
    }

    func getValue() -> Int {
        return value
    }
}

Benefits

  • Simplifies concurrent state management.
  • Avoids pitfalls like deadlocks and data races.

Advanced Synchronization Patterns

Swift Concurrency enables advanced synchronization using actors and task groups.

actor SharedData {
    private var data: [String] = []

    func addItem(_ item: String) {
        data.append(item)
    }

    func getData() -> [String] {
        return data
    }
}

let sharedData = SharedData()

await withTaskGroup(of: Void.self) { group in
    for i in 1...10 {
        group.addTask {
            await sharedData.addItem("Item \(i)")
        }
    }
}

let results = await sharedData.getData()
print(results)

Handling Cancellation Gracefully

Tasks in Swift can respond to cancellation signals, allowing developers to clean up resources or stop unnecessary work.

Task {
    for i in 0...100 {
        guard !Task.isCancelled else {
            print("Task was cancelled")
            break
        }
        print("Processing item \(i)")
    }
}

Benefits

  • Improves resource management.
  • Prevents unnecessary computations.

Avoid Main Actor Blocking

The @MainActor ensures that tasks run on the main thread, which is critical for UI updates. However, long-running tasks on the main actor can block the UI, making the app unresponsive.

Solution

  • Offload heavy computations to background tasks.
  • Use Task or Task.detached to perform work outside the main thread.

Example:

@MainActor
func updateUI() {
    Task {
        let data = await fetchData()
        display(data)
    }
}
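
A sketch of moving a CPU-heavy computation off the main actor with a detached task; processLargeDataset() is a hypothetical placeholder for the heavy work:

@MainActor
func refreshReport() async {
    // Run the heavy computation off the main actor, then hop back for the UI update.
    let summary = await Task.detached(priority: .high) {
        processLargeDataset() // hypothetical CPU-bound function
    }.value
    display(summary)
}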

Reduce Actor Contention

Actors serialize access to their state, which can lead to contention when multiple tasks compete for access.

Solution

  • Minimize shared state.
  • Use multiple actors to distribute the workload.

Example:

actor Logger {
    private var logs: [String] = []

    func addLog(_ log: String) {
        logs.append(log)
    }
}
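
A sketch of distributing writes across several actor instances to reduce contention on a single actor; the shard count and key hashing are illustrative assumptions:

actor CounterShard {
    private var count = 0
    func increment() { count += 1 }
    func value() -> Int { count }
}

struct ShardedCounter {
    private let shards = (0..<4).map { _ in CounterShard() }

    // Route each key to one shard so unrelated keys don't contend on the same actor.
    func increment(key: String) async {
        let index = abs(key.hashValue % shards.count)
        await shards[index].increment()
    }

    func total() async -> Int {
        var sum = 0
        for shard in shards {
            sum += await shard.value()
        }
        return sum
    }
}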

Use Task Priorities Wisely

Assign higher priority to UI-critical tasks and lower priority to background computations.

Task(priority: .high) {
    await performCriticalTask()
}

Optimize Task Group Usage

Task groups enable efficient parallel execution, but improper usage can lead to wasted resources.

Tips

  1. Limit the number of tasks in a group to avoid overwhelming the thread pool (see the bounded-concurrency sketch below).
  2. Aggregate results efficiently to reduce memory usage.

await withTaskGroup(of: Int.self) { group in
    for i in 1...5 {
        group.addTask { i * i }
    }
    for await result in group {
        print(result)
    }
}
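
A sketch of the first tip: bound the number of in-flight child tasks by starting a fixed window and adding a new task only as one finishes. The window size of 3 and processItem(_:) are illustrative assumptions:

func processAll(_ items: [Int]) async {
    await withTaskGroup(of: Void.self) { group in
        let maxConcurrent = 3
        var iterator = items.makeIterator()

        // Seed the group with the first window of tasks.
        for _ in 0..<maxConcurrent {
            if let item = iterator.next() {
                group.addTask { await processItem(item) }
            }
        }

        // Each time a task finishes, start the next one.
        while await group.next() != nil {
            if let item = iterator.next() {
                group.addTask { await processItem(item) }
            }
        }
    }
}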

Combine Concurrency Patterns

Combine tools like async/await, task groups, and actors for more efficient workflows.

actor DataProcessor {
    private var processedData: [Int] = []

    func process(data: [Int]) async {
        await withTaskGroup(of: Int.self) { group in
            for item in data {
                group.addTask { item * item } // compute in child tasks
            }
            // Mutate actor-isolated state only from the actor itself, as results arrive.
            for await result in group {
                processedData.append(result)
            }
        }
    }
}

Distributed Actors

Distributed actors extend actors to support communication across process or network boundaries. They provide a foundation for building distributed systems.

  1. Location Transparency:
    • The caller doesn’t need to know if the actor is local or remote.
  2. Automatic Serialization:
    • Parameters and return values are serialized for remote communication.
  3. Fault Tolerance:
    • Handle failures gracefully in distributed environments.

Use the distributed keyword to declare distributed actors and their remotely callable methods.

import Distributed

distributed actor ChatService {
    // Each distributed actor needs an actor system; the local testing system works for in-process use.
    typealias ActorSystem = LocalTestingDistributedActorSystem

    distributed func sendMessage(_ message: String) {
        print("Sending message: \(message)")
    }
}

Testing and Debugging

Testing and debugging concurrency in Swift requires specialized tools and practices to ensure that asynchronous code behaves as expected under different conditions.


Unit Testing Asynchronous Code

  • Both XCTest and Swift Testing support asynchronous tests: mark the test function async (and throws if needed).

XCTest and Async/Await

import XCTest

class NetworkTests: XCTestCase {
    func testFetchData() async throws {
        let data = try await fetchData(from: "https://example.com")
        XCTAssertNotNil(data)
    }
}
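
Swift Testing and Async/Await

A sketch of an equivalent test written with the Swift Testing framework; it assumes the same fetchData(from:) helper used in the examples above:

import Testing

@Test func fetchDataReturnsData() async throws {
    let data = try await fetchData(from: "https://example.com")
    #expect(!data.isEmpty)
}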

Testing Task Groups

func testParallelProcessing() async {
    let results = await withTaskGroup(of: Int.self) { group -> [Int] in
        for i in 1...5 {
            group.addTask { i * i }
        }
        var results: [Int] = []
        for await result in group {
            results.append(result)
        }
        return results
    }
    XCTAssertEqual(results.sorted(), [1, 4, 9, 16, 25])
}

Testing Cancellation

func testTaskCancellation() async {
    let task = Task { () -> Bool in
        while !Task.isCancelled {
            await Task.yield()
        }
        return true // the task observed its own cancellation
    }
    task.cancel()
    let observedCancellation = await task.value
    XCTAssertTrue(observedCancellation)
}

Debugging Concurrency

  • Add logs to track task creation, execution, and cancellation (see the sketch after this list).
  • Use Instruments to analyze relationships between parent and child tasks.
  • Check task states like running, suspended, and canceled.
  • Ensure actors serialize access to their state.
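
A sketch of lightweight task logging with OSLog; the subsystem/category strings and performWork() are illustrative assumptions:

import OSLog

let logger = Logger(subsystem: "com.example.app", category: "concurrency")

Task {
    logger.debug("Task started")
    await performWork() // hypothetical async work
    let cancelled = Task.isCancelled ? "yes" : "no"
    logger.debug("Task finished (cancelled: \(cancelled))")
}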

Async Call Stack

  • View the execution flow of asynchronous code in Xcode.
  • Use breakpoints in async functions to step through code.

Concurrency Instrument in Instruments

  • Profile tasks, actors, and task groups.
  • Identify bottlenecks, long-running tasks, and contention points.

Deadlocks

Deadlocks occur when tasks wait indefinitely for each other. Avoid them by designing clear task hierarchies.

Debugging Deadlocks

  1. Use the thread debugger in Xcode to inspect thread states.
  2. Check for circular dependencies in task interactions.

Data Races

Data races happen when multiple tasks access shared mutable data simultaneously.

  1. Use actors to isolate state.
  2. Annotate shared state with @Sendable.

Debugging Actors

Actors simplify debugging by isolating state. Use @MainActor to debug UI-related concurrency.

Example:

// Use @MainActor on a class (not an actor) to isolate a type to the main actor.
@MainActor
final class Logger {
    func log(_ message: String) {
        print(message)
    }
}

Task Traces

  • Swift Concurrency provides tools to trace task execution using Instruments.
  1. Task Hierarchy Visualization - view relationships between tasks and their parents.
  2. Execution Timelines - analyze task execution times.
  3. State Transitions - trace state changes (e.g., pending, running, suspended).

Debugging and Optimization

  1. Use Instruments for Tracing - profile task execution to identify bottlenecks.
  2. Log Task Events - add custom logs to trace task creation and completion.
  3. Aggregate Task Metrics - monitor task group execution time and resource usage.
  4. Ensure proper task-scoping to maintain resource efficiency.

Resources