Concurrency, Where Yesterday Meets Today.

Mohamed Kouache - December 25, 2024


The story of concurrency begins in the early 1960s, when computers were enormously expensive machines that could execute only one program at a time. This sequential processing meant that while a program was waiting for input/output operations (like reading from a tape or printing), the CPU sat idle, a terribly inefficient use of such costly hardware.

This economic pressure led to a groundbreaking idea: what if multiple programs could share the CPU? When one program was waiting for I/O, another could use the processor. This concept, multiprogramming, was revolutionary, but it introduced entirely new challenges in computer science.

Multics (Multiplexed Information and Computing Service), whose development began in 1964, was the first operating system to truly embrace these ideas. Even before Multics there was a system named CTSS (Compatible Time-Sharing System), developed at MIT in 1961, which introduced several fundamental concepts we still use today, like context switching between users. Multics took things further by introducing or refining several other core concepts:

Process Management: Multics treated programs as "processes" - independent units of computation that could run concurrently. Each process had its own address space and state information, allowing the system to switch between them safely.

When we examine Multics more closely, we can see how its revolutionary concepts laid the groundwork for modern operating systems. For instance, its process management system introduced the idea of hierarchical processes: processes that could create child processes, forming a tree structure. This concept directly influenced Unix's fork() system call, which remains fundamental to process creation in Unix-like systems today.
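To connect this to something you can run today, here is a minimal C sketch of fork() on a Unix-like system (an illustration of the parent/child hierarchy described above, not Multics code):

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* duplicate the calling process */

    if (pid == 0) {
        /* Child: runs in its own copy of the parent's address space. */
        printf("child  pid=%d, parent=%d\n", getpid(), getppid());
    } else if (pid > 0) {
        /* Parent: fork() returned the child's pid; wait for it to exit. */
        waitpid(pid, NULL, 0);
        printf("parent pid=%d spawned child %d\n", getpid(), pid);
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}
```

Every process created this way can itself call fork(), which is exactly how the tree of processes grows.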

Multics went beyond process management: it introduced dynamic linking, where programs could link to libraries at runtime rather than at compile time. This seemingly simple idea had profound implications for concurrency; it meant programs could share code without each needing its own copy, saving memory and enabling more efficient concurrent execution. Modern dynamic linking in systems like Linux and Windows builds directly on these concepts.

Memory Segmentation: To support multiple concurrent processes, Multics pioneered segmented memory management. Each process could access memory through segments, with hardware-enforced protection preventing processes from interfering with each other.


Concurrency Challenges

The development of Multics led to a profound insight about concurrent programming. Engineers discovered that concurrent programs could exhibit entirely new types of bugs, like race conditions and deadlocks, that never occurred in sequential programs. This spurred research into synchronization primitives.

Analogy:

Imagine two processes trying to update a bank account balance simultaneously. Process A reads the balance ($100), adds $50, and writes back $150. Meanwhile, Process B reads the same initial balance ($100), subtracts $25, and writes back $75. Without proper synchronization, the final balance could be either $75 or $150, losing one of the transactions entirely! This is known as a race condition.
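The same lost update is easy to reproduce with two threads sharing a variable. A deliberately unsynchronized C sketch using POSIX threads (the account logic is simplified to a single deposit and a single withdrawal):

```c
#include <pthread.h>
#include <stdio.h>

/* Shared account balance, deliberately left unprotected to show the race. */
static long balance = 100;

static void *deposit(void *arg) {
    (void)arg;
    long b = balance;      /* read   */
    b += 50;               /* modify */
    balance = b;           /* write  */
    return NULL;
}

static void *withdraw(void *arg) {
    (void)arg;
    long b = balance;
    b -= 25;
    balance = b;
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, deposit, NULL);
    pthread_create(&b, NULL, withdraw, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* Correct result is 125; an unlucky interleaving of the read-modify-write
       steps can leave 150 or 75 instead, silently losing a transaction. */
    printf("final balance: %ld\n", balance);
    return 0;
}
```

Race conditions like this one helped drive the development of sophisticated synchronization mechanisms: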

  1. Semaphores, introduced by Edsger Dijkstra in 1965, provided a systematic way to control access to a shared resource. A semaphore acts like a traffic light for processes, ensuring that critical sections of code are executed by only one process at a time.

  2. Monitors, developed by Tony Hoare in 1974, offered a higher-level abstraction for synchronization. A monitor encapsulates shared data together with the procedures that operate on it, making concurrent programming more manageable and less error-prone.

  3. Mutual Exclusion (Mutex): building on Dijkstra's semaphores, mutexes provided a simpler mechanism designed specifically for exclusive access to a resource. Unlike semaphores, which can act as counters, mutexes are binary: they are either locked or unlocked.

  4. Condition Variables allowed processes to wait efficiently for specific conditions to become true, enabling more sophisticated coordination between concurrent processes. For example, a producer process could wait until a buffer had space available, while consumer processes waited for data to arrive (a minimal sketch combining a mutex and condition variables follows this list).
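As a concrete illustration of items 3 and 4, here is a minimal POSIX-threads sketch of the producer/consumer coordination described above, using one mutex and two condition variables (single producer, single consumer, fixed-size buffer):

```c
#include <pthread.h>
#include <stdio.h>

#define CAPACITY 4

static int buffer[CAPACITY];
static int count = 0;                      /* items currently in the buffer */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        pthread_mutex_lock(&lock);
        while (count == CAPACITY)          /* wait until there is space */
            pthread_cond_wait(&not_full, &lock);
        buffer[count++] = i;
        pthread_cond_signal(&not_empty);   /* wake a waiting consumer */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)                 /* wait until there is data */
            pthread_cond_wait(&not_empty, &lock);
        int item = buffer[--count];
        pthread_cond_signal(&not_full);    /* wake a waiting producer */
        pthread_mutex_unlock(&lock);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

The `while` loops around `pthread_cond_wait` matter: a woken thread re-checks the condition before proceeding, which guards against spurious wakeups.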


Modern operating systems build on these foundations but face new challenges. The rise of multicore processors means that concurrency isn't just about time-sharing anymore: programs need to effectively utilize multiple CPU cores running truly in parallel.

This has led to programming models like:

Threads: Lightweight units of execution that share memory within a process, making it easier to write parallel programs but requiring careful synchronization.

Message Passing: A different approach where concurrent components communicate by sending messages rather than sharing memory, reducing the complexity of synchronization but potentially introducing communication overhead.

Atomic Operations: Processors gained special atomic instructions like Compare-and-Swap (CAS) and Test-and-Set (TAS) that could perform a read-modify-write sequence without interruption. These instructions became the building blocks for implementing higher-level synchronization primitives.
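As an illustration, C11 exposes these instructions through <stdatomic.h>; a tiny spinlock built on compare-and-swap might look like this (a sketch, not production-grade locking):

```c
#include <stdatomic.h>
#include <stdio.h>

/* A minimal spinlock built from compare-and-swap: 0 = unlocked, 1 = locked. */
static atomic_int lock = 0;

static void spin_lock(void) {
    int expected = 0;
    /* Atomically: if lock == 0, set it to 1; otherwise keep retrying. */
    while (!atomic_compare_exchange_weak(&lock, &expected, 1))
        expected = 0;                  /* CAS overwrote 'expected' on failure */
}

static void spin_unlock(void) {
    atomic_store(&lock, 0);
}

int main(void) {
    spin_lock();
    puts("in the critical section");
    spin_unlock();
    return 0;
}
```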

Let's continue our journey through the evolution of concurrent computing, exploring how early challenges and solutions shaped today's landscape.


The Distributed Computing Era

As networks became faster and more reliable in the 1970s and 1980s, concurrent computing expanded beyond single machines. Leslie Lamport's work on logical clocks and distributed consensus became foundational. His 1978 paper "Time, Clocks, and the Ordering of Events in a Distributed System" introduced the idea that in distributed systems, what matters isn't physical time but the logical ordering of events.
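The core rule from that paper is compact enough to sketch: each process keeps a counter, increments it on every local event and send, and on receipt jumps past the timestamp carried by the message. A minimal C illustration (the helper names are mine, not Lamport's notation):

```c
#include <stdio.h>

/* A Lamport logical clock: causally related events get increasing timestamps. */
typedef struct { unsigned long time; } lamport_clock;

static unsigned long lc_local_event(lamport_clock *c) { return ++c->time; }
static unsigned long lc_send(lamport_clock *c)        { return ++c->time; }

static unsigned long lc_receive(lamport_clock *c, unsigned long msg_time) {
    /* Advance past whichever is larger: our clock or the message's timestamp. */
    c->time = (msg_time > c->time ? msg_time : c->time) + 1;
    return c->time;
}

int main(void) {
    lamport_clock a = {0}, b = {0};
    unsigned long t = lc_send(&a);        /* A sends a message at time 1   */
    lc_local_event(&b);                   /* B does unrelated work, time 1 */
    printf("B receives at logical time %lu\n", lc_receive(&b, t));  /* 2 */
    return 0;
}
```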


The Client-Server Revolution

The 1990s brought the widespread adoption of client-server architecture, introducing new concurrency challenges. Web servers needed to handle thousands of concurrent connections efficiently, which led to various concurrency models:

  • Thread-per-connection was simple but didn't scale well with thousands of connections; the overhead of creating and managing threads became a bottleneck. This limitation led to the development of event-driven architectures and the reactor pattern, where a single thread could manage many connections through event multiplexing.

  • The C10K Problem, named by Dan Kegel in 1999, became a central challenge: how to handle 10,000 concurrent connections on a single server. This drove innovations like epoll on Linux and kqueue on BSD systems, enabling efficient event-driven I/O multiplexing (a skeletal epoll loop is sketched after this list).
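For a feel of what that looks like in code, here is a skeletal epoll loop in C (Linux-specific; it watches stdin instead of real sockets to keep the sketch short):

```c
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Skeleton of an event-driven loop: one thread registers descriptors with
   the kernel and reacts only when one of them becomes ready. */
int main(void) {
    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); return 1; }

    /* Watch stdin for readability; a real server would register its
       listening socket and every accepted connection the same way. */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
    epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev);

    struct epoll_event ready[64];
    int n = epoll_wait(epfd, ready, 64, 5000);   /* block up to 5 seconds */
    for (int i = 0; i < n; i++)
        printf("fd %d is readable\n", ready[i].data.fd);

    close(epfd);
    return 0;
}
```

Because the kernel reports only the descriptors that are ready, one thread can service thousands of connections without thousands of threads.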


The Multicore Era

The early 2000s marked a significant shift. Moore's Law continued, but clock speeds hit physical limits around 3-4 GHz. Processor manufacturers turned to adding cores instead of increasing clock speeds. This fundamental shift meant that performance improvements now depended on parallel execution.

This hardware shift profoundly impacted software development. Programs needed to utilize multiple cores effectively to improve performance. The industry explored various approaches:

  • Automatic Parallelization: compilers attempted to automatically parallelize sequential code, but this proved extremely difficult except in specific numerical computing cases.
  • Parallel Programming Models: OpenMP provided a way to add parallel-execution directives to existing code (a short OpenMP sketch follows this list), while MPI (Message Passing Interface) became the standard for distributed parallel computing in scientific applications.
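As a taste of the OpenMP style, a single directive is enough to parallelize a loop; a minimal C sketch (compile with -fopenmp):

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* The directive asks the compiler to split the iterations across cores
       and safely combine each thread's partial sum at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / (i + 1);

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```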

The Rise of Functional Programming

The multicore era sparked renewed interest in functional programming. Pure functions, which have no side effects, are naturally easier to parallelize. Languages like Haskell, with their emphasis on immutability, offered new ways to think about concurrent programming.

This influence spread to mainstream languages. Java 8 introduced streams and lambda expressions, making it easier to express parallel operations on collections. Python's multiprocessing module provided similar capabilities.



Conclusion

We're now entering an era where quantum computing promises to revolutionize our understanding of parallel computation. Meanwhile, neuromorphic computing attempts to mimic the massive parallelism of biological brains.

The journey of concurrent computing reflects humanity's persistent drive to push the boundaries of what computers can achieve. From the early days of expensive mainframes to today's distributed cloud systems, we've seen how the practical needs of efficiency and resource utilization sparked profound theoretical innovations.