


Concurrent Programming

Semester 2

This paper presents theory and practice of concurrent programming, including locks, transactional memory, and message passing; multicore and distributed systems; and specification and testing of protocols.

Concurrent programming is about building programs in which many activities are taking place, and the existence of and collaboration between these activities is essential to the program's design. In distributed systems, like credit card systems and telephone networks, the activities happen on physically separate machines communicating by sending messages over wires or radio channels. In multicore computers, the activities may take place on one or many cores and they may communicate through shared memory. The difficulties and principles are similar.

The first half of the paper is about shared memory concurrency, presenting general concepts in the context of POSIX threads. The second half is about message passing between logically separate processes, using Erlang. There will also be material on specification and verification.

Students will learn to design, implement, and test concurrent and distributed programs. They will learn about hazards such as deadlock, starvation, livelock, and races; why "let it crash" is a good approach to error handling; the difference between task parallelism and data parallelism; and some concurrency design patterns.

For more information about this paper, contact Dr Richard O'Keefe.