CITS2002 Systems Programming  

Introduction to multi-threaded programming

Contemporary computers have the ability to execute multiple operations at the same time - or at least they appear to do so.

We know from earlier lectures that operating systems address the concepts of allocating and sharing a single CPU (process scheduling) and RAM (memory management), but we still have competing goals:

  • From the perspective of the operating system, the goal is to share resources fairly, securely, and efficiently.

  • From the perspective of each process, the goal is to have its required task completed as quickly as possible, both in terms of speed of execution and responsiveness.

Our laptop computers, for example, will be 'executing' a few hundred processes (most of them sleeping!). As we type a document, we can listen to music, download files, and manipulate a graphical interface via peripherals such as a mouse, tablet, and keyboard. When a program performs I/O (say, to a disk), the CPU may be switched away so that it can work on some other task while the I/O is taking place. But, as our laptops are considered single-user devices, we accept the conflicts between the competing goals of the operating system and the user.

 

In this and the next lecture, we'll informally examine the concept of concurrency - the idea that independent sets of operations can occur at the same time. Operating systems allow the user both to manage concurrency and to exploit it to improve performance.

The focus here will be more on the user and their programming than on the operating system and its resources.
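
As a first taste, here is a minimal sketch using POSIX threads (pthreads), which we'll meet in detail soon: two calls of the same function run concurrently, each in its own thread. The function count_vowels and its string arguments are purely illustrative.

    #include <stdio.h>
    #include <pthread.h>

    //  each thread executes this function on its own argument;
    //  thread start functions receive and return a void pointer
    void *count_vowels(void *arg)
    {
        const char *text  = arg;
        int         count = 0;

        for(int i=0 ; text[i] != '\0' ; ++i) {
            switch (text[i]) {
              case 'a': case 'e': case 'i': case 'o': case 'u':
                ++count;
                break;
            }
        }
        printf("\"%s\" has %i vowels\n", text, count);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

    //  start two threads, each running count_vowels() concurrently with main()
        pthread_create(&t1, NULL, count_vowels, "systems programming");
        pthread_create(&t2, NULL, count_vowels, "concurrency");

    //  wait for both threads to terminate before main() exits
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

The two threads run independently, so the order of the two lines of output is not guaranteed. Such a program is compiled and linked with the -pthread flag, e.g. cc -std=c11 -Wall -pthread threads.c (the filename here is just an example).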

 


CITS2002 Systems Programming, Lecture 20, p1, 9th October 2023.