CITS2002 Systems Programming
Project 1 2019 - simulate a single-CPU, multi-device, pre-emptive process scheduler
Project Description

Modern operating systems maintain a lot of state information to support their operation. Some of this information can be periodically captured to provide a snapshot of a currently running system, or logged for later analysis. For example, a tracefile can contain a summary of the execution history of completed processes, and that information may be used to improve the execution of future processes. In combination, the execution requirements of processes and the input/output (I/O) operations they perform are termed the job-mix. This project will use the information in a tracefile to find the near-optimal scheduling of processes, in the belief that future job-mixes will be similar to that in a recent tracefile.
Our Operating System Model

Consider an operating system's 5-State Model of Process Execution, as introduced in Lecture 8. New processes are admitted to the system and are immediately marked as Ready to run. Each process executes, in turn, until it exits, until it performs some I/O, or until its finite time-quantum expires.
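As a minimal sketch only (these enumeration names are invented for illustration and are not required by the project), the five states might be represented in a C99 simulator as:

    /* the five states of the 5-State Model, as one simulator might name them */
    enum process_state {
        STATE_NEW,        /* defined in the tracefile, not yet admitted        */
        STATE_READY,      /* admitted; waiting to be chosen for the CPU        */
        STATE_RUNNING,    /* currently occupying the single CPU                */
        STATE_BLOCKED,    /* waiting for an I/O request on a device to finish  */
        STATE_EXIT        /* has finished all execution; leaves the system     */
    };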
For this project we'll consider a simplified operating system in which only a single process occupies the single CPU at any one time:

- The CPU has a clock speed of 1GHz, enabling it to execute one-billion instructions per second (1000 instructions per microsecond, or 1000000 instructions per millisecond). For this simplified project, we do not need to consider any RAM accesses.
- It takes 5 microseconds to perform a context-switch - to move one process from Ready → Running. No time is consumed deciding that the currently Running process can remain Running (and start a new time-quantum). All other state transitions occur in zero time (unrealistic, but keeping the project simple).
- The CPU is connected to a number of input/output (I/O) devices of differing speeds, using a single high-speed data-bus. Only a single process can access each I/O device (and the data-bus) at any one time, and it takes 5 microseconds for any process to first acquire the data-bus.
- If the data-bus is in use (data is still being transferred) and a second process also needs to access the data-bus, the second process must be queued until the current transfer is complete. When a data transfer completes, all waiting (queued) processes are considered to determine which process can next acquire the data-bus.
- If multiple processes are waiting to acquire the data-bus, the process that has been waiting the longest for the device with the highest priority will next acquire the data-bus. Thus, all processes waiting on higher-priority devices are serviced before any processes waiting on lower-priority devices. The result is a need to support Multiple blocked queues, as introduced in Lecture 8.
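The last rule might be sketched in C99 roughly as follows; the data structures and names here (MAX_DEVICES, MAX_QUEUED, struct device, next_bus_acquirer) are assumptions made for illustration, not part of the project specification:

    #define MAX_DEVICES   10
    #define MAX_QUEUED    50
    #define NO_PROCESS    (-1)

    struct device {
        long long transfer_rate;            /* bytes/second                    */
        int       queue[MAX_QUEUED];        /* FIFO of waiting process numbers */
        int       queue_len;
    };

    struct device devices[MAX_DEVICES];
    int           ndevices = 0;

    /* choose which blocked process next acquires the data-bus:
     * the longest-waiting process on the highest-priority (fastest) device
     */
    int next_bus_acquirer(void)
    {
        int best = -1;

        /* find the fastest device that has at least one waiting process */
        for (int d = 0; d < ndevices; ++d) {
            if (devices[d].queue_len > 0 &&
                (best < 0 || devices[d].transfer_rate > devices[best].transfer_rate)) {
                best = d;
            }
        }
        if (best < 0) {
            return NO_PROCESS;              /* no process is waiting for the bus */
        }

        /* the head of that device's FIFO queue has been waiting the longest */
        int pid = devices[best].queue[0];

        /* dequeue it by shuffling the remaining entries forward */
        for (int i = 1; i < devices[best].queue_len; ++i) {
            devices[best].queue[i-1] = devices[best].queue[i];
        }
        devices[best].queue_len--;

        return pid;
    }

Whether you keep one FIFO queue per device, as sketched here, or a single combined queue that you search, is a design choice; only the resulting acquisition order matters.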
Tracefiles

Consider the following lines from the beginning of a text file termed a tracefile. We may assume that the format of each tracefile is correct and its data consistent, so we do not need to check for errors in the tracefile.
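Purely as an illustration (the device names, transfer rates, keywords, and spacing below are invented, not a statement of the official tracefile format), a few device-definition lines followed by an end-of-definitions marker might resemble:

    device   usb2        60000000 bytes/sec
    device   hdd        160000000 bytes/sec
    device   keyboard          10 bytes/sec
    reboot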
Each line provides a device definition, including its name and its data transfer rate (all transfer rates are measured in bytes/second). Devices with higher transfer rates have a higher priority and are serviced first. The final line indicates the end of all device definitions, and that the operating system commences execution, setting the system-time to zero microseconds. All times in the tracefile are measured in microseconds. The following lines of the same tracefile provide the execution summary of two processes:
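Again purely as an illustration (the process numbers, times, I/O requests, and keywords below are invented, not the definitive tracefile format), the execution summaries of two overlapping processes might resemble:

    process  1  100 {
        i/o    100   usb2     4000 bytes
        exit   600
    }

    process  2  120 {
        exit  3000
    }

In this invented example, process 1 is admitted at 100 microseconds and process 2 at 120 microseconds, so their lifetimes overlap.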
As process 2 commences execution before process 1 exits, they will need to share the CPU. While both processes exist, each process executes on the single CPU until it exits, until it performs some I/O, or until its finite time-quantum expires.
Evaluating Operating System Scheduling Effectiveness

There are a number of ways to measure the effectiveness of an operating system's process scheduling. There can be no single perfect measure of effectiveness, as it depends heavily on the job-mix to be scheduled and on the data transfer rates of the system's devices. For this project we will use the total process completion time to evaluate our process scheduling. With reference to our simple tracefile above, the total process completion time is the time from the commencement of the first process to the time that the final process exits.

Under our simple operating system model, the time-quantum may be varied to permit each process to occupy the CPU for different lengths of time. Varying the time-quantum will result in the same job-mix exhibiting different total process completion times. With a short time-quantum, a system running many processes will give each of those processes a frequent chance to 'make progress', and will appear very responsive. With a long time-quantum, computationally-intensive processes can perform a lot of their execution, but will delay the execution of other processes waiting for the CPU, and may make the system appear sluggish. Each job-mix (recorded in a tracefile) will have a near-optimal time-quantum that minimises the total process completion time. If future job-mixes are similar to the one in a given tracefile, and we can determine a good time-quantum, then we can tune our process scheduling to match our anticipated job-mix.
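As a small invented example: if, with a time-quantum of 400 microseconds, the first process commences at 100 microseconds and the final process exits at 50100 microseconds, then the total process completion time for that quantum is 50100 - 100 = 50000 microseconds; re-running the same job-mix with a different time-quantum would, in general, give a different value.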
Project requirements

You are required to develop and test a program that determines the time-quantum providing the best (shortest) total process completion time for a given tracefile. Your program, named besttq, should accept command-line arguments providing the name of the tracefile and three integers representing times measured in microseconds. The program is only required to produce a single line of output (though you may wish to produce several more lines, as a form of debugging) reporting the best time-quantum found. For example, the following command might produce the single line of output:
prompt> ./besttq tracefile1 100 2000 100
best 1600 440800

The command reads and evaluates (simulates, 'executes') the contents of the tracefile several times, employing different time-quanta, to find which time-quantum results in the lowest total process completion time. The command above considers the different time-quantum values of 100 microseconds, 200 microseconds, ... 2000 microseconds (just like a for loop in C99). The program has determined that the best time-quantum for the given job-mix is 1600 microseconds, which results in a total process completion time of 440800 microseconds. Your program can print out any additional information (debugging?) that you want - just ensure that the last line is of the form "best 1600 440800".
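As a minimal structural sketch only, under the assumption that you write your own tracefile parsing and simulation functions (parse_tracefile() and simulate_job_mix() below are hypothetical placeholders, not part of the project specification), besttq might be organised as:

    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical placeholders standing in for your own code */
    void parse_tracefile(char filename[])
    {
        (void)filename;                 /* your tracefile parsing goes here */
    }

    int simulate_job_mix(int time_quantum)
    {
        (void)time_quantum;             /* your simulation goes here        */
        return 0;                       /* ...returning the total process completion time */
    }

    int main(int argc, char *argv[])
    {
        if (argc != 5) {
            fprintf(stderr, "Usage: %s tracefile firstTQ lastTQ incrementTQ\n", argv[0]);
            exit(EXIT_FAILURE);
        }

        parse_tracefile(argv[1]);

        int first_tq     = atoi(argv[2]);
        int last_tq      = atoi(argv[3]);
        int increment_tq = atoi(argv[4]);

        int best_tq   = 0;
        int best_time = -1;

        /* try each candidate time-quantum, just like a for loop in C99 */
        for (int tq = first_tq; tq <= last_tq; tq += increment_tq) {
            int total_time = simulate_job_mix(tq);

            if (best_time < 0 || total_time < best_time) {
                best_time = total_time;
                best_tq   = tq;
            }
        }

        /* the required final line of output */
        printf("best %i %i\n", best_tq, best_time);
        return 0;
    }

Good luck!

Chris McDonald.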