Department of Computer Science and Information Systems

End-of-Semester Assessment Paper

Academic Year:



Semester 1

Module Title:

Operating Systems

Module Code:


Duration of Exam:

2½ Hours

Percent of Total Marks:



J Sturdy

Paper marked out of:


Instructions to Candidates:

Section A



Explain the concepts of:

  1. process address spaces

  2. swap space

  3. demand-paged executable

  4. copy-on-write

A process address space is the range of memory addresses available to a process. In a typical system it starts at address zero and extends up to the maximum addressable location, possibly with some gaps. Each process has its own address space, and processes cannot access each other's memory -- they can only access memory inside their own address space.

Swap space is disk space used as backing store for main memory. When main memory is full, pages of its existing content are copied out to swap space to free up room for new things. Swap space consists of swap partitions, which are not organised as files but are simply blocks of disk space, and swap files held in ordinary file partitions.

A demand-paged executable is a compiled and linked program stored in such a format that the file itself can serve as the swap file for that program -- i.e., when a page of the program is found not to be in physical memory, it is brought in straight from the executable file rather than from separate swap space.
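The same mechanism is visible from user space through mmap, which is what a loader uses to map an executable in. A minimal sketch, assuming a POSIX system; the scratch-file path and function names are illustrative, not from any real loader:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map the first page of a file into the address space, as the loader
 * does for a demand-paged executable: nothing is read at mmap() time;
 * the first access to the page raises a page fault, and the kernel
 * pulls the data in from the file itself rather than from swap.
 * Returns the first byte of the file, or -1 on error. */
int first_mapped_byte(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    /* MAP_PRIVATE gives copy-on-write semantics, as for an
     * executable's writable data segment. */
    unsigned char *p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if (p == MAP_FAILED) return -1;
    int b = p[0];            /* first touch: page fault, read from file */
    munmap(p, 4096);
    return b;
}

/* Self-contained demonstration: write a tiny file, then fault its
 * first page in through a mapping. The path is a hypothetical
 * scratch location. */
int demand_demo(void)
{
    const char *path = "/tmp/demand_paging_demo";
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0600);
    if (fd < 0 || write(fd, "X", 1) != 1) return -1;
    close(fd);
    return first_mapped_byte(path);
}
```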

Copy-on-write is a way of copying an area of virtual memory efficiently: both processes using it see it as read-only in a shared copy, and when either of them tries to change it, only the page containing that change is actually copied, giving the two processes separate copies of that page from then on.
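The user-visible effect can be demonstrated with fork, which shares pages copy-on-write between parent and child; a minimal sketch, assuming a POSIX system:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* After fork(), parent and child logically have separate copies of
 * all writable memory, but the kernel shares the pages read-only and
 * only copies a page when one side writes to it.  From user space the
 * observable effect is simply that a write in the child is invisible
 * to the parent.  Returns 1 if the parent's copy was unaffected. */
int cow_demo(void)
{
    int value = 1;
    pid_t pid = fork();
    if (pid < 0) return 0;
    if (pid == 0) {            /* child: this write triggers the page copy */
        value = 99;
        _exit(value == 99 ? 0 : 1);
    }
    int status;
    waitpid(pid, &status, 0);
    /* parent still sees its own, unmodified copy */
    return value == 1 && WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```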

12 Marks


How do these make the Unix/Linux operations “fork” and “exec” more efficient?

Fork is more efficient this way because it does not have to copy the memory of the process, which often would not be used anyway if the memory contents are about to be replaced by an exec operation. Only the pages which actually need to be different in the child process are copied. Also, the copying that does occur is spread over a period of time rather than happening all at once, so the perceived delay to the user is smaller.

The exec is more efficient because rather than loading all the program's pages into memory (some of which may never be used anyway) and then, if necessary, copying them out to swap space, it simply uses the program file as the swap file and brings the program in from there as required, page by page, as each part of it is first used.
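The two operations are almost always used as a pair; a minimal sketch of the pattern, assuming a POSIX system where /bin/true exists:

```c
#include <sys/wait.h>
#include <unistd.h>

/* The classic fork/exec pair: fork() duplicates the address space
 * only as copy-on-write page-table entries, and execv() discards that
 * image and maps the new program file in demand-paged, so very little
 * memory is actually copied or loaded up front.
 * Returns the child's exit status, or -1 on error. */
int spawn_and_wait(const char *path, char *const argv[])
{
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {
        execv(path, argv);   /* replaces the child's image */
        _exit(127);          /* only reached if exec failed */
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```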

5 Marks


Explain the use of hardware interrupts, and the interrupt handlers for them, in implementing both of these features.

The first way that interrupts are used here is that the memory management unit interrupts the CPU whenever it is asked to translate an address in a page that is not currently in memory, and also when a write access is attempted on a page which is currently marked read-only for copy-on-write.

Disk drive interrupts are used, as for other disk I/O, for transferring swap data in and out of memory.
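The fault-and-retry cycle can be modelled in a few lines of user-space C; this is a toy simulation, with illustrative names and sizes rather than any real OS's structures:

```c
/* Toy simulation of the page-fault path.  The page table maps virtual
 * page numbers to frame numbers, with -1 meaning "not present"; a
 * translation of a non-present page stands in for the MMU raising a
 * page-fault interrupt, and fault_in() plays the role of the handler
 * that arranges the disk transfer and updates the table. */
#define NPAGES 8

typedef struct {
    int frame[NPAGES];       /* -1 => page not in physical memory */
} page_table;

void pt_init(page_table *pt)
{
    for (int i = 0; i < NPAGES; i++) pt->frame[i] = -1;
}

/* Handler: "read the page in from swap" and install a mapping. */
void fault_in(page_table *pt, int vpage, int free_frame)
{
    pt->frame[vpage] = free_frame;
}

/* Translate; on a miss, invoke the handler and retry, just as the CPU
 * re-executes the faulting instruction after the interrupt returns. */
int translate(page_table *pt, int vpage, int *next_free_frame)
{
    if (pt->frame[vpage] == -1)          /* page fault */
        fault_in(pt, vpage, (*next_free_frame)++);
    return pt->frame[vpage];
}
```

Note that the second access to the same page finds it present, so no further fault is taken.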

8 Marks



By means of a diagram or otherwise, list the major components of an operating system, showing how they connect with each other.

The usual diagram to go here!

10 Marks


By means of a table or otherwise, indicate the involvement (if any) of each component in:

  1. multi-processing

  2. input/output

  3. virtual memory

  4. security

  5. performance

The obvious table to go here! Will fill in in more detail in a few days

15 Marks

Section B



What are the advantages and disadvantages of including virtual memory in a computer system?

The advantages include that processes are protected from accesses to their memory by other processes, that more memory appears to be available than is provided by physical memory, that processes can easily share memory, and that all address spaces can take the same form, such as starting at 0 and extending upwards. Disadvantages include more complex hardware, and possible delays.

6 Marks


How does the provision of virtual memory affect the process model of the system? And how does the process model affect the virtual memory? Explain in detail how these two interact, for example, where code implementing one of them needs to call the other.

The provision of virtual memory means that each process can ignore the presence of other processes in the system as far as memory layout is concerned -- in effect, each process has a computer to itself unless it arranges otherwise.

Address spaces are set up on a per-process basis, and must be created along with the process. The code sections of several processes running the same program can share memory. Processes can also arrange to share memory explicitly; and threads within a process always share memory.

Whenever a process experiences a page fault, it must stop being the current process while the virtual memory mechanism brings the required page in from disk; once the page is in, the process can be scheduled again.
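That interaction amounts to two state transitions, which can be sketched as a tiny state machine (an illustration of the idea, not any real kernel's code):

```c
/* Process states as the scheduler sees them. */
typedef enum { READY, RUNNING, WAITING } pstate;

/* A page fault moves the current process off the CPU onto a wait
 * queue: the virtual memory code calls into the process code. */
pstate on_page_fault(pstate s)
{
    return s == RUNNING ? WAITING : s;
}

/* The disk-completion interrupt moves it back to READY, so the
 * scheduler may pick it again: the I/O path calls the scheduler. */
pstate on_disk_complete(pstate s)
{
    return s == WAITING ? READY : s;
}
```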

15 Marks


On what kinds of system does it make sense not to have virtual memory?

In a very simple system, or one with a very well-known and limited set of functions, virtual memory is an unnecessary overhead. This will be typical of an embedded system. Also, because it can sometimes cause a delay because of paging faults, it may be inappropriate in some realtime systems.

4 Marks



Explain what a scheduler does, and the data structures it uses.

A scheduler chooses the next process to be run whenever a change of current process is appropriate. It maintains several lists of processes: one of them holds processes which are ready to run, and the others hold processes which are waiting for certain things to happen, typically events caused by peripheral devices such as disk drives completing an operation.
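A minimal sketch of those data structures; the field names are illustrative, and a process control block lives on exactly one queue at a time:

```c
#include <stddef.h>

/* Process control block, linked into whichever queue holds it --
 * the ready queue or a per-device wait queue. */
typedef struct pcb {
    int         pid;
    int         priority;
    struct pcb *next;
} pcb;

typedef struct {
    pcb *head, *tail;
} queue;

void enqueue(queue *q, pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

pcb *dequeue(queue *q)
{
    pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

/* The core of the scheduler: pick the next ready process to run.
 * FIFO here; a real scheduler would consider priority too. */
pcb *pick_next(queue *ready)
{
    return dequeue(ready);
}
```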

7 Marks


In what situations is the scheduling code entered, and what does it do in each of these?

The scheduler is entered whenever any process has to give up the CPU, such as on a blocking I/O operation like reading part of a file from disk. When this happens, the current process is taken off the ready queue and put onto the queue for the device concerned. It is also entered from interrupt handlers, which typically indicate that a device has finished an operation, so a process waiting for that device can be taken off the device queue and put back onto the ready queue. Each time the scheduler completes running, it picks a process from the ready queue and makes that the current process. The scheduler is also entered when processes do locking operations such as monitor enter or exit.

12 Marks


Explain the factors to be taken into account (such as priority) in designing a scheduling algorithm.

Priority is probably the most important thing for the scheduler to take into account in choosing the process to run next. Among processes of the same priority, it usually makes sense to apply some notion of ``fairness'', such as choosing a process which has not had much CPU time recently.
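A sketch of that choice rule -- highest priority first, least recent CPU time as the tie-breaker. The structure is illustrative, not a real kernel's algorithm:

```c
/* One entry per process, as the scheduler might see it. */
typedef struct {
    int pid;
    int priority;      /* larger = more urgent */
    int recent_cpu;    /* CPU time consumed recently */
    int runnable;      /* 0 if blocked */
} task;

/* Return the index of the task to run next, or -1 if none runnable:
 * highest priority wins; among equal priorities, the task with the
 * least recent CPU time is chosen, for fairness. */
int pick_task(const task *t, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!t[i].runnable) continue;
        if (best < 0 ||
            t[i].priority > t[best].priority ||
            (t[i].priority == t[best].priority &&
             t[i].recent_cpu < t[best].recent_cpu))
            best = i;
    }
    return best;
}
```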

6 Marks



Some complex applications such as rendering an animated film or matching DNA sequences can be parallelized to take advantage of multiple CPUs. Preferably using fragments of C-like pseudocode, explain the use such an application makes of the operating system, paying particular attention to how processes are set up and how they are co-ordinated with each other.

Use of fork and exec in a loop to create a family of processes; synchronisation calls such as semaphore get and put, and perhaps something to do with shared memory and/or message queues. Exit and wait would be good as well.
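A minimal sketch of that pattern, assuming a POSIX system: fork one worker per slice, then wait for them all and combine the results. For brevity each worker returns its partial result in its exit status, which only works for small values; a real application would use shared memory, pipes, or message queues instead.

```c
#include <sys/wait.h>
#include <unistd.h>

/* Sum an array using nworkers child processes, one slice each. */
int parallel_sum(const int *data, int n, int nworkers)
{
    for (int w = 0; w < nworkers; w++) {
        pid_t pid = fork();
        if (pid == 0) {                    /* worker: sum one slice */
            int sum = 0;
            for (int i = w; i < n; i += nworkers)
                sum += data[i];
            _exit(sum & 0xff);             /* report partial result */
        }
    }
    /* Parent: join all workers and combine their partial results. */
    int total = 0, status;
    while (wait(&status) > 0)
        if (WIFEXITED(status))
            total += WEXITSTATUS(status);
    return total;
}
```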

16 Marks


In the context of such an application, explain the difference between processes and threads. How would you choose which of these to use in a particular application?

The main difference is that threads share an address space whereas processes have their own address spaces. Different library calls are needed to set them up. Threads often make sense for applications where many things are being done to the same data structure at the same time.
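Because threads share the address space, they can update one data structure directly -- which is exactly why they need explicit locking. A minimal sketch using POSIX threads; the counts are arbitrary:

```c
#include <pthread.h>

#define NTHREADS 4
#define NITERS   1000

static long counter;                       /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);         /* serialise the shared update */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run NTHREADS threads over the shared counter and return the total;
 * with the mutex, no increments are lost. */
long run_threads(void)
{
    pthread_t t[NTHREADS];
    counter = 0;
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return counter;
}
```

The equivalent with processes would need explicitly shared memory (e.g. mmap) and a process-shared semaphore, which is the extra setup cost the answer alludes to.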

9 Marks