Define Virtual Memory, Thrashing and Threads

Virtual memory is a technique that allows the execution of processes that are not completely in memory. 

As the number of processes submitted to the CPU for execution increases, CPU utilization also increases. But if the number of processes keeps increasing beyond a certain point, CPU utilization falls sharply and sometimes approaches 0. This situation is called thrashing.

A thread is similar to a sequential program such as a process because it has a beginning, an end, and a sequence. A thread is also known as a lightweight process because it runs within a program using the resources actually allocated to that program or process. A thread also has its own life cycle and can share a processor. Each thread has its own stack, program counter, and set of registers.


This article defines virtual memory, thrashing, and threads in detail.

Virtual Memory

Generally, a process must be brought into main memory before the CPU can execute it. Virtual memory is a technique that allows a large logical address space to be mapped onto a smaller physical memory. It allows large processes to be executed, which increases CPU utilization. It abstracts main memory into an extremely large, uniform array of storage and separates logical memory, as viewed by the user, from physical memory.

Virtual memory allows processes to share files and address space and it also provides an efficient mechanism for process creation. One advantage of virtual memory is that a program can be larger than physical memory.

Virtual memory is the separation of user logical memory from physical memory. This separation allows a large virtual memory to be provided for programmers even when only a smaller physical memory is available. The structure can be pictured as a large virtual address space mapped onto a smaller physical memory.

Virtual memory can be implemented using demand paging and segmentation systems. In addition to separating logical memory from physical memory, virtual memory allows files and memory to be shared by different processes through page sharing. Page sharing also provides an efficient mechanism for process creation.
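To make the mapping concrete, here is a minimal sketch of how a demand-paging system might translate a virtual address through a page table. The page size and page-table contents are illustrative assumptions, not taken from the article:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

# Page table for one hypothetical process: virtual page number ->
# physical frame number, or None if the page is not resident
# (referencing it would raise a page fault and trigger demand paging).
page_table = {0: 7, 1: 3, 2: None}

def translate(virtual_address):
    # Split the virtual address into a page number and an offset,
    # then relocate the page to its physical frame.
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table.get(page)
    if frame is None:
        raise RuntimeError("page fault: page %d not in memory" % page)
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```

The process sees only the large, contiguous virtual address space; the page table hides where (or whether) each page actually resides in physical memory.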


Thrashing

If CPU utilization is too low, we increase the degree of multiprogramming by introducing a new process to the system.

Suppose a process enters a new phase in its execution and needs more frames. It starts faulting and taking frames away from other processes. These faulting processes use the paging device to swap pages in and out. As processes wait for the paging device, CPU utilization decreases.

The CPU scheduler, finding a decrease in CPU utilization, increases the degree of multiprogramming. The new processes start taking frames from other processes, causing more page faults and a longer queue for the paging device. As a result, CPU utilization decreases further, which makes the CPU scheduler increase the degree of multiprogramming even more. This causes thrashing, and throughput decreases.

This can be represented as a graph of CPU utilization against the degree of multiprogramming.

From the figure, we can see that CPU utilization increases with the degree of multiprogramming, although more slowly, until a maximum is reached.

Now if the degree of multiprogramming increases even further, thrashing comes into the picture, and CPU utilization drops sharply. 

At this point, we must decrease the degree of multiprogramming in order to increase CPU utilization and stop thrashing. 

The effects of thrashing can be minimized by local replacement (a priority replacement algorithm). With local replacement, if one process starts thrashing, it cannot steal frames from another process and cause the latter to thrash as well.

If processes are thrashing, the average page-fault service time will increase because they will spend most of their time queued for the paging device. The effective access time will therefore increase.
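The standard effective-access-time formula makes this concrete. The numbers below (a 200 ns memory access and an 8 ms page-fault service time) are illustrative assumptions, not figures from the article:

```python
def effective_access_time(p, memory_ns, fault_ns):
    # EAT = (1 - p) * memory access time + p * page-fault service time,
    # where p is the probability of a page fault on a memory reference.
    return (1 - p) * memory_ns + p * fault_ns

memory_ns = 200        # assumed main-memory access time (ns)
fault_ns = 8_000_000   # assumed 8 ms page-fault service time (ns)

# Even a 0.1% fault rate slows memory access by a factor of about 40.
print(effective_access_time(0.001, memory_ns, fault_ns))  # 8199.8 ns
```

When thrashing drives p up, the page-fault term dominates and effective access time explodes, which is why utilization collapses.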

The working set strategy starts by looking at how many frames a process is actually using. This approach is known as the locality model of process execution.

The locality model states that, as a process executes, it moves from locality to locality. A locality is a set of pages that are actively used together. It also states that all programs will exhibit a basic memory reference structure. 

WORKING SET MODEL – Based on the assumption of locality:

  • This model uses a parameter Δ to define a working-set window: the idea is to examine the most recent Δ page references. 
  • If a page is in active use, it will be in the working set; if it is no longer being used, it will drop out of the working set Δ time units after its last reference. Thus the working set is an approximation of the program’s locality. 
  • The accuracy of the working set depends on the selection of Δ.

If  Δ is too small, it will not encompass the entire locality.

If  Δ is too large, it will overlap several localities.

If  Δ is infinite, the working set is the set of pages touched during the process execution.

  • The most important property of a working set is its size, WSSᵢ. The total demand for frames across all processes can be computed as:

                  D = Σ WSSᵢ        (D – total demand for frames)

  • Each process is actively using the pages in its working set, so process i needs WSSᵢ frames.
  • If the total demand D exceeds the number of available frames, thrashing will occur because some processes will not have enough frames. 
  • The working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible, thus optimizing CPU utilization.
  • It is difficult to keep track of the working set. The working set window is a moving window. At each memory reference, a new reference appears at one end and the oldest reference drops off at the other end.
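The steps above can be sketched in code. The reference string, Δ, and the per-process working-set sizes below are invented for illustration:

```python
def working_set(references, t, delta):
    # Pages referenced in the most recent `delta` references ending at
    # time t -- the working-set window described above.
    start = max(0, t - delta + 1)
    return set(references[start:t + 1])

refs = [2, 1, 5, 7, 7, 7, 5, 1, 2, 3]   # hypothetical reference string
ws = working_set(refs, len(refs) - 1, 4)  # window = last 4 references
print(ws)  # {1, 2, 3, 5}

# Total demand D = sum of working-set sizes across processes; if D
# exceeds the available frames m, thrashing is likely and a process
# should be suspended.
wss = [len(ws), 3, 5]   # WSS_i for three hypothetical processes
D = sum(wss)
m = 10                  # assumed number of available frames
print("thrashing likely" if D > m else "enough frames")
```

Note how the window slides: at each new reference, the newest page enters the window and the oldest reference drops off the other end, exactly as the last bullet describes.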


Threads

A thread is the basic unit of CPU utilization. It consists of a thread ID, a program counter, a register set, and a stack. A thread shares its code section, data section, and other operating-system resources, such as open files and signals, with the other threads belonging to the same process.

A single thread is used for a traditional (heavyweight) process. If a process has multiple threads of control, it can perform more than one task at a time. 

Many software packages that run on modern PCs are multithreaded. An application is typically implemented as a separate process with several threads of control.

E.g., a word processor may have a thread for displaying graphics, another thread for responding to keystrokes from the user, and a third thread for performing spelling and grammar checking.
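A hedged sketch of this idea: each task of a hypothetical word processor runs as a thread within one process, operating on the same shared document data without copying it. The task names and document contents are invented for illustration:

```python
import threading

# Shared process data: all threads see the same document object.
document = {"text": "helo world", "keystrokes": 0}
lock = threading.Lock()

def handle_keystrokes():
    # One thread responds to user input.
    with lock:
        document["keystrokes"] += 1

def spell_check():
    # Another thread fixes spelling in the same shared text.
    with lock:
        document["text"] = document["text"].replace("helo", "hello")

threads = [threading.Thread(target=handle_keystrokes),
           threading.Thread(target=spell_check)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(document)  # both threads mutated the one shared dictionary
```

Because the threads belong to the same process, they share the document directly; with separate processes, each would get its own copy and explicit inter-process communication would be needed.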

A single application is often required to perform several similar tasks. E.g., a web server accepts client requests for web pages, images, etc., and may have several clients accessing it concurrently. If the web server runs as a single-threaded process, it can service only one client at a time, so other clients may have to wait for their requests to be served. 

One solution is to have the server run as a single process that accepts requests and, on receiving a request, creates a separate process to service it. However, this process creation is time-consuming and resource-intensive.

A better solution is to use one process that contains multiple threads, known as a multithreaded process. Here, the web server creates a separate thread to service each request, while the main thread continues listening for client requests, rather than creating another process. 
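A minimal sketch of this multithreaded-server idea, assuming a simple echo behaviour for each request (the socket setup and echo protocol are illustrative assumptions, not from the article):

```python
import socket
import threading

def handle_client(conn):
    # Each request is serviced by its own thread, so slow clients do
    # not block the others.
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)  # echo the request back

def serve(server_sock, max_clients):
    # The main thread keeps listening; each accepted connection is
    # handed off to a new worker thread.
    for _ in range(max_clients):
        conn, _addr = server_sock.accept()
        threading.Thread(target=handle_client, args=(conn,)).start()
```

Compared with forking a process per request, creating a thread here is cheap and the workers share the server's address space (caches, configuration) for free.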


Benefits of Multithreading

  1. Responsiveness: Multithreading increases responsiveness to the user. It allows a program to continue running even if part of it is blocked or performing a lengthy operation, which is especially useful in interactive applications. 
  2. Resource sharing: Threads allow an application to have several different threads of activity within the same address space. Threads share the memory and resources of the process to which they belong. 
  3. Economy: It is more economical to create and context-switch threads than processes, since process creation is costly.
  4. Utilization of multiprocessor architectures: The benefits of multithreading are greatly increased in a multiprocessor architecture, where threads may run in parallel on different processors. A single-threaded process can run on only one CPU. Multithreading increases concurrency. 


Summary

  • Virtual memory involves the separation of logical memory as perceived by the user from physical memory. This allows an extremely large virtual memory to be provided when only a smaller physical memory is available.
  • A process is thrashing if it spends more time in paging than executing.
  • If the CPU scheduler increases the degree of multiprogramming and CPU utilization drops sharply, then thrashing has occurred.
  • Effects of thrashing can be minimized using a priority replacement algorithm and working set model. 
  • A thread is the smallest unit of execution. It has its own program counter, stack, and set of registers, and it runs within a process.

Special thanks to Ami Jangid for contributing to this article on takeUforward. If you also wish to share your knowledge with the takeUforward fam, please check out this article