
Understanding Operating System Evolution and Functions
Dive into the world of Operating Systems, exploring their evolution from serial processing to multiprogramming, and understanding the crucial functions they perform in managing computer resources efficiently and securely.
Presentation Transcript
What is an Operating System? An Operating System is a large collection of software that acts as an interface between the user of a computer and the computer hardware. It manages the resources of a computer system, such as memory, processor, file system and Input/Output devices. It also keeps track of the status of each resource and decides who will have control over computer resources, for how long, and when.
Evolution of Operating System The evolution of the O/S involves several phases: 1. Serial Processing: jobs are allotted to the processor one by one; after the completion of one job, the next one gets the processor time. 2. Batch Processing: jobs are stored in the memory unit as a job pool, and the O/S can choose the next job to run. Combined with spooling, the I/O operations of one job can overlap with the processor time of another.
Evolution of Operating System 3. Multiprogramming: many programs can run concurrently using the time-sharing mechanism of the O/S, which increases CPU utilization.
Functions of Operating System The O/S simplifies the execution of user programs and makes solving user problems easier. It uses computer hardware efficiently. It allows sharing of hardware and software resources. It provides isolation, security and protection among user programs. It improves overall system reliability through error confinement, fault tolerance, reconfiguration, etc.
Functions of Operating System The main functions of an O/S can be broadly categorized as: Process Management, Memory Management, File Management, Input/Output Management, and Security and Protection.
The Concept of a Process A process is an executing program, including the current values of the program counter, registers and variables. A program is a group of instructions, whereas a process is the activity of executing it. For an O/S, process management functions include: process creation, termination of a process, controlling the process, process scheduling, dispatching, interrupt handling / exception handling, switching between processes, process synchronization, inter-process communication support, and management of the Process Control Block (PCB).
Process States A process has five states, and at any given time it is in exactly one of them. 1. New: the process is being created. 2. Ready: the process is waiting to be assigned to a processor. 3. Running: instructions are being executed. 4. Waiting / Suspended / Blocked: the process is waiting for some event to occur. 5. Halted: the process has finished execution.
Transitions 1. New -> Ready: when the scheduler admits the new process into its queue. 2. Ready -> Running and Running -> Ready: caused by the process scheduler, e.g. when the CPU time allotted to the process expires. 3. Running -> Blocked/Waiting: occurs due to an I/O block or any explicit block. 4. Blocked/Waiting -> Ready: occurs when the external event for which the process was waiting completes. 5. Running -> Halted: when the process terminates.
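The transitions above can be sketched as a small table of legal moves between the five states. This is an illustrative model only, not from the slides; state names follow the slide wording.

```python
# Legal transitions of the five-state process model (illustrative sketch).
LEGAL_TRANSITIONS = {
    ("New", "Ready"),        # admitted by the scheduler
    ("Ready", "Running"),    # dispatched to the CPU
    ("Running", "Ready"),    # time slice expired
    ("Running", "Waiting"),  # I/O request or explicit block
    ("Waiting", "Ready"),    # awaited event completed
    ("Running", "Halted"),   # process terminates
}

def move(state, new_state):
    """Return the new state if the transition is legal, else raise."""
    if (state, new_state) not in LEGAL_TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

Note, for example, that a Waiting process cannot go directly to Running: it must first return to Ready and be dispatched again.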
Implementation of Processes To implement the process model, the O/S maintains a table, an array of structures, called the Process Table or Process Control Block (PCB) or Switch Frame. Each entry identifies a process with information such as the process state, its program counter (PC), stack pointer, memory allocation, status of its open files, and its accounting and scheduling information.
Process Control Block (PCB) A PCB typically contains: process state, process number, program counter, registers, memory limits, and the list of open files. It must contain everything about the process that must be saved when the process is switched from the running state to the ready state, so that it may be restarted later as if it had never been stopped.
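The PCB fields listed above can be sketched as a simple record type. This is a hypothetical toy structure whose field names follow the slide, not any real kernel's PCB layout.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block with the fields named on the slide."""
    pid: int                                   # process number
    state: str = "New"                         # process state
    program_counter: int = 0                   # saved PC
    registers: dict = field(default_factory=dict)   # saved register values
    memory_limits: tuple = (0, 0)              # base and limit of allocation
    open_files: list = field(default_factory=list)  # list of open files
```

On a context switch, the kernel would save the running process's PC and registers into its PCB, then restore another process's PCB contents into the CPU.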
Context Switch A context is the contents of a CPU's registers and program counter at any point of time. A context switch (also called a process switch or task switch) is the switching of the CPU from one process to another. It is also described as the kernel suspending execution of one process on the CPU and resuming execution of some other process that had previously been suspended.
Process Scheduling Scheduling refers to a set of policies and mechanisms supported by the O/S that controls the order in which the work to be done is completed. A scheduler is an O/S module or program that selects the next job to be admitted for execution. The main objective of scheduling is to increase CPU utilization and throughput, where throughput is the amount of work accomplished in a given time interval.
Scheduling Objectives 1. Maximize throughput: scheduling attempts to service the largest possible number of processes per unit time. 2. Maximize the number of interactive users receiving acceptable response times. 3. Be predictable: a given job should take the same amount of time and cost, regardless of the load on the system. 4. Minimize overhead: minimize wasted resource overhead.
Scheduling Objectives 5. Balance resource use: keep the resources of the system busy. 6. Avoid indefinite postponement. 7. Enforce priorities. 8. Degrade gracefully under heavy load: a scheduling mechanism should not collapse under heavy system load. It either prevents overload by not allowing new processes to be created during heavy load, or it services the heavier load by providing a moderately reduced level of service to all processes.
Types of Schedulers There are three types of schedulers: 1. Short-term scheduler 2. Long-term scheduler 3. Medium-term scheduler The short-term scheduler selects the next process for the processor from among the processes already in the ready queue (memory). It executes very frequently and must be very fast in order to achieve good processor utilization.
Types of Schedulers The long-term scheduler selects processes from the process pool and loads them into memory (the ready queue) for execution. It is slower and executes much less frequently. It controls the degree of multiprogramming, i.e. the number of processes in memory (the ready queue), and it selects a good mix of I/O-bound and CPU-bound processes.
Types of Schedulers The medium-term scheduler: it can sometimes be beneficial to reduce the degree of multiprogramming by removing processes from memory and storing them on disk. These processes can later be re-introduced into memory by the medium-term scheduler.
Scheduling Criteria The goal of a scheduling algorithm is to identify the process whose selection will result in the best possible system performance. In order to achieve efficient processor management, the O/S tries to select the most appropriate process from the ready queue. For this selection, the relative importance of the following performance criteria may be considered.
Scheduling Criteria CPU Utilization: the ratio of the processor's busy time to the total time. Processor Utilization = Processor Busy Time / (Processor Busy Time + Processor Idle Time) Throughput: the amount of work completed per unit time. Throughput = Number of processes completed / Time Unit
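A worked example of the two formulas above, using made-up numbers (a processor busy for 90 s of a 100 s interval, completing 5 processes):

```python
# Illustrative numbers, not from the slides.
busy_time, idle_time = 90.0, 10.0        # seconds
processes_completed = 5

utilization = busy_time / (busy_time + idle_time)       # 90 / 100 = 0.9
throughput = processes_completed / (busy_time + idle_time)  # 5 / 100 = 0.05/s
```

So the processor is 90% utilized and completes 0.05 processes per second over this interval.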
Scheduling Criteria Turnaround Time: the interval from the time of submission of a process to the time of its completion. Turnaround Time = t(process completed) - t(process submitted) Waiting Time: the time spent in the ready queue. Waiting Time = Turnaround Time - Processing Time
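The turnaround and waiting formulas can be checked on a hypothetical process submitted at t=2, completed at t=10, and needing 5 time units of CPU:

```python
# Hypothetical process timings (illustrative numbers only).
submitted, completed, processing = 2, 10, 5

turnaround = completed - submitted   # 10 - 2 = 8
waiting = turnaround - processing    # 8 - 5 = 3 time units in the ready queue
```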
Scheduling Criteria Response Time: in a time-sharing system, this is the interval from the submission of a request (the last character typed on the command line) to the first response appearing on the screen. Response Time = t(first response) - t(submission of request)
Scheduling Algorithms A major division among scheduling algorithms is whether they support a pre-emptive or non-pre-emptive scheduling discipline. Pre-emptive Scheduling: an O/S implementing this discipline switches to the processing of a new request before completing the processing of the current request. Pre-emptive scheduling is especially useful for high-priority processes which require immediate response.
Scheduling Algorithms Round Robin scheduling, priority-based scheduling and SRTN (Shortest Remaining Time Next) are considered pre-emptive scheduling algorithms. Non-Pre-emptive Scheduling: a non-pre-emptive scheduler always processes a scheduled request to its completion. First Come First Serve (FCFS) and Shortest Job First (SJF) are considered non-pre-emptive scheduling algorithms.
Scheduling Algorithms 1. First Come First Serve (FCFS): the simplest scheduling algorithm; jobs are scheduled in the order they are received. It tends to favour CPU-bound processes and is an example of non-pre-emptive scheduling. 2. Shortest Job First (SJF): the shortest job in the queue is executed first; also an example of a non-pre-emptive scheduling algorithm.
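A quick sketch comparing FCFS and SJF. The burst times are illustrative numbers, not from the slides; all jobs are assumed to arrive together at t=0, so SJF is just FCFS on the sorted burst list.

```python
def avg_waiting(bursts):
    """Average waiting time when jobs run in the given order (all arrive at t=0)."""
    waiting, elapsed = [], 0
    for burst in bursts:
        waiting.append(elapsed)   # each job waits for everything before it
        elapsed += burst
    return sum(waiting) / len(waiting)

bursts = [24, 3, 3]                   # arrival order: long job first
fcfs = avg_waiting(bursts)            # (0 + 24 + 27) / 3 = 17.0
sjf = avg_waiting(sorted(bursts))     # (0 + 3 + 6) / 3  = 3.0
```

The long job arriving first makes FCFS's average wait much worse than SJF's, which is why FCFS is said to favour CPU-bound (long-burst) processes.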
Scheduling Algorithms 3. Round Robin (RR): Round Robin is a pre-emptive algorithm that allocates the CPU to the process that has been waiting the longest. It is one of the oldest, simplest and most widely used algorithms. The CPU time is divided into time slices: no process can run for more than one quantum (the small time slice allocated) while others are waiting in the ready queue. A process requiring more CPU time goes to the end of the queue and waits.
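The Round Robin description above can be sketched with a FIFO ready queue. This is a minimal simulation under simplifying assumptions (all processes arrive at t=0, context switches take zero time):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return {pid: completion_time} for CPU bursts served round-robin."""
    queue = deque((pid, burst) for pid, burst in enumerate(bursts))
    time, completion = 0, {}
    while queue:
        pid, remaining = queue.popleft()     # longest-waiting process runs next
        run = min(quantum, remaining)        # run for at most one quantum
        time += run
        if remaining - run > 0:
            queue.append((pid, remaining - run))  # unfinished: back of the queue
        else:
            completion[pid] = time           # finished within this slice
    return completion
```

For example, with bursts [5, 3] and a quantum of 2, process 1 finishes at t=7 and process 0 at t=8, the two alternating one quantum at a time.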
Scheduling Formulas Again TAT = t(process completed) - t(process submitted) Waiting Time = TAT - Processing Time Response Time = t(first response) - t(submission of request)
Some Definitions and Concepts Uniprogramming: a uniprogramming system processes only one program at a time, and all system resources are available exclusively to that job until its completion. Multiprogramming: a multiprogramming system processes two or more different and independent programs at the same time. Multitasking: the interleaved execution of multiple programs or jobs (often referred to as tasks of the same user) in a single-user system.
Some Definitions and Concepts Multithreading: in multithreading, a single process consists of an address space and one or more threads of control. Each thread has its own program counter, its own register state and its own stack. However, all the threads of a process share the same address space. Threads are often referred to as light-weight processes.
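The shared address space mentioned above can be demonstrated with two threads updating one variable. A small illustrative sketch (the lock keeps the shared counter consistent, since both threads write to the same memory):

```python
import threading

counter = 0                    # one variable, visible to every thread of the process
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        with lock:             # serialize updates to the shared address space
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now 2000: both threads updated the very same variable.
```

Two separate processes could not share `counter` this way without explicit inter-process communication; that sharing is exactly what makes threads "light-weight".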
Some Definitions and Concepts Multiprocessing: systems having multiple processors (CPUs) are multiprocessing systems. They can execute multiple processes concurrently, using the multiple CPUs to process either instructions from different and independent programs, or different instructions from the same program, simultaneously. This is also termed parallel processing.
Some Definitions and Concepts Time-Sharing: a mechanism to provide simultaneous interactive use of a computer system by many users, in such a way that each user feels the computer is working dedicatedly for him/her. It uses multiprogramming with a special CPU scheduling algorithm to achieve this.
Some Definitions and Concepts Fixed-Partition and Variable-Partition Memory: in a multiprogramming memory model, multiple user processes can reside simultaneously in main memory. The two memory management schemes which operating systems use to facilitate this are: Multiprogramming with a fixed number of partitions, where the partition size remains the same irrespective of the size of the process; wastage of main memory space is the main disadvantage of this scheme. Multiprogramming with a variable number of partitions, where each partition is created to match the size of the process.
Some Definitions and Concepts Virtual Memory: a memory management scheme that allows execution of a process without the need to load the entire process into main memory at one time. A portion of secondary memory holds the bulk of the process that the CPU does not need at the moment; those parts are transferred to main memory as and when required for processing.
Some Definitions and Concepts The three basic concepts which the virtual memory mechanism uses for its realization are on-line secondary storage, swapping, and demand paging. Swapping: storing programs in virtual memory (a portion of on-line secondary memory treated virtually as main memory) and then transferring blocks of data into main memory, and vice versa, whenever they are needed, is called swapping. This technique is used to process large programs, or several programs, with limited memory.
Some Definitions and Concepts Demand Paging: in a virtual memory system, the operating system partitions all processes into pages, which reside on on-line secondary storage. The operating system also partitions physical memory into page frames of the same size. Now, instead of loading an entire process before its execution can start, the operating system uses a swapping algorithm called demand paging, which swaps in only those pages of the process that it currently needs in memory for its execution to continue.
Some Definitions and Concepts This idea is based on the observation that, since a computer executes a program's instructions one by one, it does not need all parts of the program in memory simultaneously. When there is no free page frame in memory to accommodate a page that the OS needs to swap in to continue the process execution, it invokes a page-replacement algorithm to create one for the accessed page. Page replacement deals with selecting a page that is residing in memory but is not currently in use. The OS swaps out the selected page to free the page frame it occupies, and then swaps in the accessed page of the process in the freed page frame.
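Page replacement can be sketched with the simplest policy, FIFO (evict the page that has been in memory longest). FIFO is one possible replacement algorithm, chosen here only for illustration; the slides do not name a specific policy, and the reference string below is a textbook example, not from the slides.

```python
from collections import deque

def fifo_faults(reference_string, frames):
    """Count page faults under FIFO replacement for a page reference string."""
    memory, faults = deque(), 0
    for page in reference_string:
        if page not in memory:        # page fault: page must be swapped in
            faults += 1
            if len(memory) == frames:
                memory.popleft()      # evict the oldest resident page
            memory.append(page)
        # on a hit, FIFO leaves the queue untouched
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
faults_3 = fifo_faults(refs, 3)   # 9 faults with 3 frames
faults_4 = fifo_faults(refs, 4)   # 10 faults with 4 frames
```

Curiously, this reference string faults more with 4 frames than with 3 (Belady's anomaly), a known quirk of FIFO replacement.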
File Management A file is a collection of related information, which has a definite name to identify it uniquely to the system, together with its data and attributes. The file's data is its contents: a sequence of bits, bytes, lines or records. The file management module of an OS takes care of file-related activities such as structuring, accessing, naming, sharing and protection of files. File Access Methods: to use information stored in a file, a system needs to access it and read its contents into main memory. Normally a computer system supports the following two file access methods at the operating system level.
File Management Sequential access: in this method, a process can read the bytes or records of the file in the order in which they are stored, starting at the beginning. Reading bytes or records randomly or out of order is not possible, but an application can rewind and read a sequential-access file as often as needed. Random access: applications can access the contents of a random-access file in any order, irrespective of the order in which the bytes or records are stored.
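The two access methods can be illustrated with ordinary OS file operations: sequential reads consume bytes in stored order, while a seek jumps to an arbitrary offset. The file name and contents here are made up for the example.

```python
import os
import tempfile

# Create a small demo file (hypothetical name and contents).
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"ABCDEFGH")

with open(path, "rb") as f:
    first = f.read(3)    # sequential access: bytes in stored order -> b"ABC"
    f.seek(6)            # random access: jump straight to byte offset 6
    later = f.read(2)    # -> b"GH"
    f.seek(0)            # "rewind", as a sequential file allows
    again = f.read(3)    # -> b"ABC" once more
```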
Device Management For processing data, a computer must first input data and programs, for which it needs input devices; similarly, for producing the results of processing, it needs output devices. The device management module of the OS takes care of controlling all I/O devices of a computer system and provides a simple and easy-to-use interface to these devices. A computer uses device controllers to connect I/O devices to it. Each device controller is in charge of, and controls, a set of devices of a specific type. For example, a disk controller controls disk drives and a printer controller controls printers.
Device Management A device controller maintains some local buffer storage and is responsible for moving data between the I/O device it controls and its local buffer storage. Each device controller also has a few registers that it uses for communicating with the CPU. In some computers these registers are part of the regular memory address space; this scheme is called memory-mapped I/O. In other computers, the OS uses a special address space for I/O and allocates a portion of it to each device controller.
Device Management To perform an I/O operation, the OS writes the relevant commands and their associated parameters into the appropriate controller's registers, and resumes its operation. The device controller then examines the contents of these registers and performs the necessary action for the I/O operation. For example, the action for a read request will be to transfer data from the specified input device to the controller's local buffer. Once the transfer of data from the input device to the controller's local buffer is complete, the OS uses one of the following two methods to transfer the data from the controller's local buffer to the appropriate memory area of the computer.
Device Management Non-DMA transfer: in this method, as soon as the transfer of data from the input device to the controller's local buffer is complete, the controller sends an interrupt signal to the CPU. The CPU then stops what it is currently doing and transfers control of execution to the starting address of the service routine which handles the interrupt. On execution, the interrupt service routine transfers the data from the local buffer of the device controller to main memory.
Device Management As soon as the data transfer is complete, the CPU resumes the job it was doing before the interruption. In this case the CPU is involved in the transfer of data from the device controller's buffer to main memory: the interrupt service routine reads one byte or word at a time from the device controller's buffer and stores it in memory.
Device Management DMA transfer: to free the CPU from the data transfer operation, many device controllers support the Direct Memory Access (DMA) mechanism. In this method, when the OS prepares for a data transfer operation, it writes the relevant commands and their associated parameters into the controller's registers. The command parameters include the starting memory address (from/to where the data transfer is to take place) and the number of bytes of data.
Device Management Now, after the controller has read the data from the device into its buffer, it copies the data from the buffer into main memory (at the specified memory address) one byte or word at a time. Thus the CPU is not involved in this data transfer operation; the controller only sends an interrupt to the CPU after copying the entire data to memory.
Computer Virus A computer virus is a piece of code attached to a legitimate program which, when executed, infects other programs in the system by replicating and attaching itself to them. In addition to this replicating effect, a virus normally does some other damage to the system, such as corrupting or erasing files.
Computer Worm Programs that spread from one computer to another in a network of computers are worms. A worm spreads by taking advantage of the way in which a computer network shares resources and, in some cases, by exploiting flaws in standard software which computer networks use. It may perform destructive activities after arriving at a network node.