
Memory Management and Address Binding in Operating Systems
Explore the concepts of memory management, address binding, logical and physical address spaces, and loading processes in operating systems. Learn how memory units, CPU, and hardware devices work together to manage memory efficiently.
OPERATING SYSTEMS
H. Bodur
MEMORY MANAGEMENT
Memory consists of a large array of words or bytes, each with its own address. The memory unit sees only a stream of memory addresses; it does not know how they are generated. A program must be brought into memory and placed within a process for it to be run.
MEMORY MANAGEMENT
Address binding of instructions and data to memory addresses can happen at three different stages:
Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time: Relocatable code must be generated if the memory location is not known at compile time.
Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another.
LOGICAL - PHYSICAL ADDRESS SPACE
A logical address is generated by the CPU; it is also referred to as a virtual address. A physical address is the address seen by the memory unit. The set of all logical addresses generated by a program is its logical address space; the set of all physical addresses corresponding to these logical addresses is its physical address space. Logical and physical addresses are the same in compile-time and load-time address-binding schemes; they differ in the execution-time address-binding scheme. The run-time mapping from virtual to physical addresses is done by a hardware device called the memory management unit (MMU).
LOGICAL - PHYSICAL ADDRESS SPACE
The user program never sees the real physical addresses. It can create a pointer to location x, store it in memory, manipulate it, and compare it to other addresses, but it always deals with logical addresses. The memory-mapping hardware converts logical addresses into physical addresses: the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
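The relocation-register scheme above can be sketched as a toy model. This is illustrative only: the register values and the `translate` function are invented for the example, and a real MMU does this in hardware, with an addressing error raised as a trap to the operating system.

```python
# Toy model of a relocation-register MMU (names and values are invented for illustration).
RELOCATION_REGISTER = 14000  # base physical address assigned to the process
LIMIT_REGISTER = 4096        # size of the process's logical address space

def translate(logical_address):
    """Map a logical address to a physical one, trapping on out-of-range access."""
    if not 0 <= logical_address < LIMIT_REGISTER:
        # In hardware this would be a trap to the OS, not a Python exception.
        raise MemoryError(f"addressing error: {logical_address} outside logical space")
    return RELOCATION_REGISTER + logical_address

physical = translate(346)  # 14000 + 346 = 14346
```

The check against the limit register is what keeps one process from generating addresses inside another process's memory.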
LOGICAL - PHYSICAL ADDRESS SPACE
The operating system loads library functions during the execution of various programs; as programs run, the required routines are brought into memory. There are two types of loading: static and dynamic.
STATIC LOADING
Static loading is the process of loading the complete program into main memory before it is executed.
DYNAMIC LOADING
Without dynamic loading, the complete program and all process data must be in physical memory to execute a process, so the process size is restricted by the amount of physical memory available. Dynamic loading is used to achieve better memory utilization: a routine is not loaded until it is invoked, and all routines are kept on disk in a relocatable load format. The main advantage is that an unused routine is never loaded. Dynamic loading is especially useful when large amounts of code, such as error-handling routines, are needed only occasionally.
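As an analogy only (the real mechanism lives in the OS loader, not in an application language), Python's standard `importlib` module shows the load-on-first-use idea: the module is not brought into memory until the routine is actually invoked. The `call_routine` helper is invented for this sketch.

```python
import importlib

def call_routine(module_name, func_name, *args):
    """Load a routine only when it is first invoked (dynamic-loading style).

    importlib.import_module loads the module on demand and caches it,
    so later calls reuse the already-loaded copy.
    """
    module = importlib.import_module(module_name)
    return getattr(module, func_name)(*args)

# The math module is loaded here, at the first call, not at program start.
result = call_routine("math", "sqrt", 16.0)
```

An unused routine costs nothing: if `call_routine` is never reached for a given module, that module is never loaded.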
COMPARISON BETWEEN STATIC AND DYNAMIC LOADING
Static Loading:
Loads the whole program into main memory before execution begins.
Typical of structured programming languages such as C.
The linker joins the object program and other object modules into a single program; linking and compiling complete without additional software.
The entire program and its data are loaded into memory before execution starts.
When static loading is used, static linking is used as well.
Faster processing, since no modules have to be loaded at run time.
Dynamic Loading:
Loads a program into main memory on demand.
Common in object-oriented languages such as C++ and Java.
All modules are loaded dynamically; the developer references them, and the rest of the job is completed at execution time.
Linking occurs dynamically, in a relocatable form; data is loaded into memory only when the program requires it, bit by bit at run time.
When dynamic loading is used, dynamic linking is used as well.
Slower processing speed than static loading.
STATIC LINKING
When we launch a program's .exe (executable) file, all the necessary contents of the binary file are loaded into the process's virtual address space. Most programs, however, also need functions from the system libraries, and these library functions must be loaded too. In the simplest case, the necessary library functions are embedded directly in the program's executable binary file. Such a program is statically linked to its libraries, and a statically linked executable can begin running as soon as it is loaded. Static linking is performed during compilation of the source program; linking happens before execution. Disadvantage: every program generated must contain its own copies of exactly the same common system library functions. In terms of both physical memory and disk-space usage, it is much more efficient to load the system libraries into memory only once; dynamic linking allows this single loading to happen.
DYNAMIC LINKING
Every dynamically linked program contains a small, statically linked function that is called when the program starts. This function maps the link library into memory and runs the code it contains. By reading information in sections of the library, the link library determines which dynamic libraries the program requires, along with the names of the variables and functions needed from them. It then maps those libraries into the middle of virtual memory and resolves the references to the symbols they contain. The libraries are compiled into position-independent code (PIC), which can run at any address in memory. Advantage: the memory requirements of the program are reduced. A DLL is loaded into memory only once, while more than one application may use the same DLL at the same time, saving memory space. Application support and maintenance costs are also lowered.
Static Linking:
The process of combining all necessary library routines and external references into a single executable file at compile time.
Occurs at compile time.
Generally larger file size, as all required libraries are included in the executable.
Less flexible: any updates or changes to the libraries require recompiling and relinking the entire program.
Faster program startup and direct execution, as all libraries are already linked.
Involves executables and static libraries with extensions such as .exe, .elf, .a, and .lib.
Dynamic Linking:
The process of linking external libraries and references at run time, when the program is loaded or executed.
Occurs at run time.
Smaller file size, as libraries are linked dynamically at run time.
More flexible: libraries can be updated or replaced without recompiling the program.
Slightly slower program startup due to the additional linking work, but the overall performance impact is minimal.
Involves shared libraries with extensions such as .dll, .so, and .dylib.
SWAPPING
Swapping is a memory management scheme in which a process can be temporarily swapped out of main memory to secondary memory so that main memory can be made available to other processes; it is used to improve main-memory utilization. The place in secondary memory where a swapped-out process is stored is called swap space. The purpose of swapping is to access data kept on the hard disk and bring it into RAM so that application programs can use it; swapping is used only when the data is not already present in RAM. Although swapping affects system performance, it makes it possible to run processes that are larger, and more numerous, than would otherwise fit. For this reason swapping is also referred to as a technique for memory compaction.
SWAPPING
Swapping consists of two operations: swap-out and swap-in. Swap-out removes a process from RAM and writes it to the hard disk. Swap-in brings a process back from the hard disk into main memory (RAM). Example: suppose a user process is 8192 KB in size and swapping uses a standard hard disk with a data transfer rate of 2 MB per second. We can calculate how long the transfer from main memory to secondary memory will take.
SWAPPING
User process size: 8192 KB
Data transfer rate: 2 MB/s = 2048 KB/s
Time = process size / transfer rate = 8192 / 2048 = 4 seconds = 4000 milliseconds
Each direction of the transfer therefore takes 4000 milliseconds; counting both the swap-out and the subsequent swap-in, a full swap takes 8000 milliseconds.
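The arithmetic above can be checked directly. The variable names are invented for the sketch; the figures are the slide's own (8192 KB process, 2 MB/s disk).

```python
# Worked version of the slide's swap-time arithmetic.
process_size_kb = 8192      # user process size in KB
transfer_rate_kbps = 2048   # 2 MB/s expressed in KB/s

# One direction of the transfer (swap-out alone, or swap-in alone).
one_way_ms = process_size_kb / transfer_rate_kbps * 1000

# A full swap moves the process out and later back in.
round_trip_ms = 2 * one_way_ms
```

Note that a real swap-time estimate would also add the disk's seek and rotational latency, which this sketch ignores, as the slide does.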
ADVANTAGES OF SWAPPING
It helps the CPU manage multiple processes within a single main memory.
It helps to create and use virtual memory.
Swapping allows the CPU to work on multiple tasks concurrently.
It improves main-memory utilization.
DISADVANTAGES OF SWAPPING
If the computer system loses power during substantial swapping activity, the user may lose all information related to the program.
If the swapping algorithm is not good, the number of page faults can increase and overall processing performance can decrease.
CONTIGUOUS MEMORY ALLOCATION
Memory allocation is the assignment of memory space to software and processes; memory itself is a large array of bytes. There are two basic types of memory allocation: contiguous and non-contiguous. Contiguous memory allocation places each process in a single memory region, whereas non-contiguous memory allocation spreads a process across many locations in different sections of memory.
CONTIGUOUS MEMORY ALLOCATION
A program or process requires memory space in order to run, so a process must be given an amount of memory that corresponds to its needs; this procedure is called memory allocation. Contiguous memory allocation is one such strategy: as the name suggests, each process is allocated one contiguous block of memory. Whenever a process requests to enter main memory, we allot it a continuous segment from the free area, sized according to the process.
TECHNIQUES FOR CONTIGUOUS MEMORY ALLOCATION
A single contiguous run of memory is assigned according to the needs of the process making the memory request. One way to do this is to create fixed-size memory partitions and designate a single process to each partition. The degree of multiprogramming is then constrained by the number of fixed partitions in memory.
TECHNIQUES FOR CONTIGUOUS MEMORY ALLOCATION
This allocation also results in internal fragmentation. Consider the scenario where a process is given a fixed-size memory block that is a little larger than what it needs; the leftover memory space in the block is internal fragmentation. Once the process inside a partition has completed, the partition becomes available for another process. In the variable partitioning scheme, the OS keeps a table that lists which memory partitions are free and which are used by processes. Contiguous memory allocation reduces address-translation overhead, speeding up process execution. Since a process must be given a continuous empty block of space to reside in, there are two ways to allocate it:
FIXED-SIZE PARTITIONING METHOD
In this method of contiguous memory allocation, each process is given a fixed-size contiguous block in main memory. The entire memory is partitioned into continuous blocks of fixed size, and each time a process enters the system it is given one of the available blocks; every process receives a block of the same size, regardless of the size of the process. This approach is also called static partitioning.
FIXED-SIZE PARTITIONING METHOD
In the figure, three processes in the input queue require memory space. Because we are using the fixed-size partition technique, the memory is divided into fixed-size blocks. The first process, which is 3 MB in size, is given a 5 MB block, and so is the 4 MB process; the second process, which is only 1 MB, is likewise given a 5 MB block. The size of the process does not matter: each one is assigned the same fixed-size memory block. Clearly, under this scheme the number of contiguous blocks into which memory is partitioned is determined by the size of each block, and this in turn determines how many processes can reside in main memory at once.
ADVANTAGES
This strategy is easy to employ because each block is the same size; all that is left to do is allocate processes to the fixed memory blocks that have been divided up.
It is simple to keep track of how many memory blocks are still available, which determines how many further processes can be allocated memory.
This approach can be used in a system that requires multiprogramming, since numerous processes can be kept in memory at once.
DISADVANTAGES
Although the fixed-size partitioning strategy offers numerous benefits, it has a few drawbacks as well:
Because block sizes are fixed, we cannot allocate space to a process whose size exceeds the block size.
The degree of multiprogramming is determined by the block size; only as many processes can reside in memory as there are blocks.
If a block is larger than the process assigned to it, the difference is left as unusable free space inside the block.
FLEXIBLE PARTITIONING METHOD
In this style of contiguous memory allocation, no fixed blocks or memory partitions are created. Instead, each process is given a variable-size block according to its needs: whenever a new process requests memory and space is available, it is allocated exactly that amount. As a result, the size of each block is determined by the needs of the process that uses it.
FLEXIBLE PARTITIONING METHOD
In the figure there are no partitions of set sizes. Instead, the first process is given just 5 MB of memory because that is all it requires, and the remaining three processes are similarly given only the amount of space they need. Because the block sizes are flexible and determined as new processes arrive, this method is also known as dynamic partitioning.
ADVANTAGES
There is no internal fragmentation, because processes are given blocks of space sized to their needs; this technique does not waste memory.
The number of processes in memory at once depends on how much space the resident processes occupy, so the degree of multiprogramming is dynamic rather than fixed.
Even a very large process can be given space, because there are no blocks of set size.
DISADVANTAGES
Because this method is dynamic, a variable-size partition scheme is challenging to implement.
It is difficult to keep track of the processes and the available memory space.
TECHNIQUES FOR CONTIGUOUS MEMORY ALLOCATION INPUT QUEUES
Because contiguous blocks of memory are assigned to processes, main memory tends to stay full. When a process finishes, however, it leaves behind an empty block, termed a hole, into which a new process can potentially be placed. Main memory therefore contains both processes and holes, and each of these holes might be assigned to a new incoming process.
TECHNIQUES FOR CONTIGUOUS MEMORY ALLOCATION INPUT QUEUES
A straightforward technique is to start at the beginning of memory and assign the first hole that is large enough to meet the needs of the process. The first-fit search can also be made to resume wherever the previous search left off, rather than always restarting at the beginning.
FIRST-FIT
Allocate the first hole that is big enough.
BEST-FIT
This method allocates the smallest hole that meets the needs of the process; its goal is to minimize the memory that would otherwise be wasted. To select the best fit for the process without wasting memory, the holes must be examined (or kept sorted) according to their sizes.
WORST-FIT
This strategy is the opposite of best-fit: the largest hole is chosen for the incoming process. The idea behind this allocation is that because the process is given a sizable hole, a large amount of space is left over, so the remaining hole may still be big enough to house additional processes.
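The three placement strategies above can be sketched over a simple list of hole sizes. This is a minimal model: real allocators track hole addresses and split the chosen hole, which is omitted here; the function name and list representation are invented for the example.

```python
def find_hole(holes, size, strategy):
    """Return the index of the chosen hole, or None if no hole is big enough.

    `holes` is a list of free-block sizes in address order.
    """
    # Candidate holes that can satisfy the request, as (size, position) pairs.
    candidates = [(s, i) for i, s in enumerate(holes) if s >= size]
    if not candidates:
        return None
    if strategy == "first":   # first-fit: earliest hole that is big enough
        return min(candidates, key=lambda c: c[1])[1]
    if strategy == "best":    # best-fit: smallest hole that is big enough
        return min(candidates)[1]
    if strategy == "worst":   # worst-fit: largest hole
        return max(candidates)[1]
    raise ValueError(f"unknown strategy: {strategy}")

holes = [10, 4, 20, 6]                  # free-hole sizes in address order
first = find_hole(holes, 5, "first")    # picks the 10 at index 0
best = find_hole(holes, 5, "best")      # picks the 6 at index 3
worst = find_hole(holes, 5, "worst")    # picks the 20 at index 2
```

First-fit scans least, best-fit leaves the smallest leftover fragment, and worst-fit leaves the largest leftover hole, matching the rationales described above.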
ADVANTAGES OF CONTIGUOUS MEMORY ALLOCATION
It is easy to keep track of the number of memory blocks remaining.
Contiguous allocation gives good read performance, since an entire file can be read from the disk in a single operation.
Contiguous allocation works well and is easy to set up.
DISADVANTAGES OF CONTIGUOUS MEMORY ALLOCATION
Fragmentation builds up over time: although each new file can initially be written right after the preceding one, deletions leave holes scattered across the disk.
To choose a hole of the proper size when creating a new file, its final size needs to be known in advance.
Once the disk is full, the extra space left in the holes has to be compacted or reused.
FRAGMENTATION
Fragmentation is an unwanted condition in an operating system in which, as processes are loaded into and removed from memory, the free memory space becomes broken into small pieces. Processes cannot be assigned to these memory blocks because the blocks are too small, so the blocks stay unused. As programs are loaded and deleted, they leave behind free spaces, or holes, in memory; these small blocks cannot be allotted to newly arriving processes, resulting in inefficient memory use.
FRAGMENTATION
The form fragmentation takes depends on the memory-allocation scheme. As processes are loaded into and removed from memory, the free areas are broken into small pieces that cannot be allocated to incoming processes; this is what is called fragmentation.
CAUSES OF FRAGMENTATION
User processes are loaded into and removed from main memory, where they are kept in memory blocks. After repeated loading and swapping, many free spaces remain that other processes cannot use because of their size: main memory is available, but no single space is sufficient to load another process, owing to the dynamic allocation of main memory to processes.
TYPES OF FRAGMENTATION
There are mainly two types of fragmentation in an operating system:
Internal fragmentation
External fragmentation
INTERNAL FRAGMENTATION
When a process is allocated a memory block that is larger than the process itself, free space is left inside the block. Because this free space within the block cannot be used by any other process, it is called internal fragmentation. For example, assume that memory allocation in RAM is done using fixed partitioning (memory blocks of fixed sizes) of 2 MB, 4 MB, 4 MB, and 8 MB, with part of the RAM used by the operating system.
INTERNAL FRAGMENTATION
Suppose a process P1 with a size of 3 MB arrives and is given a 4 MB memory block. The 1 MB of free space in this block is unused and cannot be used to allocate memory to another process. This is internal fragmentation.
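The P1 example reduces to simple arithmetic on block and process sizes. The helper function is invented for this sketch; the sizes are the slide's own.

```python
# Internal fragmentation under fixed partitioning: the slide's 3 MB process in a 4 MB block.
partitions_mb = [2, 4, 4, 8]   # the fixed block sizes from the example

def internal_fragmentation(block_mb, process_mb):
    """Space wasted inside a fixed-size block after allocating a process to it."""
    if process_mb > block_mb:
        raise ValueError("process does not fit in the block")
    return block_mb - process_mb

wasted = internal_fragmentation(4, 3)  # 1 MB left unusable inside the block
```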
HOW TO AVOID INTERNAL FRAGMENTATION?
Internal fragmentation arises because of the fixed sizes of the memory blocks. It can be avoided by assigning space to the process via dynamic partitioning, which allocates only the amount of space requested by the process; as a result there is no internal fragmentation.
EXTERNAL FRAGMENTATION
This is a condition where we have enough total memory for an incoming process, but that memory is not contiguous and hence cannot be allotted to the process, so the unused memory is wasted. Example: suppose we have 16 MB of RAM and four processes p1, p2, p3, and p4 of sizes 2 MB, 4 MB, 4 MB, and 6 MB respectively. Since there are no partitions, memory is allocated to the processes contiguously; after allocation, the RAM looks as in the first diagram. Then p1 and p3 finish execution and are pulled out of RAM, leaving the RAM as in the second figure.
EXTERNAL FRAGMENTATION
You can clearly see that two holes of size 2 MB and 4 MB have been created. If a process of size 6 MB now arrives, it cannot be loaded into RAM: although 6 MB is free in total, no contiguous 6 MB region is available. This is external fragmentation.
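The defining condition, total free space is sufficient but no single hole is, can be stated directly in code. The hole sizes are the slide's own; the variable names are invented for the sketch.

```python
# External fragmentation: enough memory in total, but no hole is big enough.
holes_mb = [2, 4]      # the free holes left after p1 and p3 exit
request_mb = 6         # the incoming 6 MB process

total_free = sum(holes_mb)
largest_hole = max(holes_mb)
externally_fragmented = total_free >= request_mb and largest_hole < request_mb
# 6 MB is free overall, but the biggest contiguous hole is only 4 MB.
```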
HOW TO REMOVE EXTERNAL FRAGMENTATION?
If we could divide the 6 MB process into three parts of 2 MB each, the first part could be loaded into the 2 MB hole and the remaining two parts into the 4 MB hole. Another approach is to combine the 2 MB and 4 MB holes into a single 6 MB hole, into which the process could then easily be loaded. These approaches are:
Paging
Segmentation
Compaction/Defragmentation
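The second approach, compaction, can be sketched as sliding all allocated blocks together so the scattered holes merge into one. This is a simplified model: real compaction must also update every relocated process's addresses (which is why it requires execution-time binding), and the list representation here is invented for the example.

```python
def compact(blocks):
    """Slide allocated blocks together so all free space forms one hole at the end.

    `blocks` is a list of (name, size_mb) pairs in address order;
    a name of None marks a hole.
    """
    allocated = [(name, size) for name, size in blocks if name is not None]
    free = sum(size for name, size in blocks if name is None)
    return allocated + [(None, free)]

# Memory after p1 and p3 leave: holes of 2 MB and 4 MB separate the survivors.
memory = [("p2", 2), (None, 2), ("p4", 6), (None, 4), ("os", 2)]
compacted = compact(memory)   # the two holes merge into one 6 MB hole
```

After compaction the single 6 MB hole can hold the 6 MB process that the fragmented layout could not.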
PAGING
If p1 and p3 are pulled out of RAM after finishing their execution and a process of size 6 MB then arrives, it is divided into three 2 MB parts, which are loaded into the available holes. This solves external fragmentation, as shown in the figure.