Process Communication and Concurrency in Operating Systems

Learn about the importance of processes communicating with each other, the mechanisms they use, common issues in interprocess communication, and the desirable characteristics of communication mechanisms in operating systems.

  • Process Communication
  • Concurrency
  • Operating Systems
  • Interprocess Communication
  • Synchronization


Presentation Transcript


  1. Process Communications and Concurrency CS 111 Operating Systems Peter Reiher Lecture 7 Page 1 CS 111 Spring 2015

  2. Outline Why do processes need to communicate? What mechanisms do they use for communication? What kinds of problems does such communication lead to?

  3. Processes and Communications Many processes are self-contained, or only need OS services to share hardware and data. But many others need to communicate: to other processes, or to other machines. Often complex applications are built of multiple processes, which need to communicate.

  4. Types of Communications Simple signaling: just telling someone else that something has happened, e.g., letting go of a lock. Messages: occasional exchanges of information. Procedure calls or method invocation. Tight sharing of large amounts of data: e.g., shared memory, pipes.

  5. Some Common Characteristics of IPC There are issues of proper synchronization: are the sender and receiver both ready? Issues of potential deadlock. There are safety issues: bad behavior from one process should not trash another process. There are performance issues: copying of large amounts of data is expensive. There are security issues, too.

  6. Potential Trouble Spots We're breaking the boundaries between processes: can we still ensure safety and security? Interprocess communication implies more sophisticated synchronization; better not mess it up. Moving data between entities usually involves copying, and too much copying is expensive.

  7. Desirable Characteristics of Communications Mechanisms Simplicity: a simple definition of what they do and how to do it; good to resemble an existing mechanism, like a procedure call; best if they're simple to implement in the OS. Robustness: in the face of many using processes and invocations, and when one party misbehaves. Flexibility: e.g., not limited to fixed sizes, nice if one-to-many is possible, etc. Freedom from synchronization problems. Good performance. Usability across machine boundaries.

  8. Blocking Vs. Non-Blocking When the sender uses the communications mechanism, does it block waiting for the result? That is synchronous communication. Or does it go ahead without necessarily waiting? That is asynchronous communication. Blocking reduces possibilities for parallelism and may complicate handling errors. Not blocking can lead to more complex programming, since parallelism is often confusing and unpredictable. Particular mechanisms tend to be one or the other.
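One way to see the difference concretely: a pipe read normally blocks when no data is available, but the read end can be switched to a non-blocking style, where the call returns immediately with an error instead of waiting. A minimal sketch, assuming a Unix-like system (the slides don't prescribe this mechanism; it just illustrates the two behaviors):

```python
import os
import fcntl

r, w = os.pipe()
# Make the read end non-blocking: reading an empty pipe now returns an
# error instead of suspending the caller.
fcntl.fcntl(r, fcntl.F_SETFL, os.O_NONBLOCK)

try:
    os.read(r, 16)          # nothing has been written yet
    result = "data"
except BlockingIOError:
    result = "would block"  # asynchronous style: the caller keeps going

os.write(w, b"hi")
data = os.read(r, 16)       # now there is data, so the read succeeds
```

With the default (blocking) setting, that first `os.read` would instead have suspended the process until the sender wrote something.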

  9. Communications Mechanisms Sharing memory. Messages. RPC. More sophisticated abstractions: the bounded buffer.

  10. Sharing Memory Everyone uses the same pool of RAM anyway Why not have communications done simply by writing and reading parts of the RAM? Sender writes to a RAM location, receiver reads it Give both processes access to memory via their domain registers Conceptually simple Basic idea cheap to implement Usually non-blocking Lecture 7 Page 14 CS 111 Spring 2015
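A minimal sketch of this idea using Python's `multiprocessing.shared_memory` as a stand-in for the domain-register mechanism described above; for brevity both "processes" here are just two views of the same named segment in one program:

```python
from multiprocessing import shared_memory

# Sender's view: create a small shared segment and write into it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# Receiver's view: attach to the same segment by name and read from it.
peer = shared_memory.SharedMemory(name=shm.name)
msg = bytes(peer.buf[:5])

# Clean up: detach both views, then destroy the segment.
peer.close()
shm.close()
shm.unlink()
```

Note that nothing here told the receiver *where* in the segment to look or *when* the data was ready; those are exactly the problems the next slides raise.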

  11. Sharing Memory With Domain Registers [Figure: Process 1 and Process 2 share a memory domain, with write permission for Process 1 and read permission for Process 2]

  12. Using the Shared Domain to Communicate [Figure: Process 1 writes some data into the shared domain; Process 2 then reads it]

  13. Potential Problem #1 With Shared Domain Communications How did Process 1 know this was the correct place to write the data? How did Process 2 know this was the correct place to read the data?

  14. Potential Problem #2 With Shared Domain Communications Timing issues: What if Process 2 tries to read the data before Process 1 writes it? Worse, what if Process 2 reads the data in the middle of Process 1 writing it?

  15. And the Problems Can Get Worse What if Process 1 wants to write more data than the shared domain can hold? What if both processes wish to send data to each other? Give them read/write on the single domain? Give them each one writeable domain, and read permission on the other's? What if it's more than two processes? This just scratches the surface of the potential problems.

  16. The Core Difficulty This abstraction is too low level: it leaves too many tricky problems for the application writers to solve. The OS needs to provide higher level communications abstractions, to hide complexities and simplify the application writers' lives. There are many possible choices here.

  17. Messages A conceptually simple communications mechanism: the sender sends a message explicitly, and the receiver explicitly asks to receive it. The message service is provided by the operating system, which handles all the little details. Usually non-blocking.

  18. Using Messages [Figure: Process 1 issues a SEND; the operating system carries the message; Process 2 issues a RECEIVE]

  19. Advantages of Messages Processes need not agree on where to look for things, other than, perhaps, a named message queue. Clear synchronization points: the message doesn't exist until you SEND it, and it can't be examined until you RECEIVE it, so there are no worries about incomplete communications. Helpful encapsulation features: you RECEIVE exactly what was sent, no more, no less. No worries about the size of the communications (well, no worries for the user; the OS has to worry). Easy to see how it scales to multiple processes.

  20. Implementing Messages The OS is providing this communications abstraction. There's no magic here: lots of stuff needs to be done behind the scenes, and the OS has to do it. Issues to solve: Where do you store the message before receipt? How do you deal with large quantities of messages? What happens when someone asks to receive before anything is sent? What happens to messages that are never received? How do you handle naming issues? What are the limits on message contents?

  21. Message Storage Issues Messages must be stored somewhere while awaiting delivery. Typical choices are either in the sender's domain (what if the sender deletes/overwrites them?) or in a special OS domain (that implies extra copying, with performance costs). How long do messages hang around? Delivered ones are cleared. What about those for which no RECEIVE is done? One choice: delete them when the receiving process exits.

  22. Message Receipt Synchronization When a RECEIVE is issued for a non-existent message: block till the message arrives, or return an error to the RECEIVE process. Can also inform processes when messages arrive, using interrupts or some other mechanism, and only allow RECEIVE operations for arrived messages.

  23. A Big Advantage of Messages A reasonable choice for communicating between processes on different machines. If you use them for everything, you sort of get distributed computing for free. Not really, unfortunately, but at least the interface remains the same.

  24. Remote Procedure Calls A more object-oriented mechanism: communicate by making procedure calls on other processes. "Remote" here really means in another process, not necessarily on another machine. They aren't in your address space, and don't even use the same code. Some differences from a regular procedure call; typically blocking.

  25. RPC Characteristics Procedure calls are the primary unit of computation in most languages, the unit of information hiding in most methodologies, and the primary level of interface specification, so they are a natural boundary between client and server processes. Turn procedure calls into message send/receives. Requires both sender and receiver to be playing the same game, typically by both using some particular RPC standard.

  26. RPC Mechanics The process hosting the remote procedure might be on the same computer or a different one. Under the covers, messages are used in either case. Resulting limitations: no implicit parameters/returns (e.g., global variables), no call-by-reference parameters, and much slower than local procedure calls (TANSTAAFL). Most commonly used for client/server computing.

  27. Marshalling Arguments RPC calls have parameters, but the remote procedure might not have the same data representation, especially if it's on a different machine type. Need to store the sender's version of the arguments in a message, using an intermediate representation, send the message to the receiver, then translate the intermediate representation to the receiver's format.
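A tiny sketch of marshalling into a machine-independent intermediate representation, using fixed-size, big-endian ("network order") encoding via Python's `struct` module; the argument types (one int, one double) are made up for illustration:

```python
import struct

# "!" selects network byte order with standard sizes, so the encoding
# does not depend on either machine's native representation.
def marshal(x: int, y: float) -> bytes:
    return struct.pack("!id", x, y)   # 4-byte int + 8-byte double

def unmarshal(msg: bytes):
    return struct.unpack("!id", msg)

wire = marshal(7, 2.5)   # 12 bytes on the wire
x, y = unmarshal(wire)
```

A real RPC system generates this packing and unpacking code automatically from the interface specification, as the next slides describe.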

  28. Tools to Support Remote Procedure Calls [Figure: an RPC generation tool takes an RPC interface specification and produces client RPC stubs, a server RPC skeleton, and External Data Representation access functions, which are linked with the client application code and the server implementation code]

  29. RPC Operations The client application links to local procedures, calls those local procedures, and gets results. All the RPC implementation is inside those procedures. The client application does not know about the details: it does not know about the formats of messages, does not worry about sends, timeouts, or resends, and does not know about the external data representation. All generated automatically by RPC tools. The key to the tools is the interface specification.
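The stub/skeleton division of labor can be sketched in a few lines. This toy version runs the client and "server" in one program, uses JSON as a stand-in for a real external data representation, and makes up all of the names (`add`, `client_stub`, `server_skeleton`) for illustration:

```python
import json

# Hypothetical server-side implementation of the remote procedure.
def add(a, b):
    return a + b

DISPATCH = {"add": add}

def server_skeleton(request: bytes) -> bytes:
    # Unmarshal the request, call the local procedure, marshal the reply.
    call = json.loads(request)
    result = DISPATCH[call["proc"]](*call["args"])
    return json.dumps({"result": result}).encode()

def client_stub(proc: str, *args):
    # Looks like a local call to the client; really builds a message.
    request = json.dumps({"proc": proc, "args": args}).encode()
    reply = server_skeleton(request)   # stands in for send + receive
    return json.loads(reply)["result"]

answer = client_stub("add", 2, 3)
```

In a real system the call to `server_skeleton` would be a message send over the network plus a blocking wait for the reply; the client code would look exactly the same.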

  30. RPC As an IPC Mechanism Inherits many characteristics of local procedure calls. Inherently non-parallel: make the call and wait for the reply, so the caller is limited by the speed of the callee. Not suitable for creating parallel programs. Works best for obvious client/server situations.

  31. What If You Call and No One Answers? Unlike your own procedure calls, remote procedure calls might not be answered, especially if they're on different machines. Possibly because the call failed, possibly because the return value delivery failed, possibly because the callee just isn't done yet. What should the caller do?

  32. RPC Caller Options Just keep waiting (forever?). Timeout and automatically retry, as part of the underlying RPC mechanism, without necessarily informing the calling process (how often do you do that? forever?). Timeout and allow the calling process to decide. You could query the callee about what happened, but what if there's no answer there, either?

  33. Another Difference Between Local and Remote Procedure Calls In local procedure calls, a fatal bug in the called procedure usually kills the caller: they're in the same process. In RPC, a fatal bug in the called procedure need not kill the caller: it won't get an answer, but otherwise its process is still OK. RPC insulates the caller better than a normal procedure call.

  34. Bounded Buffers A higher level abstraction than shared domains or simple messages, but not quite as high level as RPC. A buffer that allows writers to put messages in and readers to pull messages out. FIFO. Unidirectional: one process sends, one process receives, with a buffer of limited size.

  35. SEND and RECEIVE With Bounded Buffers For SEND(), if the buffer is not full, put the message at the end of the buffer and return; if full, block waiting for space in the buffer, then add the message and return. For RECEIVE(), if the buffer has one or more messages, return the first one put in; if there are no messages in the buffer, block and wait until one is put in.
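These SEND/RECEIVE rules can be sketched with a lock and a condition variable to implement the blocking; this is one standard way to realize the behavior described, not an implementation the slides themselves prescribe:

```python
import threading
from collections import deque

class BoundedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()               # FIFO order
        self.cond = threading.Condition()  # lock + wait/notify

    def send(self, msg):
        with self.cond:
            while len(self.items) == self.capacity:
                self.cond.wait()           # full: block for space
            self.items.append(msg)
            self.cond.notify_all()         # wake any blocked receiver

    def receive(self):
        with self.cond:
            while not self.items:
                self.cond.wait()           # empty: block for a message
            msg = self.items.popleft()     # first one put in
            self.cond.notify_all()         # wake any blocked sender
            return msg

# A fast sender paced by a slow receiver through a 2-slot buffer.
buf = BoundedBuffer(2)
out = []
t = threading.Thread(target=lambda: [buf.send(i) for i in range(5)])
t.start()
for _ in range(5):
    out.append(buf.receive())
t.join()
```

The `while` loops around `wait()` (rather than `if`) matter: a woken process must re-check the condition, since another party may have run in between.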

  36. Practicalities of Bounded Buffers Handles the problem of not having infinite space. Ensures that a fast sender doesn't overwhelm a slow receiver (a classic problem in queueing theory). Provides well-defined, simple behavior for the receiver, but is subject to some synchronization issues: the producer/consumer problem. A good abstraction for exploring those issues.

  37. The Bounded Buffer [Figure: Process 1 is the writer and Process 2 is the reader of a fixed size buffer. Process 1 SENDs a message through the buffer; Process 2 RECEIVEs it. More messages are sent and received.] What could possibly go wrong?

  38. One Potential Issue What if the buffer is full, but the sender wants to send another message? The sender will need to wait for the receiver to catch up: an issue of sequence coordination.

  39. Another Potential Issue Process 2 wants to receive a message, but the buffer is empty: Process 1 hasn't sent any messages. Another sequence coordination issue.

  40. Handling the Sequence Coordination Issues One party needs to wait for the other to do something. If the buffer is full, Process 1's SEND must wait for Process 2 to do a RECEIVE. If the buffer is empty, Process 2's RECEIVE must wait for Process 1 to SEND. Naively, this is done through busy loops (also called spin loops): check the condition, loop back if it's not true.

  41. Implementing the Loops What exactly are the processes looping on? They care about how many messages are in the bounded buffer. That count is probably kept in a variable, incremented on SEND and decremented on RECEIVE, never to go below zero or exceed the buffer size. The actual system code would test the variable.

  42. A Potential Danger [Figure: BUFFER_COUNT starts at 4. Process 1 wants to SEND and checks BUFFER_COUNT; Process 2 wants to RECEIVE and checks BUFFER_COUNT. Both read 4; Process 1 stores 5, then Process 2 stores 3, leaving BUFFER_COUNT wrong.] Concurrency's a bitch.
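The dangerous interleaving can be replayed deterministically by making each party's read and write steps explicit; this is a simulation of the race, not code anyone would write on purpose:

```python
# BUFFER_COUNT starts at 4: four messages are waiting in the buffer.
buffer_count = 4

sender_copy = buffer_count         # sender reads 4 while SENDing
receiver_copy = buffer_count       # receiver also reads 4 while RECEIVEing
buffer_count = sender_copy + 1     # sender stores 5
buffer_count = receiver_copy - 1   # receiver overwrites it with 3

# After one SEND and one RECEIVE the count should still be 4,
# but the receiver's stale write left it at 3: a lost update.
```

With the opposite store order the count would end up at 5 instead; either way, one of the two updates is lost.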

  43. Why Didn't You Just Say BUFFER_COUNT=BUFFER_COUNT-1? These are system operations, occurring at a low level, using variables not necessarily in the processes' own address space, perhaps even RAM memory locations. The question isn't "can we do it right?"; the question is "what must we do if we are to do it right?"

  44. Another Concurrency Problem The two processes may operate on separate cores, meaning true concurrency is possible; even if not, scheduling may make it seem to happen. There is no guarantee of the atomicity of operations above the level of hardware instructions. E.g., when Process 1 puts the new message into the buffer, its update of BUFFER_COUNT may not occur till Process 2 does its work, in which case BUFFER_COUNT may end up at 5, which isn't right, either.

  45. One Possible Solution Use separate variables to hold the number of messages put into the buffer and the number of messages taken out. Only the sender updates the IN variable; only the receiver updates the OUT variable. Calculate buffer fullness by subtracting OUT from IN. This won't exhibit the previous problem. When working with concurrent processes, it's safest to only allow one process to write each variable.
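A sketch of the IN/OUT counter scheme for a single sender and a single receiver; because each counter has exactly one writer, neither increment can be lost the way the shared BUFFER_COUNT updates could (the variable names and buffer size are illustrative):

```python
SIZE = 4
slots = [None] * SIZE
in_count = 0    # written only by the sender
out_count = 0   # written only by the receiver

def send(msg):
    global in_count
    # Fullness is derived, never stored: IN - OUT messages are waiting.
    assert in_count - out_count < SIZE, "buffer full"
    slots[in_count % SIZE] = msg
    in_count += 1    # single writer: no competing update to lose

def receive():
    global out_count
    assert in_count - out_count > 0, "buffer empty"
    msg = slots[out_count % SIZE]
    out_count += 1   # single writer here, too
    return msg

for i in range(3):
    send(i)
fullness = in_count - out_count   # three messages waiting
first = receive()                 # FIFO: the first one put in
```

The next slides show why this single-writer property stops holding as soon as a second sender shares the IN counter.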

  46. Multiple Writers and Races What if there are multiple senders and receivers sharing the buffer? Other kinds of concurrency issues can arise, unfortunately in non-deterministic fashion: depending on timings, they might or might not occur. Without synchronization between threads/processes, we have no control of the timing; any interleaving of actions is possible.

  47. A Multiple Sender Problem [Figure: Processes 1 and 3 are senders; Process 2 is a receiver. The buffer starts empty, with plenty of room for both messages. Process 1 wants to SEND and Process 3 wants to SEND; both see IN = 0 and write the same slot, and IN ends up at 1.] We're in trouble: we overwrote Process 1's message.

  48. The Source of the Problem Concurrency again. Processes 1 and 3 executed concurrently; at some point they each determined that buffer slot 1 was empty, and they each filled it, not realizing the other would do so. Worse, it's timing dependent, depending on the ordering of events.

  49. Process 1 Might Overwrite Process 3 Instead [Figure: both senders read IN = 0; Process 1's write lands second, overwriting Process 3's message]

  50. Or It Might Come Out Right [Figure: the two sends happen to serialize; IN goes from 0 to 1 to 2, and both messages survive]
