Processes that share memory can exchange data by writing and reading shared variables. As an example, consider two processes p and q that share some variable mesg. Then p can communicate information to q by writing new data into mesg, which q can then read.
The above discussion raises an important question.
How does q know when p writes new information into mesg?
In some cases,
q does not need to know.
For instance,
q might be a load-balancing program that simply samples the current
load of p's machine, which p stores in mesg.
When q does need to know,
it could poll,
but polling places an undue burden on the CPU.
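To see why polling is wasteful, consider this sketch, with threads standing in for processes (an assumption for the sake of a self-contained example): q spins in a loop re-reading mesg until it changes, consuming CPU cycles the whole time.

```python
# Sketch of polling (busy-waiting): q burns CPU re-checking mesg
# until p finally writes a new value.
import threading
import time

mesg = 0

def p():
    global mesg
    time.sleep(0.1)   # p does other work before writing
    mesg = 1          # p eventually writes new data

def q():
    while mesg == 0:  # busy-wait: repeatedly re-read the variable
        pass          # every iteration is wasted CPU work

writer = threading.Thread(target=p)
writer.start()
q()                   # q spins here until p's write is visible
writer.join()
```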
Another possibility is that it could get a software interrupt,
which we discuss below.
A familiar alternative is to use semaphores or conditions.
Process q could block until p changes mesg and sends a signal
that unblocks q.
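This blocking alternative can be sketched with a condition variable. As before, threads stand in for processes, and the variable names are illustrative; the point is that q sleeps instead of spinning, and p's signal wakes it.

```python
# Sketch of blocking on a condition: q waits (without consuming CPU)
# until p changes mesg and signals.
import threading

mesg = None
cond = threading.Condition()

def p():
    global mesg
    with cond:
        mesg = "new data"
        cond.notify()        # the signal that unblocks q

def q(out):
    with cond:
        while mesg is None:  # re-check after every wakeup
            cond.wait()      # block here, releasing the lock
        out.append(mesg)     # read what p wrote

result = []
reader = threading.Thread(target=q, args=(result,))
reader.start()
writer = threading.Thread(target=p)
writer.start()
reader.join()
writer.join()
```

The while loop around wait() matters: if p signals before q starts waiting, q still sees the updated mesg and skips the wait.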
However,
these solutions would not automatically block a writer when mesg cannot hold
all the data it wants to write.
(The programmer could manually implement a bounded buffer using semaphores.)
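One way such a manual implementation might look is the classic counting-semaphore bounded buffer; the capacity and names here are illustrative, and threads again stand in for processes. The writer now blocks automatically when the buffer is full, and the reader blocks when it is empty.

```python
# Sketch of a bounded buffer built from semaphores: `empty` counts
# free slots and `full` counts filled slots.
import threading
from collections import deque

CAPACITY = 2
buf = deque()
lock = threading.Lock()                # protects buf itself
empty = threading.Semaphore(CAPACITY)  # free slots remaining
full = threading.Semaphore(0)          # filled slots available

def put(item):
    empty.acquire()        # writer blocks here when the buffer is full
    with lock:
        buf.append(item)
    full.release()         # one more item available to readers

def get():
    full.acquire()         # reader blocks here when the buffer is empty
    with lock:
        item = buf.popleft()
    empty.release()        # one more free slot for writers
    return item

def producer():
    for i in range(5):     # 5 items through a 2-slot buffer
        put(i)

consumed = []
writer = threading.Thread(target=producer)
writer.start()
for _ in range(5):
    consumed.append(get())
writer.join()
```

Because the buffer holds only 2 items, the producer is forced to wait partway through until the consumer drains a slot, which is exactly the automatic blocking the plain semaphore-signal scheme lacks.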
Moreover,
conventional shared memory is accessible only to processes on a single
machine,
so it cannot be used for communicating information among remote processes.
Recently,
there has been a lot of work in
distributed shared memory,
which you will study in 203/243.