Programming Python

Author: Mark Lutz
Process Exit Status and Shared State

Now, to learn how to obtain
the exit status from forked processes, let’s write a
simple forking program: the script in
Example 5-17
forks child processes and
prints child process exit statuses returned by
os.wait
calls in the parent until a “q” is
typed at the console.

Example 5-17. PP4E\System\Exits\testexit_fork.py

"""
fork child processes to watch exit status with os.wait; fork works on Unix
and Cygwin but not standard Windows Python 3.1; note: spawned threads share
globals, but each forked process has its own copy of them (forks share file
descriptors)--exitstat is always the same here but will vary if for threads;
"""
import os
exitstat = 0
def child(): # could os.exit a script here
global exitstat # change this process's global
exitstat += 1 # exit status to parent's wait
print('Hello from child', os.getpid(), exitstat)
os._exit(exitstat)
print('never reached')
def parent():
while True:
newpid = os.fork() # start a new copy of process
if newpid == 0: # if in copy, run child logic
child() # loop until 'q' console input
else:
pid, status = os.wait()
print('Parent got', pid, status, (status >> 8))
if input() == 'q': break
if __name__ == '__main__': parent()

Running this program on Linux, Unix, or Cygwin (remember,
fork
still doesn’t work on standard Windows
Python as I write the fourth edition of this book) produces the
following sort of results:

[C:\...\PP4E\System\Exits]$
python testexit_fork.py
Hello from child 5828 1
Parent got 5828 256 1
Hello from child 9540 1
Parent got 9540 256 1
Hello from child 3152 1
Parent got 3152 256 1
q

If you study this output closely, you’ll notice that the exit
status (the last number printed) is always the same—the number 1.
Because forked processes begin life as
copies
of
the process that created them, they also have copies of global memory.
Because of that, each forked child gets and changes its own
exitstat
global variable without changing any
other process’s copy of this variable. At the same time, forked
processes copy and thus share file descriptors, which is why prints go
to the same place.
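The shift in the parent's print extracts the exit code from the two-byte status value returned by os.wait: the code lives in the high byte, and signal details live in the low byte. On Unix, the os module also provides named helpers for this decoding; the following minimal sketch (not part of the book's example) shows that both spellings agree on the status values seen in the output above:

import os

# os.wait returns a 16-bit status value: the exit code is in the high
# byte; the low byte carries signal information. A child that calls
# os._exit(1) therefore shows up as status 256 in the parent.
status = 256                         # the value seen in the output above

print(status >> 8)                   # manual shift, as in the script
print(os.WEXITSTATUS(status))        # the named helper does the same
print(os.WIFEXITED(status))          # True: the child exited normally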

Thread Exits and Shared State

In contrast,
threads run in parallel within the
same
process and share global memory. Each thread
in
Example 5-18
changes the
single shared global variable,
exitstat
.

Example 5-18. PP4E\System\Exits\testexit_thread.py

"""
spawn threads to watch shared global memory change; threads normally exit
when the function they run returns, but _thread.exit() can be called to
exit calling thread; _thread.exit is the same as sys.exit and raising
SystemExit; threads communicate with possibly locked global vars; caveat:
may need to make print/input calls atomic on some platforms--shared stdout;
"""
import _thread as thread
exitstat = 0
def child():
global exitstat # process global names
exitstat += 1 # shared by all threads
threadid = thread.get_ident()
print('Hello from child', threadid, exitstat)
thread.exit()
print('never reached')
def parent():
while True:
thread.start_new_thread(child, ())
if input() == 'q': break
if __name__ == '__main__': parent()

The following shows this script in action on Windows; unlike
forks, threads run in the standard version of Python on Windows, too.
Thread identifiers created by Python differ each time—they are arbitrary
but unique among all currently active threads and so may be used as
dictionary keys to keep per-thread information (a thread’s id may be
reused after it exits on some platforms):

C:\...\PP4E\System\Exits>
python testexit_thread.py
Hello from child 4908 1
Hello from child 4860 2
Hello from child 2752 3
Hello from child 8964 4
q

Notice how the value of this script’s global
exitstat
is changed by each thread, because
threads share global memory within the process. In fact, this is often
how threads communicate in general. Rather than exit status codes,
threads assign module-level globals or change shared mutable objects
in-place to signal conditions, and they use thread module locks and
queues to synchronize access to shared items if needed. This script
might need to synchronize, too, if it ever does something more
realistic—locks are required for global counter changes, and even
print
and
input
may have to be synchronized if they
overlap stream access badly on some platforms. For this simple demo, we
forgo locks by assuming threads won't mix their operations
oddly.
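The synchronization mentioned here is straightforward to add: a lock bracketing each counter update makes the changes atomic. The following minimal sketch (not from the book's example tree) uses the higher-level threading module rather than _thread, so the main thread can join its children deterministically before checking the result:

import threading

count = 0
mutex = threading.Lock()                 # one lock shared by all threads

def adder(reps):
    global count
    for _ in range(reps):
        with mutex:                      # acquire/release around each update
            count += 1

threads = [threading.Thread(target=adder, args=(1000,)) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()               # wait for all children to finish
print(count)                             # always 5000 with the lock held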

As we’ve learned, a thread normally exits silently when the
function it runs returns, and the function return value is ignored.
Optionally, the
_thread.exit
function
can be called to terminate the calling thread explicitly and silently.
This call works almost exactly like
sys.exit
(but takes no return status
argument), and it works by raising a
SystemExit
exception in the calling thread.
Because of that, a thread can also prematurely end by calling
sys.exit
or by directly raising
SystemExit
. Be sure not to call
os._exit
within a thread function,
though—doing so can have odd results (the last time I tried, it hung the
entire process on my Linux system and killed every thread in the process
on Windows!).

The alternative
threading
module for threads has no method equivalent to
_thread.exit()
, but since all that the latter
does is raise a system-exit exception, doing the same in
threading
has the same effect—the thread exits
immediately and silently, as in the following sort of code (see
testexit-threading.py
in the
example tree for this code):

import threading, sys, time

def action():
    sys.exit()                  # or raise SystemExit()
    print('not reached')

threading.Thread(target=action).start()
time.sleep(2)
print('Main exit')

On a related note, keep in mind that threads and processes have
default lifespan models, which we explored earlier. By way of review,
when child threads are still running, the two thread modules’ behavior
differs—programs on most platforms exit when the parent thread does
under
_thread
, but not normally under
threading
unless children are made
daemons. When using processes, children normally outlive their parent.
This different process behavior makes sense if you remember that threads
are in-process function calls, but processes are more independent and
autonomous.
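The daemon distinction under threading is visible directly on Thread objects; a minimal sketch of the two lifespan models (not from the book's example tree):

import threading, time

def tick():
    time.sleep(0.2)      # still running when the main code falls off the end

# Non-daemonic (the default): the interpreter waits for this thread to
# finish before the program exits.
waited = threading.Thread(target=tick)

# Daemonic: the interpreter may exit while this thread is still running.
killed = threading.Thread(target=tick, daemon=True)

waited.start(); killed.start()
print(waited.daemon, killed.daemon)      # False True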

When used well, exit status can be used to implement error
detection and simple communication protocols in systems composed of
command-line scripts. But having said that, I should underscore that
most scripts do simply fall off the end of the source to exit, and most
thread functions simply return; explicit exit calls are generally
employed for exceptional conditions and in limited contexts only. More
typically, programs communicate with richer tools than integer exit
codes; the next section
shows how.

Interprocess Communication

As we saw
earlier, when scripts spawn
threads
—tasks that run in parallel within the
program—they can naturally communicate by changing and inspecting names
and objects in shared global memory. This includes both accessible
variables and attributes, as well as referenced mutable objects. As we
also saw, some care must be taken to use locks to synchronize access to
shared items that can be updated concurrently. Still, threads offer a
fairly straightforward communication model, and the
queue
module can make this nearly automatic for
many programs.
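The queue module mentioned here wraps the locking up for you: producer threads put objects, consumer threads get them, and the queue serializes access internally. A minimal producer/consumer sketch (an illustration, not from the book's example tree), with a None sentinel to signal the end of the stream:

import threading, queue

tasks = queue.Queue()                  # thread-safe FIFO, locks built in

def producer():
    for i in range(3):
        tasks.put('msg %d' % i)        # no explicit lock needed
    tasks.put(None)                    # sentinel: tell consumer to stop

results = []
def consumer():
    while True:
        item = tasks.get()             # blocks until a producer puts data
        if item is None: break
        results.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)                         # ['msg 0', 'msg 1', 'msg 2']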

Things aren’t quite as simple when scripts start child processes and
independent programs that do not share memory in general. If we limit the
kinds of communications that can happen between programs, many options are
available, most of which we’ve already seen in this and the prior
chapters. For example, the following simple mechanisms can all be
interpreted as cross-program communication devices:

  • Simple files

  • Command-line arguments

  • Program exit status codes

  • Shell environment variables

  • Standard stream redirections

  • Stream pipes managed by
    os.popen
    and
    subprocess

For instance, sending command-line options and writing to input
streams lets us pass in program execution parameters; reading program
output streams and exit codes gives us a way to grab a result. Because
shell environment variable settings are inherited by spawned programs,
they provide another way to pass context in. And pipes made by
os.popen
or
subprocess
allow even more dynamic
communication. Data can be sent between programs at arbitrary times, not
only at program start and exit.
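Several of these devices can be combined in a single subprocess call: send text to a program's input stream, read its output stream, and inspect its exit code afterward. A minimal sketch (assuming Python 3.7+ for capture_output), spawning a Python one-liner as the child:

import subprocess, sys

# Child reads a line from stdin, echoes it upper-cased, and exits with 0
child = subprocess.run(
    [sys.executable, '-c', 'print(input().upper())'],
    input='spam\n', capture_output=True, text=True)

print(child.stdout.strip())        # 'SPAM': read from the child's stdout pipe
print(child.returncode)            # 0: the child's exit status code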

Beyond this set, there are other tools in the Python library for
performing Inter-Process Communication (IPC). This includes sockets,
shared memory, signals, anonymous and named pipes, and more. Some vary in
portability, and all vary in complexity and utility. For instance:

  • Signals
    allow
    programs to send simple notification events to other
    programs.

  • Anonymous pipes
    allow
    threads and related processes that share file
    descriptors to pass data, but generally rely on the Unix-like forking
    model for processes, which is not universally portable.

  • Named pipes
    are
    mapped to the system’s filesystem—they allow completely
    unrelated programs to converse, but are not available in Python on all
    platforms.

  • Sockets
    map to
    system-wide port numbers—they similarly let us transfer
    data between arbitrary programs running on the same computer, but also
    between programs located on remote networked machines, and offer a
    more portable option.

While some of these can be used as communication devices by threads,
too, their full power becomes more evident when leveraged by separate
processes which do not share memory at large.

In this section, we explore directly managed pipes (both anonymous
and named), as well as signals. We also take a first look at sockets here,
but largely as a preview; sockets can be used for IPC on a single machine,
but because the larger socket story also involves their role in
networking, we’ll save most of their details until the Internet part of
this book.

Other IPC tools are available to
Python programmers (e.g., shared memory as provided by the
mmap
module) but are not covered here
for lack of space; search the Python manuals and website for more details
on other IPC schemes if you’re looking for something more specific.

After this section, we’ll also study the
multiprocessing
module,
which offers additional and portable IPC options as part of its general
process-launching API, including shared memory, and pipes and queues of
arbitrary pickled Python objects. For now, let’s study traditional
approaches first.

Anonymous Pipes

Pipes, a cross-program
communication device, are implemented by your operating
system and made available in the Python standard library. Pipes are
unidirectional channels that work something like a shared memory buffer,
but with an interface resembling a simple file on each of two ends. In
typical use, one program writes data on one end of the pipe, and another
reads that data on the other end. Each program sees only its end of the
pipe and processes it using normal Python file calls.

Pipes are much more than this within the operating system, though. For
instance, calls to read a pipe will normally
block
the caller until data becomes available (i.e., is sent by the program on
the other end) instead of returning an end-of-file indicator. Moreover,
read calls on a pipe always return the oldest data written to the pipe,
resulting in a
first-in-first-out
model—the first
data written is the first to be read. Because of such properties, pipes
are also a way to synchronize the execution of independent
programs.
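The first-in-first-out property can be seen even within a single process, since os.pipe simply hands back two descriptors on the same channel. A minimal sketch (the writes here are small enough not to fill the pipe's buffer and block):

import os

readfd, writefd = os.pipe()        # low-level read and write descriptors

os.write(writefd, b'first')        # oldest data in the pipe
os.write(writefd, b'second')       # queued behind it

first  = os.read(readfd, 5)        # b'first': oldest data comes out first
second = os.read(readfd, 6)        # b'second'
print(first, second)
os.close(readfd); os.close(writefd)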

Pipes come in two flavors—
anonymous
and
named
.
Named pipes (often called fifos) are represented by a file
on your computer. Because named pipes are really external files, the
communicating processes need not be related at all; in fact, they can be
independently started programs.

By contrast, anonymous pipes exist only within processes and are
typically used in conjunction with process
forks
as
a way to link parent and spawned child processes within an application.
Parent and child converse over shared pipe file descriptors, which are
inherited by spawned processes. Because threads run in the same process
and share all global memory in general, anonymous pipes apply to them as
well.

Anonymous pipe basics

Since they are more
traditional, let’s start with a look at anonymous pipes.
To illustrate, the script in
Example 5-19
uses the
os.fork
call to make a copy of the calling
process as usual (we met forks earlier in this chapter). After
forking, the original parent process and its child copy speak through
the two ends of a pipe created
with
os.pipe
prior to
the fork. The
os.pipe
call returns
a tuple of two
file descriptors—the low-level file identifiers we met in
Chapter 4—representing the input and
output sides of the pipe. Because forked child processes get
copies
of their parents’ file descriptors,
writing to the pipe’s output descriptor in the child sends data back
to the parent on the pipe created before the child was spawned.

Example 5-19. PP4E\System\Processes\pipe1.py

import os, time

def child(pipeout):
    zzz = 0
    while True:
        time.sleep(zzz)                          # make parent wait
        msg = ('Spam %03d' % zzz).encode()       # pipes are binary bytes
        os.write(pipeout, msg)                   # send to parent
        zzz = (zzz+1) % 5                        # goto 0 after 4

def parent():
    pipein, pipeout = os.pipe()                  # make 2-ended pipe
    if os.fork() == 0:                           # copy this process
        child(pipeout)                           # in copy, run child
    else:                                        # in parent, listen to pipe
        while True:
            line = os.read(pipein, 32)           # blocks until data sent
            print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))

parent()

If you run this program on Linux, Cygwin, or another Unix-like
platform (
pipe
is available on
standard Windows Python, but
fork
is not), the parent process waits for the child to send data on the
pipe each time it calls
os.read
.
It’s almost as if the child and parent act as client and server
here—the
parent starts the child and waits for it to initiate
communication.
To simulate differing task durations, the child keeps
the parent waiting one second longer between messages with
time.sleep
calls, until the delay has
reached four seconds. After the
zzz
delay counter reaches 004, it rolls back down to 000 and starts
again:

[C:\...\PP4E\System\Processes]$
python pipe1.py
Parent 6716 got [b'Spam 000'] at 1267996104.53
Parent 6716 got [b'Spam 001'] at 1267996105.54
Parent 6716 got [b'Spam 002'] at 1267996107.55
Parent 6716 got [b'Spam 003'] at 1267996110.56
Parent 6716 got [b'Spam 004'] at 1267996114.57
Parent 6716 got [b'Spam 000'] at 1267996114.57
Parent 6716 got [b'Spam 001'] at 1267996115.59
Parent 6716 got [b'Spam 002'] at 1267996117.6
Parent 6716 got [b'Spam 003'] at 1267996120.61
Parent 6716 got [b'Spam 004'] at 1267996124.62
Parent 6716 got [b'Spam 000'] at 1267996124.62
Parent 6716 got [b'Spam 001'] at 1267996125.63
...etc.: Ctrl-C to exit...

Notice how the parent received a
bytes
string through the pipe. Raw pipes
normally deal in binary byte strings when their descriptors are used
directly this way with the descriptor-based file tools we met in
Chapter 4
(as we saw there, descriptor
read and write tools in
os
always
return and expect byte strings). That’s why we also have to manually
encode to
bytes
when writing in the
child—the string formatting operation is not available on
bytes
. As the next section shows, it’s also
possible to wrap a pipe descriptor in a text-mode file object, much as
we did in the file examples in
Chapter 4
, but that object simply performs
encoding and decoding automatically on transfers; it’s
still bytes in the pipe.

Wrapping pipe descriptors in file objects

If you look closely
at the preceding output, you’ll see that when the
child’s delay counter hits 004, the parent ends up reading two
messages from the pipe at the same time; the child wrote two distinct
messages, but on some platforms or configurations (other than that
used here) they might be interleaved or processed close enough in time
to be fetched as a single unit by the parent. Really, the parent
blindly asks to read, at most, 32 bytes each time, but it gets back
whatever text is available in the pipe, when it becomes
available.

To distinguish messages better, we can mandate a separator
character in the pipe. An end-of-line makes this easy, because we can
wrap the pipe descriptor in a file object with
os.fdopen
and rely
on the file object’s
readline
method to scan up through the next
\n
separator in the pipe. This also lets us
leverage the more powerful tools of the text-mode file object we met
in
Chapter 4
.
Example 5-20
implements this scheme
for the parent’s end of the pipe.

Example 5-20. PP4E\System\Processes\pipe2.py

# same as pipe1.py, but wrap pipe input in stdio file object
# to read by line, and close unused pipe fds in both processes

import os, time

def child(pipeout):
    zzz = 0
    while True:
        time.sleep(zzz)                          # make parent wait
        msg = ('Spam %03d\n' % zzz).encode()     # pipes are binary in 3.X
        os.write(pipeout, msg)                   # send to parent
        zzz = (zzz+1) % 5                        # roll to 0 at 5

def parent():
    pipein, pipeout = os.pipe()                  # make 2-ended pipe
    if os.fork() == 0:                           # in child, write to pipe
        os.close(pipein)                         # close input side here
        child(pipeout)
    else:                                        # in parent, listen to pipe
        os.close(pipeout)                        # close output side here
        pipein = os.fdopen(pipein)               # make text mode input file object
        while True:
            line = pipein.readline()[:-1]        # blocks until data sent
            print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))

parent()

This version has also been augmented to
close
the unused end of the pipe in each process
(e.g., after the fork, the parent process closes its copy of the
output side of the pipe written by the child); programs should close
unused pipe ends in general. Running with this new version reliably
returns a single child message to the parent each time it reads from
the pipe, because they are separated with markers when written:

[C:\...\PP4E\System\Processes]$
python pipe2.py
Parent 8204 got [Spam 000] at 1267997789.33
Parent 8204 got [Spam 001] at 1267997790.03
Parent 8204 got [Spam 002] at 1267997792.05
Parent 8204 got [Spam 003] at 1267997795.06
Parent 8204 got [Spam 004] at 1267997799.07
Parent 8204 got [Spam 000] at 1267997799.07
Parent 8204 got [Spam 001] at 1267997800.08
Parent 8204 got [Spam 002] at 1267997802.09
Parent 8204 got [Spam 003] at 1267997805.1
Parent 8204 got [Spam 004] at 1267997809.11
Parent 8204 got [Spam 000] at 1267997809.11
Parent 8204 got [Spam 001] at 1267997810.13
...etc.: Ctrl-C to exit...

Notice that this version’s reads also return a text data
str
object now, per the default
r
text mode for
os.fdopen
. As mentioned, pipes normally deal
in binary byte strings when their descriptors are used directly with
os
file tools, but wrapping in
text-mode files allows us to use
str
strings to represent text data instead
of
bytes
. In this example, bytes
are decoded to
str
when read by the
parent; using
os.fdopen
and text
mode in the child would allow us to avoid its manual encoding call,
but the file object would encode the
str
data anyhow (though the encoding is
trivial for ASCII bytes like those used here). As for simple files,
the best mode for processing pipe data is determined by its
nature.
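The mode choice shows up directly in what reads return: the same raw bytes in the pipe come back as a decoded str through a text-mode wrapper. A minimal one-process sketch (an illustration, not from the book's example tree):

import os

readfd, writefd = os.pipe()
os.write(writefd, b'Spam 000\nSpam 001\n')     # raw bytes in the pipe
os.close(writefd)                              # EOF for the reader

pipein = os.fdopen(readfd)                     # default 'r' text mode
line = pipein.readline()                       # decoded to str on the way out
print(repr(line))                              # 'Spam 000\n'
print(type(line).__name__)                     # str, not bytes
pipein.close()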

Anonymous pipes and threads

Although the
os.fork
call
required by the prior section’s examples isn’t available
on standard Windows Python,
os.pipe
is. Because threads all run in the same process and share file
descriptors (and global memory in general), this makes anonymous pipes
usable as a communication and synchronization device for threads, too.
This is an arguably lower-level mechanism than queues or shared names
and objects, but it provides an additional IPC option for threads.
Example 5-21
, for instance,
demonstrates the same type of pipe-based communication occurring
between threads instead of
processes
.

Example 5-21. PP4E\System\Processes\pipe-thread.py

# anonymous pipes and threads, not processes; this version works on Windows

import os, time, threading

def child(pipeout):
    zzz = 0
    while True:
        time.sleep(zzz)                          # make parent wait
        msg = ('Spam %03d' % zzz).encode()       # pipes are binary bytes
        os.write(pipeout, msg)                   # send to parent
        zzz = (zzz+1) % 5                        # goto 0 after 4

def parent(pipein):
    while True:
        line = os.read(pipein, 32)               # blocks until data sent
        print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))

pipein, pipeout = os.pipe()
threading.Thread(target=child, args=(pipeout,)).start()
parent(pipein)

Since threads work on standard Windows Python, this script does
too. The output is similar here, but the speakers are in-process
threads, not processes (note that because of its simple-minded
infinite loops, at least one of its threads may not die on a
Ctrl-C—
on Windows you may need to use
Task Manager to kill the
python.exe
process running this script or
close its window to exit):

C:\...\PP4E\System\Processes>
pipe-thread.py
Parent 8876 got [b'Spam 000'] at 1268579215.71
Parent 8876 got [b'Spam 001'] at 1268579216.73
Parent 8876 got [b'Spam 002'] at 1268579218.74
Parent 8876 got [b'Spam 003'] at 1268579221.75
Parent 8876 got [b'Spam 004'] at 1268579225.76
Parent 8876 got [b'Spam 000'] at 1268579225.76
Parent 8876 got [b'Spam 001'] at 1268579226.77
Parent 8876 got [b'Spam 002'] at 1268579228.79
...etc.: Ctrl-C or Task Manager to exit...

Bidirectional IPC with anonymous pipes

Pipes
normally let data flow in only one direction—one side is
input, one is output. What if you need your programs to talk back and
forth, though? For example, one program might send another a request
for information and then wait for that information to be sent back. A
single pipe can’t generally handle such bidirectional conversations,
but two pipes can. One pipe can be used to pass requests to a program
and another can be used to ship replies back to the requestor.

This really does have real-world applications. For instance, I
once added a GUI interface to a command-line debugger for a C-like
programming language by connecting two processes with pipes this way.
The GUI ran as a separate process that constructed and sent commands
to the non-GUI debugger’s input stream pipe and parsed the results
that showed up in the debugger’s output stream pipe. In effect, the
GUI acted like a programmer typing commands at a keyboard and a client
to the debugger server. More generally, by spawning command-line
programs with streams attached by pipes, systems can add new
interfaces to legacy programs. In fact, we’ll see a simple example of
this sort of GUI program structure in
Chapter 10
.

The module in
Example 5-22
demonstrates one way to
apply this idea to link the input and output streams of two programs.
Its
spawn
function forks a new
child program and connects the input and output streams of the parent
to the output and input streams of the child. That is:

  • When the parent reads from its standard input, it is reading
    text sent to the child’s standard output.

  • When the parent writes to its standard output, it is sending
    data to the child’s standard input.

The net effect is that the two independent programs communicate
by speaking over their standard streams.

Example 5-22. PP4E\System\Processes\pipes.py

"""
spawn a child process/program, connect my stdin/stdout to child process's
stdout/stdin--my reads and writes map to output and input streams of the
spawned program; much like tying together streams with subprocess module;
"""
import os, sys
def spawn(prog, *args): # pass progname, cmdline args
stdinFd = sys.stdin.fileno() # get descriptors for streams
stdoutFd = sys.stdout.fileno() # normally stdin=0, stdout=1
parentStdin, childStdout = os.pipe() # make two IPC pipe channels
childStdin, parentStdout = os.pipe() # pipe returns (inputfd, outoutfd)
pid = os.fork() # make a copy of this process
if pid:
os.close(childStdout) # in parent process after fork:
os.close(childStdin) # close child ends in parent
os.dup2(parentStdin, stdinFd) # my sys.stdin copy = pipe1[0]
os.dup2(parentStdout, stdoutFd) # my sys.stdout copy = pipe2[1]
else:
os.close(parentStdin) # in child process after fork:
os.close(parentStdout) # close parent ends in child
os.dup2(childStdin, stdinFd) # my sys.stdin copy = pipe2[0]
os.dup2(childStdout, stdoutFd) # my sys.stdout copy = pipe1[1]
args = (prog,) + args
os.execvp(prog, args) # new program in this process
assert False, 'execvp failed!' # os.exec call never returns here
if __name__ == '__main__':
mypid = os.getpid()
spawn('python', 'pipes-testchild.py', 'spam') # fork child program
print('Hello 1 from parent', mypid) # to child's stdin
sys.stdout.flush() # subvert stdio buffering
reply = input() # from child's stdout
sys.stderr.write('Parent got: "%s"\n' % reply) # stderr not tied to pipe!
print('Hello 2 from parent', mypid)
sys.stdout.flush()
reply = sys.stdin.readline()
sys.stderr.write('Parent got: "%s"\n' % reply[:-1])

The
spawn
function in this
module does not work on standard Windows Python (remember that
fork
isn’t yet available there
today). In fact, most of the calls in this module map straight to Unix
system calls (and may be arbitrarily terrifying at first glance to
non-Unix developers!). We’ve already met some of these (e.g.,
os.fork
), but much of this code depends on
Unix concepts we don’t have time to address well in this text. But in
simple terms, here is a brief summary of the system calls demonstrated
in this code:

os.fork

Copies the
calling process as usual and returns the child’s
process ID in the parent process only.

os.execvp

Overlays a
new program in the calling process; it’s just like
the
os.execlp
used earlier
but takes a
tuple
or
list
of command-line argument strings
(collected with the
*args
form in the function header).

os.pipe

Returns a
tuple of file descriptors representing the input
and output ends of a pipe, as in earlier examples.

os.close(fd)

Closes the
descriptor-based file
fd
.

os.dup2(fd1,fd2)

Copies all system
information associated with the file named by the
file descriptor
fd1
to the
file named by
fd2
.

In terms of connecting standard streams,
os.dup2
is the real nitty-gritty here. For
example, the call
os.dup2(parentStdin,stdinFd)
essentially
assigns the parent process’s
stdin
file to the input end of one of the two pipes created; all
stdin
reads will henceforth come from the
pipe. By connecting the other end of this pipe to the child process’s
copy of the
stdout
stream file with
os.dup2(childStdout,stdoutFd)
, text
written by the child to its
stdout
winds up being routed through the pipe to the parent’s
stdin
stream. The effect is reminiscent of
the way we tied together streams with the
subprocess
module in
Chapter 3
, but this script is more
low-level and less portable.
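The dup2 redirection trick can be demonstrated without a fork: overwrite any descriptor with a pipe's write end, and writes through either descriptor then land in the same pipe. A minimal sketch (os.devnull is opened here only to obtain a spare descriptor to overwrite):

import os

readfd, writefd = os.pipe()
spare = os.open(os.devnull, os.O_WRONLY)   # any descriptor to redirect
os.dup2(writefd, spare)                    # spare now refers to pipe's write end

os.write(spare, b'rerouted')               # goes into the pipe, not devnull
os.close(spare); os.close(writefd)

data = os.read(readfd, 32)
print(data)                                # b'rerouted'
os.close(readfd)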

To test this utility, the self-test code at the end of the file
spawns the program shown in
Example 5-23
in a child process and
reads and writes standard streams to converse with it over two
pipes.

Example 5-23. PP4E\System\Processes\pipes-testchild.py

import os, time, sys
mypid = os.getpid()
parentpid = os.getppid()
sys.stderr.write('Child %d of %d got arg: "%s"\n' %
                        (mypid, parentpid, sys.argv[1]))
for i in range(2):
    time.sleep(3)       # make parent process wait by sleeping here
    recv = input()      # stdin tied to pipe: comes from parent's stdout
    time.sleep(3)
    send = 'Child %d got: [%s]' % (mypid, recv)
    print(send)         # stdout tied to pipe: goes to parent's stdin
    sys.stdout.flush()  # make sure it's sent now or else process blocks

The following is our test in action on Cygwin (it’s similar on
other Unix-like platforms such as Linux); its output is not incredibly
impressive to read, but it represents two programs running
independently and shipping data back and forth through a pipe device
managed by the operating system. This is even more like a
client/server model (if you imagine the child as the server,
responding to requests sent from the parent). The text in square
brackets in this output went from the parent process to the child and
back to the parent again, all through pipes connected to standard
streams:

[C:\...\PP4E\System\Processes]$
python pipes.py
Child 9228 of 9096 got arg: "spam"
Parent got: "Child 9228 got: [Hello 1 from parent 9096]"
Parent got: "Child 9228 got: [Hello 2 from parent 9096]"