Programming Python

Author: Mark Lutz

[13] At least in the current Python implementation, calling os.fork in a Python script actually copies the Python interpreter process (if you look at your process list, you'll see two Python entries after a fork). But since the Python interpreter records everything about your running script, it's OK to think of fork as copying your program directly. It really will if Python scripts are ever compiled to binary machine code.

Threads

Threads are
another way to start activities running at the same time. In
short, they run a call to a function (or any other type of callable
object) in parallel with the rest of the program. Threads are sometimes
called “lightweight processes,” because they run in parallel like forked
processes, but all of them run within the same single process. While
processes are commonly used to start independent programs, threads are
commonly used for tasks such as nonblocking input calls and long-running
tasks in a GUI. They also provide a natural model for algorithms that can
be expressed as independently running tasks. For applications that can
benefit from parallel processing, some developers consider threads to
offer a
number of advantages:

Performance

Because all threads
run within the same process, they don’t generally
incur a big startup cost to copy the process itself. The costs of
both copying forked processes and running threads can vary per
platform, but threads are usually considered less expensive in terms
of performance overhead.

Simplicity

To many observers, threads can be noticeably simpler to
program, too, especially when some of the more complex aspects of
processes enter the picture (e.g., process exits, communication
schemes, and zombie processes, covered in
Chapter 12
).

Shared global memory

On a related note, because threads run in a single process,
every thread shares the same
global memory space of the process. This provides a
natural and easy way for threads to communicate—by fetching and
setting names or objects accessible to all the threads. To the
Python programmer, this means that things like global scope
variables, passed objects and their attributes, and program-wide
interpreter components such as imported modules are shared among all
threads in a program; if one thread assigns a global variable, for
instance, its new value will be seen by other threads. Some care
must be taken to control access to shared items, but to some this
seems generally simpler to use than the process communication tools
necessary for forked processes, which we’ll meet later in this
chapter and book (e.g., pipes, streams, signals, sockets, etc.).
Like much in programming, this is not a universally shared view,
however, so you’ll have to weigh the difference for your programs
and platforms yourself.

Portability

Perhaps most important is the fact that threads are more
portable than forked processes. At this writing,
os.fork
is not supported by the standard
version of Python on Windows, but threads are. If you want to run
parallel tasks portably in a Python script today and you are
unwilling or unable to install a Unix-like library such as Cygwin on
Windows, threads may be your best bet. Python’s thread tools
automatically account for any platform-specific thread differences,
and they provide a consistent interface across all operating
systems. Having said that, the relatively new
multiprocessing
module described later in
this chapter offers another answer to the process portability issue
in some use cases.

So what’s the catch? There are three potential
downsides you should be aware of before you start spinning
your threads:

Function calls versus programs

First of all, threads are not a way—
at least, not a direct way—to start up another
program
. Rather, threads are designed to run a
call to a
function
(technically, any callable,
including bound and unbound methods) in parallel with the rest of
the program. As we saw in the prior section, by contrast, forked
processes can either call a function or start a new program.
Naturally, the threaded function can run scripts with the
exec
built-in function and can start new
programs with tools such as
os.system
,
os.popen
and the
subprocess
module, especially if doing so
is itself a long-running task. But fundamentally, threads run
in-program functions.

In practice, this is usually not a limitation. For many
applications, parallel functions are sufficiently powerful. For
instance, if you want to implement nonblocking input and output and
avoid blocking a GUI or its users with long-running tasks, threads
do the job; simply spawn a thread to run a function that performs
the potentially long-running task. The rest of the program will
continue independently.

Thread synchronization and queues

Secondly, the
fact that threads share objects and names in global
process memory is both good news and bad news—it provides a
communication mechanism, but we have to be careful to synchronize a
variety of operations. As we’ll see, even operations such as
printing are a potential conflict since there is only one
sys.stdout
per process, which is shared by
all threads.

Luckily, the
Python
queue
module, described in this section, makes this simple: realistic
threaded programs are usually structured as one or more
producer
(a.k.a.
worker
)
threads that add data to a queue, along with one or more
consumer
threads that take the data off the
queue and process it. In a typical threaded GUI, for example,
producers may download or compute data and place it on the queue;
the consumer—the main GUI thread—checks the queue for data
periodically with a timer event and displays it in the GUI when it
arrives. Because the shared queue is thread-safe, programs
structured this way automatically synchronize much cross-thread data
communication.
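The producer/consumer structure described here can be sketched with the standard library's queue and threading modules. This is an illustrative sketch, not one of the book's numbered examples; the names dataQueue, producer, and consumer are invented for the demo:

```python
import queue, threading

dataQueue = queue.Queue()        # thread-safe FIFO shared by all threads

def producer(idnum, count):
    for i in range(count):       # workers add data to the queue
        dataQueue.put('producer %d: item %d' % (idnum, i))

def consumer(results):
    while True:
        item = dataQueue.get()   # blocks until an item is available
        if item is None:         # sentinel: all producers are finished
            break
        results.append(item)

results = []
producers = [threading.Thread(target=producer, args=(n, 3)) for n in range(2)]
consumerThread = threading.Thread(target=consumer, args=(results,))
consumerThread.start()
for t in producers: t.start()
for t in producers: t.join()     # wait for the producers to finish
dataQueue.put(None)              # then tell the consumer to stop
consumerThread.join()
print(len(results))              # 2 producers x 3 items = 6
```

Because the queue object does the locking internally, neither the producers nor the consumer needs explicit lock calls here.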

The global interpreter lock (GIL)

Finally,
as we’ll learn in more detail later in this section,
Python’s implementation of threads means that only one thread is
ever really running its Python language code in the Python virtual
machine at any point in time. Python threads are true operating
system threads, but all threads must acquire a single shared lock
when they are ready to run, and each thread may be swapped out after
running for a short period of time (currently, after a set number of
virtual machine instructions, though this implementation may change
in Python 3.2).

Because of this structure, the Python language parts of Python
threads cannot today be distributed across multiple CPUs on a
multi-CPU computer. To leverage more than one CPU, you’ll simply
need to use process forking, not threads (the amount and complexity
of code required for both are roughly the same). Moreover, the parts
of a thread that perform long-running tasks implemented as C
extensions can run truly independently if they release the GIL to
allow the Python code of other threads to run while their task is in
progress. Python code, however, cannot truly overlap in time.

The advantage of Python’s implementation of threads is
performance—when it was attempted, making the virtual machine truly
thread-safe reportedly slowed all programs by a factor of two on
Windows and by an even larger factor on Linux. Even nonthreaded
programs ran at half speed.

Even though the GIL’s multiplexing of Python language code
makes Python threads less useful for leveraging capacity on multiple
CPU machines, threads are still useful as programming tools to
implement nonblocking operations, especially in GUIs. Moreover, the
newer
multiprocessing
module
we’ll meet later offers another solution here, too—by providing a
portable thread-like API that is implemented with processes,
programs can both leverage the simplicity and programmability
of threads and benefit from
the scalability of independent processes across CPUs.

Despite what you may think after reading the preceding overview,
threads are remarkably easy to use in Python. In fact, when a program is
started it is already running a thread, usually called the “main thread”
of the process. To start new, independent threads of execution within a
process, Python code uses either the low-level
_thread
module to run a function call in a
spawned thread, or the higher-level
threading
module to manage threads with
high-level class-based objects. Both modules also provide tools for
synchronizing access to shared objects with locks.

Note

This book presents both the
_thread
and
threading
modules, and its examples use both
interchangeably. Some Python users would recommend that you always use
threading
rather than
_thread
in general. In fact, the latter was
renamed from
thread
to
_thread
in 3.X to suggest such a lesser status
for it. Personally, I think that is too extreme (and this is one reason
this book sometimes uses "as thread" in
imports to retain the original module name). Unless you need the more
powerful tools in
threading
, the
choice is largely arbitrary, and the
threading
module’s extra requirements may be
unwarranted.

The basic
_thread
module does
not impose OOP, and as you’ll see in the examples of this section, is
very straightforward to use. The
threading
module may be better for more
complex tasks which require per-thread state retention or joins, but not
all threaded programs require its extra tools, and many use threads in
more limited scopes. In fact, this is roughly the same as comparing the
os.walk
call and visitor classes
we’ll meet in
Chapter 6
—both have
valid audiences and use cases. The most general Python rule of thumb
applies here as always:
keep it simple,
unless
it has to be
complex
.
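For a quick comparison before we dig in, here is the same spawned call coded with both modules; a minimal sketch assuming nothing beyond the standard library (the results list is demo scaffolding, used only to show that both threads really ran):

```python
import _thread, threading, time

results = []                                   # shared in-process memory

def child(tid):
    results.append(tid)

_thread.start_new_thread(child, (1,))          # low level: function plus args tuple

t = threading.Thread(target=child, args=(2,))  # higher level: object-based API
t.start()
t.join()                                       # threading can wait for its thread
time.sleep(1)                                  # _thread has no join; just pause
print(sorted(results))
```

The crude sleep at the end stands in for the synchronization tools we'll meet shortly; it is exactly the sort of thing join and shared flags exist to replace.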

The _thread Module

Since the basic
_thread
module
is a bit simpler than the more advanced
threading
module covered later in this
section, let’s look at some of its interfaces first. This module
provides a
portable
interface to whatever threading
system is available in your platform: its interfaces work the same on
Windows, Solaris, SGI, and any system with an installed
pthreads
POSIX threads implementation
(including Linux and others). Python scripts that use the Python
_thread
module work on all of these
platforms without changing their source code.

Basic usage

Let’s start off by
experimenting with a script that demonstrates the main
thread interfaces. The script in
Example 5-5
spawns threads until you
reply with a
q
at the console; it’s similar in
spirit to (and a bit simpler than) the script in
Example 5-1
, but it goes parallel
with threads instead of process forks.

Example 5-5. PP4E\System\Threads\thread1.py

"spawn threads until you type 'q'"

import _thread

def child(tid):
    print('Hello from thread', tid)

def parent():
    i = 0
    while True:
        i += 1
        _thread.start_new_thread(child, (i,))
        if input() == 'q': break

parent()

This script really contains only two thread-specific lines: the
import of the
_thread
module and
the thread creation call. To start a thread, we simply call
the
_thread.start_new_thread
function, no matter
what platform we’re programming on.[14]
This call takes a function (or other callable) object
and an arguments tuple and starts a new thread to execute a call to
the passed function with the passed arguments. It’s almost like
Python’s
function(*args)
call
syntax, and similarly accepts an optional keyword arguments
dictionary, too, but in this case the function call begins running in
parallel with the rest of the program.
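To illustrate the optional keyword-arguments dictionary just mentioned, a third positional argument to _thread.start_new_thread passes keywords on to the spawned function. This snippet is a sketch, not part of the book's examples package, and its sleep is a crude stand-in for the exit-coordination tools covered later:

```python
import _thread, time

results = []

def child(a, b=0):                               # run in a spawned thread
    results.append(a + b)

_thread.start_new_thread(child, (1,), dict(b=2)) # args tuple + keywords dict
time.sleep(1)                                    # crude wait: _thread has no join
print(results)
```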

Operationally speaking, the
_thread.start_new_thread
call itself returns
immediately with no useful value, and the thread it spawns silently
exits when the function being run returns (the return value of the
threaded function call is simply ignored). Moreover, if a function run
in a thread raises an uncaught exception, a stack trace is printed and
the thread exits, but the rest of the program continues. With the
_thread
module, the entire program
exits silently on most platforms when the main thread does (though as
we’ll see later, the
threading
module may require special handling if child threads are still
running).

In practice, though, it’s almost trivial to use threads in a
Python script. Let’s run this program to launch a few threads; we can
run it on both Unix-like platforms and Windows this time, because
threads are more portable than process forks—here it is spawning
threads on Windows:

C:\...\PP4E\System\Threads>
python thread1.py
Hello from thread 1
Hello from thread 2
Hello from thread 3
Hello from thread 4
q

Each message here is printed from a new thread, which exits
almost as soon as it
is started.

Other ways to code threads with _thread

Although the
preceding script runs a simple function
, any
callable object
may be run in the thread, because all
threads live in the same process. For instance, a thread can also run
a lambda function or bound method of an object (the following code is
part of file
thread-alts.py
in
the book examples package):

import _thread                                    # all 3 print 4294967296

def action(i):                                    # function run in threads
    print(i ** 32)

class Power:
    def __init__(self, i):
        self.i = i
    def action(self):                             # bound method run in threads
        print(self.i ** 32)

_thread.start_new_thread(action, (2,))            # simple function
_thread.start_new_thread((lambda: action(2)), ()) # lambda function to defer
obj = Power(2)
_thread.start_new_thread(obj.action, ())          # bound method object

As we’ll see in larger examples later in this book,
bound methods
are especially useful in this
role—because they remember both the method function and instance
object, they also give access to state information and class methods
for use within and during the thread.

More fundamentally, because threads all run in the same process,
bound methods run by threads reference the original in-process
instance object, not a copy of it. Hence, any changes to its state
made in a thread will be visible to all threads automatically.
Moreover, since bound methods of a class instance pass for callables
interchangeably with simple functions, using them in threads this way
just works. And as we’ll see later, the fact that they are normal
objects also allows them to be stored freely on shared queues.

Running multiple threads

To really
understand the power of threads running in parallel,
though, we have to do something more long-lived in our threads, just
as we did earlier for processes. Let’s mutate the
fork-count
program of the prior section to
use threads. The script in
Example 5-6
starts 5 copies of its
counter
function running in
parallel threads.

Example 5-6. PP4E\System\Threads\thread-count.py

"""
thread basics: start 5 copies of a function running in parallel;
uses time.sleep so that the main thread doesn't die too early--
this kills all other threads on some platforms; stdout is shared:
thread outputs may be intermixed in this version arbitrarily.
"""

import _thread as thread, time

def counter(myId, count):                    # function run in threads
    for i in range(count):
        time.sleep(1)                        # simulate real work
        print('[%s] => %s' % (myId, i))

for i in range(5):                           # spawn 5 threads
    thread.start_new_thread(counter, (i, 5)) # each thread loops 5 times

time.sleep(6)
print('Main thread exiting.')                # don't exit too early

Each parallel copy of the
counter
function simply counts from zero up
to four here and prints a message to standard output for each
count.

Notice how this script sleeps for 6 seconds at the end. On
Windows and Linux machines this has been tested on, the main thread
shouldn’t exit while any spawned threads are running if it cares about
their work; if it does exit, all spawned threads are immediately
terminated. This differs from processes, where spawned children live
on when parents exit. Without the sleep here, the spawned threads
would die almost immediately after they are started.

This may seem ad hoc, but it isn’t required on all platforms,
and programs are usually structured such that the main thread
naturally lives as long as the threads it starts. For instance, a user
interface may start an FTP download running in a thread, but the
download lives a much shorter life than the user interface itself.
Later in this section, we’ll also see different ways to avoid this
sleep using global locks and flags that let threads signal their
completion.

Moreover, we’ll later find that the
threading
module both provides a
join
method that lets us wait for spawned
threads to finish explicitly, and refuses to allow a program to exit
at all if any of its normal threads are still running (which may be
useful in this case, but can require extra work to shut down in
others). The
multiprocessing
module
we’ll meet later in this chapter also allows spawned children to
outlive their parents, though this is largely an artifact of its
process-based model.
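To preview the join technique mentioned here, a variant of the counter script can wait for its children explicitly instead of sleeping a guessed number of seconds. This is a sketch using threading rather than _thread, not the book's later listing:

```python
import threading, time

def counter(myId, count):                    # function run in threads
    for i in range(count):
        time.sleep(0.1)                      # simulate real work
        print('[%s] => %s' % (myId, i))

threads = []
for i in range(5):                           # spawn 5 threads
    t = threading.Thread(target=counter, args=(i, 3))
    t.start()
    threads.append(t)

for t in threads:
    t.join()                                 # wait for each child to exit
print('Main thread exiting.')                # no guessed sleep required
```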

Now, when
Example 5-6
is
run on Windows 7 under Python 3.1, here is the output I get:

C:\...\PP4E\System\Threads>
python thread-count.py
[1] => 0
[0] => 0
[2] => 0
[3] => 0
[1] => 1
[3] => 1
[0] => 1[2] => 1
[4] => 1
[1] => 2
[3] => 2[4] => 2
[0] => 2
[2] => 2
...more output omitted...
Main thread exiting.

If this looks odd, it’s because it should. In fact, this
demonstrates probably the most unusual aspect of threads. What’s
happening here is that the output of the 5 threads run in parallel is
intermixed—because all the threaded function calls run in the same
process, they all share the same standard output stream (in Python
terms, there is just one
sys.stdout
file between them, which is where printed text is sent). The net
effect is that their output can be combined and confused arbitrarily.
In fact, this script’s output can differ on each run. This jumbling of
output grew even more pronounced in Python 3, presumably due to its
new file output implementation.

More fundamentally, when multiple threads can access a shared
resource like this, their access must be synchronized to avoid overlap
in time—as explained in the next
section.

Synchronizing access to shared objects and names

One of the nice things
about threads is that they automatically come with a
cross-task communications mechanism: objects and namespaces in a
process that span the life of threads are shared by all spawned
threads. For instance, because every thread runs in the same process,
if one Python thread changes a global scope variable, the change can
be seen by every other thread in the process, main or child.
Similarly, threads can share and change mutable objects in the
process’s memory as long as they hold a reference to them (e.g.,
passed-in arguments). This serves as a simple way for a program’s
threads to pass information—exit flags, result objects, event
indicators, and so on—back and forth to each other.
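As a small concrete sketch of this shared-memory communication (with invented names, not a book listing), a spawned thread can post its result in a global variable that the main thread reads once the thread has finished:

```python
import threading, time

result = None                 # global name shared by all threads in the process

def worker():
    global result
    time.sleep(0.1)           # pretend to compute something
    result = 42               # post the answer for other threads to see

t = threading.Thread(target=worker)
t.start()
t.join()                      # after the join, the assignment is visible here
print(result)
```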

The downside to this scheme is that our threads must sometimes
be careful to avoid changing global objects and names at the same
time. If two threads may change a shared object at once, it’s not
impossible that one of the two changes will be lost (or worse, will
silently corrupt the state of the shared object completely): one
thread may step on the work done so far by another whose operations
are still in progress. The extent to which this becomes an issue
varies per application, and sometimes it isn’t an issue at all.

But even things that aren’t obviously at risk may be at risk.
Files and streams, for example, are shared by all threads in a
program; if multiple threads write to one stream at the same time, the
stream might wind up with interleaved, garbled data.
Example 5-6
of the prior section was
a simple demonstration of this phenomenon in action, but it’s
indicative of the sorts of clashes in time that can occur when our
programs go parallel. Even simple changes can go awry if they might
happen concurrently. To be robust, threaded programs need to control
access to shared global items like these so that only one thread uses
them at once.

Luckily, Python’s
_thread
module comes with its own easy-to-use tools for synchronizing access
to objects shared among threads. These tools are based on the concept
of a
lock
—to change a shared object, threads
acquire
a lock, make their changes, and then
release
the lock for other threads to grab.
Python ensures that only one thread can hold a lock at any point in
time; if others request it while it’s held, they are blocked until the
lock becomes available. Lock objects are allocated and processed with
simple and portable calls in the
_thread
module that are automatically mapped
to thread locking mechanisms on the underlying platform.

For instance, in
Example 5-7
, a lock object created
by
_thread.allocate_lock
is acquired and
released by each thread around the
print
call that writes to the shared
standard output stream.

Example 5-7. PP4E\System\Threads\thread-count-mutex.py

"""
synchronize access to stdout: because it is shared global,
thread outputs may be intermixed if not synchronized
"""

import _thread as thread, time

def counter(myId, count):                    # function run in threads
    for i in range(count):
        time.sleep(1)                        # simulate real work
        mutex.acquire()
        print('[%s] => %s' % (myId, i))      # print isn't interrupted now
        mutex.release()

mutex = thread.allocate_lock()               # make a global lock object

for i in range(5):                           # spawn 5 threads
    thread.start_new_thread(counter, (i, 5)) # each thread loops 5 times

time.sleep(6)
print('Main thread exiting.')                # don't exit too early

Really, this script simply augments
Example 5-6
to synchronize prints
with a thread lock. The net effect of the additional lock calls in
this script is that no two threads will ever execute a
print
call at the same point in time; the
lock ensures mutually exclusive access to the
stdout
stream. Hence, the output of this
script is similar to that of the original version, except that
standard output text is never mangled by overlapping prints:

C:\...\PP4E\System\Threads>
thread-count-mutex.py
[0] => 0
[1] => 0
[3] => 0
[2] => 0
[4] => 0
[0] => 1
[1] => 1
[3] => 1
[2] => 1
[4] => 1
[0] => 2
[1] => 2
[3] => 2
[4] => 2
[2] => 2
[0] => 3
[1] => 3
[3] => 3
[4] => 3
[2] => 3
[0] => 4
[1] => 4
[3] => 4
[4] => 4
[2] => 4
Main thread exiting.

Though somewhat platform-specific, the order in which the
threads check in with their prints may still be arbitrary from run to
run because they execute in parallel (getting work done in parallel is
the whole point of threads, after all); but they no longer collide in
time while printing their text. We’ll see other cases where the lock
idiom comes in to play later in this chapter—it’s a core component of
the multithreading
model.
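As an aside, lock objects returned by _thread.allocate_lock also work as context managers in current Pythons, so the acquire/release pair in Example 5-7 can be coded with a with statement that releases the lock even if the guarded code raises an exception. A sketch of the alternative form, with a single-threaded demo call:

```python
import _thread as thread

mutex = thread.allocate_lock()           # same sort of lock as in Example 5-7

def report(myId, i):
    with mutex:                          # acquire on entry, release on exit
        print('[%s] => %s' % (myId, i))  # only one thread prints at a time

report(0, 0)                             # lock is free again after the with
```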
