Threading Servers

The forking model
just described works well on Unix-like platforms in
general, but it suffers from some potentially significant
limitations:

Performance

On some machines, starting a new process can be fairly
expensive in terms of time and space resources.

Portability

Forking processes is a Unix technique; as we’ve learned, the os.fork
call currently doesn’t work on non-Unix platforms such as Windows under
standard Python. As we’ve also learned, forks can be used in the Cygwin
version of Python on Windows, but they may be inefficient and not
exactly the same as Unix forks. And as we just discovered,
multiprocessing won’t help on Windows, because connected sockets are
not pickleable across process boundaries.

Complexity

If you think that forking servers can be complicated, you’re
not alone. As we just saw, forking also brings with it all the
shenanigans of managing and reaping zombies—cleaning up after
child processes that live shorter lives than their parents.

If you read Chapter 5, you know that one solution to all of these
dilemmas is to use threads rather than processes. Threads run in
parallel and share global (i.e., module and interpreter) memory.

Because threads all run in the same process and memory space, they
automatically share sockets passed between them, similar in spirit to
the way that child processes inherit socket descriptors. Unlike
processes, though, threads are usually less expensive to start, and work
on both Unix-like machines and Windows under standard Python today.
Furthermore, many (though not all) see threads as simpler to
program—child threads die silently on exit, without leaving behind
zombies to haunt the server.

To illustrate, Example 12-7 is another mutation of the echo server that
handles client requests in parallel by running them in threads rather
than in processes.

Example 12-7. PP4E\Internet\Sockets\thread-server.py

"""
Server side: open a socket on a port, listen for a message from a client,
and send an echo reply; echoes lines until eof when client closes socket;
spawns a thread to handle each client connection; threads share global
memory space with main thread; this is more portable than fork: threads
work on standard Windows systems, but process forks do not;
"""
import time, _thread as thread # or use threading.Thread().start()
from socket import * # get socket constructor and constants
myHost = '' # server machine, '' means local host
myPort = 50007 # listen on a non-reserved port number
sockobj = socket(AF_INET, SOCK_STREAM) # make a TCP socket object
sockobj.bind((myHost, myPort)) # bind it to server port number
sockobj.listen(5) # allow up to 5 pending connects
def now():
return time.ctime(time.time()) # current time on the server
def handleClient(connection): # in spawned thread: reply
time.sleep(5) # simulate a blocking activity
while True: # read, write a client socket
data = connection.recv(1024)
if not data: break
reply = 'Echo=>%s at %s' % (data, now())
connection.send(reply.encode())
connection.close()
def dispatcher(): # listen until process killed
while True: # wait for next connection,
connection, address = sockobj.accept() # pass to thread for service
print('Server connected by', address, end=' ')
print('at', now())
thread.start_new_thread(handleClient, (connection,))
dispatcher()

This dispatcher delegates each incoming client connection request to a
newly spawned thread running the handleClient function. As a result,
this server can process multiple clients at once, and the main
dispatcher loop can get quickly back to the top to check for newly
arrived requests. The net effect is that new clients won’t be denied
service due to a busy server.
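
The import comment in Example 12-7 notes that the higher-level
threading module could be used instead of _thread. As a rough sketch
(not part of the book’s example, and assuming the same sockobj,
handleClient, and now as Example 12-7), the dispatcher could spawn
threading.Thread objects instead; marking them daemonic is an
assumption here, so that handler threads don’t keep the process alive
on exit:

import threading

def dispatcher():                                   # variant using threading.Thread
    while True:                                     # wait for next connection
        connection, address = sockobj.accept()
        print('Server connected by', address, 'at', now())
        handler = threading.Thread(target=handleClient, args=(connection,))
        handler.daemon = True                       # assumption: don't block exit
        handler.start()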

Functionally, this version is similar to the fork solution (clients are
handled in parallel), but it will work on any machine that supports
threads, including Windows and Linux. Let’s test it on both. First,
start the server on a Linux machine and run clients on both Linux and
Windows:

[window 1: thread-based server process, server keeps accepting
client connections while threads are servicing prior requests]
[...]$
python thread-server.py
Server connected by ('127.0.0.1', 37335) at Sun Apr 25 08:59:05 2010
Server connected by ('72.236.109.185', 58866) at Sun Apr 25 08:59:54 2010
Server connected by ('72.236.109.185', 58867) at Sun Apr 25 08:59:56 2010
Server connected by ('72.236.109.185', 58868) at Sun Apr 25 08:59:58 2010
[window 2: client, but on same remote server machine]
[...]$
python echo-client.py
Client received: b"Echo=>b'Hello network world' at Sun Apr 25 08:59:10 2010"
[windows 3-5: local clients, PC]
C:\...\PP4E\Internet\Sockets>
python echo-client.py learning-python.com
Client received: b"Echo=>b'Hello network world' at Sun Apr 25 08:59:59 2010"
C:\...\PP4E\Internet\Sockets>
python echo-client.py learning-python.com Bruce
Client received: b"Echo=>b'Bruce' at Sun Apr 25 09:00:01 2010"
C:\...\Sockets>
python echo-client.py learning-python.com The Meaning of life
Client received: b"Echo=>b'The' at Sun Apr 25 09:00:03 2010"
Client received: b"Echo=>b'Meaning' at Sun Apr 25 09:00:03 2010"
Client received: b"Echo=>b'of' at Sun Apr 25 09:00:03 2010"
Client received: b"Echo=>b'life' at Sun Apr 25 09:00:03 2010"

Because this server uses threads rather than forked processes, we
can run it portably on both Linux and a Windows PC. Here it is at work
again, running on the same local Windows PC as its clients; again, the
main point to notice is that new clients are accepted while prior
clients are being processed in parallel with other clients and the main
thread (in the five-second sleep delay):

[window 1: server, on local PC]
C:\...\PP4E\Internet\Sockets>
python thread-server.py
Server connected by ('127.0.0.1', 58987) at Sun Apr 25 12:41:46 2010
Server connected by ('127.0.0.1', 58988) at Sun Apr 25 12:41:47 2010
Server connected by ('127.0.0.1', 58989) at Sun Apr 25 12:41:49 2010
[windows 2-4: clients, on local
PC]
C:\...\PP4E\Internet\Sockets>
python echo-client.py
Client received: b"Echo=>b'Hello network world' at Sun Apr 25 12:41:51 2010"
C:\...\PP4E\Internet\Sockets>
python echo-client.py localhost Brian
Client received: b"Echo=>b'Brian' at Sun Apr 25 12:41:52 2010"
C:\...\PP4E\Internet\Sockets>
python echo-client.py localhost Bright side of life
Client received: b"Echo=>b'Bright' at Sun Apr 25 12:41:54 2010"
Client received: b"Echo=>b'side' at Sun Apr 25 12:41:54 2010"
Client received: b"Echo=>b'of' at Sun Apr 25 12:41:54 2010"
Client received: b"Echo=>b'life' at Sun Apr 25 12:41:54 2010"

Remember that a thread silently exits when the function it is running
returns; unlike the process fork version, we don’t call anything like
os._exit in the client handler function (and we shouldn’t—it may kill
all threads in the process, including the main loop watching for new
connections!). Because of this, the thread version is not only more
portable, but also simpler.

Standard Library Server Classes

Now that I’ve shown you how to write forking and threading servers to
process clients without blocking incoming requests, I should also tell
you that there are standard tools in the Python standard library to
make this process even easier. In particular, the socketserver module
defines classes that implement all flavors of forking and threading
servers that you are likely to be interested in.

Like the manually coded servers we’ve just studied, this module’s
primary classes implement servers which process clients in parallel
(a.k.a. asynchronously) to avoid denying service to new requests during
long-running transactions. Their net effect is to automate the top
levels of common server code. To use this module, simply create the
desired kind of imported server object, passing in a handler object
with a callback method of your own, as demonstrated in the threaded TCP
server of Example 12-8.

Example 12-8. PP4E\Internet\Sockets\class-server.py

"""
Server side: open a socket on a port, listen for a message from a client, and
send an echo reply; this version uses the standard library module socketserver to
do its work; socketserver provides TCPServer, ThreadingTCPServer, ForkingTCPServer,
UDP variants of these, and more, and routes each client connect request to a new
instance of a passed-in request handler object's handle method; socketserver also
supports Unix domain sockets, but only on Unixen; see the Python library manual.
"""
import socketserver, time # get socket server, handler objects
myHost = '' # server machine, '' means local host
myPort = 50007 # listen on a non-reserved port number
def now():
return time.ctime(time.time())
class MyClientHandler(socketserver.BaseRequestHandler):
def handle(self): # on each client connect
print(self.client_address, now()) # show this client's address
time.sleep(5) # simulate a blocking activity
while True: # self.request is client socket
data = self.request.recv(1024) # read, write a client socket
if not data: break
reply = 'Echo=>%s at %s' % (data, now())
self.request.send(reply.encode())
self.request.close()
# make a threaded server, listen/handle clients forever
myaddr = (myHost, myPort)
server = socketserver.ThreadingTCPServer(myaddr, MyClientHandler)
server.serve_forever()

This server works the same as the threading server we wrote by hand in
the previous section, but instead focuses on service implementation
(the customized handle method), not on threading details. It is run the
same way, too—here it is processing three clients started by hand, plus
eight spawned by the testecho script we wrote in Example 12-3:

[window 1: server, serverHost='localhost' in echo-client.py]
C:\...\PP4E\Internet\Sockets>
python class-server.py
('127.0.0.1', 59036) Sun Apr 25 13:50:23 2010
('127.0.0.1', 59037) Sun Apr 25 13:50:25 2010
('127.0.0.1', 59038) Sun Apr 25 13:50:26 2010
('127.0.0.1', 59039) Sun Apr 25 13:51:05 2010
('127.0.0.1', 59040) Sun Apr 25 13:51:05 2010
('127.0.0.1', 59041) Sun Apr 25 13:51:06 2010
('127.0.0.1', 59042) Sun Apr 25 13:51:06 2010
('127.0.0.1', 59043) Sun Apr 25 13:51:06 2010
('127.0.0.1', 59044) Sun Apr 25 13:51:06 2010
('127.0.0.1', 59045) Sun Apr 25 13:51:06 2010
('127.0.0.1', 59046) Sun Apr 25 13:51:06 2010
[windows 2-4: client, same machine]
C:\...\PP4E\Internet\Sockets>
python echo-client.py
Client received: b"Echo=>b'Hello network world' at Sun Apr 25 13:50:28 2010"
C:\...\PP4E\Internet\Sockets>
python echo-client.py localhost Arthur
Client received: b"Echo=>b'Arthur' at Sun Apr 25 13:50:30 2010"
C:\...\PP4E\Internet\Sockets>
python echo-client.py localhost Brave Sir Robin
Client received: b"Echo=>b'Brave' at Sun Apr 25 13:50:31 2010"
Client received: b"Echo=>b'Sir' at Sun Apr 25 13:50:31 2010"
Client received: b"Echo=>b'Robin' at Sun Apr 25 13:50:31 2010"
C:\...\PP4E\Internet\Sockets>
python testecho.py

To build a forking server instead, just use the class name
ForkingTCPServer when creating the server object. The socketserver
module has more power than shown by this example; it also supports
nonparallel (a.k.a. serial or synchronous) servers, UDP and Unix domain
sockets, and Ctrl-C server interrupts on Windows. See Python’s library
manual for more details.
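
For instance, a minimal sketch of the forking change (assuming the same
MyClientHandler and myaddr from Example 12-8; because ForkingTCPServer
relies on os.fork, this variant runs only on Unix-like platforms):

# fork a new process per client instead of spawning a thread
server = socketserver.ForkingTCPServer(myaddr, MyClientHandler)
server.serve_forever()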

For more advanced server needs, Python also comes with standard library
tools that use those shown here, and allow you to implement in just a
few lines of Python code a simple but fully functional HTTP (web)
server that knows how to run server-side CGI scripts. We’ll explore
those larger server tools in Chapter 15.

Multiplexing Servers with select

So far we’ve seen
how to handle multiple clients at once with both forked
processes and spawned threads, and we’ve looked at a library class that
encapsulates both schemes. Under both approaches, all client handlers
seem to run in parallel with one another and with the main dispatch loop
that continues watching for new incoming requests. Because all of these
tasks run in parallel (i.e., at the same time), the server doesn’t get
blocked when accepting new requests or when processing a long-running
client handler.

Technically, though, threads and processes don’t really run in
parallel, unless you’re lucky enough to have a machine with many CPUs.
Instead, your operating system performs a juggling act—it divides the
computer’s processing power among all active tasks. It runs part of
one, then part of another, and so on. All the tasks appear to run in
parallel, but only because the operating system switches focus between
tasks so fast that you don’t usually notice. This process of switching
between tasks is sometimes called time-slicing when done by an
operating system; it is more generally known as multiplexing.

When we spawn threads and processes, we rely on the operating
system to juggle the active tasks so that none are starved of computing
resources, especially the main server dispatcher loop. However, there’s
no reason that a Python script can’t do so as well. For instance, a
script might divide tasks into multiple steps—run a step of one task,
then one of another, and so on, until all are completed. The script need
only know how to divide its attention among the multiple active tasks to
multiplex on its own.
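
To make the idea concrete before bringing sockets back in, here is a
small, purely illustrative sketch (not from the book) that multiplexes
two tasks by hand, running one step of each per pass through a
scheduling loop; the tasks are written as generators so that each yield
hands control back to the loop:

def task(name, steps):                     # a task split into small steps
    for i in range(steps):
        print(name, 'step', i)
        yield                              # give control back to the scheduler

tasks = [task('spam', 3), task('eggs', 3)]
while tasks:
    for job in tasks[:]:                   # run one step of each active task
        try:
            next(job)
        except StopIteration:
            tasks.remove(job)              # task finished: drop it from the set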

Servers can apply this technique to yield yet another way to handle
multiple clients at once, a way that requires neither threads nor
forks. By multiplexing client connections and the main dispatcher with
the select system call, a single event loop can process multiple
clients and accept new ones in parallel (or at least close enough to
avoid stalling). Such servers are sometimes called asynchronous,
because they service clients in spurts, as each becomes ready to
communicate. In asynchronous servers, a single main loop run in a
single process and thread decides which clients should get a bit of
attention each time through. Client requests and the main dispatcher
loop are each given a small slice of the server’s attention if they are
ready to converse.

Most of the magic behind this server structure is the operating system
select call, available in Python’s standard select module on all major
platforms. Roughly, select is asked to monitor a list of input sources,
output sources, and exceptional condition sources and tells us which
sources are ready for processing. It can be made to simply poll all the
sources to see which are ready; wait for a maximum time period for
sources to become ready; or wait indefinitely until one or more sources
are ready for processing.

However used, select lets us direct attention to sockets ready to
communicate, so as to avoid blocking on calls to ones that are not.
That is, when the sources passed to select are sockets, we can be sure
that socket calls like accept, recv, and send will not block (pause)
the server when applied to objects returned by select. Because of that,
a single-loop server that uses select need not get stuck communicating
with one client or waiting for new ones while other clients are starved
for the server’s attention.

Because this type of server does not need to start threads or
processes, it can be efficient when transactions with clients are
relatively short-lived. However, it also requires that these
transactions be quick; if they are not, it still runs the risk of
becoming bogged down waiting for a dialog with a particular client to
end, unless augmented with threads or forks for long-running
transactions.[46]

A select-based echo server

Let’s see how all of this translates into code. The script in
Example 12-9 implements another echo server, one that can handle
multiple clients without ever starting new processes or threads.

Example 12-9. PP4E\Internet\Sockets\select-server.py

"""
Server: handle multiple clients in parallel with select. use the select
module to manually multiplex among a set of sockets: main sockets which
accept new client connections, and input sockets connected to accepted
clients; select can take an optional 4th arg--0 to poll, n.m to wait n.m
seconds, or omitted to wait till any socket is ready for processing.
"""
import sys, time
from select import select
from socket import socket, AF_INET, SOCK_STREAM
def now(): return time.ctime(time.time())
myHost = '' # server machine, '' means local host
myPort = 50007 # listen on a non-reserved port number
if len(sys.argv) == 3: # allow host/port as cmdline args too
myHost, myPort = sys.argv[1:]
numPortSocks = 2 # number of ports for client connects
# make main sockets for accepting new client requests
mainsocks, readsocks, writesocks = [], [], []
for i in range(numPortSocks):
portsock = socket(AF_INET, SOCK_STREAM) # make a TCP/IP socket object
portsock.bind((myHost, myPort)) # bind it to server port number
portsock.listen(5) # listen, allow 5 pending connects
mainsocks.append(portsock) # add to main list to identify
readsocks.append(portsock) # add to select inputs list
myPort += 1 # bind on consecutive ports
# event loop: listen and multiplex until server process killed
print('select-server loop starting')
while True:
#print(readsocks)
readables, writeables, exceptions = select(readsocks, writesocks, [])
for sockobj in readables:
if sockobj in mainsocks: # for ready input sockets
# port socket: accept new client
newsock, address = sockobj.accept() # accept should not block
print('Connect:', address, id(newsock)) # newsock is a new socket
readsocks.append(newsock) # add to select list, wait
else:
# client socket: read next line
data = sockobj.recv(1024) # recv should not block
print('\tgot', data, 'on', id(sockobj))
if not data: # if closed by the clients
sockobj.close() # close here and remv from
readsocks.remove(sockobj) # del list else reselected
else:
# this may block: should really select for writes too
reply = 'Echo=>%s at %s' % (data, now())
sockobj.send(reply.encode())

The bulk of this script is its while event loop at the end that calls
select to find out which sockets are ready for processing; these
include both main port sockets on which clients can connect and open
client connections. It then loops over all such ready sockets,
accepting connections on main port sockets and reading and echoing
input on any client sockets ready for input. Both the accept and recv
calls in this code are guaranteed to not block the server process after
select returns; as a result, this server can quickly get back to the
top of the loop to process newly arrived client requests and already
connected clients’ inputs. The net effect is that all new requests and
clients are serviced in pseudoparallel fashion.

To make this process work, the server appends the connected socket for
each client to the readsocks list passed to select, and simply waits
for the socket to show up in the selected inputs list. For illustration
purposes, this server also listens for new clients on more than one
port—on ports 50007 and 50008, in our examples. Because these main port
sockets are also interrogated with select, connection requests on
either port can be accepted without blocking either already connected
clients or new connection requests appearing on the other port. The
select call returns whatever sockets in readsocks are ready for
processing—both main port sockets and sockets connected to clients
currently being processed.

Running the select server

Let’s run this script locally to see how it does its stuff (the client
and server can also be run on different machines, as in prior socket
examples). First, we’ll assume we’ve already started this server script
on the local machine in one window, and run a few clients to talk to
it. The following listing gives the interaction in two such client
console windows running on Windows. The first client simply runs the
echo-client script twice to contact the server, and the second also
kicks off the testecho script to spawn eight echo-client programs
running in parallel.

As before, the server simply echoes back whatever text the client
sends, though without a sleep pause here (more on this in a moment).
Notice how the second client window really runs a script called
echo-client-50008 so as to connect to the second port socket in the
server; it’s the same as echo-client, with a different hardcoded port
number; alas, the original script wasn’t designed to input a port
number:

[client window 1]
C:\...\PP4E\Internet\Sockets>
python echo-client.py
Client received: b"Echo=>b'Hello network world' at Sun Apr 25 14:51:21 2010"
C:\...\PP4E\Internet\Sockets>
python echo-client.py
Client received: b"Echo=>b'Hello network world' at Sun Apr 25 14:51:27 2010"
[client window 2]
C:\...\PP4E\Internet\Sockets>
python echo-client-50008.py localhost Sir Galahad
Client received: b"Echo=>b'Sir' at Sun Apr 25 14:51:22 2010"
Client received: b"Echo=>b'Galahad' at Sun Apr 25 14:51:22 2010"
C:\...\PP4E\Internet\Sockets>
python testecho.py

The next listing is the sort of output that shows up in the window
where the server has been started. The first three connections come
from echo-client runs; the rest is the result of the eight programs
spawned by testecho in the second client window. We can run this server
on Windows, too, because select is available on this platform.
Correlate this output with the server’s code to see how it runs.

Notice that for testecho, new client connections and client inputs are
multiplexed together. If you study the output closely, you’ll see that
they overlap in time, because all activity is dispatched by the single
event loop in the server. In fact, the trace output on the server will
probably look a bit different nearly every time it runs. Clients and
new connections are interleaved almost at random due to timing
differences on the host machines. This happens in the earlier forking
and threading servers, too, but the operating system automatically
switches between the execution paths of the dispatcher loop and client
transactions.

Also note that the server gets an empty string when the client
has closed its socket. We take care to close and delete these sockets
at the server right away, or else they would be needlessly reselected
again and again, each time through the main loop:

[server window]
C:\...\PP4E\Internet\Sockets>
python select-server.py
select-server loop starting
Connect: ('127.0.0.1', 59080) 21339352
got b'Hello network world' on 21339352
got b'' on 21339352
Connect: ('127.0.0.1', 59081) 21338128
got b'Sir' on 21338128
got b'Galahad' on 21338128
got b'' on 21338128
Connect: ('127.0.0.1', 59082) 21339352
got b'Hello network world' on 21339352
got b'' on 21339352
[testecho results]
Connect: ('127.0.0.1', 59083) 21338128
got b'Hello network world' on 21338128
got b'' on 21338128
Connect: ('127.0.0.1', 59084) 21339352
got b'Hello network world' on 21339352
got b'' on 21339352
Connect: ('127.0.0.1', 59085) 21338128
got b'Hello network world' on 21338128
got b'' on 21338128
Connect: ('127.0.0.1', 59086) 21339352
got b'Hello network world' on 21339352
got b'' on 21339352
Connect: ('127.0.0.1', 59087) 21338128
got b'Hello network world' on 21338128
got b'' on 21338128
Connect: ('127.0.0.1', 59088) 21339352
Connect: ('127.0.0.1', 59089) 21338128
got b'Hello network world' on 21339352
got b'Hello network world' on 21338128
Connect: ('127.0.0.1', 59090) 21338056
got b'' on 21339352
got b'' on 21338128
got b'Hello network world' on 21338056
got b'' on 21338056

Besides this more verbose output, there’s another subtle but crucial
difference to notice—a time.sleep call to simulate a long-running task
doesn’t make sense in the server here. Because all clients are handled
by the same single loop, sleeping would pause everything, and defeat
the whole point of a multiplexing server. Again, manual multiplexing
servers like this one work well when transactions are short, but also
generally require them to either be so, or be handled specially.

Before we move on, here are a few additional notes and
options:

select call details

Formally, select is called with three lists of selectable objects
(input sources, output sources, and exceptional condition sources),
plus an optional timeout. The timeout argument may be a real wait
expiration value in seconds (use floating-point numbers to express
fractions of a second), a zero value to mean simply poll and return
immediately, or omitted to mean wait until at least one object is ready
(as done in our server script). The call returns a triple of ready
objects—subsets of the first three arguments—any or all of which may be
empty if the timeout expired before sources became ready.
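
In code, the three forms look roughly like the following (a sketch
only; readsocks and writesocks here are the lists built in
Example 12-9):

from select import select

r, w, e = select(readsocks, writesocks, [], 0)     # poll: return right away
r, w, e = select(readsocks, writesocks, [], 2.5)   # wait at most 2.5 seconds
r, w, e = select(readsocks, writesocks, [])        # wait until a source is ready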

select portability

Like threading, but unlike forking, this server works in standard
Windows Python, too. Technically, the select call works only for
sockets on Windows, but also works for things like files and pipes on
Unix and Macintosh. For servers running over the Internet, of course,
the primary devices we are interested in are sockets.

Nonblocking sockets

select lets us be sure that socket calls like accept and recv won’t
block (pause) the caller, but it’s also possible to make Python sockets
nonblocking in general. Call the setblocking method of socket objects
to set the socket to blocking or nonblocking mode. For example, given a
call like sock.setblocking(flag), the socket sock is set to nonblocking
mode if the flag is zero and to blocking mode otherwise. All sockets
start out in blocking mode initially, so socket calls may always make
the caller wait.

However, when in nonblocking mode, a socket.error exception is raised
if a recv socket call doesn’t find any data, or if a send call can’t
immediately transfer data. A script can catch this exception to
determine whether the socket is ready for processing. In blocking mode,
these calls always block until they can proceed. Of course, there may
be much more to processing client requests than data transfers
(requests may also require long-running computations), so nonblocking
sockets don’t guarantee that servers won’t stall in general. They are
simply another way to code multiplexing servers. Like select, they are
better suited when client requests can be serviced quickly.
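
The following is a hedged sketch of this style on the client side (it
assumes an echo server such as Example 12-7 is already listening on
port 50007; a real script would do useful work instead of just looping
on the exception):

from socket import socket, AF_INET, SOCK_STREAM, error

sock = socket(AF_INET, SOCK_STREAM)
sock.connect(('localhost', 50007))        # connect in blocking mode first
sock.setblocking(False)                   # then switch to nonblocking mode
sock.send(b'Hello network world')
while True:
    try:
        data = sock.recv(1024)            # raises socket.error if not ready yet
    except error:
        pass                              # no data yet: do other work, retry
    else:
        print('Received:', data)
        break
sock.close()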

The asyncore module framework

If you’re interested in using select, you will probably also be
interested in checking out the asyncore.py module in the standard
Python library. It implements a class-based callback model, where input
and output callbacks are dispatched to class methods by a precoded
select event loop. As such, it allows servers to be constructed without
threads or forks, and it is a select-based alternative to the
socketserver module’s threading and forking classes we met in the prior
sections. As for this type of server in general, asyncore is best when
transactions are short—what it describes as “I/O bound” instead of
“CPU bound” programs, the latter of which still require threads or
forks. See the Python library manual for details and a usage example.
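
As a very rough sketch of the callback style (patterned on the general
model in the library manual, not on this book’s examples), an
asyncore-based echo server might look like this:

import asyncore, socket

class EchoHandler(asyncore.dispatcher_with_send):
    def handle_read(self):                          # called when data is ready
        data = self.recv(1024)
        if data:
            self.send(b'Echo=>' + data)

class EchoServer(asyncore.dispatcher):
    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind((host, port))
        self.listen(5)
    def handle_accept(self):                        # called on client connects
        pair = self.accept()
        if pair is not None:
            connection, address = pair
            print('Connect:', address)
            EchoHandler(connection)                 # wrap socket in a handler

EchoServer('', 50007)
asyncore.loop()                                     # precoded select event loop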

Twisted

For other server options, see also the open source Twisted system
(http://twistedmatrix.com). Twisted is an asynchronous networking
framework written in Python that supports TCP, UDP, multicast, SSL/TLS,
serial communication, and more. It supports both clients and servers
and includes implementations of a number of commonly used network
services such as a web server, an IRC chat server, a mail server, a
relational database interface, and an object broker.

Although Twisted supports processes and threads for longer-running
actions, it also uses an asynchronous, event-driven model to handle
clients, which is similar to the event loop of GUI libraries like
tkinter. It abstracts an event loop, which multiplexes among open
socket connections, automates many of the details inherent in an
asynchronous server, and provides an event-driven framework for scripts
to use to accomplish application tasks. Twisted’s internal event engine
is similar in spirit to our select-based server and the asyncore
module, but it is regarded as much more advanced. Twisted is a
third-party system, not a standard library tool; see its website and
documentation for more details.
