[ANN]: asyncoro: Framework for asynchronous, concurrent, distributed programming

2012-07-02 Thread Giridhar Pemmasani
Hi,

I would like to announce asyncoro (http://asyncoro.sourceforge.net),
a Python framework for developing concurrent, distributed programs with
asynchronous completions and coroutines. asyncoro's features include:

  * Asynchronous (non-blocking) sockets
  * Efficient polling mechanisms epoll, kqueue, /dev/poll
    (and poll and select if necessary), and Windows I/O Completion Ports (IOCP)
    for high performance and scalability
  * SSL for security
  * Timers, including non-blocking sleep
  * Locking primitives similar to those in Python's threading module
  * Thread pools with asynchronous task completions (for executing
    time-consuming synchronous tasks)
  * Asynchronous database cursor operations (using asynchronous thread pool)
  * Communicating coroutines with messages
  * Remote execution of coroutines
  * Coroutines monitoring other coroutines (to get exit status notifications)
  * Hot-swapping and restarting of coroutine functions

Programs developed with asyncoro have the same logic and structure as
programs that use threads, except for a few syntactic changes. With
asyncoro's message passing, coroutines can exchange messages
one-to-one or through (broadcasting) channels. Coroutines exchanging
messages can be local (within a single asyncoro instance) or
distributed (across many asyncoro instances on a network).
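
For illustration, the following is a minimal sketch of one-to-one
message passing between two local coroutines. It assumes asyncoro's
documented API (generator functions with a 'coro=None' parameter,
asyncoro.Coro to create coroutines, coro.send and 'yield coro.receive()'
for messages, and Coro.value() to wait from the main thread); treat the
details as approximate rather than definitive.

    import asyncoro

    def receiver(coro=None):
        # process messages until a None sentinel arrives
        while True:
            msg = yield coro.receive()
            if msg is None:
                break
            print('received: %s' % msg)

    def sender(rcoro, coro=None):
        for i in range(3):
            rcoro.send('message %d' % i)
            yield coro.sleep(0.01)   # let the receiver run
        rcoro.send(None)

    rcoro = asyncoro.Coro(receiver)
    asyncoro.Coro(sender, rcoro)
    rcoro.value()   # wait (from the main thread) for the receiver to finish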

Cheers,
Giri


[ANN]: asyncoro: Framework for asynchronous programming with coroutines

2012-04-06 Thread Giridhar Pemmasani
I posted this message to the list earlier, but realized that the URLs
appeared broken, with a '.' at the end of each URL. Sorry for that
mistake and for this duplicate!

asyncoro is a framework for developing concurrent programs with
asynchronous event completions and coroutines. Asynchronous
completions currently implemented in asyncoro are socket I/O
operations, sleep timers, (conditional) event notification and
semaphores. Programs developed with asyncoro will have the same logic as
Python programs with synchronous sockets and threads, except for a few
syntactic changes. asyncoro supports polling mechanisms epoll, kqueue,
/dev/poll (and poll and select if necessary), and Windows I/O
Completion Ports (IOCP) for high performance and scalability, and SSL
for security.
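
As an illustration of the non-blocking socket support, here is a hedged
sketch of a simple client coroutine. It assumes the AsyncSocket wrapper
described in asyncoro's documentation (named AsynCoroSocket in some
earlier releases); the host, port and payload are placeholders.

    import socket
    import asyncoro

    def client(host, port, coro=None):
        # wrap a regular socket; operations are then used with 'yield',
        # so other coroutines keep running while I/O is pending
        sock = asyncoro.AsyncSocket(
            socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        yield sock.connect((host, port))
        yield sock.sendall(b'ping')
        reply = yield sock.recv(1024)
        print('reply: %r' % reply)
        sock.close()

    asyncoro.Coro(client, '127.0.0.1', 8010)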

More details about asyncoro are at
http://dispy.sourceforge.net/asyncoro.html and it can be downloaded
from https://sourceforge.net/projects/dispy/files

asyncoro is used in the development of dispy (http://dispy.sourceforge.net),
a framework for parallel execution of computations by distributing them
across multiple processors on a single machine (SMP), multiple nodes
in a cluster, or large clusters of nodes. The computations can be
standalone programs or Python functions.

Both asyncoro and dispy have been tested with Python versions 2.7 and
3.2 under Linux, OS X and Windows.

Cheers,
Giri


[ANN]: asyncoro: Framework for asynchronous sockets and coroutines

2012-04-05 Thread Giridhar Pemmasani
asyncoro is a framework for developing concurrent programs with
asynchronous event completions and coroutines. Asynchronous
completions currently implemented in asyncoro are socket I/O
operations, sleep timers, (conditional) event notification and
semaphores. Programs developed with asyncoro will have the same logic as
Python programs with synchronous sockets and threads, except for a few
syntactic changes. asyncoro supports polling mechanisms epoll, kqueue,
/dev/poll (and poll, select if necessary), and Windows I/O
Completion Ports (IOCP) for high performance and scalability, and SSL
for security.
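
To make the comparison with threads concrete, here is a small hedged
sketch that starts several coroutines, each sleeping without blocking
the others; it assumes the Coro, sleep and value calls from asyncoro's
documentation.

    import random
    import asyncoro

    def worker(i, coro=None):
        delay = random.uniform(0.1, 0.5)
        print('coroutine %d sleeping %.2f sec' % (i, delay))
        # suspends only this coroutine, much like time.sleep()
        # suspends only the calling thread
        yield coro.sleep(delay)
        print('coroutine %d done' % i)

    coros = [asyncoro.Coro(worker, i) for i in range(5)]
    for c in coros:
        c.value()   # wait (from the main thread) for each coroutine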

More details about asyncoro are at http://dispy.sourceforge.net/asyncoro.html.
It can be downloaded from https://sourceforge.net/projects/dispy/files.

asyncoro is used in the development of dispy (http://dispy.sourceforge.net), which
is a framework for parallel execution of computations by distributing them
across multiple processors on a single machine (SMP), multiple nodes
in a cluster, or large clusters of nodes. The computations can be
standalone programs or Python functions.

Both asyncoro and dispy have been tested with Python versions 2.7 and
3.2 under Linux, OS X and Windows.

Cheers,
Giri


[ANN]: Python module to distribute computations for parallel execution

2012-02-15 Thread Giridhar Pemmasani
Hello,

I would like to announce dispy (http://dispy.sourceforge.net), a
Python framework for distributing computations for parallel execution
across processors/cores on a single node or across many nodes over a
network. The computations can be Python functions or programs. If
there are any dependencies, such as other Python functions, modules,
classes, objects or files, they are distributed as well. The results
of each computation, including output, error messages and exception
trace, if any, are made available to the client program for further
processing. Popular map/reduce style programs can be easily developed
and deployed with dispy.
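
For a concrete picture, here is a hedged sketch of the basic usage
pattern with dispy's JobCluster; the compute function and its inputs
are illustrative, and the calls follow dispy's documented API (submit
returns a job object, and calling the job waits for and returns its
result).

    import dispy

    def compute(n):
        import time
        time.sleep(n)
        return n * n

    if __name__ == '__main__':
        cluster = dispy.JobCluster(compute)
        jobs = [cluster.submit(n) for n in range(5)]
        for job in jobs:
            result = job()        # waits for the job and returns its result
            if job.exception:     # exception trace, if the job failed
                print(job.exception)
            else:
                print('result: %s (stdout: %r)' % (result, job.stdout))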

There is also an implementation of dispy, called discopy, that uses
asynchronous I/O and coroutines, so that discopy will scale
efficiently to a large number of network connections (right now this
is a bit academic, until it has been tested with such setups). The
framework with asynchronous I/O and coroutines, called asyncoro, is
independent of dispy; discopy is an implementation of dispy using
asyncoro. Others may find asyncoro itself useful.

Salient features of dispy/discopy are:

  * Computations (Python functions or standalone programs) and their
    dependencies (files, Python functions, classes, modules) are
    distributed automatically.

  * Computation nodes can be anywhere on the network (local or
    remote). For security, either simple hash-based authentication or
    SSL encryption can be used.

  * A computation may specify which nodes are allowed to execute it
    (for now, using simple patterns of IP addresses).

  * After each execution is finished, the results of execution,
    output, errors and exception trace are made available for further
    processing.

  * If a callback function is provided, dispy executes that function
    when a job is finished; this feature is useful for further
    processing of job results (a sketch follows after this list).

  * Nodes may become available dynamically: dispy will schedule jobs
    whenever a node is available and computations can use that node.

  * Client-side and server-side fault recovery are supported:

    If the user program (client) terminates unexpectedly (e.g., due to an
    uncaught exception), the nodes continue to execute scheduled jobs.
    If the client-side fault recovery option is used when creating a cluster,
    the results of the scheduled (but unfinished at the time of the crash)
    jobs for that cluster can be easily retrieved later.

    If a computation is marked re-entrant (with the 'resubmit=True' option)
    when a cluster is created and a node (server) executing jobs for
    that computation fails, dispy automatically resubmits those jobs
    to other available nodes.

  * In optimization problems it is useful for computations to send
    (successive) provisional results back to the client so that it can,
    for example, terminate computations early. If computations are Python
    functions, they can use the 'dispy_provisional_result' function for
    this purpose.

  * dispy can be used in a single process to use all the nodes
    exclusively (with JobCluster, which is simpler to use) or in multiple
    processes simultaneously sharing the nodes (with SharedJobCluster
    and dispyscheduler).
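
As a hedged sketch of the callback feature mentioned above, the
following processes results as jobs finish instead of waiting for all
of them; the callback parameter and the DispyJob.Finished status
constant follow dispy's documentation, but exact names may differ
between versions.

    import dispy

    def compute(n):
        return n * n

    def job_callback(job):
        # invoked by dispy as job status changes; act on finished jobs
        if job.status == dispy.DispyJob.Finished:
            print('job %s -> %s' % (job.id, job.result))

    if __name__ == '__main__':
        cluster = dispy.JobCluster(compute, callback=job_callback)
        for n in range(10):
            job = cluster.submit(n)
            job.id = n       # client-assigned id, available in the callback
        cluster.wait()       # block until all submitted jobs finish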

dispy works with Python 2.7. It has been tested on Linux and Mac OS X,
and is known to work with Windows. discopy has been tested on Linux and
Mac OS X.

I am not subscribed to the list, so please Cc me if you have comments.

Cheers,
Giri


[ANN] dispy: distribute computations and execute in parallel

2011-09-10 Thread Giridhar Pemmasani
Hello,

I would like to announce dispy (http://dispy.sf.net), which can
distribute and parallelize computations among computing nodes over a
network (yes, yet another implementation of parallelization). This is
useful for problems in the SIMD paradigm, where a computation can be
executed with multiple data items simultaneously. Salient features of
dispy are:

 * Computations (Python functions or standalone programs) and their
   dependencies (files, Python functions, classes, modules) are
   distributed automatically as and when needed.

 * Computation nodes can be anywhere on the network. For security,
   either simple hash-based authentication or SSL encryption can be
   used (see the sketch after this list).

 * A computation may specify which nodes are allowed to execute it
   (for now, using simple patterns of IP addresses).

 * After each execution is finished, the results of execution, output,
   errors and exception trace are made available for further
   processing.

 * Nodes may become available dynamically: dispy will schedule jobs
   whenever a node is available and computations can use that node. If
   a node fails while executing scheduled jobs, those jobs may be
   resubmitted to other nodes if computations allow it.

 * dispy can be used in a single process to use all the nodes
   exclusively (simpler to use) or in multiple processes sharing the
   nodes.
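
As an illustration of the security options mentioned in the list above,
here is a hedged sketch; the 'secret', 'certfile' and 'keyfile'
parameter names follow dispy's documentation, and the values are
placeholders.

    import dispy

    def compute(n):
        return n * n

    if __name__ == '__main__':
        # simple hash-based authentication: nodes must be started
        # with the same shared secret
        cluster = dispy.JobCluster(compute, secret='shared-secret')

        # alternatively, SSL-encrypted communication (paths are examples):
        # cluster = dispy.JobCluster(compute, certfile='cert.pem',
        #                            keyfile='key.pem')

        job = cluster.submit(4)
        print(job())   # waits for the job and prints its result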

Currently dispy works with Python 2.7 and has been tested with nodes
running Linux and OS X.

Thanks,
Giri