Ondrej,

I need to take my kids somewhere, but the answer to your question is
*yes, absolutely*: you can interact with a controller and engines
running on a cluster somewhere from an IPython session on your laptop
in Starbucks.  This is *the* main focus of our parallel stuff in
IPython.  More details to follow later.
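
In the meantime, here is a rough sketch (from memory, untested; the
furl filename is an assumption --- use whichever client furl your
ipcontroller writes into its security directory):

    from IPython.kernel import client

    # Connect to the remote controller using the client furl file that
    # ipcontroller writes out; copy it from the cluster to your laptop.
    mec = client.MultiEngineClient('ipcontroller-mec.furl')
    print mec.get_ids()          # ids of the engines the controller sees
    mec.execute('import sympy')  # runs on every engine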

Cheers,

Brian

On Thu, Mar 19, 2009 at 12:58 PM, Ondrej Certik <ond...@certik.cz> wrote:
>
> On Thu, Mar 19, 2009 at 10:29 AM, Brian Granger <ellisonbg....@gmail.com> wrote:
>>
>> Yes, we do have built-in dynamic load balancing.  Check out the following:
>>
>> client.TaskClient
>> client.StringTask
>> client.MapTask
>>
>> All of this is dynamically load balanced.  Here is how I would approach this:
>>
>> 1. Use MultiEngineClient to setup all the basic imports and variables
>> that all the engines need.
>>
>> 2. Create a TaskClient and then submit many StringTasks/MapTasks to
>> it (sketched below).
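>>
>> Roughly, in code (a sketch, not tested; the example expression is
>> made up, and the furl files are assumed to be in their default
>> location):
>>
>> from IPython.kernel import client
>>
>> # Step 1: do the common setup once on all engines.
>> mec = client.MultiEngineClient()
>> mec.execute('import sympy')
>>
>> # Step 2: submit load-balanced tasks through the TaskClient.
>> tc = client.TaskClient()
>> task = client.StringTask('result = sympy.simplify(expr)',
>>                          push=dict(expr='(x + 1)**2 - x**2 - 2*x - 1'),
>>                          pull=['result'])
>> task_id = tc.run(task)
>> print tc.get_task_result(task_id, block=True)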
>>
>> The only thing to note is that currently the TaskClient has slightly
>> more overhead than the MultiEngineClient, so you want to make sure
>> that each task lasts long enough to be worth it.  If your tasks are
>> really short and you are not seeing good speedup, you may want each
>> StringTask/MapTask instance to handle a small batch of actual tasks,
>> as sketched below.
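>>
>> For example, continuing the sketch above (run_tests and all_tests are
>> hypothetical names for whatever function runs a batch of your tests
>> and for the list of tests):
>>
>> batch_size = 50
>> task_ids = []
>> for start in range(0, len(all_tests), batch_size):
>>     batch = all_tests[start:start + batch_size]
>>     # One MapTask per batch, so each round trip does a decent chunk of work.
>>     task_ids.append(tc.run(client.MapTask(run_tests, args=(batch,))))
>>
>> results = [tc.get_task_result(tid, block=True) for tid in task_ids]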
>>
>> But all of this should "Just Work".
>>
>> Right now the best documentation for this is in the docstrings:
>>
>> http://ipython.scipy.org/doc/nightly/html/api/generated/IPython.kernel.taskclient.html
>>
>> http://ipython.scipy.org/doc/nightly/html/api/generated/IPython.kernel.task.html#IPython.kernel.task.StringTask
>>
>> http://ipython.scipy.org/doc/nightly/html/api/generated/IPython.kernel.task.html#IPython.kernel.task.MapTask
>>
>> If these task types don't work for you, it is also possible to define
>> new task types.
>
> Thanks Brian, that should do it. Yes, we have ~1400 small tests, so I
> can group them into larger batches as you suggested.
>
> One more beginner question, as I am new to cluster computing. :) After
> I install all the necessary dependencies on our UNR cluster and then
> submit a job that executes ipcluster (using mpirun), I will have an
> ipcluster running. To connect to it, I can submit another job that
> runs my script (e.g. it imports IPython and all should be fine). But
> is there some way to deal with the ipcluster interactively from an
> ipython session? Because that would really rock. I guess it depends on
> how the cluster is built, which I won't know until I try it. I thought
> that maybe if I ran ipython on the master node it could interact with
> the ipcluster --- but maybe that is forbidden, in which case it will
> not be possible to interact with it. When you run ipython on some big
> cluster, are you able to actually interact with it?
>
> Ondrej
>
