[web2py] Re: run web2py scheduler instances on 2 or more servers

2017-01-17 Thread Dave S


On Monday, January 16, 2017 at 8:01:56 PM UTC-8, Manjinder Sandhu wrote:
>
> Hi Andrey/Niphlod,
>
> *Is there a way I can connect servers via SQLite?*
>
> *Regards,*
>
> *Manjinder*
>

If you mean for the scheduler, sure. The connection string handles that 
(although I've only connected to sqlite3 from the local machine). But 
sqlite3 has limitations regarding simultaneous access, because it locks at 
the file level rather than the record level. If your environment puts a 
light load on all the servers involved, this might not be a problem, and it 
would certainly work for development, but as your load increases the 
limitations will become more apparent.

/dps


 


-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: run web2py scheduler instances on 2 or more servers

2017-01-16 Thread Manjinder Sandhu
Hi Andrey/Niphlod,

*Is there a way I can connect servers via SQLite?*

*Regards,*

*Manjinder*



[web2py] Re: run web2py scheduler instances on 2 or more servers

2014-03-04 Thread Andrey K
Thanks Niphlod, as usual a very detailed and great answer. Thank you a lot!
After your answer I checked the web and found several tools that do 
cluster management specifically: StarCluster and Elasticluster. I am really 
keen to try the latter one; it looks well suited to GCE and EC2 work. 
However, now I know better how I can utilize the w2p scheduler. After 
figuring out how Elasticluster works, I might blend the w2p scheduler with EC.

Thanks again! Really appreciate your help!




[web2py] Re: run web2py scheduler instances on 2 or more servers

2014-03-03 Thread Niphlod


On Monday, March 3, 2014 1:10:08 PM UTC+1, Andrey K wrote:
>
> Wow, what an answer! Niphlod, thanks a lot for such detailed info with 
> examples - now it is crystal clear for me. Very great help, really 
> appreciate it!!!
>
> Your answer made me clarify the future architecture for my app. Before, I 
> thought to use Amazon internal tools for task distribution; now I think I 
> can use the w2p scheduler, at least for the first stage, or maybe permanently.
>
> I have several additional questions, if you allow me. Hope it helps 
> other members of the w2p club.
> The plan is to start Amazon servers (with web2py preinstalled) 
> programmatically when I need them, with the purpose of running the w2p 
> scheduler on them.
> Could you give me your point of view on the following questions that I 
> need to address in order to build such a service:
> 1) Can I set up and cancel workers under web2py programmatically, which is 
> equivalent to 'python web2py.py -K myapp:fast,myapp:fast,myapp:fast'?
>

you can put them to sleep, terminate or kill them (read the book or use 
w2p_scheduler_tests to get comfortable with the terms), but there's no 
"included" way to start them on demand. That job is left to various pieces 
of software built from the ground up to manage external processes... 
upstart, systemd, circus, gaffer, supervisord, foreman, etc. are all good 
matches, but each one has a particular design in mind and is totally 
outside the scope of web2py. Coordinating processes among a set of 
servers just needs a more complicated solution than web2py itself.
 

> 2) What is the best way to monitor the load of the server, to make a 
> decision to start a new worker or a new server depending on the resources 
> left?
>

It depends on what you mean by load. Just looking at your question, I see 
that you never had to manage such an architecture :-P ... usually you don't 
want to monitor the load "of the server" to ADD additional workers... you 
want to monitor the load "of the server" to KILL additional workers, or ADD 
servers to process the jobs while watching the load "of the infrastructure". 
Again, usually - because basically every app has its own priorities - you'd 
want to set an estimate (a KPI) on how much the queue can grow before jobs 
are actually processed, and if the queue is growing faster than the 
processed items, start either a new worker or a new virtual machine.
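As a toy illustration of that rule of thumb (the function name, thresholds, and units are invented for the example; web2py provides no such API): the trigger is queue growth versus throughput, measured against a KPI, not raw server load.

```python
# Toy autoscaling rule: compare queue growth against processing throughput.
# All names and thresholds here are illustrative assumptions.
def scaling_decision(queued_per_min, processed_per_min, backlog, backlog_kpi):
    """Return 'add-capacity', 'remove-capacity' or 'hold'."""
    if backlog > backlog_kpi and queued_per_min > processed_per_min:
        # queue already past the KPI and still growing: add a worker or a VM
        return "add-capacity"
    if backlog == 0 and processed_per_min > queued_per_min:
        # nothing waiting and spare throughput: extra workers can be killed
        return "remove-capacity"
    return "hold"

print(scaling_decision(120, 80, 500, 300))  # add-capacity
print(scaling_decision(10, 50, 0, 300))     # remove-capacity
print(scaling_decision(50, 50, 100, 300))   # hold
```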
 

> 3) Is it possible to set up a folder on a dedicated server for web2py file 
> uploads and make it accessible to all web2py instances = job workers?
>

Linux has all kinds of support for that: either an SMB share or an NFS 
share is the simplest thing to do. A Ceph cluster is probably more 
complicated, but again, we're outside the scope of web2py.
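On the web2py side, a minimal sketch of pointing upload storage at such a share (the mount point is an assumption; `uploadfolder` on `Field` is what directs where uploaded files land):

```python
# Sketch for a web2py model file; /mnt/shared/uploads is a hypothetical
# NFS/SMB mount that every box running a worker must see at the same path.
db.define_table('document',
    Field('attachment', 'upload', uploadfolder='/mnt/shared/uploads'))
```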



[web2py] Re: run web2py scheduler instances on 2 or more servers

2014-03-03 Thread Andrey K
Wow, what an answer! Niphlod, thanks a lot for such detailed info with 
examples - now it is crystal clear for me. Very great help, really 
appreciate it!!!

Your answer made me clarify the future architecture for my app. Before, I 
thought to use Amazon internal tools for task distribution; now I think I 
can use the w2p scheduler, at least for the first stage, or maybe permanently.

I have several additional questions, if you allow me. Hope it helps other 
members of the w2p club.
The plan is to start Amazon servers (with web2py preinstalled) 
programmatically when I need them, with the purpose of running the w2p 
scheduler on them.
Could you give me your point of view on the following questions that I 
need to address in order to build such a service:
1) Can I set up and cancel workers under web2py programmatically, which is 
equivalent to 'python web2py.py -K myapp:fast,myapp:fast,myapp:fast'?
2) What is the best way to monitor the load of the server, to make a 
decision to start a new worker or a new server depending on the resources 
left?

Looking forward to your answer.
Thank you in advance.


[web2py] Re: run web2py scheduler instances on 2 or more servers

2014-03-01 Thread Niphlod


On Friday, February 28, 2014 10:04:59 PM UTC+1, Andrey K wrote:
>
> Thanks Niphlod for your answer. It is already great if it is possible even 
> theoretically!
>

it does even in the real world :-P

> Regarding implementation. Correct me please if I am wrong in the following 
> understanding - am I right that I need to:
> 1) copy the whole web2py and myapp with the necessary libs to another server.
> 2) set up the db connection in the 2nd box's web2py myapp to the remote DB 
> (first server), like:
> db = DAL('postgres://postgres:a...@196.xx.xx.xx/test', lazy_tables=False, 
> migrate=True, fake_migrate=False)
>

I'd say that migrate must be False here: the "original" one is already doing 
the migrations. Also, after the scheduler tables have been created, use
from gluon.scheduler import Scheduler
mysched = Scheduler(db, migrate=False)

> 3) run the web2py scheduler on the second box as: 
> web2py.py -K appname 
>

exactly
 

>
> Is that all??? It sounds like magic...
> How does the second server's job scheduler find out about the task to call?
>

exactly as the one on the "original" server. All the needed information is 
right in the scheduler_* tables; nothing else is required.
 

> If it works, I wonder how to control which server (not worker) gets the 
> next task - like, what should I put in the model to force a job to be 
> assigned to the box I want?
>

The workers will coordinate themselves and pick up a new task whenever it's 
ready to be processed. The whole point of the scheduler is having a task 
processed ASAP, no matter who processes it: the first available worker will 
do the job.
Usually, given that a task is queued and ready to be processed, you don't 
even want to know who processes it, as long as it is processed: that's how 
the scheduler works with default settings.

That being said, if you want to send a specific task to a worker that 
exists only on a specific server, you can do so with "group_name"(s).
The concept is having different groups of tasks processed by the schedulers 
able to do so (i.e. one has all the libs, the other has only a few, or the 
first one is more "powerful", the other less "powerful", etc.).

You can then specify that one worker has to process only one group of tasks 
and the other one only another group of tasks.

As specified in the book, the syntax to launch schedulers with one or more 
group_name is 

python web2py.py -K myapp:group1:group2,myotherapp:group1



in that case, you'd launch

python web2py.py -K myapp:original

on the original server and

python web2py.py -K myapp:additional

on the additional one.

You can then queue tasks to be processed from the "original" box as
mysched.queue_task(.., group_name='original')
and tasks to be processed from the "additional" one as
mysched.queue_task(.., group_name='additional')


PS: all combinations of scheduler and group_names are allowed... let's say 
the "additional" box needs to "help" the "original" one too; then you'll 
have
- 1 worker from the original box processing "original"
- 1 worker from the additional box processing "additional" AND "original"
then if you launch

python web2py.py -K myapp:original 
on the first box and
python web2py.py -K myapp:original:additional
on the additional

the workers will coordinate to let BOTH workers process tasks with 
"original" group_name and only the additional box to process tasks with the 
"additional" group.

Another example (more complicated): you have a set of tasks that need 
"precedence" (i.e. near real-time execution), a set of tasks that can be 
executed in a more "relaxed" fashion, and 3 boxes... you can do some 
serious math :-P

box1) python web2py.py -K myapp:fast,myapp:fast,myapp:fast
box2) python web2py.py -K myapp:relax
box3) python web2py.py -K myapp:fast,myapp:fast,myapp:fast:relax

now you have
fast) 3 workers from box1, 2.5 workers on box3 --> 5.5 workers
relax) 1 worker from box2, 0.5 workers on box3 --> 1.5 workers

Now, for argument's sake, if all of your tasks take the same time to be 
executed, and 100 "fast" and 100 "relax" tasks have been queued, in the 
long run you'd observe:
- ~ 55 "fast" tasks executed by "box1" (100 / 5.5 * 3)
- ~ 45 "fast" tasks executed by "box3" (100 / 5.5 * 2.5)
- ~ 67 "relax" tasks executed by "box2" (100 / 1.5 * 1)
- ~ 33 "relax" tasks executed by "box3" (100 / 1.5 * 0.5)

Also, you'd observe that for the last "relax" task to finish at the same 
time as the last "fast" task, you **can** queue up to ~367 "fast" tasks for 
every 100 "relax" tasks (because 5.5 workers are "assigned" to "fast" and 
"only" 1.5 are "assigned" to "relax" tasks)...


Hope the whole concept is clear: feel free to ask for more details if 
needed.



[web2py] Re: run web2py scheduler instances on 2 or more servers

2014-02-28 Thread Andrey K
Thanks Niphlod for your answer. It is already great if it is possible even 
theoretically!
Regarding implementation. Correct me please if I am wrong in the following 
understanding - am I right that I need to:
1) copy the whole web2py and myapp with the necessary libs to another server.
2) set up the db connection in the 2nd box's web2py myapp to the remote DB 
(first server), like:
db = DAL('postgres://postgres:a...@196.xx.xx.xx/test', lazy_tables=False, 
migrate=True, fake_migrate=False)
3) run the web2py scheduler on the second box as: 
web2py.py -K appname 

Is that all??? It sounds like magic...
How does the second server's job scheduler find out about the task to call?
If it works, I wonder how to control which server (not worker) gets the 
next task - like, what should I put in the model to force a job to be 
assigned to the box I want?

If my understanding is wrong, please give me more details or an example. 
Thanks a lot for your help, Niphlod!!!

Just in case, here is my code:
*db.py*
db = DAL('postgres://postgres:a...@196.xx.xx.xx/test',
         lazy_tables=False, migrate=True, fake_migrate=False)
from gluon.scheduler import Scheduler
scheduler = Scheduler(db)
*default.py*
from tool_task import test_js_task
task = scheduler.queue_task(test_js_task,
                            pvars=dict(inFileId=inFileId, outDirId=outDirId,
                                       cmdLine=cmdLine, me=me1))
*tool_task.py*
import subprocess
def test_js_task(inFileId, outDirId, cmdLine, me):
    subprocess.call(cmdLine.split(), stdout=subprocess.PIPE)

>



[web2py] Re: run web2py scheduler instances on 2 or more servers

2014-02-28 Thread Niphlod
the scheduler is designed to support such cases too. You can spin off 
whatever number of workers you want from wherever you'd like. The only 
thing that needs to be reachable is the database where the scheduler tables 
are.
You'll just have to copy your web2py folder (the whole framework, plus the 
app(s)) to another server. 
If that one is just playing the "additional scheduler" role, instead of 
starting the webserver, you'll start the worker with web2py.py -K appname 

On Friday, February 28, 2014 12:05:22 PM UTC+1, Andrey K wrote:
>
> Good day,
> I am working on an application that requires running background scripts 
> on user demand.
> At the moment I have implemented it with the native web2py job scheduler. 
> If demand gets high enough, my server will not be able to deliver all 
> tasks in reasonable time. Thus I am looking for a way to distribute tasks 
> to many servers or to a cluster. I wonder what the best practice for such 
> a task is. Is there a way to run web2py scheduler instances on different 
> servers while distributing tasks from a web2py app to these scheduler 
> instances? 
> Any ideas or directions to the solution would be very appreciated. Thank 
> you in advance.
> Kind regards, Andrey
>
>
