Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-06-05 Thread Joshua Harlow
Cool, well feel free to find the taskflow folks (and others) either in #openstack-oslo or #openstack-state-management if you have any questions. -Josh

Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-06-05 Thread pnkk
I am working on an NFV orchestrator based on MANO. Regards, Kanthi

Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-06-01 Thread Joshua Harlow
Interesting way to combine taskflow + celery. I didn't expect it to be used like this, but more power to you! Taskflow itself has some similar capabilities via http://docs.openstack.org/developer/taskflow/workers.html#design but anyway what you've done is pretty neat as well. I am assuming [...]
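
For reference, a rough sketch of the worker-based engine setup that the workers.html page above describes; the AMQP URL, exchange, topic, and the myapp.tasks module are placeholder names, not anything taken from this thread:

    # Worker process: pulls task requests off the AMQP bus and executes them.
    from taskflow.engines.worker_based import worker

    w = worker.Worker(
        url='amqp://guest:guest@localhost:5672//',   # placeholder broker URL
        exchange='taskflow-demo',                    # placeholder exchange
        topic='worker-1',                            # placeholder topic
        tasks=['myapp.tasks:LongRunningTask'],       # hypothetical task class
    )
    w.run()

    # Orchestrator process (separate from the worker): builds the flow and
    # lets the worker-based engine farm the tasks out over the same exchange.
    import taskflow.engines
    from taskflow.patterns import linear_flow
    from myapp.tasks import LongRunningTask          # hypothetical module

    flow = linear_flow.Flow('demo').add(LongRunningTask())
    engine = taskflow.engines.load(
        flow,
        engine='worker-based',
        url='amqp://guest:guest@localhost:5672//',
        exchange='taskflow-demo',
        topics=['worker-1'],
    )
    engine.run()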

Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-06-01 Thread pnkk
Thanks for the nice documentation. To my knowledge, celery is widely used for distributed task processing. This fits our requirement perfectly: we want to return an immediate response to the user from our API server and run the long-running task in the background. Celery also gives flexibility with the w[...]
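
A minimal sketch of that pattern with celery alone, assuming a rabbitmq broker; the app label, broker URL, and the run_job/create_job names are placeholders:

    from celery import Celery

    app = Celery('orchestrator', broker='amqp://guest:guest@localhost:5672//')

    @app.task
    def run_job(job_spec):
        # The long-running orchestration work runs here, in a celery worker
        # process, not in the API server.
        ...

    # In the API handler: enqueue the work and return to the caller right away.
    def create_job(request_body):
        async_result = run_job.delay(request_body)
        return {'job_id': async_result.id, 'status': 'accepted'}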

Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-05-27 Thread Joshua Harlow
Seems like you could just use http://docs.openstack.org/developer/taskflow/jobs.html (it appears that you may not be?); the job itself, when failed, would then be worked on by a different job consumer. Have you looked at those? It almost appears that you are using celery as a job distribution system [...]
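
A rough sketch of the jobboard consumer side that jobs.html describes, assuming a ZooKeeper-backed board; the host, path, board name, and the 'worker-1' identity are placeholders:

    import contextlib

    from taskflow.jobs import backends as job_backends

    board_conf = {
        'board': 'zookeeper',
        'hosts': 'localhost:2181',      # placeholder ZooKeeper address
        'path': '/taskflow/jobs',       # placeholder board path
    }

    with contextlib.closing(job_backends.fetch('my-board', board_conf)) as board:
        board.connect()
        for job in board.iterjobs(only_unclaimed=True, ensure_fresh=True):
            try:
                board.claim(job, 'worker-1')     # take ownership of the job
            except Exception:
                continue                         # another consumer claimed it first
            try:
                # ... run the flow referenced by job.details here ...
                board.consume(job, 'worker-1')   # success: job leaves the board
            except Exception:
                board.abandon(job, 'worker-1')   # failure: job is released for retry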

Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-05-27 Thread pnkk
To be specific, we hit this issue when the node running our service is rebooted. Our solution is designed so that each and every job is a celery task, and inside the celery task we create a taskflow flow. We enabled late_acks in celery (which uses rabbitmq as the message broker), so if our service/node goes [...]
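
As I read that design, a minimal sketch looks roughly like the following; the broker URL, app/task names, and the flow contents are placeholders:

    from celery import Celery

    import taskflow.engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    app = Celery('jobs', broker='amqp://guest:guest@localhost:5672//')

    class DoWork(task.Task):
        def execute(self, job_id):
            # the actual long-running work for the job goes here
            return job_id

    # acks_late makes the worker ack the message only after the task body has
    # finished, so if the node dies mid-run rabbitmq redelivers the job to
    # another worker instead of losing it.
    @app.task(acks_late=True)
    def run_job(job_id):
        flow = linear_flow.Flow('job-flow').add(DoWork())
        taskflow.engines.run(flow, store={'job_id': job_id})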

[openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-05-27 Thread pnkk
Hi, When the taskflow engine was executing a job, the execution failed due to an IO error (traceback pasted below). 2016-05-25 19:45:21.717 7119 ERROR taskflow.engines.action_engine.engine 127.0.1.1 [-] Engine execution has failed, something bad must of happened (last 10 machine transitions were [('SCHED [...]
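
For context, running the flow against a persistence backend records its state so a failed or interrupted job can be resumed; a sketch under the assumption of a SQLite backend, with placeholder names throughout:

    import contextlib

    import taskflow.engines
    from taskflow import task
    from taskflow.patterns import linear_flow
    from taskflow.persistence import backends as persistence_backends
    from taskflow.utils import persistence_utils

    class DoWork(task.Task):
        def execute(self):
            pass  # the job's real work goes here

    backend = persistence_backends.fetch({'connection': 'sqlite:///flows.db'})
    with contextlib.closing(backend.get_connection()) as conn:
        conn.upgrade()  # create the schema if it does not exist yet

    flow = linear_flow.Flow('job-flow').add(DoWork())
    book = persistence_utils.temporary_log_book(backend)
    flow_detail = persistence_utils.create_flow_detail(flow, book, backend)

    engine = taskflow.engines.load(flow, flow_detail=flow_detail,
                                   book=book, backend=backend)
    engine.run()
    # After a crash, loading the same flow with the same book/flow_detail and
    # backend lets the engine pick up from the recorded state rather than
    # starting over.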