> […] to implement this, which would be consumable by keystonemiddleware
> as a library [0].
>
> [0] https://etherpad.openstack.org/p/oslo-ptg-queens
>
>
> On 10/11/2017 07:43 AM, pnkk wrote:
Hi,
We have our API server(based on pyramid) integrated with keystone for
AuthN/AuthZ.
So our service has a *.conf file which has [keystone_authtoken] section
that defines all the stuff needed for registering to keystone.
The WSGI pipeline will first run requests through the keystone auth token
filter, and only then hand them to the service.
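For concreteness, a [keystone_authtoken] section plus paste pipeline of the kind described above typically looks something like this (a hypothetical sketch: option names are from keystonemiddleware's auth_token middleware, but all values and the service name are placeholders):

```ini
# Hypothetical excerpt -- option names follow keystonemiddleware's
# auth_token middleware; values and app names are placeholders.
[keystone_authtoken]
www_authenticate_uri = http://keystone.example.com:5000/
auth_url = http://keystone.example.com:5000/
auth_type = password
project_name = service
project_domain_name = Default
username = myservice
user_domain_name = Default
password = secret

# Paste pipeline: authtoken runs before the application, so only
# requests carrying a valid token reach the service.
[pipeline:main]
pipeline = authtoken myservice-app

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
```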
> On Thu, Jun 9, 2016 at 11:20 AM, pnkk <pnkk2...@gmail.com> wrote:
Hi,
Can you please suggest a way to mount a cdrom iso to an instance at boot
time, along with the actual image?
That iso has the bootstrap configuration needed for the VM.
Regards,
Kanthi
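One approach (an illustrative sketch, not a confirmed answer from this thread) is to pass the ISO as an extra entry in the block_device_mapping_v2 list of the Nova server-create request, with device_type set to cdrom. The snippet below only builds the request body; the field names follow Nova's block-device-mapping schema, but the image IDs are placeholders, and which source_type/destination_type combinations are accepted varies by Nova release:

```python
# Sketch of a Nova "create server" request body that attaches an ISO
# image as a cdrom device alongside the primary boot image.  Field
# names follow Nova's block_device_mapping_v2 schema; the flavor and
# image identifiers here are placeholders, not real UUIDs.
def build_server_request(name, flavor_id, image_id, iso_image_id):
    return {
        "server": {
            "name": name,
            "flavorRef": flavor_id,
            "imageRef": image_id,
            "block_device_mapping_v2": [
                {
                    # the bootstrap ISO, presented to the guest as a cdrom
                    "boot_index": 1,
                    "source_type": "image",
                    "destination_type": "local",
                    "device_type": "cdrom",
                    "disk_bus": "ide",
                    "uuid": iso_image_id,
                    "delete_on_termination": True,
                },
            ],
        }
    }

body = build_server_request("vm1", "2", "img-uuid", "iso-uuid")
cdrom = body["server"]["block_device_mapping_v2"][0]
```

An alternative that is often sufficient for bootstrap configuration is Nova's config drive (`--config-drive true` together with `--user-data`), which is also exposed to the guest as a cdrom.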
> Taskflow itself has some similar capabilities via
> http://docs.openstack.org/developer/taskflow/workers.html#design but
> anyway what u've done is pretty neat as well.
>
> I am assuming this isn't an openstack project (due to usage of celery),
> any details on what's being worked on (am curious what attracted u to
> that, aka, am curious)?
>
> -Josh
>
> pnkk wrote:
>
>> To be specific, we hit this issue when the node running our service is
>> rebooted.
>> Our solution is designed in a way that each and every job is a celery
>> task, and inside the celery task we create […]
[…] the node is
rebooted? Who will retry this transaction?
Thanks,
Kanthi
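The question above ("who will retry this transaction?") is what Taskflow's jobboard concept addresses: a job stays posted until it is completed, and a claim is tied to its owner, so when the owning node dies mid-run another conductor can reclaim the job and resume it. The toy in-memory model below only illustrates that idea; it is not Taskflow's actual API:

```python
# Minimal illustration of the jobboard idea used for crash recovery:
# a job remains on the board until *completed*, and a claim has an
# owner, so when the owning node reboots (and its claim is released
# or expires) another worker can pick the job up again.
# This is a toy in-memory model, not Taskflow's real jobboard API.

class JobBoard:
    def __init__(self):
        self.jobs = {}    # name -> 'unclaimed' | 'claimed' | 'done'
        self.owners = {}  # name -> owning node, while claimed

    def post(self, name):
        self.jobs[name] = "unclaimed"

    def claim(self, name, owner):
        if self.jobs.get(name) != "unclaimed":
            raise RuntimeError("job already claimed or done")
        self.jobs[name] = "claimed"
        self.owners[name] = owner

    def abandon_all(self, owner):
        # simulates claims released when a node dies or reboots
        for name, who in list(self.owners.items()):
            if who == owner and self.jobs[name] == "claimed":
                self.jobs[name] = "unclaimed"
                del self.owners[name]

    def complete(self, name):
        self.jobs[name] = "done"

board = JobBoard()
board.post("job-1")
board.claim("job-1", owner="node-a")
board.abandon_all("node-a")           # node-a reboots mid-run
board.claim("job-1", owner="node-b")  # another node retries the job
board.complete("job-1")
```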
On Fri, May 27, 2016 at 5:39 PM, pnkk <pnkk2...@gmail.com> wrote:
Hi,
When taskflow engine is executing a job, the execution failed due to IO
error(traceback pasted below).
2016-05-25 19:45:21.717 7119 ERROR taskflow.engines.action_engine.engine
127.0.1.1 [-] Engine execution has failed, something bad must of happened
(last 10 machine transitions were […]
Joshua,
We are performing a few scaling tests for our solution and are seeing
errors like the following:
Failed saving logbook 'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b'\n
InternalError: (pymysql.err.InternalError) (1205, u'Lock wait timeout
exceeded; try restarting transaction') [SQL: u'UPDATE logbooks […]
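Error 1205 is one MySQL explicitly marks as retryable ("try restarting transaction"), so a common mitigation is to retry the failed save a few times with a short backoff. A minimal sketch of that pattern, using a stand-in exception rather than the real pymysql.err.InternalError:

```python
import time

# Retry a transactional callable when it hits a lock wait timeout.
# LockWaitTimeout is a stand-in: real code would catch
# pymysql.err.InternalError and check for error code 1205.

class LockWaitTimeout(Exception):
    code = 1205

def run_with_retry(txn, attempts=3, delay=0.0):
    for i in range(attempts):
        try:
            return txn()
        except LockWaitTimeout:
            if i == attempts - 1:
                raise          # out of retries, propagate the error
            time.sleep(delay)  # back off before restarting the txn

calls = {"n": 0}
def flaky_update():
    # fails twice with a lock wait timeout, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise LockWaitTimeout()
    return "saved"

result = run_with_retry(flaky_update)
```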
> […]; can u open a bug @
> bugs.launchpad.net/taskflow for that and we can try to add said lock
> (that should hopefully resolve what u are seeing, although if it doesn't
> then the bug lies somewhere else).
>
> Thanks much!
>
> -Josh
>
>
> On 03/19/2016 08:45 AM, pnkk wrote:
Hi Joshua,
Thanks for all your inputs.
We are using this feature successfully, but on rare occasions I see an
issue related to concurrency.
To give you some background: we use eventlet, and every job runs in a
separate eventlet (green) thread.
In the job execution part, we use taskflow functionality and persist all
the […]