Re: [Openstack-operators] Next Ops Midcycle NYC August 25-26

2016-06-21 Thread Edgar Magana
All, Great progress and so happy to see that we have defined a venue. Looking forward to seeing you all there. Edgar On 6/21/16, 8:36 AM, "Jonathan D. Proulx" wrote: Hi All, The Ops Meetups Team has selected[1] New York City as the location of the next mid-cycle meetup on August 25 and 26 20

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Matt Fischer
On Tue, Jun 21, 2016 at 7:04 PM, Sam Morrison wrote: > > On 22 Jun 2016, at 10:58 AM, Matt Fischer wrote: > > Have you set up token caching at the service level? Meaning a Memcache > cluster that Glance, Nova, etc. would talk to directly? That will really cut > down the traffic. > > Yeah we have th

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Sam Morrison
> On 22 Jun 2016, at 10:58 AM, Matt Fischer wrote: > > Have you set up token caching at the service level? Meaning a Memcache cluster > that Glance, Nova, etc. would talk to directly? That will really cut down the > traffic. > Yeah we have that although the default cache time is 10 seconds for r

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Matt Fischer
Have you set up token caching at the service level? Meaning a Memcache cluster that Glance, Nova, etc. would talk to directly? That will really cut down the traffic. On Jun 21, 2016 5:55 PM, "Sam Morrison" wrote: > > On 22 Jun 2016, at 9:42 AM, Matt Fischer wrote: > > On Tue, Jun 21, 2016 at 4:21 P
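As a rough illustration of what service-level token caching could look like, a minimal keystonemiddleware sketch for a service's config file (e.g. nova.conf or glance-api.conf); the server names and cache times below are placeholders, not taken from the thread:

    [keystone_authtoken]
    # Cache validated tokens in a shared memcached pool so repeated
    # validations do not go back to Keystone every time.
    memcached_servers = cache1:11211,cache2:11211,cache3:11211
    # Seconds a validated token may be served from the cache.
    token_cache_time = 300
    # Seconds the fetched revocation list is treated as fresh; older
    # releases defaulted this to a small value, which is likely the
    # 10-second figure mentioned above.
    revocation_cache_time = 10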

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Steve Martinelli
FWIW, we have refactored the revocation tree into a list; this should speed up the revocation process time significantly ( https://review.openstack.org/#/c/311652/). There is no way to disable revocations, since that would open up a security hole. On Tue, Jun 21, 2016 at 8:55 PM, Sam Morrison wro
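For reference, a minimal sketch of the Keystone-side knobs that soften the revocation cost without disabling it, assuming the Mitaka-era [revoke] options in keystone.conf; the values are illustrative:

    [revoke]
    # Cache revocation events so each token validation does not reload
    # them from the database.
    caching = true
    # Seconds to keep the cached revocation events.
    cache_time = 3600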

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Sam Morrison
> On 22 Jun 2016, at 9:42 AM, Matt Fischer wrote: > > On Tue, Jun 21, 2016 at 4:21 PM, Sam Morrison wrote: >> >> On 22 Jun 2016, at 1:45 AM, Matt Fischer wrote: >> >> I don't have a solution for you, but I will concur that adding r

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Matt Fischer
On Tue, Jun 21, 2016 at 4:21 PM, Sam Morrison wrote: > > On 22 Jun 2016, at 1:45 AM, Matt Fischer wrote: > > I don't have a solution for you, but I will concur that adding revocations > kills performance especially as that tree grows. I'm curious what you guys > are doing revocations on, anythin

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Sam Morrison
> > On 22 Jun 2016, at 1:45 AM, Matt Fischer wrote: > > I don't have a solution for you, but I will concur that adding revocations > kills performance especially as that tree grows. I'm curious what you guys > are doing revocations on, anything other than logging out of Horizon? > Is there a

Re: [Openstack-operators] [Magnum] Keystone error while creating a baymodel

2016-06-21 Thread Abhishek Chanda
Sorry for being vague. Here are some more details: I installed the Mitaka packages from the openSUSE repo. Here are the config files: magnum.conf https://gist.github.com/achanda/3fca8914e225e430e8e6a86f321cb77d api-paste.ini https://gist.github.com/achanda/4415d5554156234c9ef5da0300e1487e policy.json h

Re: [Openstack-operators] [Magnum] Keystone error while creating a baymodel

2016-06-21 Thread Hongbin Lu
Hi Abhishek, I have no idea and need further information. Could you provide the following? * How did you install Magnum (from source or package, manually or using any tool, etc.)? * Which version of Magnum did you install (master, Mitaka, etc.)? * Could you paste your Magnum conf

[Openstack-operators] [Magnum] Keystone error while creating a baymodel

2016-06-21 Thread Abhishek Chanda
Hi all, I am trying to run Magnum on 3 management nodes. I get the following error in api logs while trying to create a baymodel Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/wsmeext/pecan.py", line 84, in callfunction result = f(self, *args, **kwargs) File "",
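For context, a baymodel would typically be created with a client call along these lines (a sketch only; every name and ID below is a placeholder, not taken from this report):

    magnum baymodel-create --name k8s-model \
      --image-id fedora-atomic-latest \
      --keypair-id default \
      --external-network-id public \
      --flavor-id m1.small \
      --coe kubernetes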

[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG Meeting Wednesday 0700 UTC

2016-06-21 Thread Stig Telfer
Hello all - We have a Scientific WG IRC meeting on Wednesday 22 June at 0700 UTC on channel #openstack-meeting. The agenda is available here[1] and full IRC meeting details are here[2]. The headline items for discussion are: * Scientific OpenStack at Supercomputing 2016. OpenStack activities

[Openstack-operators] Help the community recruit app developers!

2016-06-21 Thread Kruithof Jr, Pieter
Operators, Apologies for any cross postings. As part of a long-term commitment to enhance ease-of-use, the OpenStack UX project, with support of the OpenStack Foundation and the Technical Committee, is now building a community of application and software developers interested in providing thei

Re: [Openstack-operators] [Glance] Default policy in policy.json

2016-06-21 Thread Andrew Laski
On Tue, Jun 21, 2016, at 12:27 PM, Adam Young wrote: > On 06/20/2016 10:09 PM, Michael Richardson wrote: > > On Fri, 17 Jun 2016 16:27:54 + > > > >> Also, which would be preferred, "role:admin" or "!"? Brian points out on [1] > >> that "!" would, in effect, notify the admins that a policy is n

Re: [Openstack-operators] [Glance] Default policy in policy.json

2016-06-21 Thread Adam Young
On 06/20/2016 10:09 PM, Michael Richardson wrote: On Fri, 17 Jun 2016 16:27:54 + Also, which would be preferred, "role:admin" or "!"? Brian points out on [1] that "!" would, in effect, notify the admins that a policy is not defined, as they would be unable to perform the action themselves. +
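For reference, the two candidate defaults look like this as fragments of a Glance policy.json (the rest of the file is unchanged):

    "default": "role:admin"

    "default": "!"

With "role:admin", an undefined rule silently falls back to admin-only; with "!", nobody can perform the action, so a missing rule is noticed immediately because even admins are refused.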

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Matt Fischer
I don't have a solution for you, but I will concur that adding revocations kills performance especially as that tree grows. I'm curious what you guys are doing revocations on, anything other than logging out of Horizon? On Tue, Jun 21, 2016 at 5:45 AM, Jose Castro Leon wrote: > Hi all, > > While

[Openstack-operators] Next Ops Midcycle NYC August 25-26

2016-06-21 Thread Jonathan D. Proulx
Hi All, The Ops Meetups Team has selected[1] New York City as the location of the next mid-cycle meetup on August 25 and 26 2016 at Civic Hall[2] Many thanks to Bloomberg for sponsoring the location. And thanks to BestBuy as well for their offer of the Seattle location. The choice was very clos

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Jonathan D. Proulx
On Tue, Jun 21, 2016 at 11:42:45AM +0200, Michael Stang wrote: :I think I did not ask my question correctly; it is not about the Cinder :backend, I meant the shared storage for the instances, which is shared by the :compute nodes. Or can Cinder also be used for this? Sorry if I ask stupid :question

[Openstack-operators] [neutron] Packet loss with DVR and IPv6

2016-06-21 Thread Tomas Vondra
Dear list, I've stumbled upon a weird condition in Neutron and couldn't find a bug filed for it. So even though it is happening with the Kilo release, it could still be relevant. The setup has 3 network nodes and 1 compute node currently hosting a virtual network (GRE based). DVR is enabled. I have ju
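For reference, a DVR deployment along these lines is normally configured roughly as follows (a sketch, not the poster's actual configuration):

    # neutron.conf on the controllers
    [DEFAULT]
    router_distributed = True

    # l3_agent.ini on the compute node
    [DEFAULT]
    agent_mode = dvr

    # l3_agent.ini on the network nodes
    [DEFAULT]
    agent_mode = dvr_snat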

[Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Jose Castro Leon
Hi all, While doing scale tests on our infrastructure, we observed some increase in the response times of our keystone servers. After further investigation we observed that we have a hot key in our cache configuration (this means that all keystone servers are checking this key quite frequently)
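For context, the Keystone-side cache the thread refers to is configured roughly like this in keystone.conf (a sketch with illustrative values, assuming the oslo.cache memcached backend):

    [cache]
    enabled = true
    # dogpile.cache.memcached, or oslo_cache.memcache_pool for pooled
    # connections.
    backend = dogpile.cache.memcached
    memcache_servers = cache1:11211,cache2:11211,cache3:11211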

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
Hello Saverio, many thanks, I will have a look :-) Regards, Michael > Saverio Proto wrote on 21 June 2016 at 12:41: > > Hello Michael, > > have a look at OpenStack Manila and CephFS > > Cheers > > Saverio > > > 2016-06-21 11:42 GMT+02:00 Michael Stang mailto:michael.s

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Saverio Proto
Hello Michael, have a look at OpenStack Manila and CephFS. Cheers Saverio 2016-06-21 11:42 GMT+02:00 Michael Stang: > I think I did not ask my question correctly; it is not about the Cinder > backend, I meant the shared storage for the instances, which is shared by > the compute nodes. Or can

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
I think I did not ask my question correctly; it is not about the Cinder backend, I meant the shared storage for the instances, which is shared by the compute nodes. Or can Cinder also be used for this? Sorry if I ask stupid questions, OpenStack is still new to me ;-) Regards, Michael > Matt J
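One common way to share the instance store itself (rather than a Cinder backend) is to mount the same filesystem at the Nova instances path on every compute node; a sketch, with a hypothetical NFS export:

    # /etc/fstab on each compute node (nfs-server and the export path
    # are placeholders)
    nfs-server:/export/nova  /var/lib/nova/instances  nfs  defaults  0 0

    # nova.conf -- this is already the default location
    [DEFAULT]
    instances_path = /var/lib/nova/instances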

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
Hello Sampath, no, I haven't read this one yet; thank you, I will go through it. Regards, Michael > Sam P wrote on 21 June 2016 at 09:55: > > > Hi, > > Hope you have already gone through this document... if not, FYI > http://docs.openstack.org/ops-guide/arch_storage.html > > As Saveri

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
Hello Saverio, thank you, I will have a look at these documents. Michael > Saverio Proto wrote on 21 June 2016 at 09:42: > > > Hello Michael, > > a very widely adopted solution is to use Ceph with rbd volumes. > > http://docs.openstack.org/liberty/config-reference/content/ceph-rados

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Matt Jarvis
If you look at the user survey ( https://www.openstack.org/user-survey/survey-2016-q1/landing ) you can see what the current landscape looks like in terms of deployments. Ceph is by far the most commonly used storage backend for Cinder. On 21 June 2016 at 08:27, Michael Stang wrote: > Hi, > > I

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Sam P
Hi, Hope you have already gone through this document... if not, FYI http://docs.openstack.org/ops-guide/arch_storage.html As Saverio said, Ceph is a widely adopted solution. For small clouds, we found that NFS is a much more affordable solution in terms of cost and complexity. --- Regards, Sampath

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Saverio Proto
Hello Michael, a very widely adopted solution is to use Ceph with rbd volumes. http://docs.openstack.org/liberty/config-reference/content/ceph-rados.html http://docs.ceph.com/docs/master/rbd/rbd-openstack/ you can find more options here under Volume drivers: http://docs.openstack.org/liberty/config-
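A minimal sketch of the matching cinder.conf backend section, assuming a pre-created 'volumes' pool and a 'cinder' Ceph user (the secret UUID is a placeholder):

    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337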

[Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
Hi, I wonder what the recommendation is for shared storage for the compute nodes? At the moment we are using an iSCSI device which is served to all compute nodes with multipath; the filesystem is OCFS2. But this makes it a little inflexible in my opinion, because you have to decide how many com