Re: [Openstack-operators] Ansible-driven management of Dell server BIOS and RAID config

2017-01-10 Thread John Dewey
On a similar note, if you’re looking to test Ansible roles, have a look at
molecule.
https://molecule.readthedocs.io
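For example, molecule can drive testinfra checks against a converged role; a minimal sketch, with the role and package names invented here:

def test_ntp_package_installed(host):
    # 'host' is testinfra's pytest fixture
    assert host.package("ntp").is_installed

def test_ntp_service_running(host):
    ntp = host.service("ntp")
    assert ntp.is_running
    assert ntp.is_enabled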

On January 10, 2017 at 7:42:02 AM, Stig Telfer (stig.openst...@telfer.org)
wrote:

Hi All -

We’ve just published the sources and a detailed writeup for some new tools
for Ansible-driven management of Dell iDRAC BIOS and RAID configuration:

https://www.stackhpc.com/ansible-drac.html

The code’s up on Github and Ansible Galaxy.

It should fit neatly into any infrastructure using OpenStack Ironic for
infrastructure management (and Dell server hardware).

Share and enjoy,
Stig
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging Virtualenvs

2016-06-23 Thread John Dewey
Not only is it interesting, it’s awesome :)

John


On June 23, 2016 at 5:53:59 PM, Silence Dogood (m...@nycresistor.com) wrote:

I'll check out giftwrap.  never heard of it.  But interesting.

On Thu, Jun 23, 2016 at 7:50 PM, Xav Paice  wrote:

> Can I suggest that using the tool https://github.com/openstack/giftwrap
> might make life a bunch easier?
>
> I went down a similar path with building Debs in a venv using
> dh_virtualenv, with some good success when I sorted the shebang. I later
> found that the debs produced by Giftwrap are not only very easy to build
> and test, but also take a bunch less effort to maintain and create new
> packages for new things.  To run the resulting code, I just symlink the
> ${venv}/bin/$binary to /usr/local/bin and run the thing using very
> similar init scripts to the ones supplied by the distro packages.  Works
> like a charm, because the shebang in the binary points at the venv, not
> the system python.
>
> I do, however, package the init scripts, sample configs, etc in a
> separate .deb, which is really very quick and easy and allows me to
> control the bits I want to, and let Giftwrap take care of the OpenStack
> code repos.
>
>
> On Thu, 2016-06-23 at 23:40 +, Matt Joyce wrote:
> > I want the script to dynamically instantiate the venv, i.e. call activate on
> > it at execution time and deactivate when done.
> >
> >
> >
> > On June 23, 2016 5:12:07 PM EDT, Doug Hellmann 
> > wrote:
> > Excerpts from Silence Dogood's message of 2016-06-23 15:45:34
> -0400:
> >  I know from conversations that a few folks package
> their python apps as
> >  distributable virtualenvs.   spotify created
> dh-virtualenv for this.  you
> >  can do it pretty simply by hand.
> >
> >  I built a toolchain for building rpms as distributable
> virtualenvs and that
> >  works really well.
> >
> >  What I'd like to do is make it so that every app that's
> built as a
> >  virtualenv gets setup to automatically execute at call
> time in their
> >  virtualenv.
> >
> >  I see two options:
> >
> >  1)  Dynamically generate a wrapper script during build
> and put it in the
> >  RPM.  Call the wrapper.
> >
> >  2)  Created a dependency global module ( as an rpm )
> set it as a
> >  dependency.  And basically it'd be an autoexecuting
> import that
> >
> > instantiates the virtualenv.  it would probably know all
> it needs to
> >  because I am building all my packages to an internal
> standard.  Then when
> >  building the APP rpm all I need to do is inject an
> import into the import
> >  chain if it's being built as a virtualenv.  Then I have
> what are
> >  effectively statically compiled python apps.
> >
> >  I like 2.  But 1 isn't very bad.  Both are a little
> hokey.
> >
> >  Was curious if folks might have a preference, or a
> better idea.
> >
> >  Thanks.
> >
> >  Matt
> >
> > I'm not sure what you mean by a "wrapper script".  If you run the
> > Python console script from within the virtualenv you've packaged,
> > you shouldn't need to do anything to "activate" that environment
> > separately because it should have the correct shebang line.
> >
> > Are you seeing different behavior?
> >
> > Doug
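To make Doug's point concrete: a console script generated by setuptools inside a virtualenv looks roughly like the sketch below (the project and entry point names are invented), so running it directly already uses the venv's interpreter with no explicit activate step:

#!/opt/myapp/venv/bin/python
# simplified sketch of a setuptools-generated console script; the shebang
# pins the venv interpreter, so no activation is needed
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(load_entry_point('myapp', 'console_scripts', 'myapp-api')())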
> >
> >
> > __
> >
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] oslo.log syslog configuration

2016-04-06 Thread John Dewey
Does anyone have a good reference on integrating oslo.log with rsyslog? 
Currently, we have configured the various services to log to syslog, and rsyslog 
is able to do the appropriate thing.  However, the services still open their 
respective logfiles and emit stack traces there instead of to syslog.  I'm curious 
whether people are maintaining their own logging.confs or if oslo.log makes that 
unnecessary now.
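For what it's worth, the mechanism underneath is just the standard library's syslog handler; a minimal sketch (facility and format picked arbitrarily, and this is not oslo.log's actual config format):

import logging
import logging.handlers

log = logging.getLogger("nova")
handler = logging.handlers.SysLogHandler(
    address="/dev/log",
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0)
handler.setFormatter(logging.Formatter("nova: %(levelname)s %(name)s: %(message)s"))
log.addHandler(handler)
log.setLevel(logging.INFO)

try:
    raise RuntimeError("boom")
except RuntimeError:
    # exception() attaches the traceback, so it reaches syslog too
    log.exception("something went wrong")

If memory serves, oslo.log normally gets there via use_syslog / syslog_log_facility, or via a full logging config passed with log_config_append; the point is only that the handler configuration, not the service, decides where tracebacks land.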

John
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] YAML config

2016-02-01 Thread John Dewey
IMO config files should generally be managed as a template. If the files were 
YAML, I would still be managing them as a template. I went down the path of 
managing ini files with Ansible’s ini_file module, and it’s just not worth it.

John
On February 1, 2016 at 8:59:10 AM, Alexis Lee (lx...@hpe.com) wrote:

Hi operators,  

I have a spec up to allow config files to be specified as YAML instead  
of INI: https://review.openstack.org/273468  

The main benefit from my perspective is being able to use YAML tooling  
to transform config (e.g. as part of an Ansible run). Crudini doesn't work  
well with MultiStrOpts.  
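A toy sketch of the sort of transform this enables, assuming a hypothetical nova.yaml and an option name picked only for illustration:

import yaml

# hypothetical nova.yaml with section -> option -> value layout
with open("nova.yaml") as f:
    conf = yaml.safe_load(f) or {}

# a MultiStrOpt-style value is just a native list in YAML
conf.setdefault("DEFAULT", {})["enabled_apis"] = ["osapi_compute", "metadata"]

with open("nova.yaml", "w") as f:
    yaml.safe_dump(conf, f, default_flow_style=False)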


There's also a patch to allow logging config to be specified as YAML:  
https://review.openstack.org/259000  

The main benefit here is not having to declare your handlers, loggers  
and formatters before defining them. This has caught my team a couple of  
times when making logging changes.  


Are these features you are interested in or should I let them die?  


Alexis (lxsli)  
--  
Nova developer, Hewlett-Packard Limited.  
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.  
Registered Number: 00690597 England  
VAT number: GB 314 1496 79  

___  
OpenStack-operators mailing list  
OpenStack-operators@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators  


Re: [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-11-30 Thread John Dewey
100% agree.

We should look at uwsgi as the reference architecture.  Nginx/Apache/etc should 
be interchangeable, and up to the operator which they choose to use.  Hell, 
with TCP load balancing now in open source Nginx, I could get rid of Apache and 
HAProxy by utilizing uwsgi.
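The reason the web server in front can be interchangeable is that uWSGI, mod_wsgi, gunicorn and friends all just mount a WSGI application callable; a trivial sketch of that contract (keystone's real callable is of course built from its paste pipeline):

def application(environ, start_response):
    # the only contract a WSGI server cares about
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"keystone would be mounted here\n"]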

John
On November 30, 2015 at 1:05:26 PM, Paul Czarkowski 
(pczarkowski+openstack...@bluebox.net) wrote:

I don't have a problem with eventlet itself going away, but I do feel that 
keystone should pick a python based web server capable of running WSGI apps ( 
such as uWSGI ) for the reference implementation rather than Apache which can 
be declared appropriately in the requirements.txt of the project.   I feel it 
is important to allow the operator to make choices based on their 
organization's skill sets (e.g. Apache vs. Nginx) to help keep complexity low.

I understand there are some newer features that rely on Apache ( federation, 
etc )  but we should allow the need for those features inform the operators 
choice of web server rather than force it for everybody.

Having a default implementation using uWSGI is also more inline with the 12 
factor way of writing applications and will run a lot more comfortably in 
[application] containers than apache would which is probably an important 
consideration given how many people are focused on being able to run openstack 
projects inside containers.

On Mon, Nov 30, 2015 at 2:36 PM, Jesse Keating  wrote:
I have an objection to eventlet going away. We have problems with running 
Apache and mod_wsgi with multiple python virtual environments. In some of our 
stacks we're running both Horizon and Keystone. Each get their own virtual 
environment. Apache mod_wsgi doesn't really work that way, so we'd have to do 
some ugly hacks to expose the python environments of both to Apache at the same 
time.

I believe we spoke about this at Summit. Have you had time to look into this 
scenario and have suggestions?


- jlk

On Mon, Nov 30, 2015 at 10:26 AM, Steve Martinelli  wrote:
This post is being sent again to the operators mailing list, and i apologize if 
it's duplicated for some folks. The original thread is here: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080816.html

In the Mitaka release, the keystone team will be removing functionality that 
was marked for deprecation in Kilo, and marking certain functions as deprecated 
in Mitaka (that may be removed in at least 2 cycles).

removing deprecated functionality
=

This is not a full list, but these are by and large the most contentious topics.

* Eventlet support: This was marked as deprecated back in Kilo and is currently 
scheduled to be removed in Mitaka in favor of running keystone in a WSGI 
server. This is currently how we test keystone in the gate, and based on the 
feedback we received at the summit, a lot of folks have moved to running 
keystone under Apache since we’ve announced this change. OpenStack's CI is 
configured to mainly test using this deployment model. See [0] for when we 
started to issue warnings.

* Using LDAP to store assignment data: Like eventlet support, this feature was 
also deprecated in Kilo and scheduled to be removed in Mitaka. To store 
assignment data (role assignments) we suggest using an SQL based backend rather 
than LDAP. See [1] for when we started to issue warnings.

* Using LDAP to store project and domain data: The same as above, see [2] for 
when we started to issue warnings.

* for a complete list: 
https://blueprints.launchpad.net/keystone/+spec/removed-as-of-mitaka

functions deprecated as of mitaka
=

The following will adhere to the TC’s new standard on deprecating functionality 
[3].

* LDAP write support for identity: We suggest simply not writing to LDAP for 
users and groups, this effectively makes create, delete and update of LDAP 
users and groups a no-op. It will be removed in the O release.

* PKI tokens: We suggest using UUID or fernet tokens instead. The PKI token 
format has had issues with security and causes problems with both horizon and 
swift when the token contains an excessively large service catalog. It will be 
removed in the O release.

* v2.0 of our API: Lastly, the keystone team recommends using v3 of our 
Identity API. We have had the intention of deprecating v2.0 for a while (since 
Juno actually), and have finally decided to formally deprecate v2.0. 
OpenStack’s CI runs successful v3 only jobs, there is complete feature parity 
with v2.0, and we feel the CLI exposed via openstackclient is mature enough to 
say with certainty that we can deprecate v2.0. It will be around for at least 
FOUR releases, with the authentication routes (POST /auth/tokens) potentially 
sticking around for longer.

* for a complete list: 
https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-mitaka


If you have ANY concern about the following, please speak up now and let us 

Re: [Openstack-operators] Neutron DHCP failover bug

2015-09-29 Thread John Dewey
Why not run neutron dhcp agents on both nodes? 

On September 29, 2015 at 7:04:57 PM, Sam Morrison (sorri...@gmail.com) wrote:

Hi All,

We are running Kilo and have come across this bug 
https://bugs.launchpad.net/neutron/+bug/1410067

Pretty easy to replicate: have 2 network nodes, shut down 1 of them, and DHCP 
etc. moves over to the new host fine, except that doing a port-show on the DHCP port 
shows it still on the old host and in state BUILD.
Everything works but the DB is in the wrong state.

Just wondering if anyone else sees this and if so if they know the associated 
fix in Liberty that addresses this.

Cheers,
Sam

___  
OpenStack-operators mailing list  
OpenStack-operators@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators  


Re: [Openstack-operators] OSAD for RHEL

2015-07-08 Thread John Dewey


On Wednesday, July 8, 2015 at 9:33 PM, Adam Young wrote:

 On 07/07/2015 05:55 PM, Kris G. Lindgren wrote:
  +1 on RHEL support. I have some interest in moving away from packages and
  am interested in the OSAD tooling as well.
  
 
 
 I would not recommend an approach targeting RHEL that does not use 
 packages.
 
 OSAD support for RHEL using packages would be an outstanding tool.
 
 Which way are you planning on taking it?
IMO - registering the systems with subscription manager or pointing to in house 
yum repos should be included as part of system bootstrapping, and not a part of 
OSAD.  OSAD should simply install the specific packages for the alternate 
distro.

Might also be a good time to abstract the system packaging module into a higher 
level one which handles `yum` or `apt` behind the scenes. We can then manage 
the list of packages per distro[1].  Throwing this out as an idea rather than 
copy-pasting every apt section into a yum one.

[1] https://gist.github.com/retr0h/dd4cbd27829a3095f37a
  
  
  Kris Lindgren
  Senior Linux Systems Engineer
  GoDaddy, LLC.
  
  
  
  
  
  
  
  On 7/7/15, 3:38 PM, Abel Lopez alopg...@gmail.com 
  (mailto:alopg...@gmail.com) wrote:
  
   Hey everyone,
   I've started looking at osad, and I like much of the direction it takes.
   I'm pretty interested in developing it to run on RHEL, I just wanted to
   check if anyone would be -2 opposed to that before I spend cycles on it.
   
  
  
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org 
  (mailto:OpenStack-operators@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
  
 
 
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org 
 (mailto:OpenStack-operators@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OSAD for RHEL

2015-07-08 Thread John Dewey
This would not be acceptable for those running OSP.  


On Wednesday, July 8, 2015 at 10:12 PM, Kris G. Lindgren wrote:

 I should be more clear. My current thought is to have a venv packaged
 inside an rpm - so the rpm includes the needed init scripts, ensures the
 required system level binaries are installed, adds the users, etc.
 But would be a single deployable autonomous unit. Also, have a versioning
 schema to roll forward and back between venvs for quick update/rollback.
 We are already working on doing something similar to this to run kilo on
 cent6 boxen, until we can finish revving the remaining parts of the fleet
 to cent7.
  
 My desire is to move away from using system level python and openstack
 packages, so that I can possibly run mismatched versions if I need to. We
 had a need to run kilo ceilometer and juno neutron/nova on a single
 server. The conflicting python requirements between those made that task
 impossible. In general I want to get away from treating Openstack as a
 single system that everything needs to be upgraded in lock step (packages
 force you into this). I want to move to being able to upgrade say
 oslo.messaging to a newer version on just say nova on my control plane
 servers. Or upgrade nova to kilo while keeping the rest of the system
 (neutron) on juno. Unless I run each service in a vm/container or on a
 physical piece of hardware that is pretty much impossible to do with
 packages - outside of placing everything inside venv's.
  
 However, it is my understanding that OSAD already builds its own
 python-wheels and runs those inside lxc containers. So I don't really
 follow what good throwing those into an rpm would really do?
 
  
 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.
  
  
 On 7/8/15, 10:33 PM, Adam Young ayo...@redhat.com 
 (mailto:ayo...@redhat.com) wrote:
  
  On 07/07/2015 05:55 PM, Kris G. Lindgren wrote:
   +1 on RHEL support. I have some interest in moving away from packages
   and
   am interested in the OSAD tooling as well.

   
   
  I would not recommend an approach targeting RHEL that does not use
  packages.
   
  OSAD support for RHEL using packages would be an outstanding tool.
   
  Which way are you planning on taking it?
   
   

   Kris Lindgren
   Senior Linux Systems Engineer
   GoDaddy, LLC.







   On 7/7/15, 3:38 PM, Abel Lopez alopg...@gmail.com 
   (mailto:alopg...@gmail.com) wrote:

Hey everyone,
I've started looking at osad, and I like much of the direction it
takes.
I'm pretty interested in developing it to run on RHEL, I just wanted to
check if anyone would be -2 opposed to that before I spend cycles on
it.
 


   ___
   OpenStack-operators mailing list
   OpenStack-operators@lists.openstack.org 
   (mailto:OpenStack-operators@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

   
   
   
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org 
  (mailto:OpenStack-operators@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
   
  
  
  
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org 
 (mailto:OpenStack-operators@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
  
  


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] What are people using for configuration management? Puppet? Chef? Other?

2015-03-26 Thread John Dewey
Although that does bring up a topic I have thought about for quite some time. 

I personally would love to switch out all the shell script craziness in 
devstack with ansible.  Devstack becomes the reference architecture for 
deploying openstack.  Deploying to a workstation, multi-node, or production 
is a matter of swapping out the site.yml.


On Thursday, March 26, 2015 at 12:51 PM, matt wrote:

 referring to Tim Bell's link =P
 
 On Thu, Mar 26, 2015 at 3:45 PM, Matthew Kaufman mkfmn...@gmail.com 
 (mailto:mkfmn...@gmail.com) wrote:
  hey Matt, nice seeing you last night - who said anything about devstack 
  here?
  
  On Thu, Mar 26, 2015 at 3:30 PM, matt m...@nycresistor.com 
  (mailto:m...@nycresistor.com) wrote:
   Not sure I'd call devstack configuration management.
   
   On Thu, Mar 26, 2015 at 3:13 PM, John Dewey j...@dewey.ws 
   (mailto:j...@dewey.ws) wrote:
We are also in the process of looking at stackstorm[1] as a means to 
 operate openstack.  The ability to limit playbook execution based on a 
 user's role, perform auditing, and do automatic remediation is intriguing. 

[1] http://stackstorm.com

On Thursday, March 26, 2015 at 11:42 AM, John Dewey wrote:

 We use ansible to orchestrate puppet.  Why?  We already were using 
 puppet, and the eventual convergence is a pain.  Also, we are able to 
 piecemeal out puppet for ansible as we move forward. 
 
 John 
 
 On Thursday, March 26, 2015 at 10:22 AM, Jesse Keating wrote:
 
  We are using Ansible. We need the orchestration capability that 
  Ansible provides, particularly for upgrades where pieces have to 
  move in a very coordinated order.
  
  https://github.com/blueboxgroup/ursula
  
  - jlk 
  On Thu, Mar 26, 2015 at 9:40 AM, Forrest Flagg 
  fostro.fl...@gmail.com (mailto:fostro.fl...@gmail.com) wrote:
   Hi all,
   
   Getting ready to install a Juno or Kilo cloud and was wondering 
   what people are using for configuration management to deploy 
   openstack.  Are you using Puppet, Chef, something else?  What was 
   the decision process for making your choice?
   
   Thanks,
   
   Forrest 
   ___
   OpenStack-operators mailing list
   OpenStack-operators@lists.openstack.org 
   (mailto:OpenStack-operators@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
   
  
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org 
  (mailto:OpenStack-operators@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
  
  
  
 
 


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org 
(mailto:OpenStack-operators@lists.openstack.org)
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

   
   
   ___
   OpenStack-operators mailing list
   OpenStack-operators@lists.openstack.org 
   (mailto:OpenStack-operators@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
   
  
 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OpenStack services and ca certificate config entries

2015-03-25 Thread John Dewey
I faced this very issue in the past.  We solved the problem by adding the CA to 
the system bundle (as you stated).  We also ran into problems where python 
would still not validate the CA.  However, this turned out to be a permissions 
error with cacerts.txt[1] when httplib2 was installed through pip.  Nowadays 
openstack uses requests which I don’t believe utilizes httplib2.

[1] https://code.google.com/p/httplib2/issues/detail?id=292&q=certificate  
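With requests the relevant knob is just the verify argument; a minimal sketch with a made-up endpoint and CA path, which is essentially what the cafile options listed below end up feeding:

import requests

# endpoint and CA bundle path are invented for the example
resp = requests.get("https://keystone.example.com:5000/v3",
                    verify="/etc/ssl/certs/internal-ca.pem")
resp.raise_for_status()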


On Wednesday, March 25, 2015 at 11:13 AM, Jesse Keating wrote:

 We're facing a bit of a frustration. In some of our environments, we're using 
 a self-signed certificate for our ssl termination (haproxy). We have our 
 various services pointing at the haproxy for service cross-talk, such as nova 
 to neutron or nova to glance or nova to cinder or neutron to nova or cinder 
 to glance or all the things to keystone. When using a self-signed 
 certificate, these services have trouble validating the cert when they 
 attempt to talk to each other. This problem can be solved in a few ways, such 
 as adding the CA to the system bundle (of your platform has such a thing), 
 adding the CA to the bundle python requests uses (because hilariously it 
 doesn't always use the system bundle), or the more direct way of telling 
 nova, neutron, et al the direct path to the CA file.
  
 This last choice is the way we went forward, more explicit, and didn't depend 
 on knowledge if python-requests was using its own bundle or the operating 
 system's bundle. To configure this there are a few places that need to be 
 touched.
  
 nova.conf:
 [keystone_authtoken]
 cafile = path
  
 [neutron]
 ca_certificates_file = path
  
 [cinder]
 ca_certificates_file = path
  
 (nothing for glance hilariously)
  
  
 neutron.conf
 [DEFAULT]
  
 nova_ca_certificates_file = path
  
 [keystone_authtoken]
 cafile = path
  
 glance-api.conf and glance-registry.conf
 [keystone_authtoken]
 cafile = path
  
 cinder.conf
 [DEFAULT]
 glance_ca_certificates_file = path
  
 [keystone_authtoken]
 cafile = path
  
 heat.conf
 [clients]
 ca_file = path
  
 [clients_whatever]
 ca_file = path
  
  
 As you can see, there are a lot of places where one would have to define the 
 path, and the frustrating part is that the config name and section varies 
 across the services. Does anybody think this is a good thing? Can anybody 
 think of a good way forward to come to some sort of agreement on config 
 names? It does seem like heat is a winner here, it has a default that can be 
 defined for all clients, and then each client could potentially point to a 
 different path, but every config entry is named the same. Can we do that 
 across all the other services?
  
 I chatted a bit on twitter last night with some nova folks, they suggested 
 starting a thread here on ops list and potentially turning it into a hallway 
 session or real session at the Vancouver design summit (which operators are 
 officially part of).
  
 - jlk  
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org 
 (mailto:OpenStack-operators@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
  
  


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] FYI: Rabbit Heartbeat Patch Landed

2015-03-20 Thread John Dewey
Why would anyone want to run rabbit behind haproxy?  I get that people did it 
before the 'rabbit_servers' flag existed.  Allowing the client to detect, handle, 
and retry is a far better alternative than load balancer health check intervals.  
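If memory of the kombu API serves, "a list of hosts" at the client layer looks roughly like the sketch below (broker URLs invented), with failover handled by the client itself rather than by a TCP balancer in the middle:

from kombu import Connection

# the client tries rabbit1 and falls back to rabbit2 on failure
conn = Connection(["amqp://guest:guest@rabbit1:5672//",
                   "amqp://guest:guest@rabbit2:5672//"],
                  failover_strategy="round-robin")
conn.ensure_connection(max_retries=3)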

On Thursday, March 19, 2015 at 9:42 AM, Kris G. Lindgren wrote:

 I have been working with dism and sileht on testing this patch in one of
 our pre-prod environments. There are still issues with rabbitmq behind
 haproxy that we are working through. However, in testing if you are using
 a list of hosts you should see significantly better catching/fixing of
 faults.
  
 If you are using cells, don't forget to also apply:
 https://review.openstack.org/#/c/152667/
 
  
 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.
  
  
  
 On 3/19/15, 10:22 AM, Mark Voelker mvoel...@vmware.com 
 (mailto:mvoel...@vmware.com) wrote:
  
  At the Operator's midcycle meetup in Philadelphia recently there was a
  lot of operator interest[1] in the idea behind this patch:
   
  https://review.openstack.org/#/c/146047/
   
  Operators may want to take note that it merged yesterday. Happy testing!
   
   
  [1] See bottom of https://etherpad.openstack.org/p/PHL-ops-rabbit-queue
   
  At Your Service,
   
  Mark T. Voelker
  OpenStack Architect
   
   
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org 
  (mailto:OpenStack-operators@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
   
  
  
  
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org 
 (mailto:OpenStack-operators@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
  
  
  


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [all] SQL Schema Downgrades: A Good Idea?

2015-01-29 Thread John Dewey
On Thursday, January 29, 2015 at 11:40 AM, Fischer, Matt wrote:
  
 From: Morgan Fainberg morgan.fainb...@gmail.com 
 (mailto:morgan.fainb...@gmail.com)
 Date: Thursday, January 29, 2015 at 12:26 PM
 To: openstack-operators openstack-operators@lists.openstack.org 
 (mailto:openstack-operators@lists.openstack.org)
 Subject: [Openstack-operators] [all] SQL Schema Downgrades: A Good Idea?
  
 From an operator perspective I wanted to get input on the SQL Schema 
 Downgrades.  
 
 Today most projects (all?) provide a way to downgrade the SQL Schemas after 
 you’ve upgraded. Example would be moving from Juno to Kilo and then back to 
 Juno. There are some odd concepts when handling a SQL migration downgrade 
 specifically around the state of the data. A downgrade, in many cases, 
 causes permanent and irrevocable data loss. When phrased like that (and 
 dusting off my deployer/operator hat) I would be hesitant to run a 
 downgrade in any production, stagings, or even QA environment.
 
 In light of what a downgrade actually means I would like to get the views of 
 the operators on SQL Migration Downgrades:
 
 1) Would you actually perform a programatic downgrade via the cli tools or 
 would you just do a restore-to-last-known-good-before-upgrade (e.g. from a 
 DB dump)?
 2) Would you trust the data after a programatic downgrade or would the data 
 only really be trustworthy if from a restore? Specifically the new code 
 *could* be relying on new data structures and a downgrade could result in 
 weird states of services.
 
 I’m looking at the expectation that a downgrade is possible. Each time I 
 look at the downgrades I feel that it doesn’t make sense to ever really 
 perform a downgrade outside of a development environment. The potential for 
 permanent loss of data / inconsistent data leads me to believe the downgrade 
 is a flawed design. Input from the operators on real-world cases would be 
 great to have.
 
 This is an operator specific set of questions related to a post I made to 
 the OpenStack development mailing list: 
 http://lists.openstack.org/pipermail/openstack-dev/2015-January/055586.html
 
 Cheers,
 Morgan  
  
  
  
 When moving major releases, we  backup our databases and shutdown most of the 
 cluster so that portion of the cluster is still “good”. We then upgrade one 
 node completely, validate it, then join the rest of the nodes back in. If it 
 goes horribly wrong at that point we’d restore from backup. The main reasons 
 for me are twofold. First, rightly or wrongly, I assume that downgrades are 
 not well tested and rarely used by anyone. We certainly never test it during 
 our upgrade planning. Secondly, until I’m sure that the upgrade worked well, 
 I’d rather just go back to a clean state than rely on a downgrade just 
 because I know that state is 100% functional without further validation. 
 Especially if I’m in an outage window I don’t have time to mess around with a 
 downgrade and hope it works. I’ll kill the bad node and rebuild it, either 
 just restarting the old DB or restoring if needed. The tl;dr here is why take 
 the chance that the downgrade works when you have saner alternatives.

Yeah, this is the general approach we take as well, as do a few other shops I 
know.  
  
  
 (please excuse the cruft below)  
  
  
  
  
  
 This E-mail and any of its attachments may contain Time Warner Cable 
 proprietary information, which is privileged, confidential, or subject to 
 copyright belonging to Time Warner Cable. This E-mail is intended solely for 
 the use of the individual or entity to which it is addressed. If you are not 
 the intended recipient of this E-mail, you are hereby notified that any 
 dissemination, distribution, copying, or action taken in relation to the 
 contents of and attachments to this E-mail is strictly prohibited and may be 
 unlawful. If you have received this E-mail in error, please notify the sender 
 immediately and permanently delete the original and any copy of this E-mail 
 and any printout.
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org 
 (mailto:OpenStack-operators@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
  
  


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Openstack-operators] [openstack-operators] [Keystone] flush expired tokens and moves deleted instance

2015-01-27 Thread John Dewey
This is one reason to use the memcached backend. Why replicate these tokens in 
the first place? 


On Tuesday, January 27, 2015 at 10:21 AM, Clint Byrum wrote:

 
 Excerpts from Tim Bell's message of 2015-01-25 22:10:10 -0800:
  This is often mentioned as one of those items which catches every OpenStack 
  cloud operator at some time. It's not clear to me that there could not be a 
  scheduled job built into the system with a default frequency (configurable, 
  ideally).
  
  If we are all configuring this as a cron job, is there a reason that it 
  could not be built into the code ?
 It has come up before.
 
 The main reason not to build it into the code as it's even better to
 just _never store tokens_:
 
 https://blueprints.launchpad.net/keystone/+spec/non-persistent-tokens
 http://git.openstack.org/cgit/openstack/keystone-specs/plain/specs/juno/non-persistent-tokens.rst
 
 or just use certs:
 
 https://blueprints.launchpad.net/keystone/+spec/keystone-tokenless-authz-with-x509-ssl-client-cert
 
 The general thought is that putting lots of things in the database that
 don't need to be stored anywhere is a bad idea. The need for the cron
 job is just a symptom of that bug.
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Specifying multiple tenants for aggregate_multitenancy_isolation_filter

2015-01-27 Thread John Dewey


On Tuesday, January 27, 2015 at 2:03 PM, Jesse Keating wrote:

 On 1/27/15 1:54 PM, Sam Morrison wrote:
  Hi operators,
   
  I have a review up to fix this filter to allow multiple tenants, there
  are 2 proposed ways in which this can be specified.
   
  1. Using a comma, e.g. tenantid1,tenantid2
  2. Using a JSON list, e.g. [“tenantid1”, “tenantid2”]
   
  Which one do you think is better?
   
  https://review.openstack.org/148807
  
 Is this intended to be written by a human using the command line tools,  
 or to be filled in via other methods?
  
 For command line tools, making it json seems wrong, and a comma list  
 would be more friendly. Internally the command line tools could easily  
 format it as a json list to pass along.
  
  

Yeah, I agree with this perspective.
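Either format is trivial to consume on the filter side anyway; a throwaway sketch of the two parse paths, with values copied from Sam's example:

import json

raw_csv = "tenantid1,tenantid2"
raw_json = '["tenantid1", "tenantid2"]'

from_csv = [t.strip() for t in raw_csv.split(",") if t.strip()]
from_json = json.loads(raw_json)

assert from_csv == from_json == ["tenantid1", "tenantid2"]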
  
  
 What do the other various things that take lists expect? I'd say that's  
 more of a consideration too, uniformity across the inputs.
  
 --  
 -jlk
  
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org 
 (mailto:OpenStack-operators@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
  
  


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] HAPROXY 504 errors in HA conf

2015-01-13 Thread John Dewey
Are you using ucarp or keepalived to manage the VIP address?  Basically, are 
you rebooting the load balancer, which everything is configured to use?

John 


On Tuesday, January 13, 2015 at 5:04 AM, Pedro Sousa wrote:

 Hi all,
 
 I have 3 nodes that are load balancing some OpenStack API services. When 
 I reboot one of my servers to test HA, and when that server comes back online, I 
 start getting 504 errors, especially with the nova-api, glance and keystone APIs. 
 This only happens if I reboot the server. 
 
 Then, to recover from those errors, I have to restart all my services on all the 
 servers, without rebooting, by running openstack-service restart. 
 
 I'm attaching my haproxy conf in case someone can assist me. 
 
 Thank you.
 
 
 
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org 
 (mailto:OpenStack-operators@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 
 
 
 Attachments: 
 - haproxy.cfg
 


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [oslo] Healthcheck middleware

2014-07-08 Thread John Dewey
I am looking to add health check middleware [1] into Keystone, and eventually 
other API endpoints.  I understand it makes sense to move this into oslo, so 
other projects can utilize it in their paste pipelines.  My question is where in 
oslo should this go?

Thanks -
John

[1] https://review.openstack.org/#/c/105311/
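For context, the middleware itself is tiny; a minimal hand-rolled sketch of the idea (not the code under review, and the path and body are arbitrary):

class HealthcheckMiddleware(object):
    """Answer /healthcheck locally, pass everything else to the wrapped app."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        if environ.get("PATH_INFO") == "/healthcheck":
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"OK\n"]
        return self.app(environ, start_response)

Wired into a paste pipeline as a filter ahead of auth, something like this lets load balancers probe the endpoint without credentials.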

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data from Operators needed.

2014-04-01 Thread John Dewey
Thanks Eugene

John 


On Tuesday, April 1, 2014 at 3:02 AM, Eugene Nikanorov wrote:

 Hi folks,
 
 On the last meeting we decided to collect usage data so we could prioritize 
 features and see what is demanded most.
 
 Here's the blank page to do that (in a free form). I'll structure it once we 
 have some data. 
 https://wiki.openstack.org/wiki/Neutron/LBaaS/Usecases
 
 Please fill with the data you have.
 
 Thanks, 
 Eugene.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute not re-establishing connectivity after controller switchover

2014-03-24 Thread John Dewey
Jay had responded to a similar issue [1] some time ago (I swear I saw talk of 
this last week but can't find the newer thread).  Since the referenced posting, 
we have also found that rabbit 3.2.x with ESL Erlang helps a ton.

tl;dr It is a client issue.  See the thread for further details.


[1] http://lists.openstack.org/pipermail/openstack/2013-August/000934.html  


On Monday, March 24, 2014 at 10:40 AM, Chris Friesen wrote:

 On 03/24/2014 11:31 AM, Chris Friesen wrote:
  
  It looks like we're raising
   
  RecoverableConnectionError: connection already closed
   
  down in /usr/lib64/python2.7/site-packages/amqp/abstract_channel.py, but
  nothing handles it.
   
  It looks like the most likely place that should be handling it is
  nova.openstack.common.rpc.impl_kombu.Connection.ensure().
   
   
  In the current oslo.messaging code the ensure() routine explicitly
  handles connection errors (which RecoverableConnectionError is) and
  socket timeouts--the ensure() routine in Havana doesn't do this.
   
  
  
 I misread the code, ensure() in Havana does in fact monitor socket  
 timeouts, but it doesn't handle connection errors.
  
 It looks like support for handling connection errors was added to  
 oslo.messaging just recently in git commit 0400cbf. The git commit  
 comment talks about clustered rabbit nodes and mirrored queues which  
 doesn't apply to our scenario, but I suspect it would probably fix the  
 problem that we're seeing as well.
  
 Chris
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-24 Thread John Dewey
I have a similar concern.  The underlying driver may support different 
functionality, but the differentiators need to be exposed through the top-level API.

I see the SSL work is well underway, and I am in the process of defining L7 
scripting requirements.  However, I will definitely need L7 scripting prior to 
the API being defined.
Is this where vendor extensions come into play?  I kinda like the route the 
Ironic guys are taking with a “vendor passthru” API.  

John  


On Monday, March 24, 2014 at 3:17 PM, Brandon Logan wrote:

 Creating a separate driver for every new need brings up a concern I have had. 
  If we are to implement a separate driver for every need then the 
 permutations are endless and may cause a lot of drivers and technical debt.  If 
 someone wants an ha-haproxy driver then great.  What if they want it to be 
 scalable and/or HA, is there supposed to be scalable-ha-haproxy, 
 scalable-haproxy, and ha-haproxy drivers?  Then what if, instead of 
 spinning up processes on the host machine, we want a nova VM or a container to 
 house it?  As you can see the permutations will begin to grow exponentially.  
 I'm not sure there is an easy answer for this.  Maybe I'm worrying too much 
 about it because hopefully most cloud operators will use the same driver that 
 addresses those basic needs, but worst case scenarios we have a ton of 
 drivers that do a lot of similar things but are just different enough to 
 warrant a separate driver.  
 From: Susanne Balle [sleipnir...@gmail.com (mailto:sleipnir...@gmail.com)]
 Sent: Monday, March 24, 2014 4:59 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and 
 managed services
  
 Eugene,  
  
 Thanks for your comments,  
  
 See inline:  
  
 Susanne  
  
  
 On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov enikano...@mirantis.com 
 (mailto:enikano...@mirantis.com) wrote:
  Hi Susanne,  
   
  a couple of comments inline:  
   
   
  
   We would like to discuss adding the concept of “managed services” to the 
   Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA 
   proxy. The latter could be a second approach for some of the software 
   load-balancers e.g. HA proxy since I am not sure that it makes sense to 
   deploy Libra within Devstack on a single VM.
 
   Currently users would have to deal with HA, resiliency, monitoring and 
   managing their load-balancers themselves.  As a service provider we are 
   taking a more managed service approach allowing our customers to consider 
   the LB as a black box and the service manages the resiliency, HA, 
   monitoring, etc. for them.


   
   
   
   
   
   
  
   
  As far as I understand these two abstracts, you're talking about making 
  LBaaS API more high-level than it is right now.
  I think that was not on our roadmap because another project (Heat) is 
  taking care of more abstracted service.
  The LBaaS goal is to provide vendor-agnostic management of load balancing 
  capabilities and quite fine-grained level.
  Any higher level APIs/tools can be built on top of that, but are out of 
  LBaaS scope.
   
  
 [Susanne] Yes. Libra currently has some internal APIs that get triggered when 
 an action needs to happen. We would like similar functionality in Neutron 
 LBaaS so the user doesn’t have to manage the load-balancers but can consider 
 them as black-boxes. Would it make sense to maybe consider integrating 
 Neutron LBaaS with heat to support some of these use cases?   
   
  
   We like where Neutron LBaaS is going with regards to L7 policies and SSL 
   termination support which Libra is not currently supporting and want to 
   take advantage of the best in each project.
   We have a draft on how we could make Neutron LBaaS take advantage of 
   Libra in the back-end.   
   The details are available at: 
   https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft


   

   
  I looked at the proposal briefly, it makes sense to me. Also it seems to be 
  the simplest way of integrating LBaaS and Libra - create a Libra driver for 
  LBaaS.  
   
   
   
   
  
  
 [Susanne] Yes that would be the short term solution to get us where we need 
 to be. But We do not want to continue to enhance Libra. We would like move to 
 Neutron LBaaS and not have duplicate efforts.  
   

   While this would allow us to fill a gap short term we would like to 
   discuss the longer term strategy since we believe that everybody would 
   benefit from having such “managed services” artifacts built directly into 
   Neutron LBaaS.


   
   
  I'm not sure about building it directly into LBaaS, although we can discuss 
  it.  
   
   
   
   
  
  
 [Susanne] The idea behind the “managed services” aspect/extensions would be 
 reusable for other software LB.   
   
  For instance, HA is definitely on roadmap and everybody seems to 

Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread John Dewey
I am interested 


On Thursday, March 6, 2014 at 7:32 AM, Jorge Miramontes wrote:

 Hi everyone,
 
 I'd like to gauge everyone's interest in a possible mini-summit for Neturon 
 LBaaS. If enough people are interested I'd be happy to try and set something 
 up. The Designate team just had a productive mini-summit in Austin, TX and it 
 was nice to have face-to-face conversations with people in the Openstack 
 community. While most of us will meet in Atlanta in May, I feel that a 
 focused mini-summit will be more productive since we won't have other 
 Openstack distractions around us. Let me know what you all think! 
 
 Cheers, 
 --Jorge
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review approval

2014-03-06 Thread John Dewey
On Thursday, March 6, 2014 at 11:09 AM, Russell Bryant wrote:
 On 03/06/2014 01:05 PM, Sean Dague wrote:
  One of the issues that the Nova team has definitely hit is
  Blueprint overload. At some point there were over 150 blueprints.
  Many of them were a single sentence.
  
  The results of this have been that design review today is typically
  not happening on Blueprint approval, but is instead happening once
  the code shows up in the code review. So -1s and -2s on code review
  are a mix of design and code review. A big part of which is that
  design was never in any way sufficiently reviewed before the code
  started.
  
 
 
 We certainly did better this cycle. Having a team of people do the
 reviews helped. We have some criteria documented [1]. Trying to do
 reviews in the blueprint whiteboard is just a painful disaster of a workflow.
 
  In today's Nova meeting a new thought occurred. We already have
  Gerrit which is good for reviewing things. It gives you detailed
  commenting abilities, voting, and history. Instead of attempting
  (and usually failing) on doing blueprint review in launchpad (or
  launchpad + an etherpad, or launchpad + a wiki page) we could do
  something like follows:
  
  1. create bad blueprint 2. create gerrit review with detailed
  proposal on the blueprint 3. iterate in gerrit working towards
  blueprint approval 4. once approved copy back the approved text
  into the blueprint (which should now be sufficiently detailed)
  
  Basically blueprints would get design review, and we'd be pretty
  sure we liked the approach before the blueprint is approved. This
  would hopefully reduce the late design review in the code reviews
  that's happening a lot now.
  
  There are plenty of niggly details that would be need to be worked
  out
  
  * what's the basic text / template format of the design to be
  reviewed (probably want a base template for folks to just keep
  things consistent). * is this happening in the nova tree (somewhere
  in docs/ - NEP (Nova Enhancement Proposals), or is it happening in
  a separate gerrit tree. * are there timelines for blueprint
  approval in a cycle? after which point, we don't review any new
  items.
  
  Anyway, plenty of details to be sorted. However we should figure
  out if the big idea has support before we sort out the details on
  this one.
  
  Launchpad blueprints will still be used for tracking once things
  are approved, but this will give us a standard way to iterate on
  that content and get to agreement on approach.
  
 
 
 I am a *HUGE* fan of the general idea. It's a tool we already use for
 review and iterating on text. It seems like it would be a huge win.
 I also think it would allow and encourage a lot more people to get
 involved in the reviews.
 
 I like the idea of iterating in gerrit until it's approved, and then
 using blueprints to track status throughout development. We could
 copy the text back into the blueprint, or just have a link to the
 proper file in the git repo.
 
 I think a dedicated git repo for this makes sense.
 openstack/nova-blueprints or something, or openstack/nova-proposals if
 we want to be a bit less tied to launchpad terminology.
 
 If folks are on board with the idea, I'm happy to work on getting a
 repo set up. The base template could be the first review against the
 repo.
 
 [1] https://wiki.openstack.org/wiki/Blueprints
Funny, we actually had this very recommendation come out of the OpenStack 
Operators mini-summit this week.  There are other people very interested in 
this approach for blueprints.

John
 
 
 -- 
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Health monitoring and statistics for complex LB configurations.

2014-03-05 Thread John Dewey
On Wednesday, March 5, 2014 at 12:41 PM, Eugene Nikanorov wrote:
 Hi community,
 
 Another interesting questions were raised during object model discussion 
 about how pool statistics and health monitoring should be used in case of 
 multiple vips sharing one pool. 
 
 Right now we can query statistics for the pool, and some data like in/out 
 bytes and request count will be returned.
 If we had several vips sharing the pool, what kind of statistics would make 
 sense for the user?
 The options are:
 
 1) aggregated statistics for the pool, e.g. statistics of all requests that 
 has hit the pool through any VIP
 2) per-vip statistics for the pool.
 
 
 

Would it be crazy to offer both?  We can return stats for each pool associated 
with the VIP as you described below.  However, we could also offer an aggregated 
section for those interested.

IMO, having stats broken out per-pool seems more helpful than aggregated only, 
while both would be ideal.
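Shape-wise, "both" could look something like the sketch below; the field names are invented, purely to make the suggestion concrete:

stats = {
    "aggregate": {"bytes_in": 1200, "bytes_out": 5400, "total_connections": 42},
    "pools": {
        "pool-a": {"bytes_in": 800, "bytes_out": 3600, "total_connections": 30,
                   "members": {"member-1": "ACTIVE", "member-2": "INACTIVE"}},
        "pool-b": {"bytes_in": 400, "bytes_out": 1800, "total_connections": 12,
                   "members": {"member-3": "ACTIVE"}},
    },
}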

John
 
 Depending on the answer, the statistics workflow will be different.
 
 The good option of getting the statistics and health status could be to query 
 it through the vip and get it for the whole logical instance, e.g. a call 
 like: 
  lb-vip-statistics-get --vip-id vip_id
 the would result in json that returns statistics for every pool associated 
 with the vip, plus operational status of all members for the pools associated 
 with that VIP.
 
 Looking forward to your feedback.
 
 Thanks,
 Eugene.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [TripleO] consistency vs packages in TripleO

2014-02-13 Thread John Dewey
On Thursday, February 13, 2014 at 1:27 PM, Robert Collins wrote:
 So progressing with the 'and folk that want to use packages can' arc,
 we're running into some friction.
 
 I've copied -operators in on this because its very relevant IMO to operators 
 :)
 
 So far:
 - some packages use different usernames
 - some put things in different places (and all of them use different
 places to the bare metal ephemeral device layout which requires
 /mnt/).
 - possibly more in future.
 
 Now, obviously its a 'small matter of code' to deal with this, but the
 impact on ops folk isn't so small. There are basically two routes that
 I can see:
 
 # A
 - we have a reference layout - install from OpenStack git / pypi
 releases; this is what we will gate on, and can document.
 - and then each distro (both flavors of Linux and also possibly things
 like Fuel that distribute OpenStack) is different - install on X,
 get some delta vs reference.
 - we need multiple manuals describing how to operate and diagnose
 issues in such a deployment, which is a matrix that overlays platform
 differences the user selects like 'Fedora' and 'Xen'.
 
 # B
 - we have one layout, with one set of install paths, usernames
 - package installs vs source installs make no difference - we coerce
 the package into reference upstream shape as part of installing it.
 - documentation is then identical for all TripleO installs, except
 the platform differences (as above - systemd on Fedora, upstart on
 Ubuntu, Xen vs KVM)
 
 B seems much more useful to our ops users - less subtly wrong docs, we
 avoid bugs where tools we write upstream make bad assumptions,
 experience operating a TripleO deployed OpenStack is more widely
 applicable (applies to all such installs, not just those that happened
 to use the same package source).
 
 I see this much like the way Nova abstracts out trivial Hypervisor
 differences to let you 'nova boot' anywhere, that we should be hiding
 these incidental (vs fundamental capability) differences.
 
 

I personally like B.  In the OpenStack Chef community, there has been quite a 
bit of excitement over the work that Craig Tracey has been doing with 
omnibus-openstack [1].  It is very similar to B, however, it builds a super 
package per distro, with all dependencies into a known location (e.g. 
/opt/openstack/).

Regardless of how B is ultimately implemented, I personally like the suggestion.

[1] https://github.com/craigtracey/omnibus-openstack

John 
 
 What say ye all?
 
 -Robv
 
 
 -- 
 Robert Collins rbtcoll...@hp.com (mailto:rbtcoll...@hp.com)
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org 
 (mailto:openstack-operat...@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] nova-novnc SSL configuration - Havana

2014-02-13 Thread John Dewey
Hi Nagaraj -

I ran into this problem long ago when I offloaded SSL on the VNC URL.
It is an issue with the javascript, and detailed in bug 1228649 [1].

[1] https://bugs.launchpad.net/ubuntu/raring/+source/nova/+bug/1228649 


On Thursday, February 13, 2014 at 2:39 AM, Nagaraj Mandya wrote:

 Hello,
   I want to run the Nova noVNC proxy over SSL (HTTPS). And so, I created a 
 self-signed certificate and put both the certificate and the private key in 
 /etc/nova/ssl. I then edited nova.conf and added the following:
 
 ssl_only=true
 cert=/etc/nova/ssl/cert.crt
 key=/etc/nova/ssl/private.key
 
   I then modified the nova proxy URL to use https instead of http and 
 restarted the noVNC service. However, though Nova returns the VNC URL as 
 https, I am unable to connect over HTTPS to the URL. Is there any other 
 configuration that needs to be changed? I do not see any logs from novnc as 
 well. Thanks.
 --
 Regards,
 Nagaraj
 
 
 
 
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org (mailto:openstack@lists.openstack.org)
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 
 


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [keystone] memcache token backend performance

2014-01-07 Thread John Dewey
Sounds like: 

https://bugs.launchpad.net/keystone/+bug/1251123 


On Friday, January 3, 2014 at 8:38 PM, Xu (Simon) Chen wrote:

 Hi folks,
 
 I am having trouble with using memcache as the keystone token backend. I have 
 three keystone nodes running active/active. Each is running keystone on 
 apache (for kerberos auth). I recently switched from using sql backend to 
memcache, with memcached running on all three of the keystone nodes.  
 
 This setup would run well for a while, but then apache would start to hog 
 CPUs, and memcached would increase to 30% or so. I tried to increase 
 memcached cluster from 3 to 6 nodes, but in general the performance is much 
 worse compared to sql backend. 
 
 Any ideas?
 
 Thanks.
 -Simon
 
 
 
 
 
 
 
 
 
 
 
 
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org (mailto:openstack@lists.openstack.org)
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 
 


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] nova list failure when quantum_url points to HTTPS endpoint

2013-07-30 Thread John Dewey
Was curious if anyone else has run into this issue, and could provide some 
feedback.

https://bugs.launchpad.net/nova/+bug/1206330

Thanks -
John


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack