Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Matthew Treinish
On Mon, May 01, 2017 at 05:00:17AM -0700, Flavio Percoco wrote:
> On 28/04/17 11:19 -0500, Eric Fried wrote:
> > If it's *just* glance we're making an exception for, I prefer #1 (don't
> > deprecate/remove [glance]api_servers).  It's way less code &
> > infrastructure, and it discourages others from jumping on the
> > multiple-endpoints bandwagon.  If we provide endpoint_override_list
> > (handwave), people will think it's okay to use it.
> > 
> > Anyone aware of any other services that use multiple endpoints?
> 
> Probably a bit late but yeah, I think this makes sense. I'm not aware of other
> projects that have a list of api_servers.

I thought it was just nova too, but it turns out cinder has exactly the same
option as nova (I hit this in my devstack patch trying to get glance deployed
as a wsgi app):

https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55

Although from what I can tell you don't have to set it, and it will fall back to
using the catalog, assuming you've configured the catalog info for cinder:

https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
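
For illustration, a hedged sketch of the cinder side (option name as in the linked
config; the endpoints are placeholder values). Leaving the option unset is what lets
cinder fall back to the catalog lookup above:

[DEFAULT]
# Optional explicit list of glance API servers; if this is not set,
# cinder resolves the image endpoint from the service catalog instead.
glance_api_servers = http://glance1:9292,http://glance2:9292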


-Matt Treinish


> 
> > On 04/28/2017 10:46 AM, Mike Dorman wrote:
> > > Maybe we are talking about two different things here?  I’m a bit confused.
> > > 
> > > Our Glance config in nova.conf on HV’s looks like this:
> > > 
> > > [glance]
> > > api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
> > > glance_api_insecure=True
> > > glance_num_retries=4
> > > glance_protocol=http
> 
> 
> FWIW, this feature is being used as intended. I'm sure there are ways to achieve
> this using external tools like haproxy/nginx, but that adds an extra burden to
> ops that is probably not necessary since this functionality is already there.
> 
> Flavio
> 
> > > So we do provide the full URLs, and there is SSL support.  Right?  I am 
> > > fairly certain we tested this to ensure that if one URL fails, nova goes 
> > > on to retry the next one.  That failure does not get bubbled up to the 
> > > user (which is ultimately the goal.)
> > > 
> > > I don’t disagree with you that the client side choose-a-server-at-random 
> > > is not a great load balancer.  (But isn’t this roughly the same thing 
> > > that oslo-messaging does when we give it a list of RMQ servers?)  For us 
> > > it’s more about the failure handling if one is down than it is about 
> > > actually equally distributing the load.
> > > 
> > > In my mind options One and Two are the same, since today we are already 
> > > providing full URLs and not only server names.  At the end of the day, I 
> > > don’t feel like there is a compelling argument here to remove this 
> > > functionality (that people are actively making use of.)
> > > 
> > > To be clear, I, and I think others, are fine with nova by default getting 
> > > the Glance endpoint from Keystone.  And that in Keystone there should 
> > > exist only one Glance endpoint.  What I’d like to see remain is the 
> > > ability to override that for nova-compute and to target more than one 
> > > Glance URL for purposes of fail over.
> > > 
> > > Thanks,
> > > Mike
> > > 
> > > 
> > > 
> > > 
> > > On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:
> > > 
> > > Thank you both for your feedback - that's really helpful.
> > > 
> > > Let me say a few more words about what we're trying to accomplish here
> > > overall so that maybe we can figure out what the right way forward is.
> > > (it may be keeping the glance api servers setting, but let me at least
> > > make the case real quick)
> > > 
> > > From a 10,000 foot view, the thing we're trying to do is to get nova's
> > > consumption of all of the OpenStack services it uses to be less special.
> > > 
> > > The clouds have catalogs which list information about the services -
> > > public, admin and internal endpoints and whatnot - and then we're asking
> > > admins to not only register that information with the catalog, but to
> > > also put it into the nova.conf. That means that any updating of that
> > > info needs to be an API call to keystone and also a change to nova.conf.
> > > If we, on the other hand, use the catalog, then nova can pick up changes
> > > in real time as they're rolled out to the cloud - and there is hopefully
> > > a sane set of defaults we could choose (based on operator feedback like
> > > what you've given) so that in most cases you don't have to tell nova
> > > where to find glance _at_all_ because the cloud already knows where it
> > > is. (nova would know to look in the catalog for the internal interface
> > > of the image service - for instance - there's no need to ask an operator
> > > to add to the config "what is the service_type of the image service we
> > > should talk to" :) )
> > 
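
To make the "use the catalog" idea above concrete, here is a minimal, hedged sketch
(assuming keystoneauth1; the auth URL and credentials are placeholders) of resolving
the image service's internal endpoint from the catalog instead of from a configured
api_servers list:

from keystoneauth1 import loading, session

# Placeholder credentials purely for illustration.
auth = loading.get_plugin_loader('password').load_from_options(
    auth_url='http://keystone:5000/v3',
    username='nova', password='secret', project_name='service',
    user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

# Ask the service catalog for glance's internal endpoint; no
# glance-specific URL needs to live in nova.conf for this to work.
image_endpoint = sess.get_endpoint(service_type='image',
                                   interface='internal')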

Re: [Openstack-operators] [openstack-dev] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-07 Thread Matthew Treinish
On Mon, Dec 07, 2015 at 06:18:04PM -0500, Steve Martinelli wrote:
> 
> ... re-adding the operators mailing list.
> 
> sounds like we should document how to do this, with the assertion that it
> is not tested with our CI.
> 
> with that said, we should try to have a job that sets up keystone with
> nginx that is run periodically (similar to our eventlet job at the moment).

So, we actually run keystone with eventlet on every tempest-dsvm-postgres-full
job. It runs way more than periodically:

http://status.openstack.org/openstack-health/#/job/gate-tempest-dsvm-postgres-full
 

That's just a 24 hr window in the gate queue; including check, it's much more.

This has been long-standing behavior ever since support for running keystone under
mod_wsgi was added to devstack:

https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/devstack-gate.yaml#L1395-L1429

It's one of three non-default settings in that job's configuration. I've always
viewed that job config overloading as a bug, for this exact reason.
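
For anyone wanting to reproduce the non-default setup locally, a hedged sketch of a
devstack local.conf; the variable name is my recollection of the toggle from that
era, so treat it as an assumption and check your devstack branch:

[[local|localrc]]
# Assumed devstack toggle: False keeps keystone on eventlet (the
# configuration the postgres job exercises), True deploys it under
# Apache mod_wsgi.
KEYSTONE_USE_MOD_WSGI=False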

-Matt Treinish
 
> 
> From: Brant Knudson 
> To:   "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 2015/12/07 05:52 PM
> Subject:  Re: [openstack-dev] [Openstack-operators] [keystone] Removing
> functionality that was deprecated in Kilo and upcoming
> deprecated functionality in Mitaka
> 
> 
> 
> 
> 
> On Tue, Dec 1, 2015 at 12:57 AM, Steve Martinelli 
> wrote:
>   Trying to summarize here...
> 
>   - There isn't much interest in keeping eventlet around.
>   - Folks are OK with running keystone in a WSGI server, but feel they are
>   constrained by Apache.
>   - uWSGI could help to support multiple web servers.
> 
>   My opinion:
> 
>   - Adding support for uWSGI definitely sounds like it's worth
>   investigating, but not achievable in this release (unless someone already
>   has something cooked up).
> 
> 
> 
> What needs to change to support uWSGI? You can already run keystone in
> python uwsgi and then front it with nginx:
> 
>  $ uwsgi --socket 127.0.0.1:5001 --wsgi-file $(which keystone-wsgi-public)
> --honour-stdin --enable-threads --workers 6
>  $ uwsgi --socket 127.0.0.1:35358 --wsgi-file $(which keystone-wsgi-admin)
> --honour-stdin --enable-threads --workers 6
> 
>  $ sudo vi /etc/nginx/sites-available/keystone
> 
> server {
>   listen 5000 default_server;
>   server_name localhost;
>   location / {
>     include uwsgi_params;
>     uwsgi_pass 127.0.0.1:5001;
>     uwsgi_param SCRIPT_NAME /;
>   }
> }
> server {
>   listen 35357 default_server;
>   server_name localhost;
>   location / {
>     include uwsgi_params;
>     uwsgi_pass 127.0.0.1:35358;
>     uwsgi_param SCRIPT_NAME /;
>   }
> }
> 
>  $ sudo ln -s /etc/nginx/sites-available/keystone /etc/nginx/sites-enabled/
> 
>  $ sudo nginx
> 
> and then you can make your regular curl calls.
> 
> Also, you can run keystone with regular http in python uwsgi (uwsgi --http)
> and then just do normal reverse proxying (from Apache or nginx or whatever),
> which I think would be adequate for keystone.
> 
> We don't do anything in keystone to stop deployments in web servers other
> than Apache. Keystone is just a regular wsgi app. We document Apache since
> it's popular and it provides mod_shib, which is the only saml2 module for
> web servers that I know of. Keystone can work with other saml2 modules and
> in different servers, it just takes the environment variables that the
> module sets and runs it through some mapping code. The mapping code has
> been shown to work alternative authentication modules (for ldap and
> kerberos).
> 
> - Brant





Re: [Openstack-operators] Tempest configuration for only admin api enforcement

2015-10-27 Thread Matthew Treinish
On Thu, Oct 22, 2015 at 05:39:23PM +, Edgar Magana wrote:
> Folks,
> 
> We need to modify our CI/CD tempest execution to adjust some changes that we 
> are doing in the neutron policy.json file. Basically, we are limiting all the 
> POST operations to the admin user. This makes some tempest tests fail for 
> obvious reasons. Any idea what would be the best way to make tempest aware of 
> our new policy.json configuration? We do not want to hack the code to run all 
> the tests as admin, this is why we are looking for suggestions.

So tempest by design doesn't make this exactly easy, since it breaks some of the
interop behavior that tempest is trying to enforce.

But, that being said, there are two ways I think you can do this today. You could
use an accounts.yaml file that has admin users listed but doesn't list admin as
a role in the yaml. [1] This should allow the credentials to be used for non-admin
tests (which is normally blocked). The alternative, if you're using tenant
isolation/dynamic creds, is to add the admin role to the tempest_roles option
so that admin is assigned to every user tempest creates. [2]
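
For concreteness, a hedged sketch of both approaches (field names follow the docs
linked below; the exact option group for tempest_roles may vary by release):

# accounts.yaml: these credentials do carry the admin role in keystone,
# but by not listing roles here tempest will also hand them to tests
# that expect non-admin credentials.
- username: 'cloud-admin'
  tenant_name: 'admin-project'
  password: 'secretpass'

# tempest.conf: with tenant isolation / dynamic credentials, add admin
# to the roles granted to every user tempest creates.
[auth]
tempest_roles = admin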

However, the caveat here is that I've never seen these configuration paths used
before, so I can definitely see there being weird side effects from doing this,
mostly because admin has the ability to do more than regular users, which will
break some of the tests.

Thanks,

Matt Treinish


[1] 
http://docs.openstack.org/developer/tempest/configuration.html#locking-test-accounts-aka-accounts-yaml-or-accounts-file
[2] 
http://docs.openstack.org/developer/tempest/configuration.html#dynamic-credentials




Re: [Openstack-operators] [openstack-dev] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Matthew Treinish
On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
> This came up in the operators mailing list back in June [1] but given the
> subject probably didn't get much attention.
> 
> Basically there is a really old bug [2] from Grizzly that is still a problem
> and affects multiple projects.  A tenant can be deleted in Keystone even
> though other resources in other projects are under that project, and those
> resources aren't cleaned up.

I agree this probably can be a major pain point for users. We've had to work
around it in tempest by creating things like:

http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
and
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py

to ensure we aren't leaving dangling resources after a run. But this doesn't work
in all cases either (like with tenant isolation enabled).

I also know there is a stackforge project that is attempting something similar
here:

http://git.openstack.org/cgit/stackforge/ospurge/

It would be much nicer if the burden for doing this was taken off users and this
was just handled cleanly under the covers.
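
As an illustration of what "under the covers" could look like, a minimal hedged
sketch (my own, not an existing implementation) of a service consuming keystone's
project-deletion notifications via oslo.messaging; the event_type and payload key
follow keystone's basic notification format:

from oslo_config import cfg
import oslo_messaging as messaging

class ProjectDeletedEndpoint(object):
    # Only react to keystone's project deletion events.
    filter_rule = messaging.NotificationFilter(
        event_type='identity.project.deleted')

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        project_id = payload.get('resource_info')
        # A consuming service would reap or archive resources owned by
        # project_id here instead of just acknowledging the event.
        return messaging.NotificationResult.HANDLED

transport = messaging.get_notification_transport(cfg.CONF)
targets = [messaging.Target(topic='notifications')]
listener = messaging.get_notification_listener(
    transport, targets, [ProjectDeletedEndpoint()])
listener.start()
listener.wait()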

> 
> Keystone implemented event notifications back in Havana [3] but the other
> projects aren't listening on them to know when a project has been deleted
> and act accordingly.
> 
> The bug has several people saying "we should talk about this at the summit"
> for several summits, but I can't find any discussion or summit sessions
> related back to the bug.
> 
> Given this is an operations and cross-project issue, I'd like to bring it up
> again for the Vancouver summit if there is still interest (which I'm
> assuming there is from operators).

I'd definitely support having a cross-project session on this.

> 
> There is a blueprint specifically for the tenant deletion case but it's
> targeted at only Horizon [4].
> 
> Is anyone still working on this? Is there sufficient interest in a
> cross-project session at the L summit?
> 
> Thinking out loud, even if nova doesn't listen to events from keystone, we
> could at least have a periodic task that looks for instances where the
> tenant no longer exists in keystone and then take some action (log a
> warning, shutdown/archive/, reap, etc).
> 
> There is also a spec for L to transfer instance ownership [5] which could
> maybe come into play, but I wouldn't depend on it.
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
> [2] https://bugs.launchpad.net/nova/+bug/967832
> [3] https://blueprints.launchpad.net/keystone/+spec/notifications
> [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
> [5] https://review.openstack.org/#/c/105367/
> 

-Matt Treinish

