[Openstack-operators] Boston Summit Forum and working session Log messages (please comment and participate)

2017-05-01 Thread Rochelle Grober
Hey folks!

I just wanted to raise your awareness of a forum session and a working session 
on Log Messages that are happening during the Boston Summit.

Here is the link to the etherpad for these sessions:

https://etherpad.openstack.org/p/BOS-forum-log-messages

We’ve been going around in circles on some details of the log messages for 
years, and Doug Hellmann has graciously stepped up to try to wrestle this 
beast into submission.  So, besides giving him a warm round of applause, let’s 
give him (and the small cadre of folks working with him on this) our respectful 
interactions, comments, concerns, high fives, etc. and turn up to the sessions 
to get this spec implementable in the here and now.

Please add your comments, topics, pre-forum discussions, etc. on the etherpad 
so that we remember to review and discuss them in the sessions.


Thanks and see you soon!
--Rocky


As reference, here is Doug’s email [1] advertising the spec:
I am looking for some feedback on two new proposals to add IDs to
log messages.

The tl;dr is that we’ve been talking about adding unique IDs to log
messages for 5 years. I myself am still not 100% convinced the idea
is useful, but I would like us to either do it or definitively say
we won't ever do it so that we can stop talking about it and consider
some other improvements to logging instead.
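As a purely hypothetical illustration (the ID format and the "NOVA-W-0042" code
below are invented here and are not taken from either spec), the difference
being debated is roughly:

import logging

LOG = logging.getLogger("nova.compute")

# Today: a free-form message with no stable identifier.
LOG.warning("Instance %s not found on host %s", "abc123", "cmp-01")

# With a message ID, operators could grep or alert on the identifier even if
# the wording changes between releases.
LOG.warning("[NOVA-W-0042] Instance %s not found on host %s", "abc123", "cmp-01")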

Based on early feedback from a small group who have been involved
in the conversations about this in the past, I have drafted two new
specs with different approaches that try to avoid the pitfalls that
blocked the earlier specs:

1. A cross-project spec to add logging message IDs in (what I hope
   is) a less onerous way than has been proposed before:
   https://review.openstack.org/460110

2. An Oslo spec to add some features to oslo.log to try to achieve the
   goals of the original proposal without having to assign message IDs:
   https://review.openstack.org/460112

To understand the full history and context, you’ll want to read the
blog post I wrote last week [1].  The reference lists of the specs
also point to some older specs with different proposals that have
failed to gain traction in the past.

I expect all three proposals to be up for discussion during the
logging working group session at the summit/forum, so if you have
any interest in the topic please plan to attend [2].

Thanks!
Doug

[1] 
https://doughellmann.com/blog/2017/04/20/lessons-learned-from-working-on-large-scale-cross-project-initiatives-in-openstack/
[2] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18507/logging-working-group-working-session


[1] http://lists.openstack.org/pipermail/openstack-dev/2017-April/115958.html


华为技术有限公司 Huawei Technologies Co., Ltd.
Rochelle Grober
Sr. Staff Architect, Open Source
Office Phone:408-330-5472
Email:rochelle.gro...@huawei.com

This e-mail and its attachments contain confidential information from HUAWEI, which
is intended only for the person or entity whose address is listed above. Any use of the
information contained herein in any way (including, but not limited to, total or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by
phone or email immediately and delete it!

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [LDT] Scheduling of 'plan the week' at the summit

2017-05-01 Thread David Medberry
I poked Matt VanWinkle this morning. He's working it. And apparently we're
all starting to take a look at the schedule which is a good thing.

I see no Ceph Operators meeting/discussion on the schedule. Some folks have
asked me if there is one. (I didn't propose one but maybe I should have.)

On Mon, May 1, 2017 at 2:53 PM, Sam Morrison  wrote:

> Hmm I’m pretty sure the answer here is no. I’ll see if I can get Tom’s or
> Matt’s attention via other mechanisms.
>
> Sam
>
>
> On 27 Apr 2017, at 5:10 pm, Tim Bell  wrote:
>
>
> The Large Deployment Team meeting for ‘Plan the Week’ (
> https://www.openstack.org/summit/boston-2017/summit-
> schedule/events/18404/large-deployment-team-planning-the-week) seems to
> be on Wednesday at 11h00 and the Recapping the week is the next slot at
> 11h50 (https://www.openstack.org/summit/boston-2017/summit-
> schedule/events/18406/large-deployment-team-recapping-the-week)
>
> Is this intended to have the two sessions so close together?
>
> Tim
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [LDT] Scheduling of 'plan the week' at the summit

2017-05-01 Thread Sam Morrison
Hmm I’m pretty sure the answer here is no. I’ll see if I can get Tom’s or 
Matt’s attention via other mechanisms.

Sam


> On 27 Apr 2017, at 5:10 pm, Tim Bell  wrote:
> 
>  
> The Large Deployment Team meeting for ‘Plan the Week’ 
> (https://www.openstack.org/summit/boston-2017/summit-schedule/events/18404/large-deployment-team-planning-the-week)
>  
> 
>  seems to be on Wednesday at 11h00 and the Recapping the week is the next 
> slot at 11h50 
> (https://www.openstack.org/summit/boston-2017/summit-schedule/events/18406/large-deployment-team-recapping-the-week)
>  
> 
>  
> Is this intended to have the two sessions so close together?
>  
> Tim
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Defining the agenda for Kubernetes Ops on OpenStack forum session @ OpenStack Summit Boston

2017-05-01 Thread Steve Gordon
Hi all,

There will be a forum session at OpenStack Summit Boston next week on the topic 
of Kubernetes Ops on OpenStack. The session takes place on Wednesday, May 10, 
at 1:50pm-2:30pm [1]. If you are an operator, developer, or other contributor 
attending OpenStack Summit who would like to participate in this session, we 
would love to have you. We're working on framing the agenda 
for the session in this Etherpad:

https://etherpad.openstack.org/p/BOS-forum-kubernetes-ops-on-openstack

Feel free to add your own thoughts; we look forward to seeing you there. If 
this email has you asking yourself what the Forum is and why you'd be there, 
I'd suggest starting here:

https://wiki.openstack.org/wiki/Forum

Thanks!

Steve

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18764/kubernetes-ops-on-openstack

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova][blazar][scientific] advanced instance scheduling: reservations and preemption - Forum session

2017-05-01 Thread Jay Pipes

On 05/01/2017 03:39 PM, Blair Bethwaite wrote:

Hi all,

Following up to the recent thread "[Openstack-operators] [scientific]
Resource reservation requirements (Blazar) - Forum session" and adding
openstack-dev.

This is now a confirmed forum session
(https://www.openstack.org/summit/boston-2017/summit-schedule/events/18781/advanced-instance-scheduling-reservations-and-preemption)
to cover any advanced scheduling use-cases people want to talk about,
but in particular focusing on reservations and preemption as they are
big priorities particularly for scientific deployers.


Etherpad draft is
https://etherpad.openstack.org/p/BOS-forum-advanced-instance-scheduling,
please attend and contribute! In particular I'd appreciate background
spec and review links added to the etherpad.

Jay, would you be able and interested to moderate this from the Nova side?


Masahito Muroi is currently marked as the moderator, but I will indeed 
be there and happy to assist Masahito in moderating, no problem.


Best,
-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Eric Fried
Sam-

Under the current design, you can provide a specific endpoint
(singular) via the `endpoint_override` conf option.  Based on feedback
on this thread, we will also be keeping support for
`[glance]api_servers` for consumers who actually need to be able to
specify multiple endpoints.  See latest spec proposal[1] for details.

[1] https://review.openstack.org/#/c/461481/
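As a rough sketch of how a single pinned endpoint differs from catalog lookup,
something along these lines can already be done with keystoneauth (this is
illustrative only, not the code path proposed in the spec, and the URLs and
credentials are placeholders):

from keystoneauth1 import adapter, session
from keystoneauth1.identity import v3

# Placeholder credentials and URLs.
auth = v3.Password(auth_url="http://keystone:5000/v3",
                   username="nova", password="secret",
                   project_name="service",
                   user_domain_id="default", project_domain_id="default")
sess = session.Session(auth=auth)

# Default behaviour: resolve the image endpoint from the service catalog.
glance = adapter.Adapter(session=sess, service_type="image",
                         interface="internal")

# Operator override: pin one specific endpoint; the catalog is not consulted.
glance_pinned = adapter.Adapter(session=sess, service_type="image",
                                endpoint_override="http://glance-internal:9292")

print(glance.get_endpoint())         # URL discovered via the catalog
print(glance_pinned.get_endpoint())  # http://glance-internal:9292

Keeping [glance]api_servers alongside that covers the case where operators
genuinely need a list of endpoints rather than a single override.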

Thanks,
Eric (efried)

On 05/01/2017 12:20 PM, Sam Morrison wrote:
> 
>> On 1 May 2017, at 4:24 pm, Sean McGinnis  wrote:
>>
>> On Mon, May 01, 2017 at 10:17:43AM -0400, Matthew Treinish wrote:
 
>>>
>>> I thought it was just nova too, but it turns out cinder has the same exact
>>> option as nova: (I hit this in my devstack patch trying to get glance 
>>> deployed
>>> as a wsgi app)
>>>
>>> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>>>
>>> Although from what I can tell you don't have to set it and it will fallback 
>>> to
>>> using the catalog, assuming you configured the catalog info for cinder:
>>>
>>> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>>>
>>>
>>> -Matt Treinish
>>>
>>
>> FWIW, that came with the original fork out of Nova. I do not have any real
>> world data on whether that is used or not.
> 
> Yes this is used in cinder.
> 
> A lot of the projects let you set the endpoints for them to use. This is 
> extremely useful in a large production OpenStack install where you want to 
> control the traffic.
> 
> I can understand using the catalog in certain situations and feel it’s OK for 
> that to be the default, but please don’t prevent operators from configuring it 
> differently.
> 
> Glance is the big one, as you want to control the data flow efficiently, but 
> any service-to-service configuration should ideally be manually configurable.
> 
> Cheers,
> Sam
> 
> 
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova][blazar][scientific] advanced instance scheduling: reservations and preemption - Forum session

2017-05-01 Thread Blair Bethwaite
Hi all,

Following up to the recent thread "[Openstack-operators] [scientific]
Resource reservation requirements (Blazar) - Forum session" and adding
openstack-dev.

This is now a confirmed forum session
(https://www.openstack.org/summit/boston-2017/summit-schedule/events/18781/advanced-instance-scheduling-reservations-and-preemption)
to cover any advanced scheduling use-cases people want to talk about,
but in particular focusing on reservations and preemption as they are
big priorities particularly for scientific deployers.

Etherpad draft is
https://etherpad.openstack.org/p/BOS-forum-advanced-instance-scheduling,
please attend and contribute! In particular I'd appreciate background
spec and review links added to the etherpad.

Jay, would you be able and interested to moderate this from the Nova side?

Cheers,

On 12 April 2017 at 05:22, Jay Pipes  wrote:
> On 04/11/2017 02:08 PM, Pierre Riteau wrote:
>>>
>>> On 4 Apr 2017, at 22:23, Jay Pipes wrote:
>>>
>>> On 04/04/2017 02:48 PM, Tim Bell wrote:

 Some combination of spot/OPIE
>>>
>>>
>>> What is OPIE?
>>
>>
>> Maybe I missed a message: I didn’t see any reply to Jay’s question about
>> OPIE.
>
>
> Thanks!
>
>> OPIE is the OpenStack Preemptible Instances
>> Extension: https://github.com/indigo-dc/opie
>> I am sure other on this list can provide more information.
>
>
> Got it.
>
>> I think running OPIE instances inside Blazar reservations would be
>> doable without many changes to the implementation.
>> We’ve talked about this idea several times, this forum session would be
>> an ideal place to draw up an implementation plan.
>
>
> I just looked through the OPIE source code. One thing I'm wondering is why
> the code for killing off pre-emptible instances is being done in the
> filter_scheduler module?
>
> Why not have a separate service that merely responds to a
> NoValidHost exception being raised from the scheduler with a call to go and
> terminate one or more instances that would have allowed the original request
> to land on a host?
>
> Right here is where OPIE goes and terminates pre-emptible instances:
>
> https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L92-L100
>
> However, that code should actually be run when line 90 raises NoValidHost:
>
> https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L90
>
> There would be no need at all for "detecting overcommit" here:
>
> https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L96
>
> Simply detect a NoValidHost being returned to the conductor from the
> scheduler, examine if there are pre-emptible instances currently running
> that could be terminated and terminate them, and re-run the original call to
> select_destinations() (the scheduler call) just like a Retry operation
> normally does.
>
> There'd be no need whatsoever to involve any changes to the scheduler at
> all.
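To make that control flow concrete, here is a rough, self-contained sketch;
every name in it is hypothetical, and it is not Nova, OPIE, or Blazar code:

class NoValidHost(Exception):
    """Raised by the scheduler when no host can satisfy a request."""


def select_destinations(request_spec):
    """Stand-in for the scheduler call; raises NoValidHost when capacity runs out."""
    raise NoValidHost()


def preemptible_candidates(request_spec):
    """Stand-in: running preemptible instances whose removal would free enough room."""
    return []


def terminate(instance):
    """Stand-in: delete one preemptible instance."""


def schedule_with_preemption(request_spec, max_attempts=2):
    """React to NoValidHost by freeing preemptible capacity, then retry.

    The scheduler itself stays unchanged; preemption only happens after it
    has already answered "no", just like a normal retry operation.
    """
    for _ in range(max_attempts):
        try:
            return select_destinations(request_spec)
        except NoValidHost:
            victims = preemptible_candidates(request_spec)
            if not victims:
                raise  # nothing left to preempt, surface the failure
            for instance in victims:
                terminate(instance)
    return select_destinations(request_spec)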
>
 and Blazar would seem doable as long as the resource provider
 reserves capacity appropriately (i.e. spot resources>>blazar
 committed along with no non-spot requests for the same aggregate).
 Is this feasible?
>
>
> No. :)
>
> As mentioned in previous emails and on the etherpad here:
>
> https://etherpad.openstack.org/p/new-instance-reservation
>
> I am firmly against having the resource tracker or the placement API
> represent inventory or allocations with a temporal aspect to them (i.e.
> allocations in the future).
>
> A separate system (hopefully Blazar) is needed to manage the time-based
> associations to inventories of resources over a period in the future.
>
> Best,
> -jay
>
>>> I'm not sure how the above is different from the constraints I mention
>>> below about having separate sets of resource providers for preemptible
>>> instances than for non-preemptible instances?
>>>
>>> Best,
>>> -jay
>>>
 Tim

 On 04.04.17, 19:21, "Jay Pipes" wrote:

On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
> Hi Jay,
>
> On 4 April 2017 at 00:20, Jay Pipes wrote:
>> However, implementing the above in any useful fashion requires
 that Blazar
>> be placed *above* Nova and essentially that the cloud operator
 turns off
>> access to Nova's  POST /servers API call for regular users.
 Because if not,
>> the information that Blazar acts upon can be simply
 circumvented by any user
>> at any time.
>
> That's something of an oversimplification. A reservation system
> outside of Nova could manipulate Nova host-aggregates to "cordon
 off"
> infrastructure from on-demand access (I believe Blazar already uses
> this approach), and it's not much of a jump to imagine operators
 being
> able to twiddle the available reserved capacity in a finite cloud
 so
> that reser

Re: [Openstack-operators] [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum session

2017-05-01 Thread Blair Bethwaite
Thanks Rochelle. I encourage everyone to dump thoughts into the
etherpad (https://etherpad.openstack.org/p/BOS-forum-special-hardware
- feel free to garden it as you go!) so we can have some chance of
organising a coherent session. In particular it would be useful to
know what is going to be most useful for the Nova and Cyborg devs so
that we can give that priority before we start the show-and-tell /
knowledge-share that is often a large part of these sessions. I'd also
be very happy to have a co-moderator if anyone wants to volunteer.

On 26 April 2017 at 03:11, Rochelle Grober  wrote:
>
> I know that some cyborg folks and nova folks are planning to be there. Now
> we need to drive some ops folks.
>
>
> Sent from HUAWEI AnyOffice
> From:Blair Bethwaite
> To:openstack-...@lists.openstack.org,openstack-oper.
> Date:2017-04-25 08:24:34
> Subject:[openstack-dev] [scientific][nova][cyborg] Special Hardware Forum
> session
>
> Hi all,
>
> A quick FYI that this Forum session exists:
> https://www.openstack.org/summit/boston-2017/summit-schedule/events/18803/special-hardware
> (etherpad: https://etherpad.openstack.org/p/BOS-forum-special-hardware).
>
> It would be great to see a good representation from both the Nova and
> Cyborg dev teams, and also ops ready to share their experience and
> use-cases.
>
> --
> Cheers,
> ~Blairo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Blair Bethwaite
On 29 April 2017 at 01:46, Mike Dorman  wrote:
> I don’t disagree with you that the client side choose-a-server-at-random is 
> not a great load balancer.  (But isn’t this roughly the same thing that 
> oslo-messaging does when we give it a list of RMQ servers?)  For us it’s more 
> about the failure handling if one is down than it is about actually equally 
> distributing the load.

Maybe not great, but still better than making operators deploy (often
complex) full-featured external LBs when they really just want
*enough* redundancy. In many cases this seems to just create pets in
the control plane. I think it'd be useful if all OpenStack APIs and
their clients actively handled this poor-man's HA without having to
resort to haproxy etc, or e.g., assuming operators own the DNS.
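
For what it's worth, the kind of client-side handling being asked for is not
much code; a toy sketch follows (a hypothetical helper, not taken from any
OpenStack client, and real clients would also want retries, backoff and
per-request failover):

import random

import requests


def first_healthy_endpoint(endpoints, timeout=2):
    """Return the first configured endpoint that answers, trying them in random order."""
    for url in random.sample(endpoints, k=len(endpoints)):
        try:
            requests.get(url, timeout=timeout)
            return url
        except requests.RequestException:
            continue  # endpoint unreachable, try the next one
    raise RuntimeError("no API endpoint reachable")


# e.g. first_healthy_endpoint(["http://glance1:9292", "http://glance2:9292"])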

-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Nikhil Komawar
I agree.

I think the solution proposed earlier in this thread, defaulting to the
service catalog while optionally allowing ops to choose 'the list of
glance-apis to send data to', would make everyone's life easier.

On Mon, May 1, 2017 at 2:16 PM, Blair Bethwaite 
wrote:

> On 28 April 2017 at 21:17, Sean Dague  wrote:
> > On 04/28/2017 12:50 AM, Blair Bethwaite wrote:
> >> We at Nectar are in the same boat as Mike. Our use-case is a little
> >> bit more about geo-distributed operations though - our Cells are in
> >> different States around the country, so the local glance-apis are
> >> particularly important for caching popular images close to the
> >> nova-computes. We consider these glance-apis as part of the underlying
> >> cloud infra rather than user-facing, so I think we'd prefer not to see
> >> them in the service-catalog returned to users either... is there going
> >> to be a (standard) way to hide them?
> >
> > In a situation like this, where Cells are geographically bounded, is
> > there also a Region for that Cell/Glance?
>
> Hi Sean. Nope, just the one global region and set of user-facing APIs.
> Those other glance-apis are internal architectural details and should
> be hidden from the public catalog so as not to confuse users and/or
> over-expose information.
>
> Cheers,
>
> --
> Cheers,
> ~Blairo
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Blair Bethwaite
On 28 April 2017 at 21:17, Sean Dague  wrote:
> On 04/28/2017 12:50 AM, Blair Bethwaite wrote:
>> We at Nectar are in the same boat as Mike. Our use-case is a little
>> bit more about geo-distributed operations though - our Cells are in
>> different States around the country, so the local glance-apis are
>> particularly important for caching popular images close to the
>> nova-computes. We consider these glance-apis as part of the underlying
>> cloud infra rather than user-facing, so I think we'd prefer not to see
>> them in the service-catalog returned to users either... is there going
>> to be a (standard) way to hide them?
>
> In a situation like this, where Cells are geographically bounded, is
> there also a Region for that Cell/Glance?

Hi Sean. Nope, just the one global region and set of user-facing APIs.
Those other glance-apis are internal architectural details and should
be hidden from the public catalog so as not to confuse users and/or
over-expose information.

Cheers,

-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [newton] [keystone] [nova] [novaclient] [shibboleth] [v3token] [ecp] nova boot fails for federated users

2017-05-01 Thread Evan Bollig PhD
Trying to figure out if this is a bug in ECP support within
novaclient, or if I am misconfiguring something. Any feedback helps!

We have keystone configured to use a separate Shibboleth server for
auth (with an ECP endpoint). Federated users with the _member_ role on
a project can boot VMs using "openstack server create", but attempts
to use "nova boot" (novaclient) are blocked by this error:

 $ nova list
ERROR (AttributeError): 'Namespace' object has no attribute 'os_user_id'

To auth, we have users generate a token with unscoped saml:

# Request an unscoped SAML token through the Shibboleth ECP endpoint.
export OS_AUTH_TYPE=v3unscopedsaml
unset OS_AUTH_STRATEGY
export OS_IDENTITY_PROVIDER=testshib
export OS_PROTOCOL=saml2
export OS_IDENTITY_PROVIDER_URL=https://shibboleth-server/ECP
unset OS_TOKEN
export OS_TOKEN=$( openstack token issue -c id -f value --debug )
unset OS_PASSWORD
if [ -z "$OS_TOKEN" ]; then
  echo -e "\nERROR: Bad authentication"
  unset OS_TOKEN
else
  echo -e "\nAuthenticated."
fi
# Switch to token-based auth for subsequent client calls.
unset OS_USER_DOMAIN_NAME
export OS_AUTH_TYPE=v3token
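
If it helps to separate a novaclient argument-parsing problem from an auth
problem, the same scoped-token flow can be exercised via the Python APIs
directly; the sketch below assumes OS_AUTH_URL is already set in the
environment, and the project-scoping variable names are placeholders:

import os

from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client as nova_client

# Reuse the token obtained above; scope it to a project explicitly.
auth = v3.Token(
    auth_url=os.environ["OS_AUTH_URL"],
    token=os.environ["OS_TOKEN"],
    project_name=os.environ.get("OS_PROJECT_NAME", "my-project"),
    project_domain_name=os.environ.get("OS_PROJECT_DOMAIN_NAME", "Default"),
)
sess = session.Session(auth=auth)
nova = nova_client.Client("2", session=sess)
print(nova.servers.list())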

Cheers,
-E


--
Evan F. Bollig, PhD
Scientific Computing Consultant, Application Developer | Scientific
Computing Solutions (SCS)
Minnesota Supercomputing Institute | msi.umn.edu
University of Minnesota | umn.edu
boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Sam Morrison

> On 1 May 2017, at 4:24 pm, Sean McGinnis  wrote:
> 
> On Mon, May 01, 2017 at 10:17:43AM -0400, Matthew Treinish wrote:
>>> 
>> 
>> I thought it was just nova too, but it turns out cinder has the same exact
>> option as nova: (I hit this in my devstack patch trying to get glance 
>> deployed
>> as a wsgi app)
>> 
>> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>> 
>> Although from what I can tell you don't have to set it and it will fallback 
>> to
>> using the catalog, assuming you configured the catalog info for cinder:
>> 
>> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>> 
>> 
>> -Matt Treinish
>> 
> 
> FWIW, that came with the original fork out of Nova. I do not have any real
> world data on whether that is used or not.

Yes this is used in cinder.

A lot of the projects let you set the endpoints for them to use. This is extremely 
useful in a large production OpenStack install where you want to control the 
traffic.

I can understand using the catalog in certain situations and feel it’s OK for 
that to be the default, but please don’t prevent operators from configuring it 
differently.

Glance is the big one, as you want to control the data flow efficiently, but any 
service-to-service configuration should ideally be manually configurable.

Cheers,
Sam


> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] User Committee IRC Meeting - Monday May 1st

2017-05-01 Thread Edgar Magana
Dear UC Community,

This is a kind reminder that we are having our UC IRC meeting today at 1900 UTC 
in (freenode) #openstack-meeting

Agenda:
https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee

Thanks,

Edgar Magana
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Eric Fried
Matt-

Yeah, clearly other projects have the same issue this blueprint is
trying to solve in nova.  I think the idea is that, once the
infrastructure is in place and nova has demonstrated the concept, other
projects can climb aboard.

It's conceivable that the new get_service_url() method could be
moved to a more common lib (ksa or os-client-config perhaps) in the
future to facilitate this.

Eric (efried)

On 05/01/2017 09:17 AM, Matthew Treinish wrote:
> On Mon, May 01, 2017 at 05:00:17AM -0700, Flavio Percoco wrote:
>> On 28/04/17 11:19 -0500, Eric Fried wrote:
>>> If it's *just* glance we're making an exception for, I prefer #1 (don't
>>> deprecate/remove [glance]api_servers).  It's way less code &
>>> infrastructure, and it discourages others from jumping on the
>>> multiple-endpoints bandwagon.  If we provide endpoint_override_list
>>> (handwave), people will think it's okay to use it.
>>>
>>> Anyone aware of any other services that use multiple endpoints?
>> Probably a bit late but yeah, I think this makes sense. I'm not aware of 
>> other
>> projects that have list of api_servers.
> I thought it was just nova too, but it turns out cinder has the same exact
> option as nova: (I hit this in my devstack patch trying to get glance deployed
> as a wsgi app)
>
> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>
> Although from what I can tell you don't have to set it and it will fallback to
> using the catalog, assuming you configured the catalog info for cinder:
>
> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>
>
> -Matt Treinish
>
>
>>> On 04/28/2017 10:46 AM, Mike Dorman wrote:
 Maybe we are talking about two different things here?  I’m a bit confused.

 Our Glance config in nova.conf on HV’s looks like this:

 [glance]
 api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
 glance_api_insecure=True
 glance_num_retries=4
 glance_protocol=http
>>
>> FWIW, this feature is being used as intended. I'm sure there are ways to 
>> achieve
>> this using external tools like haproxy/nginx but that adds an extra burden to
>> OPs that is probably not necessary since this functionality is already there.
>>
>> Flavio
>>
 So we do provide the full URLs, and there is SSL support.  Right?  I am 
 fairly certain we tested this to ensure that if one URL fails, nova goes 
 on to retry the next one.  That failure does not get bubbled up to the 
 user (which is ultimately the goal.)

 I don’t disagree with you that the client side choose-a-server-at-random 
 is not a great load balancer.  (But isn’t this roughly the same thing that 
 oslo-messaging does when we give it a list of RMQ servers?)  For us it’s 
 more about the failure handling if one is down than it is about actually 
 equally distributing the load.

 In my mind options One and Two are the same, since today we are already 
 providing full URLs and not only server names.  At the end of the day, I 
 don’t feel like there is a compelling argument here to remove this 
 functionality (that people are actively making use of.)

 To be clear, I, and I think others, are fine with nova by default getting 
 the Glance endpoint from Keystone.  And that in Keystone there should 
 exist only one Glance endpoint.  What I’d like to see remain is the 
 ability to override that for nova-compute and to target more than one 
 Glance URL for purposes of fail over.

 Thanks,
 Mike




 On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:

 Thank you both for your feedback - that's really helpful.

 Let me say a few more words about what we're trying to accomplish here
 overall so that maybe we can figure out what the right way forward is.
 (it may be keeping the glance api servers setting, but let me at least
 make the case real quick)

  From a 10,000 foot view, the thing we're trying to do is to get nova's
 consumption of all of the OpenStack services it uses to be less 
 special.

 The clouds have catalogs which list information about the services -
 public, admin and internal endpoints and whatnot - and then we're 
 asking
 admins to not only register that information with the catalog, but to
 also put it into the nova.conf. That means that any updating of that
 info needs to be an API call to keystone and also a change to 
 nova.conf.
 If we, on the other hand, use the catalog, then nova can pick up 
 changes
 in real time as they're rolled out to the cloud - and there is 
 hopefully
 a sane set of defaults we could choose (based on operator feedback like
 what you've given) so that in mos

Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Sean McGinnis
On Mon, May 01, 2017 at 10:17:43AM -0400, Matthew Treinish wrote:
> > 
> 
> I thought it was just nova too, but it turns out cinder has the same exact
> option as nova: (I hit this in my devstack patch trying to get glance deployed
> as a wsgi app)
> 
> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
> 
> Although from what I can tell you don't have to set it and it will fallback to
> using the catalog, assuming you configured the catalog info for cinder:
> 
> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
> 
> 
> -Matt Treinish
> 

FWIW, that came with the original fork out of Nova. I do not have any real
world data on whether that is used or not.


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Matthew Treinish
On Mon, May 01, 2017 at 05:00:17AM -0700, Flavio Percoco wrote:
> On 28/04/17 11:19 -0500, Eric Fried wrote:
> > If it's *just* glance we're making an exception for, I prefer #1 (don't
> > deprecate/remove [glance]api_servers).  It's way less code &
> > infrastructure, and it discourages others from jumping on the
> > multiple-endpoints bandwagon.  If we provide endpoint_override_list
> > (handwave), people will think it's okay to use it.
> > 
> > Anyone aware of any other services that use multiple endpoints?
> 
> Probably a bit late but yeah, I think this makes sense. I'm not aware of other
> projects that have list of api_servers.

I thought it was just nova too, but it turns out cinder has the same exact
option as nova: (I hit this in my devstack patch trying to get glance deployed
as a wsgi app)

https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55

Although from what I can tell you don't have to set it and it will fallback to
using the catalog, assuming you configured the catalog info for cinder:

https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129


-Matt Treinish


> 
> > On 04/28/2017 10:46 AM, Mike Dorman wrote:
> > > Maybe we are talking about two different things here?  I’m a bit confused.
> > > 
> > > Our Glance config in nova.conf on HV’s looks like this:
> > > 
> > > [glance]
> > > api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
> > > glance_api_insecure=True
> > > glance_num_retries=4
> > > glance_protocol=http
> 
> 
> FWIW, this feature is being used as intended. I'm sure there are ways to 
> achieve
> this using external tools like haproxy/nginx but that adds an extra burden to
> OPs that is probably not necessary since this functionality is already there.
> 
> Flavio
> 
> > > So we do provide the full URLs, and there is SSL support.  Right?  I am 
> > > fairly certain we tested this to ensure that if one URL fails, nova goes 
> > > on to retry the next one.  That failure does not get bubbled up to the 
> > > user (which is ultimately the goal.)
> > > 
> > > I don’t disagree with you that the client side choose-a-server-at-random 
> > > is not a great load balancer.  (But isn’t this roughly the same thing 
> > > that oslo-messaging does when we give it a list of RMQ servers?)  For us 
> > > it’s more about the failure handling if one is down than it is about 
> > > actually equally distributing the load.
> > > 
> > > In my mind options One and Two are the same, since today we are already 
> > > providing full URLs and not only server names.  At the end of the day, I 
> > > don’t feel like there is a compelling argument here to remove this 
> > > functionality (that people are actively making use of.)
> > > 
> > > To be clear, I, and I think others, are fine with nova by default getting 
> > > the Glance endpoint from Keystone.  And that in Keystone there should 
> > > exist only one Glance endpoint.  What I’d like to see remain is the 
> > > ability to override that for nova-compute and to target more than one 
> > > Glance URL for purposes of fail over.
> > > 
> > > Thanks,
> > > Mike
> > > 
> > > 
> > > 
> > > 
> > > On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:
> > > 
> > > Thank you both for your feedback - that's really helpful.
> > > 
> > > Let me say a few more words about what we're trying to accomplish here
> > > overall so that maybe we can figure out what the right way forward is.
> > > (it may be keeping the glance api servers setting, but let me at least
> > > make the case real quick)
> > > 
> > >  From a 10,000 foot view, the thing we're trying to do is to get 
> > > nova's
> > > consumption of all of the OpenStack services it uses to be less 
> > > special.
> > > 
> > > The clouds have catalogs which list information about the services -
> > > public, admin and internal endpoints and whatnot - and then we're 
> > > asking
> > > admins to not only register that information with the catalog, but to
> > > also put it into the nova.conf. That means that any updating of that
> > > info needs to be an API call to keystone and also a change to 
> > > nova.conf.
> > > If we, on the other hand, use the catalog, then nova can pick up 
> > > changes
> > > in real time as they're rolled out to the cloud - and there is 
> > > hopefully
> > > a sane set of defaults we could choose (based on operator feedback 
> > > like
> > > what you've given) so that in most cases you don't have to tell nova
> > > where to find glance _at_all_ becuase the cloud already knows where it
> > > is. (nova would know to look in the catalog for the interal interface 
> > > of
> > > the image service - for instance - there's no need to ask an operator 
> > > to
> > > add to the config "what is the service_type of the image service we
> > > should talk to" :) )
> > 

Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Flavio Percoco

On 28/04/17 11:19 -0500, Eric Fried wrote:

If it's *just* glance we're making an exception for, I prefer #1 (don't
deprecate/remove [glance]api_servers).  It's way less code &
infrastructure, and it discourages others from jumping on the
multiple-endpoints bandwagon.  If we provide endpoint_override_list
(handwave), people will think it's okay to use it.

Anyone aware of any other services that use multiple endpoints?


Probably a bit late but yeah, I think this makes sense. I'm not aware of other
projects that have list of api_servers.


On 04/28/2017 10:46 AM, Mike Dorman wrote:

Maybe we are talking about two different things here?  I’m a bit confused.

Our Glance config in nova.conf on HV’s looks like this:

[glance]
api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
glance_api_insecure=True
glance_num_retries=4
glance_protocol=http



FWIW, this feature is being used as intended. I'm sure there are ways to achieve
this using external tools like haproxy/nginx but that adds an extra burden to
OPs that is probably not necessary since this functionality is already there.

Flavio


So we do provide the full URLs, and there is SSL support.  Right?  I am fairly 
certain we tested this to ensure that if one URL fails, nova goes on to retry 
the next one.  That failure does not get bubbled up to the user (which is 
ultimately the goal.)

I don’t disagree with you that the client side choose-a-server-at-random is not 
a great load balancer.  (But isn’t this roughly the same thing that 
oslo-messaging does when we give it a list of RMQ servers?)  For us it’s more 
about the failure handling if one is down than it is about actually equally 
distributing the load.

In my mind options One and Two are the same, since today we are already 
providing full URLs and not only server names.  At the end of the day, I don’t 
feel like there is a compelling argument here to remove this functionality 
(that people are actively making use of.)

To be clear, I, and I think others, are fine with nova by default getting the 
Glance endpoint from Keystone.  And that in Keystone there should exist only 
one Glance endpoint.  What I’d like to see remain is the ability to override 
that for nova-compute and to target more than one Glance URL for purposes of 
fail over.

Thanks,
Mike




On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:

Thank you both for your feedback - that's really helpful.

Let me say a few more words about what we're trying to accomplish here
overall so that maybe we can figure out what the right way forward is.
(it may be keeping the glance api servers setting, but let me at least
make the case real quick)

 From a 10,000 foot view, the thing we're trying to do is to get nova's
consumption of all of the OpenStack services it uses to be less special.

The clouds have catalogs which list information about the services -
public, admin and internal endpoints and whatnot - and then we're asking
admins to not only register that information with the catalog, but to
also put it into the nova.conf. That means that any updating of that
info needs to be an API call to keystone and also a change to nova.conf.
If we, on the other hand, use the catalog, then nova can pick up changes
in real time as they're rolled out to the cloud - and there is hopefully
a sane set of defaults we could choose (based on operator feedback like
what you've given) so that in most cases you don't have to tell nova
where to find glance _at_all_ becuase the cloud already knows where it
is. (nova would know to look in the catalog for the interal interface of
the image service - for instance - there's no need to ask an operator to
add to the config "what is the service_type of the image service we
should talk to" :) )

Now - glance, and the thing you like that we don't - is especially hairy
because of the api_servers list. The list, as you know, is just a list
of servers, not even of URLs. This  means it's not possible to configure
nova to talk to glance over SSL (which I know you said works for you,
but we'd like for people to be able to choose to SSL all their things)
We could add that, but it would be an additional pile of special config.
Because of all of that, we also have to attempt to make working URLs
from what is usually a list of IP addresses. This is also clunky and
prone to failure.

The implementation on the underside of the api_servers code is the
world's dumbest load balancer. It picks a server from the  list at
random and uses it. There is no facility for dealing with a server in
the list that stops working or for allowing rolling upgrades like there
would with a real load-balancer across the set. If one of the API
servers goes away, we have no context to know that, so just some of your
internal calls to glance fail.
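
In miniature, the behaviour described above amounts to something like this (a
deliberate simplification for illustration, not the actual nova code):

import random

# [glance]api_servers boils down to a per-request random pick with no health
# checking: if one entry is dead, a fraction of calls simply fail.
api_servers = ["http://glance1:9292", "http://glance2:9292"]


def pick_api_server():
    return random.choice(api_servers)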

Those are the issues - basicall