Re: [openstack-dev] [Ironic] [TripleO] virtual-ironic job now voting!

2014-05-25 Thread Dmitry Tantsur
Great news! Even while non-voting, it has already helped me spot a subtle
error in a patch two or three times.

On Fri, 2014-05-23 at 18:56 -0700, Devananda van der Veen wrote:
> Just a quick heads up to everyone -- the tempest-dsvm-virtual-ironic
> job is now fully voting in both check and gate queues for Ironic. It's
> also now symmetrically voting on diskimage-builder, since that tool is
> responsible for building the deploy ramdisk used by this test.
> 
> 
> Background: We discussed this prior to the summit, and agreed to
> continue watching the stability of the job through the summit week.
> It's been reliable for over a month now, and I've seen it catch
> several real issues, both in Ironic and in other projects, and all the
> core reviewers I spoke with lately have been eager to enable voting on this
> test. So, it's done!
> 
> 
> Cheers,
> Devananda
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-25 Thread balaj...@freescale.com
Hi Kenichi and Isaku,

Thanks for bringing this to discussion.

IMHO, the ETSI NFV drafts are still evolving, and we should keep track of
them so that the NFV and Service VM teams stay aligned with these drafts for
NFV deployments.

Also, the ETSI NFV drafts define a robust architecture, which we should fold
into the Service VM architecture to keep it aligned with the ETSI NFV drafts
and discussions.

Any comments/suggestions appreciated.

Regards,
Balaji.P 

> -Original Message-
> From: Isaku Yamahata [mailto:isaku.yamah...@gmail.com]
> Sent: Monday, May 26, 2014 10:17 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: isaku.yamah...@gmail.com
> Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit
> 
> On Fri, May 23, 2014 at 04:13:57PM +0900, "Ogaki, Kenichi"
>  wrote:
> 
> > Hi All,
> 
> Hi.
> 
> > I'm newbie to Openstack, so I want to clarify how OpenStack can
> > implement ETSI NFV Architecture.
> >
> > The concept of Advanced service looks like Network Service in ETSI NFV
> > Architecture as shown in Figure 3 below:
> > http://docbox.etsi.org/ISG/NFV/Open/Published/gs_NFV002v010101p.pdf
> >
> > As the functional role, VNF (Virtualized Network Function) may be
> > corespondent to Logical Service Instance.
> > However, in ETSI NFV Architecture, VNF is composed of VNFC (VNF
> > Component) or VDU (Virtual Deployment Unit) and each VNFC or VDU
> > instance is deployed as a VM.
> > These VNFC or VDU instances are connected by logical or physical
> > network links in a manner of a kind of service chaining, then a VNF
> > instance is created.
> > In the same manner, Network Service is created from one or multiple
> VNF(s).
> 
> Hmm, we don't use the same terminology. Is there any public documentation
> for this terminology? The public documents I can find are too high-level to
> understand the requirements.
> 
> The first target of the servicevm project is to address the case of a
> single service in a single VM (VNFC in NFV terminology?).
> Then evolve the implementation toward more complex cases through experimentation.
> 
> 
> > My question is:
> > Is it possible that the current OpenStack components realize an
> > advanced service in the above manner?
> > Meaning, an advanced service is composed of hierarchical multiple VMs.
> 
> I suspect no one knows. This is why we unite to make efforts for NFV.
> 
> 
> thanks,
> Isaku Yamahata
> 
> 
> > All the best,
> > Kenichi
> >
> >
> >
> > > From: Dmitry [mailto:mey...@gmail.com]
> > > Sent: Thursday, May 22, 2014 5:40 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit
> > >
> > > Hi Isaku,
> > > Thank you for the updated link. I'n not sure where from I get the
> > > previous one, probably from the direct Google search.
> > > If we're talking about NFV Mano, it's very important to keep NFVO
> > > and VNFM as a separate services, where VNFM might be (and probably
> > > will be) supplied jointly with a vendor's specific VNF.
> > > In addition, it's possible that VNFC components will not be able to
> > > be placed on the same machine - anti-affinity rules.
> > > Talking in NFV terminology, we need to have a new OpenStack Services
> > > which (from what I've understood from the document you sent) is
> > > called Adv Service and is responsible to be:
> > > 1) NFVO - which is using Nova to provision new Service VMs and
> > > Neutron to establish connectivity and service chaining
> > > 2) Service Catalog - to accommodate multiple VNF services. Question:
> > > the same problem exists with Trove which need a catalog for multiple
> > > concrete DB implementations. Do you know which solution they will
> take for Juno?
> > > 2) Infrastructure for VNFM plugins - which will be called by NFVO to
> > > decide where Service VM should be placed and which LSI should be
> > > provisioned on these Service VMs.
> > >
> > > This flow is more or less what was stated by NFV committee.
> > >
> > > Please let me know what you think about this and how far is that
> > > from what you planed for Service VM.
> > > In addition, I would happy to know if Service VM will be incubated
> > > for Juno release.
> > >
> > > Thank you very much,
> > > Dmitry
> > >
> > >
> > >
> > > On Thu, May 22, 2014 at 9:28 AM, Isaku Yamahata
> > > 
> > > wrote:
> > >
> > >
> > > On Wed, May 21, 2014 at 10:54:03AM +0300,
> > > Dmitry  wrote:
> > >
> > > > HI,
> > >
> > > Hi.
> > >
> > >
> > > > I would happy to get explanation of what is the difference
> > > between Adv
> > >
> > > > Service Management<
> > > https://docs.google.com/file/d/0Bz-bErEEHJxLTGY4NUVvTzRDaEk/edit>from
> > > > the Service VM
> > >
> > > The above document is stale.
> > > the right one is
> > >
> > > https://docs.google.com/document/d/1pwFVV8UavvQkBz92bT-BweBAiIZoMJP0
> > > NPAO4-60XFY/edit?pli=1
> > >
> > >
> https://docs.google.com/document/d/1ZW

Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-25 Thread Ogaki, Kenichi
Hi Isaku,

Thank you for your reply.

2014-05-26 13:47 GMT+09:00 Isaku Yamahata :

> On Fri, May 23, 2014 at 04:13:57PM +0900,
> "Ogaki, Kenichi"  wrote:
>
> > Hi All,
>
> Hi.
>
> > I’m newbie to Openstack, so I want to clarify how OpenStack can implement
> > ETSI NFV Architecture.
> >
> > The concept of Advanced service looks like Network Service in ETSI NFV
> > Architecture as shown in Figure 3 below:
> > http://docbox.etsi.org/ISG/NFV/Open/Published/gs_NFV002v010101p.pdf
> >
> > As the functional role, VNF (Virtualized Network Function) may be
> > corespondent to Logical Service Instance.
> > However, in ETSI NFV Architecture, VNF is composed of VNFC (VNF
> Component)
> > or VDU (Virtual Deployment Unit) and each VNFC or VDU instance is
> deployed
> > as a VM.
> > These VNFC or VDU instances are connected by logical or physical network
> > links in a manner of a kind of service chaining, then a VNF instance is
> > created.
> > In the same manner, Network Service is created from one or multiple
> VNF(s).
>
> Hmm, we don't use the same terminology. Is there any public documentation
> for this terminology? The public documents I can find are too high-level to
> understand the requirements.
>

Most of the documents haven't been published yet, but if your organization is
a member or participant of the ETSI NFV ISG, you can get the final or stable
working group drafts below.

https://portal.etsi.org/tb.aspx?tbid=789&SubTB=789,795,796,801,800,798,799,797,802#lt-50612-drafts

DGS/NFV-MAN001 and DGS/NFV-SWA001 should help you understand the
architecture.



>
> The first target of the servicevm project is to address the case of a
> single service in a single VM (VNFC in NFV terminology?).
> Then evolve the implementation toward more complex cases through experimentation.
>
>
I understand the current servicevm project is targeting an advanced service
composed of a single VM.

Thanks,
Kenichi


> > My question is:
> > Is it possible that the current OpenStack components realize an advanced
> > service in the above manner?
> > Meaning, an advanced service is composed of hierarchical multiple VMs.
>
> I suspect no one knows. This is why we unite to make efforts for NFV.
>
>
> thanks,
> Isaku Yamahata
>
>
> > All the best,
> > Kenichi
> >
> >
> >
> > > From: Dmitry [mailto:mey...@gmail.com]
> > > Sent: Thursday, May 22, 2014 5:40 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit
> > >
> > > Hi Isaku,
> > > Thank you for the updated link. I'n not sure where from I get the
> previous
> > > one, probably from the direct Google search.
> > > If we're talking about NFV Mano, it's very important to keep NFVO and
> VNFM
> > > as a separate services, where VNFM might be (and probably will be)
> supplied
> > > jointly with a vendor's specific VNF.
> > > In addition, it's possible that VNFC components will not be able to be
> > > placed on the same machine - anti-affinity rules.
> > > Talking in NFV terminology, we need to have a new OpenStack Services
> which
> > > (from what I've understood from the document you sent) is called Adv
> > > Service and is responsible to be:
> > > 1) NFVO - which is using Nova to provision new Service VMs and Neutron
> to
> > > establish connectivity and service chaining
> > > 2) Service Catalog - to accommodate multiple VNF services. Question:
> the
> > > same problem exists with Trove which need a catalog for multiple
> concrete
> > > DB implementations. Do you know which solution they will take for Juno?
> > > 2) Infrastructure for VNFM plugins - which will be called by NFVO to
> > > decide where Service VM should be placed and which LSI should be
> > > provisioned on these Service VMs.
> > >
> > > This flow is more or less what was stated by NFV committee.
> > >
> > > Please let me know what you think about this and how far is that from
> what
> > > you planed for Service VM.
> > > In addition, I would happy to know if Service VM will be incubated for
> > > Juno release.
> > >
> > > Thank you very much,
> > > Dmitry
> > >
> > >
> > >
> > > On Thu, May 22, 2014 at 9:28 AM, Isaku Yamahata <
> isaku.yamah...@gmail.com>
> > > wrote:
> > >
> > >
> > > On Wed, May 21, 2014 at 10:54:03AM +0300,
> > > Dmitry  wrote:
> > >
> > > > HI,
> > >
> > > Hi.
> > >
> > >
> > > > I would happy to get explanation of what is the difference
> > > between Adv
> > >
> > > > Service Management<
> > > https://docs.google.com/file/d/0Bz-bErEEHJxLTGY4NUVvTzRDaEk/edit>from
> > > > the Service VM
> > >
> > > The above document is stale.
> > > the right one is
> > >
> > >
> https://docs.google.com/document/d/1pwFVV8UavvQkBz92bT-BweBAiIZoMJP0NPAO4-60XFY/edit?pli=1
> > >
> > >
> https://docs.google.com/document/d/1ZWDDTjwhIUedyipkDztM0_nBYgfCEP9Q77hhn1ZduCA/edit?pli=1#
> > > https://wiki.openstack.org/wiki/ServiceVM
> > >
> > > Anyway how did you find the link? I'd 

Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-25 Thread Gregory Haynes
Excerpts from Robert Collins's message of 2014-05-25 23:12:26 +:
> On 23 May 2014 04:57, Gregory Haynes  wrote:
> >>
> >> Eventually we may need to scale traffic beyond one HAProxy, at which
> >> point we'll need to bring something altogether more sophisticated in -
> >> lets design that when we need it.
> >> Sooner than that we're likely going to need to scale load beyond one
> >> control plane server at which point the HAProxy VIP either needs to be
> >> distributed (so active-active load receiving) or we need to go
> >> user -> haproxy (VIP) -> SSL endpoint (on any control plane node) ->
> >> localhost bound service.
> >
> > Putting SSL termination behind HAProxy seems odd. Typically your load
> > balancer wants to be able to grok the traffic sent though it which is
> 
> Not really :). There is a sophistication curve - yes, but generally
> load balancers don't need to understand the traffic *except* when the
> application servers they are sending to have locality of reference
> performance benefits from clustered requests. (e.g. all requests from
> user A on server Z will hit a local cache of user metadata as long as
> they are within 5 seconds). Other than that, load balancers care about
> modelling server load to decide where to send traffic).
> 
> SSL is a particularly interesting thing because you know that all
> requests from that connection are from one user - its end to end
> whereas HTTP can be multiplexed by intermediaries. This means that
> while you don't know that 'all user A's requests' are on the one
> socket, you do know that all requests on that socket are from user A.
> 
> So for our stock - and thus probably most common - API clients we have
> the following characteristics:
>  - single threaded clients
>  - one socket (rather than N)
> 
> Combine these with SSL and clearly whatever efficiency we *can* get
> from locality of reference, we will get just by taking SSL and
> backending it to one backend. That backend might itself be haproxy
> managing local load across local processes but there is no reason to
> expose the protocol earlier.

This is a good point and I agree that performance-wise there is not an
issue here.

> 
> > not possible in this setup. For an environment where sending unencrypted
> > traffic across the internal work is not allowed I agree with Mark's
> > suggestion of re-encrypting for internal traffic, but IMO it should
> > still pass through the load balancer unencrypted. Basically:
> > User -> External SSL Terminate -> LB -> SSL encrypt -> control plane
> 
> I think this is wasted CPU cycles given the characteristics of the
> APIs we're balancing. We have four protocols that need VIP usage AIUI:
> 

One other, separate issue with letting external SSL pass through to your
backends has to do with security: your app servers (or in our case
control nodes) generally have a larger attack surface and are more
distributed than your load balancers (or an SSL endpoint placed in front
of them). Additionally, compromise of an external-facing SSL cert is far
worse than an internal-only SSL cert, which could be made backend-server
specific.

I agree that re-encryption is not useful with our current setup, though:
it would occur on a control node, which removes the security benefit (I
still wanted to make sure this point is made :)).

TL;DR - +1 on the 'User -> haproxy -> ssl endpoint -> app' design.

Thanks,
Greg

-- 
Gregory Haynes
g...@greghaynes.net



Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-25 Thread Isaku Yamahata
On Thu, May 22, 2014 at 11:40:02AM +0300,
Dmitry  wrote:

> Hi Isaku,
> Thank you for the updated link. I'n not sure where from I get the previous
> one, probably from the direct Google search.

> If we're talking about NFV Mano, it's very important to keep NFVO and VNFM
> as a separate services, where VNFM might be (and probably will be) supplied
> jointly with a vendor's specific VNF.

Can you please point me to the public documentation that describes this
terminology and architecture?
The pptx slides you pointed to below describe only the overview.
The public documents I can find (ETSI GS NFV 001, 002, 003, 004, NFV-PER 002,
and the white paper) describe them only at a very high level.



> In addition, it's possible that VNFC components will not be able to be
> placed on the same machine - anti-affinity rules.
> Talking in NFV terminology, we need to have a new OpenStack Services which
> (from what I've understood from the document you sent) is called Adv
> Service and is responsible to be:

Probably it will correspond to the vm/service scheduler.
Eventually it would be integrated into Gantt.


> 1) NFVO - which is using Nova to provision new Service VMs and Neutron to
> establish connectivity and service chaining
> 2) Service Catalog - to accommodate multiple VNF services. Question: the
> same problem exists with Trove which need a catalog for multiple concrete
> DB implementations. Do you know which solution they will take for Juno?

Regarding Trove, I don't know.
Any Trove developer, can you comment on it?


> 2) Infrastructure for VNFM plugins - which will be called by NFVO to decide
> where Service VM should be placed and which LSI should be provisioned on
> these Service VMs.

I don't know what "VNFM plugins" means. Can you please elaborate on it?


> This flow is more or less what was stated by NFV committee.

Where are the publicly available documents that describe it?



> Please let me know what you think about this and how far is that from what
> you planed for Service VM.

The first thing to do is to clarify the requirements of NFV and to unify
the terminology (or produce something like a terminology conversion matrix),
and then analyze the gap.

The first target of servicevm is to address the case of a single function
in a single VM (VNFC in NFV terminology?).
Then evolve the implementation toward more complex cases like forwarding
graphs (VNF and VNF-FG in NFV terminology?).
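As a starting point for that terminology conversion matrix, here is a toy sketch populated only with the correspondences hedged in this thread (the question marks mirror the thread's own uncertainty; this is not an agreed mapping):

```python
# Draft NFV -> servicevm terminology map, built only from the tentative
# correspondences raised in this thread. Entries marked '?' are guesses
# that still need to be confirmed against the ETSI drafts.
NFV_TO_SERVICEVM = {
    "VNFC": "single service in a single VM (?)",
    "VNF / VNF-FG": "forwarding graph of service VMs (?)",
    "NFVO + VNFM (partial)": "servicevm project",
}

def translate(nfv_term):
    """Look up the tentative servicevm-side equivalent of an NFV term."""
    return NFV_TO_SERVICEVM.get(nfv_term, "no agreed mapping yet")
```

Filling in the rest of the table against DGS/NFV-MAN001 and DGS/NFV-SWA001 would be exactly the gap analysis described above.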


> In addition, I would happy to know if Service VM will be incubated for Juno
> release.

Yeah, I'm going to create the first repo on stackforge in one or two weeks.

thanks,
Isaku Yamahata


> Thank you very much,
> Dmitry
> 
> 
> 
> On Thu, May 22, 2014 at 9:28 AM, Isaku Yamahata 
> wrote:
> 
> > On Wed, May 21, 2014 at 10:54:03AM +0300,
> > Dmitry  wrote:
> >
> > > HI,
> >
> > Hi.
> >
> > > I would happy to get explanation of what is the difference between Adv
> > > Service Management<
> > https://docs.google.com/file/d/0Bz-bErEEHJxLTGY4NUVvTzRDaEk/edit>from
> > > the Service VM
> >
> > The above document is stale.
> > the right one is
> >
> > https://docs.google.com/document/d/1pwFVV8UavvQkBz92bT-BweBAiIZoMJP0NPAO4-60XFY/edit?pli=1
> >
> > https://docs.google.com/document/d/1ZWDDTjwhIUedyipkDztM0_nBYgfCEP9Q77hhn1ZduCA/edit?pli=1#
> > https://wiki.openstack.org/wiki/ServiceVM
> >
> > Anyway how did you find the link? I'd like to remove stale links.
> >
> >
> > > and NFVO
> > > orchestration<
> > http://www.ietf.org/proceedings/88/slides/slides-88-opsawg-6.pdf>from
> > > NFV Mano.
> > > The most interesting part if service provider management as part of the
> > > service catalog.
> >
> > servicevm corresponds to (a part of) NFV orchestrator and VNF manager.
> > Especially life cycle management of VMs/services. configuration of
> > services.
> > I think the above document and the NFV documents only give high level
> > statement of components, right?
> >
> > thanks,
> >
> > >
> > > Thanks,
> > > Dmitry
> > >
> > >
> > > On Wed, May 21, 2014 at 9:01 AM, Isaku Yamahata <
> > isaku.yamah...@gmail.com>wrote:
> > >
> > > > Hi, I will also attend the NFV IRC meeting.
> > > >
> > > > thanks,
> > > > Isaku Yamahata
> > > >
> > > > On Tue, May 20, 2014 at 01:23:22PM -0700,
> > > > Stephen Wong  wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I am part of the ServiceVM team and I will attend the NFV IRC
> > > > meetings.
> > > > >
> > > > > Thanks,
> > > > > - Stephen
> > > > >
> > > > >
> > > > > On Tue, May 20, 2014 at 8:59 AM, Chris Wright 
> > > > wrote:
> > > > >
> > > > > > * balaj...@freescale.com (balaj...@freescale.com) wrote:
> > > > > > > > -Original Message-
> > > > > > > > From: Kyle Mestery [mailto:mest...@noironetworks.com]
> > > > > > > > Sent: Tuesday, May 20, 2014 12:19 AM
> > > > > > > > To: OpenStack Development Mailing List (not for usage
> > questions)
> > > > > > > > Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design
> > > > summit
> > > > > > > >
> > > > > > > > On Mon, May 19, 2014 at 1:44 PM, Ian Wells <
> > ijw.ubu...@cack.or

Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-25 Thread Isaku Yamahata
On Fri, May 23, 2014 at 04:13:57PM +0900,
"Ogaki, Kenichi"  wrote:

> Hi All,

Hi.

> I’m newbie to Openstack, so I want to clarify how OpenStack can implement
> ETSI NFV Architecture.
> 
> The concept of Advanced service looks like Network Service in ETSI NFV
> Architecture as shown in Figure 3 below:
> http://docbox.etsi.org/ISG/NFV/Open/Published/gs_NFV002v010101p.pdf
> 
> As the functional role, VNF (Virtualized Network Function) may be
> corespondent to Logical Service Instance.
> However, in ETSI NFV Architecture, VNF is composed of VNFC (VNF Component)
> or VDU (Virtual Deployment Unit) and each VNFC or VDU instance is deployed
> as a VM.
> These VNFC or VDU instances are connected by logical or physical network
> links in a manner of a kind of service chaining, then a VNF instance is
> created.
> In the same manner, Network Service is created from one or multiple VNF(s).

Hmm, we don't use the same terminology. Is there any public documentation for
this terminology? The public documents I can find are too high-level to
understand the requirements.

The first target of the servicevm project is to address the case of a single
service in a single VM (VNFC in NFV terminology?).
Then evolve the implementation toward more complex cases through experimentation.
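The hierarchy Kenichi describes (a Network Service composed of VNFs, each VNF composed of VNFC/VDU instances, each instance deployed as one VM) can be sketched as a toy data model. The class and instance names below are illustrative only, not ETSI or OpenStack APIs:

```python
# Toy model of the ETSI NFV composition hierarchy described above.
# Names like "vFirewall" are invented examples, not real services.
from dataclasses import dataclass


@dataclass
class VNFC:
    """One VNF Component / VDU instance, deployed as one VM."""
    vm_name: str


@dataclass
class VNF:
    """A Virtualized Network Function, composed of VNFC/VDU instances."""
    name: str
    components: list


@dataclass
class NetworkService:
    """An 'advanced service': one or more VNFs chained together."""
    name: str
    vnfs: list

    def vm_count(self):
        # Every VNFC maps to exactly one VM in this model.
        return sum(len(vnf.components) for vnf in self.vnfs)


firewall = VNF("vFirewall", [VNFC("fw-vm-0"), VNFC("fw-vm-1")])
router = VNF("vRouter", [VNFC("rt-vm-0")])
svc = NetworkService("edge-service", [firewall, router])
# One "advanced service" here is backed by three hierarchically composed VMs.
```

The single-service-in-a-single-VM case above is then just a NetworkService with one VNF containing one VNFC.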


> My question is:
> Is it possible that the current OpenStack components realize an advanced
> service in the above manner?
> Meaning, an advanced service is composed of hierarchical multiple VMs.

I suspect no one knows yet. This is why we are uniting our efforts on NFV.


thanks,
Isaku Yamahata


> All the best,
> Kenichi
> 
> 
> 
> > From: Dmitry [mailto:mey...@gmail.com]
> > Sent: Thursday, May 22, 2014 5:40 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit
> >
> > Hi Isaku,
> > Thank you for the updated link. I'n not sure where from I get the previous
> > one, probably from the direct Google search.
> > If we're talking about NFV Mano, it's very important to keep NFVO and VNFM
> > as a separate services, where VNFM might be (and probably will be) supplied
> > jointly with a vendor's specific VNF.
> > In addition, it's possible that VNFC components will not be able to be
> > placed on the same machine - anti-affinity rules.
> > Talking in NFV terminology, we need to have a new OpenStack Services which
> > (from what I've understood from the document you sent) is called Adv
> > Service and is responsible to be:
> > 1) NFVO - which is using Nova to provision new Service VMs and Neutron to
> > establish connectivity and service chaining
> > 2) Service Catalog - to accommodate multiple VNF services. Question: the
> > same problem exists with Trove which need a catalog for multiple concrete
> > DB implementations. Do you know which solution they will take for Juno?
> > 2) Infrastructure for VNFM plugins - which will be called by NFVO to
> > decide where Service VM should be placed and which LSI should be
> > provisioned on these Service VMs.
> >
> > This flow is more or less what was stated by NFV committee.
> >
> > Please let me know what you think about this and how far is that from what
> > you planed for Service VM.
> > In addition, I would happy to know if Service VM will be incubated for
> > Juno release.
> >
> > Thank you very much,
> > Dmitry
> >
> >
> >
> > On Thu, May 22, 2014 at 9:28 AM, Isaku Yamahata 
> > wrote:
> >
> >
> > On Wed, May 21, 2014 at 10:54:03AM +0300,
> > Dmitry  wrote:
> >
> > > HI,
> >
> > Hi.
> >
> >
> > > I would happy to get explanation of what is the difference
> > between Adv
> >
> > > Service Management<
> > https://docs.google.com/file/d/0Bz-bErEEHJxLTGY4NUVvTzRDaEk/edit>from
> > > the Service VM
> >
> > The above document is stale.
> > the right one is
> >
> > https://docs.google.com/document/d/1pwFVV8UavvQkBz92bT-BweBAiIZoMJP0NPAO4-60XFY/edit?pli=1
> >
> > https://docs.google.com/document/d/1ZWDDTjwhIUedyipkDztM0_nBYgfCEP9Q77hhn1ZduCA/edit?pli=1#
> > https://wiki.openstack.org/wiki/ServiceVM
> >
> > Anyway how did you find the link? I'd like to remove stale links.
> >
> >
> > > and NFVO
> > > orchestration<
> > http://www.ietf.org/proceedings/88/slides/slides-88-opsawg-6.pdf>from
> >
> > > NFV Mano.
> > > The most interesting part if service provider management as part
> > of the
> > > service catalog.
> >
> >
> > servicevm corresponds to (a part of) NFV orchestrator and VNF
> > manager.
> > Especially life cycle management of VMs/services. configuration of
> > services.
> > I think the above document and the NFV documents only give high
> > level
> > statement of components, right?
> >
> > thanks,
> >
> >
> > >
> > > Thanks,
> > > Dmitry
> > >
> > >
> > > On Wed, May 21, 2014 at 9:01 AM, Isaku Yamahata <
> > isaku.yamah...@g

[openstack-dev] [Tempest][Neutron][TripleO][Ironic] flaky CI tests - 'Unable to enable DHCP for $net-uuid' - bug 1323152

2014-05-25 Thread Robert Collins
I just filed https://bugs.launchpad.net/ironic/+bug/1323152 - TripleO
changes are failing on check-tempest-dsvm-virtual-ironic, which has
just been made voting (and I support that) - but we need to get this
fixed asap or make it non-voting again. I haven't dug deep enough to
know if it's rooted in Ironic/Tempest/devstack or is a genuine Neutron
bug (though there is some reason to think it's a Neutron bug based on
the kibana search I linked in the bug).

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [keystone] No templated service catalog for V3?

2014-05-25 Thread Kieran Spear
Great, thanks for working on it. I'll test it out asap.

Cheers,
Kieran


On 23 May 2014 23:18, Brant Knudson  wrote:
>
>
>
> On Thu, May 22, 2014 at 7:32 PM, Kieran Spear  wrote:
>>
>> Hi,
>>
>> I notice that the templated catalog doesn't support the V3 API*. This
>> is a blocker for us, particularly for Heat since it uses V3
>> internally. We could switch to the SQL backend, but I'm sure others
>> are affected by this too. Is it hard to fix?
>>
>> Cheers,
>> Kieran
>>
>>
>> * https://bugs.launchpad.net/keystone/+bug/1313458
>>
>
> I posted an initial change for it in https://review.openstack.org/#/c/70630/
> but then got distracted. I'll take another look at it today.
>
> - Brant
>
>
>



Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-25 Thread Robert Collins
On 23 May 2014 04:57, Gregory Haynes  wrote:
>>
>> Eventually we may need to scale traffic beyond one HAProxy, at which
>> point we'll need to bring something altogether more sophisticated in -
>> lets design that when we need it.
>> Sooner than that we're likely going to need to scale load beyond one
>> control plane server at which point the HAProxy VIP either needs to be
>> distributed (so active-active load receiving) or we need to go
>> user -> haproxy (VIP) -> SSL endpoint (on any control plane node) ->
>> localhost bound service.
>
> Putting SSL termination behind HAProxy seems odd. Typically your load
> balancer wants to be able to grok the traffic sent though it which is

Not really :). There is a sophistication curve - yes, but generally
load balancers don't need to understand the traffic *except* when the
application servers they are sending to have locality-of-reference
performance benefits from clustered requests (e.g. all requests from
user A on server Z will hit a local cache of user metadata as long as
they are within 5 seconds). Other than that, load balancers care about
modelling server load to decide where to send traffic.

SSL is a particularly interesting thing because you know that all
requests from that connection are from one user - it's end-to-end,
whereas HTTP can be multiplexed by intermediaries. This means that
while you don't know that 'all user A's requests' are on the one
socket, you do know that all requests on that socket are from user A.

So for our stock - and thus probably most common - API clients we have
the following characteristics:
 - single threaded clients
 - one socket (rather than N)

Combine these with SSL and clearly whatever efficiency we *can* get
from locality of reference, we will get just by taking SSL and
backending it to one backend. That backend might itself be haproxy
managing load across local processes, but there is no reason to
expose the protocol earlier.
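To make the per-connection affinity concrete: because every request on one SSL socket is from the same client, sending the whole connection to a single backend captures whatever locality of reference exists. A toy sketch (not TripleO code; the backend names are invented) of that deterministic mapping:

```python
# Toy sketch of per-connection backend affinity: hash the client
# connection's source address to pick one backend, so every request on
# that SSL socket lands in the same place. Backend names are placeholders.
import hashlib

BACKENDS = ["ssl-endpoint-1", "ssl-endpoint-2", "ssl-endpoint-3"]


def pick_backend(client_ip, client_port):
    """Deterministically map one client connection to one backend."""
    key = "{}:{}".format(client_ip, client_port).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return BACKENDS[digest % len(BACKENDS)]


# The same connection always lands on the same backend, so any
# locality-of-reference cache on that backend stays warm.
assert pick_backend("10.0.0.5", 43210) == pick_backend("10.0.0.5", 43210)
```

In haproxy terms this is roughly what TCP-mode balancing with source-based affinity buys, without the balancer ever needing to see inside the SSL stream.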

> not possible in this setup. For an environment where sending unencrypted
> traffic across the internal work is not allowed I agree with Mark's
> suggestion of re-encrypting for internal traffic, but IMO it should
> still pass through the load balancer unencrypted. Basically:
> User -> External SSL Terminate -> LB -> SSL encrypt -> control plane

I think this is wasted CPU cycles given the characteristics of the
APIs we're balancing. We have four protocols that need VIP usage AIUI:

HTTP API
HTTP Data (Swift only atm)
AMQP
MySQL

For HTTP API see my analysis above. For HTTP Data, unwrapping and
re-wrapping is expensive and must be balanced against the expected
benefits: which request characteristic would we be
pinning/balancing/biasing on for Swift?

For AMQP and MySQL we'll be in tunnel mode anyway, so there is no
alternative but SSL to the backend machine and unwrap there.

> This is a bit overkill given our current state, but I think for now its
> important we terminate external SSL earlier on: See ML thread linked
> above for reasoning.

If I read this correctly, you're arguing yourself back to the "User ->
haproxy (VIP) -> SSL endpoint (on any control plane node) -> localhost
bound service" I mentioned?

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-25 Thread Jay Pipes

On 05/24/2014 01:36 PM, Armando M. wrote:

I appreciate that there is a cost involved in relying on distributed
communication, but this must be negligible considered what needs to
happen end-to-end. If the overhead being referred here is the price to
pay for having a more dependable system (e.g. because things can be
scaled out and/or made reliable independently), then I think this is a
price worth paying.


Yes, I agree 100%.

best,
-jay



Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-25 Thread Jay Pipes

On 05/23/2014 03:31 PM, Robert Kukura wrote:

The core refactoring effort may eventually provide a nice solution, but
we can't wait for this. It seems we'll need to either use
python-neutronclient or get access to the Controller classes in the
meantime.


Using python-neutronclient will be the cleanest and most reliable 
implementation, rather than getting access to the Controller classes -- 
which may change substantially more often than the public API.
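As a sketch of the pattern being recommended here: code only against the client's public surface. The real import would be `from neutronclient.v2_0 import client`; below, a stub stands in for that client so the shape is clear without a running cloud, and `list_networks()` is the public call the real client exposes.

```python
# Sketch: depend on the public, stable client API rather than Neutron's
# internal Controller classes. A stub stands in for
# neutronclient.v2_0.client.Client here so the example is self-contained.
class StubNeutronClient(object):
    """Stand-in exposing the same public call as the real client."""

    def list_networks(self):
        # The real client returns a dict shaped like this over the REST API.
        return {"networks": [{"id": "net-1", "name": "demo"}]}


def network_names(neutron):
    # Works against anything exposing the public list_networks() call,
    # real client or stub alike; no internal classes are touched.
    return [net["name"] for net in neutron.list_networks()["networks"]]


assert network_names(StubNeutronClient()) == ["demo"]
```

Because the consumer only sees the public call, swapping the stub for the real client is a one-line change, which is exactly the insulation from Controller-class churn argued for above.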


Best,
-jay



[openstack-dev] [all] Hide CI comments in Gerrit

2014-05-25 Thread Radoslav Gerganov
Hi,

I created a small userscript that allows you to hide CI comments in Gerrit. 
That way you can read only comments written by humans and hide everything else. 
I’ve been struggling for a long time to follow discussions on changes with many 
patch sets because of the CI noise. So I came up with this userscript:

https://gist.github.com/rgerganov/35382752557cb975354a

It adds a “Toggle CI” button at the bottom of the page that hides/shows CI 
comments. Right now it is configured for Nova CIs, as I contribute mostly 
there, but you can easily make it work for other projects as well. It supports 
both the “old” and “new” screens that we have.

How to install on Chrome: open chrome://extensions and drag&drop the script 
there
How to install on Firefox: install Greasemonkey first and then open the script

Known issues:
 - you may need to reload the page to get the new button
 - I tried to add the button somewhere close to the collapse/expand links but 
it didn’t work for some reason

Hope you will find it useful. Any feedback is welcome :)

Thanks,
Rado



Re: [openstack-dev] Policy for linking bug or bp in commit message

2014-05-25 Thread Assaf Muller


- Original Message -
> Hi folks
> 
> I believe we should link a bug or blueprint for any commit, except
> automated commits by infra.

I think that stuff like refactors should be exempt, if only for the
simple reason that often there's no bug involved.
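For reference, linking is done with footer tags in the commit message. Something like the following (the subject, bug number, and blueprint name here are invented for illustration; Closes-Bug/Partial-Bug/Related-Bug and Implements are the tags the OpenStack Gerrit hooks pick up):

```
Fix restart handling in the foo agent

Longer explanation of what the change does and why it is needed.

Closes-Bug: #1234567
Implements: blueprint example-blueprint
```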

> However, I also found there is no written policy for this,
> so maybe I'm wrong here.
> 
> The reasons we need a bug or bp linked are:
> 
> (1) Triage for core reviewing
> (2) Avoid duplication of works
> (3) Release management
> 
> IMO, generally, the answer is yes.
> 
> However, what about a small 5-6 line nit change?
> Should such a patch be an exception or not?
> 
> I want to ask the community's opinion, and I'll update the Gerrit workflow
> page based on this discussion.
> 
> Best
> Nachi
> 



Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-05-25 Thread Avishay Traeger
Hello Mitsuhiro,
I'm sorry, but I remain unconvinced.  Is there customer demand for this
feature?
If you'd like, feel free to add this topic to a Cinder weekly meeting
agenda, and join the meeting so that we can have an interactive discussion.
https://wiki.openstack.org/wiki/CinderMeetings

Thanks,
Avishay


On Sat, May 24, 2014 at 12:31 AM, Mitsuhiro Tanino  wrote:

>  Hi Avishay-san,
>
>
>
> Thank you for your review and comments for my proposal. I commented
> in-line.
>
>
>
> >>So the way I see it, the value here is a generic driver that can work
> with any storage.  The downsides:
>
>
>
> A generic driver that works with any storage is one benefit.
>
> But the main benefit of the proposed driver is as follows:
>
> - Reduce the hardware-based storage workload by offloading it to
> software-based volume operations.
>
>
>
> Conventionally, operations on enterprise storage such as volume creation,
> deletion, and snapshots are only permitted to system administrators, who
> perform them after careful examination.
>
> In an OpenStack cloud environment, every user has permission to execute
> these storage operations via Cinder. As a result, storage workloads have
> been increasing and they are difficult to manage.
>
>
>
> If we have two drivers for a given storage system, we can use either one
> as the situation demands. For example:
>
> - For "Standard" type storage, use the proposed software-based LVM Cinder
> driver.
>
> - For "High performance" type storage, use the hardware-based Cinder
> driver.
>
> As a result, we can offload the workload of standard type storage from the
> physical storage to the Cinder host.
>
>
>
> >>1. The admin has to manually provision a very big volume and attach it
> to the Nova and Cinder hosts.
>
> >>  Every time a host is rebooted,
>
>
>
> I think current FC-based Cinder drivers use a SCSI scan to find newly
> created LUs:
>
> # echo "- - -" > /sys/class/scsi_host/host#/scan
>
> The admin can find the additional LUs this way, so host reboots are not
> required.
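The per-host rescan above can also be applied across all SCSI hosts at once. A small sketch, dry-run by default since actually writing to /sys requires root and real HBAs (the function name is invented for illustration):

```python
# Sketch: apply the per-host SCSI rescan ("echo '- - -' > .../scan")
# to every SCSI host found under sysfs. In dry-run mode we only report
# which scan files would be written, which is safe to run anywhere.
import glob


def rescan_all_scsi_hosts(dry_run=True):
    scanned = []
    for scan_path in sorted(glob.glob("/sys/class/scsi_host/host*/scan")):
        if not dry_run:
            with open(scan_path, "w") as f:
                # "- - -" is the wildcard for all channels/targets/LUNs
                f.write("- - -\n")
        scanned.append(scan_path)
    return scanned


hosts = rescan_all_scsi_hosts(dry_run=True)
print("would rescan %d SCSI host(s)" % len(hosts))
```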
>
>
>
> >> or introduced, the admin must do manual work. This is one of the things
> OpenStack should be trying
>
> >> to avoid. This can't be automated without a driver, which is what
> you're trying to avoid.
>
>
>
> Yes, some manual admin work is required and can't be automated.
>
> I would like to know whether these operations are within an acceptable
> range, given the benefits of my proposed driver.
>
>
>
> >>2. You lose on performance to volumes by adding another layer in the
> stack.
>
>
>
> I think this is case by case. When users use a Cinder volume for a
> database, they prefer a raw volume, and the proposed driver can't provide
> raw Cinder volumes. In this case, I recommend "High performance" type
> storage.
>
> LVM is a default feature in many Linux distributions. LVM is also used in
> many enterprise systems, and I think there is no critical performance loss.
>
>
>
> >>3. You lose performance with snapshots - appliances will almost
> certainly have more efficient snapshots
>
> >> than LVM over network (consider that for every COW operation, you are
> reading synchronously over the network).
>
> >> (Basically, you turned your fully-capable storage appliance into a dumb
> JBOD)
>
>
>
> I agree that storage appliances have efficient COW snapshot features, so
> we can create a new boot volume from Glance quickly. In this case, I
> recommend "High performance" type storage.
>
> LVM can't create nested snapshots with shared LVM today. Therefore, we
> can't assign writable LVM snapshots to instances.
>
> Does this answer your comment?
>
>
>
> >> In short, I think the cons outweigh the pros.  Are there people
> deploying OpenStack who would deploy
>
> >> their storage like this?
>
>
>
> Please consider the main benefit described above.
>
>
>
> Regards,
>
> Mitsuhiro Tanino 
>
>  *HITACHI DATA SYSTEMS*
>
>  c/o Red Hat, 314 Littleton Road, Westford, MA 01886
>
>
>
> *From:* Avishay Traeger [mailto:avis...@stratoscale.com]
> *Sent:* Wednesday, May 21, 2014 4:36 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* Tomoki Sekiyama
> *Subject:* Re: [openstack-dev] [Cinder] Support LVM on a shared LU
>
>
>
> So the way I see it, the value here is a generic driver that can work with
> any storage.  The downsides:
>
> 1. The admin has to manually provision a very big volume and attach it to
> the Nova and Cinder hosts.  Every time a host is rebooted, or introduced,
> the admin must do manual work. This is one of the things OpenStack should
> be trying to avoid. This can't be automated without a driver, which is what
> you're trying to avoid.
>
> 2. You lose on performance to volumes by adding another layer in the stack.
>
> 3. You lose performance with snapshots - appliances will almost certainly
> have more efficient snapshots than LVM over network (consider that for
> every COW operation, you are reading synchronously over the network).
>
>
>
> (Basically, you turned your fully-capable storage app