Re: [openstack-dev] [octavia] Joining Neutron under the big tent

2015-05-01 Thread Jorge Miramontes
Good stuff. Thanks everyone for your hard work on getting Octavia to this point!

Cheers,
--Jorge

From: Brandon Logan 
mailto:brandon.lo...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, May 1, 2015 at 10:20 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [octavia] Joining Neutron under the big tent


​+1 from me


From: Kyle Mestery mailto:mest...@mestery.com>>
Sent: Friday, May 1, 2015 8:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [octavia] Joining Neutron under the big tent

On Thu, Apr 30, 2015 at 5:17 PM, Eichberger, German 
mailto:german.eichber...@hp.com>> wrote:
Hi,

I am proposing that Octavia is joining the Networking program as a project
under the Services area.

Octavia is the open scalable reference implementation for Neutron LBaaS V2
and has always seen itself as part of the networking program. We have
adopted most governance rules from the Networking program, sharing the
same build structure, and are organized like an OpenDStack project.


This sounds fine to me German. To proceed here, propose something similar to 
what Russell has proposed for OVN here [1], and ping me on IRC with the review 
so I can ack it. The TC can then approve it at a future meeting.

Thanks!
Kyle

[1] https://review.openstack.org/#/c/175954/

Thanks,
German



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] adding lbaas core

2015-04-21 Thread Jorge Miramontes
I want to be friends with Phil :(. Congrats!

Cheers,
--Jorge




On 4/21/15, 11:57 AM, "Brandon Logan"  wrote:

>Welcome Phil!
>
>From: Doug Wiegley 
>Sent: Tuesday, April 21, 2015 11:54 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [neutron][lbaas] adding lbaas core
>
>It¹s been a week, welcome Phil.
>
>Thanks,
>doug
>
>
>> On Apr 13, 2015, at 2:39 PM, Doug Wiegley
>> wrote:
>>
>> Hi all,
>>
>> I'd like to nominate Philip Toohill as a neutron-lbaas core. Good guy,
>>did a bunch of work on the ref impl for lbaasv2, and and I'll let the
>>numbers[1] speak for themselves.
>>
>> Existing lbaas cores, please vote.  All three of us.  :-)
>>
>> [1] http://stackalytics.com/report/contribution/neutron-lbaas/30
>>
>> Thanks,
>> doug
>>
>>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Object statuses

2015-01-23 Thread Jorge Miramontes
The example you put implicitly assigns the status tree to a load balancer.
Is sharing only allowed for sub resources? Or can sub resources be shared
across multiple load balancers? If that is the case then I suspect that
statuses may be exposed in many different places correct?

Cheers,
--Jorge




On 1/23/15, 1:21 AM, "Brandon Logan"  wrote:

>
>So I am resurrecting this topic now because we put this discussion on a a
>brief hold,
>but are now discussing it again and need to decide asap. We've all agreed
>we need a
>provisioning_status and operating_status fields. We now need to decide
>where to show
>these statuses to the user.
>
>Option 1:
>Show the statuses directly on the entity.
>
>Option 2-A:
>Show a status tree only on the load balancer object, but not on any
>entities.
>
>Option 2-B:
>Expose a resource for a GET request that will return that status tree.
>
>Example:
>GET /lbaas/loadbalancers/{LB_UUID}/statuses
>
>
>Option 1 is probably what most people are used to but it doesn't allow
>for sharing of
>objects, and when/if sharing is enabled, it will cause a break in
>contract and a new
>version of the API.  So it would essentially disallow object sharing.
>This requires
>a lot less work to implement.
>
>Option 2-* can be done with or without sharing, and when/if object
>sharing is enabled
>it wont break contract.  This will require more work to implement.
>
>My personal opinion is in favor of Option 2-B, but wouldn't argue with
>2-A either.
>
>Thanks,
>Brandon
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-05 Thread Jorge Miramontes
Thanks German,

It looks like the conversation is going towards using the HAProxy stats 
interface and/or iptables. I just wanted to explore logging a bit. That said, 
can you and Stephen share your thoughts on how we might implement that 
approach? I'd like to get a spec out soon because I believe metric gathering 
can be worked on in parallel with the rest of the project. In fact, I was 
hoping to get my hands dirty on this one and contribute some code, but a 
strategy and spec are needed first before I can start that ;)

Cheers,
--Jorge

From: , German 
mailto:german.eichber...@hp.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, November 5, 2014 3:50 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge,

I am still not convinced that we need to use logging for usage metrics. We can 
also use the haproxy stats interface (which the haproxy team is willing to 
improve based on our input) and/or iptables as Stephen suggested. That said 
this probably needs more exploration.

>From an HP perspective the full logs on the load balancer are mostly 
>interesting for the user of the loadbalancer – we only care about aggregates 
>for our metering. That said we would be happy to just move them on demand to a 
>place the user can access.

Thanks,
German


From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Tuesday, November 04, 2014 8:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Susanne,

Thanks for the reply. As Angus pointed out, the one big item that needs to be 
addressed with this method is network I/O of raw logs. One idea to mitigate 
this concern is to store the data locally for the operator-configured 
granularity, process it and THEN send it to cielometer, etc. If we can't 
engineer a way to deal with the high network I/O that will inevitably occur we 
may have to move towards a polling approach. Thoughts?

Cheers,
--Jorge

From: Susanne Balle mailto:sleipnir...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, November 4, 2014 11:10 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the 
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allow for the logs to be moved 
to various backends such as an elastic search, hadoop HDFS, Swift, etc as well 
as by default (but with the option to disable it) ceilometer. Ceilometer is the 
metering defacto for OpenStack so we need to support it. We would like the 
integration with Ceilometer to be based on Notifications. I believe German send 
a reference to that in another email. The pre-processing will need to be 
optional and the amount of data aggregation configurable.

What you describe below to me is usage gathering/metering. The billing is 
independent since companies with private clouds might not want to bill but 
still need usage reports for capacity planning etc. Billing/Charging is just 
putting a monetary value on the various form of usage,

I agree with all points.

> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).

> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to cielometer or whatever service an operator
> will be using for billing purposes.

"Keep the logs": This is what we would use log forwarding to either Swift or 
Elastic Search, etc.

>- Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planing on keeping them for
> a certain period of time for the following reasons:

It looks like we are in agreement so I am not sure why it sounded like we were 
in disagreement on the IRC. I am not sure why but it sounded like you were 
talking about something else when you were talking about the real time 
processing. If we are just taking about moving the logs to your Hadoop cluster 
or any backedn a scalable way we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more i

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-04 Thread Jorge Miramontes
Hi Susanne,

Thanks for the reply. As Angus pointed out, the one big item that needs to be 
addressed with this method is network I/O of raw logs. One idea to mitigate 
this concern is to store the data locally for the operator-configured 
granularity, process it and THEN send it to cielometer, etc. If we can't 
engineer a way to deal with the high network I/O that will inevitably occur we 
may have to move towards a polling approach. Thoughts?

Cheers,
--Jorge

From: Susanne Balle mailto:sleipnir...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, November 4, 2014 11:10 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the 
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allow for the logs to be moved 
to various backends such as an elastic search, hadoop HDFS, Swift, etc as well 
as by default (but with the option to disable it) ceilometer. Ceilometer is the 
metering defacto for OpenStack so we need to support it. We would like the 
integration with Ceilometer to be based on Notifications. I believe German send 
a reference to that in another email. The pre-processing will need to be 
optional and the amount of data aggregation configurable.

What you describe below to me is usage gathering/metering. The billing is 
independent since companies with private clouds might not want to bill but 
still need usage reports for capacity planning etc. Billing/Charging is just 
putting a monetary value on the various form of usage,

I agree with all points.

> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).

> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to cielometer or whatever service an operator
> will be using for billing purposes.

"Keep the logs": This is what we would use log forwarding to either Swift or 
Elastic Search, etc.

>- Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planing on keeping them for
> a certain period of time for the following reasons:

It looks like we are in agreement so I am not sure why it sounded like we were 
in disagreement on the IRC. I am not sure why but it sounded like you were 
talking about something else when you were talking about the real time 
processing. If we are just taking about moving the logs to your Hadoop cluster 
or any backedn a scalable way we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into you usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different that connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and thus, is related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to cielometer or whatever service an operator
will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planing on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns
on the connection logging feature for their load balancer it will already
have a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually
after a tragic lb event). By already capturing the logs I'm sure customers
will be extremely happy to see that there are already X days worth of logs
they can immediately sift through.
B) Operators and their support teams can leverage logs when providing
service to their customers. This is huge for finding issues and resolving
them quickly.
C) Albeit a minor point, building support for logs from the get-go
mitigates capacity management uncertainty. My example earlier was the
extreme case of every customer turning on logging 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-28 Thread Jorge Miramontes
Thanks for the reply Angus,

DDoS attacks are definitely a concern we are trying to address here. My
assumptions are based on a solution that is engineered for this type of
thing. Are you more concerned with network I/O during a DoS attack or
storing the logs? Under the idea I had, I wanted to make the amount of
time logs are stored for configurable so that the operator can choose
whether they want the logs after processing or not. The network I/O of
pumping logs out is a concern of mine, however.

Sampling seems like the go-to solution for gathering usage but I was
looking for something different as sampling can get messy and can be
inaccurate for certain metrics. Depending on the sampling rate, this
solution has the potential to miss spikes in traffic if you are gathering
gauge metrics such as active connections/sessions. Using logs would be
100% accurate in this case. Also, I'm assuming LBaaS will have events so
combining sampling with events (CREATE, UPDATE, SUSPEND, DELETE, etc.)
gets complicated. Combining logs with events is arguably less complicated
as the granularity of logs is high. Due to this granularity, one can split
the logs based on the event times cleanly. Since sampling will have a
fixed cadence you will have to perform a "manual" sample at the time of
the event (i.e. add complexity).

At the end of the day there is no free lunch so more insight is
appreciated. Thanks for the feedback.

Cheers,
--Jorge




On 10/27/14 6:55 PM, "Angus Lees"  wrote:

>On Wed, 22 Oct 2014 11:29:27 AM Robert van Leeuwen wrote:
>> > I,d like to start a conversation on usage requirements and have a few
>> > suggestions. I advocate that, since we will be using TCP and
>>HTTP/HTTPS
>> > based protocols, we inherently enable connection logging for load
>> 
>> > balancers for several reasons:
>> Just request from the operator side of things:
>> Please think about the scalability when storing all logs.
>> 
>> e.g. we are currently logging http requests to one load balanced
>>application
>> (that would be a fit for LBAAS) It is about 500 requests per second,
>>which
>> adds up to 40GB per day (in elasticsearch.) Please make sure whatever
>> solution is chosen it can cope with machines doing 1000s of requests per
>> second...
>
>And to take this further, what happens during DoS attack (either syn
>flood or 
>full connections)?  How do we ensure that we don't lose our logging
>system 
>and/or amplify the DoS attack?
>
>One solution is sampling, with a tunable knob for the sampling rate -
>perhaps 
>tunable per-vip.  This still increases linearly with attack traffic,
>unless you 
>use time-based sampling (1-every-N-seconds rather than 1-every-N-packets).
>
>One of the advantages of (eg) polling the number of current sessions is
>that 
>the cost of that monitoring is essentially fixed regardless of the number
>of 
>connections passing through.  Numerous other metrics (rate of new
>connections, 
>etc) also have this property and could presumably be used for accurate
>billing 
>- without amplifying attacks.
>
>I think we should be careful about whether we want logging or metrics for
>more 
>accurate billing.  Both are useful, but full logging is only really
>required 
>for ad-hoc debugging (important! but different).
>
>-- 
> - Gus
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-27 Thread Jorge Miramontes
Hey German,

I totally agree on the security/privacy aspect of logs, especially due to
the SSL/TLS Termination feature.

After looking at BP [1] and the spec [2] for metering, it looks like it is
proposing to send more than just billable usage to cielometer. From my
previous email I considered this "tracking" usage ("billable" usage can be
a subset of tracking usage). It also appears to me that there is an
implied interface  for cielometer as we need to be able to capture metrics
from various lb devices (HAProxy, Nginx, Netscaler, etc.), standardize
them, and then send them off. That said, what type of implementation was
HP thinking of to gather these metrics? Instead of focusing on my idea of
using logging I'd like to change the discussion and get a picture as to
what you all are envisioning for a possible implementation direction.
Important items for Rackspace include accuracy of data, no lost data (i.e.
when sending to upstream system ensure it gets there), reliability of
cadence when sending usage to upstream system, and the ability to
backtrack and audit data whenever there seems to be a discrepancy in a
customer's monthly statement. Keep in mind that we need to integrate with
our current billing pipeline so we are not planning on using cielometer at
the moment. Thus, we need to make this somewhat configurable for those not
using cielometer.

Cheers,
--Jorge

[1] 
https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-meter-lbaas

[2] https://review.openstack.org/#/c/94958/12/specs/juno/lbaas_metering.rst


On 10/24/14 5:19 PM, "Eichberger, German"  wrote:

>Hi Jorge,
>
>I agree completely with the points you make about the logs. We still feel
>that metering and logging are two different problems. The ceilometers
>community has a proposal on how to meter lbaas (see
>http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/lbaas_met
>ering.html) and we at HP think that those values are be sufficient for us
>for the time being.
>
>I think our discussion is mostly about connection logs which are emitted
>some way from amphora (e.g. haproxy logs). Since they are customer's logs
>we need to explore on our end the privacy implications (I assume at RAX
>you have controls in place to make sure that there is no violation :-).
>Also I need to check if our central logging system is scalable enough and
>we can send logs there without creating security holes.
>
>Another possibility is to log like syslog our apmphora agent logs to a
>central system to help with trouble shooting debugging. Those could be
>sufficiently anonymized to avoid privacy issue. What are your thoughts on
>logging those?
>
>Thanks,
>German
>
>-Original Message-
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>Sent: Thursday, October 23, 2014 3:30 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements
>
>Hey German/Susanne,
>
>To continue our conversation from our IRC meeting could you all provide
>more insight into you usage requirements? Also, I'd like to clarify a few
>points related to using logging.
>
>I am advocating that logs be used for multiple purposes, including
>billing. Billing requirements are different that connection logging
>requirements. However, connection logging is a very accurate mechanism to
>capture billable metrics and thus, is related. My vision for this is
>something like the following:
>
>- Capture logs in a scalable way (i.e. capture logs and put them on a
>separate scalable store somewhere so that it doesn't affect the amphora).
>- Every X amount of time (every hour, for example) process the logs and
>send them on their merry way to cielometer or whatever service an
>operator will be using for billing purposes.
>- Keep logs for some configurable amount of time. This could be anything
>from indefinitely to not at all. Rackspace is planing on keeping them for
>a certain period of time for the following reasons:
>   
>   A) We have connection logging as a planned feature. If a customer turns
>on the connection logging feature for their load balancer it will already
>have a history. One important aspect of this is that customers (at least
>ours) tend to turn on logging after they realize they need it (usually
>after a tragic lb event). By already capturing the logs I'm sure
>customers will be extremely happy to see that there are already X days
>worth of logs they can immediately sift through.
>   B) Operators and their support teams can leverage logs when providing
>service to their customers. This is huge for finding issues and resolving
>them quickly.
>   C) Albeit a minor point, building support for logs from the get-go
>mit

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-23 Thread Jorge Miramontes
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into you usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different that connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and thus, is related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to cielometer or whatever service an operator
will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planing on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns
on the connection logging feature for their load balancer it will already
have a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually
after a tragic lb event). By already capturing the logs I'm sure customers
will be extremely happy to see that there are already X days worth of logs
they can immediately sift through.
B) Operators and their support teams can leverage logs when providing
service to their customers. This is huge for finding issues and resolving
them quickly.
C) Albeit a minor point, building support for logs from the get-go
mitigates capacity management uncertainty. My example earlier was the
extreme case of every customer turning on logging at the same time. While
unlikely, I would hate to manage that!

I agree that there are other ways to capture billing metrics but, from my
experience, those tend to be more complex than what I am advocating and
without the added benefits listed above. An understanding of HP's desires
on this matter will hopefully get this to a point where we can start
working on a spec.

Cheers,
--Jorge

P.S. Real-time stats is a different beast and I envision there being an
API call that returns "real-time" data such as this ==>
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.


From:  , German 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Wednesday, October 22, 2014 2:41 PM
To:  "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements


>Hi Jorge,
> 
>Good discussion so far + glad to have you back
>J
> 
>I am not a big fan of using logs for billing information since ultimately
>(at least at HP) we need to pump it into ceilometer. So I am envisioning
>either the
> amphora (via a proxy) to pump it straight into that system or we collect
>it on the controller and pump it from there.
> 
>Allowing/enabling logging creates some requirements on the hardware,
>mainly, that they can handle the IO coming from logging. Some operators
>might choose to
> hook up very cheap and non performing disks which might not be able to
>deal with the log traffic. So I would suggest that there is some rate
>limiting on the log output to help with that.
>
> 
>Thanks,
>German
> 
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>
>Sent: Wednesday, October 22, 2014 6:51 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements
>
>
> 
>Hey Stephen (and Robert),
>
> 
>
>For real-time usage I was thinking something similar to what you are
>proposing. Using logs for this would be overkill IMO so your suggestions
>were what I was
> thinking of starting with.
>
> 
>
>As far as storing logs is concerned I was definitely thinking of
>offloading these onto separate storage devices. Robert, I totally hear
>you on the scalability
> part as our current LBaaS setup generates TB of request logs. I'll start
>planning out a spec and then I'll let everyone chime in there. I just
>wanted to get a general feel for the ideas I had mentioned. I'll also
>bring it up in today's meeting.
>
> 
>
>Cheers,
>
>--Jorge
>
>
>
>
> 
>
>From:
>Stephen Balukoff 
>Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>
>Date: Wednesday, October 22, 2014 4:04 AM
>To: "OpenStack Development Mailing List (not for usage questions)"
>
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Jorge Miramontes
Hey Stephen (and Robert),

For real-time usage I was thinking something similar to what you are proposing. 
Using logs for this would be overkill IMO so your suggestions were what I was 
thinking of starting with.

As far as storing logs is concerned I was definitely thinking of offloading 
these onto separate storage devices. Robert, I totally hear you on the 
scalability part as our current LBaaS setup generates TB of request logs. I'll 
start planning out a spec and then I'll let everyone chime in there. I just 
wanted to get a general feel for the ideas I had mentioned. I'll also bring it 
up in today's meeting.

Cheers,
--Jorge

From: Stephen Balukoff mailto:sbaluk...@bluebox.net>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, October 22, 2014 4:04 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge!

Welcome back, eh! You've been missed.

Anyway, I just wanted to say that your proposal sounds great to me, and it's 
good to finally be closer to having concrete requirements for logging, eh. Once 
this discussion is nearing a conclusion, could you write up the specifics of 
logging into a specification proposal document?

Regarding the discussion itself: I think we can ignore UDP for now, as there 
doesn't seem to be high demand for it, and it certainly won't be supported in v 
0.5 of Octavia (and maybe not in v1 or v2 either, unless we see real demand).

Regarding the 'real-time usage' information: I have some ideas regarding 
getting this from a combination of iptables and / or the haproxy stats 
interface. Were you thinking something different that involves on-the-fly 
analysis of the logs or something?  (I tend to find that logs are great for 
non-real time data, but can often be lacking if you need, say, a gauge like 
'currently open connections' or something.)

One other thing: If there's a chance we'll be storing logs on the amphorae 
themselves, then we need to have log rotation as part of the configuration 
here. It would be silly to have an amphora failure just because its ephemeral 
disk fills up, eh.

Stephen

On Wed, Oct 15, 2014 at 4:03 PM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I,d like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs.

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on "average concurrent connections". This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate that the logs themselves. Furthermore, polling
is more complex and uses up resources on the polling cadence.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to swift, operators and their support teams can still use this
data to help customers out with billing and setup issues. Auditing will
also be easier with raw logs.

4) Enabling logs for all load balancers will help mitigate uncertainty in
terms of capacity planning. Imagine if every customer suddenly enabled
logs without it ever being turned on. This could produce a spike in
resource utilization that will be hard to manage. Enabling logs from the
start means we are certain as to what to plan for other than the nature of
the customer's traffic pattern.

Some Cons I can think of (please add more as I think the pros outweigh the
cons):

1) If we every add UDP based protocols then this model won't work. < 1% of
our load balancers at Rackspace are UDP based so we are not looking at
using this protocol for Octavia. I'm more of a fan of building a really
good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
a different problem. For me different problem == different product.

2) I'm

[openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-15 Thread Jorge Miramontes
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I,d like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs.

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on "average concurrent connections". This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate that the logs themselves. Furthermore, polling
is more complex and uses up resources on the polling cadence.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to swift, operators and their support teams can still use this
data to help customers out with billing and setup issues. Auditing will
also be easier with raw logs.

4) Enabling logs for all load balancers will help mitigate uncertainty in
terms of capacity planning. Imagine if every customer suddenly enabled
logs without it ever being turned on. This could produce a spike in
resource utilization that will be hard to manage. Enabling logs from the
start means we are certain as to what to plan for other than the nature of
the customer's traffic pattern.

Some Cons I can think of (please add more as I think the pros outweigh the
cons):

1) If we every add UDP based protocols then this model won't work. < 1% of
our load balancers at Rackspace are UDP based so we are not looking at
using this protocol for Octavia. I'm more of a fan of building a really
good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
a different problem. For me different problem == different product.

2) I'm assuming HA Proxy. Thus, if we choose another technology for the
amphora then this model may break.


Also, and more generally speaking, I have categorized usage into three
categories:

1) Tracking usage - this is usage that will be used my operators and
support teams to gain insight into what load balancers are doing in an
attempt to monitor potential issues.
2) Billable usage - this is usage that is a subset of tracking usage used
to bill customers.
3) Real-time usage - this is usage that should be exposed via the API so
that customers can make decisions that affect their configuration (ex.
"Based off of the number of connections my web heads can handle when
should I add another node to my pool?").

These are my preliminary thoughts, and I'd love to gain insight into what
the community thinks. I have built about 3 usage collection systems thus
far (1 with Brandon) and have learned a lot. Some basic rules I have
discovered with collecting usage are:

1) Always collect granular usage as it "paints a picture" of what actually
happened. Massaged/un-granular usage == lost information.
2) Never imply, always be explicit. Implications usually stem from bad
assumptions.


Last but not least, we need to store every user and system load balancer
event such as creation, updates, suspension and deletion so that we may
bill on things like uptime and serve our customers better by knowing what
happened and when.


Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] VM/Container Naming Issue

2014-09-05 Thread Jorge Miramontes
Hey guys,

I just noticed that "Amphora" won the vote. I have several issues with
this.

1) Amphora wasn't in the first list of items to vote on. I'm confused as
to how it ended up in the "final" round. The fact that it did makes me
feel like the first round of votes were totally disregarded.

2) The first vote was on Wednesday. The "final" vote was due less that 24
hours after that vote. This did not give me enough time to vote as I was
rather busy and I'm sure other were as well. This is more of a minor
point, however, as I realize we can't wait for the world to vote otherwise
we would get nowhere. The bigger issue is that I was able to vote in the
first round and was fine with the top 5 first round items going to the
final round. As far as I know amphora wasn't in the first round.

3) The word amphora is a very specific type of physical container used in
Greco-Roman times to store a variety of things such as water, wine, and
grain (yay 8th grade classical heritage!). This makes no sense for what we
are trying to name other than the fact that it relates to a container. In
my mind, the words vase and jug should have also been added to the final
round if that's the precedent we want to set. If amphora were in the first
round I would be okay if majority won. I just feel the democratic process
was not followed the way it should have been.

All this said, I don't want to stifle progress so I don't necessarily want
a re-vote. I just want to make sure we are not setting a bad precedent for
voting on future items.


Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly IRC Agenda

2014-08-20 Thread Jorge Miramontes
Hey LBaaS folks,

This is you friendly reminder to provide any agenda items for tomorrow's weekly 
IRC meeting. Please add them to the agenda wiki ==> 
https://wiki.openstack.org/wiki/Network/LBaaS#Agenda.

Cheers,
--Jorge
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly IRC Agenda

2014-08-13 Thread Jorge Miramontes
Hey LBaaS folks,

This is you friendly reminder to provide any agenda items for tomorrow's weekly 
IRC meeting. Please add them to the agenda wiki ==> 
https://wiki.openstack.org/wiki/Network/LBaaS#Agenda. The agenda currently has 
these items:

  *   Review the work items from the Hackathon and check on the status and/or 
if we still think it’s relevant

Cheers,
--Jorge

P.S. Also, please don't forget to update the weekly standup ==> 
https://etherpad.openstack.org/p/neutron-lbaas-weekly-standup
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly IRC Agenda

2014-08-06 Thread Jorge Miramontes
Hey LBaaS folks,

This is you friendly reminder to provide any agenda items for tomorrow's weekly 
IRC meeting. Please add them to the agenda wiki ==> 
https://wiki.openstack.org/wiki/Network/LBaaS#Agenda. The agenda currently has 
these items:

  *   Review Updates

Cheers,
--Jorge

P.S. Also, please don't forget to update the weekly standup ==> 
https://etherpad.openstack.org/p/neutron-lbaas-weekly-standup
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly IRC Agenda

2014-07-30 Thread Jorge Miramontes
Hey LBaaS folks,

This is you friendly reminder to provide any agenda items for tomorrow's weekly 
IRC meeting. The agenda currently has these items:

  *   Review Updates
  *   Octavia Work

Cheers,
--Jorge

P.S. Please don't forget to update the weekly standup ==> 
https://etherpad.openstack.org/p/neutron-lbaas-weekly-standup
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Status and Expectations for Juno

2014-07-28 Thread Jorge Miramontes
Hey Doug,

In terms of taking a step backward from a user perspective I'm fine with
making v1 the default. I think there was always the notion of supporting
what v1 currently offers by making a config change. Thus, Horizon should
still have all the support it had in Icehouse. I am a little worried about
the delivery of items we said we wanted to deliver however. The reason we
are focusing on the current items is that Octavia is also part of the
picture, albeit, behind the scenes right now. Thus, the argument that the
new reference driver is less capable is actually a means to getting
Octavia out. Eventually, we were hoping to get Octavia as the reference
implementation which, from the user's perspective, will be much better
since you can actually run it at operator scale. To be realistic, the v2
implementation is a WIP and focusing on the control plane first seems to
make the most sense. Having a complete end-to-end v2 implementation is
large in scope and I don't think anyone expected it to be a full-fledged
product by Juno, but we are getting closer!


Cheers,
--Jorge




On 7/28/14 8:02 AM, "Doug Wiegley"  wrote:

>Hi Brandon,
>
>Thanks for bringing this up. If you¹re going to call me out by name, I
>guess I have to respond to the Horizon thing.  Yes, I don¹t like it, from
>a user perspective.  We promise a bunch of new features, new driversŠ and
>none of them are visible.  Or the horizon support does land, and suddenly
>the user goes from a provider list of 5 to 2.  Sucks if you were using one
>of the others.  Anyway, back to a project status.  To summarize, listed by
>feature, priority, status:
>
>LBaaS V2 API,   high, reviews in gerrit
>Ref driver, high, removed agent, review in gerrit
>CLI V2, high, not yet in review
>Devstack,   high, not started
>+TLS,   medium, lots done in parallel
>+L7,medium, not started
>Shim V1 -> V2,  low, minimally complete
>Horizon V2, low, not started
>ref agent,  low, not started
>Drivers,low, one vendor driver in review, several in progress
>
>And with a review submission freeze of August 21st.  Let¹s work backwards:
>
>Dependent stuff will need at least two weeks to respond to the final
>changes and submit.  That¹d be:
>
>Devstack,   high, not started
>+TLS,   medium, lots done in parallel
>+L7,medium, not started
>Shim V1 -> V2,  low, minimally complete
>Horizon V2, low, not started
>ref agent,  low, not started
>Drivers,low, one vendor driver in review, several in progress
>
>Š I¹m not including TLS, since it¹s work has been very parallel so far,
>even though logically it should be there.  But that would mean the
>following should be ³done² and merged by August 7th:
>
>LBaaS V2 API,   high, reviews in gerrit
>Ref driver, high, removed agent, review in gerrit
>CLI V2, high, not yet in review
>
>Š that¹s a week and a half, for a big pile of new code.  At the current
>change velocity, I have my doubts.  And if that slips, the rest starts to
>look very very hazy.  Backing up, and focusing on the user, here¹s lbaas
>v1:
>
>
>
>
>- Current object model, basic http lb
>- Ref driver with agent, +3 vendors (with +3 more backends not submitting
>drivers because of v2)
>- UI
>
>Š what we initially planned for Juno:
>
>- Shiny new object model (base for some new features)
>- TLS termination/offload
>- L7 routing
>- Ref driver with agent, support old drivers, support new drivers
>- UI, new and improved
>
>Š what we¹re now thinking of shipping:
>
>- Shiny new object model (base for some new features)
>- TLS termination/offload
>- Ref driver no agent, between 0-2 vendor drivers
>- No UI
>
>So people get one new feature, a reference backend that is even less
>production ready, no UI, and fewer providers. That seems like a step
>backwards from a user perspective (at least from the non-huge operator
>with a custom UI and custom driver perspective), not forward. And that
>implies we need to rethink two decisions:
>
>- Having the V2 lbaas stuff be the default service extension/plugin
>- Not supporting or reviewing new v1 drivers
>
>I think that we either need to deliver a more complete feature set, or
>admit this thing needs to be experimental/not the default, and give a
>little more attention to giving support to the default.
>
>Thanks,
>doug
>
>
>
>
>
>On 7/28/14, 12:42 AM, "Brandon Logan"  wrote:
>
>>There is going to be a mad rush to get many things into Neutron for Juno
>>here in the last few weeks.  Neutron is overly saturated with code
>>reviews.  So I'd like to list out some of the things LBaaS had planned
>>for Juno, what the status each of those are, and my thoughts on the
>>feasibility of actually getting it into Juno.  I'm just trying to have
>>realistic expectations.  Please share any items I missed and any
>>thoughts you have.  Kyle, if you have anything to add, I'd really love
>>to hear that.  Even if its just agreeing.
>>
>>1. Code for LBaaS V2 API with reference imple

[openstack-dev] [Neutron][LBaaS] Weekly IRC Agenda

2014-07-23 Thread Jorge Miramontes
Hey LBaaS folks,

This is you friendly reminder to provide any agenda items for tomorrow's weekly 
IRC meeting. The agenda currently has two items:

  *   Review Updates
  *   TLS work division

Cheers,
--Jorge

P.S. Please don't forget to update the weekly standup ==> 
https://etherpad.openstack.org/p/neutron-lbaas-weekly-standup
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Milestone and Due Dates

2014-07-21 Thread Jorge Miramontes
Hey Kyle,

I've viewed that link many times but it mentions nothing about 7-20 being
Spec approval deadline. Am I missing something?

Cheers,
--Jorge




On 7/18/14 9:52 PM, "Kyle Mestery"  wrote:

>On Fri, Jul 18, 2014 at 4:40 PM, Jorge Miramontes
> wrote:
>> Hey Kyle (and anyone else that may know the answers to my questions),
>>
>> There are several blueprints that don't have Juno milestones attached to
>> them and was wondering if we could assign them so the broader community
>>is
>> aware of the work the LBaaS folks are working on. These are the
>>blueprints
>> that are currently being worked on but do not have an assigned
>>milestone:
>>
>> 
>>https://blueprints.launchpad.net/neutron/+spec/lbaas-ref-impl-tls-support
>> (no milestone)
>> https://blueprints.launchpad.net/neutron/+spec/lbaas-ssl-termination
>>('next'
>> milestone. Not sure if this means juno-2 or juno-3)
>> https://blueprints.launchpad.net/neutron/+spec/lbaas-l7-rules (no
>>milestone)
>> https://blueprints.launchpad.net/neutron/+spec/neutron-flavor-framework
>>(no
>> milestone)
>> https://blueprints.launchpad.net/neutron/+spec/lbaas-l7-rules (no
>>milestone)
>>
>These do not have a milestone set in LP yet because the specs are not
>approved. It's unclear if all of these will be approved for Juno-3 at
>this point, though I suspect at least a few will be. I'm actively
>reviewing final specs for approval before Spec Approval Deadline on
>Sunday, 7-20.
>
>> Also, please let me know if I left something out everyone.
>>
>> Lastly, what are the definitive spec/implementation dates that the LBaaS
>> community should be aware of? A lot of us are confused on exact dates
>>and I
>> wanted to make sure we were all on the same page so that we can put
>> resources on items that are more time sensitive.
>>
>Per above, SAD is this Sunday. The Juno release schedule is on the
>wiki here [1].
>
>Thanks,
>Kyle
>
>[1] https://wiki.openstack.org/wiki/Juno_Release_Schedule
>
>> Cheers,
>> --Jorge
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Milestone and Due Dates

2014-07-18 Thread Jorge Miramontes
Hey Kyle (and anyone else that may know the answers to my questions),

There are several blueprints that don't have Juno milestones attached to them 
and was wondering if we could assign them so the broader community is aware of 
the work the LBaaS folks are working on. These are the blueprints that are 
currently being worked on but do not have an assigned milestone:

https://blueprints.launchpad.net/neutron/+spec/lbaas-ref-impl-tls-support (no 
milestone)
https://blueprints.launchpad.net/neutron/+spec/lbaas-ssl-termination ('next' 
milestone. Not sure if this means juno-2 or juno-3)
https://blueprints.launchpad.net/neutron/+spec/lbaas-l7-rules (no milestone)
https://blueprints.launchpad.net/neutron/+spec/neutron-flavor-framework (no 
milestone)
https://blueprints.launchpad.net/neutron/+spec/lbaas-l7-rules (no milestone)

Also, please let me know if I left something out everyone.

Lastly, what are the definitive spec/implementation dates that the LBaaS 
community should be aware of? A lot of us are confused on exact dates and I 
wanted to make sure we were all on the same page so that we can put resources 
on items that are more time sensitive.

Cheers,
--Jorge
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly IRC Meeting Agenda

2014-07-09 Thread Jorge Miramontes
Hi LBaaS folks,

This is your weekly friendly reminder to give me agenda items for tomorrow's 
meeting. Also please update the weekly standup document when you get a chance! 
Thanks!

Current agenda items:

  *   Paris summit talks (Susanne)
  *   Reviews

Cheers,
--Jorge
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-07 Thread Jorge Miramontes
Hey Mark,

To add, one reason we have a DELETED status at Rackspace is that certain
sub-resources are still relevant to our customers. For example, we have a
usage sub-resource which reveals usage records for the load balancer. To
illustrate, a user issues a DELETE on /loadbalancers/ but can still
issue a GET on /loadbalancers//usage. If /loadbalancers/ were
truly deleted (i.e a 404 is returned) it wouldn't make RESTful sense to
expose the usage sub-resource. Furthermore, even if we don't plan on
having sub-resources that a user will actually query I would still like a
DELETED status as our customers use it for historical and debugging
purposes. It provides users with a sense of clarity and doesn't leave them
scratching their heads thinking, "How were those load balancers configured
when we had that issue the other day?" for example.

I agree on your objection for unattached objects assuming API operations
for these objects will be synchronous in nature. However, since the API is
suppose to be asynchronous a QUEUED status will make sense for the API
operations that are truly asynchronous. In an earlier email I stated that
a QUEUED status would be beneficial when compared to just a BUILD status
because it would allow for more accurate metrics in regards to
provisioning time. Customers will complain more if it appears provisioning
times are taking a long time when in reality they are actually queued do
to high API traffic.

Thoughts?

Cheers,
--Jorge




On 7/7/14 9:32 AM, "Mark McClain"  wrote:

>
>On Jul 4, 2014, at 5:27 PM, Brandon Logan 
>wrote:
>
>> Hi German,
>> 
>> That actually brings up another thing that needs to be done.  There is
>> no DELETED state.  When an entity is deleted, it is deleted from the
>> database.  I'd prefer a DELETED state so that should be another feature
>> we implement afterwards.
>> 
>> Thanks,
>> Brandon
>> 
>
>This is an interesting discussion since we would create an API
>inconsistency around possible status values.  Traditionally, status has
>been be fabric status and we have not always well defined what the values
>should mean to tenants.  Given that this is an extension, I think that
>adding new values would be ok (Salvatore might have a different opinion
>than me).
>
>Right we¹ve never had a deleted state because the record has been removed
>immediately in most implementations even if the backend has not fully
>cleaned up.  I was thinking for the v3 core we should have a DELETING
>state that is set before cleanup is dispatched to the backend
>driver/worker.  The record can then be deleted when the backend has
>cleaned up.
>
>For unattached objects, I¹m -1 on QUEUED because some will interpret that
>the system is planning to execute immediate operations on the resource
>(causing customer queries/complaints about why it has not transitioned).
>Maybe use something like DEFERRED, UNBOUND, or VALIDATED?
>
>mark
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - comapre_type values

2014-07-03 Thread Jorge Miramontes
I was implying that it applies to all drivers.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, July 3, 2014 3:30 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - 
comapre_type values

> I also don't think it is fair for certain drivers to hold other drivers 
> "hostage"

For some time there was a policy (openstack-wide) that public API should have a 
free open source implementation.
In this sense open source driver may hold other drivers as "hostages".

Eugene.


On Thu, Jul 3, 2014 at 10:37 PM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
I agree.

Also, since we are planning on having two different API versions run in 
parallel the only driver that needs to be worked on initially is the reference 
implementation. I'm guessing we will have two reference implementations, one 
for v1 and one for v2. The v2 implementation currently seems to be modified 
from v1 in order to get the highest velocity in terms of exposing API 
functionality. There is a reason we aren't working on Octavia right now and I 
think the same rationale holds for other drivers. So, I believe we should 
expose as much functionality possible with a functional open-source driver and 
then other drivers will catch up.

As for drivers that can't implement certain features the only potential issue I 
see is a type of vendor lock-in. For example, let's say I am an operator 
agnostic power API user. I host with operator A and they use a driver that 
implements all functionality exposed via the API. Now, let's say I want to move 
to operator B because operator A isn't working for me. Let's also say that 
operator B doesn't implement all functionality exposed via the API. From the 
user's perspective they are locked out of going to operator B because their API 
integrated code won't port seamlessly. With this example in mind, however, I 
also don't think it is fair for certain drivers to hold other drivers 
"hostage". From my perspective, if users really want a feature then every 
driver implementor should have the incentive to implement said feature and will 
benefit them in the long run. Anyways, that my $0.02.

Cheers,
--Jorge

From: Stephen Balukoff mailto:sbaluk...@bluebox.net>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 24, 2014 7:30 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>

Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - 
comapre_type values

Making sure all drivers support the features offered in Neutron LBaaS means we 
are stuck going with the 'least common denominator' in all cases. While this 
ensures all vendors implement the same things in the functionally the same way, 
it also is probably a big reason the Neutron LBaaS project has been so 
incredibly slow in seeing new features added over the last two years.

In the gerrit review that Dustin linked, it sounds like the people contributing 
to the discussion are in favor of allowing drivers to reject some 
configurations as unsupported through use of exceptions (details on how that 
will work is being hashed out now if you want to participate in that 
discussion).  Let's assume, therefore, that with the LBaaS v2 API and Object 
model we're also going to get this ability-- which of course also means that 
drivers do not have to support every feature exposed by the API.

(And again, as Dustin pointed out, a Linux LVS-based driver definitely wouldn't 
be able to support any L7 features at all, yet it's still a very useful driver 
for many deployments.)

Finally, I do not believe that the LBaaS project should be "held back" because 
one vendor's implementation doesn't work well with a couple features exposed in 
the API. As Dustin said, let the API expose a rich feature set and allow 
drivers to reject certain configurations when they don't support them.

Stephen



On Tue, Jun 24, 2014 at 9:09 AM, Dustin Lundquist 
mailto:dus...@null-ptr.net>> wrote:
I brought this up on https://review.openstack.org/#/c/101084/.


-Dustin


On Tue, Jun 24, 2014 at 7:57 AM, Avishay Balderman 
mailto:avish...@radware.com>> wrote:
Hi Dustin
I agree with the concept you described but as far as I understand it is not 
currently supported in Neutron.
So a driver should be fully compatible with the interface it implements.

Avishay

From: Dustin Lundquist [mailto:d

Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-03 Thread Jorge Miramontes
+1 to QUEUED status.

For entities that have the concept of being attached/detached why not have
a 'DETACHED' status to indicate that the entity is not provisioned at all
(i.e. The config is just stored in the DB). When it is attached during
provisioning then we can set it to 'ACTIVE' or any of the other
provisioning statuses such as 'ERROR', 'PENDING_UPDATE', etc. Lastly, it
wouldn't make much sense to have a 'DELETED' status on these types of
entities until the user actually issues a DELETE API request (not to be
confused with detaching). Which begs another question, when items are
deleted how long should the API return responses for that resource? We
have a 90 day threshold for this in our current implementation after which
the API returns 404's for the resource.

Cheers,
--Jorge




On 7/3/14 10:39 AM, "Phillip Toohill" 
wrote:

>If the objects remain in 'PENDING_CREATE' until provisioned it would seem
>that the process got stuck in that status and may be in a bad state from
>user perspective. I like the idea of QUEUED or similar to reference that
>the object has been accepted but not provisioned.
>
>Phil
>
>On 7/3/14 10:28 AM, "Brandon Logan"  wrote:
>
>>With the new API and object model refactor there have been some issues
>>arising dealing with the status of entities.  The main issue is that
>>Listener, Pool, Member, and Health Monitor can exist independent of a
>>Load Balancer.  The Load Balancer is the entity that will contain the
>>information about which driver to use (through provider or flavor).  If
>>a Listener, Pool, Member, or Health Monitor is created without a link to
>>a Load Balancer, then what status does it have?  At this point it only
>>exists in the database and is really just waiting to be provisioned by a
>>driver/backend.
>>
>>Some possibilities discussed:
>>A new status of QUEUED, PENDING_ACTIVE, SCHEDULED, or some other name
>>Entities just remain in PENDING_CREATE until provisioned by a driver
>>Entities just remain in ACTIVE until provisioned by a driver
>>
>>Opinions and suggestions?
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-07-03 Thread Jorge Miramontes
Hey German,

We have similar statuses. I have been wanting to add a 'QUEUED' status
however. The reason is that we currently use 'BUILD' which indicates
active provisioning when in reality it is actually queued first and then
provisioned. Thus, there are potential issues when trying to determine
average provisioning times. Furthermore, customers are accustomed to
certain provisioning times and if those times seems longer than usual they
tend to complain. If we had a 'QUEUED' status then customers would most
likely not get upset (or as upset). I would also like the ability to move
from 'ERROR' back to an 'ACTIVE' state. And error status for us means
something didn't happen correctly during provisioning and updating.
However, most of the time the load balancer is still servicing traffic.
Forcing a customer to re-create a load balancer that is serving web
traffic is a bad thing, especially in our case since we have static ip
addresses. We have monitoring on load balancers that go into an 'ERROR'
status and take action to correct the issue.

Cheers,
--Jorge




On 6/24/14 11:30 PM, "Eichberger, German"  wrote:

>Hi,
>
>I second Stephen's suggestion with the status matrix. I have heard of
>(provisional) status, operational status, admin status -- I am curious
>what states exists and how an object would transition between them.
>
>Libra uses only one status field and it transitions as follows:
>
>BUILDING -> ACTIVE|ERROR
>ACTIVE -> DEGARDED|ERROR|DELETED
>DEGRADED -> ACTIVE|ERROR|DELETED
>ERROR -> DELETED
>
>That said I don't think admin status is that important for me as an
>operator since my user's usually delete lbs and re-create them. But I am
>curious how other operators feel.
>
>Thanks,
>German
>
>-Original Message-
>From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
>Sent: Tuesday, June 24, 2014 8:46 PM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need status
>
>Alright y'all have convinced me for now.  How the status is show on
>shared entities is still yet to be determined.  However, we don't have
>any shared entities (unless we really want health monitors shareable
>right now) at this point so the status won't get complicated for this
>first iteration. 
>
>Thanks,
>Brandon
>
>On Wed, 2014-06-25 at 01:10 +, Doug Wiegley wrote:
>> Hi Stephen,
>> 
>> 
>> > Ultimately, as we will have several objects which have many-to-many
>> relationships with other objects, the 'status' of an object that is
>> shared between what will ultimately be two separate physical entities
>> on the back-end should be represented by a dictionary, and any
>> 'reduction' of this on behalf of the user should happen within the UI.
>> It does make things more complex to deal with in certain kinds of
>> failure scenarios, but we don't help ourselves at all by trying to
>> hide, say, when a member of a pool referenced by one listener is 'UP'
>> and the same member of the same pool referenced by a different
>> listener is 'DOWN'.  :/
>> 
>> 
>> For M:N, that’s actually an additional status field that rightly
>> belongs as another column in the join table, if at all (allow me to
>> queue up all of my normal M:N objections here in this case, I’m just
>> talking normal db representation.)  The bare object itself still has
>> status of its own.
>> 
>> 
>> Doug
>> 
>> 
>> 
>> 
>> 
>> 
>> From: Stephen Balukoff 
>> Reply-To: "OpenStack Development Mailing List (not for usage
>> questions)" 
>> Date: Tuesday, June 24, 2014 at 6:02 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need
>> status
>> 
>> 
>> 
>> Ultimately, as we will have several objects which have many-to-many
>> relationships with other objects, the 'status' of an object that is
>> shared between what will ultimately be two separate physical entities
>> on the back-end should be represented by a dictionary, and any
>> 'reduction' of this on behalf of the user should happen within the UI.
>> It does make things more complex to deal with in certain kinds of
>> failure scenarios, but we don't help ourselves at all by trying to
>> hide, say, when a member of a pool referenced by one listener is 'UP'
>> and the same member of the same pool referenced by a different
>> listener is 'DOWN'.  :/
>> 
>> 
>> Granted, our version 1 implementation of these objects is going to be
>> simplified, but it doesn't hurt to think about where we're headed with
>> this API and object model.
>> 
>> 
>> I think it would be worthwhile for someone to produce a status matrix
>> showing which kinds of status are available for each object type, and
>> what the possible values of those statuses are, and what they mean.
>> Given the question of what 'status' means is very complicated indeed,
>> I think this is the only way we're going to actually make forward
>> progress in this discussion.
>> 
>> 
>> Stephen
>> 
>> 
>> 
>> 
>> On Tue, Jun 24, 2014

Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - comapre_type values

2014-07-03 Thread Jorge Miramontes
I agree.

Also, since we are planning on having two different API versions run in 
parallel the only driver that needs to be worked on initially is the reference 
implementation. I'm guessing we will have two reference implementations, one 
for v1 and one for v2. The v2 implementation currently seems to be modified 
from v1 in order to get the highest velocity in terms of exposing API 
functionality. There is a reason we aren't working on Octavia right now and I 
think the same rationale holds for other drivers. So, I believe we should 
expose as much functionality possible with a functional open-source driver and 
then other drivers will catch up.

As for drivers that can't implement certain features the only potential issue I 
see is a type of vendor lock-in. For example, let's say I am an operator 
agnostic power API user. I host with operator A and they use a driver that 
implements all functionality exposed via the API. Now, let's say I want to move 
to operator B because operator A isn't working for me. Let's also say that 
operator B doesn't implement all functionality exposed via the API. From the 
user's perspective they are locked out of going to operator B because their API 
integrated code won't port seamlessly. With this example in mind, however, I 
also don't think it is fair for certain drivers to hold other drivers 
"hostage". From my perspective, if users really want a feature then every 
driver implementor should have the incentive to implement said feature and will 
benefit them in the long run. Anyways, that my $0.02.

Cheers,
--Jorge

From: Stephen Balukoff mailto:sbaluk...@bluebox.net>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 24, 2014 7:30 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - 
comapre_type values

Making sure all drivers support the features offered in Neutron LBaaS means we 
are stuck going with the 'least common denominator' in all cases. While this 
ensures all vendors implement the same things in the functionally the same way, 
it also is probably a big reason the Neutron LBaaS project has been so 
incredibly slow in seeing new features added over the last two years.

In the gerrit review that Dustin linked, it sounds like the people contributing 
to the discussion are in favor of allowing drivers to reject some 
configurations as unsupported through use of exceptions (details on how that 
will work is being hashed out now if you want to participate in that 
discussion).  Let's assume, therefore, that with the LBaaS v2 API and Object 
model we're also going to get this ability-- which of course also means that 
drivers do not have to support every feature exposed by the API.

(And again, as Dustin pointed out, a Linux LVS-based driver definitely wouldn't 
be able to support any L7 features at all, yet it's still a very useful driver 
for many deployments.)

Finally, I do not believe that the LBaaS project should be "held back" because 
one vendor's implementation doesn't work well with a couple features exposed in 
the API. As Dustin said, let the API expose a rich feature set and allow 
drivers to reject certain configurations when they don't support them.

Stephen



On Tue, Jun 24, 2014 at 9:09 AM, Dustin Lundquist 
mailto:dus...@null-ptr.net>> wrote:
I brought this up on https://review.openstack.org/#/c/101084/.


-Dustin


On Tue, Jun 24, 2014 at 7:57 AM, Avishay Balderman 
mailto:avish...@radware.com>> wrote:
Hi Dustin
I agree with the concept you described but as far as I understand it is not 
currently supported in Neutron.
So a driver should be fully compatible with the interface it implements.

Avishay

From: Dustin Lundquist [mailto:dus...@null-ptr.net]
Sent: Tuesday, June 24, 2014 5:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - 
comapre_type values

I think the API should provide an richly featured interface, and individual 
drivers should indicate if they support the provided configuration. For example 
there is a spec for a Linux LVS LBaaS driver, this driver would not support TLS 
termination or any layer 7 features, but would still be valuable for some 
deployments. The user experience of such a solution could be improved if the 
driver to propagate up a message specifically identifying the unsupported 
feature.


-Dustin

On Tue, Jun 24, 2014 at 4:28 AM, Avishay Balderman 
mailto:avish...@radware.com>> wrote:
Hi
One of L7 Rule attributes is ‘compare_type’.
This field is the match operator that the rule should activate against the 
value found in the request.
Below is list of the possible values:
- Regexp
- StartsWith
- EndsWith
- Contains
- EqualTo (*)
- GreaterThan (*)
- LessThan (*)

The last

[openstack-dev] [Neutron][LBaaS] Agenda for weekly IRC meeting

2014-07-02 Thread Jorge Miramontes
Hey LBaaS folks!

Please send me any agenda items you would like discussed tomorrow so I can 
organize the meeting. And as usual, please update the weekly standup etherpad. 
Everything should be organized on the main wiki page now ==> 
https://wiki.openstack.org/wiki/Neutron/LBaaS :)

Cheers,
--Jorge
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly Standup

2014-06-25 Thread Jorge Miramontes
Hey LBaaS folks,

This is your friendly reminder to update the weekly standup etherpad so that 
everyone is aware of what everyone is working on. Here is the link ==> 
https://etherpad.openstack.org/p/neutron-lbaas-weekly-standup. Thanks!

Cheers,
--Jorge

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

2014-06-25 Thread Jorge Miramontes
Hey Andres,

Sorry for the late reply. I was out of town all last week. I would suggest 
continuing the email thread before we put this on a wiki somewhere so others 
can chime in.

Cheers,
--Jorge

From: , Andres 
mailto:andres.buras...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, June 16, 2014 10:06 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

Hi Jorge, thanks for your reply! You are right about summarizing too much. The 
idea is to identify which kinds of data could be retrieved in a summarized way 
without losing detail (i.e.: uptime can be better described with start-end 
timestamps than with lots of samples with up/down status) or simply to provide 
different levels of granularity and let the user decide (yes, it can be 
sometimes dangerous).
Having said this, how could we share the current metrics intended to be 
exposed? Is there a document or should I follow the “Requirements around 
statistics and billing” thread?

Thank you!
Andres

From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Thursday, June 12, 2014 6:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

Hey Andres,

In my experience with usage gathering consolidating statistics at the root 
layer is usually a bad idea. The reason is that you lose potentially useful 
information once you consolidate data. When it comes to troubleshooting issues 
(such as billing) this lost information can cause problems since there is no 
way to "replay" what had actually happened. That said, there is no free lunch 
and keeping track of huge amounts of data can be a huge engineering challenge. 
We have a separate thread on what kinds of metrics we want to expose from the 
LBaaS service so perhaps it would be nice to understand these in more detail.

Cheers,
--Jorge

From: , Andres 
mailto:andres.buras...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 10, 2014 3:34 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

Hi, we have been struggling with getting a meaningful set of metrics from LB 
stats thru ceilometer, and from a discussion about module responsibilities for 
providing data, an interesting idea came up. (Thanks Pradeep!)
The proposal is to consolidate some kinds of metrics as pool up time (hours) 
and average or historic response times of VIPs and listeners, to avoid having 
ceilometer querying for the state so frequently. There is a trade-off between 
fast response time (high sampling rate) and reasonable* amount of cumulative 
samples.
The next step in order to give more detail to the idea is to work on a use 
cases list to better explain / understand the benefits of this kind of data 
grouping.

What dou you think about this?
Do you find it will be useful to have some processed metrics on the 
loadbalancer side instead of the ceilometer side?
Do you identify any measurements about the load balancer that could not be 
obtained/calculated from ceilometer?
Perhaps this could be the base for other stats gathering solutions that may be 
under discussion?

Andres
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

2014-06-12 Thread Jorge Miramontes
Hey Andres,

In my experience with usage gathering consolidating statistics at the root 
layer is usually a bad idea. The reason is that you lose potentially useful 
information once you consolidate data. When it comes to troubleshooting issues 
(such as billing) this lost information can cause problems since there is no 
way to "replay" what had actually happened. That said, there is no free lunch 
and keeping track of huge amounts of data can be a huge engineering challenge. 
We have a separate thread on what kinds of metrics we want to expose from the 
LBaaS service so perhaps it would be nice to understand these in more detail.

Cheers,
--Jorge

From: , Andres 
mailto:andres.buras...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 10, 2014 3:34 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

Hi, we have been struggling with getting a meaningful set of metrics from LB 
stats thru ceilometer, and from a discussion about module responsibilities for 
providing data, an interesting idea came up. (Thanks Pradeep!)
The proposal is to consolidate some kinds of metrics as pool up time (hours) 
and average or historic response times of VIPs and listeners, to avoid having 
ceilometer querying for the state so frequently. There is a trade-off between 
fast response time (high sampling rate) and reasonable* amount of cumulative 
samples.
The next step in order to give more detail to the idea is to work on a use 
cases list to better explain / understand the benefits of this kind of data 
grouping.

What dou you think about this?
Do you find it will be useful to have some processed metrics on the 
loadbalancer side instead of the ceilometer side?
Do you identify any measurements about the load balancer that could not be 
obtained/calculated from ceilometer?
Perhaps this could be the base for other stats gathering solutions that may be 
under discussion?

Andres
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly Standup Trial

2014-06-11 Thread Jorge Miramontes
Hey Neutron LBaaS folks!

I created the following etherpad in an effort to mitigate the visibility issue 
some of us have been trying to address. Please update it before tomorrow's 
weekly IRC meeting if possible so that the community is aware of what every 
team is currently engaged on. This will definitely help us not duplicate 
efforts and help allocate our combined resources more effectively on a weekly 
basis. Let's try it out and see how it works. Let's take it for a spin tomorrow 
at the weekly meeting! I took the first step and added my team's items.

https://etherpad.openstack.org/p/neutron-lbaas-weekly-standup

Until tomorrow,
--Jorge
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Jorge Miramontes
Hey German,

I agree with you. I don't really want to go with option #1 because making
decisions on behalf of the user (especially when security is involved) can
be quite tricky and dangerous. Your concerns are valid for option #2 but I
still think it is the better option to go with. I believe Carlos and Adam
are working with our Barbican team on a blueprint for option #2 so it
would be nice if you could take a look at that and see how we can
implement it to mitigate the concerns you laid out. While it would be nice
for us to figure out how to ensure registration/unregistration at least
the API user has the necessary info to ensure it themselves if need be.

I'm not sure if I like the "auto-update" flag concept after all as it adds
a layer of complexity depending on what the user has set.  I'd prefer
either an "LBaaS makes all decisions on behalf of the user" or "LBaaS
makes no deacons on behalf of the user" approach with the latter being my
preference. In one of my earlier emails I asked the fundamental question
of whether "flexibility" is worthwhile at the cost of complexity. I prefer
to start off simple since we don't have any real validation on whether
these "flexible" features will actually be used. Once we have a product
that is being widely deployed should "flexible" feature necessity become
evident.

Cheers,
--Jorge




On 6/6/14 5:52 PM, "Eichberger, German"  wrote:

>Jorge + John,
>
>I am most concerned with a user changing his secret in barbican and then
>the LB trying to update and causing downtime. Some users like to control
>when the downtime occurs.
>
>For #1 it was suggested that once the event is delivered it would be up
>to a user to enable an "auto-update flag".
>
>In the case of #2 I am a bit worried about error cases: e.g. uploading
>the certificates succeeds but registering the loadbalancer(s) fails. So
>using the barbican system for those warnings might not as fool proof as
>we are hoping. 
>
>One thing I like about #2 over #1 is that it pushes a lot of the
>information to Barbican. I think a user would expect when he uploads a
>new certificate to Barbican that the system warns him right away about
>load balancers using the old cert. With #1 he might get an e-mails from
>LBaaS telling him things changed (and we helpfully updated all affected
>load balancers) -- which isn't as immediate as #2.
>
>If we implement an "auto-update flag" for #1 we can have both. User's who
>like #2 juts hit the flag. Then the discussion changes to what we should
>implement first and I agree with Jorge + John that this should likely be
>#2.
>
>German
>
>-Original Message-
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>Sent: Friday, June 06, 2014 3:05 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>Hey John,
>
>Correct, I was envisioning that the Barbican request would not be
>affected, but rather, the GUI operator or API user could use the
>registration information to do so should they want to do so.
>
>Cheers,
>--Jorge
>
>
>
>
>On 6/6/14 4:53 PM, "John Wood"  wrote:
>
>>Hello Jorge,
>>
>>Just noting that for option #2, it seems to me that the registration
>>feature in Barbican would not be required for the first version of this
>>integration effort, but we should create a blueprint for it nonetheless.
>>
>>As for your question about services not registering/unregistering, I
>>don't see an issue as long as the presence or absence of registered
>>services on a Container/Secret does not **block** actions from
>>happening, but rather is information that can be used to warn clients
>>through their processes. For example, Barbican would still delete a
>>Container/Secret even if it had registered services.
>>
>>Does that all make sense though?
>>
>>Thanks,
>>John
>>
>>
>>From: Youcef Laribi [youcef.lar...@citrix.com]
>>Sent: Friday, June 06, 2014 2:47 PM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>>Integration Ideas
>>
>>+1 for option 2.
>>
>>In addition as an additional safeguard, the LBaaS service could check
>>with Barbican when failing to use an existing secret to see if the
>>secret has changed (lazy detection).
>>
>>Youcef
>>
>>-Original Message-
>>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>>Sent: Friday, June 06, 2014 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-06 Thread Jorge Miramontes
Hey John,

Correct, I was envisioning that the Barbican request would not be
affected, but rather, the GUI operator or API user could use the
registration information to do so should they want to do so.

Cheers,
--Jorge




On 6/6/14 4:53 PM, "John Wood"  wrote:

>Hello Jorge,
>
>Just noting that for option #2, it seems to me that the registration
>feature in Barbican would not be required for the first version of this
>integration effort, but we should create a blueprint for it nonetheless.
>
>As for your question about services not registering/unregistering, I
>don't see an issue as long as the presence or absence of registered
>services on a Container/Secret does not **block** actions from happening,
>but rather is information that can be used to warn clients through their
>processes. For example, Barbican would still delete a Container/Secret
>even if it had registered services.
>
>Does that all make sense though?
>
>Thanks,
>John
>
>
>From: Youcef Laribi [youcef.lar...@citrix.com]
>Sent: Friday, June 06, 2014 2:47 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>+1 for option 2.
>
>In addition as an additional safeguard, the LBaaS service could check
>with Barbican when failing to use an existing secret to see if the secret
>has changed (lazy detection).
>
>Youcef
>
>-Original Message-
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>Sent: Friday, June 06, 2014 12:16 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>Hey everyone,
>
>Per our IRC discussion yesterday I'd like to continue the discussion on
>how Barbican and Neutron LBaaS will interact. There are currently two
>ideas in play and both will work. If you have another idea please free to
>add it so that we may evaluate all the options relative to each other.
>Here are the two current ideas:
>
>1. Create an eventing system for Barbican that Neutron LBaaS (and other
>services) consumes to identify when to update/delete updated secrets from
>Barbican. For those that aren't up to date with the Neutron LBaaS API
>Revision, the project/tenant/user provides a secret (container?) id when
>enabling SSL/TLS functionality.
>
>* Example: If a user makes a change to a secret/container in Barbican
>then Neutron LBaaS will see an event and take the appropriate action.
>
>PROS:
> - Barbican is going to create an eventing system regardless so it will
>be supported.
> - Decisions are made on behalf of the user which lessens the amount of
>calls the user has to make.
>
>CONS:
> - An eventing framework can become complex especially since we need to
>ensure delivery of an event.
> - Implementing an eventing system will take more time than option #2ŠI
>think.
>
>2. Push orchestration decisions to API users. This idea comes with two
>assumptions. The first assumption is that most providers' customers use
>the cloud via a GUI, which in turn can handle any orchestration decisions
>that need to be made. The second assumption is that power API users are
>savvy and can handle their decisions as well. Using this method requires
>services, such as LBaaS, to "register" in the form of metadata to a
>barbican container.
>
>* Example: If a user makes a change to a secret the GUI can see which
>services are registered and opt to warn the user of consequences. Power
>users can look at the registered services and make decisions how they see
>fit.
>
>PROS:
> - Very simple to implement. The only code needed to make this a reality
>is at the control plane (API) level.
> - This option is more loosely coupled that option #1.
>
>CONS:
> - Potential for services to not register/unregister. What happens in
>this case?
> - Pushes complexity of decision making on to GUI engineers and power API
>users.
>
>
>I would like to get a consensus on which option to move forward with ASAP
>since the hackathon is coming up and delivering Barbican to Neutron LBaaS
>integration is essential to exposing SSL/TLS functionality, which almost
>everyone has stated is a #1/#2 priority.
>
>I'll start the decision making process by advocating for option #2. My
>reason for choosing option #2 has to deal mostly with the simplicity of
>implementing such a mechanism. Simplicity also means we can implement the
>necessary code and get it approved much faster which seems to be a
>concern for everyone. What option does everyone else want to move forward
>with?
>
>
>
>Chee

Re: [openstack-dev] [Neutron][LBaaS] dealing with M:N relashionships for Pools and Listeners

2014-06-06 Thread Jorge Miramontes
A couple of questions have come to mind since reading this thread:

1) We are assuming that load balancers can only operate on one update at a
time correct? I.E. We are not allowing multiple updates to occur
concurrently? Whatever the case on this I advocate that we do NOT allow
concurrent modification as complexity of the system increases dramatically
and the gain is very small since load balancers are usually configured
upon create and then rarely updated over their lifetime.

2) It appears that sharing objects is causing complexity to creep in the
deeper the conversations are getting. For example, managing object
statuses seems like a nightmare! That said, does the fundamental rationale
for sharing objects outweigh the simplicity of not sharing objects?

I understand that we've been diligently working on the API revision but I
wanted to take a step back and look at the goal we are trying to solve and
weigh the perceived need for complexity (i.e "flexibility" in the API) vs
a simpler solution.

Cheers,
--Jorge




From:  , German 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Tuesday, June 3, 2014 5:40 PM
To:  "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [Neutron][LBaaS] dealing with M:N
relashionships for Pools and Listeners


>Hi,
> 
>From deep below in the e-mail chain:
>Same here. Cascade-deleting of shared objects should not be allowed in
>any case.
> 
>Being able to delete all lbs and related constructs after a customer
>leaves and/or for tests is a pretty important requirements for us. It
>does not necessarily
> have to be accomplished by a cascading delete on the user api (we could
>use an admin api for that) but it is important in  our data model to
>avoid  constraint violation when we want to clean everything outŠ
> 
>I am still with Jorge that sharing of objects in whatever form might
>confuse customers who will then use up costly customer support time and
>hence not entirely
> in the interest of us public cloud providers. The examples with the
>status is another example for thatŠ
> 
>German
> 
>From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
>
>Sent: Friday, May 30, 2014 9:44 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] dealing with M:N
>relashionships for Pools and Listeners
>
> 
>Hi y'all!
> 
>
>Re-responses inline:
>
> 
>On Fri, May 30, 2014 at 8:25 AM, Brandon Logan
> wrote:
>
>> § Where can a user check the success of the update?
>>
>>
>>
>>
>> Depending on the object... either the status of the child object
>> itself or all of its affected parent(s). Since we're allowing reusing
>> of the pool object, when getting the status of a pool, maybe it makes
>> sense to produce a list showing the status of all the pool's members,
>> as well as the update status of all the listeners using the pool?
>
>This is confusing to me.  Will there be a separate provisioning status
>field on the loadbalancer and just a generic status on the child
>objects?  I get the idea of a pool having a status the reflects the
>state of all of its members.  Is that what you mean by status of a child
>object?
> 
>
>It seems to me that we could use the 'generic status' field on the load
>balancer to show provisioning status as well. :/  Is there a compelling
>reason we couldn't do this? (Sam?)
>
>And yes, I think that's what I mean with one addition. For example:
>
>If I have Listener A and B which use pool X which has members M and N...
>if I set member 'M' to be 'ADMIN_STATE_DISABLED', then what I would
>expect to see, if I ask for the status of pool X immediately after this
>change is:
>
>* An array showing N is 'UP' and 'M' is in state 'ADMIN_STATE_DISABLED'
>and
>
>* An array showing that listeners 'A' and 'B' are in 'PENDING_UPDATE'
>state (or something similar).
>
> 
>
>I would also expect listeners 'A' and 'B' to go back to 'UP' state
>shortly thereafter.
>
> 
>
>Does this make sense?
>
> 
>
>Note that there is a problem with my suggestion: What does the status of
>a member mean when the member is referenced indirectly by several
>listeners?  (For example, listener A could see member N as being UP,
>whereas listener B could see
> member N as being DOWN.)  Should member statuses also be an array from
>the perspective of each listener? (in other words, we'd have a
>two-dimensional array here.)
>
>If we do this then perhaps the right thing to do is just list the pool
>members' statuses in context of the listeners.  In other words, if we're
>reporting this way, then given the same scenario above, if we set member
>'M' to be 'ADMIN_STATE_DISABLED', then asking
> for the status of pool X immediately after this change is:
>
>* (Possibly?) an array for each listener status showing them as
>'PENDING_UPDATE'
>
>* An array for member statuses which contain:
>
>** An array which shows member N is 'UP' for listener 'A' and 'DOWN' for
>listener 'B'
>
>** An ar

[openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-06 Thread Jorge Miramontes
Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on
how Barbican and Neutron LBaaS will interact. There are currently two
ideas in play and both will work. If you have another idea please free to
add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets from
Barbican. For those that aren't up to date with the Neutron LBaaS API
Revision, the project/tenant/user provides a secret (container?) id when
enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican then
Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will be
supported.
 - Decisions are made on behalf of the user which lessens the amount of
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2ŠI
think.

2. Push orchestration decisions to API users. This idea comes with two
assumptions. The first assumption is that most providers' customers use
the cloud via a GUI, which in turn can handle any orchestration decisions
that need to be made. The second assumption is that power API users are
savvy and can handle their decisions as well. Using this method requires
services, such as LBaaS, to "register" in the form of metadata to a
barbican container.

* Example: If a user makes a change to a secret the GUI can see which
services are registered and opt to warn the user of consequences. Power
users can look at the registered services and make decisions how they see
fit.

PROS:
 - Very simple to implement. The only code needed to make this a reality
is at the control plane (API) level.
 - This option is more loosely coupled that option #1.

CONS:
 - Potential for services to not register/unregister. What happens in this
case?
 - Pushes complexity of decision making on to GUI engineers and power API
users.


I would like to get a consensus on which option to move forward with ASAP
since the hackathon is coming up and delivering Barbican to Neutron LBaaS
integration is essential to exposing SSL/TLS functionality, which almost
everyone has stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My
reason for choosing option #2 has to deal mostly with the simplicity of
implementing such a mechanism. Simplicity also means we can implement the
necessary code and get it approved much faster which seems to be a concern
for everyone. What option does everyone else want to move forward with?



Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements around statistics and billing

2014-06-06 Thread Jorge Miramontes
Hey Stephen,

What we really care about are the following:

- Inbound bandwidth (bytes)
- Outbound bandwidth (bytes)
- "Instance" Uptime (requires create/delete events)

Just to note our current LBaaS implementation at Rackspace keeps track of
when features are enabled/disabled. For example, we have markers for when
SSL is turned on/off, markers for when we suspend/unsuspend load
balancers, etc. Some of this stuff is used for tracking purposes, some of
it is used for billing purposes and some of it used for both purposes. We
also keep track of all user initiated API requests to help us out when
issues arise.

>From my experience building usage collection systems just know it is not a
trivial task, especially if we need to track events. One good tip is to be
as explicit as possible and as granular as possible. Being implicit causes
bad things to happen. Also, if we didn't have UDP as a protocol I would
recommend using Hadoop's map reduce functionality to get accurate
statistics by map-reducing request logs.

I would not advocate tracking per node statistics as the user can track
that information by themselves if they really want to. We currently, don't
have any customers that have asked for this feature.

If you want to tackle the usage collection problem for Neutron LBaaS I
would be glad to help as I've got quite a bit of experience in this
subject matter.

Cheers,
--Jorge




From:  , German 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Tuesday, June 3, 2014 5:20 PM
To:  "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [Neutron][LBaaS] Requirements around
statistics and  billing


>Hi Stephen,
> 
>We would like all those numbers as well
>J
> 
>Additionally, we measure:
>·
>When a lb instance was created, deleted, etc.
>·
>For monitoring we ³ping² a load balancers health check and report/act on
>the results
>·
>For user¹s troubleshooting we make the haproxy logs available. Which
>contain connection information like from, to, duration, protocol, status
>(though
> we frequently have been told that this is not really useful for
>debuggingŠ) and of course having that more gui-fied would be neat
> 
>German
> 
> 
> 
>From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
>
>Sent: Tuesday, May 27, 2014 8:22 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: [openstack-dev] [Neutron][LBaaS] Requirements around statistics
>and billing
>
> 
>Hi folks!
> 
>
>We have yet to have any kind of meaningful discussion on this list around
>load balancer stats (which, I presume to include data that will
>eventually need to be consumed by a billing system). I'd like to get the
>discussion started here,
> as this will have significant meaning for how we both make this data
>available to users, and how we implement back-end systems to be able to
>provide this data.
>
> 
>
>So!  What kinds of data are people looking for, as far as load balancer
>statistics.
>
> 
>
>For our part, as an absolute minimum we need the following per
>loadbalancer + listener combination:
>
> 
>
>* Total bytes transferred in for a given period
>
>* Total bytes transferred out for a given period
>
> 
>
>Our product and billing people I'm sure would like the following as well:
>
> 
>
>* Some kind of peak connections / second data (95th percentile or average
>over a period, etc.)
>
>* Total connections for a given period
>
>* Total HTTP / HTTPS requests served for a given period
>
> 
>
>And the people who work on UIs and put together dashboards would like:
>
> 
>
>* Current requests / second (average for last X seconds, either
>on-demand, or simply dumped regularly).
>
>* Current In/Out bytes throughput
>
> 
>
>And our monitoring people would like this:
>
> 
>
>* Errors / second
>
>* Current connections / second and bytes throughput secant slope (ie.
>like derivative but easier to calculate from digital data) for last X
>seconds (ie. detecting massive spikes or drops in traffic, potentially
>useful for detecting a problem
> before it becomes critical)
>
> 
>
>And some of our users would like all of the above data per pool, and not
>just for loadbalancer + listener. Some would also like to see it per
>member (though I'm less inclined to make this part of our standard).
>
> 
>
>I'm also interested in hearing vendor capabilities here, as it doesn't
>make sense to design stats that most can't implement, and I imagine
>vendors also have valuable data on what their customer ask for / what
>stats are most useful in troubleshooting.
>
> 
>
>What other statistics data for load balancing are meaningful and
>hopefully not too arduous to calculate? What other data are your users
>asking for or accustomed to seeing?
>
> 
>
>Thanks,
>
>Stephen
>
> 
>
>-- 
>Stephen Balukoff 
>Blue Box Group, LLC
>(800)613-4305 x807 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.ope

Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Jorge Miramontes
Sam, our larger customers especially care about affinity since they have many 
load balancer instances. Their use case usually centers around being 
re-sellers. Also, if you have a deployment that utilizes several load balancers 
our customers have created tickets to ensure they are on different host 
machines so that they can mitigate host machine outages (we currently don't 
allow the tenant to choose affinity within a cluster, only across DC's). In a 
nut shell, the use cases are relevant, especially to our larger 
customers/tenants (i.e. customers that we pay special attention to since they 
bring in the majority of revenue).

Let me know if I am misunderstanding this,and please explain it
further.
A single neutron port can have many fixed ips on many subnets.  Since
this is the case you're saying that there is no need for the API to
define multiple VIPs since a single neutron port can represent all the
IPs that all the VIPs require?
Right, if you want to to have both ipv4 and ipv6 addresses on the VIP then it's 
possible with single neutron port.
So multiple VIPs for this case are not needed.

Eugene/Sam, a single Neutron port does allow for multiple subnets. However, 
this precludes tenants from having a load balancer that serves multiple 
networks. An example use case is the following:

"As a tenant I have several isolated private networks that were created to host 
different aspects of my business. They have been in use for a while. I also 
have a new shared service (i.e. a database, wiki, etc.) that needs to be load 
balanced. I want each isolated private network to access the load balanced 
service."

As you can see this requires multiple vips. I can think of several other use 
cases but agree with others that even if multiple vips aren't needed (which 
they are) a load balancer object is still needed for everything that Stephen 
presented.

Cheers,
--Jorge

From: Samuel Bercovici mailto:samu...@radware.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, May 9, 2014 3:37 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

It boils down to two aspects:

1.   How common is it for tenant to care about affinity or have more than a 
single VIP used in a way that adding an additional (mandatory) construct makes 
sense for them to handle?

For example if 99% of users do not care about affinity or will only use a 
single VIP (with multiple listeners). In this case does adding an additional 
object that tenants need to know about makes sense?

2.   Scheduling this so that it can be handled efficiently by different 
vendors and SLAs. We can elaborate on this F2F next week.

Can providers share their statistics to assist to understand how common are 
those use cases?

Regards,
-Sam.



From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, May 09, 2014 9:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

Hi Eugene,

This assumes that 'VIP' is an entity that can contain both an IPv4 address and 
an IPv6 address. This is how it is in the API proposal and corresponding object 
model that I suggested, but it is a slight re-definition of the term "virtual 
IP" as it's used in the rest of the industry. (And again, we're not yet in 
agreement that 'VIP' should actually contain two ip addresses like this.)

In my mind, the main reasons I would like to see the container object are:


  *   It solves the colocation / apolcation (or affinity / anti-affinity) 
problem for VIPs in a way that is much more intuitive to understand and less 
confusing for users than either the "hints" included in my API, or something 
based off the nova blueprint for doing the same for virtual servers/containers. 
(Full disclosure: There probably would still be a need for some anti-affinity 
logic at the logical load balancer level as well, though at this point it would 
be an operator concern only and expressed to the user in the "flavor" of the 
logical load balancer object, and probably be associated with different billing 
strategies. "The user wants a dedicated physical load balancer? Then he should 
create one with this flavor, and note that it costs this much more...")
  *   From my experience, users are already familiar with the concept of what a 
logical load balancer actually is (ie. something that resembles a physical or 
virtual appliance from their perspective). So this probably fits into their 
view of the world better.
  *   It makes sense for "Load Balancer as a Service" to hand out logical load 
balancer objects. I think this will aid in a more intuitive understanding of 
the service for users who otherwise don't want to be concerned with operations.
  *   This opens up the 

Re: [openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday, 05/08 14-00 UTC

2014-05-07 Thread Jorge Miramontes
All of our relevant material is in this Google Drive folder ==>
https://drive.google.com/#folders/0B_x8_4x6DRLad1NZMjgyVFhqakU

Cheers,
--Jorge




On 5/7/14 1:19 PM, "Kyle Mestery"  wrote:

>Lets go over the Rackspace portion of the API comparison tomorrow
>then, and we can cover Stephen's on the ML when it's complete.
>
>On Wed, May 7, 2014 at 4:55 AM, Stephen Balukoff 
>wrote:
>> Howdy, y'all!
>>
>> I just wanted to give you a quick update: It looks like the Rackspace
>>team
>> is mostly done with their half of the API comparison, however, it is
>> extremely unlikely I'll be able to finish my half of this in time for
>>the
>> team meeting this Thursday. I apologize for this.
>>
>> Stephen
>>
>>
>> On Tue, May 6, 2014 at 1:27 PM, Eugene Nikanorov
>>
>> wrote:
>>>
>>> Hi folks,
>>>
>>> This will be the last meeting before the summit, so I suggest we will
>>> focus on the agenda for two
>>> design track slots we have.
>>>
>>> Per my experience design tracks are not very good for in-depth
>>>discussion,
>>> so it only make sense to present a road map and some other items that
>>>might
>>> need core team attention like interaction with Barbican and such.
>>>
>>> Another item for the meeting will be comparison of API proposals which
>>>as
>>> an action item from the last meeting.
>>>
>>> Thanks,
>>> Eugene.
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Stephen Balukoff
>> Blue Box Group, LLC
>> (800)613-4305 x807
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

2014-05-06 Thread Jorge Miramontes
Okay, makes sense to gather all data now and interpret this later. I'm too
jaded for these types of debates right now since the summit is around the
corner.

Cheers,
--Jorge




On 5/6/14 3:32 PM, "Jay Pipes"  wrote:

>On 05/06/2014 04:22 PM, Stephen Balukoff wrote:
>> I think the plan is to release all the raw results of this to the public
>> as well--  meaning that it should be possible to come up with a
>> "representative average" per organization, as well as several other ways
>> to interpret the data. Right now, the emphasis is just to gather the
>> data. We can decide how to interpret it later.
>
>I don't think a representative average for each organization is
>necessarily useful (compared to the average over some larger grouping),
>but sure, the focus should be on data collection now, not interpretation.
>
>> There's a reason this survey is not anonymous. :)
>
>++
>
>-jay
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

2014-05-06 Thread Jorge Miramontes
I agree that everyone's thoughts should be in it. I don't see why a
representative vote does not allow for that. Sam put a text box on each
use case to capture extra thoughts.

I would hope that no organization would be so confused as to have widely
varying viewpoints on *what their customers want*, since that is the
supposed purpose of all of this, right? We're supposed to be deciding
which use-cases matter *to our customers*, so there should be no real
variance for what I would vote versus what my teammates would vote, since
we have the same customersŠ


Also, if we are using this as a type of voting mechanism then interests of
large/vocal organizations drown out smaller organizations. If this is
being used as a voting mechanism then how do you suggest we weight votes
for smaller companies so that we do not alienate them from further
voting/discussions?

Cheers,
--Jorge




On 5/6/14 1:52 PM, "Jay Pipes"  wrote:

>On 05/06/2014 02:42 PM, Jorge Miramontes wrote:
>> Sam,
>>
>> I'm assuming you want one person from each company to answer correct?
>> I'm pretty sure people in each organization will vote the sameŠat least
>> I'd hope!
>
>I'd hope not! :)
>
>Even within the same organization or company, we all have different
>ideas on use cases, the appropriateness of certain things "in the
>cloud", and the role of a load balancer service in the general mix of
>things.
>
>I certainly would hope that lots of Mirantis engineers other than myself
>fill out the use case survey and offer their own insights.
>
>Best,
>-jay
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

2014-05-06 Thread Jorge Miramontes
Sam,

I'm assuming you want one person from each company to answer correct? I'm 
pretty sure people in each organization will vote the same…at least I'd hope!

Cheers,
--Jorge

From: Samuel Bercovici mailto:samu...@radware.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, May 6, 2014 2:56 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

Hi Everyone,

The survey is now live via: http://eSurv.org?u=lbaas_project_user
The password is: lbaas

The survey includes all the tenant facing use cases from 
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing
Please try and fill the survey this week so we can have enough information to 
base decisions next week.

Regards,
-Sam.



From: Samuel Bercovici
Sent: Monday, May 05, 2014 4:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

Hi,

I will not freeze the document to allow people to work on requirements which 
are not tenant facing (ex: operator, etc.)
I think that we have enough use cases for tenant facing capabilities to reflect 
most common use cases.
I am in the process of creation a survey in surveymonkey for tenant facing use 
cases and hope to send it to ML ASAP.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Thursday, May 01, 2014 8:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici
Subject: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

Hi Everyone!

To assist in evaluating the use cases that matter and since we now have ~45 use 
cases, I would like to propose to conduct a survey using something like 
surveymonkey.
The idea is to have a non-anonymous survey listing the use cases and ask you 
identify and vote.
Then we will publish the results and can prioritize based on this.

To do so in a timely manner, I would like to freeze the document for editing 
and allow only comments by Monday May 5th 08:00AMUTC and publish the survey 
link to ML ASAP after that.

Please let me know if this is acceptable.

Regards,
-Sam.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-05-01 Thread Jorge Miramontes
As usual, comments are inline.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 3:10 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

Hi,


On Thu, May 1, 2014 at 10:46 PM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hey Eugene,

I think there is a misunderstanding on what iterative development means to you 
and me and I want to make sure we are on the same page. First of all, I'll try 
not to use the term "duct-taping" even though it's a widely used term in the 
industry.
I'm not against the term itself.
It was applied several times to existing code base, apparently without ANY real 
code analysis.
That's especially clearly seen because all API proposals so far are focusing on 
managing the same set of lb primitives.
Yes, the proposals introduce some new primitives; yes some attributes and 
relationships differ from what is in the code.
But nothing was proposed so far that would require to completely throw away 
existing code, not a single requirement.

I understand that writing something from scratch can be more convenient for 
developers than studying existing code, but that's something we all have to do 
when working on opensource project.

To be perfectly clear we are not advocating "starting from scratch". If it has 
come out that way then let me be the first to correct that on behalf of my 
team. In reality, defining a brand new API specification is irrelevant to 
implementation. I like to see defining a spec as similar to defining an RFC. 
The reason why I don't even want to think about implementation is that it does 
not allow discussion to be open-minded. I agree that a particular API proposal 
might be easier to mold existing code to than another. However, this goes 
against the mentality of comparing on equal footing. You, for example, seem 
biased toward Stephen's proposal because you understand the current code base 
the best (since you wrote the majority of it) and see his proposal as most 
inline with said code. However, I ask that you try not to let current 
implementation cloud your judgement. If Stephen's proposal is what the 
community agrees upon then great! All I ask is that we compare fairly and 
without implementation in mind since we are defining what we want, not what we 
currently have in place. Once an API specification is agreed upon, then and 
only then, should we figure out how to mold the existing implementation towards 
the state the spec defines. Does that make sense?


My main concern is that implementing code on top of the current codebase to 
meet the smorgasbord of new requirements without thinking about overall design 
(since we know we will eventually want all the requirements satisfied at some 
point per your words)
Overall design was thought out long before we started having all these 
discussions.
And things are not quick in neutron project, that's regardless of amount of dev 
resources lbaas subteam may have.

While overall design may have been thought out long ago it doesn't mean that 
the discussion should be closed. By saying this, you are implying that 
newcomers are not welcome those discussions. At least, that is how your 
statement rubs off on me. I'll give you the benefit of the doubt to correct my 
understanding of that.


is that some requirement implemented 6 months from now may change code 
architecture. Since we know we want to meet all requirements eventually, its 
makes logical sense to design for what we know we need and then figure out how 
to iteratively implement code over time.
That was initially done on Icehouse summit, and we just had to reiterate the 
discussion for new subteam members who has joined recently.
I agree that "to design for what we know we need", but the primary option 
should be to continue existing work and analyse it to find gaps, that what 
Samuel and me were focusing on. Stephen's proposal also goes along this idea 
because everything in his doc can be implemented gradually starting from 
existing code.

That being said, if it makes sense to use existing code first then fine. In 
fact, I am a fan of trying manipulate as little code as possible unless we 
absolutely have to. I just want to be a smart developer and design knowing I 
will eventually have to implement something. Not keeping things in mind can be 
dangerous.
I fully agree and that's well understood.

In short, I want to avoid having to perform multiple code refactors if possible 
and design upfront with the list of requirements the community has spent time 
fleshing out.

Also, it seems li

Re: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

2014-05-01 Thread Jorge Miramontes
That sounds good to me. The only thing I would caution is that we have 
prioritized certain requirements (like HA and SSL Termination) and I want to 
ensure we use the survey to compliment what we have already mutually agreed 
upon. Thanks for spearheading this!

Cheers,
--Jorge

From: Samuel Bercovici mailto:samu...@radware.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 12:39 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

Hi Everyone!

To assist in evaluating the use cases that matter and since we now have ~45 use 
cases, I would like to propose to conduct a survey using something like 
surveymonkey.
The idea is to have a non-anonymous survey listing the use cases and ask you 
identify and vote.
Then we will publish the results and can prioritize based on this.

To do so in a timely manner, I would like to freeze the document for editing 
and allow only comments by Monday May 5th 08:00AMUTC and publish the survey 
link to ML ASAP after that.

Please let me know if this is acceptable.

Regards,
-Sam.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-05-01 Thread Jorge Miramontes
Hey Eugene,

I think there is a misunderstanding on what iterative development means to you 
and me and I want to make sure we are on the same page. First of all, I'll try 
not to use the term "duct-taping" even though it's a widely used term in the 
industry. My main concern is that implementing code on top of the current 
codebase to meet the smorgasbord of new requirements without thinking about 
overall design (since we know we will eventually want all the requirements 
satisfied at some point per your words) is that some requirement implemented 6 
months from now may change code architecture. Since we know we want to meet all 
requirements eventually, its makes logical sense to design for what we know we 
need and then figure out how to iteratively implement code over time. That 
being said, if it makes sense to use existing code first then fine. In fact, I 
am a fan of trying manipulate as little code as possible unless we absolutely 
have to. I just want to be a smart developer and design knowing I will 
eventually have to implement something. Not keeping things in mind can be 
dangerous. In short, I want to avoid having to perform multiple code refactors 
if possible and design upfront with the list of requirements the community has 
spent time fleshing out.

Also, it seems like you have some implicit developer requirements that I'd like 
written somewhere. This may ease confusion as well. For example, you stated 
"Consistency is important". A clear definition in the form of a developer 
requirement would be nice so that the community understands your expectations.

Lastly, in relation to operator requirements I didn't see you comment on 
whether you are fan of working on an open-source driver together. Just so you 
know, operator requirements are very important for us and I honestly don't see 
how we can use any current driver without major modifications. This leads me to 
want to create a new driver with operator requirements being central to the 
design.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 8:12 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

Hi Jorge,

A couple of inline comments:

Now that we have a set of requirements the next question to ask is, "How
doe we prioritize requirements so that we can start designing and
implementing them"?
Prioritization basically means that we want to support everything and only 
choose what is
more important right now and what is less important and can be implemented 
later.

Assuming requirements are prioritized (which as of today we have a pretty
good idea of these priorities) the next step is to design before laying
down any actual code.
That's true. I'd only would like to notice that there were actually a road map 
and requirements
with design before the code was written, that's both for the features that are 
already implemented,
and those which now are hanging in limbo.

I agree with Samuel that pushing the cart before the
horse is a bad idea in this case (and it usually is the case in software
development), especially since we have a pretty clear idea on what we need
to be designing for. I understand that the current code base has been
worked on by many individuals and the work done thus far is the reason why
so many new faces are getting involved. However, we now have a completely
updated set of requirements that the community has put together and trying
to fit the requirements to existing code may or may not work.

In my experience, I would argue that 99% of the time duct-taping existing code
I really don't like the term "duct-taping" here.
Here's the problem: you'll never will be able to implement everything at once, 
you have to do it incrementally.
That's how ecosystem works.
Each step can be then considered as 'duct-taping' because each state you're 
getting to
is not accounting for everything what was planned.
And for sure, there will be design mistakes that need to be fixed.
In the end there will be another cloud provider with another set of 
requirements...

So in order to deal with that in a productive way there are a few guidelines:
1) follow the style of ecosystem. Consistency is important. Keeping the style 
helps both developers, reviewers and users of the product.
2) Preserve backward compatibility whenever possible.
That's a very important point which however can be 'relaxed' if existing code 
base is completely unable to evolve to support new requirements.

to fit in new requirements results in buggy software. That being said, I
usually don't like to rebuild a project from scratch. If I can I try to
refactor as much as possible first. However, in this case we have a
particular set of requirements that changes the game. P

Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS

2014-04-30 Thread Jorge Miramontes
I agree it may be odd, but is that a strong argument? To me, following RESTful 
style/constructs is the main thing to consider. If people can specify 
everything in the parent resource then let them (i.e. single call). If they 
want to specify at a more granular level then let them do that too (i.e. 
multiple calls). At the end of the day the API user can choose the style they 
want.

Cheers,
--Jorge

From: Youcef Laribi mailto:youcef.lar...@citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, April 30, 2014 1:35 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style 
in LBaaS

Sam,

I think it’s important to keep the Neutron API style consistent. It would be 
odd if LBaaS uses a different style than the rest of the Neutron APIs.

Youcef

From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Wednesday, April 30, 2014 10:59 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in 
LBaaS

Hi Everyone,

During the last few days I have looked into the different LBaaS API proposals.
I have also looked on the API style used in Neutron. I wanted to see how 
Neutron APIs addressed “tree” like object models.
Follows my observation:

1.   Security groups -  
http://docs.openstack.org/api/openstack-network/2.0/content/security-groups-ext.html)
 –

a.   
security-group-rules
 are children of 
security-groups,
 the capability to create a security group with its children in a single call 
is not possible.

b.   The capability to create 
security-group-rules
 using the following URI path 
v2.0/security-groups/{SG-ID}/security-group-rules
 is not supported

c.The capability to update 
security-group-rules
 using the following URI path 
v2.0/security-groups/{SG-ID}/security-group-rules/{SGR-ID}
 is not supported

d.   The notion of creating 
security-group-rules
 (child object) without providing the parent {SG-ID} is not supported

2.   Firewall as a service - 
http://docs.openstack.org/api/openstack-network/2.0/content/fwaas_ext.html - 
the API to manage firewall_policy and firewall_rule which have parent child 
relationships behaves the same way as Security groups

3.   Group Policy – this is work in progress - 
https://wiki.openstack.org/wiki/Neutron/GroupPolicy - If I understand 
correctly, this API has a complex object model while the API adheres to the way 
other neutron APIs are done (ex: flat model, granular api, etc.)

How critical is it to preserve a consistent API style for LBaaS?
Should this be a consideration when evaluating API proposals?

Regards,
-Sam.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-04-30 Thread Jorge Miramontes
Oops! Everywhere I said Samuel I meant Stephen. Sorry you both have SB as
you initials so I got confused. :)

Cheers,
--Jorge




On 4/30/14 5:17 PM, "Jorge Miramontes" 
wrote:

>Hey everyone,
>
>I agree that we need to be preparing for the summit. Using Google docs
>mixed with Openstack wiki works for me right now. I need to become more
>familiar the gerrit process and I agree with Samuel that it is not
>conducive to "large" design discussions. That being said I'd like to add
>my thoughts on how I think we can most effectively get stuff done.
>
>As everyone knows there are many new players from across the industry that
>have an interest in Neutron LBaaS. Companies I currently see
>involved/interested are Mirantis, Blue Box Group, HP, PNNL, Citrix,
>eBay/Paypal and Rackspace. We also have individuals involved as well. I
>echo Kyle's sentiment on the passion everyone is bringing to the project!
>Coming into this project a few months ago I saw that a few things needed
>to be done. Most notably, I realized that gathering everyone's
>expectations on what they wanted Neutron LBaaS to be was going to be
>crucial. Hence, I created the requirements document. Written requirements
>are important within a single organization. They are even more important
>when multiple organizations are working together because everyone is
>spread out across the world and every organization has a different
>development process. Again, my goal with the requirements document is to
>make sure that everyone's voice in the community is taken into
>consideration. The benefit I've seen from this document is that we ask
>"Why?" to each other, iterate on the document and in the end have a clear
>understanding of everyone's motives. We also learn from each other by
>doing this which is one of the great benefits of open source.
>
>Now that we have a set of requirements the next question to ask is, "How
>doe we prioritize requirements so that we can start designing and
>implementing them"? If this project were a completely new piece of
>software I would argue that we iterate on individual features based on
>anecdotal information. In essence I would argue an agile approach.
>However, most of the companies involved have been operating LBaaS for a
>while now. Rackspace, for example, has been operating LBaaS for the better
>part of 4 years. We have a clear understanding of what features our
>customers want and how to operate at scale. I believe other operators of
>LBaaS have the same understanding of their customers and their operational
>needs. I guess my main point is that, collectively, we have data to back
>up which requirements we should be working on. That doesn't mean we
>preclude requirements based on anecdotal information (i.e. "Our customers
>are saying they want new shiny feature X"). At the end of the day I want
>to prioritize the community's requirements based on factual data and
>anecdotal information.
>
>Assuming requirements are prioritized (which as of today we have a pretty
>good idea of these priorities) the next step is to design before laying
>down any actual code. I agree with Samuel that pushing the cart before the
>horse is a bad idea in this case (and it usually is the case in software
>development), especially since we have a pretty clear idea on what we need
>to be designing for. I understand that the current code base has been
>worked on by many individuals and the work done thus far is the reason why
>so many new faces are getting involved. However, we now have a completely
>updated set of requirements that the community has put together and trying
>to fit the requirements to existing code may or may not work. In my
>experience, I would argue that 99% of the time duct-taping existing code
>to fit in new requirements results in buggy software. That being said, I
>usually don't like to rebuild a project from scratch. If I can I try to
>refactor as much as possible first. However, in this case we have a
>particular set of requirements that changes the game. Particularly,
>operator requirements have not been given the attention they deserve.
>
>I think of Openstack as being cloud software that is meant to operate at
>scale and have the necessary operator tools to do so. Otherwise, why do we
>have so many companies interested in Openstack if you can't operate a
>cloud that scales? In the case of LBaaS, user/feature requirements and
>operator requirements are not necessarily mutually exclusive. How you
>design the system in regards to one set of requirements affects the design
>of the system in regards to the other set of requirements. SSL
>termination, for example, affects the ability to scale since it is CPU
>intensive.

[openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-04-30 Thread Jorge Miramontes
Hey everyone,

I agree that we need to be preparing for the summit. Using Google docs
mixed with Openstack wiki works for me right now. I need to become more
familiar the gerrit process and I agree with Samuel that it is not
conducive to "large" design discussions. That being said I'd like to add
my thoughts on how I think we can most effectively get stuff done.

As everyone knows there are many new players from across the industry that
have an interest in Neutron LBaaS. Companies I currently see
involved/interested are Mirantis, Blue Box Group, HP, PNNL, Citrix,
eBay/Paypal and Rackspace. We also have individuals involved as well. I
echo Kyle's sentiment on the passion everyone is bringing to the project!
Coming into this project a few months ago I saw that a few things needed
to be done. Most notably, I realized that gathering everyone's
expectations on what they wanted Neutron LBaaS to be was going to be
crucial. Hence, I created the requirements document. Written requirements
are important within a single organization. They are even more important
when multiple organizations are working together because everyone is
spread out across the world and every organization has a different
development process. Again, my goal with the requirements document is to
make sure that everyone's voice in the community is taken into
consideration. The benefit I've seen from this document is that we ask
"Why?" to each other, iterate on the document and in the end have a clear
understanding of everyone's motives. We also learn from each other by
doing this which is one of the great benefits of open source.

Now that we have a set of requirements the next question to ask is, "How
doe we prioritize requirements so that we can start designing and
implementing them"? If this project were a completely new piece of
software I would argue that we iterate on individual features based on
anecdotal information. In essence I would argue an agile approach.
However, most of the companies involved have been operating LBaaS for a
while now. Rackspace, for example, has been operating LBaaS for the better
part of 4 years. We have a clear understanding of what features our
customers want and how to operate at scale. I believe other operators of
LBaaS have the same understanding of their customers and their operational
needs. I guess my main point is that, collectively, we have data to back
up which requirements we should be working on. That doesn't mean we
preclude requirements based on anecdotal information (i.e. "Our customers
are saying they want new shiny feature X"). At the end of the day I want
to prioritize the community's requirements based on factual data and
anecdotal information.

Assuming requirements are prioritized (which as of today we have a pretty
good idea of these priorities) the next step is to design before laying
down any actual code. I agree with Samuel that pushing the cart before the
horse is a bad idea in this case (and it usually is the case in software
development), especially since we have a pretty clear idea on what we need
to be designing for. I understand that the current code base has been
worked on by many individuals and the work done thus far is the reason why
so many new faces are getting involved. However, we now have a completely
updated set of requirements that the community has put together and trying
to fit the requirements to existing code may or may not work. In my
experience, I would argue that 99% of the time duct-taping existing code
to fit in new requirements results in buggy software. That being said, I
usually don't like to rebuild a project from scratch. If I can I try to
refactor as much as possible first. However, in this case we have a
particular set of requirements that changes the game. Particularly,
operator requirements have not been given the attention they deserve.

I think of Openstack as being cloud software that is meant to operate at
scale and have the necessary operator tools to do so. Otherwise, why do we
have so many companies interested in Openstack if you can't operate a
cloud that scales? In the case of LBaaS, user/feature requirements and
operator requirements are not necessarily mutually exclusive. How you
design the system in regards to one set of requirements affects the design
of the system in regards to the other set of requirements. SSL
termination, for example, affects the ability to scale since it is CPU
intensive. As an operator, I need to know how to provision load balancer
instances efficiently so that I'm not having to order new hardware more
than I have to. With this in mind, I am assuming that most of us are
vendor-agnostic and want to cooperate in developing an open source driver
while letting vendors create their own drivers. If this is not the case
then perhaps a lot of the debates we have been having are moot since we
can separate efforts depending on what driver we want to work on. The only
item of Neutron LBaaS that we need to have consensus on then i

Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Jorge Miramontes
+1 for German's use cases. We need SSL re-encryption for decisions the
load balancer needs to make at the l7 layer as well. Thanks Clint, for
your thorough explanation from a security standpoint.

Cheers,
--Jorge




On 4/18/14 1:38 PM, "Clint Byrum"  wrote:

>Excerpts from Stephen Balukoff's message of 2014-04-18 10:36:11 -0700:
>> Dang.  I was hoping this wasn't the case.  (I personally think it's a
>> little silly not to trust your service provider to secure a network when
>> they have root access to all the machines powering your cloud... but I
>> digress.)
>> 
>
>No one person or even group of people on the operator's network will have
>full access to everything. Security is best when it comes in layers. Area
>51 doesn't just have a guard shack and then you drive right into the
>hangars with the UFO's and alien autopsies. There are sensors, mobile
>guards, secondary checkpoints, locks on the outer doors, and locks on
>the inner doors. And perhaps most importantly, the MP who approves your
>entry into the first gate, does not even have access to the next one.
>
>Your SSL terminator is a gate. What happens once an attacker (whoever
>that may be, your disgruntled sysadmin, or rogue hackers) is behind that
>gate _may_ be important.
>
>> Part of the reason I was hoping this wasn't the case, isn't just
>>because it
>> consumes a lot more CPU on the load balancers, but because now we
>> potentially have to manage client certificates and CA certificates (for
>> authenticating from the proxy to back-end app servers). And we also
>>have to
>> decide whether we allow the proxy to use a different client cert / CA
>>per
>> pool, or per member.
>> 
>> Yes, I realize one could potentially use no client cert or CA (ie.
>> encryption but no auth)...  but that actually provides almost no extra
>> security over the unencrypted case:  If you can sniff the traffic
>>between
>> proxy and back-end server, it's not much more of a stretch to assume you
>> can figure out how to be a man-in-the-middle.
>>
>
>A passive attack where the MITM does not have to witness the initial
>handshake or decrypt/reencrypt to sniff things is quite a bit easier to
>pull off and would be harder to detect. So "almost no extra security"
>is not really accurate. But this is just one point of data for risk
>assessment.
>
>> Do any of you have a use case where some back-end members require SSL
>> authentication from the proxy and some don't? (Again, deciding whether
>> client cert / CA usage should attach to a "pool" or to a "member.")
>> 
>> It's a bit of a rabbit hole, eh.
>>
>
>Security turns into an endless rat hole when you just look at it as a
>product, such as "A secure load balancer."
>
>If, however, you consider that it is really just a process of risk
>assessment and mitigation, then you can find a sweet spot that works
>in your business model. "How much does it cost to mitigate the risk
>of unencrypted backend traffic from the load balancer?  What is the
>potential loss if the traffic is sniffed? How likely is it that it will
>be sniffed?" .. Those are ongoing questions that need to be asked and
>then reevaluated, but they don't have a fruitless stream of what-if's
>that have to be baked in like the product discussion. It's just part of
>your process, and processes go on until they aren't needed anymore.
>
>IMO a large part of operating a cloud is decoupling the ability to setup
>a system from the ability to enable your business with a system. So
>if you can communicate the risks of doing without backend encryption,
>and charge the users appropriately when they choose that the risk is
>worth the added cost, then I think it is worth it to automate the setup
>of CA's and client certs and put that behind an API. Luckily, you will
>likely find many in the OpenStack community who can turn that into a
>business opportunity and will help.
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements and API revision progress

2014-04-16 Thread Jorge Miramontes
Hi all,

In order to ease confusion I think I might create use case walk-throughs to 
show how the API would work. There's only been one week to work on this (minus 
other work) so I haven't had enough time to create them. I'll try to capture 
most of them in this form over the following week as I really think it will aid 
in understanding the document Brandon provided. Sometimes an illustration is 
easier to understand :). Anyways, just know that simplicity, flexibility and 
the ability to capture the majority of use cases was kept in mind when creating 
this proposal and I really think it will satisfy the requirements that everyone 
has put forth. See you all on IRC in a few hours!

Cheers,
--Jorge

From: Brandon Logan 
mailto:brandon.lo...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, April 16, 2014 9:17 PM
To: 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements and API revision 
progress

Stephen,
Commenting in line below

On 04/16/2014 07:56 PM, Stephen Balukoff wrote:
Hi y'all!

This is actually a pretty good start for a revision of the Neutron LBaaS API.

My feedback on your proposed API v2.0 is actually pretty close to Eugene's, 
with a couple additions:

You say 'only one port and protocol per load balancer', yet I don't know how 
this works. Could you define what a 'load balancer' is in this case?  (port and 
protocol are attributes that I would associate with a TCP or UDP listener of 
some kind.)  Are you using 'load balancer' to mean 'listener' in this case 
(contrary to previous discussion of this on this list and the one defined here 
https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary#Loadbalancer )?

Yes, it could be considered as a Listener according to that documentation.  The 
way to have a "listener" using the same VIP but listen on two different ports 
is something we call VIP sharing.  You would assign a VIP to one load balancer 
that uses one port, and then assign that same VIP to another load balancer but 
that load balancer is using a different port than the first one.  How the 
backend implements it is an implementation detail (redudant, I know).  In the 
case of HaProxy it would just add the second port to the same config that the 
first load balancer was using.  In other drivers it might be different.


As pointed out, one pool per load balancer breaks any L7 switching 
functionality. SSL and L7 were the two major features that spawned this whole 
discussion about LBaaS a couple months ago, so any solution we propose should 
probably have these features.
Yes we agree one pool per load balancer breaks L7 switching functionality.  
However, as I told Eugene, we also came up with a "content_switching" object 
that would be a part of the load balancer root object.  In that object it does 
define multiple pools and rules.  The details of the pools and rules may indeed 
need some tweaking, but that doesn't mean this solution breaks the L7 switching 
requirement.

As for SSL, this absolutely allows SSL.  Using the common use case for SSL 
Termination:
1. Create an HTTP load balancer listening on port 80.
2. Create an HTTPS load balancer listening on port 443 sharing the same VIP and 
pool as the first load balancer.  Also, add an SSL Termination/SSL Decryption 
object to this 2nd load balancer.

We did not say much about the SSL Termination/SSL Decryption object because we 
wanted to make sure it was able to meet other requirements before we started to 
discuss that.

Context switching is the *only* reason to have multiple pools per load 
balancer... and I really just don't understand where the "consistency" argument 
between having "a pool" vs. "pools." I don't understand why one would think 
having multiple pools for a load balancer (that doesn't need them) would be a 
desired way to handle this "inconsistency" problem. Anyway... There's been 
discussion of this previously here: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/l7  ...and I think I can 
illustrate (via proposed API) a better way to do this...  (in a nutshell, you 
need to have an additional object which links listeners to pools via a policy 
or rule. API is going to need to have controls to modify these rules.)

I'm not sure I fully understand the requirements behind the "single API call" 
proposal for creating a LBaaS service instance (whatever that means). 
Therefore, for now, I'm going to withhold any judgement on this or anything 
attempting to meet this requirement. Where does this need come from, and what 
are people expecting to see for their "single API call"?
The "single API call" is something we do currently use.  One reason to have it 
is because it is easier to understand from a user standpoint that creating a 
fully provisioned load balancer is done in one step at the /loadb

Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui screen captures

2014-04-11 Thread Jorge Miramontes
Hi Kevin,

We are trying to prioritize features based on actual data utilization. If you 
have some, by all means please add it to 
https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=0.
 One reason we are focusing on HTTP(S) and not FTP is that 0.27% of our lb 
instances leverage the FTP protocol. That being said, we are only one cloud 
provider so if you have an interesting use case please add it to the links that 
Sam added. Once it is in the docs then it will be easier for everyone to be 
aware of it and thus make for a more spirited discussion.

Cheers,
--Jorge

From: , Kevin M mailto:kevin@pnnl.gov>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, April 9, 2014 7:21 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>, 
"Eugene Nikanorov (enikano...@mirantis.com)" 
mailto:enikano...@mirantis.com>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web 
ui screen captures

I'm not seeing anything here about non http(s) related Load balancing.  We're 
interested in load balancing ssh, ftp, and other services too.

Thanks,
Kevin

From: Samuel Bercovici [samu...@radware.com]
Sent: Sunday, April 06, 2014 5:51 AM
To: OpenStack Development Mailing List 
(openstack-dev@lists.openstack.org); 
Eugene Nikanorov (enikano...@mirantis.com)
Subject: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui 
screen captures

Per the last LBaaS meeting.


1.   Please find a list of use cases.
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing


a)  Please review and see if you have additional ones for the project-user

b)  We can then chose 2-3 use cases to play around with how the CLI, API, 
etc. would look


2.   Please find a document to place screen captures of web UI. I took the 
liberty to place a few links showing ELB.
https://docs.google.com/document/d/10EOCTej5CvDfnusv_es0kFzv5SIYLNl0uHerSq3pLQA/edit?usp=sharing


Regards,
-Sam.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Clarification in regards to https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

2014-04-09 Thread Jorge Miramontes
Answers inlined. Thanks for the questions! They forced me to think about 
certain features.

Cheers,
--Jorge

From: Samuel Bercovici mailto:samu...@radware.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, April 9, 2014 6:10 AM
To: "OpenStack Development Mailing List 
(openstack-dev@lists.openstack.org)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Neutron][LBaaS]Clarification in regards to 
https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

Hi,

I have looked at 
https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1
 and have a few questions:

1.   Monitoring Tab:

a.   Are there users that use load balancing who do not monitor members? 
Can you share the use cases where this makes sense?

This is a good question. In our case we supply static ip addresses so some 
users have only one backend node. With one node it isn't necessary. Another 
case I can think of is lbs that are being used for non-critical environments 
(i.e. dev or testing environment). For the most part it would make sense to 
have monitoring.

b.  Does it make sense to define the different type of monitors (ex: TCP, 
HTTP HTTPS)?

Yes it does. Http monitoring, for example, allows you to monitor specific 
URI's. I just put total utilization for all three to get some data out.

c.   Does any existing cloud service besides the current implementation of 
the LBaaS API supports using multiple monitors on the same pool? Is this a 
required feature?

I would think multiple monitors wouldn't make sense as they could potentially 
conflict. How would a decision be made in such a case?

2.   Logging Tab:

a.   What is logging use for?

This is specifically connection logging. It allows the user to see all of the 
requests that went through the load balancer. It is mostly used for big data 
and troubleshooting.

b.  How does the tenant consume the logs?

For our offering, we send their logs in a compressed format to swift. However, 
I am open to discussion on how to handle this in a more flexible manner.

3.   SSL Tab:

a.   Please explain if SSL means passing SSL traffic through the load 
balancer or using the load balancer to terminate certificates.

SSL termination. I updated the tab.

b.  Does it make sense to separate those (SSL termination and non HTTPS 
terminated traffic) as different rows?

Blue Box added a few extra rows. I identified lbs that terminate only secure 
traffic and lbs that allow both secure and insecure traffic.

c.   Can anyone explain the use cases for SSL_MIXED?

A lot of web sites have mixed content. The lb terminates the secure traffic. 
The insecure traffic passes through normally.

4.   HA Tab:

a.   Is this a tenant facing option or is it the way the operator chose to 
implement the service

For us, this is operator implementation. However, since most lbs are considered 
mission critical almost all production users require HA. I could see this being 
a toggable feature from the tenant side if they wanted to use a lb for testing 
or something non mission critical.

5.   Content Caching Tab:

a.   Is this a load balancer feature or a CDN like feature.

This is a lb feature. However, depending on the amount of content you'd like to 
cache using a CDN may be overkill. Here is a link that may shed some light: 
http://www.rackspace.com/knowledge_center/article/content-caching-for-cloud-load-balancers

6.   L7

a.   Does any cloud provider support L7 switching and L7 content 
modifications?

We currently do not.

b.  If so can you please add a tab noting how much such features are used?

N/A – Delegating to someone who actually has data.

c.   If not, can anyone attest to whether this feature was requested by 
customers?

Good question. I can see the use cases but operator data on this would be nice 
for those that have it. We have had a few requests but not enough that would 
warrant development effort at this time. Hence, I would mark this priority low 
unless we can back it up with data.

Thanks!
-Sam.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data from Operators needed.

2014-04-02 Thread Jorge Miramontes
Thanks Eugene,

I added our data onto the requirements page since I was hoping to prioritize 
requirements based on the operator data that gets provided. We can move it over 
to the other page if you think that makes sense. See everyone on the weekly 
meeting tomorrow!

Cheers,
--Jorge

From: Susanne Balle mailto:sleipnir...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, April 1, 2014 4:09 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data 
from Operators needed.

I added two more. I am still working on our HA use cases. Susanne


On Tue, Apr 1, 2014 at 4:16 PM, Fox, Kevin M 
mailto:kevin@pnnl.gov>> wrote:
I added our priorities. I hope its formatted well enough. I just took a stab in 
the dark.

Thanks,
Kevin

From: Eugene Nikanorov [enikano...@mirantis.com]
Sent: Tuesday, April 01, 2014 3:02 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data from 
Operators needed.

Hi folks,

On the last meeting we decided to collect usage data so we could prioritize 
features and see what is demanded most.

Here's the blank page to do that (in a free form). I'll structure it once we 
have some data.
https://wiki.openstack.org/wiki/Neutron/LBaaS/Usecases

Please fill with the data you have.

Thanks,
Eugene.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

2014-03-25 Thread Jorge Miramontes
Hey Susanne,

I think it makes sense to group drivers by each LB software. For example, there 
would be a driver for HAProxy, one for Citrix's Netscalar, one for Riverbed's 
Stingray, etc. One important aspect about Openstack that I don't want us to 
forget though is that a tenant should be able to move between cloud providers 
at their own will (no vendor lock-in). The API contract is what allows this. 
The challenging aspect is ensuring different drivers support the API contract 
in the same way. What components should drivers share is also and interesting 
conversation to be had.

Cheers,
--Jorge

From: Susanne Balle mailto:sleipnir...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, March 25, 2014 6:59 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed 
services"

John, Brandon,

I agree that we cannot have a multitude of drivers doing the same thing or 
close to because then we end-up in the same situation as we are today where we 
have duplicate effort and technical debt.

The goal would be here to be able to built a framework around the drivers that 
would allow for resiliency, failover, etc...

If the differentiators are in higher level APIs then we can have one a single 
driver (in the best case) for each software LB e.g. HA proxy, nginx, etc.

Thoughts?

Susanne


On Mon, Mar 24, 2014 at 11:26 PM, John Dewey 
mailto:j...@dewey.ws>> wrote:
I have a similar concern.  The underlying driver may support different 
functionality, but the differentiators need exposed through the top level API.

I see the SSL work is well underway, and I am in the process of defining L7 
scripting requirements.  However, I will definitely need L7 scripting prior to 
the API being defined.
Is this where vendor extensions come into play?  I kinda like the route the 
Ironic guy safe taking with a “vendor passthru” API.

John

On Monday, March 24, 2014 at 3:17 PM, Brandon Logan wrote:

Creating a separate driver for every new need brings up a concern I have had.  
If we are to implement a separate driver for every need then the permutations 
are endless and may cause a lot drivers and technical debt.  If someone wants 
an ha-haproxy driver then great.  What if they want it to be scalable and/or 
HA, is there supposed to be scalable-ha-haproxy, scalable-haproxy, and 
ha-haproxy drivers?  Then what if instead of doing spinning up processes on the 
host machine we want a nova VM or a container to house it?  As you can see the 
permutations will begin to grow exponentially.  I'm not sure there is an easy 
answer for this.  Maybe I'm worrying too much about it because hopefully most 
cloud operators will use the same driver that addresses those basic needs, but 
worst case scenarios we have a ton of drivers that do a lot of similar things 
but are just different enough to warrant a separate driver.

From: Susanne Balle [sleipnir...@gmail.com]
Sent: Monday, March 24, 2014 4:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed 
services"

Eugene,

Thanks for your comments,

See inline:

Susanne


On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov 
mailto:enikano...@mirantis.com>> wrote:
Hi Susanne,

a couple of comments inline:





We would like to discuss adding the concept of “managed services” to the 
Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA proxy. 
The latter could be a second approach for some of the software load-balancers 
e.g. HA proxy since I am not sure that it makes sense to deploy Libra within 
Devstack on a single VM.



Currently users would have to deal with HA, resiliency, monitoring and managing 
their load-balancers themselves.  As a service provider we are taking a more 
managed service approach allowing our customers to consider the LB as a black 
box and the service manages the resiliency, HA, monitoring, etc. for them.


As far as I understand these two abstracts, you're talking about making LBaaS 
API more high-level than it is right now.
I think that was not on our roadmap because another project (Heat) is taking 
care of more abstracted service.
The LBaaS goal is to provide vendor-agnostic management of load balancing 
capabilities and quite fine-grained level.
Any higher level APIs/tools can be built on top of that, but are out of LBaaS 
scope.



[Susanne] Yes. Libra currently has some internal APIs that get triggered when 
an action needs to happen. We would like similar functionality in Neutron LBaaS 
so the user doesn’t have to manage the load-balancers but can consider them as 
black-boxes. Would it make sense to maybe consider integrating Neutron LBaaS 
with heat to support some

Re: [openstack-dev] [Neutron][LBaaS] addition to requirement wiki

2014-03-25 Thread Jorge Miramontes
Thanks Itsuro,

Good requirement since Neutron LBaaS is an asynchronous API.

Cheers,
--Jorge




On 3/24/14 7:27 PM, "Itsuro ODA"  wrote:

>Hi LBaaS developpers,
>
>I added 'Status Indication' to requirement Wiki.
>It may be independent from object model discussion
>but I think this is an item which should not be forgotten.
>
>Thanks.
>-- 
>Itsuro ODA 
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-20 Thread Jorge Miramontes
The use case from our customers has been mostly for database (MySql) load 
balancing. If the master goes down then they want another master/slave on 
standby ready to receive traffic. In the simplest case, I think Neutron can 
achieve this with 2 pools with 1 node each. If pool #1 goes down then pool #2 
becomes active. We currently solve this with the notion of primary and 
secondary nodes. If all primary nodes go down then secondary nodes become 
active.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, March 20, 2014 11:35 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki



  *   Active/Passive Failover
 *   I think this is solved with multiple pools.
The multiple pools support that is coming with L7 rules is to support 
content-switching based on L7 HTTP information (URL, headers, etc.). There is 
no support today for an active vs. passive pool.
I'm not sure that's the priority. It depends on if this is widely supported 
among vendors.

A commercial load balancer that doesn't have high availability features? Is 
there really such a thing still being sold in 2014? ;)
I might be missing something fundamental here, but we're talking about 
'additional' HA at pool level? Why not just add nodes to the pool?


Also, Jorge-- thanks for creating that page! I've made a few additions to it as 
well that I'd love to see prioritized.


Stephen




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-20 Thread Jorge Miramontes
Thanks for the input. I too was thinking "IP Access Control" could be solved 
with the firewall service in Neutron. To clarify what I mean check out our 
current API docs on this feature 
here<http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Manage_Access_Lists-d1e3187.html>.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, March 20, 2014 1:35 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Hi folks, my comments inlined:


On Thu, Mar 20, 2014 at 6:13 AM, Youcef Laribi 
mailto:youcef.lar...@citrix.com>> wrote:
Jorge,

Thanks for taking the time to put up a requirements list. Some comments below:

  *   Static IP Addresses
 *   Our current Cloud Load Balancing (CLB) offering utilizes static IP 
addresses which is something our customers really like, especially when setting 
up DNS. AWS for example, gives you an A record which you CNAME to.
This should also already be addressed, as you can today specify the VIP’s IP 
address explicitly on creation. We do not have DNS-based support for LB like in 
AWS ELB though.
Right, it's already there. Probably that's why it confused me :)

  *   Active/Passive Failover
 *   I think this is solved with multiple pools.
The multiple pools support that is coming with L7 rules is to support 
content-switching based on L7 HTTP information (URL, headers, etc.). There is 
no support today for an active vs. passive pool.
I'm not sure that's the priority. It depends on if this is widely supported 
among vendors.


  *   IP Access Control
 *   Our current CLB offering allows the user to restrict access through 
their load balancer by blacklisting/whitelisting cidr blocks and even 
individual ip addresses. This is just a basic security feature.
Is this controlling access to the VIP’s IP address or to pool members IP 
addresses? There is also a Firewall service in Neutron. Could this feature 
better fit in that service?
Agree, it's better to utilize what fwaas has to offer.

Eugene.



Youcef

From: Jorge Miramontes 
[mailto:jorge.miramon...@rackspace.com<mailto:jorge.miramon...@rackspace.com>]
Sent: Wednesday, March 19, 2014 11:44 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Oleg, thanks for the updates.

Eugene, High/Medium/Low is fine with me. I really just wanted to find a way to 
rank even amongst all of 'X' priorities. As people start adding more items we 
may need more columns to add things such as this, links to blueprints (per 
Ryan's idea), etc. In terms of the requirements marked with a '?' I can try to 
clarify here:


  *   Static IP Addresses

 *   Our current Cloud Load Balancing (CLB) offering utilizes static IP 
addresses which is something our customers really like, especially when setting 
up DNS. AWS for example, gives you an A record which you CNAME to.

  *   Active/Passive Failover

 *   I think this is solved with multiple pools.

  *   IP Access Control

 *   Our current CLB offering allows the user to restrict access through 
their load balancer by blacklisting/whitelisting cidr blocks and even 
individual ip addresses. This is just a basic security feature.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, March 19, 2014 7:32 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Hi Jorge,

Thanks for taking care of the page. I've added priorities, although I'm not 
sure we need precise priority weights.
Those features that still have '?' need further clarification.

Thanks,
Eugene.


On Wed, Mar 19, 2014 at 11:18 AM, Oleg Bondarev 
mailto:obonda...@mirantis.com>> wrote:
Hi Jorge,

Thanks for taking care of this and bringing it all together! This will be 
really useful for LBaaS discussions.
I updated the wiki to include L7 rules support and also marking already 
implemented requirements.

Thanks,
Oleg

On Wed, Mar 19, 2014 at 2:57 AM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hey Neutron LBaaS folks,

Per last week's IRC meeting I have created a preliminary requirements &
use case wiki page. I requested adding such a page since there appears to
be a lot of new interest in load balancing and feel that we need a
structured way to align everyone's 

Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-19 Thread Jorge Miramontes
Oleg, thanks for the updates.

Eugene, High/Medium/Low is fine with me. I really just wanted to find a way to 
rank even amongst all of 'X' priorities. As people start adding more items we 
may need more columns to add things such as this, links to blueprints (per 
Ryan's idea), etc. In terms of the requirements marked with a '?' I can try to 
clarify here:


  *   Static IP Addresses
 *   Our current Cloud Load Balancing (CLB) offering utilizes static IP 
addresses which is something our customers really like, especially when setting 
up DNS. AWS for example, gives you an A record which you CNAME to.
  *   Active/Passive Failover
 *   I think this is solved with multiple pools.
  *   IP Access Control
 *   Our current CLB offering allows the user to restrict access through 
their load balancer by blacklisting/whitelisting cidr blocks and even 
individual ip addresses. This is just a basic security feature.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, March 19, 2014 7:32 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Hi Jorge,

Thanks for taking care of the page. I've added priorities, although I'm not 
sure we need precise priority weights.
Those features that still have '?' need further clarification.

Thanks,
Eugene.



On Wed, Mar 19, 2014 at 11:18 AM, Oleg Bondarev 
mailto:obonda...@mirantis.com>> wrote:
Hi Jorge,

Thanks for taking care of this and bringing it all together! This will be 
really useful for LBaaS discussions.
I updated the wiki to include L7 rules support and also marking already 
implemented requirements.

Thanks,
Oleg


On Wed, Mar 19, 2014 at 2:57 AM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hey Neutron LBaaS folks,

Per last week's IRC meeting I have created a preliminary requirements &
use case wiki page. I requested adding such a page since there appears to
be a lot of new interest in load balancing and feel that we need a
structured way to align everyone's interest in the project. Furthermore,
it appears that understanding everyone's requirements and use cases will
aid in the current object model discussion we all have been having. That
being said, this wiki is malleable and open to discussion. I have added
some preliminary requirements from my team's perspective in order to start
the discussion. My vision is that people add requirements and use cases to
the wiki for what they envision Neutron LBaaS becoming. That way, we can
all discuss as a group, figure out what should and shouldn't be a
requirement and prioritize the rest in an effort to focus development
efforts. ReadyŠsetŠgo!

Here is the link to the wiki ==>
https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements

Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org<mailto:OpenStack-dev@lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org<mailto:OpenStack-dev@lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-18 Thread Jorge Miramontes
Hey Neutron LBaaS folks,

Per last week's IRC meeting I have created a preliminary requirements &
use case wiki page. I requested adding such a page since there appears to
be a lot of new interest in load balancing and feel that we need a
structured way to align everyone's interest in the project. Furthermore,
it appears that understanding everyone's requirements and use cases will
aid in the current object model discussion we all have been having. That
being said, this wiki is malleable and open to discussion. I have added
some preliminary requirements from my team's perspective in order to start
the discussion. My vision is that people add requirements and use cases to
the wiki for what they envision Neutron LBaaS becoming. That way, we can
all discuss as a group, figure out what should and shouldn't be a
requirement and prioritize the rest in an effort to focus development
efforts. ReadyŠsetŠgo!

Here is the link to the wiki ==>
https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements

Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-13 Thread Jorge Miramontes
Hey everyone,

Now that the thread has had enough time for people to reply it appears that the 
majority of people that vocalized their opinion are in favor of a mini-summit, 
preferably to occur in Atlanta days before the Openstack summit. There are 
concerns however, most notably the concern that the mini-summit is not 100% 
inclusive (this seems to imply that other mini-summits are not 100% inclusive). 
Furthermore, there seems to be a concern about timing. I am relatively new to 
Openstack processes so I want to make sure I am following them. In this case, 
does majority vote win? If so, I'd like to further this discussion into 
actually planning a mini-summit. Thoughts?

Cheers,
--Jorge

From: Mike Wilson mailto:geekinu...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, March 11, 2014 11:57 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

Hangouts  worked well at the nova mid-cycle meetup. Just make sure you have 
your network situation sorted out before hand. Bandwidth and firewalls are what 
comes to mind immediately.

-Mike


On Tue, Mar 11, 2014 at 9:34 AM, Tom Creighton 
mailto:tom.creigh...@rackspace.com>> wrote:
When the Designate team had their mini-summit, they had an open Google Hangout 
for remote participants.  We could even have an open conference bridge if you 
are not partial to video conferencing.  With the issue of inclusion solved, 
let’s focus on a date that is good for the team!

Cheers,

Tom Creighton


On Mar 10, 2014, at 4:10 PM, Edgar Magana 
mailto:emag...@plumgrid.com>> wrote:

> Eugene,
>
> A have a few arguments why I believe this is not 100% inclusive
>   • Is the foundation involved on this process? How? What is the budget? 
> Who is the responsible from the foundation  side?
>   • If somebody made already travel arraignments, it won't be possible to 
> make changes at not cost.
>   • Staying extra days in a different city could impact anyone's budget
>   • As a OpenStack developer. I want to understand why the summit is not 
> enough for deciding the next steps for each project. If that is the case, I 
> would prefer to make changes on the organization of the summit instead of 
> creating mini-summits all around!
> I could continue but I think these are good enough.
>
> I could agree with your point about previous summits being distractive for 
> developers, this is why this time the OpenStack foundation is trying very 
> hard to allocate specific days for the conference and specific days for the 
> summit.
> The point that I am totally agree with you is that we SHOULD NOT have session 
> about work that will be done no matter what!  Those are just a waste of good 
> time that could be invested in very interesting discussions about topics that 
> are still not clear.
> I would recommend that you express this opinion to Mark. He is the right guy 
> to decide which sessions will bring interesting discussions and which ones 
> will be just a declaration of intents.
>
> Thanks,
>
> Edgar
>
> From: Eugene Nikanorov 
> mailto:enikano...@mirantis.com>>
> Reply-To: OpenStack List 
> mailto:openstack-dev@lists.openstack.org>>
> Date: Monday, March 10, 2014 10:32 AM
> To: OpenStack List 
> mailto:openstack-dev@lists.openstack.org>>
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?
>
> Hi Edgar,
>
> I'm neutral to the suggestion of mini summit at this point.
> Why do you think it will exclude developers?
> If we keep it 1-3 days prior to OS Summit in Atlanta (e.g. in the same city) 
> that would allow anyone who joins OS Summit to save on extra travelling.
> OS Summit itself is too distractive to have really productive discussions, 
> unless your missing the sessions and spend time discussing.
> For instance design sessions basically only good for declaration of intents, 
> but not for real discussion of a complex topic at meaningful detail level.
>
> What would be your suggestions to make this more inclusive?
> I think the time and place is the key here - hence Atlanta and few days prior 
> OS summit.
>
> Thanks,
> Eugene.
>
>
>
> On Mon, Mar 10, 2014 at 10:59 PM, Edgar Magana 
> mailto:emag...@plumgrid.com>> wrote:
>> Team,
>>
>> I found that having a mini-summit with a very short notice means excluding
>> a lot of developers of such an interesting topic for Neutron.
>> The OpenStack summit is the opportunity for all developers to come
>> together and discuss the next steps, there are many developers that CAN
>> NOT afford another trip for a "special" summit. I am personally against
>> that and I do support Mark's proposal of having all the conversation over
>> IRC and mailing list.
>>
>> Please, do not start excluding people that won't be able to attend another
>> face-to-face meeting besides the summit. I believe tha

[openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Jorge Miramontes
Hi everyone,

I'd like to gauge everyone's interest in a possible mini-summit for Neturon 
LBaaS. If enough people are interested I'd be happy to try and set something 
up. The Designate team just had a productive mini-summit in Austin, TX and it 
was nice to have face-to-face conversations with people in the Openstack 
community. While most of us will meet in Atlanta in May, I feel that a focused 
mini-summit will be more productive since we won't have other Openstack 
distractions around us. Let me know what you all think!

Cheers,
--Jorge
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev