Re: [openstack-dev] [Octavia] [lbaas] Mid-Cycle proposed for the week of August 22nd

2016-06-15 Thread Trevor Vardeman
For most of the mid-cycles we've set up video conferencing for remote 
people to participate.  I'll add a new section to the etherpad for remote 
people so we can collect all that information in one location.


-Trevor



From: Nir Magnezi 
Sent: Wednesday, June 15, 2016 5:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Octavia] [lbaas] Mid-Cycle proposed for the week 
of August 22nd

Hi Michael,

Will there be an option to participate remotely?

Thanks,
Nir

On Fri, Jun 10, 2016 at 1:51 AM, Michael Johnson wrote:
Just a reminder, we have a proposed mid-cycle meeting set for the week
of August 22nd in San Antonio.

If you would like to attend and have not yet signed up, please add
your name to the list on our etherpad:

https://etherpad.openstack.org/p/lbaas-octavia-newton-midcycle

Thank you,
Michael

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Logo for Octavia project

2015-04-15 Thread Trevor Vardeman
I have a couple proposals done up on paper that I'll have available
shortly, I'll reply with a link.

 - Trevor J. Vardeman
 - trevor.varde...@rackspace.com
 - (210) 312 - 4606




On 4/14/15, 5:34 PM, Eichberger, German german.eichber...@hp.com wrote:

All,

Let's decide on a logo tomorrow so we can print stickers in time for
Vancouver. Here are some designs to consider:
http://bit.ly/Octavia_logo_vote

We will discuss more at tomorrow's meeting - Agenda:
https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Meeting_2015-04-15 - but please come prepared with one of your favorite designs...

Thanks,
German

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] Octavia Weekly Standup Reminder

2014-10-08 Thread Trevor Vardeman
Hey all!

Friendly reminder to throw a little info in the Standup for this week
before the meeting this afternoon.

-Trevor

PS.  Just tryin to help out Balukoff :P


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Trevor Vardeman
Hello all,

TL;DR
Using the SameHostFilter and DifferentHostFilter will work functionally
for what Octavia needs for colocation, apolocation, and HA
anti-affinity.  There are a couple of topics that need to be discussed:

How should VMs be allocated per host when evaluating colocation, if each
load balancer minimally has 2 VMs?  (Active-Active or Active-Passive)

How would a spare node pool handle affinity (i.e. will every host have a
separate spare node pool)?



Brandon and I spent a little time white-boarding our thoughts on this
affinity/anti-affinity problem.  Basically we came up with a couple of
tables we'll need in the DB, plus one table representing information
retrieved from nova, as follows:
Note:  the tables are fixed-width text, so they look really bad in HTML.


LB Table
+---+--+---+
| LB_ID | colocate | apolocate |
+---+--+---+
|   1   |  |   |
+---+--+---+
|   2   |  | 1 |
+---+--+---+
|   3   |2 |   |
+---+--+---+
|   4   |1 | 3 |
+---+--+---+

DB Association Table
+---+---+-+
| LB_ID | VM_ID | HOST_ID |
+---+---+-+
|   1   |   A   |I|
+---+---+-+
|   1   |   B   |II   |
+---+---+-+
|   2   |   C   |   III   |
+---+---+-+
|   2   |   D   |IV   |
+---+---+-+
|   3   |   E   |   III   |
+---+---+-+
|   3   |   F   |IV   |
+---+---+-+
|   4   |   G   |I|
+---+---+-+
|   4   |   H   |II   |
+---+---+-+

Nova Information Table
+---++-+-+
| VM_ID | SameHostFilter | DifferentHostFilter | HOST_ID |
+---++-+-+
|   A   || |I|
+---++-+-+
|   B   ||  A  |II   |
+---++-+-+
|   C   || A B |   III   |
+---++-+-+
|   D   ||A B C|IV   |
+---++-+-+
|   E   |  C D   | |   III   |
+---++-+-+
|   F   |  C D   |  E  |IV   |
+---++-+-+
|   G   |  A B   | E F |I|
+---++-+-+
|   H   |  A B   |E F G|II   |
+---++-+-+

The first thing we discussed was an Active-Active setup.  Above you can
see I enforce that the first VM will not be on the same host as the
second.  In the first table, I've given some ideas about which LBs
colocate/apolocate with one another, and configured them in the
association table appropriately.  Can you see any configuration
combination we might have overlooked?

As for scaling, we considered adding VMs in an Active-Active setup to be
just as trivial as the initial creation: include another VM ID in the
list for DifferentHostFilter and it will guarantee a different host
assignment.
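
To make that concrete, here is a rough sketch of how those hints could be
passed through python-novaclient when booting VMs G and H for LB 4 from
the tables above.  Credentials, image/flavor references, and VM IDs are
placeholders, and this assumes SameHostFilter and DifferentHostFilter are
enabled in nova's scheduler_default_filters:

from novaclient import client

nova = client.Client("2", "user", "password", "project",
                     "http://keystone.example.com:5000/v2.0")

# Placeholder IDs standing in for the VMs from the tables above.
vm_a_id, vm_b_id = "<vm-a-uuid>", "<vm-b-uuid>"
vm_e_id, vm_f_id = "<vm-e-uuid>", "<vm-f-uuid>"

# VM G: colocate with A and B (SameHostFilter), apolocate from E and F
# (DifferentHostFilter).
vm_g = nova.servers.create(
    name="octavia-vm-g",
    image="<image-uuid>",
    flavor="<flavor-id>",
    scheduler_hints={"same_host": [vm_a_id, vm_b_id],
                     "different_host": [vm_e_id, vm_f_id]})

# VM H: same colocation, plus anti-affinity with G itself.
vm_h = nova.servers.create(
    name="octavia-vm-h",
    image="<image-uuid>",
    flavor="<flavor-id>",
    scheduler_hints={"same_host": [vm_a_id, vm_b_id],
                     "different_host": [vm_e_id, vm_f_id, vm_g.id]})

Scaling is then just a matter of appending the new VM's ID to the
different_host list on the next boot, as described above.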

The second discussion was for Active-Passive, and we decided it would be
very similar to Active-Active with respect to appending to the filter
list.  For each active node created while scaling, standing up another
passive node happens with just another VM ID in the filter.  This keeps
all the active/passive nodes on different hosts.  One could just as
easily write logic to keep all the passives on one host and all the
actives on another, though this would potentially cause other problems.

One thing that just popped into my head is scaling on different hosts to
different degrees.  Example:  I already have 2 load balancers with 1
active and 1 passive VM each (so 4 VMs total right now).  My scaling
solution could call for another 4 VMs to be stood up in the same
fashion, with the hosts matching up as in the following table:

+---+---+-++
| LB_ID | VM_ID | HOST_ID | ACTIVE |
+---+---+-++
|   1   |   A   |I|1   |
+---+---+-++
|   1   |   B   |II   |0   |
+---+---+-++
|   2   |   C   |   III   |1   |
+---+---+-++
|   2   |   D   |IV   |0   |
+---+---+-++
|   1   |   E   |I|1   |
+---+---+-++
|   1   |   F   |II   |0   |
+---+---+-++
|   2   |   G   |   III   |1   |
+---+---+-++
|   2   |   H   |IV   |0   |
+---+---+-++

[openstack-dev] [Octavia] Minutes from 8/20/2014 meeting

2014-08-21 Thread Trevor Vardeman
Agenda items are numbered, and topics, as discussed, are described beneath in 
list format.


1) Revisit some basic features of load balancing as a service's object model
   and API.
   a) Brandon advocated for Load Balancer as the only root object
      + The reason for root objects was sharing.
   b) Will we allow sharing of pools in a listener?
      + Stephen suggests providing sharing to the customer for its benefits
         - It provides simplicity for the user
         - Example:  L7 rules all referencing the same pool are simpler for
           the user to handle.
         - Without sharing there may also be a series of extra health
           checks that are unnecessary
      + German wants placement of the pool to be on the load balancer
         - This allows sharing pools between different listeners.
      + Counter-argument by Stephen:  sharing pools between HTTP/HTTPS
        load balancers would be really rare; normally people would use a
        different port, and adding another health check wouldn't be a big
        deal.  Proposed L7 policies, where a complicated rule set causes
        duplication for a pool or rule set, would increase the
        health-check requirement.  (Refer to email on the list)
   c) If we desire many-to-many, there will be more root objects than just
      load balancer.
      + Moving to many-to-many after establishing one root object would be
        difficult

2) Get consensus on initial project direction and implementation details
   a) One HAProxy instance per load balancer, or one HAProxy instance per
      listener?
      + Per ML discussion:  keeping a listener on one HAProxy instance
        increases performance on one Octavia VM
         - Benchmarks are desired to support this (German has this included
           in his next sprint)
      + Suggested shelving this until benchmarks are researched.
      + Future discussions on the ML for this decision
      + A concern from Vijay:  with one HAProxy instance per listener,
        would that affect scalability?
         - This was suggested to move to the mailing list

3) When decisions (like #2) have been made, where should they be stored:
   the wiki or the code?
   a) The bad thing about the wiki is that if OpenStack makes a
      documentation overhaul, the decision information might get lost.
   b) The bad thing about code is that it's harder to find and read.
   c) The decision was to keep it in the wiki.

4) Whose responsibility is it to update the wiki with these decisions?
   a) For now, Stephen has been updating the wiki
   b) In the future, the people involved in a decision will pick someone to
      update the wiki at the time

5) What else needs to change in the 0.5 design before it can be approved
   and implementation can begin?
   a) Action item for everyone:  review this design before next week's
      meeting.  Keep in mind the document is supposed to be somewhat
      general.

6) Start going over action items
   (https://etherpad.openstack.org/p/Octavia_Action_Items)
   a) Action item for everyone:  review the migration information proposed
      by Brandon.
   b) Per the link above, start from 1 and work down the list.
   c) How can we decide who is working on what?
      + Get Launchpad set up for Octavia to allow for blueprint additions
        and thus allow people to contribute to a specific effort
   d) We need a list of the required tasks and of what needs to be hooked
      up, and how (the glue between the different pieces)
   e) What kind of communication between different components?
      + XMLRPC?
      + A REST interface?
      + Something different?
   f) Brandon working on data models and SQLAlchemy models.
   g) Stephen working on the Octavia VM API interface, including what
      technology to use
   h) Doug working on the skeleton structure
   i) Brandon also working on the Launchpad and blueprints issue
   j) Stephen will also prioritize this list
   k) Topics that need to be discussed should be raised and discussed on
      the mailing list
   l) Michael Johnson working on the base image scripts
      + Would we use an image we've built, or set things up after creation
        of a VM?
         - Start with a base image pre-packaged with the Octavia scripts
           and such, instead of cloud-init doing all the downloading work.
           Saves time/resources.
         - Ideally we would have a place in the Octavia repo with a script
           that, when run, would create an image.
      + The images will potentially change based on flavoring options.
         - This includes custom images via customer requirements

-- After meeting --
Q:  Are we going to be incubated?
A:  Yes, we are basically destined for incubation, period.  Note:  we will
    assuredly not be in Juno.

Q:  Why be part of Neutron?  Why not just be our own program?
A:  We want to distance ourselves from Neutron to some extent.  We will
    formalize this via a networking driver in Octavia.  Note:  we do not
    want to burn any bridges here, so we want to be

[openstack-dev] [Octavia] Minutes from 8/13/2014 meeting

2014-08-18 Thread Trevor Vardeman
Agenda items are numbered, and topics, as discussed, are described beneath in 
list format.

1) Discuss the future of Octavia in light of the Neutron-incubator project
   proposal.
a) There are many problems with Neutron-Incubator as currently described
b) The political happenings in Neutron leave our LBaaS patches under review
   unlikely to land in Juno
c) The Incubator proposal doesn't affect Octavia's development direction,
   given the inclination to distance ourselves from Neutron proper
d) With the Neutron Incubator proposal in current scope, the efforts of
   people pushing forward Neutron LBaaS patches should be re-focused into
   Octavia.

2) Discuss operator networking requirements (carry-over from last week)
a) Both HP and Rackspace seem to agree that as long as Octavia uses
   Neutron-like floating IPs, their networks should be able to work with
   the proposed Octavia topologies
b) Blue Box also wanted to meet with Rackspace's networking team during the
   operator summit a few weeks from now to thoroughly discuss network
   concerns

3) Discuss v0.5 component design proposal
   [https://review.openstack.org/#/c/113458/]
a) Notification of back-end node health (i.e. a node being offline) isn't
   required for 0.5, but is a must-have later
b) Notification of LB health (HAProxy, etc.) is definitely a requirement
   in 0.5
c) Still looking for more feedback on the proposal itself

4) Discuss the timeline for moving these meetings to IRC.
a) Most members were in favor of keeping the WebEx meetings for the time
   being
b) One major point was that other OpenStack/StackForge projects use video
   meetings as their primary medium as well


Sorry for the lack of density.  I forgot to have the meeting recorded, but
I hope I included the major points.  Feel free to respond in line with any
more information anyone can recall concerning the meeting.  Thanks!

-Trevor
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] Minutes from 8/6/2014 meeting

2014-08-06 Thread Trevor Vardeman
Agenda items are numbered, and topics, as discussed, are described beneath in 
list format.

1) Octavia Constitution and Project Direction Documents (road map)
a) The constitution and road map will potentially be adopted after another
   couple of days, giving those who were busy more time to review the
   information

2) Octavia Design Proposals
a) The difference between version 0.5 and 1.0 isn't huge
b) Version 2 has many network topology changes and Layer 4 routing
+ This includes N-node Active-Active
+ Would like to avoid Layer 2 connectivity with load balancers (included
  in version 1, however)
+ Layer 4 router driver
+ Layer 4 router controller
+ Long-term solution
c) After refining the version 1 document (with some scrutiny), all changes
   will be propagated to the version 2 document
d) Version 0.5 is unpublished
e) The entire control layer (anything connected to the intermediate message
   bus in version 1) will be collapsed down to one daemon.
+ No scalable control, but scalable service delivery
+ Version 1 will be the first large-operator-compatible version, with both
  scalable control and scalable service delivery
+ 0.5 will be a good start
- laying out the groundwork
- rough topology for the end users
- must be approved by the networking teams of each contributing company
f) The portions under the control of Neutron LBaaS are the user API and
   the driver (for Neutron LBaaS)
g) If Neutron LBaaS is a sufficient front-end (the user API doesn't suck),
   then Octavia will be kept as a vendor driver
h) Potentially including a REST API on top of Octavia
+ Octavia is initially just a vendor driver; there is no real desire for
  another API in front of Octavia
+ If someone wants it, the work is trivial and can be done in another
  project at another time
i) Octavia should be loosely coupled with Neutron; use a shim for network
   connectivity (one specifically for Neutron communication at the start)
+ This shim will hold any dirty hacks required to get something done,
  keeping Octavia clean (see the sketch below)
- Example: changing the MAC address on a port
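
To illustrate (i), such a shim might be nothing more than a small interface
that the rest of Octavia codes against; all names below are hypothetical
sketches, not a settled design:

import abc

class NetworkDriver(object):
    """Hypothetical interface the rest of Octavia would code against."""
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def plug_vip(self, load_balancer, vip_address):
        """Attach the VIP address to the load balancer's VM(s)."""

    @abc.abstractmethod
    def unplug_vip(self, load_balancer):
        """Detach the VIP address."""

class NeutronNetworkDriver(NetworkDriver):
    """Neutron-specific shim; any dirty hacks (e.g. changing the MAC
    address on a port) are confined here, keeping Octavia clean."""

    def plug_vip(self, load_balancer, vip_address):
        pass  # neutron port create/attach calls would live here

    def unplug_vip(self, load_balancer):
        pass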

3) Operator Network Topology Requirements
a) One requirement is floating IPs.
b) IPv6 is in demand, but is currently not supported reliably in Neutron
+ IPv6 would be represented as a different load balancer entity, possibly
  co-located with another load balancer
c) Network interface pluggability (potentially)
d) Sections concerning front-end connectivity should be forwarded to each
   company's network specialists for review
+ Share findings on the mailing list, dissect the proposals with that
  information, and comment on what requirements need to be added, etc.

4) HA/Failover Options/Solutions
a) Rackspace may have a solution to this, but the conversation will be
   pushed off to the next meeting (at least)
+ Will gather more information from another member at Rackspace to provide
  to the ML for initial discussions
b) One option for HA:  a spare-pool option (similar to Libra)
+ Poor recovery time is a big problem
c) Another option for HA:  Active/Passive
+ Blue Box uses a one-active, one-passive configuration and has
  sub-second failover; however, it is not resource-efficient

Questions:
Q:  What is the expectation for a release time-frame?
A:  Wishful thinking:  an Octavia version 0.5 beta for Juno (probably not,
    but it would be awesome to push for that)

Notes:
 + We need to pressure the Neutron core reviewers to review the Neutron
   LBaaS changes to get merges.
 + The version 2 front-end topology is different from version 1's.  Please
   review them individually, and thoroughly.


PS.  I re-wrote most of the information from the recording (thanks again,
Doug).  I have one question for everyone: should I just email this out to
the Octavia mailing list after each meeting, or should I also add it to an
Octavia wiki page for meeting notes/minutes, for review by anyone?  What
are your thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Wednesday meeting agenda topics

2014-07-09 Thread Trevor Vardeman
Hello all!

Earlier in the meetings we discussed using the Wednesday meeting for 
Octavia discussions only.  One of my teammates would like to allocate the 
Wednesday meeting time to face-to-face discussions of the same meeting 
topics we have on Thursday morning.  I think it's a good idea to use the 
time that way, at least until Octavia is more of a priority.  Does anyone 
else share this view?  Do we think that's a good use of the time?

-Trevor
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Trouble with Devstack

2014-06-24 Thread Trevor Vardeman
I'm running Ubuntu 14.04, and rather suddenly I'm unable to run ./stack.sh 
successfully.  Brandon, who is also running Ubuntu 14.04, is seeing no issues 
here.  All the same, I'm at a loss to understand what the problem is.  At the 
bottom of this message is the terminal output from running ./stack.sh

It should be noted that I don't use a Python virtual environment.  My reasoning 
is simple: I have a specific partition set up to use devstack, and only 
devstack.  I don't think it's necessary to use a virtualenv, mostly because I 
would find it odd to handle dependencies in an isolated environment rather than 
in the host environment I've already dedicated to the project in the first 
place.  I'm not sure any of you will agree with me, and I'd only really 
entertain the idea of a virtualenv if it's the only solution to my problem.  
I've installed python-pip at the latest version, 1.5.6.  When I run ./stack.sh 
it uninstalls the latest version and tries to use pip 1.4.1, to no avail; where 
it would try to install 1.4.1 escapes me, based on the following output.  If I 
manually install 1.4.1 and add files to the appropriate location for 
./stack.sh to use, it still uninstalls the installed packages and then fails, 
with what appears to be the same output and failure as below.  If anyone can 
help me sort this out, I'd be very appreciative.  Please feel free to message 
me on IRC (handle TrevorV) if you have a suggestion or are confused about 
anything I've done/tried.

Terminal
Using mysql database backend
2014-06-24 17:16:32.095 | + echo_summary 'Installing package prerequisites'
2014-06-24 17:16:32.095 | + [[ -t 3 ]]
2014-06-24 17:16:32.095 | + [[ True != \T\r\u\e ]]
2014-06-24 17:16:32.095 | + echo -e Installing package prerequisites
2014-06-24 17:16:32.095 | + source 
/home/stack/workspace/devstack/tools/install_prereqs.sh
2014-06-24 17:16:32.095 | ++ [[ -n '' ]]
2014-06-24 17:16:32.095 | ++ [[ -z /home/stack/workspace/devstack ]]
2014-06-24 17:16:32.095 | ++ 
PREREQ_RERUN_MARKER=/home/stack/workspace/devstack/.prereqs
2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_HOURS=2
2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_SECONDS=7200
2014-06-24 17:16:32.096 | +++ date +%s
2014-06-24 17:16:32.096 | ++ NOW=1403630192
2014-06-24 17:16:32.096 | +++ head -1 /home/stack/workspace/devstack/.prereqs
2014-06-24 17:16:32.096 | ++ LAST_RUN=1403628907
2014-06-24 17:16:32.096 | ++ DELTA=1285
2014-06-24 17:16:32.096 | ++ [[ 1285 -lt 7200 ]]
2014-06-24 17:16:32.096 | ++ [[ -z '' ]]
2014-06-24 17:16:32.096 | ++ echo 'Re-run time has not expired (5915 seconds 
remaining) '
2014-06-24 17:16:32.096 | Re-run time has not expired (5915 seconds remaining)
2014-06-24 17:16:32.096 | ++ echo 'and FORCE_PREREQ not set; exiting...'
2014-06-24 17:16:32.096 | and FORCE_PREREQ not set; exiting...
2014-06-24 17:16:32.096 | ++ return 0
2014-06-24 17:16:32.096 | + [[ False != \T\r\u\e ]]
2014-06-24 17:16:32.096 | + /home/stack/workspace/devstack/tools/install_pip.sh
2014-06-24 17:16:32.096 | +++ dirname 
/home/stack/workspace/devstack/tools/install_pip.sh
2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools
2014-06-24 17:16:32.096 | ++ pwd
2014-06-24 17:16:32.096 | + TOOLS_DIR=/home/stack/workspace/devstack/tools
2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools/..
2014-06-24 17:16:32.096 | ++ pwd
2014-06-24 17:16:32.096 | + TOP_DIR=/home/stack/workspace/devstack
2014-06-24 17:16:32.096 | + cd /home/stack/workspace/devstack
2014-06-24 17:16:32.096 | + source /home/stack/workspace/devstack/functions
2014-06-24 17:16:32.096 |  dirname /home/stack/workspace/devstack/functions
2014-06-24 17:16:32.096 | +++ cd /home/stack/workspace/devstack
2014-06-24 17:16:32.096 | +++ pwd
2014-06-24 17:16:32.096 | ++ FUNC_DIR=/home/stack/workspace/devstack
2014-06-24 17:16:32.096 | ++ source 
/home/stack/workspace/devstack/functions-common
2014-06-24 17:16:32.105 | + FILES=/home/stack/workspace/devstack/files
2014-06-24 17:16:32.105 | + PIP_GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py
2014-06-24 17:16:32.106 | ++ basename https://bootstrap.pypa.io/get-pip.py
2014-06-24 17:16:32.107 | + 
LOCAL_PIP=/home/stack/workspace/devstack/files/get-pip.py
2014-06-24 17:16:32.107 | + GetDistro
2014-06-24 17:16:32.107 | + GetOSVersion
2014-06-24 17:16:32.108 | ++ which sw_vers
2014-06-24 17:16:32.111 | + [[ -x '' ]]
2014-06-24 17:16:32.111 | ++ which lsb_release
2014-06-24 17:16:32.114 | + [[ -x /usr/bin/lsb_release ]]
2014-06-24 17:16:32.115 | ++ lsb_release -i -s
2014-06-24 17:16:32.160 | + os_VENDOR=Ubuntu
2014-06-24 17:16:32.161 | ++ lsb_release -r -s
2014-06-24 17:16:32.209 | + os_RELEASE=14.04
2014-06-24 17:16:32.209 | + os_UPDATE=
2014-06-24 17:16:32.209 | + os_PACKAGE=rpm
2014-06-24 17:16:32.209 | + [[ Debian,Ubuntu,LinuxMint =~ Ubuntu ]]
2014-06-24 17:16:32.209 | + os_PACKAGE=deb
2014-06-24 17:16:32.210 | ++ lsb_release -c -s
2014-06-24 17:16:32.262 | + os_CODENAME=trusty
2014-06-24 17:16:32.262 | + 

Re: [openstack-dev] [Neutron][LBaaS] Trouble with Devstack

2014-06-24 Thread Trevor Vardeman
Fawad,

Thanks Fawad, that seems to have fixed my issue at this point.  It amused 
me, since pip is supposed to replace easy_install, but I won't nitpick if 
it fixes things, ha ha.

-Trevor

From: Fawad Khaliq [fa...@plumgrid.com]
Sent: Tuesday, June 24, 2014 12:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Trouble with Devstack

Hi Trevor,

I ran into the same issue. I worked around it quickly by doing the following:

  *   After stack.sh uninstalls pip and fails with the
      pkg_resources.DistributionNotFound: pip==1.4.1 error, install pip
      via easy_install:
          # easy_install pip
  *   Then re-run stack.sh

I haven't done the investigation yet, but this may help you move past this 
issue for now.

Thanks,
Fawad Khaliq




Re: [openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-call)

2014-05-01 Thread Trevor Vardeman
Vijay,

Comments in-line, hope I can clear some of this up for you :)

-Trevor

On Thu, 2014-05-01 at 13:16 +, Vijay Venkatachalam wrote:
 I am expecting to be more active on community on the LBaaS front. 
 
 May be reviewing and picking-up a few items to  work as well.
 
 I had a look at the proposal. Seeing the Single & Multi-Call approach for
 each workflow makes it easy to understand.
 
 Thanks for the clear documentation, it is welcoming to review :-). I was not 
 allowed to comment on WorkFlow doc, can you enable comments?
 
 The single-call approach essentially creates the global pool/VIP. Once 
 VIP/Pool is created using single call, are they reusable in multi-call?
 For example: Can a pool created for HTTP endpoint/loadbalancer be used in 
 HTTPS endpoint LB where termination occurs as well?

From what I remember discussing with my team (being a developer under
Jorge's umbrella), there is a 1-M relationship between load balancer and
pool.  Also, the protocol is specified on the Load Balancer, not the
pool, meaning you could expose TCP traffic to a pool via one Load
Balancer, and HTTP traffic to that same pool via another Load Balancer.
This is easily modified such
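
As a purely illustrative sketch (the field names are my own shorthand,
not the proposal's schema), the relationship looks something like this:

# One pool shared by two load balancers; the protocol lives on the LB.
shared_pool = {"id": "<pool-uuid>",
               "nodes": [{"address": "10.0.0.5", "port": 8080}]}

lb_tcp = {"name": "tcp-lb", "protocol": "TCP", "port": 25,
          "pools": [shared_pool]}
lb_http = {"name": "http-lb", "protocol": "HTTP", "port": 80,
           "pools": [shared_pool]}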

 
 Also, would it be useful to include PUT as a single call? I see PUT only for 
 POOL not for LB.
 A user who started with single-call  POST, might like to continue to use the 
 same approach for PUT/update as well.

On the fifth page of the document found here:
https://docs.google.com/document/d/1mTfkkdnPAd4tWOMZAdwHEx7IuFZDULjG9bTmWyXe-zo/edit
there is a PUT detailed for a Load Balancer.  There should be support
for PUT on any parent object, assuming the fields one would update are
not read-only.

 
 Thanks,
 Vijay V.
 
 -Original Message-
 From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
 Sent: Thursday, May 1, 2014 3:57 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process
 
Oops! Everywhere I said Samuel I meant Stephen. Sorry, you both have SB as 
your initials so I got confused. :)
 
 Cheers,
 --Jorge
 
 
 
 
 On 4/30/14 5:17 PM, Jorge Miramontes jorge.miramon...@rackspace.com
 wrote:
 
 Hey everyone,
 
 I agree that we need to be preparing for the summit. Using Google docs 
 mixed with Openstack wiki works for me right now. I need to become more 
familiar with the gerrit process and I agree with Samuel that it is not 
 conducive to large design discussions. That being said I'd like to 
 add my thoughts on how I think we can most effectively get stuff done.
 
 As everyone knows there are many new players from across the industry 
 that have an interest in Neutron LBaaS. Companies I currently see 
 involved/interested are Mirantis, Blue Box Group, HP, PNNL, Citrix, 
 eBay/Paypal and Rackspace. We also have individuals involved as well. I 
 echo Kyle's sentiment on the passion everyone is bringing to the project!
 Coming into this project a few months ago I saw that a few things 
 needed to be done. Most notably, I realized that gathering everyone's 
 expectations on what they wanted Neutron LBaaS to be was going to be 
 crucial. Hence, I created the requirements document. Written 
 requirements are important within a single organization. They are even 
 more important when multiple organizations are working together because 
 everyone is spread out across the world and every organization has a 
 different development process. Again, my goal with the requirements 
 document is to make sure that everyone's voice in the community is 
 taken into consideration. The benefit I've seen from this document is 
 that we ask Why? to each other, iterate on the document and in the 
 end have a clear understanding of everyone's motives. We also learn 
 from each other by doing this which is one of the great benefits of open 
 source.
 
 Now that we have a set of requirements the next question to ask is, 
How do we prioritize requirements so that we can start designing and 
 implementing them? If this project were a completely new piece of 
 software I would argue that we iterate on individual features based on 
 anecdotal information. In essence I would argue an agile approach.
 However, most of the companies involved have been operating LBaaS for a 
 while now. Rackspace, for example, has been operating LBaaS for the 
 better part of 4 years. We have a clear understanding of what features 
 our customers want and how to operate at scale. I believe other 
 operators of LBaaS have the same understanding of their customers and 
 their operational needs. I guess my main point is that, collectively, 
 we have data to back up which requirements we should be working on. 
 That doesn't mean we preclude requirements based on anecdotal 
 information (i.e. Our customers are saying they want new shiny feature 
 X). At the end of the day I want to prioritize the community's 
 requirements based on factual data and anecdotal information.
 
 

[openstack-dev] [Neutron][LBaaS] Updated Use Cases Assessment and Questions

2014-05-01 Thread Trevor Vardeman
Hello,

I've been going through the 40+ use cases, and I couldn't help but
notice some additions that are either unclear or not descriptive.

For ease of reference, I'll link the document: 
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit#

I took a look at most of them in a high-level thought process to use for
evaluation in feasibility for the Rackspace API proposal, and began
documenting them for purpose of comparison.  However, I've run into some
issues understanding and/or evaluating them.

One section of the use-cases comes to mind specifically.  Numbers 31
through 39 are not very descriptive.  Many of these don't seem like
use-cases as much as they seem like feature requests.  Ideally there
would be more information, or an example of a problem to solve including
the use-case, similar to many of the others.

On that same note, there are some use-cases I simply don't understand,
be it my own naivety or the wording of the use-case.

Use-Case 10:  I assumed this was referring to the source IP that
accesses the load balancer.  As far as I know, the X-Forwarded-For
header includes this.  To satisfy this use-case, was there some
expectation of retrieving this information through an API request?
Also, with the trusted-proxy evaluation, is that being handled by the
pool member, or was this in reference to an access list, so to speak,
defined on the load balancer?

Use-Case 20:  I do not believe much of this is handled within the LBaaS
API; rather, it belongs to a different service that provides
auto-scaling functionality, especially the on-the-fly updating of
properties.  This also becomes incredibly difficult when considering TCP
session persistence, since a pool member could be removed at any
automated time.

Use-Case 25:  I think this one is referring to the functionality of a
draining status for a pool member; the pool member will not receive
any new connections, and will not force any active connection closed.
Is that the right way to understand that use-case?

Use-Case 26:  Is this functionally wanting something like an error
page to come up during the maintenance window?  Also, to accept
connections only from a specific set of IPs during the maintenance
window, one would manually have to create an access list for the load
balancer during the time for testing, and then either modify or remove
it after maintenance is complete.  Does this sound like an accurate
understanding/solution?

Use-Case 37:  I'm not entirely sure what this one means.  I know I
included it in the section that sounded more like features, but I was
still curious what it referred to.  Does this have to do with the desire
for auto-scaling?  When a pool member reaches a certain threshold of
connections, is another pool member created or chosen to handle the next
connection(s) as they come?

Please feel free to correct me anywhere I've blundered here, and if my
proposed solution is inaccurate or not easily understood, I'd be more
than happy to explain in further detail.  Thanks for any help you can
offer!

-Trevor Vardeman
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-call)

2014-05-01 Thread Trevor Vardeman
Vijay, I'm following suit: Replies in line :D

On Thu, 2014-05-01 at 16:11 +, Vijay Venkatachalam wrote:
 Thanks Trevor. Replies inline!
 
  -Original Message-
  From: Trevor Vardeman [mailto:trevor.varde...@rackspace.com]
  Sent: Thursday, May 1, 2014 7:30 PM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-
  call)
  
  Vijay,
  
  Comments in-line, hope I can clear some of this up for you :)
  
  -Trevor
  
  On Thu, 2014-05-01 at 13:16 +, Vijay Venkatachalam wrote:
   I am expecting to be more active on community on the LBaaS front.
  
   May be reviewing and picking-up a few items to  work as well.
  
    I had a look at the proposal. Seeing the Single & Multi-Call approach
    for each workflow makes it easy to understand.
  
   Thanks for the clear documentation, it is welcoming to review :-). I was 
   not
  allowed to comment on WorkFlow doc, can you enable comments?
  
   The single-call approach essentially creates the global pool/VIP. Once
  VIP/Pool is created using single call, are they reusable in multi-call?
   For example: Can a pool created for HTTP endpoint/loadbalancer be used
  in HTTPS endpoint LB where termination occurs as well?
  
  From what I remember discussing with my team (being a developer under
  Jorge's umbrella) There is a 1-M relationship between load balancer and
  pool.  Also, the protocol is specified on the Load Balancer, not the pool,
  meaning you could expose TCP traffic via one Load Balancer to a pool, and
  HTTP traffic via another Load Balancer to that same pool.
  This is easily modified such
  
 
 Ok. Thanks! Should there be a separate use case covering this (if it is
 not already present)?

This is already reflected in at least one use-case.  I've been
documenting the solutions, so to speak, to many of the use cases with
regard to the Rackspace API proposal.  If you'd like to see some of
those examples (keep in mind they are a WIP), here is a link:
https://drive.google.com/#folders/0B2r4apUP7uPwRVc2MzQ2MHNpcE0

 
  
   Also, would it be useful to include PUT as a single call? I see PUT only 
   for
  POOL not for LB.
   A user who started with single-call  POST, might like to continue to use 
   the
  same approach for PUT/update as well.
  
  On the fifth page of the document found here:
  https://docs.google.com/document/d/1mTfkkdnPAd4tWOMZAdwHEx7IuFZ
  DULjG9bTmWyXe-zo/edit
  There is a PUT detailed for a Load Balancer.  There should be support for 
  PUT
  on any parent object assuming the fields one would update are not read-
  only.
  
 
 My mistake, I didn't explain properly.
 I see the PUT of a load balancer containing only load balancer properties.
 I was wondering if it makes sense for a PUT of LOADBALANCER to contain
 pool+members also, similar to the POST payload.

For this API proposal, we wanted to enforce updating properties via
single requests to the resource, whereas the POST context covers
creations/attachments of resources to one another.  To update a pool or
its members you would use the /pools or /pools/{pool_id}/members
endpoints accordingly.  Also, a POST to
/loadbalancers/{loadbalancer_id}/pools will create/attach a pool to
the Load Balancer; however, PUT would not be supported at this endpoint.
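
A rough sketch of that distinction in terms of HTTP calls, using
python-requests (the base URL, token, and payload fields are
illustrative; only the resource paths come from the proposal doc):

import json
import requests

base = "http://lbaas.example.com/v1"   # illustrative endpoint
headers = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}
pool_id, lb_id = "<pool-uuid>", "<lb-uuid>"

# Updating a pool's own properties is a PUT against the pool resource:
requests.put("%s/pools/%s" % (base, pool_id),
             data=json.dumps({"name": "renamed-pool"}), headers=headers)

# Updating membership goes through the members sub-resource:
requests.post("%s/pools/%s/members" % (base, pool_id),
              data=json.dumps({"address": "10.0.0.5", "port": 80}),
              headers=headers)

# Attaching an existing pool to a load balancer is a POST; PUT is not
# supported at this endpoint:
requests.post("%s/loadbalancers/%s/pools" % (base, lb_id),
              data=json.dumps({"id": pool_id}), headers=headers)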

 
 Also, will a delete of a loadbalancer DELETE the pool/VIP if they are no
 longer referenced by another loadbalancer?
 
 Or do they have to be cleaned up separately?

Following the concept of the Neutron port, detaching rather than
removing the references leaves the extra pieces intact but disconnected
from a Load Balancer.  One would delete the Load Balancer and still be
able to retrieve the VIP or Pool from their root-resource references.
This would allow someone to delete a specific Load Balancer and then
create an entirely new one while referencing the original pool and VIP.

 
  
   Thanks,
   Vijay V.
  
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-01 Thread Trevor Vardeman
Hello,

After going back through the use-cases to double-check some of my
understanding, I realized I didn't quite understand the ones I had
already answered.  I'll use a specific use-case as an example of my
misunderstanding here, and hopefully the clarification can be easily
adapted to the rest of the similar use-cases.

Use Case 13:  A project-user has an HTTPS application in which some of
the back-end servers serving this application are in the same subnet,
and others are across the internet, accessible via VPN. He wants this
HTTPS application to be available to web clients via a single IP
address.

In this use-case, is the Load Balancer going to act as a node in the
VPN?  What I mean is: is the Load Balancer supposed to establish a
connection to this VPN for the client and present itself as a machine
on the VPN?  If this is not the case, wouldn't the VPN have a subnet ID
that could simply be given to a pool during its creation?  If the latter
is accurate, would this not just be a basic HTTPS Load Balancer
creation?  Looking through the VPNaaS API, you provide a subnet ID in
the create-VPN-service request, and it establishes a VPN on said subnet.
Couldn't this be provided to the Load Balancer pool as its subnet?
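
Roughly what I have in mind, via python-neutronclient (these calls are my
best understanding of the VPNaaS and LBaaS v1 APIs, and the IDs are
placeholders):

from neutronclient.v2_0 import client

neutron = client.Client(username="user", password="password",
                        tenant_name="project",
                        auth_url="http://keystone.example.com:5000/v2.0")

subnet_id = "<backend-subnet-uuid>"

# The VPN service is anchored to a subnet (and router)...
neutron.create_vpnservice({"vpnservice": {"subnet_id": subnet_id,
                                          "router_id": "<router-uuid>",
                                          "name": "site-vpn"}})

# ...so couldn't the pool simply reference that same subnet?
neutron.create_pool({"pool": {"subnet_id": subnet_id,
                              "protocol": "HTTPS",
                              "lb_method": "ROUND_ROBIN",
                              "name": "https-pool"}})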

Forgive me for requiring so much distinction here, but what may be clear
to the creator of this use-case has left me confused.  This same type of
clarity would be very helpful across many of the other VPN-related
use-cases.  Thanks again!

-Trevor
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Use Case Question

2014-04-24 Thread Trevor Vardeman
Hey,

I'm looking through the use-cases doc for review, and I'm confused about one of 
them.  I'm familiar with HTTP-cookie-based session persistence, but to satisfy 
secure traffic in this case, would there be decryption of the content, 
injection of the cookie, and then re-encryption?  Is there another 
session-persistence type that solves this issue already?  I'm copying the doc 
link and the use case specifically; I'm not sure whether the document order 
will change, so I thought it would be easiest to include both :)

Use Cases:  
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis

Specific Use Case:  A project-user wants to make his secured web based 
application (HTTPS) highly available. He has n VMs deployed on the same private 
subnet/network. Each VM is installed with a web server (ex: apache) and 
content. The application requires that a transaction which has started on a 
specific VM will continue to run against the same VM. The application is also 
available to end-users via smart phones, a case in which the end user IP might 
change. The project-user wishes to represent them to the application users as a 
web application available via a single IP.

-Trevor Vardeman
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2014-04-23 Thread Trevor Vardeman
Hey,

I'm looking through the use-cases doc for review, and I'm confused about the 
6th one.  I'm familiar with HTTP-cookie-based session persistence, but to 
satisfy secure traffic in this case, would there be decryption of the content, 
injection of the cookie, and then re-encryption?  Is there another 
session-persistence type that solves this issue already?

Use Cases:  
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis

-Trevor Vardeman
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev