Re: [openstack-dev] [neutron][lbaasv2] Migrate LBaaS instance

2017-03-16 Thread Kosnik, Lubosz
Hello Zhi,
Just one small piece of information: at yesterday's Octavia weekly meeting we decided 
that new features will only be accepted into LBaaSv2 until Pike-1, so the window is 
very small.
This decision was made because LBaaSv2 is now an Octavia deliverable, not a Neutron 
one, and the project is entering its deprecation stage.

Cheers,
Lubosz

On Mar 16, 2017, at 5:39 AM, zhi <changzhi1...@gmail.com> wrote:

Hi, all
Currently, LBaaS v2 doesn't support migration. Routers, by contrast, can be removed 
from one L3 agent and added to another L3 agent.

So the LBaaS agent is a single point of failure. As far as I know, LBaaS 
supports "allow_automatic_lbaas_agent_failover", but in many cases we want 
to migrate LBaaS instances manually. Is there a plan to support this?

I'm working on this right now, but I've run into a question. I defined a function in 
agent_scheduler.py like this:

    def remove_loadbalancer_from_lbaas_agent(self, context, agent_id,
                                             loadbalancer_id):
        self._unschedule_loadbalancer(context, loadbalancer_id, agent_id)

The question is: how do I notify the LBaaS agent?
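Something like the following is what I have in mind - the topic string and the 
method name are only guesses to show the shape of it, not the actual LBaaS agent 
RPC API:

# A guess at the shape of the notification, not the actual agent RPC API:
# the topic and the 'destroy_loadbalancer' method name are placeholders.
import oslo_messaging

from neutron.common import rpc as n_rpc


def notify_agent_lb_removed(context, host, loadbalancer_id):
    """Cast an async RPC to the agent that used to host the load balancer."""
    target = oslo_messaging.Target(
        topic='lbaasv2_agent.%s' % host,   # placeholder topic
        version='1.0')
    client = n_rpc.get_client(target)
    client.prepare().cast(context, 'destroy_loadbalancer',
                          loadbalancer_id=loadbalancer_id)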

Hope for your reply.



Thanks
Zhi Chang


Re: [openstack-dev] Zuul v3 - What's Coming: What to expect with the Zuul v3 Rollout

2017-03-01 Thread Kosnik, Lubosz
So, did I understand that properly? Will it be possible to create real 
multi-node tests, e.g. with 3-4 nodes?

Cheers,
Lubosz

On Feb 28, 2017, at 7:13 PM, joehuang <joehu...@huawei.com> wrote:

So cool! Look forward to multi-node jobs as first class

Best Regards
Chaoyi Huang (joehuang)


From: Monty Taylor [mord...@inaugust.com]
Sent: 01 March 2017 7:26
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Zuul v3 - What's Coming: What to expect with the   
Zuul v3 Rollout

Hi everybody!

This content can also be found at
http://inaugust.com/posts/whats-coming-zuulv3.html - but I've pasted it
in here directly because I know that some folks don't like clicking links.

tl;dr - At last week's OpenStack PTG, the OpenStack Infra team ran the
first Zuul v3 job, so it's time to start getting everybody ready for
what's coming

**Don't Panic!** Awesome changes are coming, but you are NOT on the hook
for rewriting all of your project's gate jobs or anything crazy like
that. Now grab a seat by the fire, pour yourself a drink while I spin a
yarn about days gone by and days yet to come.

First, some background

The OpenStack Infra team has been hard at work for quite a while on a
new version of Zuul (where by 'quite a while' I mean that Jim Blair
and I had our first Zuul v3 design whiteboarding session in 2014). As
you might be able to guess given the amount of time, there are some big
things coming that will have a real and visible impact on the OpenStack
community and beyond. Since we have a running Zuul v3 now [1], it seemed
like the time to start getting folks up to speed on what to expect.

There is other deep-dive information on architecture and rationale if
you're interested[2], but for now we'll focus on what's relevant for end
users. We're also going to start sending out a bi-weekly "Status of Zuul
v3" email to the 
openstack-dev@lists.openstack.org 
mailing list ... so
stay tuned!

**Important Note** This post includes some code snippets - but v3 is
still a work in progress. We know of at least one breaking change that
is coming to the config format, so please treat this not as a tutorial,
but as a conceptual overview. Syntax is subject to change.

The Big Ticket Items

While there are a bunch of changes behind the scenes, there are a
reasonably tractable number of user-facing differences.

* Self-testing In-Repo Job Config
* Ansible Job Content
* First-class Multi-node Jobs
* Improved Job Reuse
* Support for non-OpenStack Code and Node Systems
* and Much, Much More

Self-testing In-Repo Job Config

This is probably the biggest deal. There are a lot of OpenStack devs
(around 2k in Ocata) and a lot of repositories (1689). There are a lot fewer
folks on the project-config-core team, who are the ones who review all of
the job config changes (please, everyone, thank Andreas Jaeger next time
you see him). That's not awesome.

Self-testing in-repo job config is awesome.

Many systems out there these days have an in-repo job config system.
Travis CI has had it since day one, and Jenkins has recently added
support for a Jenkinsfile inside of git repos. With Zuul v3, we'll have
it too.

Once we roll out v3 to everyone, as a supplement to jobs defined in our
central config repositories, each project will be able to add a
zuul.yaml file to their own repo:


- job:
    name: my_awesome_job
    nodes:
      - name: controller
        label: centos-7

- project:
    name: openstack/awesome_project
    check:
      jobs:
        - my_awesome_job

It's a small file, but there is a lot going on, so let's unpack it.

First we define a job to run. It's named my_awesome_job and it needs one
node. That node will be named controller and will be based on the
centos-7 base node in nodepool.

In the next section, we say that we want to run that job in the check
pipeline, which in OpenStack is defined as the jobs that run when
patchsets are proposed.
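
Multi-node jobs use the same syntax - a sketch (again, subject to change) that
requests more than one node could look like this:

- job:
    name: my_multinode_job
    nodes:
      - name: controller
        label: centos-7
      - name: compute1
        label: centos-7
      - name: compute2
        label: centos-7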

And it's also self-testing!

Everyone knows the fun game of writing a patch to the test jobs, getting
it approved, then hoping it works once it starts running. With Zuul v3
in-repo jobs, if there is a change to job definitions in a proposed
patch, that patch will be tested with those changes applied. And since
it's Zuul, Depends-On footers are honored as well - so iteration on
getting a test job right becomes just like iterating on any other patch
or sequence of patches.
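
For example, a commit that reworks a job can be exercised together with the change
it is meant to test just by adding a footer to its commit message (the Change-Id
below is made up):

    Tweak my_awesome_job to run distcheck

    Depends-On: I0123456789abcdef0123456789abcdef01234567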

Ansible Job Content

The job my_awesome_job isn't very useful if it doesn't define any
content. That's done in the repo as well, in playbooks/my_awesome_job.yaml:


- hosts: controller
  tasks:
    - name: Run make tests
      shell: make distcheck

As previously mentioned, the job content is now defined in Ansible
rather than using our Jenkins Job Builder tool. This playbook is going
to run a task on a host called controller, which you may remember we
requested in the job

Re: [openstack-dev] [Neutron] Alternative approaches for L3 HA

2017-02-15 Thread Kosnik, Lubosz
Regarding RDO's success, we need to remember that this deployment utilizes 
Pacemaker, and when I was working on this feature (I even spoke with Assaf about it), 
this external application was doing everything needed to make the solution work.
Pacemaker was responsible for checking external and internal connectivity, detecting 
split brain, and electing the master; keepalived was still running, but Pacemaker 
was automatically killing services and moving the FIP.
Assaf - is there any change in this implementation in RDO, or are you still 
doing everything outside of Neutron?

Because if RDO's success is built on Pacemaker, it means that yes, Neutron needs 
a solution which will be available for more than RH deployments.

Lubosz

On Feb 15, 2017, at 3:22 AM, Anna Taraday <akamyshnik...@mirantis.com> wrote:

If I propose some concrete solution, the discussion will be about that one solution, 
not about making things flexible.
At first I wanted to propose a PoC for another approach, but during my 
experiments I understood that we may have different approaches, and for all of 
them we need a pluggable HA router in Neutron.

The thing that bothers me about L3 HA is that it is complex. Yes, we fixed a bunch of 
races and John did a significant refactor, but it is still too complex. In the 
end we want to use L3 HA + DVR, but DVR is pretty complex by itself. We would 
like to try to offload this complexity to an external service, replacing the 
management of keepalived instances and networks within Neutron. Router 
rescheduling is not really an alternative to L3 HA.

RDO with L3 HA is a great example of success, but we want to have the ability to 
try something else that may suit other OpenStack deployments better.

I wrote this email to understand whether the community has interest in something 
like this, so that it is worth doing.

On Tue, Feb 14, 2017 at 10:20 PM Assaf Muller <as...@redhat.com> wrote:
On Fri, Feb 10, 2017 at 12:27 PM, Anna Taraday
<akamyshnik...@mirantis.com> wrote:
> Hello everyone!
>
> In Juno, an L3 HA feature based on Keepalived (VRRP) was implemented in Neutron.
> During the following cycles it was improved; we performed scale testing [1] to find
> weak places and tried to fix them. The only alternative to L3 HA with VRRP
> is router rescheduling performed by the Neutron server, but it is significantly
> slower and depends on the control plane.
>
> What issues did we experience with L3 HA VRRP?
>
> Bugs in Keepalived (bad versions) [2]
> Split brain [3]
> Complex structure (ha networks, ha interfaces) - which actually caused races
> that we were fixing during Liberty, Mitaka and Newton.
>
> None of this is critical, but it is a bad experience, and not everyone is ready
> (or wants) to use the Keepalived approach.
>
> I think we can make things more flexible. For example, we can allow users to
> use external services like etcd instead of Keepalived to synchronize the current
> HA state across agents. I've done several experiments and I've got failover
> times comparable to L3 HA with VRRP. Tooz [4] can be used to abstract away the
> concrete backend. For example, it can allow us to use Zookeeper, Redis and
> other backends to store HA state.
>
> What do I want to propose?
>
> I want to bring up the idea that Neutron should have some general classes for L3
> HA which will allow using not only Keepalived but also other backends for the
> HA state. This will at least make it easier to try some other approaches and
> compare them with the existing one.
>
> Does this sound reasonable?

I understand that the intention is to add pluggability upstream so
that you could examine the viability of alternative solutions. I'd
advise instead to do the research locally, and if you find concrete
benefits to an alternative solution, come back, show your work and
have a discussion about it then. Merging extra complexity in the form
of a plug point without knowing if we're actually going to need it
seems risky.

On another note, after years of work the stability issues have largely
been resolved and L3 HA is in a good state with modern releases of
OpenStack. It's not an authoritative solution in the sense that it
doesn't cover every possible failure mode, but it covers the major
ones, and in that sense it is better than not having any form of HA; as
you pointed out, the existing alternatives are not in a better state.
The subtext in your email is that now L3 HA is technically where we
want it, but some users are resisting adoption because of bad PR or a
bad past experience, but not for technical reasons. If that is the
case, then perhaps some good PR would be a more cost effective
investment than investigating, implementing, stabilizing and
maintaining a different backend that will likely take at least a cycle
to get merged and another 1 to 2 cycles to iron out kinks. Would you
have a critical mass of developers ready to support a pluggable L3 HA
now and in the long term?

Finally, I can share that L3 HA has been the default in RDO-land for a
few cycles now and is being used widely and success

Re: [openstack-dev] [Neutron] Alternative approaches for L3 HA

2017-02-13 Thread Kosnik, Lubosz
From my perspective, I can tell that the problem is entirely in the architecture, 
and without something outside of Neutron we cannot solve it.
Two releases ago I started to work on hardening this feature, but all my ideas 
were rejected by Armando and Assaf. They decided that adding an outside dependency 
would open the door for new bugs coming from dependencies into Neutron [1].

You need to know that there are two outstanding bugs in this feature: there is an 
internal and an external connectivity split brain. The patch [2] I made is 
"fixing" part of the problem; it allows you to specify additional tests to verify 
connectivity from the router to the gateway.
There is also a problem with connectivity between network nodes. It is more 
problematic and, like you said, in my opinion it is unsolvable without using an 
external mechanism.

If there is any need for help, I would love to share my knowledge about this 
feature and what exactly is not working. If anyone needs help with any of this, 
please ping me by email or IRC.

[1] https://bugs.launchpad.net/neutron/+bug/1375625/comments/31
[2] https://review.openstack.org/#/c/273546/

Lubosz

On Feb 13, 2017, at 4:10 AM, Anna Taraday <akamyshnik...@mirantis.com> wrote:

To avoid a dependency of the data plane on the control plane, it is possible to deploy a 
separate key-value storage cluster on the data plane side, using the same network 
nodes.
I'm proposing to make some changes to enable experimentation in this area; we 
have yet to come up with any other concrete solution.

On Mon, Feb 13, 2017 at 2:01 PM <cristi.ca...@orange.com> wrote:

Hi,





We also operate Juno with the VRRP HA implementation and had to patch 
through several bugs before getting to the Mitaka release.

A pluggable, drop-in alternative would be highly appreciated. However, our 
experience has been that the decoupling of VRRP from the control plane is 
actually a benefit, as traffic is not affected when the control plane is 
down.

In a solution where the L3 HA implementation becomes tied to the availability 
of the control plane (an etcd cluster or any other KV store), an operator 
would have to account for extra failure scenarios in which a KV store outage 
affects multiple routers, rather than the outage of a single L3 node, which is the 
case we usually have to account for now.





Just my $.02



Cristian



From: Anna Taraday 
[mailto:akamyshnik...@mirantis.com]
Sent: Monday, February 13, 2017 11:45 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Alternative approaches for L3 HA



In etcd, for each HA router we can store a key which identifies which agent is 
active. L3 agents will "watch" this key.
All these tools have a leader election mechanism which can be used to determine 
the agent that is active for a given HA router.
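
A rough sketch of what that could look like with Tooz - the backend URL, group
name and callback below are illustrative only, not a working Neutron patch:

# Rough sketch only: backend URL, group name and the callback body are
# illustrative assumptions, not Neutron code.
import time

from tooz import coordination

ROUTER_GROUP = b'neutron-l3-ha-router-<router-uuid>'   # hypothetical group name

# Zookeeper is shown here; an etcd or Redis URL would be used the same way,
# as long as the chosen Tooz driver supports groups and leader election.
coordinator = coordination.get_coordinator(
    'zookeeper://192.0.2.10:2181', b'l3-agent-on-node-1')
coordinator.start(start_heart=True)

try:
    coordinator.create_group(ROUTER_GROUP).get()
except coordination.GroupAlreadyExist:
    pass
coordinator.join_group(ROUTER_GROUP).get()


def on_elected(event):
    # Here the agent would transition the router to active: configure the
    # VIP, send gratuitous ARPs and report the new state to the server.
    print('%s is now active for this router' % event.member_id)


coordinator.watch_elected_as_leader(ROUTER_GROUP, on_elected)

while True:
    coordinator.run_watchers()
    time.sleep(1)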



On Mon, Feb 13, 2017 at 7:02 AM zhi <changzhi1...@gmail.com> wrote:

Hi, we are using L3 HA in our production environment now. Router instances 
communicate with each other via the VRRP protocol. In my opinion, although VRRP is a 
control plane thing, the actual VRRP traffic uses the data plane NIC, so 
router namespaces sometimes cannot talk to each other when the data plane is 
busy. If we used etcd (or something else), would every router instance register an 
"id" in etcd?





Thanks

Zhi Chang


--

Regards,
Ann Taraday




Re: [openstack-dev] [octavia] Nominating German Eichberger for Octavia core reviewer

2017-01-22 Thread Kosnik, Lubosz
+1, welcome back.

Lubosz

On Jan 20, 2017, at 2:11 PM, Miguel Lavalle <mig...@mlavalle.com> wrote:

Well, I don't vote here but it's nice to see German back in the community. 
Welcome!

On Fri, Jan 20, 2017 at 1:26 PM, Brandon Logan <brandon.lo...@rackspace.com> wrote:
+1, yes welcome back German.
On Fri, 2017-01-20 at 09:41 -0800, Michael Johnson wrote:
> Hello Octavia Cores,
>
> I would like to nominate German Eichberger (xgerman) for
> reinstatement as an
> Octavia core reviewer.
>
> German was previously a core reviewer for Octavia and neutron-lbaas
> as well
> as a former co-PTL for Octavia.  Work dynamics required him to step
> away
> from the project for a period of time, but now he has moved back into
> a
> position that allows him to contribute to Octavia.  His review
> numbers are
> back in line with other core reviewers [1] and I feel he would be a
> solid
> asset to the core reviewing team.
>
> Current Octavia cores, please respond with your +1 vote or any
> objections.
>
> Michael
>
> [1] http://stackalytics.com/report/contribution/octavia-group/90
>
>


Re: [openstack-dev] [neutron-lbaas][octavia]

2017-01-03 Thread Kosnik, Lubosz
In my opinion this patch should be changed: we should start using project_id 
instead of still keeping the tenant_id property.
All occurrences of project_id in [1] should be fixed.

Lubosz

[1] neutron_lbaas/tests/tempest/v2/scenario/base.py

From: Nir Magnezi 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, January 3, 2017 at 3:37 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [neutron-lbaas][octavia]

I would like to emphasize the importance of this issue.

Currently, all the LBaaS/Octavia gates are up and running (touch wood).
Nevertheless, this bug will become more apparent (aka broken gates) in the next 
release of tempest (if we don't merge this fix beforehand).

The reason is that the issue occurs when you use tempest master,
while our gates currently use tempest tag 13.0.0 (as expected).

Nir

On Tue, Jan 3, 2017 at 11:04 AM, Genadi Chereshnya <gcher...@redhat.com> wrote:
When running the neutron_lbaas scenario tests with the latest Tempest version, we 
fail because of https://bugs.launchpad.net/octavia/+bug/1649083.
I would appreciate it if someone could review the patch that fixes the problem and 
merge it, so that our automation will succeed.
The patch is https://review.openstack.org/#/c/411257/
Thanks in advance,
Genadi



Re: [openstack-dev] [neutron-lbaas][octavia] Error when creating load balancer

2016-12-29 Thread Kosnik, Lubosz
Based on these logs, I can tell you that the problem is with plugging the VIP address. 
You also need to show us the n-cpu logs; there should be some info about what happened, 
because we can see in the logs at line 22 that the client failed with error 500 when 
attaching the network adapter. Maybe you're out of IPs in this subnet?
Without the rest of the logs there is no way to tell exactly what happened.

Regards,
Lubosz.

From: Yipei Niu 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, December 27, 2016 at 9:16 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [neutron-lbaas][octavia] Error when creating load 
balancer

Hi, All,

I failed to create a load balancer on a subnet. The detailed o-cw.log output is 
pasted at http://paste.openstack.org/show/593492/.

Look forward to your valuable comments. Thank you.

Best regards,
Yipei


Re: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver based LBaaSv2

2016-12-13 Thread Kosnik, Lubosz
These features are supported by default in Octavia, so keeping the HAProxy 
namespace driver up and running really loses the whole added value which 
Octavia delivers. You can still work on the NLBaaS v2 project, but you need to 
remember that by default Octavia will be used in the cloud as LBaaS and NLBaaS will 
be deprecated starting with the Pike release - right now we're merging the API into 
Octavia to provide backward compatibility - and in that project only bugs will 
be fixed.
Michael, any thoughts on what I just said?

Regards,
Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

On Dec 13, 2016, at 9:47 AM, Bartek Żurawski <bartekzuraws...@gmail.com> wrote:

Hello,

Just to be sure about adding new features to LBaaS: should we still
propose new features like the ones proposed by Zhi to neutron-lbaas,
or already to Octavia?

@Zhi, yep, I like those two extensions; they will be very helpful for LB.
Have you already done something with that, or are you just asking
for opinions for now?

Bartek

On 8 December 2016 at 05:33, Brandon Logan <brandon.lo...@rackspace.com> wrote:
On Wed, 2016-12-07 at 06:50 -0800, Michael Johnson wrote:
> Lubosz,
>
> I would word that very differently.  We are not dropping LBaaSv2
> support.  It is not going away.  I don't want there to be confusion
> on
> this point.
>
> We are however, moving/merging the API from neutron into Octavia.
> So, during this work the code will be transitioning repositories and
> you will need to carefully synchronize and/or manage the changes in
> both places.
> Currently the API changes have patchsets up in the Octavia
> repository.
> However, the old namespace driver has not yet been migrated over.
I know I've talked about using the namespace driver as a guinea pig for
the nlbaas to octavia shim driver layer, but I didn't know it would be
fully supported in octavia.  This will require a bit more work because
of the callbacks the agent expects to be able to call.

>
> Michael
>
>
> On Tue, Dec 6, 2016 at 8:46 AM, Kosnik, Lubosz <lubosz.kosnik@intel.com> wrote:
> > Hello Zhi,
> > So currently we're working on dropping LBaaSv2 support.
> > Octavia is a big-tent project providing LBaaS in OpenStack, and after merging
> > the LBaaS v2 API into Octavia we will deprecate that project; within the next
> > 2 releases we're planning to completely wipe out that code repository. If you
> > would like to help with LBaaS in OpenStack, you're more than welcome to start
> > working with us on Octavia.
> >
> > Cheers,
> > Lubosz Kosnik
> > Cloud Software Engineer OSIC
> >
> > On Dec 6, 2016, at 6:04 AM, Gary Kotton <gkot...@vmware.com> wrote:
> >
> > Hi,
> > I think that there is a move to Octavia. I suggest reaching out to
> > that
> > community and see how these changes can be added. Sounds like a
> > nice
> > addition
> > Thanks
> > Gary
> >
> > From: zhi <changzhi1...@gmail.com>
> > Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
> > Date: Tuesday, December 6, 2016 at 11:06 AM
> > To: OpenStack List <openstack-dev@lists.openstack.org>
> > Subject: [openstack-dev] [neutron][lbaas] New extensions for
> > HAProxy driver
> > based LBaaSv2
> >
> > Hi, all
> >
> > I am considering add some new extensions for HAProxy driver based
> > Neutron
> > LBaaSv2.
> >
> > Extension 1, multi subprocesses supported. By following this
> > document[1], I
> > think we can let our HAProxy based LBaaSv2 support this feature. By
> > adding
> > this feature, we can enhance loadbalancers performance.
> >
> > Extension 2, http keep-alive supported. By following this
> > document[2], we
> > can make our loadbalancers more effective.
> >
> >
> > Any comments are welcome!
> >
> > Thanks
> > Zhi Chang
> >
> >
> > [1]: http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#c
> > pu-map
> > [2]:
> > http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#option
> > %20http-keep-alive

Re: [openstack-dev] [neutron-lbaas] [octavia]vip failed to be plugged in to the amphorae vm

2016-12-09 Thread Kosnik, Lubosz
Plugging the VIP worked without any problems.
The log tells me that you have a very restrictive timeout configuration; 7 retries 
is a very low value. Please reconfigure this to a much bigger value.
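
If I remember the amphora REST driver options correctly, something along these
lines in octavia.conf gives the controller a much larger retry budget (the values
are only an example):

    [haproxy_amphora]
    # roughly: total wait = connection_max_retries * connection_retry_interval
    connection_max_retries = 300
    connection_retry_interval = 5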

Regards,
Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

On Dec 9, 2016, at 3:46 PM, Wanjing Xu (waxu) <w...@cisco.com> wrote:

I have a stable/mitaka Octavia which has been running OK until today. Whenever I 
create a loadbalancer, the amphora VM is created with the mgmt NIC, but it looks like 
the VIP plugging failed. I can ping the amphora mgmt NIC from the controller (where the 
Octavia processes are running), but it looks like some REST API call into the amphora 
to plug the VIP failed:

Ping works:

[localadmin@dmz-eth2-ucs1]logs> ping 192.168.0.7
PING 192.168.0.7 (192.168.0.7) 56(84) bytes of data.
64 bytes from 192.168.0.7: icmp_seq=1 ttl=64 time=1.11 ms
64 bytes from 192.168.0.7: icmp_seq=2 ttl=64 time=0.461 ms
^C


o-cw.log:

2016-12-09 11:03:54.468 31408 DEBUG 
octavia.controller.worker.tasks.network_tasks [-] Retrieving network details 
for amphora ae80ae54-395f-4fad-b0de-39f17dd9b19e execute 
/opt/stack/octavia/octavia/controller/worker/tasks/network_tasks.py:380
2016-12-09 11:03:55.441 31408 DEBUG octavia.controller.worker.controller_worker 
[-] Task 
'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' 
(76823522-b504-4d6a-8ba7-c56015cb39a9) transitioned into state 'SUCCESS' from 
state 'RUNNING' with result '{u'ae80ae54-395f-4fad-b0de-39f17dd9b19e': 
}' 
_task_receiver 
/usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
2016-12-09 11:03:55.444 31408 DEBUG octavia.controller.worker.controller_worker 
[-] Task 
'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraPostVIPPlug' 
(3b798537-3f20-46a3-abe2-a2c24c569cd9) transitioned into state 'RUNNING' from 
state 'PENDING' _task_receiver 
/usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
2016-12-09 11:03:55.446 31408 DEBUG 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
plug/vip/100.100.100.9 request 
/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:218
2016-12-09 11:03:55.446 31408 DEBUG 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
https://192.168.0.7:9443/0.5/plug/vip/100.100.100.9 request 
/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:221
2016-12-09 11:03:55.452 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:56.458 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:57.462 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:58.466 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:59.470 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:04:00.474 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:04:02.487 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
……
ransitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2016-12-09 11:29:10.509 31408 WARNING 
octavia.controller.worker.controller_worker [-] Flow 
'post-amphora-association-octavia-post-loadbalancer-amp_association-subflow' 
(f7b0d080-830a-4d6a-bb85-919b6461252f) transitioned into state 'REVERTED' from 
state 'RUNNING'
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher [-] Exception 
during message handling: contacting the amphora timed out
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
138, in _dispatch_and_reply
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
185, in _dispatch
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
127, in _do_dispatch
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/octavia/octavia/controller/queue/endpoint.py", line 45, in 
create_load_balancer
2016-12-09 11:29:10.509 31408 ER

Re: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver based LBaaSv2

2016-12-07 Thread Kosnik, Lubosz
Completely true, Michael.

It's just that my mind is already in a state where this project will lose support in 
the next few releases - two is the standard for OpenStack. So personally I don't 
like to work on and contribute to something that will be gone in the future, and not 
in years but in the near future from an OpenStack perspective.

Lubosz

On Dec 7, 2016, at 8:50 AM, Michael Johnson <johnso...@gmail.com> wrote:

Lubosz,

I would word that very differently.  We are not dropping LBaaSv2
support.  It is not going away.  I don't want there to be confusion on
this point.

We are however, moving/merging the API from neutron into Octavia.
So, during this work the code will be transitioning repositories and
you will need to carefully synchronize and/or manage the changes in
both places.
Currently the API changes have patchsets up in the Octavia repository.
However, the old namespace driver has not yet been migrated over.

Michael


On Tue, Dec 6, 2016 at 8:46 AM, Kosnik, Lubosz <lubosz.kos...@intel.com> wrote:
Hello Zhi,
So currently we're working on dropping LBaaSv2 support.
Octavia is a big-tent project providing LBaaS in OpenStack, and after merging
the LBaaS v2 API into Octavia we will deprecate that project; within the next 2
releases we're planning to completely wipe out that code repository. If you
would like to help with LBaaS in OpenStack, you're more than welcome to start
working with us on Octavia.

Cheers,
Lubosz Kosnik
Cloud Software Engineer OSIC

On Dec 6, 2016, at 6:04 AM, Gary Kotton <gkot...@vmware.com> wrote:

Hi,
I think that there is a move to Octavia. I suggest reaching out to that
community and see how these changes can be added. Sounds like a nice
addition
Thanks
Gary

From: zhi <changzhi1...@gmail.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, December 6, 2016 at 11:06 AM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver
based LBaaSv2

Hi, all

I am considering add some new extensions for HAProxy driver based Neutron
LBaaSv2.

Extension 1, multi subprocesses supported. By following this document[1], I
think we can let our HAProxy based LBaaSv2 support this feature. By adding
this feature, we can enhance loadbalancers performance.

Extension 2, http keep-alive supported. By following this document[2], we
can make our loadbalancers more effective.


Any comments are welcome!

Thanks
Zhi Chang


[1]: http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#cpu-map
[2]:
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#option%20http-keep-alive


Re: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver based LBaaSv2

2016-12-06 Thread Kosnik, Lubosz
Hello Zhi,
So currently we're working on dropping LBaaSv2 support.
Octavia is a big-tent project providing LBaaS in OpenStack, and after merging 
the LBaaS v2 API into Octavia we will deprecate that project; within the next 2 
releases we're planning to completely wipe out that code repository. If you would 
like to help with LBaaS in OpenStack, you're more than welcome to start working with 
us on Octavia.

Cheers,
Lubosz Kosnik
Cloud Software Engineer OSIC

On Dec 6, 2016, at 6:04 AM, Gary Kotton <gkot...@vmware.com> wrote:

Hi,
I think that there is a move to Octavia. I suggest reaching out to that 
community and see how these changes can be added. Sounds like a nice addition
Thanks
Gary

From: zhi <changzhi1...@gmail.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, December 6, 2016 at 11:06 AM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver 
based LBaaSv2

Hi, all

I am considering adding some new extensions for the HAProxy driver based Neutron 
LBaaSv2.

Extension 1: multiple subprocesses support. By following this document[1], I 
think we can let our HAProxy based LBaaSv2 support this feature. By adding this 
feature, we can enhance load balancer performance.

Extension 2: HTTP keep-alive support. By following this document[2], we can 
make our load balancers more effective.
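
Roughly, the two extensions would boil down to rendering HAProxy directives like
these (the process count and CPU pinning below are just an example):

    global
        nbproc 4
        cpu-map 1 0
        cpu-map 2 1
        cpu-map 3 2
        cpu-map 4 3

    defaults
        option http-keep-alive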


Any comments are welcome!

Thanks
Zhi Chang


[1]: 
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#cpu-map
[2]: 
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#option%20http-keep-alive


Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-10 Thread Kosnik, Lubosz
Octavia uses its own DB and LBaaS v2 has its own. Because of that, like Michael 
said, we're working on aligning these DBs and we're planning to provide a migration 
mechanism.

Cheers,
Lubosz Kosnik
Cloud Software Engineer OSIC

On Nov 10, 2016, at 1:13 AM, Gary Kotton <gkot...@vmware.com> wrote:

Will the same DB be maintained, or will the LBaaS DB be moved to that of 
Octavia? I am really concerned about this, and I feel that it will cause 
production problems.

From: Kevin Benton <ke...@benton.pub>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Wednesday, November 9, 2016 at 11:43 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS 
retrospective and next steps recap

The people working on the migration are ensuring API compatibility and are even 
leaving in a shim on the Neutron side for some time so you don't even have to 
change endpoints initially. It should be a seamless change.

On Wed, Nov 9, 2016 at 12:28 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
Just please don't make this an LBv3 thing that completely breaks compatibility 
of existing LBs yet again. If it's just a "point the URL endpoint from thing like 
x to thing like y" change in one place, that's OK. I still have v1 LBs in existence 
that I have to deal with, and a backwards-incompatible v3 would just cause me 
to abandon LBaaS altogether, I think, as it would show that the LBaaS stuff is just 
not maintainable.

Thanks,
Kevin

From: Armando M. [arma...@gmail.com]
Sent: Wednesday, November 09, 2016 8:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS 
retrospective and next steps recap


On 9 November 2016 at 05:50, Gary Kotton <gkot...@vmware.com> wrote:
Hi,
What about the neutron-lbaas project? Is this project still alive and kicking until 
the merge is done, or are we going to continue to maintain it? I feel like we 
are between a rock and a hard place here. LBaaS is in production and the migration 
process is not clear. Will Octavia have the same DB models as LBaaS, or 
will there be a migration?
Sorry for the pessimism, but I feel that things are very unclear and that we 
cannot even indicate to our community/consumers what to use/expect.
Thanks
Gary

http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html


On 11/8/16, 1:36 AM, "Michael Johnson" 
mailto:johnso...@gmail.com>> wrote:

Ocata LBaaS retrospective and next steps recap
--

This session lightly touched on the work in the newton cycle, but
primarily focused on planning for the Ocata release and the LBaaS spin
out of neutron and merge into the octavia project [1].  Notes were
captured on the etherpad [1].

The focus of work for Ocata in neutron-lbaas and octavia will be on
the spin out/merge and not new features.

Work has started on merging neutron-lbaas into the octavia project
with API sorting/pagination, quota support, keystone integration,
neutron-lbaas driver shim, and documentation updates.  Work is still
needed for policy support, the API shim to handle capability gaps
(example: stats are by listener in octavia, but by load balancer in
neutron-lbaas), neutron API proxy, a database migration script from
the neutron database to the octavia database for existing non-octavia
load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
the octavia API server.

The room agreed that since we will have a shim/proxy in neutron for
some time, updating the OpenStack client can be deferred to a future
cycle.

There is a lot of concern about Ocata being a short cycle and the
amount of work to be done.  There is hope that additional resources
will help out with this task to allow us to complete the spin
out/merge for Ocata.

We discussed the current state of the active/active topology patches
and agreed that it is unlikely this will merge in Ocata.  There are a
lot of open comments and work to do on the patches.  It appears that
these patches may have been created against an old release and require
significant updating.

Finally there was a question about when octavia would implement
metadata tags.  When we dug into the need for the tags we found that
what was really wanted is a full implementation of the flavors
framework [3] [4].  Some vendors expressed interest in finishing the
flavors framework for Octavia.

Thank you to everyone that participated in our design session and etherpad.

Michael

[1] 
https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
[2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
[3] 

Re: [openstack-dev] [lbaas] [octavia] Proposing Lubosz Kosnik (diltram) as Octavia Core

2016-10-13 Thread Kosnik, Lubosz
Thank you very much for believing that I can be a valuable asset for Octavia.
I will keep working as hard as I do now, or even harder given the added responsibility, 
on making Octavia better day by day.

Lubosz

> On Oct 12, 2016, at 3:48 PM, Michael Johnson  wrote:
> 
> That is quorum from the cores, welcome Lubosz!
> 
> Michael
> 
> 
> On Wed, Oct 12, 2016 at 1:26 PM, Doug Wiegley
>  wrote:
>> +1
>> 
>>> On Oct 10, 2016, at 3:40 PM, Brandon Logan  
>>> wrote:
>>> 
>>> +1
>>> 
>>> On Mon, 2016-10-10 at 13:06 -0700, Michael Johnson wrote:
 Greetings Octavia and developer mailing list folks,
 
 I propose that we add Lubosz Kosnik (diltram) as an OpenStack Octavia
 core reviewer.
 
 His contributions [1] are in line with other cores and he has been an
 active member of our community.  He regularly attends our weekly
 meetings, contributes good code, and provides solid reviews.
 
 Overall I think Lubosz would make a great addition to the core review
 team.
 
 Current Octavia cores, please respond with +1/-1.
 
 Michael
 
 [1] http://stackalytics.com/report/contribution/octavia/90
 


Re: [openstack-dev] [octavia] Multi-node controller testing

2016-08-08 Thread Kosnik, Lubosz
Great work with that multi-node setup, Miguel.
Regarding multi-node: Infra supports a two-node setup, currently used by the 
grenade jobs, but in my opinion we don't have any tests which can cover that 
type of testing. We're still struggling with selecting the proper tool to test 
Octavia from an integration/functional perspective, so it's probably too early to 
make it happen.
Maybe it's a great starting point to finally make a decision about testing tools, and 
there will be a lot of work for you after that, including setting up an Infra 
multi-node job for it.

Cheers,
Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

> On Aug 8, 2016, at 7:04 AM, Miguel Angel Ajo Pelayo  
> wrote:
> 
> Recently, I sent a series of patches [1] to make it easier for
> developers to deploy a multi node octavia controller with
> n_controllers x [api, cw, hm, hk] with an haproxy in front of the API.
> 
> Since this is the way the service is designed to work (with horizontal
> scalability in mind), and we want to have a good guarantee that any
> bug related to such configuration is found early, and addressed, I was
> thinking that an extra job that runs a two node controller deployment
> could be beneficial for the project.
> 
> 
> If we all believe it makes sense, I would be willing to take on this
> work but I'd probably need some pointers and light help, since I've
> never dealt with setting up or modifying existing jobs.
> 
> How does this sound?
> 
> 
> [1] 
> https://review.openstack.org/#/q/status:merged+project:openstack/octavia+branch:master+topic:multinode-devstack
> 


Re: [openstack-dev] [lbaas][octavia] suggestion for today's meeting agenda: How to make the Amphora-agent support additional Linux flavors

2016-06-30 Thread Kosnik, Lubosz
Currently the agent is in the next phase of its life; there is some work in progress to 
change that code.
Because of that, it's the right time to discuss this and find a proper 
way to work with it.
The biggest issue with Octavia is that there is almost no documentation about 
everything you need in order to use this project.
There is a laconic doc about creating new images, so everyone is able to 
build their own image; we're not blocking that behavior.

Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

> On Jun 30, 2016, at 8:01 AM, Ihar Hrachyshka  wrote:
> 
> 
>> On 30 Jun 2016, at 06:03, Kosnik, Lubosz  wrote:
>> 
>> Like Doug said Amphora suppose to be a black box. It suppose to get some 
>> data - like info in /etc/defaults and do everything inside on its own.
>> Everyone will be able to prepare his own implementation of this image 
>> without mixing things between each other.
> 
> That would be correct if the image would not be maintained by the project 
> itself. Then indeed every vendor would prepare their own image, maybe 
> collaborate on common code for that. Since this code is currently in octavia, 
> we kinda need to plug into it for other vendors. Otherwise you pick one and 
> give it a preference.
> 
> But if we can make the agent itself vendor agnostic, so that the only 
> differentiation would happen around components stuffed into the image 
> (kernel, haproxy version, security tweaks, …), then it will be obviously a 
> better path than trying to template the agent for multiple vendors.
> 
> A silly question: why does the agent even need to configure the network using 
> distribution mechanisms and not just calling to ip link and friends?
> 
> Ihar


Re: [openstack-dev] [lbaas][octavia] suggestion for today's meeting agenda: How to make the Amphora-agent support additional Linux flavors

2016-06-29 Thread Kosnik, Lubosz
Like Doug said, the amphora is supposed to be a black box. It is supposed to get some 
data - like info in /etc/defaults - and do everything inside on its own.
Everyone will be able to prepare their own implementation of this image without 
mixing things between each other.
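
As a rough illustration of that idea (the file path, variable names and commands
below are assumptions, not the actual amphora-agent code):

# Rough illustration only: the file path, variable names and commands are
# assumptions, not the real amphora-agent implementation.
import shlex
import subprocess


def read_defaults(path='/etc/default/amphora'):
    """Parse simple KEY=VALUE lines dropped into the image at boot."""
    config = {}
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line and not line.startswith('#') and '=' in line:
                key, value = line.split('=', 1)
                config[key] = value.strip('"')
    return config


def plug_vip(interface, cidr):
    """Configure the VIP NIC with iproute2 instead of distro ifcfg files."""
    for command in ('ip addr add %s dev %s' % (cidr, interface),
                    'ip link set %s up' % interface):
        subprocess.check_call(shlex.split(command))


if __name__ == '__main__':
    settings = read_defaults()
    plug_vip(settings.get('VIP_INTERFACE', 'eth1'), settings['VIP_CIDR'])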

Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

On Jun 29, 2016, at 3:17 PM, Gregory Haynes <g...@greghaynes.net> wrote:

On Wed, Jun 29, 2016, at 02:18 PM, Nir Magnezi wrote:
Hi Greg,

Thanks for the reply, comments inline.

On Wed, Jun 29, 2016 at 9:59 PM, Gregory Haynes <g...@greghaynes.net> wrote:

On Wed, Jun 29, 2016, at 10:26 AM, Nir Magnezi wrote:
Hello,

Lately, I've been working on a fix[1] for the amphora-agent, which currently 
only supports Debian based flavors such as Ubuntu.

The main issues here:
1. NIC hot plugs: Ubuntu's ethX.cfg files look different from the ifcfg-ethX files 
which are expected in Linux flavors such as RHEL, CentOS and Fedora; read more 
in the fix commit msg.
2. The usage of flavor-specific features such as 'upstart'.

I would like to have a discussion about the second bullet mentioned above.
Due to the fact that in Octavia the loadbalancer runs inside of an instance 
(Amphora), there are a few actions that need to take place in the Amphora 
instance boot process:
a. namespace and NIC created.
b. amphora agent starts
c. haproxy (and possibly keepalived) start

The Amphora-agent leverages[2] the capabilities of 'upstart' to make that 
happen, which is a bit problematic if we wish it to work on other flavors.
The default cloud image for Amphora today is Ubuntu, yet there are few more 
options[3] such as CentOS and Fedora.
Unlike the Ubuntu base image, which uses 'sysvinit', The latter two flavors use 
'systemd'.
This creates incompatibility with the jinja2[4][5] templates used by the agent.

The way I see it there are two possible solutions for this:
1. Include a systemd equivalent in the fix[1] that will essentially duplicate 
the functionality mentioned above and work in the other flavors.
2. Have the amphora agent be the only binary that needs to be configured to 
start upon boot, and that agent will take care of plugging namespaces and NICs 
and also spawning the needed processes. This is how it is done in the lbaas and l3 
agents.

While the latter solution looks like a more "clean" design, the trade-off here 
is a bigger change to the amphora agent.

[1] https://review.openstack.org/#/c/331841/
[2] 
https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/listener.py#L128
[3] 
https://github.com/openstack/octavia/blob/master/diskimage-create/diskimage-create.sh#L27
[4] 
https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/templates/upstart.conf.j2
[5] 
https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/templates/sysvinit.conf.j2


Thanks,
Nir

I have an alternative suggestion - Maybe we shouldn't be templating out the 
init scripts? What we are effectively doing here is code-gen which leads to 
problems exactly like this, and fixing it with more code gen actually makes the 
problem more difficult.

The incompatibility with systemd is not due to the usage of templates, and 
code-generated files are a nice and useful tool to have.

Sure, it's not a direct result, but it just shouldn't be necessary here and it 
makes this problem far more complicated than it needs to be. If we weren't 
using templating then supporting non-upstart would be as easy as creating a 
trivial init script and including it in the amphora element (which only requires 
copying a file into that element, done.).


I see two fairly straightforward ways to not require this templating:

1) Use the agent to write out config for the init scripts in to 
/etc/defaults/amphora and have the init scripts consume that file (source 
variables in that file). The init script can then simply be a static file which 
we can even bake in to the image directly.

systemd does not use init scripts, which is why the current code is incompatible 
with the distros I mentioned.

Right, what I am saying is to separate out configuration from the 
init/upstart/systemd files and if necessary source that configuration. This is 
how init/upstart/systemd scripts are written for almost every application for a 
reason and why ubuntu has /etc/defaults and why systemd has things like 
EnvironmentFile. It sounds like the second option is what we're leaning towards, 
though, in which case this isn't needed.



2) Move the code which requires the templating in to another executable which 
the init scripts call out to. e.g. create a amphora-net-init executable that 
runs the same code as in the pre-up section of the upstart script. Then there 
is no need for templating in the init scripts themselves (they will all simply 
call the same executable) and we can also do something like bake init scripts 
directly in to the image.

I'm 

Re: [openstack-dev] [octavia][upgrades] upgrade loadbalancer to new amphora image

2016-06-29 Thread Kosnik, Lubosz
Could you specify what exact use case you have for uploading incompatible images?
In my opinion we should prepare a flow which, like you said, builds a new 
instance, configures everything, adds that amphora into the load balancer and 
removes the old one. That way we will be able to limit the retry to a specific 
amphora, not all load balancers.
Everything depends on whether we are able to do something to the amphora image such 
that it will no longer work in a cluster with older versions.
 
Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

> On Jun 29, 2016, at 11:14 AM, Ihar Hrachyshka  wrote:
> 
> Hi all,
> 
> I was looking lately at upgrades for octavia images. This includes using new 
> images for new loadbalancers, as well as for existing balancers.
> 
> For the first problem, the amp_image_tag option that I added in Mitaka seems 
> to do the job: all new balancers are created with the latest image that is 
> tagged properly.
> 
> As for balancers that already exist, the only way to get them use a new image 
> is to trigger an instance failure, that should rebuild failed nova instance, 
> using the new image. AFAIU the failover process is not currently automated, 
> requiring from the user to set the corresponding port to DOWN and waiting for 
> failover to be detected. I’ve heard there are plans to introduce a specific 
> command to trigger a quick-failover, that would streamline the process and 
> reduce the time needed for the process because the failover would be 
> immediately detected and processed instead of waiting for keepalived failure 
> mode to occur. Is it on the horizon? Patches to review?
> 
> While the approach seems rather promising and may be applicable for some 
> environments, I have several concerns about the failover approach that we may 
> want to address.
> 
> 1. HA assumption. The approach assumes there is another node running 
> available to serve requests while instance is rebuilding. For non-HA 
> amphoras, it’s not the case, meaning the image upgrade process has a 
> significant downtime.
> 
> 2. Even if we have HA, for the time of instance rebuilding, the balancer 
> cluster is degraded to a single node.
> 
> 3. (minor) during the upgrade phase, instances that belong to the same HA 
> amphora may run different versions of the image.
> 
> What’s the alternative?
> 
> One idea I was running with for some time is moving the upgrade complexity 
> one level up. Instead of making Octavia aware of upgrade intricacies, allow 
> it to do its job (load balance), while use neutron floating IP resource to 
> flip a switch from an old image to a new one. Let me elaborate.
> 
> Let’s say we have a load balancer LB1 that is running Image1. In this 
> scenario, we assume that access to LB1 VIP is proxied through a floating ip 
> FIP that points to LB1 VIP. Now, the operator uploaded a new Image2 to glance 
> registry and tagged it for octavia usage. The user now wants to migrate the 
> load balancer function to using the new image. To achieve this, the user 
> follows the steps:
> 
> 1. create an independent clone of LB1 (let’s call it LB2) that has the exact same 
> attributes (members) as LB1.
> 2. once LB2 is up and ready to process requests incoming to its VIP, redirect the 
> FIP to the LB2 VIP.
> 3. now all new flows are immediately redirected to the LB2 VIP, with no downtime 
> (for new flows) due to the atomic nature of the FIP update on the backend (we use 
> iptables-save/iptables-restore to update FIP rules on the router).
> 4. since LB1 is no longer handling any flows, we can deprovision it. LB2 is 
> now the only balancer handling the members.
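
A rough sketch of steps 2 and 3 with python-neutronclient (all IDs below are 
placeholders, and the LBaaS v2 client calls assume the neutron-lbaas extension 
is available):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # LB2 has already been created as a clone of LB1 (same listeners/pools/members).
    lb2 = neutron.show_loadbalancer('LB2-UUID')['loadbalancer']

    # Atomically repoint the floating IP from LB1's VIP port to LB2's VIP port.
    neutron.update_floatingip('FIP-UUID',
                              {'floatingip': {'port_id': lb2['vip_port_id']}})
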
> 
> With that approach, 1) we provide consistent downtime expectations 
> irrespective of the amphora architecture chosen (HA or not); 2) we flip the switch 
> only when the clone is up and ready, so there is no degraded state for the 
> balancer function; 3) all instances in an HA amphora run the same image.
> 
> Of course, it won’t provide zero downtime for existing flows that may already 
> be handled by the balancer function. That’s a limitation that I believe is 
> shared by all approaches currently on the table.
> 
> As a side note, the approach would also work for other lbaas drivers, like 
> namespaces, e.g. in case we want to update haproxy.
> 
> Several questions regarding the topic:
> 
> 1. Are there any drawbacks to the approach? Can we consider it an 
> alternative way of doing image upgrades that could find its way into the 
> official documentation?
> 
> 2. If the answer is yes, then how can I contribute the piece? Should I sync 
> with some other doc-related work that I know is currently ongoing in the team?
> 
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas][octavia] suggestion for today's meeting agenda: How to make the Amphora-agent support additional Linux flavors

2016-06-29 Thread Kosnik, Lubosz
Unfortunately I will only be available by phone for today’s meeting, so I won’t 
be able to discuss this the way I would like to, which is why I’m writing here.

Could you describe what you mean by configuring the agent binary on start?
If I understand correctly, the ideal situation would be like this:
1. We boot the amphora, which starts the agent.
2. The agent calls the worker/metadata server to report that it is up.
3. The agent fetches its configuration (probably JSON) and configures everything 
inside the box:
- with that approach we can prepare multiple implementations of our agent, driver 
style, to configure systemd, sysvinit, or something else (rough sketch below).
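
Purely as a sketch of that driver idea (class and method names are invented 
here, nothing like this exists in the tree today):

    import subprocess


    class SystemdDriver(object):
        """Enable and start a service via systemd."""
        def enable_and_start(self, service):
            subprocess.check_call(['systemctl', 'enable', service])
            subprocess.check_call(['systemctl', 'start', service])


    class SysVInitDriver(object):
        """Enable and start a service via sysvinit/update-rc.d."""
        def enable_and_start(self, service):
            subprocess.check_call(['update-rc.d', service, 'defaults'])
            subprocess.check_call(['service', service, 'start'])


    def get_init_driver():
        # Naive detection; a real agent would more likely get this from the
        # JSON configuration pushed by the worker, as described above.
        with open('/proc/1/comm') as f:
            init = f.read().strip()
        return SystemdDriver() if init == 'systemd' else SysVInitDriver()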

Is this what you were thinking, or do you have some other way you would like to 
organize it?

Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

On Jun 29, 2016, at 10:26 AM, Nir Magnezi <nmagn...@redhat.com> wrote:

Hello,

Lately, I've been working on a fix[1] for the amphora-agent, which currently 
only supports Debian-based flavors such as Ubuntu.

The main issues here:
1. NIC hot plugs: Ubuntu's ethX.cfg files look different from the ifcfg-ethX files 
used by Linux flavors such as RHEL, CentOS and Fedora (see the short example after 
this list; the fix's commit message has more detail).
2. The usage of flavor-specific features such as 'upstart'.
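
For illustration, the same DHCP-configured NIC ends up looking roughly like this 
on the two families (paths and values are just examples):

Debian/Ubuntu, /etc/network/interfaces.d/eth1.cfg:

    auto eth1
    iface eth1 inet dhcp

RHEL/CentOS/Fedora, /etc/sysconfig/network-scripts/ifcfg-eth1:

    DEVICE=eth1
    ONBOOT=yes
    BOOTPROTO=dhcp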

I would like to have a discussion about the second bullet mentioned above.
Because in Octavia the load balancer runs inside an instance (the amphora), there 
are a few actions that need to take place during the amphora instance boot process:
a. the namespace and NIC are created
b. the amphora agent starts
c. haproxy (and possibly keepalived) start

The amphora-agent leverages[2] the capabilities of 'upstart' to make that 
happen, which is a bit problematic if we wish it to work on other flavors.
The default cloud image for the amphora today is Ubuntu, yet there are a few more 
options[3] such as CentOS and Fedora.
Unlike the Ubuntu base image, which uses 'sysvinit', the latter two flavors use 
'systemd'.
This creates an incompatibility with the jinja2[4][5] templates used by the agent.

The way I see it there are two possible solutions for this:
1. Include a systemd equivalent in the fix[1] that will essentially duplicate 
the functionality mentioned above and work on the other flavors.
2. Have the amphora agent be the only binary that needs to be configured to 
start upon boot, and have that agent take care of plugging namespaces and NICs 
and also spawning the needed processes. This is how it is done in the lbaas and 
l3 agents.

While the latter solution looks like the "cleaner" design, the trade-off here 
is a bigger change to the amphora agent.
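
For illustration, option 1 would roughly mean adding a systemd template next to 
the existing upstart/sysvinit jinja2 templates, rendering to something like the 
unit below (the unit name, paths and options here are invented, not what the fix 
actually ships):

    [Unit]
    Description=HAProxy for an Octavia listener
    After=network.target

    [Service]
    ExecStart=/usr/sbin/haproxy -f /var/lib/octavia/<listener-id>/haproxy.cfg
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target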

[1] https://review.openstack.org/#/c/331841/
[2] 
https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/listener.py#L128
[3] 
https://github.com/openstack/octavia/blob/master/diskimage-create/diskimage-create.sh#L27
[4] 
https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/templates/upstart.conf.j2
[5] 
https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/templates/sysvinit.conf.j2


Thanks,
Nir
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia]In amphora plugin_vip(), why cidr and gateway are required but not used?

2016-06-17 Thread Kosnik, Lubosz
Here is a bug for that - https://bugs.launchpad.net/octavia/+bug/1585804
You’re more than welcome to fix this issue.

Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

On Jun 17, 2016, at 6:37 PM, Jiahao Liang <jiahao.li...@oneconvergence.com> wrote:

Added more related topics to the original email.

-- Forwarded message --
From: Jiahao Liang (Frankie) <gzliangjia...@gmail.com>
Date: Fri, Jun 17, 2016 at 4:30 PM
Subject: [openstack-dev][Octavia]In amphora plugin_vip(), why cidr and gateway 
are required but not used?
To: openstack-dev@lists.openstack.org


Hi community,

I am going over the Octavia amphora backend code. There is one thing that really 
confuses me. In 
https://github.com/openstack/octavia/blob/stable/mitaka/octavia/amphorae/backends/agent/api_server/plug.py#L45,
the plug_vip() method doesn't use the cidr and gateway from the REST request. But 
in the haproxy amphora API, those two fields are required values (an assert is 
performed on the server).

What are the design considerations for this API? Could we safely remove these 
two values to avoid ambiguity?

Thank you,
Jiahao Liang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][vpnaas]Question about MPLS VPN

2016-05-26 Thread Kosnik, Lubosz
I had a discussion with a few operators, and after what I heard about VPNaaS I can 
tell that we are not supposed to put effort into that implementation.
Maybe we should work on service VMs instead and prepare an implementation of 
VPNaaS using them, with some prebuilt images like VyOS or others.

Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

> On May 26, 2016, at 9:39 AM, Ihar Hrachyshka  wrote:
> 
> 
>> On 26 May 2016, at 16:23, Kosnik, Lubosz  wrote:
>> 
>> You should read the e-mails on the ML. VPNaaS will be removed from the repo in 
>> the next 6 months. You need to look into something else, like running a VyOS 
>> image, pfSense or another appliance.
> 
> Strictly speaking, vpnaas is on probation right now, and if interested 
> parties actually revive the project, it may stay past those 6 months. That 
> said, I haven’t heard about anyone stepping in since the summit.
> 
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][vpnaas]Question about MPLS VPN

2016-05-26 Thread Kosnik, Lubosz
You should read the e-mails on the ML. VPNaaS will be removed from the repo in 
the next 6 months. You need to look into something else, like running a VyOS 
image, pfSense or another appliance.

Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

> On May 26, 2016, at 1:50 AM, zhangyali (D)  wrote:
> 
> Hi all,
> 
> I am interested in the VPNaaS project in Neutron. Now I notice that only the 
> IPsec tunnel type has been completed, but other types of VPN, such as MPLS/BGP, 
> have not. I'd like to know how the MPLS/BGP VPN work is going. What is the 
> mechanism, and what extra work needs to be done? 
> 
> Thanks.
> 
> Best,
> Yali
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Social at the summit

2016-04-27 Thread Kosnik, Lubosz
+1 from me also :)


> On Apr 27, 2016, at 10:29 AM, Thomas Morin  wrote:
> 
> +1 !
> 
> Mon Apr 25 2016 10:55:33 GMT-0500 (CDT), Kyle Mestery:
>> Ihar, Henry and I were talking and we thought Thursday night makes sense for 
>> a Neutron social in Austin. If others agree, reply on this thread and we'll 
>> find a place.
>> 
>> Thanks!
>> Kyle
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 HA testing on scale

2016-04-18 Thread Kosnik, Lubosz
Great work Ann.
Testing at scale is not that problematic thanks to the Cloud For All 
project.
Here [1] you can request a multi-node cluster which you can use to
perform the tests. The exact requirements are specified on that website.

[1] http://osic.org

Regards,
Lubosz “diltram” Kosnik

On Apr 18, 2016, at 10:42 AM, John Schwarz <jschw...@redhat.com> wrote:

This is some awesome work, Ann. It's very neat to see that all the
work on the races we've struggled with w.r.t. the l3 scheduler has paid off. I
would definitely like to see how these results are affected by
https://review.openstack.org/#/c/305774/ but understandably 49
physical nodes are hard to come by.

Also, we should figure out how best to handle the issue Ann found (which is
tracked at https://review.openstack.org/#/c/305774/). Specifically,
reproducing it should be our goal.

John.

On Mon, Apr 18, 2016 at 5:15 PM, Anna Kamyshnikova <akamyshnik...@mirantis.com> wrote:
Hi guys!

As a developer I use Devstack or a multinode OpenStack installation (4-5
nodes) for work, but these are "abstract" environments, where you are not
able to perform some scenarios because the machine is not powerful enough. But
it is really important to understand the issues that real deployments have.

Recently I've performed testing of L3 HA on a scale environment of 49 nodes
(3 controllers, 46 computes) running Fuel 8.0. On this environment I ran shaker
and rally tests and also performed some manual destructive scenarios. I think
it is very important to share these results. Ideally, we should collect
statistics for different configurations each release and compare them to make
sure that we are heading the right way.

The results of the shaker and rally tests are at [1]. I put a detailed report in
a google doc [2]. I would appreciate all comments on these results.

[1] - http://akamyshnikova.github.io/neutron-benchmark-results/
[2] -
https://docs.google.com/a/mirantis.com/document/d/1TFEUzRRlRIt2HpsOzFh-RqWwgTzJPBefePPA0f0x9uw/edit?usp=sharing

Regards,
Ann Kamyshnikova
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
John Schwarz,
Red Hat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

