Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Jeremy Stanley
On 2015-04-15 11:06:20 +0200 (+0200), Thierry Carrez wrote:
 And the doc is indeed pretty clear. I assumed requirements.txt would
 describe... well... requirements. But like Robert said, they are meant to
 describe specific deployments (it should really have been named
 deployment.txt, or at least dependencies.txt).

It may also just be that we overloaded the meaning of that filename
convention without realizing. Rewind to a couple years ago we had
essentially the same file but it was called tools/pip-requires
instead. I wonder if continuing to have it called something else
would have been less confusing to the Python developer community,
but the damage is done now.

Ultimately we just want a way to maintain a list of application or
library dependencies such that when someone runs pip
install they get a fully-working installation without having to know
to run additional commands, and for us to be able to keep that list
in a machine-parsable file which isn't also source code fed to a
Turing-complete interpreter.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] naming of the project

2015-04-15 Thread Richard Raseley

Emilien Macchi wrote:

Hi all,

I sent a patch to openstack/governance to move our project under the big
tent, and it came up [1] that we should decide on a project name and be
careful about trademark issues with the Puppet name.

I would like to hear from Puppet Labs whether there is any issue with using
Puppet in the project title; I have also opened a new etherpad so people can
suggest names: https://etherpad.openstack.org/p/puppet-openstack-naming

Thanks,

[1] https://review.openstack.org/#/c/172112/1/reference/projects.yaml,cm


Emilien,

Thank you for driving this conversation. I can forward this on to people 
internally to find out if there are any issues with using the Puppet name.


Regards,

Richard Raseley

SysOps Engineer
Puppet Labs



Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Ken Giusti
On Tue, Apr 14, 2015 at 6:23 PM, Joshua Harlow harlo...@outlook.com wrote:
 Ken Giusti wrote:

 Just to be clear: you're asking specifically about the 0-10 based
 impl_qpid.py driver, correct?   This is the driver that is used for
 the qpid:// transport (aka rpc_backend).

 I ask because I'm maintaining the AMQP 1.0 driver (transport
 amqp://) that can also be used with qpidd.

 However, the AMQP 1.0 driver isn't yet Python 3 compatible due to its
 dependency on Proton, which has yet to be ported to python 3 - though
 that's currently being worked on [1].

 I'm planning on porting the AMQP 1.0 driver once the dependent
 libraries are available.

 [1]: https://issues.apache.org/jira/browse/PROTON-490


 What's the expected date on this? It appears this blocks the python 3
 work as well... That issue hasn't been updated since Nov 2014, which
 doesn't inspire much confidence (especially for what appear to be
 mostly small patches).


Good point.  I reached out to the bug owner.  He got it 'mostly
working' but got hung up on porting the proton unit tests.  I've
offered to help, and he's good with that.  I'll make it a
priority to move this along.

In terms of availability - proton tends to do releases about every 4-6
months.  They just released 0.9, so the earliest availability would be
in that 4-6 month window (assuming that should be enough time to
complete the work).   Then there's the time it will take for the
various distros to pick it up...

so, definitely not 'real soon now'. :(
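For readers unfamiliar with the two drivers being discussed, selection happens via the service configuration; roughly like this (option names assumed from Kilo-era oslo.messaging, shown only for orientation):

```ini
# Legacy AMQP 0-10 driver (the one at risk of deprecation):
#   rpc_backend = qpid     -> qpid:// transport, impl_qpid.py
# AMQP 1.0 driver (Proton-based), also usable with qpidd:
#   rpc_backend = amqp     -> amqp:// transport
[DEFAULT]
rpc_backend = amqp
```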



 On Tue, Apr 14, 2015 at 1:22 PM, Clint Byrum cl...@fewbar.com wrote:

 Hello! There's been some recent progress on python3 compatibility for
 core libraries that OpenStack depends on[1], and this is likely to open
 the flood gates for even more python3 problems to be found and fixed.

 Recently a proposal was made to make oslo.messaging start to run python3
 tests[2], and it was found that qpid-python is not python3 compatible
 yet.

 This presents us with questions: Is anyone using QPID, and if so, should
 we add gate testing for it? If not, can we deprecate the driver? In the
 most recent survey results I could find [3] I don't even see message
 broker mentioned, whereas Databases in use do vary somewhat.

 Currently it would appear that only oslo.messaging runs functional tests
 against QPID. I was unable to locate integration testing for it, but I
 may not know all of the places to dig around to find that.

 So, please let us know if QPID is important to you. Otherwise it may be
 time to unburden ourselves of its maintenance.

 [1] https://pypi.python.org/pypi/eventlet/0.17.3
 [2] https://review.openstack.org/#/c/172135/
 [3]
 http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014





-- 
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Sean Dague
On 04/12/2015 06:43 PM, Robert Collins wrote:
 Right now we do something that upstream pip considers wrong: we make
 our requirements.txt be our install_requires.
 
 Upstream there are two separate concepts.
 
 install_requires, which is meant to document what *must* be
 installed to import the package, and should encode any mandatory
 version constraints while being as loose as otherwise possible. E.g.
 if package A depends on package B version 1.5 or above, it should say
 B>=1.5 in A's install_requires. It should not specify maximum
 versions except when those are known to be a problem: it shouldn't
 borrow trouble.
 
 deploy requirements - requirements.txt - which are meant to be *local
 to a deployment*, and are commonly expected to specify very narrow (or
 even exact fit) versions.
 
 What pbr, which nearly all (if not all) OpenStack projects use, does is
 map the contents of requirements.txt into install_requires. And then
 we use the same requirements.txt in our CI to control what's deployed
 in our test environment[*], and there we often have tight constraints
 like the ones seen here -
 http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n63
 
 I'd like to align our patterns with those of upstream, so that we're
 not fighting our tooling so much.
 
 Concretely, I think we need to:
  - teach pbr to read install_requires from setup.cfg, not requirements.txt
  - when there are requirements in setup.cfg, stop reading requirements.txt
  - separate out the global install_requires from the global CI
 requirements, and update our syncing code to be aware of this
 
 Then, setup.cfg contains more open requirements suitable for being on
 PyPI, requirements.txt is the local CI set we know works - and can be
 much more restrictive as needed.
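A sketch of what the proposed split could look like (the setup.cfg syntax shown here is illustrative only; the exact form would be settled in the spec):

```ini
# setup.cfg -- loose install_requires, suitable for PyPI
# (syntax illustrative, pending the spec)
[metadata]
name = example-project

[options]
install_requires =
    oslo.config>=1.9.3

# requirements.txt -- the narrow, known-good CI set:
#   oslo.config==1.9.3
```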
 
 Thoughts? If there's broad apathy-or-agreement I can turn this into a
 spec for fine coverage of ramifications and corner cases.

I'm definitely happy someone else is diving in on here, just beware the
dragons, there are many.

I think some of the key problems are the following (lets call these the
requirements requirements):

== We would like to be able to install multiple projects into a single
devstack instance, and have all services work.

This is hard because:

1. these are multiple projects so pip can't resolve all requirements at
once to get to a solved state (also, optional dependencies in particular
configs mean these can be installed later)

2. pip's solver ignores setup_requires -
https://github.com/pypa/pip/issues/2612#issuecomment-91114298
which means we can get inconsistent results

3. doing this iteratively in projects can cause the following to happen

A requires B>=1.0,<2.0
C requires B>=1.2

pip install C can make the earlier pip install A invalid: pip will
happily install the newest B>=1.2, which may violate A's <2.0 cap. This
can end up in a failure of a service to start (if pkg_resources is
actually checking things), or very subtle bugs later.
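The pkg_resources check mentioned here can be run by hand; a minimal sketch (the distribution name "A" from the example above is hypothetical; any installed distribution name works):

```python
# Ask pkg_resources whether an installed distribution's requirement set
# is still satisfiable. VersionConflict is the inconsistent-install case
# described above; DistributionNotFound is a missing dependency.
import pkg_resources

def check_dist(name):
    try:
        pkg_resources.require(name)
        return "ok"
    except pkg_resources.VersionConflict as exc:
        return "conflict: %s" % exc
    except pkg_resources.DistributionNotFound as exc:
        return "missing: %s" % exc
```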

Today global-requirements attempts to address this by continuously
narrowing the requirements definitions for everything we have under our
control so that pip is living in a rubber room and can only get an
answer we know works.



== However this has exposed an additional issue: libraries not
released at release time

Way more things are getting g-r syncs than top level projects.
Synchronizing requirements for things that all release at the same time
makes a lot of sense. However we're synchronizing requirements into
libraries that release at different cadence. This has required all
libraries to also have stable/ branches, for requirements matching.

In an ideal world, libraries would have very broad requirements with no
caps in them; non-library projects would have narrower requirements that
we know work.


== End game?

*If* pip install took into account the requirements of everything
already installed, like apt or yum do, and resolved accordingly
(including saying that's not possible unless you uninstall or upgrade
X), we'd be able to pip install and get a working answer at the end. Maybe?

Honestly, there are so many fixes on fixes here to our system, I'm not
sure even this would fix it.



-- 
Sean Dague
http://dague.net



Re: [openstack-dev] FW: [Neutron] - Joining the team - interested in a Debian Developer and experienced Python and Network programmer?

2015-04-15 Thread Neil Jerram

Hi Matt,

I just re-read this thread, including your intro below.  You might be 
interested in what we're doing in the Calico project [1][2], as it uses 
some of the same ideas as the example you describe below, notably 
importing TAP interface routes into bird and bird6.


[1] http://www.projectcalico.org/
[2] http://docs.projectcalico.org/en/latest/index.html

I'm currently working through factoring out the constituent ideas, and how 
they might mesh with or complement other existing OpenStack concepts such 
as DVR.


Regards,
Neil


On 15/04/15 06:55, Matt Grant wrote:

Hi Vikram,

I am very interested in this, however can't do everything for free!

I believe that bird would be a better fit than Zebra/Quagga, as it is
just 2 processes to be launched in a network namespace.  They are also
easily controlled and reloaded via the birdc/birdc6 control binaries, which
lend themselves to being called from Python.

Could let you let me know if Huawei are interested in financially
supporting this please?

It is doable, and would be useful for smaller deployments.  It can be
made part of the new ML3 that is proposed.

Looking forward to your answer!

Best Regards,

Matt Grant

On Tue, 2015-04-14 at 11:58 +, Vikram Choudhary wrote:

Hi Matt,

Can you please let me know about your views on this proposal.

Thanks
Vikram

-Original Message-
From: Vikram Choudhary
Sent: 10 April 2015 10:40
To: 'm...@mattgrant.net.nz'
Cc: Kalyankumar Asangi; Dhruv Dhody; Kyle Mestery; 'Mathieu Rohon'; Dongfeng (C)
Subject: RE: [openstack-dev] [Neutron] - Joining the team - interested in a 
Debian Developer and experienced Python and Network programmer?

Hi Matt,

Welcome to Openstack:)

I was thinking of supporting an open vRouter for OpenStack Neutron. Currently 
a few vendor implementations exist, but they are not open source. I feel it 
would be good if we could introduce Zebra/Quagga for Neutron. Since you have 
expertise with these, I feel we could do this much more easily.

Please let me know about your views in this regard.

Thanks
Vikram

-Original Message-
From: Matt Grant [mailto:m...@mattgrant.net.nz]
Sent: 09 April 2015 12:44
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] - Joining the team - interested in a Debian 
Developer and experienced Python and Network programmer?

Hi!

I am just wondering what the story is about joining the neutron team.
Could you tell me if you are looking for new contributors?

Previously I have programmed OSPFv2 in Zebra/Quagga, and worked as a router 
developer for Allied Telesyn.  I also have extensive Python programming 
experience, having worked on the DNS Management System.

I have been experimenting with IPv6 since 2008 on my own home network, and I am 
currently installing a Juno Openstack cluster to learn how things tick.

Have you guys ever figured out how to do a hybrid L3 North/South Neutron router 
that propagates tenant routes and networks into OSPF/BGP via a routing daemon, 
and uses floating MAC addresses/costed flow rules via OVS to fail over to a hot 
standby router? There are practical use cases for such a thing in smaller 
deployments.

I have a single stand-alone example working by turning off neutron-l3-agent 
network namespace support, and importing the connected interface and static 
routes into Bird and Birdv6. The AMQP connection back to the neutron-server is 
via the upstream interface and is secured via transport-mode IPSEC (just easier 
than bothering with https/SSL).
Bird looks easier to run from neutron than a multi-process Quagga 
implementation, as it is a single process.  Incidentally, I am running this in 
an LXC container.
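For the curious, the Bird side of a setup like that might look roughly like this (a sketch only; the interface patterns and OSPF area are placeholders, not taken from the setup described above):

```
# bird.conf fragment: export connected tenant interface routes and
# statics into OSPF (names illustrative)
protocol direct {
    interface "tap*", "qr-*";
}
protocol static {
    # tenant static routes would be injected here
}
protocol ospf {
    export all;
    area 0.0.0.0 {
        interface "eth0";
    };
}
```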

Could someone please point me in the right direction.  I would love to be in 
Vancouver :-)

Best Regards,

--
Matt Grant,  Debian and Linux Systems Administration and Consulting
Mobile: 021 0267 0578
Email: m...@mattgrant.net.nz








Re: [openstack-dev] [Fuel] Several nominations for fuel project cores

2015-04-15 Thread Evgeniy L
1/ +1
2/ +1
3/ +1

On Tue, Apr 14, 2015 at 2:45 PM, Aleksey Kasatkin akasat...@mirantis.com
wrote:

 1/ +1
 2/ +1
 3/ +1


 Aleksey Kasatkin


 On Tue, Apr 14, 2015 at 12:26 PM, Tatyana Leontovich 
 tleontov...@mirantis.com wrote:


 3/ +1

 On Tue, Apr 14, 2015 at 11:49 AM, Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:

 +1 for separating.

 Let's follow the formal well established process.

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Tue, Apr 14, 2015 at 10:32 AM, Igor Kalnitsky 
 ikalnit...@mirantis.com wrote:

 Dmitry,

 1/ +1

 2/ +1

 3/ +1

 P.S: Dmitry, please send one mail per nomination next time. It's much
 easier to vote for each candidate in separate threads. =)

 Thanks,
 Igor

 On Mon, Apr 13, 2015 at 4:24 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:
  Hi,
 
  1) I want to nominate Vladimir Sharshov to fuel-astute core. We
 hardly need
  more core reviewers here. At the moment Vladimir is one of the main
  contributors and reviewers in astute.
 
  2) I want to nominate Alexander Kislitsky to fuel-stats core. He is
 the lead
  of this feature and one of the main authors in this repo.
 
  3) I want to nominate Dmitry Shulyak to fuel-web and fuel-ostf cores.
 He is
  one of the main contributors and reviewers in both repos.
 
  Core reviewers, please reply with +1/-1 for each nomination.
 
 


Re: [openstack-dev] [barbican] Utilizing the KMIP plugin

2015-04-15 Thread John Wood
Hello Christopher,

I’m glad you are making progress. I’m including two folks that worked on the 
KMIP plugin to see if they can help with your error diagnosis.

Thanks,
John


From: Christopher N Solis cnso...@us.ibm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, April 14, 2015 at 10:21 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [barbican] Utilizing the KMIP plugin
Subject: Re: [openstack-dev] [barbican] Utilizing the KMIP plugin


Hey John.
Thanks!
You were right. It was reading the config from the /root directory because I 
switched to the root user.
After switching back to the normal user it is reading the correct config file 
again.
It is trying to use the kmip plugin now.

However, I cannot make a request to the kmip plugin because of an SSL error:

2015-04-14 10:02:26,219 - barbican.plugin.kmip_secret_store - ERROR - Error 
opening or writing to client
Traceback (most recent call last):
  File /home/swift/barbican/barbican/plugin/kmip_secret_store.py, line 167, 
in generate_symmetric_key
self.client.open()
  File 
/home/swift/.pyenv/versions/barbican27/lib/python2.7/site-packages/kmip/services/kmip_client.py,
 line 86, in open
self.socket.connect((self.host, self.port))
  File /home/swift/.pyenv/versions/2.7.6/lib/python2.7/ssl.py, line 333, in 
connect
self._real_connect(addr, False)
  File /home/swift/.pyenv/versions/2.7.6/lib/python2.7/ssl.py, line 314, in 
_real_connect
self.ca_certs, self.ciphers)
SSLError: [Errno 0] _ssl.c:343: error::lib(0):func(0):reason(0)

I believe there is a problem in the KMIP plugin section of the barbican-api.conf 
file:
keyfile = '/path/to/certs/cert.key'
certfile = '/path/to/certs/cert.crt'
ca_certs = '/path/to/certs/LocalCA.crt'

What exactly is each variable supposed to contain?
I have keyfile and certfile being a self-signed certificate and a 2048-bit RSA 
key respectively for barbican to use, and
ca_certs is the kmip plugin's certificate for barbican to trust. Does this 
setup sound right?
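One way to isolate an opaque SSLError like the one above is to drive the same key/cert/CA files through Python's ssl module directly, outside Barbican and the KMIP client (host, port, and paths below are placeholders):

```python
# Attempt a TLS handshake against the KMIP server using the same
# client key, certificate, and CA bundle barbican is configured with.
# If this fails, the problem is in the cert material, not the plugin.
import socket
import ssl

def check_tls(host, port, keyfile, certfile, ca_certs):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(ca_certs)       # CA we trust for the server
    ctx.load_cert_chain(certfile, keyfile)    # our client identity
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```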

Regards,
Christopher Solis


From: John Wood john.w...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: 04/10/2015 07:24 PM
Subject: Re: [openstack-dev] [barbican] Utilizing the KMIP plugin





Hello Christopher,

It does seem that configs are being read from another location. Try to remove 
that copy in your home directory (so just keep the /etc location). If you see 
the same issue, try to rename your /etc/barbican/barbican-api.conf file to 
something else. Barbican should crash, probably with a No SQL connection error.

Also, double-check the ‘kmip_plugin’ setting in setup.cfg as per below, and try 
running ‘pip install -e .’ again in your virtual environment.

FWIW, this CR adds better logging of plugin errors once the loading problem you 
have is figured out: https://review.openstack.org/#/c/171868/

Thanks,
John


From: Christopher N Solis cnso...@us.ibm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, April 9, 2015 at 1:55 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [barbican] Utilizing the KMIP plugin
Subject: Re: [openstack-dev] [barbican] Utilizing the KMIP plugin

Hey John.
Thanks for letting me know about the error. But I think my configuration is not 
picking up the kmip_plugin selection.
In my barbican-api.conf file in /etc/barbican I have set 
enabled_secretstore_plugins = kmip_plugin

However, I don't think it is creating a KMIPSecretStore instance.
I edited the code in kmip_secret_store.py and put a breakpoint at the very 
beginning of the init function.
When I made a barbican request to store a secret, it did not stop at the 
breakpoint at all.
I put another breakpoint in the store_crypto.py file inside the init function 
for the StoreCryptoAdapterPlugin, and I
was able to hit that breakpoint.

So even though in my barbican-api.conf file I specified kmip_plugin it seems to 
be using the store_crypto plugin instead.

Is there something that might cause this to happen?
I also want to note that my code has the most up to date pull from the 
community code.

Here's what my /etc/barbican/barbican-api.conf file has in it:

# = Secret Store Plugin ===

Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-15 Thread Veiga, Anthony
Miguel,
As a telco operator, who is active in the WG, I am absolutely an interested 
party for QoS.  I’d be willing to hop between the two of them if absolutely 
necessary (it’s IRC, after all) but would prefer they not overlap if possible. 
Thanks!
-Anthony

On Apr 15, 2015, at 6:39 , Miguel Angel Ajo Pelayo 
mangel...@redhat.com wrote:

I saw Mathieu Rohon's message in the mailing list archive, but it didn’t reach 
my inbox
for some reason:


Hi,

It will overlap with the Telco Working group weekly meeting [1]. It's too
bad, since Qos is a big interest for Telco Cloud Operator!

Mathieu

[1]https://wiki.openstack.org/wiki/TelcoWorkingGroup#Meetings


My intention was to set the meeting one hour earlier, but it seems the DST 
change confused me; I’m very sorry. I’m OK with moving the meeting 
1 hour later (15:00 UTC) for future meetings, as long as it still works for 
other people interested in the QoS topic.

Mathieu, I’m not sure if people from the telco meeting would be interested in 
participation on this meeting, but my participation on the TWG meeting would 
probably help getting everyone in sync.


Best,

Miguel Ángel

On 14/4/2015, at 10:43, Miguel Angel Ajo Pelayo 
mangel...@redhat.com wrote:

Ok, after one week, looks like the most popular time slot is B,
that is 14:00 UTC / Wednesdays.

I’m proposing the first meeting for Wednesday, Apr 22nd, 14:00 UTC, in 
#openstack-meeting-2.

Tomorrow (Apr 15th / 14:00 UTC) is a bit early since the announcement, so
I will join #openstack-meeting-2 while working on the agenda for next week; 
feel free to join
if you want/have time.




On 9/4/2015, at 22:43, Howard, Victor 
victor_how...@cable.comcast.com wrote:

I prefer Timeslot B, thanks for coordinating.  I would be interested in helping 
out in any way with the design session let me know!

From: Sandhya Dasu (sadasu) sad...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, April 7, 2015 12:19 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

Hi Miguel,
Both time slots work for me. Thanks for rekindling this effort.

Thanks,
Sandhya

From: Miguel Ángel Ajo majop...@redhat.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, April 7, 2015 1:45 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

On Tuesday, 7 de April de 2015 at 3:14, Kyle Mestery wrote:
On Mon, Apr 6, 2015 at 6:04 PM, Salvatore Orlando 
sorla...@nicira.com wrote:


On 7 April 2015 at 00:33, Armando M. 
arma...@gmail.com wrote:

On 6 April 2015 at 08:56, Miguel Ángel Ajo 
majop...@redhat.com wrote:
I’d like to co-organized a QoS weekly meeting with Sean M. Collins,

In the last few years, interest in QoS support has increased. Sean has 
been leading
this effort [1], and we believe we should reach a consensus about how to 
model an extension
to let vendor plugins implement QoS capabilities on network ports and tenant 
networks, and
how to extend the agents and the reference implementation, among others [2]

As you surely know, so far every attempt to achieve a consensus has failed in a 
pretty miserable way.
This mostly because QoS can be interpreted in a lot of different ways, both 
from the conceptual and practical perspective.
Yes, I’m fully aware of it, it was also a new feature, so it was out of scope 
for Kilo.
It is important in my opinion to clearly define the goals first. For instance a 
simple extensions for bandwidth limiting could be a reasonable target for the 
Liberty release.
I quite agree here, but IMHO, as you said, it’s a quite open field (limiting, 
guaranteeing,
marking, traffic shaping...). We should do our best to define a model that 
allows us
to build that up in the future without huge changes; on the API side I guess 
microversioning
is going to help with API evolution.

Also, at some point, we should/could involve the nova folks, for 
example, to define
port flavors that can be associated with nova
instance flavors, providing them
1) different types of network port speeds/guarantees/priorities,
2) the ability to schedule instances/ports in coordination, to be able to meet 
specified guarantees.

yes, complexity can skyrocket fast.
Moving things such as ECN into future work is the right thing to do in my 
opinion. Attempting to define a 

Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-15 Thread Neil Jerram

Hi again Joe, (+ list)

On 11/04/15 02:00, joehuang wrote:

Hi, Neil,

See inline comments.

Best Regards

Chaoyi Huang


From: Neil Jerram [neil.jer...@metaswitch.com]
Sent: 09 April 2015 23:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?

Hi Joe,

Many thanks for your reply!

On 09/04/15 03:34, joehuang wrote:

Hi, Neil,

  In theory, Neutron is like a broadcast domain: for example, enforcement of DVR and 
security groups has to touch each host where a VM of the project resides. Even using an SDN 
controller, touching each such host is inevitable. If there are plenty of physical hosts, for 
example 10k, inside one Neutron deployment, it's very hard to overcome the broadcast-storm 
issue under concurrent operation; that's the bottleneck for scalability of Neutron.


I think I understand that in general terms - but can you be more
specific about the broadcast storm?  Is there one particular message
exchange that involves broadcasting?  Is it only from the server to
agents, or are there 'broadcasts' in other directions as well?

[[joehuang]] For example, L2 population, security group rule updates, DVR route 
updates. Both directions, in different scenarios.


Thanks.  In case it's helpful to see all the cases together, 
sync_routers (from the L3 agent) was also mentioned in other part of 
this thread.  Plus of course the liveness reporting from all agents.



(I presume you are talking about control plane messages here, i.e.
between Neutron components.  Is that right?  Obviously there can also be
broadcast storm problems in the data plane - but I don't think that's
what you are talking about here.)

[[joehuang]] Yes, control plane here.


Thanks for confirming that.


We need a layered architecture in Neutron to solve the broadcast-domain bottleneck of 
scalability. The test report from OpenStack cascading shows that through a layered 
architecture, Neutron cascading, Neutron can support up to million-level ports and 
100k-level physical hosts. You can find the report here: 
http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers


Many thanks, I will take a look at this.


It was very interesting, thanks.  And by following through your links I 
also learned more about Nova cells, and about how some people question 
whether we need any kind of partitioning at all, and should instead 
solve scaling/performance problems in other ways...  It will be 
interesting to see how this plays out.


I'd still like to see more information, though, about how far people 
have scaled OpenStack - and in particular Neutron - as it exists today. 
 Surely having a consensus set of current limits is an important input 
into any discussion of future scaling work.


For example, Kevin mentioned benchmarking where the Neutron server 
processed a liveness update in 50ms and a sync_routers in 300ms. 
Suppose the liveness update time is 50ms (since I don't know in detail 
what that means) and agents report liveness every 30s.  Does that mean 
that a single Neutron server can only support 600 agents?
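The arithmetic behind that question, for the record (assuming strictly serial processing, which is a deliberate oversimplification; real servers overlap work):

```python
# If each liveness report occupies the server for ~50 ms and every agent
# reports every 30 s, a serial server saturates at
# report_interval / processing_time agents.
report_interval_s = 30.0
processing_time_s = 0.050
max_agents = report_interval_s / processing_time_s
print(int(max_agents))  # 600
```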


I'm also especially interested in the DHCP agent, because in Calico we 
have one of those on every compute host.  We've just run tests which 
appeared to be hitting trouble from just 50 compute hosts onwards, and 
apparently because of DHCP agent communications.  We need to continue 
looking into that and report findings properly, but if anyone already 
has any insights, they would be much appreciated.


Many thanks,
Neil



Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-15 Thread Miguel Angel Ajo Pelayo
Ok,

1) #openstack-meeting-2 doesn’t exist (#openstack-meeting-alt is it)

2) and not only are we colliding with the TWG meeting,
but all the meeting rooms starting at UTC 14:30 are busy.

3) If we move -30m (UTC 13:30) then we could use meeting room
#openstack-meeting-3  

 before the neutron drivers meeting, removing some of the overlap
with the TWG meeting.
But I know it’s an awful time (yet more) for anyone in the USA west coast.

What do you think?

#openstack-meeting-3 @ UTC 13:30 sounds good for everybody, or should we 
propose some
other timeslot?

What a wonderful meeting organizer I am… :/

Best,
Miguel Ángel

Unless we’re able to live with 30min, we may need to move the meeting 
 On 15/4/2015, at 15:26, Veiga, Anthony anthony_ve...@cable.comcast.com 
 wrote:
 
 Miguel,
 As a telco operator, who is active in the WG, I am absolutely an interested 
 party for QoS.  I’d be willing to hop between the two of them if absolutely 
 necessary (it’s IRC, after all) but would prefer they not overlap if 
 possible. Thanks!
 -Anthony
 
 On Apr 15, 2015, at 6:39 , Miguel Angel Ajo Pelayo mangel...@redhat.com 
 mailto:mangel...@redhat.com wrote:
 
 I saw Mathieu Rohon message on the mail list archive, but it didn’t reach my 
 inbox
 for some reason:
 
 Hi,
 
 It will overlap with the Telco Working group weekly meeting [1]. It's too
 bad, since Qos is a big interest for Telco Cloud Operator!
 
 Mathieu
 
 [1]https://wiki.openstack.org/wiki/TelcoWorkingGroup#Meetings 
 https://wiki.openstack.org/wiki/TelcoWorkingGroup#Meetings
 My intention was to set the meeting one hour earlier, but it seems that the 
 DST time changes got to confuse me, I’m very sorry. I’m ok with moving the 
 meeting 1 hour later (15:00 UTC) for future meetings, as long as it still 
 works for other people interested in the QoS topic.
 Mathieu, I’m not sure if people from the telco meeting would be interested 
 in participation on this meeting, but my participation on the TWG meeting 
 would probably help getting everyone in sync.
 
 Best, 
 Miguel Ángel
 
 On 14/4/2015, at 10:43, Miguel Angel Ajo Pelayo mangel...@redhat.com 
 mailto:mangel...@redhat.com wrote:
 
 Ok, after one week, looks like the most popular time slot is B,
 that is 14:00 UTC / Wednesdays.
 
 I’m proposing first meeting for Wednesday / Apr 22th 14:00 UTC / 
 #openstack-meeting-2.
 
Tomorrow (Apr 15th / 14:00 UTC) it’s a bit early since the announcement, so 
 I will join #openstack-meeting-2 while working on the agenda for next week, 
 feel free to join
 if you want/have time.
 
 
 
 
 On 9/4/2015, at 22:43, Howard, Victor victor_how...@cable.comcast.com 
 mailto:victor_how...@cable.comcast.com wrote:
 
 I prefer Timeslot B, thanks for coordinating.  I would be interested in 
 helping out in any way with the design session let me know!
 
 From: Sandhya Dasu (sadasu) sad...@cisco.com mailto:sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 mailto:openstack-dev@lists.openstack.org
 Date: Tuesday, April 7, 2015 12:19 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 mailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting
 
 Hi Miguel,
 Both time slots work for me. Thanks for rekindling this effort.
 
 Thanks,
 Sandhya
 
 From: Miguel Ángel Ajo majop...@redhat.com mailto:majop...@redhat.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 mailto:openstack-dev@lists.openstack.org
 Date: Tuesday, April 7, 2015 1:45 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 mailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting
 
 On Tuesday, 7 de April de 2015 at 3:14, Kyle Mestery wrote:
 On Mon, Apr 6, 2015 at 6:04 PM, Salvatore Orlando sorla...@nicira.com 
 mailto:sorla...@nicira.com wrote:
 
 
 On 7 April 2015 at 00:33, Armando M. arma...@gmail.com 
 mailto:arma...@gmail.com wrote:
 
 On 6 April 2015 at 08:56, Miguel Ángel Ajo majop...@redhat.com 
 mailto:majop...@redhat.com wrote:
 I’d like to co-organize a QoS weekly meeting with Sean M. Collins,
 
 In the last few years, the interest for QoS support has increased, 
 Sean has been leading
 this effort [1] and we believe we should get into a consensus about 
 how to model an extension
 to let vendor plugins implement QoS capabilities on network ports and 
 tenant networks, and
 how to extend agents, and the reference implementation & others [2]
 
 As you surely know, so far every attempt to achieve a consensus has 
 failed in a pretty miserable way.
 This is mostly because QoS can be interpreted in a lot of different ways, 
 both from the conceptual and practical perspective.
 Yes, I’m fully aware of it, it was also a new feature, so it was out of 
 scope for Kilo. 
 It is important in 

Re: [openstack-dev] [TripleO] Alternate meeting time

2015-04-15 Thread Giulio Fidente

On 04/15/2015 10:46 AM, marios wrote:

On 15/04/15 00:13, James Slagle wrote:

Hi, TripleO currently has an alternate meeting time scheduled for
Wednesdays at 08:00 UTC. The alternate meeting actually hasn't
happened the last 4 occurrences that I know of [2].

Do we still need the alternate meeting time slot? I'd like to
accommodate as many people as possible to be able to attend one of
our two meeting times. The last time this came up, we tracked people's
opinions in an etherpad[3], and a doodle, which has since expired.
Maybe a good first step would be to just update your preferences in
the etherpad, so we can start to see if there's a larger group of
people we can accommodate at the alternate meeting time.


I don't think we need the alternate slot.


I added myself in the etherpad to the list of people who can stick with 
the single primary time slot as well.

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Upgrade tarball retirement path

2015-04-15 Thread Dmitry Pyzhov
Guys,

TL;DR: There will be an upgrade tarball in 6.1, but it will not require any
data from the 6.0.x branch. And there will be no upgrade tarball starting from
7.0.

Looks like we don't need the upgrade tarball any more. It is a big relief for
our build team, because GNU make is not intended to be used for this kind
of thing. So:

1) We should remove the upgrade tarball and create an upgrade script
instead. This script will get new packages from upstream repos and do all
the work.
2) We are not ready to remove upgrade tarball in 6.1. We are too close to
the release and it will be too risky to deal with all the last minute bugs
after such big change.
3) We will get rid of diff repos in 6.1. They are useless for Ubuntu, because
we've updated from 12.04 to 14.04 and a lot of packages have changed. Diff
repos save only about 300MB while producing a lot of extra work for the build
and upgrade procedures in 6.1. We will deprecate them.
4) The 6.0.1 release will not be available by the HCF for 6.1, and our old
version of patching is deprecated, so we don't need to ship any 6.0.x data
with the upgrade tarball for 6.1.

Any questions, comments, objections?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ops][rally][announce] What's new in Rally v0.0.3

2015-04-15 Thread Boris Pavlovic
Hello,

Rally team is happy to say that we cut new release 0.0.3.


*Release stats:*

+---------------+-------------+
| Commits       | 53          |
+---------------+-------------+
| Bug fixes     | 14          |
+---------------+-------------+
| Dev cycle     | 33 days     |
+---------------+-------------+
| Release date  | 14/Apr/2015 |
+---------------+-------------+
| New scenarios | 11          |
+---------------+-------------+
| New SLAs      | 2           |
+---------------+-------------+


*New features:*


   - Add the ability to specify versions for clients in benchmark scenarios.
   You can call self.clients("glance", "2") and get a client initialized for a
   specific API version.

   - Add an API for Tempest uninstall:

   $ rally-manage tempest uninstall
   # fully removes Tempest for the active deployment

   - Add a --uuids-only option to rally task list:

   $ rally task list --uuids-only  # returns a list with only task uuids

   - Add endpoint to --fromenv deployment creation:

   $ rally deployment create --fromenv
   # recognizes the standard OS_ENDPOINT environment variable

   - Configure SSL per deployment.

   SSL information is now deployment-specific, not Rally-specific, and the
   rally.conf option is deprecated. Take a look at the sample.
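As an illustration of the versioned-clients call in the first bullet, here is 
a tiny stand-in that mimics the self.clients(service, version) signature. It 
is not Rally's implementation — the class, factory mapping, and returned 
strings below are made up for the sketch:

```python
class ScenarioClients:
    """Toy version of the clients lookup a Rally scenario sees."""

    def __init__(self, factories):
        # factories: {(service, version): callable}; version None is default.
        self._factories = factories

    def __call__(self, service, version=None):
        try:
            return self._factories[(service, version)]()
        except KeyError:
            raise ValueError("no client factory for %s v%s" % (service, version))

clients = ScenarioClients({
    ("glance", "2"): lambda: "glanceclient-v2",
    ("glance", None): lambda: "glanceclient-default",
})
print(clients("glance", "2"))  # glanceclient-v2
print(clients("glance"))       # glanceclient-default
```

Inside a real scenario the equivalent calls would be self.clients("glance", "2") 
and self.clients("glance").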


For more details take a look at release notes:

http://boris-42.me/rally-v0-0-3-whats-new/

or here

https://rally.readthedocs.org/en/latest/release_notes/v0.0.3.html


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Jeremy Stanley
On 2015-04-15 11:53:28 +0200 (+0200), Flavio Percoco wrote:
[...]
 When I proposed removing the GridFS driver from glance_store, I asked
 for feedback in other mailing lists and then came back here proposing
 the removal.

Got it--so the recommendation is to not ask the developer community
until the operator community has been polled for input first. I
suppose I can see the logic there. We do already get plenty of
E-mail on the dev ML.

 The point I tried to make in my previous email is that, whenever we
 propose removing something important - like support for a broker - the
 broader the audience we try to get feedback from is, the better. You
 can argue saying that it's very unlikely that there are ops in the
 OpenStack General mailing list that are not in the ops m-l, but we
 don't know that.
[...]

On the other hand, limiting these sorts of discussions to only the
most appropriate venues encourages those who didn't get a say in the
discussion to join the mailing lists where they take place so that
they can participate more effectively in the future.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

2015-04-15 Thread Jay Lau
Thanks Andrew and Fenghua. I see. The current docker-swarm template only
uses heat to create a swarm cluster; I see you have many bps related to
this. ;-)

2015-04-15 23:13 GMT+08:00 FangFenghua fang_feng...@hotmail.com:

 For a docker-swarm bay, I think the bay looks like a big machine with a
 docker daemon.
 We can create containers in it.
 --
 From: andrew.mel...@rackspace.com
 To: openstack-dev@lists.openstack.org
 Date: Wed, 15 Apr 2015 14:53:39 +
 Subject: Re: [openstack-dev] [magnum] How to use docker-swarm bay in magnum


 Hi Jay,



 Magnum Bays do not currently use the docker-swarm template. I'm working on
 a patch to add support for the docker-swarm template. That is going to
 require a new TemplateDefinition, and potentially some new config options
 and/or Bay/BayModel parameters. After that, the Docker container conductor
 will need to be updated to pull its connection string from the Bay instead
 of the



 To answer your main question though, the idea is that once users can build
 a docker-swarm bay, they will use the container endpoints of our API to
 interact with the bay.



 --Andrew

  --
 *From:* Jay Lau jay.lau@gmail.com
 *Sent:* Tuesday, April 14, 2015 5:33 AM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [magnum] How to use docker-swarm bay in magnum

   Greetings,

  Currently, there is a docker-swarm bay in magnum, but the problem is
 after this swarm bay was created, how to let user use this bay? Still using
 swarm CLI? The magnum do not have API/CLI to interact with swarm bay now.

 --
   Thanks,

  Jay Lau (Guangya Liu)


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][bay] Make Apache mesos a container backend of magnum

2015-04-15 Thread FangFenghua
FRI

From: fang_feng...@hotmail.com
To: openstack-dev@lists.openstack.org
Date: Wed, 15 Apr 2015 14:29:27 +
Subject: [openstack-dev] [magnum]




Apache Mesos may be a choice as a Magnum container backend. It now natively 
supports Docker containers. I think a Mesos bay in Magnum would be very cool.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-15 Thread Veiga, Anthony
On Apr 15, 2015, at 10:00 , Miguel Angel Ajo Pelayo mangel...@redhat.com 
wrote:
 
 Ok,
 
 1) #openstack-meeting-2 doesn’t exist (-alt is it)
 
 2) and not only that we’re colliding the TWG meeting,
 but all the meeting rooms starting at UTC 14:30 are busy.

While not preferable, I don’t mind overlapping that meeting. I can be in both 
places.

 
 3) If we move -30m (UTC 13:30) then we could use meeting room
 #openstack-meeting-3  
 
  before the neutron drivers meeting, and removing some overlap
 with the TWG meeting.
 
 But I know it’s an awful time (yet more) for anyone in the USA west coast.
 
 What do you think?

This time is fine for me, but I’m EDT so it’s normal business hours here.

 
 #openstack-meeting-3 @ UTC 13:30 sounds good for everybody, or should we 
 propose some
 other timeslot?
 
 What a wonderful meeting organizer I am… :/

You’re doing fine! It’s an international organization.  It is by definition 
impossible to select a timeslot that’s perfect for everyone.

 
 Best,
 Miguel Ángel
 
 Unless we’re able to live with 30min, we may need to move the meeting 

-Anthony


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-04-15 Thread Dmitry Pyzhov
FYI. We are going to disable Multi-node mode on UI even in experimental
mode. And we will remove related code from nailgun in 7.0.
https://bugs.launchpad.net/fuel/+bug/1428054

On Fri, Jan 30, 2015 at 1:39 PM, Aleksandr Didenko adide...@mirantis.com
wrote:

 What do you guys think about switching CentOS CI job [1] to HA with single
 controller (1 controller + 1 or 2 computes)? Just to verify that our
 replacement of Simple mode works fine.

 [1]
 https://fuel-jenkins.mirantis.com/job/master.fuel-library.centos.ha_nova_vlan/

 On Fri, Jan 30, 2015 at 10:54 AM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Thanks Igor for the quick turn over, excellent!

 On Fri, Jan 30, 2015 at 1:19 AM, Igor Belikov ibeli...@mirantis.com
 wrote:

 Folks,

 Changes in CI jobs have been made, for master branch of fuel-library we
 are running CentOS HA + Nova VLAN and Ubuntu HA + Neutron VLAN .
 Job naming schema has also been changed, so now it includes actual
 testgroup. Current links for master branch CI jobs are [1] and [2], all
 other jobs can be found here[3] or will show up in your gerrit reviews.
 ISO and environments have been updated to the latest ones.

 [1]
 https://fuel-jenkins.mirantis.com/job/master.fuel-library.centos.ha_nova_vlan/
 [2]
 https://fuel-jenkins.mirantis.com/job/master.fuel-library.ubuntu.ha_neutron_vlan/
 [3]https://fuel-jenkins.mirantis.com
 --
 Igor Belikov
 Fuel DevOps
 ibeli...@mirantis.com





 On 29 Jan 2015, at 13:42, Aleksandr Didenko adide...@mirantis.com
 wrote:

 Mike,

  Any objections / additional suggestions?

 no objections from me, and it's already covered by LP 1415116 bug [1]

 [1] https://bugs.launchpad.net/fuel/+bug/1415116

 On Wed, Jan 28, 2015 at 6:42 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Folks,
 one of the things we should not forget about - is out Fuel CI gating
 jobs/tests. [1], [2].

 One of them actually runs simple mode. Unfortunately, I don't see
 details about the tests run for [1], [2], but I'm pretty sure it's the same
 set as [3], [4].

 I suggest changing the tests. First of all, we need to get rid of simple
 runs (since we are deprecating it), and second - I'd like us to run Ubuntu
 HA + Neutron VLAN for one of the tests.

 Any objections / additional suggestions?

 [1]
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_centos/
 [2]
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_ubuntu/
 [3]
 https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_centos/
 [4]
 https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_ubuntu/

 On Wed, Jan 28, 2015 at 2:28 PM, Sergey Vasilenko 
 svasile...@mirantis.com wrote:

 +1 to replace simple to HA with one controller

 /sv






 --
 Mike Scherbakov
 #mihgen














 --
 Mike Scherbakov
 #mihgen







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] naming of the project

2015-04-15 Thread Gui Maluf
if Puppet isn't possible, *Silhouette* looks very charming to me :)

On Wed, Apr 15, 2015 at 10:44 AM, Richard Raseley rich...@raseley.com
wrote:

 Emilien Macchi wrote:

 Hi all,

 I sent a patch to openstack/governance to move our project under the big
 tent, and it came up [1] that we should decide of a project name and be
 careful about trademarks issues with Puppet name.

 I would like to hear from Puppetlabs if there is any issue to use Puppet
 in the project title; also, I open a new etherpad so people can suggest
 some names: https://etherpad.openstack.org/p/puppet-openstack-naming

 Thanks,

 [1] https://review.openstack.org/#/c/172112/1/reference/projects.yaml,cm


 Emilien,

 Thank you for driving this conversation. I can forward this on to people
 internally to find out if there are any issues with using the Puppet name.

 Regards,

 Richard Raseley

 SysOps Engineer
 Puppet Labs





-- 
*guilherme* \n
\t *maluf*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] livemigration failed due to invalid of cpuset

2015-04-15 Thread Chris Friesen

On 04/15/2015 08:22 AM, Qiao, Liyong wrote:

Hi all

Live migrating an instance will fail due to an invalid cpuset; more details can 
be found in this bug [1]


I actually reported the same issue back in February:
https://bugs.launchpad.net/nova/+bug/1417667

Chris



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: Migration/Evacuation of instance on desired host

2015-04-15 Thread Chris Friesen

On 04/15/2015 03:22 AM, Akshik dbk wrote:

Hi,

would like to know if scheduler filters are considered during instance
migration/evacuation.


If you migrate or evacuate without specifying a destination then the scheduler 
filters will be considered.


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Group-Based-Policy] Fixing backward incompatible unnamed constraints removal

2015-04-15 Thread Robert Kukura
I believe that, on the stable branch at least, we need to fix the 
migrations so that upgrades are possible. This probably means fixing 
them the same way on the master branch first and backporting the fixes 
to stable/juno. All migrations that were present in the initial juno 
release need to be restored to the exact state they were in that 
release, and new migrations need to be added that make the needed schema 
changes, preserving state of existing deployments. I'm assuming there is 
more involved than just the constraint removal in Ivar's [2], but 
haven't checked yet. I think it would be OK to splice these new 
migrations into the chain on master just after the final migration that 
was present in the juno release, since we are not trying to support 
trunk chasers on master. Does this make sense? I do not think it should 
be difficult, unless schema changes were introduced for which deployment 
state cannot be preserved/defaulted.
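For illustration, the tricky step in such a fix is discovering the 
backend-generated name of an unnamed constraint before the new migration can 
drop it. A minimal sketch of that lookup — the table and constraint names 
below are hypothetical, not taken from the GBP tree:

```python
def find_fk_name(foreign_keys, referred_table):
    """Return the backend-generated name of the foreign key that points at
    referred_table, or None when no such FK exists."""
    for fk in foreign_keys:
        if fk['referred_table'] == referred_table:
            return fk['name']
    return None

# In a real Alembic migration the list would come from reflecting the live
# schema, e.g.:
#   sqlalchemy.inspect(op.get_bind()).get_foreign_keys('gp_policy_rules')
fks = [{'name': 'gp_policy_rules_ibfk_1',
        'referred_table': 'gp_policy_actions'}]
print(find_fk_name(fks, 'gp_policy_actions'))  # gp_policy_rules_ibfk_1
```

The new migration would then pass the reflected name to op.drop_constraint(), 
which is what makes the operation work against existing deployments where the 
backend chose the name.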


-Bob

On 4/15/15 3:30 AM, Sumit Naiksatam wrote:

Thanks Ivar for tracking this and bringing it up for discussion. I am
good with taking approach (1).



On Mon, Apr 13, 2015 at 1:10 PM, Ivar Lazzaro ivarlazz...@gmail.com wrote:

Hello Team,

As per discussion in the latest GBP meeting [0] I'm hunting down all the
backward incompatible changes made on DB migrations regarding the removal of
unnamed constraints.
In this report [1] you can find the list of affected commits.

The problem is that some of the affected commits are already backported to
Juno, and others will be [2], so I was wondering what's the plan regarding
how we want to backport the compatibility fix to stable/juno.
I see two possibilities:

1) We backport [2] as is (with the broken migration), but we cut the new
stable release only once [3] is merged and back ported. This has the
advantage of having a cleaner backport tree in which all the changes in
master are cherry-picked without major changes.

2) We split [3] in multiple patches, and we only backport those that fix
commits that are already in Juno. Patches like [2] will be changed to
accommodate the fixed migration *before* being merged into the stable branch.
This will avoid intra-release code breakage (which is an issue for people
installing GBP directly from code).

Please share your thoughts, Thanks,
Ivar.

[0]
http://eavesdrop.openstack.org/meetings/networking_policy/2015/networking_policy.2015-04-09-18.00.log.txt
[1] https://bugs.launchpad.net/group-based-policy/+bug/1443606
[2] https://review.openstack.org/#/c/170972/
[3] https://review.openstack.org/#/c/173051/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

2015-04-15 Thread Andrew Melton
Hi Jay,


Magnum Bays do not currently use the docker-swarm template. I'm working on a 
patch to add support for the docker-swarm template. That is going to require a 
new TemplateDefinition, and potentially some new config options and/or 
Bay/BayModel parameters. After that, the Docker container conductor will need 
to be updated to pull its connection string from the Bay instead of the


To answer your main question though, the idea is that once users can build a 
docker-swarm bay, they will use the container endpoints of our API to interact 
with the bay.


--Andrew


From: Jay Lau jay.lau@gmail.com
Sent: Tuesday, April 14, 2015 5:33 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

Greetings,

Currently, there is a docker-swarm bay in magnum, but the problem is after this 
swarm bay was created, how to let user use this bay? Still using swarm CLI? The 
magnum do not have API/CLI to interact with swarm bay now.

--
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

2015-04-15 Thread FangFenghua
For a docker-swarm bay, I think the bay looks like a big machine with a 
docker daemon. We can create containers in it.
From: andrew.mel...@rackspace.com
To: openstack-dev@lists.openstack.org
Date: Wed, 15 Apr 2015 14:53:39 +
Subject: Re: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

Hi Jay,

Magnum Bays do not currently use the docker-swarm template. I'm working on a 
patch to add support for the docker-swarm template. That is going to require a 
new TemplateDefinition, and potentially some new config options and/or 
Bay/BayModel parameters. After that, the Docker container conductor will need 
to be updated to pull its connection string from the Bay instead of the 

To answer your main question though, the idea is that once users can build a 
docker-swarm bay, they will use the container endpoints of our API to interact 
with the bay.

--Andrew

From: Jay Lau jay.lau@gmail.com
Sent: Tuesday, April 14, 2015 5:33 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

Greetings,

Currently, there is a docker-swarm bay in magnum, but the problem is after this 
swarm bay was created, how to let user use this bay? Still using swarm CLI? The 
magnum do not have API/CLI to interact with swarm bay now.

-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

2015-04-15 Thread FangFenghua
Maybe we can add an object mapping to docker-compose.

Date: Wed, 15 Apr 2015 23:32:01 +0800
From: jay.lau@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

Thanks Andrew and Fenghua. I see. The current docker-swarm template only uses 
heat to create a swarm cluster; I see you have many bps related to this. ;-)

2015-04-15 23:13 GMT+08:00 FangFenghua fang_feng...@hotmail.com:

For a docker-swarm bay, I think the bay looks like a big machine with a 
docker daemon. We can create containers in it.

From: andrew.mel...@rackspace.com
To: openstack-dev@lists.openstack.org
Date: Wed, 15 Apr 2015 14:53:39 +
Subject: Re: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

Hi Jay,

Magnum Bays do not currently use the docker-swarm template. I'm working on a 
patch to add support for the docker-swarm template. That is going to require a 
new TemplateDefinition, and potentially some new config options and/or 
Bay/BayModel parameters. After that, the Docker container conductor will need 
to be updated to pull its connection string from the Bay instead of the 

To answer your main question though, the idea is that once users can build a 
docker-swarm bay, they will use the container endpoints of our API to interact 
with the bay.

--Andrew

From: Jay Lau jay.lau@gmail.com
Sent: Tuesday, April 14, 2015 5:33 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

Greetings,

Currently, there is a docker-swarm bay in magnum, but the problem is after this 
swarm bay was created, how to let user use this bay? Still using swarm CLI? The 
magnum do not have API/CLI to interact with swarm bay now.

-- 
Thanks,

Jay Lau (Guangya Liu)

-- 
Thanks,

Jay Lau (Guangya Liu)


[openstack-dev] [nova][libvirt] livemigration failed due to invalid of cpuset

2015-04-15 Thread Qiao, Liyong
Hi all
Live migration an instance will fail due to invalid cpuset, more detail can be 
find in this bug[1]
this exception is raised by python-libvirt's migrateToURI2/ migrateToURI. I'd 
like to get your idea on this:

1. Disable live migration and raise the exception early, since migrateToURI2/ 
migrateToURI treat this as an error.
2. Manually check the cpuset in the instance's domain XML (maybe also changing 
the instance's numa_topology); this feels like a hack?
3. Fix migrateToURI2/migrateToURI so that libvirt can live-migrate such an 
instance.


In my opinion, option 2 is better and more reasonable, but I don't know 
whether it is feasible to make that change.

[1] https://launchpad.net/bugs/1440981
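A minimal sketch of what the pre-check in option 2 could look like, in Python. The helper name `cpuset_fits` and the assumption that the destination host's CPU count is known up front are for illustration only, not existing Nova code, and the parser ignores the '^' exclusion syntax libvirt also allows in cpuset values:

```python
import xml.etree.ElementTree as ET

def cpuset_fits(domain_xml, dest_cpu_count):
    # Hypothetical pre-check: parse the <vcpu cpuset=...> pinning from the
    # domain XML and verify every pinned CPU index exists on the destination.
    root = ET.fromstring(domain_xml)
    vcpu = root.find("vcpu")
    cpuset = vcpu.get("cpuset") if vcpu is not None else None
    if not cpuset:
        return True  # no pinning, nothing to validate
    pinned = set()
    for part in cpuset.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            pinned.update(range(int(lo), int(hi) + 1))
        else:
            pinned.add(int(part))
    return max(pinned) < dest_cpu_count

xml_doc = "<domain><vcpu cpuset='0-3,8'>4</vcpu></domain>"
print(cpuset_fits(xml_doc, 16))  # True: CPUs 0-3 and 8 all exist
print(cpuset_fits(xml_doc, 8))   # False: CPU 8 is out of range
```

If the check fails, the caller could either rebuild the pinning against the destination's topology (the numa_topology change mentioned above) or refuse the migration with a clear error instead of letting migrateToURI fail mid-flight.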

I'd like to hear your suggestions, thanks
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] where is the api to fetch mysql log?

2015-04-15 Thread Li Tianqing
Hi, 
   all, I know kilo-rc1 is released. I found an introduction to the new 
features here:
http://www.slideshare.net/openstack/trove-juno-to-kilo
   It says that we can fetch the MySQL error log, but when I search the source 
code on the master branch I cannot find the API. Can someone help me?




--

Best
Li Tianqing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum]

2015-04-15 Thread FangFenghua
Apache Mesos may be a good choice for a Magnum container backend. It now 
natively supports Docker containers. I think a Mesos bay in Magnum would be 
very cool.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-15 Thread Miguel Angel Ajo Pelayo
Ok, during today's preliminary meeting we talked about moving to 
#openstack-meeting-3,

and we're open to moving the meeting 30 minutes earlier if that's OK for 
everybody, so that it only partly overlaps with the TWG;
yet we could stay at 14:00 UTC for now.

I have updated both wikis to reflect the meeting room change (to an existing 
one… ‘:D )

minutes of this preliminary meeting can be found here:

http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-04-15-14.07.html

Best,
Miguel Ángel



 On 15/4/2015, at 16:32, Veiga, Anthony anthony_ve...@cable.comcast.com 
 wrote:
 
 On Apr 15, 2015, at 10:00 , Miguel Angel Ajo Pelayo mangel...@redhat.com 
 wrote:
 
 Ok,
 
 1) #openstack-meeting-2 doesn’t exist (-alt is it)
 
 2) and not only that we’re colliding the TWG meeting,
but all the meeting rooms starting at UTC 14:30 are busy.
 
 While not preferable, I don’t mind overlapping that meeting. I can be in both 
 places.
 
 
 3) If we move -30m (UTC 13:30) then we could use meeting room
#openstack-meeting-3  
 
 before the neutron drivers meeting, and removing some overlap
 with the TGW meeting.
 
 But I know it’s an awful time (yet more) for anyone in the USA west coast.
 
 What do you think?
 
 This time is fine for me, but I’m EDT so it’s normal business hours here.
 
 
 #openstack-meeting-3 @ UTC 13:30 sounds good for everybody, or should we 
 propose some
 other timeslot?
 
 What a wonderful meeting organizer I am… :/
 
 You’re doing fine! It’s an international organization.  It is by definition 
 impossible to select a timeslot that’s perfect for everyone.
 
 
 Best,
 Miguel Ángel
 
 Unless we’re able to live with 30min, we may need to move the meeting 
 
 -Anthony
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Miguel Angel Ajo




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-04-15 Thread Tomasz Napierala
Do you mean single node?

 On 15 Apr 2015, at 17:04, Dmitry Pyzhov dpyz...@mirantis.com wrote:
 
 FYI. We are going to disable Multi-node mode on UI even in experimental mode. 
 And we will remove related code from nailgun in 7.0.
 https://bugs.launchpad.net/fuel/+bug/1428054
 
 On Fri, Jan 30, 2015 at 1:39 PM, Aleksandr Didenko adide...@mirantis.com 
 wrote:
 What do you guys think about switching CentOS CI job [1] to HA with single 
 controller (1 controller + 1 or 2 computes)? Just to verify that our 
 replacement of Simple mode works fine.
 
 [1] 
 https://fuel-jenkins.mirantis.com/job/master.fuel-library.centos.ha_nova_vlan/
 
 On Fri, Jan 30, 2015 at 10:54 AM, Mike Scherbakov mscherba...@mirantis.com 
 wrote:
 Thanks Igor for the quick turn over, excellent!
 
 On Fri, Jan 30, 2015 at 1:19 AM, Igor Belikov ibeli...@mirantis.com wrote:
 Folks,
 
 Changes in CI jobs have been made, for master branch of fuel-library we are 
 running CentOS HA + Nova VLAN and Ubuntu HA + Neutron VLAN .
 Job naming schema has also been changed, so now it includes actual testgroup. 
 Current links for master branch CI jobs are [1] and [2], all other jobs can 
 be found here[3] or will show up in your gerrit reviews.
 ISO and environments have been updated to the latest ones.
 
 [1]https://fuel-jenkins.mirantis.com/job/master.fuel-library.centos.ha_nova_vlan/
 [2]https://fuel-jenkins.mirantis.com/job/master.fuel-library.ubuntu.ha_neutron_vlan/
 [3]https://fuel-jenkins.mirantis.com
 --
 Igor Belikov
 Fuel DevOps
 ibeli...@mirantis.com
 
 
 
 
 
 On 29 Jan 2015, at 13:42, Aleksandr Didenko adide...@mirantis.com wrote:
 
 Mike,
 
  Any objections / additional suggestions?
 
 no objections from me, and it's already covered by LP 1415116 bug [1]
 
 [1] https://bugs.launchpad.net/fuel/+bug/1415116
 
 On Wed, Jan 28, 2015 at 6:42 PM, Mike Scherbakov mscherba...@mirantis.com 
 wrote:
 Folks,
 one of the things we should not forget about is our Fuel CI gating 
 jobs/tests. [1], [2].
 
 One of them actually runs simple mode. Unfortunately, I don't see details 
 about the tests run for [1], [2], but I'm pretty sure it's the same set as [3], [4].
 
 I suggest to change tests. First of all, we need to get rid of simple runs 
 (since we are deprecating it), and second - I'd like us to run Ubuntu HA + 
 Neutron VLAN for one of the tests.
 
 Any objections / additional suggestions?
 
 [1] 
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_centos/
 [2] 
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_ubuntu/
 [3] https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_centos/
 [4] https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_ubuntu/
 
 On Wed, Jan 28, 2015 at 2:28 PM, Sergey Vasilenko svasile...@mirantis.com 
 wrote:
 +1 to replace simple to HA with one controller
 
 /sv
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Mike Scherbakov
 #mihgen
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Mike Scherbakov
 #mihgen
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Tomasz 'Zen' Napierala
Product Engineering - Poland








Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Joshua Harlow

Flavio Percoco wrote:

On 14/04/15 19:54 -0400, Sean Dague wrote:

On 04/14/2015 07:26 PM, Flavio Percoco wrote:

On 14/04/15 23:18 +, Jeremy Stanley wrote:

On 2015-04-15 01:10:03 +0200 (+0200), Flavio Percoco wrote:
[...]

I'd recommend sending this email to the ops mailing list


And I'd recommend subscribing to it... it's really quite good! He
did (twice apparently, I expect the second by mistake):

http://lists.openstack.org/pipermail/openstack-operators/2015-April/006735.html




It'd have been useful to have this linked in this thread...




and the users mailing list too.

[...]

The general mailing list seems a little less focused on this sort of
thing, but I suppose it can't hurt.


I disagree, they are still users and we get feedback from them.


There is a problem with sending out an "is anyone using this?" email and
deciding whether or not to do this based on that. You're always going to
find a few voices that pop up.

We've gotten a ton of feedback from operators, both via survey, and
meetups. And the answer is that they are all running Rabbit. Many have
tried to run one of the other backends because of Rabbit bugs, and have
largely found them worse, and moved back.

The operator community has gathered around this backend. Even though
it's got its issues, there are best practices that people have come to
develop in dealing with them. Making this pluggable doesn't provide a
service to our users, because it doesn't make it clear that there is 1
backend you'll get help from others with, and the rest, well you are
pretty much on your own, good luck, and you get to keep all the parts.
Writing a seemingly correct driver for oslo.messaging doesn't mean
that it's seen the kind of field abuse that's really needed to work out
where the hard bugs are.

It's time to be honest about the level of support that comes with those
other backends, deprecate the plugability, and move on to more
interesting problems. We do have plenty of them to solve. :) Perhaps in
doing so we could get a better Rabbit implementation and make life
easier for everyone.


The only reason I proposed to move it in a separate repo is to provide
sort of a deprecation path that won't block our work towards Py3K. In
the stripped part of my previous email I also mentioned that we could
mark it as deprecated to make clear what our intentions going forward
are.

I don't agree that just killing it is the right thing to do here.
Doing this will give us a bit more work since we'll have to go
through the repo creation process, but at least we don't risk being
blamed for killing people's deployments out of the blue.


If it isn't working in the gate and/or maintained, exactly whose 
deployments are working (for some definition of working) well enough to 
'kill'? It seems like it has to work at least in the gate for people to 
have working deployments at all; otherwise they likely don't have 
deployments that could be 'killed' in the first place, right?


/me not saying that we should 'kill' it as the best way, just if it 
doesn't work then it doesn't seem to do much harm to 'kill' it...




Cheers,
Flavio

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-04-14 16:54:30 -0700:
 On 04/14/2015 07:26 PM, Flavio Percoco wrote:
  On 14/04/15 23:18 +, Jeremy Stanley wrote:
  On 2015-04-15 01:10:03 +0200 (+0200), Flavio Percoco wrote:
  [...]
  I'd recommend sending this email to the ops mailing list
 
  And I'd recommend subscribing to it... it's really quite good! He
  did (twice apparently, I expect the second by mistake):
 
  http://lists.openstack.org/pipermail/openstack-operators/2015-April/006735.html
 
  
  It'd have been useful to have this linked in this thread...
  
 
  and the users mailing list too.
  [...]
 
  The general mailing list seems a little less focused on this sort of
  thing, but I suppose it can't hurt.
  
  I disagree, they are still users and we get feedback from them.
 
 There is a problem with sending out an "is anyone using this?" email and
 deciding whether or not to do this based on that. You're always going to
 find a few voices that pop up.
 
 We've gotten a ton of feedback from operators, both via survey, and
 meetups. And the answer is that they are all running Rabbit. Many have
 tried to run one of the other backends because of Rabbit bugs, and have
 largely found them worse, and moved back.
 
 The operator community has gathered around this backend. Even though
 it's got its issues, there are best practices that people have come to
 develop in dealing with them. Making this pluggable doesn't provide a
 service to our users, because it doesn't make it clear that there is 1
 backend you'll get help from others with, and the rest, well you are
 pretty much on your own, good luck, and you get to keep all the parts.
 Writing a seemingly correct driver for oslo.messaging doesn't mean
 that it's seen the kind of field abuse that's really needed to work out
 where the hard bugs are.
 
 It's time to be honest about the level of support that comes with those
 other backends, deprecate the plugability, and move on to more
 interesting problems. We do have plenty of them to solve. :) Perhaps in
 doing so we could get a better Rabbit implementation and make life
 easier for everyone.
 

I think you're right about most of this, so +1*

*I want to suggest that having this pluggable isn't the problem. Merging
drivers without integration testing and knowledgeable resources from
interested parties is the problem. If there isn't a well defined gate
test, and a team of people willing to respond to any and all issues with
that infrastructure committed, then the driver should not be shipped
with oslo.messaging.

We've been through this already with the virt drivers and
databases. Having the ability to move into a space that a different
backend serves well is a good feature. We *will* hit the limits of
Rabbit. Encouraging users to submit every possible backend has costs
though. This neighborhood has been gentrified, and it's time to evict
anyone not willing or able to pay the rent.

This thread has convinced me that the right path is to make an
announcement, and deprecate the QPID driver as of the next release of
oslo.messaging. We can always reverse that decision if users actually
show up. Then the usual 2 cycle dance and we're relieved of, oddly
enough, 666 SLOC:

$ radon raw oslo_messaging/_drivers/impl_qpid.py
oslo_messaging/_drivers/impl_qpid.py
LOC: 794
LLOC: 430
SLOC: 666
Comments: 50
Multi: 72
Blank: 128
- Comment Stats
(C % L): 6%
(C % S): 8%
(C + M % L): 15%

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Logo for Octavia project

2015-04-15 Thread Trevor Vardeman
I have a couple proposals done up on paper that I'll have available
shortly, I'll reply with a link.

 - Trevor J. Vardeman
 - trevor.varde...@rackspace.com
 - (210) 312 - 4606




On 4/14/15, 5:34 PM, Eichberger, German german.eichber...@hp.com wrote:

All,

Let's decide on a logo tomorrow so we can print stickers in time for
Vancouver. Here are some designs to consider:
http://bit.ly/Octavia_logo_vote

We will discuss more at tomorrow's meeting - Agenda:
https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Meeting_2015
-04-15 - but please come prepared with one of your favorite designs...

Thanks,
German

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-04-15 Thread Igor Kalnitsky
Tomasz, multi-node mode is a legacy non-HA mode with only 1
controller. Currently, our so-called HA mode supports deployment with 1
controller, so it makes no sense to support both modes.

On Wed, Apr 15, 2015 at 6:38 PM, Tomasz Napierala
tnapier...@mirantis.com wrote:
 Do you mean single node?

 On 15 Apr 2015, at 17:04, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 FYI. We are going to disable Multi-node mode on UI even in experimental 
 mode. And we will remove related code from nailgun in 7.0.
 https://bugs.launchpad.net/fuel/+bug/1428054


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-15 Thread Joshua Harlow

Neil Jerram wrote:

Hi again Joe, (+ list)

On 11/04/15 02:00, joehuang wrote:

Hi, Neil,

See inline comments.

Best Regards

Chaoyi Huang


From: Neil Jerram [neil.jer...@metaswitch.com]
Sent: 09 April 2015 23:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?

Hi Joe,

Many thanks for your reply!

On 09/04/15 03:34, joehuang wrote:

Hi, Neil,

In theory, Neutron is like a broadcast domain: for example,
enforcement of DVR and security groups has to touch every host
where a VM of the project resides. Even with an SDN
controller, touching those hosts is inevitable. If there are
many physical hosts, for example 10k, inside one Neutron, it's
very hard to overcome the broadcast storm issue under concurrent
operation; that's the bottleneck for the scalability of Neutron.


I think I understand that in general terms - but can you be more
specific about the broadcast storm? Is there one particular message
exchange that involves broadcasting? Is it only from the server to
agents, or are there 'broadcasts' in other directions as well?

[[joehuang]] For example, L2 population, security group rule updates,
DVR route updates. Both directions, in different scenarios.


Thanks. In case it's helpful to see all the cases together, sync_routers
(from the L3 agent) was also mentioned in other part of this thread.
Plus of course the liveness reporting from all agents.


(I presume you are talking about control plane messages here, i.e.
between Neutron components. Is that right? Obviously there can also be
broadcast storm problems in the data plane - but I don't think that's
what you are talking about here.)

[[joehuang]] Yes, controll plane here.


Thanks for confirming that.


We need layered architecture in Neutron to solve the broadcast
domain bottleneck of scalability. The test report from OpenStack
cascading shows that through layered architecture Neutron
cascading, Neutron can supports up to million level ports and 100k
level physical hosts. You can find the report here:
http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers



Many thanks, I will take a look at this.


It was very interesting, thanks. And by following through your links I
also learned more about Nova cells, and about how some people question
whether we need any kind of partitioning at all, and should instead
solve scaling/performance problems in other ways... It will be
interesting to see how this plays out.

I'd still like to see more information, though, about how far people
have scaled OpenStack - and in particular Neutron - as it exists today.
Surely having a consensus set of current limits is an important input
into any discussion of future scaling work.


+2 to this...

Shooting for the moon (although nice in theory) is not so useful when 
you can't even get up a hill ;)




For example, Kevin mentioned benchmarking where the Neutron server
processed a liveness update in 50ms and a sync_routers in 300ms.
Suppose the liveness update time was 50ms (since I don't know in detail
what that means) and agents report liveness every 30s. Does that mean
that a single Neutron server can only support 600 agents?
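A quick back-of-envelope sketch of the arithmetic behind that 600-agent figure, assuming (as the question does) a single-threaded server that spends the full 50ms on each report and does nothing else:

```python
# If each liveness update occupies the server for 50 ms and every agent
# reports once per 30 s, the server saturates when the reports alone
# fill all of its time:
report_cost_s = 0.050      # server time consumed per liveness update
report_interval_s = 30.0   # seconds between reports from each agent
max_agents = round(report_interval_s / report_cost_s)
print(max_agents)  # 600
```

In practice the server handles other RPCs too and may process reports concurrently, so the real ceiling could sit on either side of this number.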

I'm also especially interested in the DHCP agent, because in Calico we
have one of those on every compute host. We've just run tests which
appeared to be hitting trouble from just 50 compute hosts onwards, and
apparently because of DHCP agent communications. We need to continue
looking into that and report findings properly, but if anyone already
has any insights, they would be much appreciated.

Many thanks,
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] openwrt VM as service

2015-04-15 Thread Dean Troyer
On Wed, Apr 15, 2015 at 2:37 AM, Guo, Ruijing ruijing@intel.com wrote:

   I'd like to propose OpenWRT VM as a service.



 What is OpenWRT VM as a service:



 a)Tenant can download openWRT VM from
 http://downloads.openwrt.org/

 b)Tenant can create WAN interface from external public network

 c)Tenant can create private network and create instance from
 private network

  d)Tenant can configure openWRT for several services, including
  DHCP, routing, QoS, ACLs and VPNs.



So first off, I'll be the first one in line to promote using OpenWRT as the
basis of appliances for this sort of thing.  I use it to overcome the 'joy'
of VirtualBox's local networking and love what it can do in 64M RAM.

However, what you are describing are services, yes, but I think to focus on
the OpenWRT part of it is missing the point.  For example, Neutron has a
VPNaaS already, but I agree it can also be built using OpenWRT and
OpenVPN.  I don't think it is a stand-alone service though; using a
combination of Heat/{ansible|chef|puppet|salt}/any other
deployment/orchestration can get you there.  I have a shell script
somewhere for doing exactly that on AWS from way back.

What I've always wanted was an image builder that would customize the
packages pre-installed.  This would be especially useful for disposable
ramdisk-only or JFFS images that really can't install additional packages.
Such a front-end to the SDK/imagebuilder sounds like about half of what you
are talking about above.

Also, FWIW, a while back I packaged up a micro cloud-init replacement[0] in
shell that turns out to be really useful.  It's based on something I
couldn't find again to give proper attribution so if anyone knows who
originated this I'd be grateful.

dt

[0] https://github.com/dtroyer/openwrt-packages/tree/master/rc.cloud
-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2015-04-15 10:15:11 -0700:
 Excerpts from Sean Dague's message of 2015-04-14 16:54:30 -0700:
  
  It's time to be honest about the level of support that comes with those
  other backends, deprecate the plugability, and move on to more
  interesting problems. We do have plenty of them to solve. :) Perhaps in
  doing so we could get a better Rabbit implementation and make life
  easier for everyone.
  
 
 I think you're right about most of this, so +1*
 
 *I want to suggest that having this pluggable isn't the problem. Merging
 drivers without integration testing and knowledgeable resources from
 interested parties is the problem. If there isn't a well defined gate
 test, and a team of people willing to respond to any and all issues with
 that infrastructure committed, then the driver should not be shipped
 with oslo.messaging.

I tend to agree, although it's up to the oslo-messaging-core team to
decide what they want to support.

A general note on these sorts of conversations:

It's very easy to look at the state of OpenStack testing now and
say, "we must have integration test jobs for oslo.messaging!" Don't
forget that most of the work in this repo came out of Nova at a
time when there was no such thing, and we've only just settled on
good processes for managing third-party testing of that sort in
Nova, Cinder, and Neutron. We've been watching that work with
interest, but given the small size of the team currently maintaining
the library, it wasn't necessarily the highest priority.

That said, I know Mehdi and others have been working on setting up
integration test jobs, and I expect that at some point in the
not-too-distant future we'll need to discuss putting a rule into
place for these drivers just like the other projects have for their
drivers.  We don't yet have a sufficiently strong test suite to do
that, though, so requiring test jobs now would be premature.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Doug Hellmann
Excerpts from Ken Giusti's message of 2015-04-15 09:31:18 -0400:
 On Tue, Apr 14, 2015 at 6:23 PM, Joshua Harlow harlo...@outlook.com wrote:
  Ken Giusti wrote:
 
  Just to be clear: you're asking specifically about the 0-10 based
  impl_qpid.py driver, correct?   This is the driver that is used for
  the qpid:// transport (aka rpc_backend).
 
  I ask because I'm maintaining the AMQP 1.0 driver (transport
  amqp://) that can also be used with qpidd.
 
  However, the AMQP 1.0 driver isn't yet Python 3 compatible due to its
  dependency on Proton, which has yet to be ported to python 3 - though
  that's currently being worked on [1].
 
  I'm planning on porting the AMQP 1.0 driver once the dependent
  libraries are available.
 
  [1]: https://issues.apache.org/jira/browse/PROTON-490
 
 
  What's the expected date on this as it appears this also blocks python 3
  work as well... Seems like that hasn't been updated since nov 2014 which
  doesn't inspire that much confidence (especially for what appears to be
  mostly small patches).
 
 
 Good point.  I reached out to the bug owner.  He got it 'mostly
 working' but got hung up on porting the proton unit tests.   I've
 offered to help this along and he's good with that.  I'll make this a
 priority to move this along.
 
 In terms of availability - proton tends to do releases about every 4-6
 months.  They just released 0.9, so the earliest availability would be
 in that 4-6 month window (assuming that should be enough time to
 complete the work).   Then there's the time it will take for the
 various distros to pick it up...
 
 so, definitely not 'real soon now'. :(

This seems like a case where if we can get the libs we need to a point
where they install via pip, we can let the distros catch up instead of
waiting for them.

Similarly, if we have *an* approach for Python 3 on oslo.messaging, that
means the library isn't blocking us from testing applications with
Python 3. If some of the drivers lag, their test jobs may need to be
removed or disabled if the apps start testing under Python 3.

Doug
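
The qpid:// vs amqp:// distinction Ken draws above comes down to the transport URL scheme. A rough, non-authoritative sketch of that mapping — the real driver lookup is done by oslo.messaging through setuptools entry points, not code like this:

```python
# Illustrative only: oslo.messaging selects its driver from the transport
# URL scheme.  The dict below is a sketch of the mapping described in the
# thread; the real lookup happens via entry points inside oslo.messaging.
from urllib.parse import urlparse

DRIVERS = {
    'qpid': 'impl_qpid (AMQP 0-10, qpidd only)',
    'amqp': 'AMQP 1.0 / Proton driver (qpidd or any AMQP 1.0 broker)',
    'rabbit': 'impl_rabbit (kombu / RabbitMQ)',
}

def driver_for(transport_url):
    """Return a description of the driver the URL scheme selects."""
    return DRIVERS.get(urlparse(transport_url).scheme, 'unknown transport')

print(driver_for('qpid://broker.example.com:5672/'))
print(driver_for('amqp://broker.example.com:5672/'))
```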



Re: [openstack-dev] [trove] where is the api to fetch mysql log?

2015-04-15 Thread Peter Stachowski
Hi Li,

Unfortunately, fetching the logs didn't make it into Kilo and is still an 
ongoing project.  It should make it into Liberty, though.  ;)

Regards,
Peter Stachowski

From: Li Tianqing [mailto:jaze...@163.com]
Sent: April-15-15 10:30 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove] where is the api to fetch mysql log?

Hi,
   all, I know that Kilo RC1 has been released. I found an introduction to the new 
features here:
http://www.slideshare.net/openstack/trove-juno-to-kilo
   It says that we can fetch the MySQL error log, but when I search the source code 
on the master branch I cannot find the API. Can someone help me?


--
Best
Li Tianqing



Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2015-04-15 10:48:30 -0700:
 Excerpts from Clint Byrum's message of 2015-04-15 10:15:11 -0700:
  Excerpts from Sean Dague's message of 2015-04-14 16:54:30 -0700:
   
   It's time to be honest about the level of support that comes with those
   other backends, deprecate the plugability, and move on to more
   interesting problems. We do have plenty of them to solve. :) Perhaps in
   doing so we could get a better Rabbit implementation and make life
   easier for everyone.
   
  
  I think you're right about most of this, so +1*
  
  *I want to suggest that having this pluggable isn't the problem. Merging
  drivers without integration testing and knowledgeable resources from
  interested parties is the problem. If there isn't a well defined gate
  test, and a team of people willing to respond to any and all issues with
  that infrastructure committed, then the driver should not be shipped
  with oslo.messaging.
 
 I tend to agree, although it's up to the oslo-messaging-core team to
 decide what they want to support.
 
 A general note on these sorts of conversations:
 
 It's very easy to look at the state of OpenStack testing now and
 say, we must have integration test jobs for oslo.messaging! Don't
 forget that most of the work in this repo came out of Nova at a
 time when there was no such thing, and we've only just settled on
 good processes for managing third-party testing of that sort in
 Nova, Cinder, and Neutron. We've been watching that work with
 interest, but given the small size of the team currently maintaining
 the library, it wasn't necessarily the highest priority.
 
 That said, I know Mehdi and others have been working on setting up
 integration test jobs, and I expect that at some point in the
 not-too-distant future we'll need to discuss putting a rule into
 place for these drivers just like the other projects have for their
 drivers.  We don't yet have a sufficiently strong test suite to do
 that, though, so requiring test jobs now would be premature.

Great points Doug.

A devstack-gate job that is pointed at the major consumers of
oslo.messaging would be enough I think. The library sits at the core
of nearly everything, so I don't think we necessarily need to have
a split-out gate that just tests oslo.messaging narrowly with each
backend. Somewhere we would need _something_ in the gate using a
particular driver to be able to say that one should use it.



Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Ken Giusti
On Wed, Apr 15, 2015 at 1:33 PM, Doug Hellmann d...@doughellmann.com wrote:
 Excerpts from Ken Giusti's message of 2015-04-15 09:31:18 -0400:
 On Tue, Apr 14, 2015 at 6:23 PM, Joshua Harlow harlo...@outlook.com wrote:
  Ken Giusti wrote:
 
  Just to be clear: you're asking specifically about the 0-10 based
  impl_qpid.py driver, correct?   This is the driver that is used for
  the qpid:// transport (aka rpc_backend).
 
  I ask because I'm maintaining the AMQP 1.0 driver (transport
  amqp://) that can also be used with qpidd.
 
  However, the AMQP 1.0 driver isn't yet Python 3 compatible due to its
  dependency on Proton, which has yet to be ported to python 3 - though
  that's currently being worked on [1].
 
  I'm planning on porting the AMQP 1.0 driver once the dependent
  libraries are available.
 
  [1]: https://issues.apache.org/jira/browse/PROTON-490
 
 
  What's the expected date on this as it appears this also blocks python 3
  work as well... Seems like that hasn't been updated since nov 2014 which
  doesn't inspire that much confidence (especially for what appears to be
  mostly small patches).
 

 Good point.  I reached out to the bug owner.  He got it 'mostly
 working' but got hung up on porting the proton unit tests.   I've
 offered to help this along and he's good with that.  I'll make this a
 priority to move this along.

 In terms of availability - proton tends to do releases about every 4-6
 months.  They just released 0.9, so the earliest availability would be
 in that 4-6 month window (assuming that should be enough time to
 complete the work).   Then there's the time it will take for the
 various distros to pick it up...

 so, definitely not 'real soon now'. :(

 This seems like a case where if we can get the libs we need to a point
 where they install via pip, we can let the distros catch up instead of
 waiting for them.


Sadly, only the Python wrappers are available via pip.  The C extension
requires that the native proton shared library (libqpid-proton) be
available.   To date we've relied on the distro to provide that
library.

 Similarly, if we have *an* approach for Python 3 on oslo.messaging, that
 means the library isn't blocking us from testing applications with
 Python 3. If some of the drivers lag, their test jobs may need to be
 removed or disabled if the apps start testing under Python 3.

 Doug




-- 
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Monty Taylor
On 04/14/2015 08:21 PM, Chris Dent wrote:
 On Tue, 14 Apr 2015, Sean Dague wrote:
 
 It's time to be honest about the level of support that comes with those
 other backends, deprecate the plugability, and move on to more
 interesting problems. We do have plenty of them to solve. :) Perhaps in
 doing so we could get a better Rabbit implementation and make life
 easier for everyone.
 
 Etch this in stone about all kinds of pluggability.

+1000




Re: [openstack-dev] [all] Introducing the Cloud Service Federation project (cross-project design summit proposal)

2015-04-15 Thread Geoff Arnold
That’s the basic idea.  Now, if you’re a reseller of cloud services, you deploy 
Horizon+Aggregator/Keystone behind your public endpoint, with your branding on 
Horizon. You then bind each of your Aggregator Regions to a Virtual Region from 
one of your providers. As a reseller, you don’t actually deploy the rest of 
OpenStack.

As for tokens, there are at least two variations, each with pros and cons: 
proxy and pass-through. Still working through implications of both.

Geoff


 On Apr 15, 2015, at 12:49 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
 So, an Aggregator would basically be a stripped down keystone that basically 
 provided a dynamic service catalog that points to the registered other 
 regions?  You could then point a horizon, cli, or rest api at the aggregator 
 service?
 
 I guess if it was an identity provider too, it can potentially talk to the 
 remote keystone and generate project scoped tokens, though you'd need 
 project+region scoped tokens, which I'm not sure exists today?
 
 Thanks,
 Kevin
 
 
 From: Geoff Arnold [ge...@geoffarnold.com]
 Sent: Wednesday, April 15, 2015 12:05 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [all] Introducing the Cloud Service Federation 
 project (cross-project design summit proposal)
 
 tl;dr We want to implement a new system which we’re calling an Aggregator 
 which is based on Horizon and Keystone, and that can provide access to 
 virtual Regions from multiple independent OpenStack providers. We plan on 
 developing this system as a project in Stackforge, but we need help right now 
 in identifying any unexpected dependencies.
 
 
 
 For the last 6-7 years, there has been great interest in the potential for 
 various business models involving multiple clouds and/or cloud providers. 
 These business models include but are not limited to, federation, reseller, 
 broker, cloud-bursting, hybrid and intercloud. The core concept of this 
 initiative is to go beyond the simple dyadic relationship between a cloud 
 service provider and a cloud service consumer to a more sophisticated “supply 
 chain” of cloud services, dynamically configured, and operated by different 
 business entities. This is an ambitious goal, but there is a general sense 
 that OpenStack is becoming stable and mature enough to support such an 
 undertaking.
 
 Until now, OpenStack has focused on the logical abstraction of a Region as 
 the basis for cloud service consumption. A user interacts with Horizon and 
 Keystone instances for a Region, and through them gains access to the 
 services and resources within the specified Region. A recent extension of 
 this model has been to share Horizon and Keystone instances between several 
 Regions. This simplifies the user experience and enables single sign on to a 
 “single pane of glass”. However, in this configuration all of the services, 
 shared or otherwise, are still administered by a single entity, and the 
 configuration of the whole system is essentially static and centralized.
 
 We’re proposing that the first step in realizing the Cloud Service Federation 
 use cases is to enable the administrative separation of interface and region: 
 to create a new system which provides the same user interface as today - 
 Horizon, Keystone - but which is administratively separate from the Region(s) 
 which provide the actual IaaS resources. We don’t yet have a good name for 
 this system; we’ve been referring to it as the “Aggregator”. It includes 
 slightly-modified Horizon and Keystone services, together with a subsystem 
 which configures these services to implement the mapping of “Aggregator 
 Regions” to multiple, administratively independent, “Provider Regions”. Just 
 as the User-Provider relationship in OpenStack is “on demand”, we want the 
 Aggregator-Provider mappings to be dynamic, established by APIs, rather than 
 statically configured. We want to achieve this without substantially changing 
 the user experience, and with no changes to applications or to core OpenStack 
 services. The Aggregator represents an additional way of accessing a cloud; 
 it does not replace the existing Horizon and Keystone.
 
 The functionality and workflow is as follows: A user, X, logs into the 
 Horizon interface provided by Aggregator A. The user sees two Regions, V1 and 
 V2, and selects V1. This Region is actually provided by OpenStack service 
 provider P; it’s the Region which P knows as R4.  X now creates a new tenant 
 project, T. Leveraging the Hierarchical Multitenancy work in Kilo, T is 
 actually instantiated as a subproject of a Domain in R4, thus providing 
 namespace isolation and quota management. Now X can deploy and operate her 
 project T as she is used to, using Horizon, CLI, or other client-side tools. 
 UI and API requests are forwarded by the Aggregator to P’s Region R4. [I’ll 
 transfer this to the wiki and add diagrams.]
 
 As noted, 

Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2015-04-15 11:02:34 -0700:
 Excerpts from Doug Hellmann's message of 2015-04-15 10:48:30 -0700:
  Excerpts from Clint Byrum's message of 2015-04-15 10:15:11 -0700:
   Excerpts from Sean Dague's message of 2015-04-14 16:54:30 -0700:

It's time to be honest about the level of support that comes with those
other backends, deprecate the plugability, and move on to more
interesting problems. We do have plenty of them to solve. :) Perhaps in
doing so we could get a better Rabbit implementation and make life
easier for everyone.

   
   I think you're right about most of this, so +1*
   
   *I want to suggest that having this pluggable isn't the problem. Merging
   drivers without integration testing and knowledgeable resources from
   interested parties is the problem. If there isn't a well defined gate
   test, and a team of people willing to respond to any and all issues with
   that infrastructure committed, then the driver should not be shipped
   with oslo.messaging.
  
  I tend to agree, although it's up to the oslo-messaging-core team to
  decide what they want to support.
  
  A general note on these sorts of conversations:
  
  It's very easy to look at the state of OpenStack testing now and
  say, we must have integration test jobs for oslo.messaging! Don't
  forget that most of the work in this repo came out of Nova at a
  time when there was no such thing, and we've only just settled on
  good processes for managing third-party testing of that sort in
  Nova, Cinder, and Neutron. We've been watching that work with
  interest, but given the small size of the team currently maintaining
  the library, it wasn't necessarily the highest priority.
  
  That said, I know Mehdi and others have been working on setting up
  integration test jobs, and I expect that at some point in the
  not-too-distant future we'll need to discuss putting a rule into
  place for these drivers just like the other projects have for their
  drivers.  We don't yet have a sufficiently strong test suite to do
  that, though, so requiring test jobs now would be premature.
 
 Great points Doug.
 
 A devstack-gate job that is pointed at the major consumers of
 oslo.messaging would be enough I think. The library sits at the core
 of nearly everything, so I don't think we necessarily need to have
 a split-out gate that just tests oslo.messaging narrowly with each
 backend. Somewhere we would need _something_ in the gate using a
 particular driver to be able to say that one should use it.
 

Sure, that makes sense.



Re: [openstack-dev] [all] Introducing the Cloud Service Federation project (cross-project design summit proposal)

2015-04-15 Thread Fox, Kevin M
So, an Aggregator would basically be a stripped-down Keystone that 
provided a dynamic service catalog pointing at the registered other regions? 
You could then point a Horizon, CLI, or REST API at the aggregator service?

I guess if it were an identity provider too, it could potentially talk to the 
remote Keystone and generate project-scoped tokens, though you'd need 
project+region-scoped tokens, which I'm not sure exist today?

Thanks,
Kevin
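
For what it's worth, a toy illustration of the "dynamic service catalog" reading of the Aggregator — a Keystone-v3-style catalog fragment in which virtual region V1's endpoints actually live in provider P's region R4. URLs, region names, and the helper below are all invented for illustration:

```python
# Hypothetical catalog the Aggregator could serve; the field layout
# mirrors a Keystone v3 token catalog, but every value here is made up.
catalog = [
    {
        'type': 'compute',
        'name': 'nova',
        'endpoints': [
            {
                'region': 'V1',         # region as the Aggregator's users see it
                'interface': 'public',
                # behind the scenes this is provider P's R4 endpoint
                'url': 'https://compute.p.example.com/v2.1',
            },
        ],
    },
]

def endpoints_for_region(catalog, region, interface='public'):
    """Filter a v3-style catalog down to one (virtual) region."""
    return [
        (svc['type'], ep['url'])
        for svc in catalog
        for ep in svc['endpoints']
        if ep['region'] == region and ep['interface'] == interface
    ]

print(endpoints_for_region(catalog, 'V1'))
```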


From: Geoff Arnold [ge...@geoffarnold.com]
Sent: Wednesday, April 15, 2015 12:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [all] Introducing the Cloud Service Federation project 
(cross-project design summit proposal)

tl;dr We want to implement a new system which we’re calling an Aggregator which 
is based on Horizon and Keystone, and that can provide access to virtual 
Regions from multiple independent OpenStack providers. We plan on developing 
this system as a project in Stackforge, but we need help right now in 
identifying any unexpected dependencies.



For the last 6-7 years, there has been great interest in the potential for 
various business models involving multiple clouds and/or cloud providers. These 
business models include but are not limited to, federation, reseller, broker, 
cloud-bursting, hybrid and intercloud. The core concept of this initiative is 
to go beyond the simple dyadic relationship between a cloud service provider 
and a cloud service consumer to a more sophisticated “supply chain” of cloud 
services, dynamically configured, and operated by different business entities. 
This is an ambitious goal, but there is a general sense that OpenStack is 
becoming stable and mature enough to support such an undertaking.

Until now, OpenStack has focused on the logical abstraction of a Region as the 
basis for cloud service consumption. A user interacts with Horizon and Keystone 
instances for a Region, and through them gains access to the services and 
resources within the specified Region. A recent extension of this model has 
been to share Horizon and Keystone instances between several Regions. This 
simplifies the user experience and enables single sign on to a “single pane of 
glass”. However, in this configuration all of the services, shared or 
otherwise, are still administered by a single entity, and the configuration of 
the whole system is essentially static and centralized.

We’re proposing that the first step in realizing the Cloud Service Federation 
use cases is to enable the administrative separation of interface and region: 
to create a new system which provides the same user interface as today - 
Horizon, Keystone - but which is administratively separate from the Region(s) 
which provide the actual IaaS resources. We don’t yet have a good name for this 
system; we’ve been referring to it as the “Aggregator”. It includes 
slightly-modified Horizon and Keystone services, together with a subsystem 
which configures these services to implement the mapping of “Aggregator 
Regions” to multiple, administratively independent, “Provider Regions”. Just as 
the User-Provider relationship in OpenStack is “on demand”, we want the 
Aggregator-Provider mappings to be dynamic, established by APIs, rather than 
statically configured. We want to achieve this without substantially changing 
the user experience, and with no changes to applications or to core OpenStack 
services. The Aggregator represents an additional way of accessing a cloud; it 
does not replace the existing Horizon and Keystone.

The functionality and workflow is as follows: A user, X, logs into the Horizon 
interface provided by Aggregator A. The user sees two Regions, V1 and V2, and 
selects V1. This Region is actually provided by OpenStack service provider P; 
it’s the Region which P knows as R4.  X now creates a new tenant project, T. 
Leveraging the Hierarchical Multitenancy work in Kilo, T is actually 
instantiated as a subproject of a Domain in R4, thus providing namespace 
isolation and quota management. Now X can deploy and operate her project T as 
she is used to, using Horizon, CLI, or other client-side tools. UI and API 
requests are forwarded by the Aggregator to P’s Region R4. [I’ll transfer this 
to the wiki and add diagrams.]

As noted, the high-level workflow is relatively straightforward, but we need to 
understand two important concepts. First, how does P make R4 available for use 
by A? Are all of the services and resources in R4 available to A, or can P 
restrict things in some way? What’s the lifecycle of the relationship? 
Secondly, how do we handle identity? Can we arrange that same identity provider 
is used in the Aggregator and in the relevant domain within R4? One answer to 
these issues is to introduce what Mark Shuttleworth called “virtual Regions” at 
his talk in Paris; add a layer which exposes a Domain within a Region (with 
associated IDM, 

Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-04-15 06:50:11 -0700:
 == End game?
 
 *If* pip install took into account the requirements of everything
 already installed like apt or yum does, and resolve accordingly
 (including saying that's not possible unless you uninstall or upgrade
 X), we'd be able to pip install and get a working answer at the end. Maybe?
 
 Honestly, there are so many fixes on fixes here to our system, I'm not
 sure even this would fix it.
 

This also carries a new problem: co-installability. If you get deep into
Debian policy, you'll find that many of the policies that make the least
sense are there to preserve co-installability. For instance, Node.js
caused a stir a while back because they use '/usr/bin/node' for their
interpreter. There was already an X.25 packet-radio-thing that used that
binary and was called node, and so the node.js maintainers simply said
Conflicts: node. This causes problems, though, as now you can't have an
X.25 packet-radio-thing that also uses node.js.

Right now users can go ahead and violate some stated requirements after
they've run tests and verified whatever reason the conflict is present
doesn't affect them, by simply ordering their requirements. It's not
awesome, but it _does_ work. Without that, PyPI is suddenly sectioned
off into islands the moment a popular library narrows its requirements.

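A hedged sketch of the "run tests and verify" step Clint mentions, assuming 2015-era pip behavior (no dependency resolver; requirements installed first-come-first-served, so ordering could push an install past a stated conflict). After the fact, pkg_resources can report which declared requirements the installed set actually violates:

```python
# Sketch only: walk the installed working set and list every declared
# requirement that the environment does not satisfy -- i.e. the conflicts
# a user knowingly ordered around.
import pkg_resources

def find_conflicts():
    """List (project, requirement, error) triples for every installed
    distribution whose declared requirements are not satisfied."""
    conflicts = []
    for dist in pkg_resources.working_set:
        for req in dist.requires():
            try:
                pkg_resources.require(str(req))
            except (pkg_resources.VersionConflict,
                    pkg_resources.DistributionNotFound) as exc:
                conflicts.append((dist.project_name, str(req), str(exc)))
    return conflicts

if __name__ == '__main__':
    for project, req, err in find_conflicts():
        print('%s requires %s: %s' % (project, req, err))
```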


Re: [openstack-dev] [neutron] openwrt VM as service

2015-04-15 Thread Sławek Kapłoński
Hello,

I agree. IMHO it should perhaps be something like *aaS deployed in a VM. I
think that Octavia is something like that for LBaaS now.
Maybe it could be something like RouteraaS, which would provide all such
functions in a VM?

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Wed, Apr 15, 2015 at 11:55:06AM -0500, Dean Troyer wrote:
 On Wed, Apr 15, 2015 at 2:37 AM, Guo, Ruijing ruijing@intel.com wrote:
 
I’d like to propose openwrt VM as service.
 
 
 
  What’s openWRT VM as service:
 
 
 
   a) Tenant can download an openWRT VM from
   http://downloads.openwrt.org/
 
   b) Tenant can create a WAN interface from an external public network
 
   c) Tenant can create a private network and create instances from the
   private network
 
   d) Tenant can configure openWRT for several services including
   DHCP, routing, QoS, ACL and VPNs.
 
 
 
 So first off, I'll be the first one in line to promote using OpenWRT for the
 basis of appliances for this sort of thing.  I use it to overcome the 'joy'
 of VirtualBox's local networking and love what it can do in 64M RAM.
 
 However, what you are describing are services, yes, but I think focusing on
 the OpenWRT part of it misses the point.  For example, Neutron has a
 VPNaaS already, but I agree it can also be built using OpenWRT and
 OpenVPN.  I don't think it is a stand-alone service, though; a
 combination of Heat/{ansible|chef|puppet|salt}/any other
 deployment/orchestration tooling can get you there.  I have a shell script
 somewhere for doing exactly that on AWS from way back.
 
 What I've always wanted was an image builder that would customize the
 packages pre-installed.  This would be especially useful for disposable
 ramdisk-only or JFFS images that really can't install additional packages.
 Such a front-end to the SDK/imagebuilder sounds like about half of what you
 are talking about above.
 
 Also, FWIW, a while back I packaged up a micro cloud-init replacement[0] in
 shell that turns out to be really useful.  It's based on something I
 couldn't find again to give proper attribution so if anyone knows who
 originated this I'd be grateful.
 
 dt
 
 [0] https://github.com/dtroyer/openwrt-packages/tree/master/rc.cloud
 -- 
 
 Dean Troyer
 dtro...@gmail.com
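
A hedged sketch of the image-builder front end described above, assuming the OpenWRT ImageBuilder's `make image` interface (PROFILE=, PACKAGES= and FILES= are its variables); the profile and package names below are illustrative, not a tested set:

```python
# Compose an ImageBuilder command line to bake packages into a
# ramdisk-only / JFFS image that can't install packages later.
import shlex

def imagebuilder_cmd(profile, packages, files_dir=None):
    """Build a `make image` command for the OpenWRT ImageBuilder.

    A leading '-' in a package name removes it from the default set,
    which is how you slim an image down.
    """
    cmd = ['make', 'image', 'PROFILE=%s' % profile,
           'PACKAGES=%s' % ' '.join(packages)]
    if files_dir:
        cmd.append('FILES=%s' % files_dir)  # overlay custom config files
    return ' '.join(shlex.quote(part) for part in cmd)

print(imagebuilder_cmd('Default', ['openvpn-openssl', '-ppp'], 'files/'))
```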





[openstack-dev] [all] Introducing the Cloud Service Federation project (cross-project design summit proposal)

2015-04-15 Thread Geoff Arnold
tl;dr We want to implement a new system which we’re calling an Aggregator which 
is based on Horizon and Keystone, and that can provide access to virtual 
Regions from multiple independent OpenStack providers. We plan on developing 
this system as a project in Stackforge, but we need help right now in 
identifying any unexpected dependencies.
 
 
 
For the last 6-7 years, there has been great interest in the potential for 
various business models involving multiple clouds and/or cloud providers. These 
business models include, but are not limited to, federation, reseller, broker, 
cloud-bursting, hybrid and intercloud. The core concept of this initiative is 
to go beyond the simple dyadic relationship between a cloud service provider 
and a cloud service consumer to a more sophisticated “supply chain” of cloud 
services, dynamically configured, and operated by different business entities. 
This is an ambitious goal, but there is a general sense that OpenStack is 
becoming stable and mature enough to support such an undertaking.
 
Until now, OpenStack has focused on the logical abstraction of a Region as the 
basis for cloud service consumption. A user interacts with Horizon and Keystone 
instances for a Region, and through them gains access to the services and 
resources within the specified Region. A recent extension of this model has 
been to share Horizon and Keystone instances between several Regions. This 
simplifies the user experience and enables single sign on to a “single pane of 
glass”. However, in this configuration all of the services, shared or 
otherwise, are still administered by a single entity, and the configuration of 
the whole system is essentially static and centralized.
 
We’re proposing that the first step in realizing the Cloud Service Federation 
use cases is to enable the administrative separation of interface and region: 
to create a new system which provides the same user interface as today - 
Horizon, Keystone - but which is administratively separate from the Region(s) 
which provide the actual IaaS resources. We don’t yet have a good name for this 
system; we’ve been referring to it as the “Aggregator”. It includes 
slightly-modified Horizon and Keystone services, together with a subsystem 
which configures these services to implement the mapping of “Aggregator 
Regions” to multiple, administratively independent, “Provider Regions”. Just as 
the User-Provider relationship in OpenStack is “on demand”, we want the 
Aggregator-Provider mappings to be dynamic, established by APIs, rather than 
statically configured. We want to achieve this without substantially changing 
the user experience, and with no changes to applications or to core OpenStack 
services. The Aggregator represents an additional way of accessing a cloud; it 
does not replace the existing Horizon and Keystone.
 
The functionality and workflow is as follows: A user, X, logs into the Horizon 
interface provided by Aggregator A. The user sees two Regions, V1 and V2, and 
selects V1. This Region is actually provided by OpenStack service provider P; 
it’s the Region which P knows as R4.  X now creates a new tenant project, T. 
Leveraging the Hierarchical Multitenancy work in Kilo, T is actually 
instantiated as a subproject of a Domain in R4, thus providing namespace 
isolation and quota management. Now X can deploy and operate her project T as 
she is used to, using Horizon, CLI, or other client-side tools. UI and API 
requests are forwarded by the Aggregator to P’s Region R4. [I’ll transfer this 
to the wiki and add diagrams.]
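 
The project-creation step in that workflow could look roughly like this. parent_id is the Kilo hierarchical-multitenancy field; every other name and ID value here is a placeholder, not a worked-out design:
 
```python
# Purely illustrative: the Keystone v3 request body the Aggregator might
# POST to P's Keystone for R4 to create tenant project T as a subproject.
import json

create_project = {
    'project': {
        'name': 'T',                     # the tenant project X creates
        'domain_id': 'r4_domain_id',     # provider-side domain in region R4
        'parent_id': 'aggregator_root',  # HMT parent, giving namespace isolation
        'enabled': True,
    },
}

# In a real exchange this body would go to POST /v3/projects.
print(json.dumps(create_project, sort_keys=True))
```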
 
As noted, the high-level workflow is relatively straightforward, but we need to 
understand two important concepts. First, how does P make R4 available for use 
by A? Are all of the services and resources in R4 available to A, or can P 
restrict things in some way? What’s the lifecycle of the relationship? 
Secondly, how do we handle identity? Can we arrange that the same identity provider 
is used in the Aggregator and in the relevant domain within R4? One answer to 
these issues is to introduce what Mark Shuttleworth called “virtual Regions” at 
his talk in Paris; add a layer which exposes a Domain within a Region (with 
associated IDM, quotas, and other policies) as a browsable, consumable resource 
aggregate. To implement this, P can add a new service to R4, the Virtual Region 
Manager, with the twin roles of defining Virtual Regions in terms of physical 
Region resources, and managing the service provider side of the negotiation 
with the Aggregator when setting up Aggregator-to-provider mappings. The 
intention is that the Virtual Region Manager will be a non-disruptive add-on to 
an existing OpenStack deployment.
 
Obviously there are many more issues to be solved, both within OpenStack and 
outside (especially in the areas of OSS and BSS). However, we have the 
beginnings of an architecture which seems to address many of the interesting 
use cases. The immediate question is how to 

[openstack-dev] [doc] Kilo doc bug triage day - choose day

2015-04-15 Thread Anne Gentle
Hi all,

Round the clock and around the world, we need to dedicate a day to triaging
doc bugs in anticipation of the Kilo release. I'd like to propose either
Tuesday April 21 or Thursday April 23.

All the details are here:
https://wiki.openstack.org/wiki/Documentation/BugDay and I'll add the exact
date after getting input here.

Please join in. Doc team members, sign up for a time to be available on IRC
for questions. Everyone join in triaging even if you just leave a comment
on a bug that will help someone fix it.

Thanks,
Anne

-- 
Anne Gentle
annegen...@justwriteclick.com


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Doug Hellmann
Excerpts from Ken Giusti's message of 2015-04-15 14:08:57 -0400:
 On Wed, Apr 15, 2015 at 1:33 PM, Doug Hellmann d...@doughellmann.com wrote:
  Excerpts from Ken Giusti's message of 2015-04-15 09:31:18 -0400:
  On Tue, Apr 14, 2015 at 6:23 PM, Joshua Harlow harlo...@outlook.com 
  wrote:
   Ken Giusti wrote:
  
   Just to be clear: you're asking specifically about the 0-10 based
   impl_qpid.py driver, correct?   This is the driver that is used for
   the qpid:// transport (aka rpc_backend).
  
   I ask because I'm maintaining the AMQP 1.0 driver (transport
   amqp://) that can also be used with qpidd.
  
   However, the AMQP 1.0 driver isn't yet Python 3 compatible due to its
   dependency on Proton, which has yet to be ported to python 3 - though
   that's currently being worked on [1].
  
   I'm planning on porting the AMQP 1.0 driver once the dependent
   libraries are available.
  
   [1]: https://issues.apache.org/jira/browse/PROTON-490
  
  
   What's the expected date on this as it appears this also blocks python 3
   work as well... Seems like that hasn't been updated since nov 2014 which
   doesn't inspire that much confidence (especially for what appears to be
   mostly small patches).
  
 
  Good point.  I reached out to the bug owner.  He got it 'mostly
  working' but got hung up on porting the proton unit tests.   I've
  offered to help this along and he's good with that.  I'll make this a
  priority to move this along.
 
  In terms of availability - proton tends to do releases about every 4-6
  months.  They just released 0.9, so the earliest availability would be
  in that 4-6 month window (assuming that should be enough time to
  complete the work).   Then there's the time it will take for the
  various distros to pick it up...
 
  so, definitely not 'real soon now'. :(
 
  This seems like a case where if we can get the libs we need to a point
  where they install via pip, we can let the distros catch up instead of
  waiting for them.
 
 
 Sadly just the python wrappers are available via pip.  Its C extension
 requires that the native proton shared library (libqpid-proton) is
 available.   To date we've relied on the distro to provide that
 library.

OK, that may pose more of a problem. It is possible to put C extensions
into a Python library and make them pip installable, so that might be
our path out.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Sean Dague
On 04/15/2015 01:48 PM, Doug Hellmann wrote:
 Excerpts from Clint Byrum's message of 2015-04-15 10:15:11 -0700:
 Excerpts from Sean Dague's message of 2015-04-14 16:54:30 -0700:

 It's time to be honest about the level of support that comes with those
 other backends, deprecate the plugability, and move on to more
 interesting problems. We do have plenty of them to solve. :) Perhaps in
 doing so we could get a better Rabbit implementation and make life
 easier for everyone.


 I think you're right about most of this, so +1*

 *I want to suggest that having this pluggable isn't the problem. Merging
 drivers without integration testing and knowledgeable resources from
 interested parties is the problem. If there isn't a well defined gate
 test, and a team of people willing to respond to any and all issues with
 that infrastructure committed, then the driver should not be shipped
 with oslo.messaging.
 
 I tend to agree, although it's up to the oslo-messaging-core team to
 decide what they want to support.

I do feel like decisions like this need to uplevel from just the library
maintainers, because it's a thing that we as a community all need to be
willing to stand up and support: the way OpenStack works, for some
definition of that. Otherwise we're going to get into a situation where
a lot of projects are just going to say: oh, not rabbit, go talk to
those folks instead. Nothing infuriates people more than support
telephone tag.

I don't think that's a situation we want to put the oslo-messaging team
in, and I don't think it's the way we want to work as a community.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Robert Collins
On 14 April 2015 at 21:36, Thierry Carrez thie...@openstack.org wrote:
 Robert Collins wrote:
 On 13 April 2015 at 22:04, Thierry Carrez thie...@openstack.org wrote:
 How does this proposal affect stable branches ? In order to keep the
 breakage there under control, we now have stable branches for all the
 OpenStack libraries and cap accordingly[1]. We planned to cap all other
 libraries to the version that was there when the stable branch was
 cut.  Where would we do those cappings in the new world order ? In
 install_requires ? Or should we not do that anymore ?

 [1]
 http://specs.openstack.org/openstack/openstack-specs/specs/library-stable-branches.html

 I don't think there's a hard and fast answer here. Whats proposed
 there should work fine.
 [...]

 tl;dr - I dunno :)

 This is not our first iteration at this, and it feels like we never come
 up with a solution that covers all the bases. Past failure can generally
 be traced back to the absence of the owner of a critical puzzle piece...
 so what is the way forward ?

 Should we write a openstack-specs and pray that all the puzzle piece
 owners review it ?

 Should we lock up all the puzzle piece owners in the QA/Infra/RelMgt
 Friday sprint room in Vancouver and get them (us) to sort it out ?

That might be good.

There are I think four or five distinct owners here:
 - CI
 [wants no spurious failures]
 - mordred's publish-what-works-for-the-non-CICD-users, aka (AFAICT)
'folk that consume git or perhaps tarballs but mainly git'
 [wants to be able to reproduce what CI had installed and not have
things require hours of debugging]
 - redistributors [including folk that build debs and rpms from git
and tarballs]
 [wants to be able to fit what we produced into existing
*usually-not-latest-of-everything* environments]
 - developers
 [want to be able to bump/cap/change dependencies as dependencies
bring in new features (and/or breaking releases]

If we're going to lock folk into a room, lets make sure we have all of
them there :)

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Solum PTL Election Delayed

2015-04-15 Thread Adrian Otto
Solum Electorate,

In accordance with tradition, we make an effort to delay our election until 
after OpenStack PTL elections conclude. We planned to open our PTL Candidacy 
today, and start an election on April 23, if one is required. Because the 
OpenStack election was stopped and re-started [1], I am delaying those dates 
until April 18 and April 25 respectively so that our candidacy begins after the 
OpenStack PTL election ends.

Regards,

Adrian Otto

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061263.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Robert Collins
On 16 April 2015 at 00:51, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-04-15 11:06:20 +0200 (+0200), Thierry Carrez wrote:
 And the doc is indeed pretty clear. I assumed requirements.txt would
 describe... well... requirements. But like Robert said they are meant to
 describe specific deployments (should really have been named
 deployment.txt, or at least dependencies.txt).

 It may also just be that we overloaded the meaning of that filename
 convention without realizing. Rewind to a couple years ago we had
 essentially the same file but it was called tools/pip-requires
 instead. I wonder if continuing to have it called something else
 would have been less confusing to the Python developer community,
 but the damage is done now.

 Ultimately we just want a way to maintain a list of application or
 library dependencies in such a way that when someone uses pip
 install they get a fully-working installation without having to know
 to run additional commands, and for us to be able to keep that list
 in a machine-parsable file which isn't also source code fed to a
 turing-complete interpreter.

I think this is too narrow a description of our needs. See the
audience list I proposed to Thierry :).

The heart of the conflict is this:
'pip install git+/nova.git' cannot itself have Known Good
dependencies, because multiple repos prohibit developers from putting
exact versions in install_requires, and there is a chicken-and-egg
problem with knowing it is good from CI if we generate the file from CI.

install_requires can however have Known Bad exclusions. Which is the
intended use. And we can (and should) have a Known Good somewhere.
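To make the distinction concrete, a small sketch (package names and version
numbers here are invented, not real OpenStack pins): install_requires carries
only broad floors plus Known-Bad exclusions, while the Known-Good record is a
separate, fully pinned list.

```python
# install_requires: Known-Bad exclusions only (floors, plus '!=' for
# releases known to be broken). All names/versions are hypothetical.
install_requires = ["oslo.messaging>=1.8.0,!=1.8.1", "six>=1.7.0"]

# Known-Good: one exact, CI-verified combination, recorded elsewhere.
known_good = ["oslo.messaging==1.8.2", "six==1.9.0"]

# The Known-Good list pins everything exactly; install_requires never does.
assert all("==" in req for req in known_good)
assert not any("==" in req for req in install_requires)
```

The point of keeping the two apart is that the first stays stable and broad in
git, while the second can be regenerated from every passing CI run.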

We could generate that in advance as
https://review.openstack.org/#/c/161047/ proposes - we'll know that
any commit that got through CI worked with a precise list fed into it,
but we can't trivially reconstruct what list was used. (Log scraping
is not 'trivial' - it requires deep knowledge of our infra and
processes).

Or we can generate it as a by-product of CI (same issues about getting
hold of the data as in my prior paragraph), but we'll have the ability
to use floating versions as our inputs, so this would be suitable and
relevant for master as well as stable.

We can of course do both:
 - precapped gate lists for stable branches, and captured output to a
common place for consumption by deployers/testers
 - looser gate lists for master branches, captured in the same way to
the same place as stable branches, for deployers/testers

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Sean Dague
On 04/15/2015 06:44 PM, Robert Collins wrote:
 On 16 April 2015 at 01:50, Sean Dague s...@dague.net wrote:
 On 04/12/2015 06:43 PM, Robert Collins wrote:
 
 Thoughts? If there's broad apathy-or-agreement I can turn this into a
 spec for fine coverage of ramifications and corner cases.

 I'm definitely happy someone else is diving in on here, just beware the
 dragons, there are many.

 I think some of the key problems are the following (lets call these the
 requirements requirements):
 
 :) There's definitely enough meat here we're going to want a spec to
 review in one place the conclusions and inputs.
 
 == We would like to be able to install multiple projects into a single
 devstack instance, and have all services work.
 
 -- why do we want this? [for completeness - no objecting]. I think we
 want this because its easier for folk mucking around to not have to
 remember which venv etc; because we have projects like neutron and
 nova that install bits into each others processes via common
 libraries; because deployers have asked us to be sure that we can
 co-install everything.

I think the completeness statement here is as follows:

1. For OpenStack to scale to the small end, we need to be able to
overlap services, otherwise you are telling people they basically have
to start with a full rack of hardware to get 1 worker.

2. The Linux Distributors (who are a big part of our community) install
everything at a system level. Python has no facility for having multiple
versions of libraries installed at the system level, largely because
virtualenvs were created (which solve the non system application level
problem really well).

3. The alternative of putting a venv inside of every service violates
the principle of single location security update.

Note: there is an aside about why we might *not* want to do this

That being said, if you deploy at a system level your upgrade unit is
now 1 full box, instead of 1 service, because you can't do library
isolation between old and new. A worker might have neutron, cinder, and
nova agents. Only the Nova agents support rolling upgrade (cinder is
working hard on this, I don't think neutron has visited this yet). So
real rolling upgrade is sacrificed on this altar of installing
everything at a system level.

 
 This is hard because:

 1. these are multiple projects so pip can't resolve all requirements at
 once to get to a solved state (also, optional dependencies in particular
 configs mean these can be installed later)
 
 I don't understand your first clause here. Pip certainly can resolve
 all requirements at once: for instance, 'pip install path_to_nova
 path_to_swift path_to_neutron' would resolve all the requirements for
 all three at once. We're not doing that today, but its not a pip
 limitation. Today https://github.com/pypa/pip/issues/2687 will rear
 its head, but that may be quite shallow.
 
 As far as optional deps go - we need to get those into extra
 requirements, then pip can examine that for us. Enabling that is on my
 stack that I'm rolling up at the moment.
 
  2. pip's solver ignores setup_requires -
  https://github.com/pypa/pip/issues/2612#issuecomment-91114298 which
  means we can get inconsistent results
 
 Ish. The actual issue we ran into was
 https://bitbucket.org/pypa/setuptools/issue/141/setup_requires-feature-does-not-handle
 . We can tackle that directly and then require a newer setuptools to
 solve this - it doesn't need any larger context.
 
 3. doing this iteratively in projects can cause the following to happen

 A requires B>=1.0,<2.0
 C requires B>=1.2

 pip install C can make the pip install A requirements invalid later.
 
 https://github.com/pypa/pip/issues/2687 again. I suspect this is a ~10
 line patch - read all the package metadata present on the system, and
 union its deps in inside PackageSet.add_requirement.
 
 This can end up in a failure of a service to start (if pkg_resources is
 actually checking things), or very subtle bugs later.

 Today global-requirements attempts to address this by continuously
 narrowing the requirements definitions for everything we have under our
 control so that pip is living in a rubber room and can only get an
 answer we know works.
 
 == However this has exposed an additional issue, libraries not
 released at release time

 Way more things are getting g-r syncs than top level projects.
 Synchronizing requirements for things that all release at the same time
 makes a lot of sense. However we're synchronizing requirements into
 libraries that release at different cadence. This has required all
 libraries to also have stable/ branches, for requirements matching.

 In an ideal world libraries would have very broad requirements, which
 would not have caps in them. non library projects would have narrower
 requirements that we know work.
 
 I mostly agree. That is I think the heart of the issue I started this
 thread about.
 For libraries I trivially agree.
 For non-library projects, I think we still 

Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Robert Collins
On 15 April 2015 at 09:33, Joe Gordon joe.gord...@gmail.com wrote:


 On Tue, Apr 14, 2015 at 2:36 AM, Thierry Carrez thie...@openstack.org
 wrote:

 Robert Collins wrote:
  On 13 April 2015 at 22:04, Thierry Carrez thie...@openstack.org wrote:
  How does this proposal affect stable branches ? In order to keep the
  breakage there under control, we now have stable branches for all the
  OpenStack libraries and cap accordingly[1]. We planned to cap all other
  libraries to the version that was there when the stable branch was
  cut.  Where would we do those cappings in the new world order ? In
  install_requires ? Or should we not do that anymore ?
 
  [1]
 
  http://specs.openstack.org/openstack/openstack-specs/specs/library-stable-branches.html
 
  I don't think there's a hard and fast answer here. Whats proposed
  there should work fine.
  [...]
 
  tl;dr - I dunno :)


 This is the part of the puzzle I am the most interested in. Making sure
 stable branches don't break out from underneath us forcing us to go into
 fire fighting mode.

I think this is very important too. The spec that James linked to
https://review.openstack.org/#/c/161047/  seems broadly to be heading
in the right direction to me - my -1 on it is because the described
failure mode is very solvable vs e.g. failure modes where dependencies
release broken stuff.

It is in short 'make a fully specified requirements list and use that
via pip -r in stable branch gate jobs'. I think that's entirely sane
(and I have been discussing that in various places for a while :)).

*this* thread is about making install_requires, which we currently
source from requirements.txt, be sourced separately (from setup.cfg)
and then we don't need per-repo requirements.txts, which will align us
with the broader ecosystem as far as the definition of
requirements.txt.

We *do* have uses for fully specified things, and one of them is 'what
worked in that test run', which is what the etherpad is intended to
capture.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Robert Collins
On 15 April 2015 at 09:35, Joe Gordon joe.gord...@gmail.com wrote:


 On Sun, Apr 12, 2015 at 6:09 PM, Robert Collins robe...@robertcollins.net
 wrote:

 On 13 April 2015 at 12:53, Monty Taylor mord...@inaugust.com wrote:

  If we pin the stable branches with hard pins of direct and indirect
  dependencies, we can have our stable branch artifacts be installable.
  Thats awesome. IF there is a bugfix release or a security update to a
  dependent library - someone can propose it. Otherwise, the stable
  release should not be moving.

 Can we do that in stable branches? We've still got the problem of
 bumping dependencies across multiple packages.

 What do you mean bumping dependencies across mulitple packages?

nova depends on oslo.messaging.
Both nova and oslo.messaging have $foo in install_requires. A hard pin
of $foo==X will then be applied by both nova and oslo.messaging.

AIUI we've moved to installing deps from PyPI (which is great), but
lets analyze both cases.

a) we install oslo.messaging from PyPI.
  - we go to change our version of foo to X+1 in nova
  - the gate job that overrides install_requires (by editing
requirements.txt from memory, which feeds into install_requires via
our reflection logic) will then fail when oslo.messaging is evaluated,
because X+1 != X and oslo.messaging requires ==X.

b) we install oslo.messaging from git
 - we go to change our version of foo to X+1 in nova
 - we edit oslo.messaging's install_requires as well as nova's
 - the commit succeeds, but oslo.messaging git and nova git are no
longer coinstallable, because oslo.messaging's foo==X has not been
changed in git

 We cannot do this today with 'pip install -r requirements.txt' but we can
 with 'pip install -r --no-deps requirements.txt'  if requirements includes
 all transitive dependencies. And then we have to figure out transitive
 dependencies for all projects etc.

pip install -r a-requirements-file-from-pip-freeze should always work,
because all transitive dependencies are included, and the versions
selected are mutually co-installable (assuming the environment that
was pip freeze's was safely constructed [there is one bug
https://github.com/pypa/pip/issues/2687 I believe that could be
triggered as we use multiple pip install runs to install stuff]).
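As a minimal illustration of why freeze output is reproducible (the package
names and versions below are invented, not a real capture): every line is an
exact '==' pin covering the whole transitive set, so a later 'pip install -r'
run has no version selection left to perform.

```python
# Parse a pip-freeze-style capture: one 'name==version' pin per line.
freeze_output = """\
nova==2015.1.0
oslo.messaging==1.8.1
six==1.9.0
"""

pins = dict(line.split("==", 1) for line in freeze_output.splitlines())

# Fully specified: every project resolves to exactly one version,
# including the transitive dependencies (six here stands in for those).
assert pins == {"nova": "2015.1.0", "oslo.messaging": "1.8.1", "six": "1.9.0"}
```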

The issues we have with our current requirements.txt and the different
ways it might be used are the source of us having had to try --no-deps
before: we have global requirements broad because it takes time to
propagate a change across repos, and we can't bump minimum versions when
that breaks not-yet-ready repos (because we use global requirements to
override in the gate).

The issue inside pip that triggers this is:
 - when a requirement is added, we initially store the constraint for it
 - then we evaluate it against the indices in use, finding a version
that matches the constraints and selecting that.
 - if a later source of requirements (e.g. the install_requires from
the wheel or sdist of something we selected) has the same requirement,
and the selected version does not meet the constraints this new
requirement expressed, pip errors.
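The three steps above can be modelled in a few lines. This is a toy model of
the described behavior, not pip's actual code: a version is fixed the first
time a project is seen, and a later, tighter requirement can only error
because there is no backtracking.

```python
def resolve(requirements, releases):
    """Toy model of pip's select-then-check behavior (not real pip code)."""
    selected = {}
    for name, predicate in requirements:
        if name not in selected:
            # First sighting: greedily pick the newest matching release.
            selected[name] = max(v for v in releases[name] if predicate(v))
        elif not predicate(selected[name]):
            # A later requirement disagrees with the already-fixed choice.
            raise RuntimeError("conflict on %s %s" % (name, selected[name]))
    return selected

releases = {"B": [(1, 0), (1, 2), (2, 1)]}
requirements = [
    ("B", lambda v: v >= (1, 2)),           # first requirement seen
    ("B", lambda v: (1, 0) <= v < (2, 0)),  # later, from a wheel's metadata
]
try:
    resolve(requirements, releases)
    outcome = "resolved"
except RuntimeError as err:
    outcome = str(err)

# The greedy pick was (2, 1); the later <2.0 bound can only fail.
assert outcome == "conflict on B (2, 1)"
```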

What needs to happen is that we need to be able to backtrack the
selection state back to where we selected the version that didn't
work, and re-evaluate without that one. This is of course
theoretically NP-complete, because as we discover each version the
conflict might change each time as well... and the need to actually run
sdist egg_info to find out the requirements from each version adds even
more pain.

In principle though, a solid cache layer + a SAT tool should eat it up
for breakfast - but we need to rearrange a chunk of internal code to
be able to express it this way. OTOH I'm in the middle of doing that
because I need much of the same plumbing to be able to do
setup_requires in the way the setuptools and pip folk want. So... its
coming.

***but***

none of that matters, because the output from freeze - transitive
locked versions - will Just Work today.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Robert Collins
On 16 April 2015 at 07:58, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Sean Dague's message of 2015-04-15 06:50:11 -0700:
 == End game?

 *If* pip install took into account the requirements of everything
 already installed like apt or yum does, and resolve accordingly
 (including saying that's not possible unless you uninstall or upgrade
 X), we'd be able to pip install and get a working answer at the end. Maybe?

 Honestly, there are so many fixes on fixes here to our system, I'm not
 sure even this would fix it.


 This also carries a new problem: co-installability. If you get deep into
 Debian policy, you'll find that many of the policies that make the least
 sense are there to preserve co-installability. For instance, Node.js
 caused a stir a while back because they use '/usr/bin/node' for their
 interpreter. There was already an X.25 packet-radio-thing that used that
 binary and was called node, and so the node.js maintainers simply said
 Conflicts: node. This causes problem though, as now you can't have an
 X.25 packet-radio-thing that also uses node.js.

 Right now users can go ahead and violate some stated requirements after
 they've run tests and verified whatever reason the conflict is present
 doesn't affect them, by simply ordering their requirements. It's not
 awesome, but it _does_ work. Without that, pypi suddenly is sectioned
 off into islands the moment a popular library narrows its requirements.

There's already --no-deps to give folk an escape clause in the pip
world. The coinstallability problem is not created by the tool
observing it - it's just made explicit.

And honestly as a user, I prefer 'no you can't break the other thing
you installed' to 'oops, I did it again'.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday April 16th at 22:00 UTC

2015-04-15 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, April 16th at 22:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
07:30 ACST
00:00 CEST
17:00 CDT
15:00 PDT

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Robert Collins
On 16 April 2015 at 01:50, Sean Dague s...@dague.net wrote:
 On 04/12/2015 06:43 PM, Robert Collins wrote:

 Thoughts? If there's broad apathy-or-agreement I can turn this into a
 spec for fine coverage of ramifications and corner cases.

 I'm definitely happy someone else is diving in on here, just beware the
 dragons, there are many.

 I think some of the key problems are the following (lets call these the
 requirements requirements):

:) There's definitely enough meat here we're going to want a spec to
review in one place the conclusions and inputs.

 == We would like to be able to install multiple projects into a single
 devstack instance, and have all services work.

-- why do we want this? [for completeness - no objecting]. I think we
want this because its easier for folk mucking around to not have to
remember which venv etc; because we have projects like neutron and
nova that install bits into each others processes via common
libraries; because deployers have asked us to be sure that we can
co-install everything.

 This is hard because:

 1. these are multiple projects so pip can't resolve all requirements at
 once to get to a solved state (also, optional dependencies in particular
 configs mean these can be installed later)

I don't understand your first clause here. Pip certainly can resolve
all requirements at once: for instance, 'pip install path_to_nova
path_to_swift path_to_neutron' would resolve all the requirements for
all three at once. We're not doing that today, but its not a pip
limitation. Today https://github.com/pypa/pip/issues/2687 will rear
its head, but that may be quite shallow.

As far as optional deps go - we need to get those into extra
requirements, then pip can examine that for us. Enabling that is on my
stack that I'm rolling up at the moment.
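As a sketch of that direction (the extra names and dependency pins below are
hypothetical, not actual oslo.messaging metadata), optional driver backends
would be declared as setuptools extras, so that something like 'pip install
oslo.messaging[amqp1]' pulls in the optional dependencies and pip can reason
about them:

```python
# Hypothetical extras mapping, as it would be passed to
# setup(..., extras_require=extras_require) or declared in setup.cfg.
extras_require = {
    "amqp1": ["pyngus>=1.0"],  # optional AMQP 1.0 driver deps (invented pin)
    "qpid": ["qpid-python"],   # optional 0-10 driver deps
}

# The shape setuptools/pip expect: extra name -> list of requirement strings.
assert all(isinstance(reqs, list) and
           all(isinstance(r, str) for r in reqs)
           for reqs in extras_require.values())
```

The win is that the optional dependencies become declared metadata instead of
something installed out-of-band after the fact.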

 2. pip's solver ignores setup_requires -
 https://github.com/pypa/pip/issues/2612#issuecomment-91114298 which
 means we can get inconsistent results

Ish. The actual issue we ran into was
https://bitbucket.org/pypa/setuptools/issue/141/setup_requires-feature-does-not-handle
. We can tackle that directly and then require a newer setuptools to
solve this - it doesn't need any larger context.

 3. doing this iteratively in projects can cause the following to happen

 A requires B>=1.0,<2.0
 C requires B>=1.2

 pip install C can make the pip install A requirements invalid later.

https://github.com/pypa/pip/issues/2687 again. I suspect this is a ~10
line patch - read all the package metadata present on the system, and
union its deps in inside PackageSet.add_requirement.
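The quoted A/B/C scenario can be sketched directly (release numbers invented
to match the shape of the constraints): each separate 'pip install' run
greedily picks the newest release satisfying only the constraints in front of
it, so installing C after A silently upgrades B past A's upper bound.

```python
def newest(versions, predicate):
    """Pick the newest available release satisfying one install run's bound."""
    return max(v for v in versions if predicate(v))

available_b = [(1, 0), (1, 2), (2, 1)]  # releases of B on the index

# 'pip install A': A requires B>=1.0,<2.0 -> picks 1.2
b_for_a = newest(available_b, lambda v: (1, 0) <= v < (2, 0))

# Later, 'pip install C': C requires B>=1.2 -> picks 2.1, upgrading B
b_for_c = newest(available_b, lambda v: v >= (1, 2))

assert b_for_a == (1, 2)
assert b_for_c == (2, 1)
# A's constraint no longer holds for the B now installed:
assert not ((1, 0) <= b_for_c < (2, 0))
```

Unlike the in-run conflict case, nothing errors here; the breakage only shows
up later, at service start or as subtle runtime bugs.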

 This can end up in a failure of a service to start (if pkg_resources is
 actually checking things), or very subtle bugs later.

 Today global-requirements attempts to address this by continuously
 narrowing the requirements definitions for everything we have under our
 control so that pip is living in a rubber room and can only get an
 answer we know works.

 == However this has exposed an additional issue, libraries not
 released at release time

 Way more things are getting g-r syncs than top level projects.
 Synchronizing requirements for things that all release at the same time
 makes a lot of sense. However we're synchronizing requirements into
 libraries that release at different cadence. This has required all
 libraries to also have stable/ branches, for requirements matching.

 In an ideal world libraries would have very broad requirements, which
 would not have caps in them. non library projects would have narrower
 requirements that we know work.

I mostly agree. That is I think the heart of the issue I started this
thread about.
For libraries I trivially agree.
For non-library projects, I think we still need to be Known-Not-Bad,
vs Known-Good, but for CI our overrides can resolve that into
Known-Good - and we can publish this in some well known,
automation-friendly way.

Concretely, devstack should be doing one pip install run, and in
stable branches that needs to look something like:

$ pip install -r known-good-list $path_to_nova $path_to_neutron 

 == End game?

 *If* pip install took into account the requirements of everything
 already installed like apt or yum does, and resolve accordingly
 (including saying that's not possible unless you uninstall or upgrade
 X), we'd be able to pip install and get a working answer at the end. Maybe?

As Clint notes, this makes co-installability a constraint, but
pragmatically is already is. As I noted above
https://github.com/pypa/pip/issues/2687 is that issue, and is shallow.
It won't fix it for us though. It will still leave us open to
https://github.com/pypa/pip/issues/988 and
https://bitbucket.org/pypa/setuptools/issue/141/setup_requires-feature-does-not-handle
at a minimum.

 Honestly, there are so many fixes on fixes here to our system, I'm not
 sure even this would fix it.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud


Re: [openstack-dev] [nova] FW: Migration/Evacuation of instance on desired host

2015-04-15 Thread Fei Long Wang
To make it clearer: it depends on the release. Nova has supported
evacuating an instance without specifying a host since Juno. See
https://review.openstack.org/#/c/88749/

On 16/04/15 03:16, Chris Friesen wrote:
 On 04/15/2015 03:22 AM, Akshik dbk wrote:
 Hi,

 would like to know if schedule filters are considered while instance
 migration/evacuation.

 If you migrate or evacuate without specifying a destination then the
 scheduler filters will be considered.

 Chris


 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers  Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-04-15 Thread Tomasz Napierala
Sorry, I just mixed the names ;)

 On 15 Apr 2015, at 18:25, Igor Kalnitsky ikalnit...@mirantis.com wrote:
 
 Tomasz, multi-node mode is a legacy non-HA mode with only 1
 controller. Currently, our so-called HA mode support deployment with 1
 controller, so it makes no sense to support both modes.
 
 On Wed, Apr 15, 2015 at 6:38 PM, Tomasz Napierala
 tnapier...@mirantis.com wrote:
 Do you mean single node?
 
 On 15 Apr 2015, at 17:04, Dmitry Pyzhov dpyz...@mirantis.com wrote:
 
 FYI. We are going to disable Multi-node mode in the UI, even in experimental 
 mode, and we will remove the related code from nailgun in 7.0.
 https://bugs.launchpad.net/fuel/+bug/1428054
 
 On Fri, Jan 30, 2015 at 1:39 PM, Aleksandr Didenko adide...@mirantis.com 
 wrote:
 What do you guys think about switching CentOS CI job [1] to HA with single 
 controller (1 controller + 1 or 2 computes)? Just to verify that our 
 replacement of Simple mode works fine.
 
 [1] 
 https://fuel-jenkins.mirantis.com/job/master.fuel-library.centos.ha_nova_vlan/
 
 On Fri, Jan 30, 2015 at 10:54 AM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:
 Thanks Igor for the quick turn over, excellent!
 
 On Fri, Jan 30, 2015 at 1:19 AM, Igor Belikov ibeli...@mirantis.com wrote:
 Folks,
 
 Changes in CI jobs have been made, for master branch of fuel-library we are 
 running CentOS HA + Nova VLAN and Ubuntu HA + Neutron VLAN.
 Job naming schema has also been changed, so now it includes actual 
 testgroup. Current links for master branch CI jobs are [1] and [2], all 
 other jobs can be found here[3] or will show up in your gerrit reviews.
 ISO and environments have been updated to the latest ones.
 
 [1]https://fuel-jenkins.mirantis.com/job/master.fuel-library.centos.ha_nova_vlan/
 [2]https://fuel-jenkins.mirantis.com/job/master.fuel-library.ubuntu.ha_neutron_vlan/
 [3]https://fuel-jenkins.mirantis.com
 --
 Igor Belikov
 Fuel DevOps
 ibeli...@mirantis.com
 
 
 
 
 
 On 29 Jan 2015, at 13:42, Aleksandr Didenko adide...@mirantis.com wrote:
 
 Mike,
 
 Any objections / additional suggestions?
 
 no objections from me, and it's already covered by LP 1415116 bug [1]
 
 [1] https://bugs.launchpad.net/fuel/+bug/1415116
 
 On Wed, Jan 28, 2015 at 6:42 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:
 Folks,
 one of the things we should not forget about - is out Fuel CI gating 
 jobs/tests. [1], [2].
 
 One of them actually runs simple mode. Unfortunately, I don't see 
 details about the tests run for [1], [2], but I'm pretty sure it's the same set 
 as [3], [4].
 
 I suggest changing the tests. First of all, we need to get rid of simple runs 
 (since we are deprecating that mode), and second - I'd like us to run Ubuntu HA + 
 Neutron VLAN for one of the tests.
 
 Any objections / additional suggestions?
 
 [1] 
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_centos/
 [2] 
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_ubuntu/
 [3] 
 https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_centos/
 [4] 
 https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_ubuntu/
 
 On Wed, Jan 28, 2015 at 2:28 PM, Sergey Vasilenko 
 svasile...@mirantis.com wrote:
 +1 to replace simple to HA with one controller
 
 /sv
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 --
 Mike Scherbakov
 #mihgen
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 --
 Mike Scherbakov
 #mihgen
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] Global Cluster Template in Sahara

2015-04-15 Thread lu jander
We have already implemented the default templates feature for Sahara:

https://blueprints.launchpad.net/sahara/+spec/default-templates

2015-04-16 5:22 GMT+08:00 Liang, Yanchao yanli...@ebay.com:

  Dear Openstack Developers,

  My name is Yanchao Liang. I am a software engineer in eBay, working on
 Hadoop as a Service on top of Openstack cloud.

  Right now we are using Sahara, Juno version. We want to stay current and
 introduce global template into sahara.

  In order to simplify the cluster creation process, we would
 like to create some cluster templates available to all users. A user can
 just go to the Horizon web UI, select one of the pre-populated templates and
 create a Hadoop cluster in just a few clicks.

  Here is how I would implement this feature:

    - In the database, create a new boolean column in the “cluster_templates”
    table called “is_global”, indicating whether the
    template is available to all users or not.
    - When getting cluster templates from the database, add another
    function similar to “cluster_template_get” which also queries the database for
    global templates.
    - When creating a cluster, put the user’s tenant id in
    the “merged_values” config variable, instead of the tenant id from the cluster
    template.
    - Use an admin account to create and manage global cluster templates.
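As a rough illustration of the first two steps (a sketch only — the table layout, names, and query below are simplified stand-ins for Sahara's actual schema and DB API, not the real implementation):

```python
import sqlite3

# Sketch of the proposed "is_global" flag; schema and template names are
# illustrative, not Sahara's real cluster_templates table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cluster_templates (
                    id INTEGER PRIMARY KEY,
                    name TEXT,
                    tenant_id TEXT,
                    is_global INTEGER NOT NULL DEFAULT 0)""")
conn.executemany(
    "INSERT INTO cluster_templates (name, tenant_id, is_global) "
    "VALUES (?, ?, ?)",
    [("default-hadoop", "admin-tenant", 1),   # global: visible to everyone
     ("team-custom", "tenant-a", 0)])         # private to tenant-a

def cluster_template_get_all(tenant_id):
    # A tenant sees its own templates plus any global ones.
    rows = conn.execute(
        "SELECT name FROM cluster_templates "
        "WHERE tenant_id = ? OR is_global = 1", (tenant_id,))
    return [r[0] for r in rows]

print(cluster_template_get_all("tenant-b"))  # only the global template
```

The real change would of course go through a migration plus the Sahara DB API layer rather than raw SQL, but the visibility rule is the one-line WHERE clause above.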

 Since I don’t know the code base as well as you do, what do you think
 about the global template idea? How would you implement this new feature?

  We would like to contribute this feature back to the Openstack
 community. Any feedback would be greatly appreciated. Thank you.

  Best,
 Yanchao


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] where is the api to fetch mysql log?

2015-04-15 Thread Li Tianqing
Is there an official document introducing the new features in Trove kilo-rc1?

--

Best
Li Tianqing

At 2015-04-16 02:15:11, Peter Stachowski pe...@tesora.com wrote:

Hi Li,

Unfortunately, fetching the logs didn’t make it into kilo and is still an 
ongoing project.  It should make it into liberty, though.  ;)

Regards,

Peter Stachowski

From: Li Tianqing [mailto:jaze...@163.com]
Sent: April-15-15 10:30 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove] where is the api to fetch mysql log?

Hi, all,

I know kilo-rc1 is released. I found an introduction to the new features here:

http://www.slideshare.net/openstack/trove-juno-to-kilo

It says that we can fetch the mysql error log. Then I searched the source code 
on the master branch and could not find the API. Can someone help me?

--
Best
Li Tianqing

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Action Requested: Take 2 minutes to shape Docker training

2015-04-15 Thread Adrian Otto
OpenStack Devs,

My apologies, I sent this request to you by mistake. Please accept my sincere 
apology for the distraction.

Regards,

Adrian Otto

On Apr 15, 2015, at 6:21 PM, Adrian Otto adrian.o...@rackspace.com wrote:

...
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-15 Thread joehuang
In case it's helpful to see all the cases together, sync_routers (from the L3 
agent) was also mentioned in other part of this thread.  Plus of course the 
liveness reporting from all agents.

In the test report [1], which shows Neutron can support up to million-level 
ports and 100k-level physical hosts, the scalability is achieved by one cascading 
Neutron managing 100 cascaded Neutrons through the current Neutron RESTful API. 
In a normal Neutron deployment, each compute node hosts an L2 agent/OVS and an 
L3 agent/DVR. In the cascading Neutron layer, the L2 agent is modified to 
interact with the corresponding cascaded Neutron rather than OVS, and the L3 
agent (DVR) is modified to interact with the corresponding cascaded Neutron 
rather than the Linux routing stack. That's why we call the cascaded Neutrons 
the backend of Neutron.

Therefore, only 100 compute nodes (or rather, agents) are required in the 
cascading layer, each managing one cascaded Neutron. Each cascaded Neutron can 
manage up to 1000 nodes (there are already reports, deployments and lab tests 
supporting this). That's how it scales to 100k nodes.

Because the cloud is split into two layers (100 nodes in the cascading layer, 
1000 nodes in each cascaded layer), even the current mechanism can meet the 
demand for sync_routers and liveness reporting from all agents, as well as L2 
population, DVR router updates, etc.

The test report [1] at least proves that the layered architecture idea is 
feasible for Neutron scalability, even up to million-level ports and 100k-level 
nodes. An extra benefit of the layered architecture is that each cascaded 
Neutron can use a different backend technology; for example, one could be 
ML2+OVS and another OVN, ODL or Calico.

[1]test report for million ports scalability of Neutron 
http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Neil Jerram [mailto:neil.jer...@metaswitch.com] 
Sent: Wednesday, April 15, 2015 9:46 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?

Hi again Joe, (+ list)

On 11/04/15 02:00, joehuang wrote:
 Hi, Neil,

 See inline comments.

 Best Regards

 Chaoyi Huang

 
 From: Neil Jerram [neil.jer...@metaswitch.com]
 Sent: 09 April 2015 23:01
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?

 Hi Joe,

 Many thanks for your reply!

 On 09/04/15 03:34, joehuang wrote:
 Hi, Neil,

   From theoretic, Neutron is like a broadcast domain, for example, 
 enforcement of DVR and security group has to touch each regarding host where 
 there is VM of this project resides. Even using SDN controller, the touch 
 to regarding host is inevitable. If there are plenty of physical hosts, for 
 example, 10k, inside one Neutron, it's very hard to overcome the broadcast 
 storm issue under concurrent operation, that's the bottleneck for 
 scalability of Neutron.

 I think I understand that in general terms - but can you be more 
 specific about the broadcast storm?  Is there one particular message 
 exchange that involves broadcasting?  Is it only from the server to 
 agents, or are there 'broadcasts' in other directions as well?

 [[joehuang]] for example, L2 population, Security group rule update, DVR 
 route update. Both direction in different scenario.

Thanks.  In case it's helpful to see all the cases together, sync_routers (from 
the L3 agent) was also mentioned in other part of this thread.  Plus of course 
the liveness reporting from all agents.

 (I presume you are talking about control plane messages here, i.e.
 between Neutron components.  Is that right?  Obviously there can also 
 be broadcast storm problems in the data plane - but I don't think 
 that's what you are talking about here.)

 [[joehuang]] Yes, controll plane here.

Thanks for confirming that.

 We need layered architecture in Neutron to solve the broadcast 
 domain bottleneck of scalability. The test report from OpenStack 
 cascading shows that through layered architecture Neutron 
 cascading, Neutron can supports up to million level ports and 100k 
 level physical hosts. You can find the report here: 
 http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascad
 ing-solution-to-support-1-million-v-ms-in-100-data-centers

 Many thanks, I will take a look at this.

It was very interesting, thanks.  And by following through your links I also 
learned more about Nova cells, and about how some people question whether we 
need any kind of partitioning at all, and should instead solve 
scaling/performance problems in other ways...  It will be interesting to see 
how this plays out.

I'd still like to see more information, though, about how far people have 
scaled OpenStack - and in particular Neutron - as it exists today. 
  

Re: [openstack-dev] Help Needed!!! In response to China's first OpenStack Hackathon held this week.

2015-04-15 Thread Doug Wiegley
Hi,

Just a suggestion, but if in that etherpad you could put the bug 
subject/project, it’d help folks scan for relevant reviews. It’s a bit many 
links to click through.

Thanks,
doug


 On Apr 15, 2015, at 9:27 PM, Bhargava, Ruchi ruchi.bharg...@intel.com wrote:
 
 Hello, 
   
 Parts of the OpenStack community in China recently joined together to focus 
 on bug fixing for Nova and Neutron. This is the 1st event of its kind for 
 this group and they are very excited to make this contribution.  They 
 identified 43 bugs to work on and have submitted patches for 29 of them. 
   
 I’m asking for your support in prioritizing these patches for review, to 
 increase the stability and robustness of these key OpenStack services and to 
 provide appreciation and ongoing motivation to the OpenStack PRC development 
 community for this effort. The etherpad with the bug fix details is 
 https://etherpad.openstack.org/p/prc_kilo_nova_neutron_hackathon 
   
 For future hackathons, we would be interested in your suggestions on how we 
 can increase the impact in terms of timing in the cycle, methodology of 
 engagement of cores, and triaging of bugs. 
   
 Ruchi Bhargava 
 Intel Corporation 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] To evacuate it is not mandatory to have compute service down but mandatory to have host down/powered-off/dead

2015-04-15 Thread Ratnaker Katipally
Hi,

In nova/compute/api.py, we check the compute service status in the evacuate(...) method:

if self.servicegroup_api.service_is_up(service):
    msg = (_('Instance compute service state on %s '
             'expected to be down, but it was up.') % inst_host)
    LOG.error(msg)
    raise exception.ComputeServiceUnavailable(msg)

But it should not be mandatory for the compute service itself to be down: the 
service may still be reported as up even when the host is 
down/dead/powered-off.
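The relaxed condition being argued for could be sketched like this (the helper names are hypothetical stand-ins, not real nova APIs; only the combined condition matters):

```python
# Hypothetical sketch of the proposed check: permit evacuation when the
# host itself is known to be down/powered-off, even if the service
# heartbeat still reports "up". Both parameters are stand-ins for
# whatever servicegroup/power-state information the deployment has.
def can_evacuate(service_is_up, host_power_state):
    if host_power_state in ("down", "dead", "powered-off"):
        return True               # host is gone: evacuate regardless
    return not service_is_up      # otherwise require the service to be down

# A powered-off host is evacuable even while its service still looks up.
print(can_evacuate(service_is_up=True, host_power_state="powered-off"))
```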

Thanks and Regards,
Ratnaker R Katipally
Software Developer, Powervc
ISTL-Cloud Systems Software
ratnaker.katipa...@in.ibm.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] splitting out image building from devtest_overcloud.sh

2015-04-15 Thread Gregory Haynes
Excerpts from Dan Prince's message of 2015-04-15 02:14:12 +:
 I've been trying to cleanly model some Ceph and HA configurations in
 tripleo-ci that use Puppet (we are quite close to having these things in
 CI now!)
 
 Turns out the environment variables needed for these things are getting
 to be quite a mess. Furthermore we'd actually need to add to the
 environment variable madness to get it all working. And then there are
 optimization we'd like to add (like building a single image instead of
 one per role).
 
 One thing that would really help in this regard is splitting out image
 building from devtest_overcloud.sh. I took a stab at some initial
 patches to do this today.
 
 build-images: drive DIB via YAML config file
 https://review.openstack.org/#/c/173644/
 
 devtest_overcloud.sh: split out image building
 https://review.openstack.org/#/c/173645/
 
 If these sit well we could expand the effort to load images a bit more
 dynamically (a load-images script which could also be driven via a
 disk_images.yaml config file) and then I think devtest_overcloud.sh
 would be a lot more flexible for us Puppet users.
 
 Thoughts? I still have some questions myself but I wanted to get this
 out because we really do need some extra flexibility to be able to
 cleanly tune our scripts for more CI jobs.
 
 Dan
 

Have you looked at possibly using infra's nodepool for this? It is a bit
overkill, but nodepool currently lets you define a YAML file of images
for it to build using DIB. If we're not OK with bringing in all the
extras that nodepool has, maybe we could work on splitting out part of
nodepool for our needs and having both projects share it.

Cheers,
Greg
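For readers following along, the YAML-driven approach boils down to turning a declarative image list into disk-image-create invocations. A minimal sketch — the image names, elements and config structure here are invented for illustration; see the reviews above for the real proposal, which would load the list from a disk_images.yaml:

```python
# Sketch: build diskimage-builder command lines from a declarative image
# list. In practice the list would be parsed from YAML; the roles and
# elements below are illustrative, not the actual tripleo-ci config.
images = [
    {"name": "overcloud-control", "base": "fedora",
     "elements": ["boot-stack", "os-collect-config"]},
    {"name": "overcloud-compute", "base": "fedora",
     "elements": ["nova-compute", "os-collect-config"]},
]

def dib_command(image):
    # disk-image-create <base> <elements...> -o <output-name>
    return (["disk-image-create", image["base"]]
            + image["elements"]
            + ["-o", image["name"]])

for img in images:
    print(" ".join(dib_command(img)))
```

Moving this mapping out of environment variables and into one config file is exactly the mess-reduction the thread is after.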

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-15 Thread Sean M. Collins
On Wed, Apr 15, 2015 at 10:09:24PM EDT, Tom Fifield wrote:
 On 16/04/15 10:54, Fox, Kevin M wrote:
 Yes, but if stuff like DVR is the only viable replacement for
 nova-network in production, then learning the non-representative config
 of Neutron with linuxbridge might be misleading/counterproductive since
 OVS looks very, very different.
 
 Sure, though on the other hand, doesn't current discussion seem to indicate
 that OVS with DVR is not a viable replacement for nova-network multi-host HA
 (eg due to complexity), and this is why folks were working towards linux
 bridge?
 

Before we get sidetracked too far into the OVS vs. Linux Bridge bikeshed,
let's reflect on the fact that DVR is *not* enabled by default in
DevStack. Nor is nova-network's multi_host enabled by default.

So in all cases, someone looking at the innards of Nova-Network or
Neutron or looking to create a production-like DevStack 
will be consulting documentation for configuration details.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] rabbit/kombu settings deprecations

2015-04-15 Thread Matt Fischer
Recently a mass of changes was proposed and some merged to move the
rabbit/kombu settings to avoid a Kilo deprecation. As far as I can tell
this will break rabbit connectivity for all modules still using Juno. I did
an experiment with Heat and it certainly can't talk to rabbit anymore.
(Hopefully you guys can just tell me I'm wrong here and everything will
still work)

So why did we do this? We seem to have traded a slightly annoying
deprecation warning for breaking every single module. It does not seem like
a good trade-off to me. At a minimum, I would have liked to wait until we
had forked Kilo off and were working towards Liberty. Why? Because there
was simply no pressing reason to do this right now when most everyone is
still using Juno.

Since we as a community are pretty terrible at backports, this means that I
now need to switch to stable and sit on old and non-updated code until I
can upgrade to Kilo, which is likely a minimum of 45 days away for many
components for us.

This has implications for my team beyond breaking everything:

* It means that we need to stop importing puppet code changes with our
git-upstream jobs. This makes the process of moving back to master when we
finally can quite painful. I had to do it for Icehouse and I don't relish
doing it again.

* It means that any fixes we want will require a two step process to get
into backports. This delays things obviously.

* It means that the number of contributions you will get from us will
probably fall, not being on master makes it way more likely for us just to
hold patches.

* It means that we will have to write a shim layer in our module to deal
with these settings and whatever else gets broken like this.

So I'd like to discuss the philosophy of why this was done. I'd also again
like to put in my vote for master supporting current-1 for at least some
period of time. There's a reason that the upstream code that we configure
just did this with a deprecation rather than an "if you set it like this you
are broken". We should do the same.

For now I've -1'd all the outstanding reviews until we can have a
discussion about it. I know one was merged despite my -1, but I didn't
think a -2 was warranted.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Introducing the Cloud Service Federation project (cross-project design summit proposal)

2015-04-15 Thread Geoff Arnold
Yeah, we’ve taken account of:
https://github.com/openstack/keystone-specs/blob/master/specs/juno/keystone-to-keystone-federation.rst
http://blog.rodrigods.com/playing-with-keystone-to-keystone-federation/
http://docs.openstack.org/developer/keystone/configure_federation.html

One of the use cases we’re wrestling with requires fairly strong anonymization: 
if user A purchases IaaS services from reseller B, who sources those services 
from service provider C, nobody at C (OpenStack admin, root on any box) should 
be able to identify that A is consuming resources; all that they can see is 
that the requests are coming from B. It’s unclear if we should defer this 
requirement to a future version, or whether there’s something we need to (or 
can) do now.

The main focus of Cloud Service Federation is managing the life cycle of 
virtual regions and service chaining. It builds on the Keystone federated 
identity work over the last two cycles, but identity is only part of the 
problem. However, I recognize that we do have an issue with terminology. For a 
lot of people, “federation” is simply interpreted as “identity federation”. If 
there’s a better term than “cloud service federation”, I’d love to hear it. 
(The Cisco term “Intercloud” is accurate, but probably inappropriate!)

Geoff

 On Apr 15, 2015, at 7:07 PM, Adam Young ayo...@redhat.com wrote:
 
 On 04/15/2015 04:23 PM, Geoff Arnold wrote:
 That’s the basic idea.  Now, if you’re a reseller of cloud services, you 
 deploy Horizon+Aggregator/Keystone behind your public endpoint, with your 
 branding on Horizon. You then bind each of your Aggregator Regions to a 
 Virtual Region from one of your providers. As a reseller, you don’t actually 
 deploy the rest of OpenStack.
 
 As for tokens, there are at least two variations, each with pros and cons: 
 proxy and pass-through. Still working through implications of both.
 
 Geoff
 
 
 Read the Keysteon to Keystone (K2K) docs in the Keystone spec repo, as that 
 addresses some of the issues here.
 
 
 On Apr 15, 2015, at 12:49 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
 So, an Aggregator would basically be a stripped down keystone that 
 basically provided a dynamic service catalog that points to the registered 
 other regions?  You could then point a horizon, cli, or rest api at the 
 aggregator service?
 
 I guess if it was an identity provider too, it can potentially talk to the 
 remote keystone and generate project scoped tokens, though you'd need 
 project+region scoped tokens, which I'm not sure exist today?
 
 Thanks,
 Kevin
 
 
 From: Geoff Arnold [ge...@geoffarnold.com]
 Sent: Wednesday, April 15, 2015 12:05 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [all] Introducing the Cloud Service Federation 
 project (cross-project design summit proposal)
 
 tl;dr We want to implement a new system which we’re calling an Aggregator 
 which is based on Horizon and Keystone, and that can provide access to 
 virtual Regions from multiple independent OpenStack providers. We plan on 
 developing this system as a project in Stackforge, but we need help right 
 now in identifying any unexpected dependencies.
 
 
 
 For the last 6-7 years, there has been great interest in the potential for 
 various business models involving multiple clouds and/or cloud providers. 
 These business models include but are not limited to, federation, reseller, 
 broker, cloud-bursting, hybrid and intercloud. The core concept of this 
 initiative is to go beyond the simple dyadic relationship between a cloud 
 service provider and a cloud service consumer to a more sophisticated 
 “supply chain” of cloud services, dynamically configured, and operated by 
 different business entities. This is an ambitious goal, but there is a 
 general sense that OpenStack is becoming stable and mature enough to 
 support such an undertaking.
 
 Until now, OpenStack has focused on the logical abstraction of a Region as 
 the basis for cloud service consumption. A user interacts with Horizon and 
 Keystone instances for a Region, and through them gains access to the 
 services and resources within the specified Region. A recent extension of 
 this model has been to share Horizon and Keystone instances between several 
 Regions. This simplifies the user experience and enables single sign on to 
 a “single pane of glass”. However, in this configuration all of the 
 services, shared or otherwise, are still administered by a single entity, 
 and the configuration of the whole system is essentially static and 
 centralized.
 
 We’re proposing that the first step in realizing the Cloud Service 
 Federation use cases is to enable 

Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-15 Thread Tom Fifield



On 14/04/15 23:36, Dean Troyer wrote:

On Tue, Apr 14, 2015 at 7:02 AM, Miguel Angel Ajo Pelayo
mangel...@redhat.com mailto:mangel...@redhat.com wrote:

Why would operators install from devstack? that’s not going to be
the case.


If they do they need more help than we can give...


So, ummm, there is actually a valid use case for ops on devstack: it's 
part of the learning process.


In my rounds, I've had feedback from more than a few whose first 
OpenStack experience was to run up a devstack, so they could run ps 
aufx, brctl, iptables, etc just to see what was going on under the 
covers before moving on to something more 'proper'.



While acknowledging that the primary purpose and audience of devstack is 
and should remain development and developers, if there is something we 
can do to improve the experience for those ops first-timers that doesn't 
have a huge impact on devs, it would be kinda nice.


After all, one of the reasons this thread exists is because of problems 
with that ramp up process, no?





I believe we should have both LB  OVS well tested, if LB is a good
option for
some operators willing to migrate from nova, that’s great, let’s
make sure LB
is perfectly tested upstream.


+1

But by doing such change we can’t let OVS code rot and we would be
neglecting
a big customer base which is now making use of the openvswitch
mechanism.
(maybe I’m misunderstanding the scope of the change).


Changing DevStack's default doesn't remove anything OVS-related, nor
does it by itself remove any OVS jobs.  It gives everyone who is happy
with nova-net (or more correctly doesn't think about it) a new default
that changes their experience the least for when nova-net disappears.

dt

--

Dean Troyer
dtro...@gmail.com mailto:dtro...@gmail.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-15 Thread Dean Troyer
On Wed, Apr 15, 2015 at 7:58 PM, Tom Fifield t...@openstack.org wrote:

 If they do they need more help than we can give...


 So, ummm, there is actually a valid use case for ops on devstack: it's
 part of the learning process.


Yes, this is very true.  The context in my mind included the
not-present-above phrase "in production".

In my rounds, I've had feedback from more than a few whose first OpenStack
 experience was to run up a devstack, so they could run ps aufx, brctl,
 iptables, etc just to see what was going on under the covers before moving
 on to something more 'proper'.


I hear this too, and do not want to discourage those use cases.  However,
there is little real guidance on what parts of DevStack are representative
of a production deployment and what parts should not be replicated.
DevStack's use of sudo, for example, is one of those places that would be a
huge security issue in any real deployment.

While acknowledging that the primary purpose and audience of devstack is
 and should remain development and developers, if there is something we can
 do to improve the experience for those ops first-timers that doesn't have a
 huge impact on devs, it would be kinda nice.


So I would be interested in hearing ideas on this topic.  One of the
reasons DevStack is (still) written in shell script is that we always felt
it was easier to demonstrate how the pieces fit together that way.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-15 Thread Fox, Kevin M
Yes, but if stuff like DVR is the only viable replacement for nova-network 
in production, then learning the non-representative config of Neutron with 
linuxbridge might be misleading/counterproductive since OVS looks very, very 
different.

Kevin


From: Tom Fifield
Sent: Wednesday, April 15, 2015 5:58:43 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in 
DevStack [was: Status of the nova-network to Neutron migration work]



On 14/04/15 23:36, Dean Troyer wrote:
 On Tue, Apr 14, 2015 at 7:02 AM, Miguel Angel Ajo Pelayo
 mangel...@redhat.com mailto:mangel...@redhat.com wrote:

 Why would operators install from devstack? that’s not going to be
 the case.


 If they do they need more help than we can give...

So, ummm, there is actually a valid use case for ops on devstack: it's
part of the learning process.

In my rounds, I've had feedback from more than a few whose first
OpenStack experience was to run up a devstack, so they could run ps
aufx, brctl, iptables, etc just to see what was going on under the
covers before moving on to something more 'proper'.


While acknowledging that the primary purpose and audience of devstack is
and should remain development and developers, if there is something we
can do to improve the experience for those ops first-timers that doesn't
have a huge impact on devs, it would be kinda nice.

After all, one of the reasons this thread exists is because of problems
with that ramp up process, no?



 I believe we should have both LB & OVS well tested, if LB is a good
 option for
 some operators willing to migrate from nova, that’s great, let’s
 make sure LB
 is perfectly tested upstream.


 +1

 But by doing such change we can’t let OVS code rot and we would be
 neglecting
 a big customer base which is now making use of the openvswitch
 mechanism.
 (maybe I’m misunderstanding the scope of the change).


 Changing DevStack's default doesn't remove anything OVS-related, nor
 does it by itself remove any OVS jobs.  It gives everyone who is happy
 with nova-net (or more correctly doesn't think about it) a new default
 that changes their experience the least for when nova-net disappears.

 dt

 --

 Dean Troyer
 dtro...@gmail.com mailto:dtro...@gmail.com






[openstack-dev] [ceilometer]Handling cumulative counter resets

2015-04-15 Thread Prashanth Hari
Hi,

Can someone please provide some info on how we are handling cumulative
counter resets for various scenarios in ceilometer?

Seeing some old posts and bugs but couldn't find any references to fixes -
https://bugs.launchpad.net/ceilometer/+bug/1061817
http://openstack.10931.n7.nabble.com/ceilometer-The-reset-on-the-cumulative-counter-td3275.html

Is this still an open issue?
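For what it's worth, consumers of cumulative meters usually handle this on the read side, by treating a backwards jump in the counter as a reset. A minimal sketch of that approach (illustrative only — this is not ceilometer code, and the function name is invented):

```python
# Hypothetical sketch: converting a cumulative meter (e.g. cpu_time) into
# per-interval deltas while guarding against counter resets such as an
# instance reboot zeroing the counter.

def deltas(samples):
    """Yield per-interval deltas from cumulative samples in time order.

    A drop in the cumulative value is treated as a reset: the new
    reading itself is taken as the delta since the reset.
    """
    prev = None
    for value in samples:
        if prev is None:
            pass  # first sample: no interval to report yet
        elif value >= prev:
            yield value - prev
        else:
            # counter went backwards -> assume a reset occurred
            yield value
        prev = value

# reset between 40 and 5: deltas are 15, 15, then 5 and 7
print(list(deltas([10, 25, 40, 5, 12])))  # [15, 15, 5, 7]
```

The trade-off is that any usage accumulated between the last sample and the reset is lost; some systems instead discard the first post-reset interval entirely.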

Thanks,
Prashanth


[openstack-dev] Help Needed!!! In response to China's first OpenStack Hackathon held this week.

2015-04-15 Thread Bhargava, Ruchi
Hello,

Parts of the OpenStack community in China recently joined together to focus on 
bug fixing for Nova and Neutron. This is the 1st event of its kind for this 
group and they are very excited to make this contribution.  They identified 43 
bugs to work on and have submitted patches for 29 of them.

I'm asking for your support to prioritize these patches for your review to 
support increasing the stability and robustness of these key OpenStack services 
and provide appreciation and on-going motivation to the OpenStack PRC 
development community for this effort. The etherpad with the bug fix details is 
https://etherpad.openstack.org/p/prc_kilo_nova_neutron_hackathon

For future hackathons, I would be interested in your suggestions on how we can 
increase the impact in terms of timing in the cycle, methodology of engaging 
cores, and triaging of bugs.

Ruchi Bhargava
Intel Corporation



Re: [openstack-dev] FW: Migration/Evacuation of instance on desired host

2015-04-15 Thread Akshik dbk
Hi Chris,

Thanks for responding. I was under the understanding that the destination
host is mandatory; I'm using the Icehouse version:

$ nova help evacuate
usage: nova evacuate [--password password] [--on-shared-storage] server host

Evacuate server from failed host to specified one.

Positional arguments:
  server                Name or ID of server.
  host                  Name or ID of target host.

Optional arguments:
  --password password   Set the provided password on the evacuated server.
                        Not applicable with on-shared-storage flag
  --on-shared-storage   Specifies whether server files are located on shared
                        storage

$ nova evacuate 1fd48e33-137c-45db-9eb5-fbc44b371b87
usage: nova evacuate [--password password] [--on-shared-storage] server host
error: too few arguments
Try 'nova help evacuate' for more information.

Please advise.

Regards,
Akshik
 Date: Wed, 15 Apr 2015 09:16:24 -0600
 From: chris.frie...@windriver.com
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] FW: Migration/Evacuation of instance on desired 
 host
 
 On 04/15/2015 03:22 AM, Akshik dbk wrote:
  Hi,
 
  would like to know if schedule filters are considered while instance
  migration/evacuation.
 
 If you migrate or evacuate without specifying a destination then the 
 scheduler 
 filters will be considered.
 
 Chris
 
 


[openstack-dev] Action Requested: Take 2 minutes to shape Docker training

2015-04-15 Thread Adrian Otto
Rackers,

We are in the process of putting together comprehensive RackU training for 
Rackers to learn about Docker at a few different levels. We want to be ahead of 
the curve when our customers are ready to benefit from application containers.

1) Foundational training will give any Racker a view of the technology, and how 
it applies to our customers.
2) Operational training is for technical Rackers who will use Docker in their 
future daily work.
3) Advanced training will be for those wishing to become experts in the subject 
and master the subject matter. 

We will offer options for each, based on your input. Please take a moment to 
shape what we put into the program:

Web Survey: https://rax.io/docker-training-survey

Thanks,

Adrian Otto

PS: Please forward this to your teams, especially Sales and Support Rackers as 
well!


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Robert Collins
On 16 April 2015 at 11:59, Sean Dague s...@dague.net wrote:

 I think the completeness statement here is as follows:

 1. For OpenStack to scale to the small end, we need to be able to
 overlap services, otherwise you are telling people they basically have
 to start with a full rack of hardware to get 1 worker.

I don't see how that follows: you can run N venvs on one machine. It's
basically a lighter-weight container than containers, just without all
the process and resource isolation.
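Robert's point can be sketched with the Python 3 stdlib venv module (on the Python 2 of the day, the virtualenv package fills the same role); the service names here are just placeholders:

```python
# Sketch: N isolated environments on one machine, one per service, so
# each service can pin its own library versions independently.
import os
import tempfile
import venv

root = tempfile.mkdtemp()
builder = venv.EnvBuilder(with_pip=False)  # skip pip bootstrap for brevity

for service in ("nova", "neutron", "cinder"):
    builder.create(os.path.join(root, service))

# Each environment is self-contained: its own interpreter and pyvenv.cfg,
# hence its own site-packages and its own installed dependency versions.
for service in ("nova", "neutron", "cinder"):
    print(service, os.path.exists(os.path.join(root, service, "pyvenv.cfg")))
```

In practice you would then `pip install` each service into its own environment, trading single-location security updates for per-service isolation.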

 2. The Linux Distributors (who are a big part of our community) install
 everything at a system level. Python has no facility for having multiple
 versions of libraries installed at the system level, largely because
 virtualenvs were created (which solve the non system application level
 problem really well).

Actually, the statement about Python having no facility isn't true -
there are eggs and a mechanism to get a specific version of a
dependency into a process. It's not all that widely used, largely
because of the actual thing that's missing: Python *doesn't* have the
ability to load multiple versions of a package into one process. So
once you ask for testtools==1.0.0, that process has only
testtools-1.0.0 in the singleton sys.modules['testtools'], and the
import machinery is defined as having the side effect of changing
global state, so this is sufficiently tricky to tackle that no one
has, even with importlib etc. being around now. NB: 'vendoring' does
something in this space by shifting an import to a different location,
but it's fragile.
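The singleton behaviour Robert describes is easy to demonstrate with any module: the import machinery caches exactly one module object per name in sys.modules, which is why two versions of testtools cannot coexist in one process (json stands in for testtools in this sketch):

```python
# Demonstrating the one-module-object-per-process constraint: a second
# import of the same name returns the cached object from sys.modules
# rather than loading another copy (or another version).
import json
import sys

first = json
second = __import__("json")  # a repeat import hits the cache

print(first is second)               # True: same object both times
print(sys.modules["json"] is first)  # True: the global singleton
```

Replacing `sys.modules["json"]` is possible, but as Robert notes the import side effects are global state, so doing it safely for two live versions of one package is the part no one has solved.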

 3. The alternative of putting a venv inside of every service violates
 the principle of single location security update.

So does copying code around, but we've officially adopted that as our
approach-until-things-are-mature... and in fact we're talking cluster
software, so there is (except the small scale) absolutely no
expectation of single-location security updates: we know and expect to
have to update N (== number of machines) locations, and making that M*N
is a small matter of automation, be that docker/lxc/venvs.

I think the arguments about devstack and ease of hacking, + our
deployer community specifically requesting it are sufficient. Whether
they need to request it or not, they have :).

 Note: there is an aside about why we might *not* want to do this

 That being said, if you deploy at a system level your upgrade unit is
 now 1 full box, instead of 1 service, because you can't do library
 isolation between old and new. A worker might have neutron, cinder, and
 nova agents. Only the Nova agents support rolling upgrade (cinder is
 working hard on this, I don't think neutron has visited this yet). So
 real rolling upgrade is sacrificed on this altar of installing everything
 at a system level.

Yup. And most deployment frameworks want to scale by service, not by
box, which makes genuine containers super interesting.

 Concretely, devstack should be doing one pip install run, and in
 stable branches that needs to look something like:

 $ pip install -r known-good-list $path_to_nova $path_to_neutron 

 Also remember we need to -e install a bunch of directories, not sure if
 that makes things easier or harder.

There's a particular bit of ugly in pip where directories have to be
resolved to packages, but if we use egg fragment names we can tell pip
the name and avoid that. The thing that that lookup does is cause
somewhat later binding of some requirements calls - I'm not sure it
would be a problem, but if it is its fairly straight forward to
address.

 So, one of the problems with that is often what we need is buried pretty
 deep like -
 https://github.com/openstack-dev/devstack/blob/master/lib/nova_plugins/functions-libvirt#L39

Both apt and rpm will perform much faster given a single invocation
than 10 or 20 little ones. There's a chunk of redundant work in dpkg
itself for instance that can be avoided by a single call. So we might
want to do that for that reason alone. pip doesn't currently do much
global big-O work, so it shouldn't be affected, but once we do start
considering already-installed-requirements, then it will start to have
the same issue.

 If devstack needs to uplift everything into an install the world phase,
 I think we're basically on the verge of creating OpenStack Package
 Manager so that we can specify this all declaratively (after processing
 a bunch of conditional feature logic). Which, is a thing someone could
 do (not it), but if we really get there we're probably just better off
 thinking about reviving building debs so that we can have the package
 manager globally keep track of all the requirements across multiple
 invocations.

mmm, I don't see that - to me a package manager has a lot more to do
with dependencies and distribution- I'd expect an opm to know about
things like 'nova requires the optional feature X to work with dvr, or
Y to work with ovn'. And where to get the tarballs or git repos from
given just 'openstack/nova'. Refactoring devstack into a bunch 

Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-15 Thread Tom Fifield

On 16/04/15 10:54, Fox, Kevin M wrote:

Yes, but if stuff like DVR is the only viable replacement for
nova-network in production, then learning the non-representative config
of Neutron with Linux bridge might be misleading/counterproductive, since
OVS looks very, very different.


Sure, though on the other hand, doesn't current discussion seem to 
indicate that OVS with DVR is not a viable replacement for nova-network 
multi-host HA (eg due to complexity), and this is why folks were working 
towards linux bridge?


In essence: if linux bridge was a viable nova-network multi-host HA 
replacement, you'd be OK with this change?




Kevin

*From:* Tom Fifield
*Sent:* Wednesday, April 15, 2015 5:58:43 PM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the
default in DevStack [was: Status of the nova-network to Neutron
migration work]



On 14/04/15 23:36, Dean Troyer wrote:

On Tue, Apr 14, 2015 at 7:02 AM, Miguel Angel Ajo Pelayo
mangel...@redhat.com mailto:mangel...@redhat.com wrote:

Why would operators install from devstack? that’s not going to be
the case.


If they do they need more help than we can give...


So, ummm, there is actually a valid use case for ops on devstack: it's
part of the learning process.

In my rounds, I've had feedback from more than a few whose first
OpenStack experience was to run up a devstack, so they could run ps
aufx, brctl, iptables, etc just to see what was going on under the
covers before moving on to something more 'proper'.


While acknowledging that the primary purpose and audience of devstack is
and should remain development and developers, if there is something we
can do to improve the experience for those ops first-timers that doesn't
have a huge impact on devs, it would be kinda nice.

After all, one of the reasons this thread exists is because of problems
with that ramp up process, no?




I believe we should have both LB & OVS well tested, if LB is a good
option for
some operators willing to migrate from nova, that’s great, let’s
make sure LB
is perfectly tested upstream.


+1

But by doing such change we can’t let OVS code rot and we would be
neglecting
a big customer base which is now making use of the openvswitch
mechanism.
(maybe I’m misunderstanding the scope of the change).


Changing DevStack's default doesn't remove anything OVS-related, nor
does it by itself remove any OVS jobs.  It gives everyone who is happy
with nova-net (or more correctly doesn't think about it) a new default
that changes their experience the least for when nova-net disappears.

dt

--

Dean Troyer
dtro...@gmail.com mailto:dtro...@gmail.com




[openstack-dev] [Neutron][LBaaS] Meeting timings

2015-04-15 Thread Ganesh Narayanan (ganeshna)
Hi,

I see two different timings for this meeting. Does one of them need to be
updated, or are these two different meetings?

https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting

LBaaS meeting

  *   Weekly on Tuesdays at 1600 UTC
  *   IRC channel: #openstack-meeting-4
  *   Chair (to contact for more information) mestery (Kyle Mestery)
  *   see Network/LBaaS (https://wiki.openstack.org/wiki/Network/LBaaS) for agenda

https://wiki.openstack.org/wiki/Neutron/LBaaS

Communication Channels

  *   IRC: #openstack-lbaas
  *   IRC Weekly Meeting: #openstack-meeting every Thursday @ 14:00 UTC
  *   Mailing List: openstack-dev [at] lists [dot] openstack [dot] org. Please 
prefix subject with '[openstack-dev][Neutron][LBaaS]’

Thanks,
Ganesh


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Joshua Harlow

Ken Giusti wrote:

On Wed, Apr 15, 2015 at 1:33 PM, Doug Hellmannd...@doughellmann.com  wrote:

Excerpts from Ken Giusti's message of 2015-04-15 09:31:18 -0400:

On Tue, Apr 14, 2015 at 6:23 PM, Joshua Harlowharlo...@outlook.com  wrote:

Ken Giusti wrote:

Just to be clear: you're asking specifically about the 0-10 based
impl_qpid.py driver, correct?   This is the driver that is used for
the qpid:// transport (aka rpc_backend).

I ask because I'm maintaining the AMQP 1.0 driver (transport
amqp://) that can also be used with qpidd.

However, the AMQP 1.0 driver isn't yet Python 3 compatible due to its
dependency on Proton, which has yet to be ported to python 3 - though
that's currently being worked on [1].

I'm planning on porting the AMQP 1.0 driver once the dependent
libraries are available.

[1]: https://issues.apache.org/jira/browse/PROTON-490


What's the expected date on this as it appears this also blocks python 3
work as well... Seems like that hasn't been updated since nov 2014 which
doesn't inspire that much confidence (especially for what appears to be
mostly small patches).


Good point.  I reached out to the bug owner.  He got it 'mostly
working' but got hung up on porting the proton unit tests.   I've
offered to help this along and he's good with that.  I'll make this a
priority to move this along.

In terms of availability - proton tends to do releases about every 4-6
months.  They just released 0.9, so the earliest availability would be
in that 4-6 month window (assuming that should be enough time to
complete the work).   Then there's the time it will take for the
various distros to pick it up...

so, definitely not 'real soon now'. :(

This seems like a case where if we can get the libs we need to a point
where they install via pip, we can let the distros catch up instead of
waiting for them.



Sadly just the python wrappers are available via pip.  Its C extension
requires that the native proton shared library (libqpid-proton) is
available.   To date we've relied on the distro to provide that
library.


How does that (c extension) work with eventlet? Does it?




Similarly, if we have *an* approach for Python 3 on oslo.messaging, that
means the library isn't blocking us from testing applications with
Python 3. If some of the drivers lag, their test jobs may need to be
removed or disabled if the apps start testing under Python 3.

Doug



Re: [openstack-dev] [TripleO] splitting out image building from devtest_overcloud.sh

2015-04-15 Thread Gregory Haynes
Excerpts from Gregory Haynes's message of 2015-04-16 02:50:17 +:
 Excerpts from Dan Prince's message of 2015-04-15 02:14:12 +:
  I've been trying to cleanly model some Ceph and HA configurations in
  tripleo-ci that use Puppet (we are quite close to having these things in
  CI now!)
  
  Turns out the environment variables needed for these things are getting
  to be quite a mess. Furthermore we'd actually need to add to the
  environment variable madness to get it all working. And then there are
  optimization we'd like to add (like building a single image instead of
  one per role).
  
  One thing that would really help in this regard is splitting out image
  building from devtest_overcloud.sh. I took a stab at some initial
  patches to do this today.
  
  build-images: drive DIB via YAML config file
  https://review.openstack.org/#/c/173644/
  
  devtest_overcloud.sh: split out image building
  https://review.openstack.org/#/c/173645/
  
  If these sit well we could expand the effort to load images a bit more
  dynamically (a load-images script which could also be driven via a
  disk_images.yaml config file) and then I think devtest_overcloud.sh
  would be a lot more flexible for us Puppet users.
  
  Thoughts? I still have some questions myself but I wanted to get this
  out because we really do need some extra flexibility to be able to
  cleanly tune our scripts for more CI jobs.
  
  Dan
  
 
 Have you looked at possibly using infra's nodepool for this? It is a bit
 overkill, but currently nodepool lets you define a YAML file of images
 for it to build using dib. If we're not OK with bringing in all the
 extras that nodepool has, maybe we could work on splitting out part of
 nodepool for our needs, and having both projects use it.
 
 Cheers,
 Greg

Did some digging and looks like infra has some planned work for this
already[1]. This would be great for TripleO as well for the same reasons
that infra wants it.

I do get that you have a need for this today though and what i'm
describing is a ways out, so I am +1 on your current approach for now.

Cheers,
Greg

1: 
http://specs.openstack.org/openstack-infra/infra-specs/specs/nodepool-workers.html



[openstack-dev] FFE for requirements

2015-04-15 Thread John Dickinson
We need to update the version of PyECLib for Kilo[1]. This library is a 
requirement for erasure coding in Swift.

In addition to fixing several bugs, 1.0.7 eliminates the need for some 
work-around code in Swift. This code was only to hide
issues in the current version of PyECLib, but it also ends up breaking some 
third-party integration. In order to enable expected
functionality and to avoid dealing with deprecation issues right from the 
beginning, we need the version bumped to 1.0.7.


[1] https://review.openstack.org/#/c/174167/ to master and 
https://review.openstack.org/#/c/174171/ to stable/kilo



--John







Re: [openstack-dev] [all] Introducing the Cloud Service Federation project (cross-project design summit proposal)

2015-04-15 Thread Adam Young

On 04/15/2015 04:23 PM, Geoff Arnold wrote:

That’s the basic idea.  Now, if you’re a reseller of cloud services, you deploy 
Horizon+Aggregator/Keystone behind your public endpoint, with your branding on 
Horizon. You then bind each of your Aggregator Regions to a Virtual Region from 
one of your providers. As a reseller, you don’t actually deploy the rest of 
OpenStack.

As for tokens, there are at least two variations, each with pros and cons: 
proxy and pass-through. Still working through implications of both.

Geoff



Read the Keystone to Keystone (K2K) docs in the Keystone spec repo, as 
that addresses some of the issues here.





On Apr 15, 2015, at 12:49 PM, Fox, Kevin M kevin@pnnl.gov wrote:

So, an Aggregator would basically be a stripped-down Keystone that provides a 
dynamic service catalog pointing to the registered remote Regions? You could 
then point Horizon, a CLI, or a REST API client at the aggregator service?

I guess if it was an identity provider too, it could potentially talk to the 
remote Keystone and generate project-scoped tokens, though you'd need 
project+region scoped tokens, which I'm not sure exist today?
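To make that reading concrete, here is a toy sketch of the catalog side of such an Aggregator: a dynamic binding from virtual Regions to provider Regions, flattened into a catalog a client could consume. Everything here — the schema, the names, the URLs — is invented for illustration; none of it is an existing Keystone API.

```python
# Toy model of the proposed "Aggregator": virtual Regions V1/V2 are bound
# to Regions owned by an independent provider P, and the aggregator serves
# a catalog under its own region names. All identifiers are hypothetical.

PROVIDERS = {
    "P": {  # provider P's own catalog, keyed by its region names
        "R4": {"compute": "https://p.example.com/r4/nova"},
        "R7": {"compute": "https://p.example.com/r7/nova"},
    },
}

# the aggregator's dynamic binding: virtual region -> (provider, region)
BINDINGS = {"V1": ("P", "R4"), "V2": ("P", "R7")}

def aggregated_catalog():
    """Build the catalog the aggregator's Keystone would hand out."""
    catalog = {}
    for virtual, (provider, region) in BINDINGS.items():
        catalog[virtual] = PROVIDERS[provider][region]
    return catalog

print(sorted(aggregated_catalog()))  # ['V1', 'V2']
```

A real implementation would of course still have to solve the hard part raised above — scoping tokens to both a project and a provider Region.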

Thanks,
Kevin


From: Geoff Arnold [ge...@geoffarnold.com]
Sent: Wednesday, April 15, 2015 12:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [all] Introducing the Cloud Service Federation project 
(cross-project design summit proposal)

tl;dr We want to implement a new system which we’re calling an Aggregator which 
is based on Horizon and Keystone, and that can provide access to virtual 
Regions from multiple independent OpenStack providers. We plan on developing 
this system as a project in Stackforge, but we need help right now in 
identifying any unexpected dependencies.



For the last 6-7 years, there has been great interest in the potential for 
various business models involving multiple clouds and/or cloud providers. These 
business models include but are not limited to, federation, reseller, broker, 
cloud-bursting, hybrid and intercloud. The core concept of this initiative is 
to go beyond the simple dyadic relationship between a cloud service provider 
and a cloud service consumer to a more sophisticated “supply chain” of cloud 
services, dynamically configured, and operated by different business entities. 
This is an ambitious goal, but there is a general sense that OpenStack is 
becoming stable and mature enough to support such an undertaking.

Until now, OpenStack has focused on the logical abstraction of a Region as the 
basis for cloud service consumption. A user interacts with Horizon and Keystone 
instances for a Region, and through them gains access to the services and 
resources within the specified Region. A recent extension of this model has 
been to share Horizon and Keystone instances between several Regions. This 
simplifies the user experience and enables single sign on to a “single pane of 
glass”. However, in this configuration all of the services, shared or 
otherwise, are still administered by a single entity, and the configuration of 
the whole system is essentially static and centralized.

We’re proposing that the first step in realizing the Cloud Service Federation 
use cases is to enable the administrative separation of interface and region: 
to create a new system which provides the same user interface as today - 
Horizon, Keystone - but which is administratively separate from the Region(s) 
which provide the actual IaaS resources. We don’t yet have a good name for this 
system; we’ve been referring to it as the “Aggregator”. It includes 
slightly-modified Horizon and Keystone services, together with a subsystem 
which configures these services to implement the mapping of “Aggregator 
Regions” to multiple, administratively independent, “Provider Regions”. Just as 
the User-Provider relationship in OpenStack is “on demand”, we want the 
Aggregator-Provider mappings to be dynamic, established by APIs, rather than 
statically configured. We want to achieve this without substantially changing 
the user experience, and with no changes to applications or to core OpenStack 
services. The Aggregator represents an additional way of accessing a cloud; it 
does not replace the existing Horizon and Keystone.

The functionality and workflow is as follows: A user, X, logs into the Horizon 
interface provided by Aggregator A. The user sees two Regions, V1 and V2, and 
selects V1. This Region is actually provided by OpenStack service provider P; 
it’s the Region which P knows as R4.  X now creates a new tenant project, T. 
Leveraging the Hierarchical Multitenancy work in Kilo, T is actually 
instantiated as a subproject of a Domain in R4, thus providing namespace 
isolation and quota management. Now X can deploy and operate her project T as 
she is used to, using Horizon, CLI, or other client-side tools. UI and API 
requests are forwarded by the Aggregator 

Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-15 Thread Hirofumi Ichihara
 Sure, though on the other hand, doesn't current discussion seem to indicate 
 that OVS with DVR is not a viable replacement for nova-network multi-host HA 
 (eg due to complexity), and this is why folks were working towards linux 
 bridge?
Some OpenStack operators don’t believe OVS performance is higher than Linux
bridge’s, so they don’t want to use OVS. Old OVS certainly had many
performance problems; most of them may be solved by now, but those operators
aren’t sure about it. If that is a point of the discussion, we should show
them the evidence.

In any case, we need to know why they prefer Linux bridge rather than OVS.

Hirofumi

On 2015/04/16, at 11:09, Tom Fifield t...@openstack.org wrote:

 On 16/04/15 10:54, Fox, Kevin M wrote:
 Yes, but if stuff like DVR is the only viable replacement for
 nova-network in production, then learning the non-representative config
 of Neutron with Linux bridge might be misleading/counterproductive, since
 OVS looks very, very different.
 
 Sure, though on the other hand, doesn't current discussion seem to indicate 
 that OVS with DVR is not a viable replacement for nova-network multi-host HA 
 (eg due to complexity), and this is why folks were working towards linux 
 bridge?
 
 In essence: if linux bridge was a viable nova-network multi-host HA 
 replacement, you'd be OK with this change?
 
 
 Kevin
 
 *From:* Tom Fifield
 *Sent:* Wednesday, April 15, 2015 5:58:43 PM
 *To:* openstack-dev@lists.openstack.org
 *Subject:* Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the
 default in DevStack [was: Status of the nova-network to Neutron
 migration work]
 
 
 
 On 14/04/15 23:36, Dean Troyer wrote:
 On Tue, Apr 14, 2015 at 7:02 AM, Miguel Angel Ajo Pelayo
 mangel...@redhat.com mailto:mangel...@redhat.com wrote:
 
Why would operators install from devstack? that’s not going to be
the case.
 
 
 If they do they need more help than we can give...
 
 So, ummm, there is actually a valid use case for ops on devstack: it's
 part of the learning process.
 
 In my rounds, I've had feedback from more than a few whose first
 OpenStack experience was to run up a devstack, so they could run ps
 aufx, brctl, iptables, etc just to see what was going on under the
 covers before moving on to something more 'proper'.
 
 
 While acknowledging that the primary purpose and audience of devstack is
 and should remain development and developers, if there is something we
 can do to improve the experience for those ops first-timers that doesn't
 have a huge impact on devs, it would be kinda nice.
 
 After all, one of the reasons this thread exists is because of problems
 with that ramp up process, no?
 
 
 
	I believe we should have both LB & OVS well tested, if LB is a good
option for
some operators willing to migrate from nova, that’s great, let’s
make sure LB
is perfectly tested upstream.
 
 
 +1
 
But by doing such change we can’t let OVS code rot and we would be
neglecting
a big customer base which is now making use of the openvswitch
mechanism.
	(maybe I’m misunderstanding the scope of the change).
 
 
 Changing DevStack's default doesn't remove anything OVS-related, nor
 does it by itself remove any OVS jobs.  It gives everyone who is happy
 with nova-net (or more correctly doesn't think about it) a new default
 that changes their experience the least for when nova-net disappears.
 
 dt
 
 --
 
 Dean Troyer
 dtro...@gmail.com mailto:dtro...@gmail.com
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 



Re: [openstack-dev] [all] Introducing the Cloud Service Federation project (cross-project design summit proposal)

2015-04-15 Thread Miguel Angel Ajo Pelayo
Sounds like a very interesting idea.

Have you talked to the keystone folks?,

I would do this work into the keystone project itself (just a separate daemon).

This still looks like identity management (federated, but identity)

I know the burden of working with a mainstream project could be higher, but 
benefits
are also higher: it becomes more useful (it becomes automatically available for 
everyone)
and also passes through the review process of the general keystone 
contributors, thus
getting a more robust code.


Best regards,
Miguel 

 On 16/4/2015, at 6:24, Geoff Arnold ge...@geoffarnold.com wrote:
 
 Yeah, we’ve taken account of:
 https://github.com/openstack/keystone-specs/blob/master/specs/juno/keystone-to-keystone-federation.rst
 http://blog.rodrigods.com/playing-with-keystone-to-keystone-federation/
 http://docs.openstack.org/developer/keystone/configure_federation.html
 
 One of the use cases we’re wrestling with requires fairly strong 
 anonymization: if user A purchases IaaS services from reseller B, who sources 
 those services from service provider C, nobody at C (OpenStack admin, root on 
 any box) should be able to identify that A is consuming resources; all that 
 they can see is that the requests are coming from B. It’s unclear if we 
 should defer this requirement to a future version, or whether there’s 
 something we need to (or can) do now.
 
 The main focus of Cloud Service Federation is managing the life cycle of 
 virtual regions and service chaining. It builds on the Keystone federated 
 identity work over the last two cycles, but identity is only part of the 
 problem. However, I recognize that we do have an issue with terminology. For 
 a lot of people, “federation” is simply interpreted as “identity federation”. 
 If there’s a better term than “cloud service federation”, I’d love to hear 
 it. (The Cisco term “Intercloud” is accurate, but probably inappropriate!)
 
 Geoff
 
 On Apr 15, 2015, at 7:07 PM, Adam Young ayo...@redhat.com wrote:
 
 On 04/15/2015 04:23 PM, Geoff Arnold wrote:
 That’s the basic idea.  Now, if you’re a reseller of cloud services, you 
 deploy Horizon+Aggregator/Keystone behind your public endpoint, with your 
 branding on Horizon. You then bind each of your Aggregator Regions to a 
 Virtual Region from one of your providers. As a reseller, you don’t 
 actually deploy the rest of OpenStack.
 
 As for tokens, there are at least two variations, each with pros and cons: 
 proxy and pass-through. Still working through implications of both.
 
 Geoff
 
 
 Read the Keysteon to Keystone (K2K) docs in the Keystone spec repo, as that 
 addresses some of the issues here.
 
 
 On Apr 15, 2015, at 12:49 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
 So, an Aggregator would basically be a stripped-down keystone that 
 provides a dynamic service catalog pointing to the registered 
 other regions?  You could then point a horizon, cli, or rest api at the 
 aggregator service?
 
 I guess if it was an identity provider too, it can potentially talk to the 
 remote keystone and generate project scoped tokens, though you'd need 
 project+region scoped tokens, which I'm not sure exists today?
 
 Thanks,
 Kevin
 
 
 From: Geoff Arnold [ge...@geoffarnold.com]
 Sent: Wednesday, April 15, 2015 12:05 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [all] Introducing the Cloud Service Federation 
 project (cross-project design summit proposal)
 
 tl;dr We want to implement a new system which we’re calling an Aggregator 
 which is based on Horizon and Keystone, and that can provide access to 
 virtual Regions from multiple independent OpenStack providers. We plan on 
 developing this system as a project in Stackforge, but we need help right 
 now in identifying any unexpected dependencies.
 
 
 
 For the last 6-7 years, there has been great interest in the potential for 
 various business models involving multiple clouds and/or cloud providers. 
 These business models include but are not limited to, federation, 
 reseller, broker, cloud-bursting, hybrid and intercloud. The core concept 
 of this initiative is to go beyond the simple dyadic relationship between 
 a cloud service provider and a cloud service consumer to a more 
 sophisticated “supply chain” of cloud services, dynamically configured, 
 and operated by different business entities. This is an ambitious goal, 
 but there is a general sense that OpenStack is becoming stable and mature 
 enough to support such an undertaking.
 
 Until now, OpenStack has focused on the logical abstraction of a 

Re: [openstack-dev] [Neutron][LBaaS] Meeting timings

2015-04-15 Thread Doug Wiegley
Hi Ganesh,

The Tuesday meeting on -4 is our slot, but it is suspended until further 
notice while we experiment with covering our agenda during the neutron and 
octavia meetings. If there ends up being a lot to cover, we will resume.

Is there something you need to discuss?  Please feel free to ask here, in 
#openstack-lbaas, or at the next neutron meeting.

Thanks,
doug


 On Apr 15, 2015, at 10:14 PM, Ganesh Narayanan (ganeshna) 
 ganes...@cisco.com wrote:
 
 Hi,
 
 I see 2 different timings for the meeting. One of them needs to be updated, 
 or are these 2 different meetings?
 
 https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting
 
 LBaaS meeting
 Weekly on Tuesdays at 1600 UTC
 IRC channel: #openstack-meeting-4
 Chair (to contact for more information) mestery (Kyle Mestery)
 see Network/LBaaS https://wiki.openstack.org/wiki/Network/LBaaS for agenda
 
 https://wiki.openstack.org/wiki/Neutron/LBaaS
 
 Communication Channels
 IRC: #openstack-lbaas
 IRC Weekly Meeting: #openstack-meeting every Thursday @ 14:00 UTC
 Mailing List: openstack-dev [at] lists [dot] openstack [dot] org. Please 
 prefix subject with '[openstack-dev][Neutron][LBaaS]’
 Thanks,
 Ganesh



[openstack-dev] [all] treat stable/kilo as proposed/kilo for now

2015-04-15 Thread Doug Hellmann
We’ll have more details about this tomorrow, but the tl;dr is that in order to 
make some of the release/requirements/test tools work properly we’ve gone ahead 
and renamed proposed/kilo to stable/kilo. We have NOT cut RC2, and so the 
stable/kilo branch should be treated as though it is still just proposed/kilo 
for now. We’re working on ACL changes to enforce this [1], but in the mean time 
please just be careful and don’t approve anything in stable branches.

[1] https://review.openstack.org/#/c/174074/

Doug




Re: [openstack-dev] [nova] novaclient 'stable-compat-jobs-{name}' broken

2015-04-15 Thread melanie witt
On Apr 14, 2015, at 12:03, Jeremy Stanley fu...@yuggoth.org wrote:

 Our regular integration testing jobs do this by default. When a
 change is proposed to a stable branch of a project, devstack-gate
 checks out the same branch name for the other projects being tested
 with it and updates requirements based on the global requirements
 list for that branch.
 
 The backward-compatibility jobs were a workaround for the fact that
 by default changes proposed to the master branch of a project are
 only tested with the master branches of other projects.

Oh, that makes it easy then! Thanks Jeremy.

-melanie (irc: melwitt)









Re: [openstack-dev] [neutron] openwrt VM as service

2015-04-15 Thread Salvatore Orlando
I think this work falls into the service VM category.

openwrt, unlike other service VMs used for networking services (like
cloudstack's router vm), is very lightweight, and it's fairly easy to
provision such VMs on the fly. It should be easy also to integrate with a
ML2 control plane or even with other plugins.

It is a decent alternative to the l3 agent. Possibly to the dhcp agent as
well. As I see this as an alternative to part of the reference control
plane, I expect it to provide its own metadata proxy. The only change in
neutron would be some sort of configurability in the metadata proxy
launcher (assuming you do not provide DHCP as well via openwrt, in which
case the problem would not exist, probably).

It's not my call about whether this should live in neutron or not. My vote
is not - simply because I believe that neutron is not a control plane, and
everything that is control plane or integration with it should live outside
of neutron, including our agents.

On the other hand, I don't really see what the 'aaS' part of this is. You're
not exposing anything as a service specific to openwrt, are you?

Salvatore



On 15 April 2015 at 22:06, Sławek Kapłoński sla...@kaplonski.pl wrote:

 Hello,

 I agree. IMHO it should be maybe something like *aaS deployed on VM. I
 think that Octavia is something like that for LBaaS now.
 Maybe it could be something like RouteraaS which will provide all such
 functions in VM?

 --
 Best regards / Pozdrawiam
 Sławek Kapłoński
 sla...@kaplonski.pl

 On Wed, Apr 15, 2015 at 11:55:06AM -0500, Dean Troyer wrote:
  On Wed, Apr 15, 2015 at 2:37 AM, Guo, Ruijing ruijing@intel.com
 wrote:
 
 I’d like to propose openwrt VM as service.
  
  
  
   What’s openWRT VM as service:
  
  
  
   a)Tenant can download openWRT VM from
   http://downloads.openwrt.org/
  
   b)Tenant can create WAN interface from external public
 network
  
   c)Tenant can create private network and create instance
 from
   private network
  
    d)Tenant can configure openWRT for several services
 including
   DHCP, route, QoS, ACL and VPNs.
  
 
 
  So first off, I'll be the first one in line to promote using OpenWRT for
 the
  basis of appliances for this sort of thing.  I use it to overcome the
 'joy'
  of VirtualBox's local networking and love what it can do in 64M RAM.
 
  However, what you are describing are services, yes, but I think to focus
 on
  the OpenWRT part of it is missing the point.  For example, Neutron has a
  VPNaaS already, but I agree it can also be built using OpenWRT and
  OpenVPN.  I don't think it is a stand-alone service though, using a
  combination of Heat/{ansible|chef|puppet|salt}/any other
  deployment/orchestration can get you there.  I have a shell script
  somewhere for doing exactly that on AWS from way back.
 
  What I've always wanted was an image builder that would customize the
  packages pre-installed.  This would be especially useful for disposable
  ramdisk-only or JFFS images that really can't install additional
 packages.
  Such a front-end to the SDK/imagebuilder sounds like about half of what
 you
  are talking about above.
 
  Also, FWIW, a while back I packaged up a micro cloud-init replacement[0]
 in
  shell that turns out to be really useful.  It's based on something I
  couldn't find again to give proper attribution so if anyone knows who
  originated this I'd be grateful.
 
  dt
 
  [0] https://github.com/dtroyer/openwrt-packages/tree/master/rc.cloud
  --
 
  Dean Troyer
  dtro...@gmail.com

 





Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Clint Byrum
As a result of all of the excellent discussions here, I have produced a
spec for policy which I think will assist us all in moving forward on
improving the quality in this space:

https://review.openstack.org/174105

Thanks everyone for your excellent replies!

Excerpts from Clint Byrum's message of 2015-04-14 10:22:56 -0700:
 Hello! There's been some recent progress on python3 compatibility for
 core libraries that OpenStack depends on[1], and this is likely to open
 the flood gates for even more python3 problems to be found and fixed.
 
 Recently a proposal was made to make oslo.messaging start to run python3
 tests[2], and it was found that qpid-python is not python3 compatible yet.
 
 This presents us with questions: Is anyone using QPID, and if so, should
 we add gate testing for it? If not, can we deprecate the driver? In the
 most recent survey results I could find [3] I don't even see message
 broker mentioned, whereas Databases in use do vary somewhat.
 
 Currently it would appear that only oslo.messaging runs functional tests
 against QPID. I was unable to locate integration testing for it, but I
 may not know all of the places to dig around to find that.
 
 So, please let us know if QPID is important to you. Otherwise it may be
 time to unburden ourselves of its maintenance.
 
 [1] https://pypi.python.org/pypi/eventlet/0.17.3
 [2] https://review.openstack.org/#/c/172135/
 [3] 
 http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014



[openstack-dev] Global Cluster Template in Sahara

2015-04-15 Thread Liang, Yanchao
Dear Openstack Developers,

My name is Yanchao Liang. I am a software engineer at eBay, working on Hadoop 
as a Service on top of the Openstack cloud.

Right now we are using Sahara, Juno version. We want to stay current and 
introduce global template into sahara.

In order to simplify the cluster creation process for users, we would like to 
create some cluster templates available to all users. A user can just go to the 
horizon web UI, select one of the pre-populated templates and create a hadoop 
cluster in just a few clicks.

Here is how I would implement this feature:

  *   In the database, create a new column in the “cluster_templates” table called 
“is_global”, which is a boolean value indicating whether the template is 
available to all users or not.
  *   When a user fetches cluster templates from the database, add another 
function similar to “cluster_template_get”, which queries the database for global 
templates.
  *   When creating a cluster, put the user’s tenant id in the “merged_values” 
config variable, instead of the tenant id from the cluster template.
  *   Use an admin account to create and manage global cluster templates
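As a rough, stdlib-only sketch of the lookup change in the second bullet (the table layout is simplified and the column name is just the one proposed above, not sahara's actual schema), a tenant's template query would take global rows into account like this:

```python
import sqlite3

# In-memory stand-in for sahara's DB: cluster_templates gains an
# is_global flag as proposed above.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE cluster_templates '
             '(id TEXT, tenant_id TEXT, name TEXT, is_global INTEGER)')
conn.executemany('INSERT INTO cluster_templates VALUES (?, ?, ?, ?)', [
    ('t1', 'tenant-a', 'private-template', 0),
    ('t2', 'admin-tenant', 'global-hadoop-2node', 1),
])

def cluster_template_get_all(tenant_id):
    # A tenant sees its own templates plus any marked global.
    rows = conn.execute(
        'SELECT name FROM cluster_templates '
        'WHERE tenant_id = ? OR is_global = 1', (tenant_id,))
    return sorted(r[0] for r in rows)

print(cluster_template_get_all('tenant-a'))  # own + global templates
print(cluster_template_get_all('tenant-b'))  # only the global one
```

The same `tenant_id = ? OR is_global = 1` filter expressed in SQLAlchemy would be the natural place to extend the existing `cluster_template_get` query.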

Since I don’t know the code base as well as you do, what do you think about the 
global template idea? How would you implement this new feature?

We would like to contribute this feature back to the Openstack community. Any 
feedback would be greatly appreciated. Thank you.

Best,
Yanchao



Re: [openstack-dev] [tripleo] [ironic] Where to keep discover images

2015-04-15 Thread Dmitry Tantsur
Ok, I will create a bug so that I don't forget (this kind of change 
does not require a spec, does it?).


On 04/15/2015 01:59 AM, Devananda van der Veen wrote:

Yea, option #2 is what I had in mind. Like you said, I think most of
the code is already present in Ironic, and if it's not already
factored into a reusable thing (eg, ironic/utils or
ironic/driver/utils or such) then it should be.

Cheers,
-Deva

On Tue, Apr 14, 2015 at 1:40 PM, Dmitry Tantsur divius.ins...@gmail.com wrote:

Hi,

actually 2 possibilities here:
1. discoverd itself handles TFTP
2. DiscoverdInspect hanldes TFTP

I vote for the 2nd, as we have all required code in Ironic already. I guess
initial question was about the 1st case, which I doubt is worth supporting.
Anyway, nice idea for an improvement!

Dmitry


2015-04-14 22:27 GMT+02:00 Devananda van der Veen devananda@gmail.com:


I'm wondering: rather than have a static config, could the
DiscoverdInspect interface handle setting up the TFTP config, pulling
those images from Glance, etc, when a node is moved into the inspect
state (assuming such integration was desired by the cloud operator)?

-Deva

On Fri, Apr 10, 2015 at 12:31 AM, Dmitry Tantsur dtant...@redhat.com
wrote:

On 04/10/2015 01:43 AM, Jaromir Coufal wrote:


Hey Dmitry,



o/



I wanted to ask you about ironic-discoverd.

At the moment, after build, the discovery images are copied into local
folder:

TFTP_ROOT=${TFTP_ROOT:-/tftpboot}

sudo cp -f $IMAGE_PATH/$DISCOVERY_NAME.kernel
$TFTP_ROOT/discovery.kernel
sudo cp -f $IMAGE_PATH/$DISCOVERY_NAME.initramfs
$TFTP_ROOT/discovery.ramdisk

I am wondering why that is, and if discoverd can work with these images
if they were loaded into glance.



Discoverd is not concerned with TFTP configuration (unlike Ironic), so you
can put them anywhere, provided that your TFTP still works. Currently we
use static configuration, as it's the easiest one.



I mean it would be definitely more
convenient than keeping them locally.

Thanks
-- Jarda












--
Dmitry Tantsur










Re: [openstack-dev] FW: [Neutron] - Joining the team - interested in a Debian Developer and experienced Python and Network programmer?

2015-04-15 Thread Matt Grant
Hi Vikram,

I am very interested in this, however can't do everything for free!

I believe that bird would be a better fit than Zebra/Quagga, as it is
just 2 processes to be launched in a network namespace.  Also they are
very accessible/reloadable via the birdc/birdc6 control binaries, which
lend themselves to being called from python.

Could let you let me know if Huawei are interested in financially
supporting this please? 

It is doable, and would be useful for smaller deployments.  It can be
made part of the new ML3 that is proposed.

Looking forward to your answer!

Best Regards,

Matt Grant

On Tue, 2015-04-14 at 11:58 +, Vikram Choudhary wrote:
 Hi Matt,
 
 Can you please let me know about your views on this proposal.
 
 Thanks
 Vikram
 
 -Original Message-
 From: Vikram Choudhary 
 Sent: 10 April 2015 10:40
 To: 'm...@mattgrant.net.nz'
 Cc: Kalyankumar Asangi; Dhruv Dhody; Kyle Mestery; 'Mathieu Rohon'; Dongfeng 
 (C)
 Subject: RE: [openstack-dev] [Neutron] - Joining the team - interested in a 
 Debian Developer and experienced Python and Network programmer?
 
 Hi Matt,
 
 Welcome to Openstack:)
 
 I was thinking of supporting an open vRouter for Openstack neutron. 
 Currently, few vendors are there but are not open source. I feel it will be 
 good if we can introduce Zebra/Quagga for neutron. Since you have 
 expertise with these, I feel we can do this much more easily. 
 
 Please let me know about your views in this regard.
 
 Thanks
 Vikram
 
 -Original Message-
 From: Matt Grant [mailto:m...@mattgrant.net.nz] 
 Sent: 09 April 2015 12:44
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] - Joining the team - interested in a 
 Debian Developer and experienced Python and Network programmer?
 
 Hi!
 
 I am just wondering what the story is about joining the neutron team.
 Could you tell me if you are looking for new contributors?
 
 Previously I have programmed OSPFv2 in Zebra/Quagga, and worked as a router 
 developer for Allied Telesyn.  I also have extensive Python programming 
 experience, having worked on the DNS Management System.
 
 I have been experimenting with IPv6 since 2008 on my own home network, and I 
am currently installing a Juno Openstack cluster to learn how things tick.
 
 Have you guys ever figured out how to do a hybrid L3 North/South Neutron 
 router that propagates tenant routes and networks into OSPF/BGP via a routing 
 daemon, and uses floating MAC addresses/costed flow rules via OVS to fail 
 over to a hot standby router? There are practical use cases for such a thing 
 in smaller deployments.
 
 I have a single stand alone example working by turning off neutron-l3-agent 
 network name space support, and importing the connected interface and static 
routes into Bird and Birdv6. The AMQP connection back to the neutron-server 
 is via the upstream interface and is secured via transport mode IPSEC (just 
 easier than bothering with https/SSL).
 Bird looks easier to run from neutron as they are single process than a multi 
 process Quagga implementation.  Incidentally, I am running this in an LXC 
 container.
   
 Could some one please point me in the right direction.  I would love to be in 
 Vancouver :-)
 
 Best Regards,
 
 --
 Matt Grant,  Debian and Linux Systems Administration and Consulting
 Mobile: 021 0267 0578
 Email: m...@mattgrant.net.nz
 
 

-- 
Matt Grant,  Debian and Linux Systems Administration and Consulting
Mobile: 021 0267 0578
Email: m...@mattgrant.net.nz




Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon

2015-04-15 Thread hao wang
I prefer the sub-state with a bit more detail. It will be simple for
horizon or others to get the more detailed error message from cinder.

Maybe we can define some template messages (according to the kind of error,
kept general) to avoid sending driver-specific info back to the end user.
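A minimal sketch of such template messages (the categories and wording here are invented for illustration, not cinder's actual behavior):

```python
# Hypothetical mapping from broad failure categories to tenant-safe
# messages; the driver-specific detail would go only to the admin log,
# never back through the API.
SAFE_MESSAGES = {
    'capacity': 'Not enough capacity on the storage backend.',
    'connection': 'Storage backend is temporarily unreachable.',
    'auth': 'Storage backend rejected the credentials.',
}

def user_facing_error(category):
    """Return a general message suitable for display in horizon."""
    return SAFE_MESSAGES.get(category, 'Volume operation failed.')
```

A sub-state on the volume record could then carry `user_facing_error(category)` while the raw backend message stays admin-only.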


2015-04-14 0:24 GMT+08:00 Duncan Thomas duncan.tho...@gmail.com:

 George

 What has been said is that:
 1) With an async API, there is no error from the client in the request.
 e.g. for a create, the request returns success well before the backend has
 been contacted about the request. There is no path back to the client with
 which to send an error.

 2) Quite often there is a desire for the admin to see error messages, but
 not the tenant - this is especially true for managed / public clouds.

 On 13 April 2015 at 18:21, George Peristerakis gperi...@redhat.com
 wrote:

  Hi Liu,

 I'm not familiar with the error you are trying to show, but here's how
 Horizon typically works. In the case of cinder, we have a wrapper around
 python-cinderclient; if the client raises an exception with a valid
 message, by default Horizon will display the exception message. The message
 can also be overridden in the translation file. So a good start is to look
 in python-cinderclient and see if you could produce a more meaningful
 message.


 Cheers.
 George


 On 10/04/15 06:16 AM, liuxinguo wrote:

 Hi,

 When we create a volume in horizon, some errors may occur at the driver 
 backend, and in horizon we just see an error in the volume status.

 So is there any way to put the error information into horizon so users can 
 know what happened exactly, just from horizon?
 Thanks,
 Liu











 --
 Duncan Thomas





-- 

Best Wishes For You!


Re: [openstack-dev] [TripleO] Consistent variable documentation for diskimage-builder elements

2015-04-15 Thread Smigiel, Dariusz
 Excerpts from Dan Prince's message of 2015-04-13 14:07:28 -0700:
  On Tue, 2015-04-07 at 21:06 +, Gregory Haynes wrote:
   Hello,
  
    I'd like to propose a standard for consistently documenting our
   diskimage-builder elements. I have pushed a review which transforms
   the apt-sources element to this format[1][2]. Essentially, id like
   to move in the direction of making all our element README.rst's
   contain a sub section called Environment Vairables with a Definition
   List[3] where each entry is the environment variable. Under that
   environment variable we will have a field list[4] with Required,
   Default, Description, and optionally Example.
  
   The goal here is that rather than users being presented with a wall
   of text that they need to dig through to remember the name of a
   variable, there is a quick way for them to get the information they
   need. It also should help us to remember to document the vital bits
    of information for each variable we use.
  
   Thoughts?
 
  I like the direction of the cleanup. +2
 
   I do wonder how we'll enforce consistency in making sure future
  changes adhere to the new format. It would be nice to have a CI check
  on these things so people don't constantly need to debate the correct
  syntax, etc.
 
 I agree Dan, which is why I'd like to make sure these are machine readable
 and consistent. I think it would actually make sense to make our argument
 isolation efforts utilize this format, as that would make sure that these are
 consistent with the code as well.
 

As already suggested in my previous email [1], we could consider rest_lint [2]

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/060907.html
[2] https://pypi.python.org/pypi/restructuredtext_lint/0.4.0

Any thoughts?
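For reference, the definition-list-plus-field-list layout proposed earlier in the thread might render like this (the element variable, default, and example are invented for illustration, not taken from the actual apt-sources README):

```
Environment Variables
---------------------

DIB_APT_SOURCES
  :Required: No
  :Default: the image's stock ``/etc/apt/sources.list``
  :Description: Path to a sources.list file to copy into the image.
  :Example: ``DIB_APT_SOURCES=/tmp/my-sources.list``
```

A layout this regular is also easy for a linter or the argument-isolation tooling to parse.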

--
Dariusz Smigiel
Intel Technology Poland



Re: [openstack-dev] [Group-Based-Policy] Fixing backward incompatible unnamed constraints removal

2015-04-15 Thread Sumit Naiksatam
Thanks Ivar for tracking this and bringing it up for discussion. I am
good with taking approach (1).



On Mon, Apr 13, 2015 at 1:10 PM, Ivar Lazzaro ivarlazz...@gmail.com wrote:
 Hello Team,

 As per discussion in the latest GBP meeting [0] I'm hunting down all the
 backward incompatible changes made on DB migrations regarding the removal of
 unnamed constraints.
 In this report [1] you can find the list of affected commits.

 The problem is that some of the affected commits are already back ported to
 Juno! and others will be [2], so I was wondering what's the plan regarding
 how we want back port the compatibility fix to stable/juno.
 I see two possibilities:

 1) We backport [2] as is (with the broken migration), but we cut the new
 stable release only once [3] is merged and back ported. This has the
 advantage of having a cleaner backport tree in which all the changes in
 master are cherry-picked without major changes.

 2) We split [3] in multiple patches, and we only backport those that fix
 commits that are already in Juno. Patches like [2] will be changed to
 accomodate the fixed migration *before* being merged into the stable branch.
 This will avoid intra-release code breakage (which is an issue for people
 installing GBP directly from code).

 Please share your thoughts, Thanks,
 Ivar.

 [0]
 http://eavesdrop.openstack.org/meetings/networking_policy/2015/networking_policy.2015-04-09-18.00.log.txt
 [1] https://bugs.launchpad.net/group-based-policy/+bug/1443606
 [2] https://review.openstack.org/#/c/170972/
 [3] https://review.openstack.org/#/c/173051/





[openstack-dev] [neutron] openwrt VM as service

2015-04-15 Thread Guo, Ruijing
I’d like to propose openwrt VM as service.

What’s openWRT VM as service:

a)Tenant can download openWRT VM from http://downloads.openwrt.org/
b)Tenant can create WAN interface from external public network
c)Tenant can create private network and create instance from 
private network
d)Tenant can configure openWRT for several services including DHCP, 
route, QoS, ACL and VPNs.

What needs to change in neutron:

a)Neutron support for creating a port for the openWRT VM. (I assume it 
already supports this and we just integrate it.)
b)Move the metadata proxy to the openWRT VM.

Why does openstack need it?

a)It is easy for tenants to configure/customize network services.
In particular, when openstack doesn’t support a specific VPN, a tenant can configure 
one and doesn’t need to develop a new plugin and ask the cloud admin to deploy it.
b)It is easy for openstack to deploy new network services.

Case 1: SNAT load balance. (We may propose it in neutron)

Currently, neutron L3 supports one gateway IP. Neutron L3 does SNAT from the private 
network to the public network.

   Private network ---SNAT--- public network

If the public network is down, the private network cannot access the external 
network at all.

If we do SNAT load balancing, the private network can SNAT to 2 public networks.
How to implement in openwrt VM:

1.Create port1 from public network 1
2.Create port2 from public network 2
3.Create port3 from private network
4.Create openwrt VM including port1, port2 and port3
5.Configure openwrt to do SNAT load balance from private network to 
public network 1 and public network 2
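Illustratively, steps 1-4 above could be driven from the API side roughly like this. This is only a sketch: it builds the create_port request bodies as plain data, with the actual python-neutronclient/novaclient calls left as comments, and all network IDs and names are hypothetical:

```python
def build_openwrt_ports(public_net_1, public_net_2, private_net):
    """Build the neutron create_port request bodies for the three
    interfaces of the service VM in case 1 (two WAN uplinks, one LAN)."""
    def port_body(net_id, name):
        return {'port': {'network_id': net_id, 'name': name}}
    return [port_body(public_net_1, 'openwrt-wan1'),
            port_body(public_net_2, 'openwrt-wan2'),
            port_body(private_net, 'openwrt-lan')]

bodies = build_openwrt_ports('pub-net-1', 'pub-net-2', 'priv-net')

# With the clients, one would then do roughly:
#   ports = [neutron.create_port(b) for b in bodies]
#   nova.servers.create('openwrt', image, flavor,
#                       nics=[{'port-id': p['port']['id']} for p in ports])
# Step 5 (the SNAT load-balance rules) is configured inside the VM itself,
# not through the neutron API.
```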

Case 2: VPN Service

I want to use OpenVPN. Without the openwrt VM, I need to develop OpenVPN as a VPN 
plugin and ask the openstack admin to deploy it (and possibly the cloud admin 
rejects it).

How to implement in openwrt VM:

1.Create port1 from public network 1
2.Create port2 from private network
3.Create vpn server/client
4.NAT from private network to vpn network

What do you think?

Thanks,
-Ruijing

