[openstack-dev] [Neutron] Driver meeting cancelled for Thur Sept 29

2016-09-28 Thread Armando M.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Deprecation Policies (related to Heka)

2016-09-28 Thread Steven Dake (stdake)
First off, apologies for missing most of the team meeting today.  I have read 
through the logs and saw a discussion about deprecating heka.  We need to 
ensure that we follow the deprecation policy.  My understanding of the 
deprecation policy is as follows (in a nutshell):


1.   We must mail the 
openstack-operat...@lists.openstack.org
 mailing list and ask if the change impacts operators

2.   If it does impact operators, we have to propose a migration path that 
works and makes sense

3.   We have to *at minimum* keep the feature in the release for 3 months.  
Major features that are not technical preview should stay in the release for 2 
cycles and *extra care* should be taken when communicating our intent with the 
operator list.

4.   Once 1-3 are done, we must make official notice in the release notes 
for the release in which the deprecation occurred and deliver on that 
commitment of the stated deprecation time in O or P or Q.

5.   The thread on openstack-operators should be linked in the review of 
the deprecation of the work. (this last part of the policy isn’t stated, 
however, it will help avoid misunderstandings between the Kolla team, 
operators, and the technical committee)

I appreciate everyone’s interest in deprecating Heka in Ocata as it is entering 
security-fix only maintenance mode.  However, we need to follow the standard 
OpenStack deprecation policies, not make up new ones.  Heka will not be 
deprecated for Newton because we lack time to sort out 1-3 (especially #2 
above, the migration path) between now and Oct 12th (hopefully when we tag rc2, 
13th is drop dead date).

If you have a different parsing of the deprecation policy, feel free to chime 
in.

The standardized deprecation policy can be found here:

https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst

Regards,
Steve
`~
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Project name DB length

2016-09-28 Thread Adrian Turjak
I think with PKI tokens we had worse to worry about!

At any rate, would be great to know, and if there isn't a strong reason
against it we can make project name 255 for some more flexibility.

Plus although there is no true official standard, most projects in
OpenStack seem to use 255 as the default for a lot of string fields.
Weirdly enough, a lot of projects seem to use 255 even for project.id,
which seeing as it's 64 in keystone, and a uuid4 anyway, seems like a
bit of a waste.


On 29/09/16 16:19, Steve Martinelli wrote:
> We may have to ask Adam or Dolph, or pull out the history textbook for
> this one. I imagine that trying to not bloat the token was definitely
> a concern. IIRC User name was 64 also, but we had to increase to 255
> because we're not in control of name that comes from external sources
> (like LDAP).
>
> On Wed, Sep 28, 2016 at 11:06 PM, Adrian Turjak
> > wrote:
>
> Hello Keystone Devs,
>
> Just curious as to the choice to have the project name be only 64
> characters:
> 
> https://github.com/openstack/keystone/blob/master/keystone/resource/backends/sql.py#L241
> 
> 
>
> Seems short, and an odd choice when the user.name
>  field is 255 characters:
> 
> https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql_model.py#L216
> 
> 
>
> Is there a good reason for it only being 64 characters, or is this
> just
> something that was done a long time ago and no one thought about it?
>
> Not hugely important, just seemed odd and may prove limiting for
> something I'm playing with.
>
> Cheers,
> Adrian Turjak
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Project name DB length

2016-09-28 Thread Steve Martinelli
We may have to ask Adam or Dolph, or pull out the history textbook for this
one. I imagine that trying to not bloat the token was definitely a concern.
IIRC User name was 64 also, but we had to increase to 255 because we're not
in control of name that comes from external sources (like LDAP).

On Wed, Sep 28, 2016 at 11:06 PM, Adrian Turjak 
wrote:

> Hello Keystone Devs,
>
> Just curious as to the choice to have the project name be only 64
> characters:
> https://github.com/openstack/keystone/blob/master/keystone/
> resource/backends/sql.py#L241
>
> Seems short, and an odd choice when the user.name field is 255 characters:
> https://github.com/openstack/keystone/blob/master/keystone/
> identity/backends/sql_model.py#L216
>
> Is there a good reason for it only being 64 characters, or is this just
> something that was done a long time ago and no one thought about it?
>
> Not hugely important, just seemed odd and may prove limiting for
> something I'm playing with.
>
> Cheers,
> Adrian Turjak
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Project name DB length

2016-09-28 Thread Adrian Turjak
Hello Keystone Devs,

Just curious as to the choice to have the project name be only 64
characters:
https://github.com/openstack/keystone/blob/master/keystone/resource/backends/sql.py#L241

Seems short, and an odd choice when the user.name field is 255 characters:
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql_model.py#L216

Is there a good reason for it only being 64 characters, or is this just
something that was done a long time ago and no one thought about it?

Not hugely important, just seemed odd and may prove limiting for
something I'm playing with.

Cheers,
Adrian Turjak


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trio2o]Trio2o cleaning discussion

2016-09-28 Thread joehuang
Hello,

As we discussed yesterday, we'll have a short conversation on the Trio2o 
cleaning.

Let's discuss Trio2o cleaning on Friday UTC2:00(i.e, beijing time 10:00), 1 hour

It would be better to discuss this in the channel of #openstack-trio2o instead 
of #openstack-tricircle(sorry mentioned #openstack-tricircle channel yesterday).

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] SRIOV-port refused to bind

2016-09-28 Thread Murali B
Hi

I am using the SRIOV on mitaka. When I try to launch the VM with SRIOV port
its failed.

When I see the neutrn-server.log I see that below message on controller.

ddf81 - - -] Refusing to bind due to unsupported vnic_type: direct
bind_port
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_agent.py:65
2016-09-28 18:55:18.384 16531 ERROR neutron.plugins.ml2.managers
[req-443e9b6a-45c6-4b30-aa50-52a9b0a4926c 7d0ef58dd1214f54983c9d843fec0bde
238c9900a2ae4b57b01fa72abdeddf81 - - -] Failed to bind port
e96eadf3-5501-442a-8bcc-0b4d64617b26 on host A1-22932-compute1 for
vnic_type direct using segments [{'segmentation_id': 123,
'physical_network': u'physnet1', 'id':
u'30b77081-d02c-4e29-a41a-f8997c1f9f66', 'network_type': u'vlan'}]

On compute node I see the below error.

2016-09-28 18:55:18.688 7651 ERROR nova.compute.manager [instance:
4c737a89-51b8-4504-a208-05f2da178482] flavor, virt_type, self._host)
2016-09-28 18:55:18.688 7651 ERROR nova.compute.manager [instance:
4c737a89-51b8-4504-a208-05f2da178482]   File
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 447, in
get_config
2016-09-28 18:55:18.688 7651 ERROR nova.compute.manager [instance:
4c737a89-51b8-4504-a208-05f2da178482] _("Unexpected vif_type=%s") %
vif_type)
2016-09-28 18:55:18.688 7651 ERROR nova.compute.manager [instance:
4c737a89-51b8-4504-a208-05f2da178482] NovaException: Unexpected
vif_type=binding_failed

Could somebody help me to come-out this issue.


Thanks
-Murali
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitaka)

2016-09-28 Thread Hongbin Lu
Steve,

In Newton, we upgraded the Heat templates to use lbaasv2 [1]. However, it
seems the k8s external load balancer still work with lbaasv1. If Kolla
doesn't use the external load balancer feature, it should be fine.

[1] https://review.openstack.org/#/c/314060/

Best regards,
Hongbin

On Wed, Sep 28, 2016 at 8:06 PM, Steven Dake (stdake) 
wrote:

> Fantastic!
>
>
>
> Quick semi-related question.  Will Magnum Newton be using lbaasv2?  That
> is what we have implemented in Kolla.
>
>
>
> Regards
>
> -steve
>
>
>
>
>
> *From: *Ton Ngo 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Tuesday, September 27, 2016 at 10:58 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [magnum] Fedora Atomic image that supports
> kubernetes external load balancer (for stable/mitaka)
>
>
>
> Thanks Steve. We indeed have been using the image built by Yolanda's DIB
> elements and things have been stable. Dane and I have resolved the problems
> with the load balancer at least for the LBaaS v1. For LBaaS v2, we need to
> build a new image with Kubernetes 1.3 and we just got one built today.
> Ton,
>
> [image: nactive hide details for "Steven Dake (stdake)" ---09/27/2016
> 10:18:07 PM]"Steven Dake (stdake)" ---09/27/2016 10:18:07 PM---Dane, I’ve
> heard Yolanda has done good work on making disk image builder build fedora
> atomic properl
>
>
> From: "Steven Dake (stdake)" 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 09/27/2016 10:18 PM
> Subject: Re: [openstack-dev] [magnum] Fedora Atomic image that supports
> kubernetes external load balancer (for stable/mitaka)
>
> --
>
>
>
>
> Dane,
>
> I’ve heard Yolanda has done good work on making disk image builder build
> fedora atomic properly consistently. This may work better than the current
> image building tools available with atomic if you need to roll your own.
> Might try pinging her on irc for advice if you get jammed up here. Might
> consider consulting tango as well as I handed off my knowledge in this area
> to him first and he has distributed to the rest of the Magnum core reviewer
> team. I’m not sure if tango and Yolanda have synced on this – recommend
> checking with them.
>
> Seems important to have a working atomic image for both Mitaka and Newton.
>
> Regards
> -steve
>
>
> *From: *"Dane Leblanc (leblancd)" 
> * Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> * Date: *Thursday, September 8, 2016 at 2:18 PM
> * To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> * Subject: *[openstack-dev] [magnum] Fedora Atomic image that supports
> kubernetes external load balancer (for stable/mitaka)
>
> Does anyone have a pointer to a Fedora Atomic image that works with
> stable/mitaka Magnum, and supports the kubernetes external load balancer
> feature [1]?
>
> I’m trying to test the kubernetes external load balancer feature with
> stable/mitaka Magnum. However, when I try to bring up a load-balanced
> service, I’m seeing these errors in the kube-controller-manager logs:
>
> *E0907 16:26:54.375286 1 servicecontroller.go:173] Failed to process
> service delta. Retrying: failed to create external load balancer for
> service default/nginx-service: SubnetID is required*
>
>
> I verified that I have the subnet-id field set in the [LoadBalancer]
> section in /etc/sysconfig/kube_openstack_config.
>
> I’ve tried this using the following Fedora Atomic images from [2]:
>
> fedora-21-atomic-5.qcow2
> fedora-21-atomic-6.qcow2
> fedora-atomic-latest.qcow2
>
>
> According to the Magnum external load balancer blueprint [3], there were 3
> patches in kubernetes that are required to get the OpenStack provider
> plugin to work in kubernetes:
>
> https://github.com/GoogleCloudPlatform/kubernetes/pull/12203
> https://github.com/GoogleCloudPlatform/kubernetes/pull/12262
> https://github.com/GoogleCloudPlatform/kubernetes/pull/12288
>
> The first of these patches, “Pass SubnetID to vips.Create()”, is
> apparently necessary to fix the “SubnetID is required” error shown above.
>
> According to the Magnum external load balancer blueprint [3], the
> fedora-21-atomic-6 image should include the above 3 fixes:
>
> “*Our work-around is to use our own custom Kubernetes build (version
> 1.0.4 + 3 fixes) until the fixes are released. This is in image
> fedora-21-atomic-6.qcow2*”
>
> However, I’m still seeing the “SubnetID is required” errors with this
> image downloaded from [2]. Here are the kube versions I’m seeing with this
> image:
>
> [minion@k8-64n4bna2v6-0-ffukgho7n7tf-kube-master-fif5b6pivdmy sysconfig]$
> rpm -qa | grep kube
> 

[openstack-dev] [infra]please help to add initial members to trio2o-core and trio2o-release group

2016-09-28 Thread joehuang
Hello,

Trio2o is a new project which is derived from Tricircle: 
https://review.openstack.org/#/c/367114/

Please add the initial members (same as that in Tricircle) to the group 
trio2o-core (https://review.openstack.org/#/admin/groups/1576,members) and 
trio2o-release (https://review.openstack.org/#/admin/groups/1577,members)

Chaoyi Huang: joehu...@huawei.com
Shinobu KINJO: shin...@linux.com
Shinobu Kinjo: shinobu...@gmail.com
Zhiyuan Cai:  luckyveg...@gmail.com

Thank you very much

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]tricircle cleaning plan

2016-09-28 Thread joehuang
Hello,

The Newton branch will be created tomorrow, we can start the Tricircle cleaning 
in the trunk to make Tricircle dedicated for networking automation. At the same 
time Tricircle Newton release will be done in the Newton branch, two more 
patches needed for this branch: one patch to update devstack related script to 
download newton branch code, one patch for release note.

During the discussion in yesterday weekly meeting, we agreed following plan to 
clean Tricircle:


  *   1. update README: https://review.openstack.org/#/c/375218/

  *   1. local plugin spec: https://review.openstack.org/#/c/368529/

  *   1. local and central plugin: https://review.openstack.org/#/c/375281/

  *   2.central and local plugin for l3: 
https://review.openstack.org/#/c/378476/

  *   2. remove api gateway code:

  *   3.  security group support:

  *   3. installation guide update(no api gateway):

  *

  *   - Try to get these above cleaning patches merged before Oct.19, 
before Barcelona summit.

The number is the prioritization order, 1 means the highest, 3 means the lowest.

Your comments are welcome.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][Cinder] Cinder Newton RC2 available

2016-09-28 Thread Davanum Srinivas
Hello everyone,

A new release candidate for Cinder for the end of the Newton cycle
is available!  You can find the source code tarball at:

https://tarballs.openstack.org/cinder/cinder-9.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the final
Newton release on 6 October. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/newton release
branch at:

http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/newton

If you find an issue that could be considered release-critical,
please file it at:

https://bugs.launchpad.net/cinder/+filebug

and tag it *newton-rc-potential* to bring it to the Cinder release
crew's attention.

Thanks,
Dims (On behalf of the OpenStack Release team)

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] XML Attacks and DefusedXML on Global Requirements

2016-09-28 Thread Charles Neill
A completely secure alternative isn't available in the Python standard library. 
Here's a table of various XML libraries and the vulnerabilities they may be 
affected by [1]. This is partially reflected in Python's official documentation 
as well (version 2.7.12) [2].

There are currently 132 references to "xml.etree.ElementTree" alone in 
OpenStack projects [3]. Granted, most of these examples aren't likely to have 
serious security ramifications, but the potential is there (see the Glance OVF 
bug mentioned by Travis for a relatively mild example). XML is definitely on 
the decline, but for the remaining stragglers, having a secure, stable solution 
might be a good idea. The codebase of defusedxml is fairly small, basically 
just replacing a few vulnerable functions in popular XML libraries with more 
secure versions. Might it be something OpenStack could maintain a fork of?

Since the bandit documentation suggests using defusedxml as a mitigation for 
these issues, we should at least figure out an alternative suggestion for 
bandit to provide if defusedxml doesn't meet OpenStack's needs.

[1]: https://pypi.python.org/pypi/defusedxml#python-xml-libraries
[2]: https://docs.python.org/2/library/xml.html#xml-vulnerabilities
[3]: 
https://github.com/search?utf8=%E2%9C%93=org%3Aopenstack+%22xml.etree.elementtree%22+language%3Apython=Code=searchresults

Charles Neill

From: Travis McPeak >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, September 27, 2016 at 13:45
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [Security] XML Attacks and DefusedXML on Global 
Requirements

There is a private security bug about it right now too.  No, not all XML 
libraries are immune now.

On Tue, Sep 27, 2016 at 11:36 AM, Dave Walker 
> wrote:


On 27 September 2016 at 19:19, Sean Dague 
> wrote:
On 09/27/2016 01:24 PM, Travis McPeak wrote:
> There are several attacks (https://pypi.python.org/pypi/defusedxml#id3)
> that can be performed when XML is parsed from untrusted input.
> DefusedXML offers safe alternatives to XML parsing libraries but is not
> currently part of global requirements.
>
> I propose adding DefusedXML to global requirements so that projects have
> an option for safe XML parsing.  Does anybody have any thoughts or
> objections?

Out of curiosity, are there specific areas of concern in existing
projects here? Most projects have dropped XML API support.


Outbound XML datasources which are parsed still used with at least nova vmware 
support and multiple cinder drivers.

openstack/ec2-api is still providing an xml api service?

--
Kind Regards,
Dave Walker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
-Travis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitaka)

2016-09-28 Thread Steven Dake (stdake)
Fantastic!

Quick semi-related question.  Will Magnum Newton be using lbaasv2?  That is 
what we have implemented in Kolla.

Regards
-steve


From: Ton Ngo 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, September 27, 2016 at 10:58 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [magnum] Fedora Atomic image that supports 
kubernetes external load balancer (for stable/mitaka)


Thanks Steve. We indeed have been using the image built by Yolanda's DIB 
elements and things have been stable. Dane and I have resolved the problems 
with the load balancer at least for the LBaaS v1. For LBaaS v2, we need to 
build a new image with Kubernetes 1.3 and we just got one built today.
Ton,

[nactive hide details for "Steven Dake (stdake)" ---09/27/2016 10:18:07 
PM]"Steven Dake (stdake)" ---09/27/2016 10:18:07 PM---Dane, I’ve heard Yolanda 
has done good work on making disk image builder build fedora atomic properl

From: "Steven Dake (stdake)" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date: 09/27/2016 10:18 PM
Subject: Re: [openstack-dev] [magnum] Fedora Atomic image that supports 
kubernetes external load balancer (for stable/mitaka)





Dane,

I’ve heard Yolanda has done good work on making disk image builder build fedora 
atomic properly consistently. This may work better than the current image 
building tools available with atomic if you need to roll your own. Might try 
pinging her on irc for advice if you get jammed up here. Might consider 
consulting tango as well as I handed off my knowledge in this area to him first 
and he has distributed to the rest of the Magnum core reviewer team. I’m not 
sure if tango and Yolanda have synced on this – recommend checking with them.

Seems important to have a working atomic image for both Mitaka and Newton.

Regards
-steve


From: "Dane Leblanc (leblancd)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, September 8, 2016 at 2:18 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes 
external load balancer (for stable/mitaka)

Does anyone have a pointer to a Fedora Atomic image that works with 
stable/mitaka Magnum, and supports the kubernetes external load balancer 
feature [1]?

I’m trying to test the kubernetes external load balancer feature with 
stable/mitaka Magnum. However, when I try to bring up a load-balanced service, 
I’m seeing these errors in the kube-controller-manager logs:
E0907 16:26:54.375286 1 servicecontroller.go:173] Failed to process service 
delta. Retrying: failed to create external load balancer for service 
default/nginx-service: SubnetID is required

I verified that I have the subnet-id field set in the [LoadBalancer] section in 
/etc/sysconfig/kube_openstack_config.

I’ve tried this using the following Fedora Atomic images from [2]:
fedora-21-atomic-5.qcow2
fedora-21-atomic-6.qcow2
fedora-atomic-latest.qcow2

According to the Magnum external load balancer blueprint [3], there were 3 
patches in kubernetes that are required to get the OpenStack provider plugin to 
work in kubernetes:
https://github.com/GoogleCloudPlatform/kubernetes/pull/12203
https://github.com/GoogleCloudPlatform/kubernetes/pull/12262
https://github.com/GoogleCloudPlatform/kubernetes/pull/12288
The first of these patches, “Pass SubnetID to vips.Create()”, is apparently 
necessary to fix the “SubnetID is required” error shown above.

According to the Magnum external load balancer blueprint [3], the 
fedora-21-atomic-6 image should include the above 3 fixes:
“Our work-around is to use our own custom Kubernetes build (version 1.0.4 + 3 
fixes) until the fixes are released. This is in image fedora-21-atomic-6.qcow2”
However, I’m still seeing the “SubnetID is required” errors with this image 
downloaded from [2]. Here are the kube versions I’m seeing with this image:
[minion@k8-64n4bna2v6-0-ffukgho7n7tf-kube-master-fif5b6pivdmy sysconfig]$ rpm 
-qa | grep kube
kubernetes-node-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-client-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-master-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
[minion@k8-64n4bna2v6-0-ffukgho7n7tf-kube-master-fif5b6pivdmy sysconfig]$

Does anyone have a pointer to a Fedora Atomic image that contains the 3 
kubernetes fixes listed earlier (and works with stable/mitaka)?

Thanks!
-Dane

[1] http://kubernetes.io/docs/user-guide/services/#type-loadbalancer
[2] https://fedorapeople.org/groups/magnum/
[3] 

Re: [openstack-dev] Devstack, Tempest, and TLS

2016-09-28 Thread Clark Boylan


On Tue, Sep 27, 2016, at 02:58 PM, Clark Boylan wrote:
> Once multinode testing has tls-proxy enabled the next thing I think we
> should be talking about is enabling this by default in devstack. As
> mentioned before ironic doesn't work due to IPA images not trusting
> glance's cert. Swift's functional tests don't currently work against
> https keystone as they assume http as well. All this to say if you have
> a devstack plugin or testing that depends on devstack now would be a
> great time to turn on tls-proxy and see if your things work with it
> (easy mode is depends-on 373219). I think that if we can identify places
> where it doesn't work and fixing it would require a lot of effort we
> should just proactively disable it in the jobs. That way we can turn it
> on by default for the default vanilla case.

I forgot to mention here that devstack + tls-proxy + CentOS 7 is also
non functional. Something about apache restarts and reloads is different
here than on Fedora24, Ubuntu Trusty, and Ubuntu Xenial. Would be great
if someone that knows a lot more about CentOS were to take a look at
this.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][glance] glance Newton RC2 available

2016-09-28 Thread Doug Hellmann
Hello everyone,

A new release candidate for glance for the end of the Newton cycle
is available!  You can find the source code tarball at:

https://tarballs.openstack.org/glance/glance-13.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Newton release on 6 October. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/newton release
branch at:

http://git.openstack.org/cgit/openstack/glance/log/?h=stable/newton

If you find an issue that could be considered release-critical,
please file it at:

https://bugs.launchpad.net/glance/+filebug

and tag it *newton-rc-potential* to bring it to the glance release
crew's attention.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Petitboot and PXE

2016-09-28 Thread Michael Turek

Hey ironic-ers,

My team has a patch [1] up for enabling PXE for petitboot [2]. It's been 
around for awhile, and we actually use it in our PowerKVM CI ironic job 
(as our OpenPOWER target boxes run petitboot). I was hoping to get some 
eyes on it as we'd like to eventually get it upstream. I've recently 
stripped it down to what I think could be a reasonable change.


In short, this patch adds the use of DHCP option 210 [3] (TFTP path 
prefix) in the PXE module. The path-prefix points to where the 
pxelinux.cfg folder lives.


Petitboot is different from other PXE clients in that it doesn't use the 
pxelinux.0 boot file (or any boot file for that matter). Instead, 
petitboot handles all PXE functionality itself. The issue that arises 
from this is that pxelinux.0 by default derives the path-prefix from 
it's own location. Since petitboot doesn't use this boot file, the 
information must be specified through DHCP option 210.


I've tested this patch against target systems that use petitboot and 
systems that use the pxelinux.0 boot file and it seems to function properly.


While this is an arguably small change, I'm wondering is this should be 
going through the spec process. As well, I'm also wondering if there a 
better/preferred approach (ie - maybe providing path-prefix as a config 
option rather than always using it). I'd really appreciate any feedback!


Thanks,
Mike Turek

[1] https://review.openstack.org/#/c/185987
[2] 
https://www.kernel.org/pub/linux/kernel/people/geoff/petitboot/petitboot.html

[3] https://tools.ietf.org/html/rfc5071#section-5


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2016-09-28 Thread gordon chung


On 28/09/2016 4:27 PM, Jeremy Stanley wrote:
> As was pointed out elsewhere in the thread, the TC has been trying
> to do something along these lines at
> http://www.openstack.org/blog/category/technical-committee-updates/
> but even with a dedicated communication subteam (see the May 13,
> 2015 entry) attempting to summarize important decisions, it's often
> a struggle to determine which items are important enough to include
> in a periodic high-level summary and which are administrivia better
> left buried in meeting minutes and review comments. All in all I
> think Anne and Flavio have done an awesome job with it.

agreed! thanks to all who did this. definitely useful for me rather than 
having to sift through meeting logs which often lack some context from 
discussions outside logs.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2016-09-28 Thread gordon chung


On 28/09/2016 3:59 PM, Chris Dent wrote:
> On Wed, 28 Sep 2016, Jim Rollenhagen wrote:
>
>> And the git tree, with a changelog, is here:
>> http://git.openstack.org/cgit/openstack/governance/
>
> I assume, but I'd prefer if he confirm, that the point gordc was
> trying to make was that there's more to what the TC gets up to than
> merging changes to governance. That's certainly a major aspect and
> one can track those changes by tracking both of those resources.
>
> Part of the point I was trying to make in the message to which gordc was
> responding is that whereas a git tree can allow someone to dig through
> and acquire details, a thing that is more like release notes[1] is far
> more human oriented and more likely to operate as a consumable digest of
> what has happened. Notably a git log will not reflect important
> conversations that did not result in a governance change nor activity
> that could have led to a governance change but was rejected. Certainly
> where a community says "no" is just as important as where it says "yes"?
> Further, merged changes are changes that have already been decided. We
> need more engagement, more broadly, while decisions are being
> considered. That means being more verbose, sooner.

Chris, i should let you speak for me. much more concise. :)

i wasn't really looking to track conflicts but that is definitely an 
interesting metric. more often than not, stuff happens without many eyes 
even seeing it.

personally, i was wondering if there was more beyond governance repo but 
it seems like that is workflow: propose patch, discuss at meeting, merge.

>
> [1] Note that I don't actually think that release notes is the proper
> form for some extra communication from the TC. Rather the justifications
> that lead some projects to add release notes, in addition to the git
> log, are something to consider for TC activity.
>

cheers,
-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2016-09-28 Thread Jeremy Stanley
On 2016-09-28 20:59:09 +0100 (+0100), Chris Dent wrote:
> Part of the point I was trying to make in the message to which gordc was
> responding is that whereas a git tree can allow someone to dig through
> and acquire details, a thing that is more like release notes[1] is far
> more human oriented and more likely to operate as a consumable digest of
> what has happened. Notably a git log will not reflect important
> conversations that did not result in a governance change nor activity
> that could have led to a governance change but was rejected. Certainly
> where a community says "no" is just as important as where it says "yes"?
> Further, merged changes are changes that have already been decided. We
> need more engagement, more broadly, while decisions are being
> considered. That means being more verbose, sooner.
[...]

As was pointed out elsewhere in the thread, the TC has been trying
to do something along these lines at
http://www.openstack.org/blog/category/technical-committee-updates/
but even with a dedicated communication subteam (see the May 13,
2015 entry) attempting to summarize important decisions, it's often
a struggle to determine which items are important enough to include
in a periodic high-level summary and which are administrivia better
left buried in meeting minutes and review comments. All in all I
think Anne and Flavio have done an awesome job with it.

Also, as you say, this mostly just covers decisions made and
discussions concluded rather than bringing attention to upcoming
topics or those for which deliberation was arrested pending
subsequent input. It's probably not the answer you're looking for,
but https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee and
https://review.openstack.org/#/q/project:openstack/governance+is:open
are remarkably effective to that end.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2016-09-28 Thread Doug Wiegley

> On Sep 28, 2016, at 1:59 PM, Chris Dent  wrote:
> 
> On Wed, 28 Sep 2016, Jim Rollenhagen wrote:
> 
 +1 to release notes or something of that like. i was asked to give an
 update on the TC internally and it seems the only information out there
 is to read through backlog of meeting logs or track the items that do
 get raised to ML. even then, it's hard to define what deliverables were
 achieved in the cycle.
 
>>> 
>>> FWIW, the resolutions that passed are listed here:
>>> https://governance.openstack.org/
>> 
>> And the git tree, with a changelog, is here:
>> http://git.openstack.org/cgit/openstack/governance/
> 
> I assume, but I'd prefer if he confirm, that the point gordc was
> trying to make was that there's more to what the TC gets up to than
> merging changes to governance. That's certainly a major aspect and
> one can track those changes by tracking both of those resources.
> 
> Part of the point I was trying to make in the message to which gordc was
> responding is that whereas a git tree can allow someone to dig through
> and acquire details, a thing that is more like release notes[1] is far
> more human oriented and more likely to operate as a consumable digest of

The minutes and logs exist.

http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-09-27-20.01.html 


http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-09-27-20.01.log.html 


http://eavesdrop.openstack.org/meetings/tc/2016/ 




> what has happened. Notably a git log will not reflect important
> conversations that did not result in a governance change nor activity
> that could have led to a governance change but was rejected. Certainly
> where a community says "no" is just as important as where it says "yes"?
> Further, merged changes are changes that have already been decided. We
> need more engagement, more broadly, while decisions are being
> considered. That means being more verbose, sooner.
> 
> [1] Note that I don't actually think that release notes is the proper
> form for some extra communication from the TC. Rather the justifications
> that lead some projects to add release notes, in addition to the git
> log, are something to consider for TC activity.
> 
> -- 
> Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
> freenode: cdent tw: 
> @anticdent__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2016-09-28 Thread Chris Dent

On Wed, 28 Sep 2016, Jim Rollenhagen wrote:


+1 to release notes or something of that like. i was asked to give an
update on the TC internally and it seems the only information out there
is to read through backlog of meeting logs or track the items that do
get raised to ML. even then, it's hard to define what deliverables were
achieved in the cycle.



FWIW, the resolutions that passed are listed here:
https://governance.openstack.org/


And the git tree, with a changelog, is here:
http://git.openstack.org/cgit/openstack/governance/


I assume, but I'd prefer if he confirm, that the point gordc was
trying to make was that there's more to what the TC gets up to than
merging changes to governance. That's certainly a major aspect and
one can track those changes by tracking both of those resources.

Part of the point I was trying to make in the message to which gordc was
responding is that whereas a git tree can allow someone to dig through
and acquire details, a thing that is more like release notes[1] is far
more human oriented and more likely to operate as a consumable digest of
what has happened. Notably a git log will not reflect important
conversations that did not result in a governance change nor activity
that could have led to a governance change but was rejected. Certainly
where a community says "no" is just as important as where it says "yes"?
Further, merged changes are changes that have already been decided. We
need more engagement, more broadly, while decisions are being
considered. That means being more verbose, sooner.

[1] Note that I don't actually think that release notes is the proper
form for some extra communication from the TC. Rather the justifications
that lead some projects to add release notes, in addition to the git
log, are something to consider for TC activity.

--
Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][freezer] freezer Newton RC2 available

2016-09-28 Thread Doug Hellmann
Hello everyone,

A new release candidate for freezer for the end of the Newton cycle
is available!  You can find the source code tarball at:

https://tarballs.openstack.org/freezer/freezer-3.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, these candidates will be formally released as the
final Newton release on 6 October. You are therefore strongly
encouraged to test and validate these tarballs!

Alternatively, you can directly test the stable/newton release
branch at:

http://git.openstack.org/cgit/openstack/freezer/log/?h=stable/newton

If you find an issue that could be considered release-critical,
please file it at:

https://bugs.launchpad.net/freezer/+filebug

and tag it *newton-rc-potential* to bring it to the freezer release
crew's attention.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2016-09-28 Thread Jim Rollenhagen
>> +1 to release notes or something of that like. i was asked to give an
>> update on the TC internally and it seems the only information out there
>> is to read through backlog of meeting logs or track the items that do
>> get raised to ML. even then, it's hard to define what deliverables were
>> achieved in the cycle.
>>
>
> FWIW, the resolutions that passed are listed here:
> https://governance.openstack.org/

And the git tree, with a changelog, is here:
http://git.openstack.org/cgit/openstack/governance/

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][murano] murano Newton RC2 available

2016-09-28 Thread Doug Hellmann
Hello everyone,

A new release candidate for murano for the end of the Newton cycle
is available!  You can find the source code tarballs at:

https://tarballs.openstack.org/murano/murano-3.0.0.0rc2.tar.gz
https://tarballs.openstack.org/murano-agent/murano-agent-3.0.0.0rc2.tar.gz
https://tarballs.openstack.org/murano-dashboard/murano-dashboard-3.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, these candidates will be formally released as the
final
Newton release on 6 October. You are therefore strongly
encouraged to test and validate these tarballs!

Alternatively, you can directly test the stable/newton release
branch at:

http://git.openstack.org/cgit/openstack/murano/log/?h=stable/newton
http://git.openstack.org/cgit/openstack/murano-agent/log/?h=stable/newton
http://git.openstack.org/cgit/openstack/murano-dashboard/log/?h=stable/newton

If you find an issue that could be considered release-critical,
please file it at:

https://bugs.launchpad.net/murano/+filebug

and tag it *newton-rc-potential* to bring it to the murano release
crew's attention.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2016-09-28 Thread Steve Martinelli
On Wed, Sep 28, 2016 at 2:49 PM, gordon chung  wrote:

>
>
> On 28/09/2016 12:41 PM, Chris Dent wrote:
> >
> > * Information is not always clear nor clearly available, despite
> >   valiant efforts to maintain a transparent environment for the
> >   discussion of policy and process. There is more that can be done
> >   to improve engagement and communication. Maybe the TC needs
> >   release notes?
>
> +1 to release notes or something of that like. i was asked to give an
> update on the TC internally and it seems the only information out there
> is to read through backlog of meeting logs or track the items that do
> get raised to ML. even then, it's hard to define what deliverables were
> achieved in the cycle.
>
>
FWIW, the resolutions that passed are listed here:
https://governance.openstack.org/



> i found this updates page[1] but it seemed a little sparse so i imagine
> other stuff happened. i think more of this would help explain what the
> TC does and where it hopes to go because to be frank, i'm not sure what
> is in the scope of TC and what is not and i've been following the list
> and meetings for a while.
>
> [1] http://www.openstack.org/blog/category/technical-committee-updates/
>
> cheers,
>
> --
> gord
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][keystone] retiring python-keystoneclient-kerberos

2016-09-28 Thread Steve Martinelli
Hi there,

I would like to retire the python-keystoneclient-kerberos repo [1]. The
repo was pretty basic, it had a single auth plugin. The logic has since
been copied over to keystoneauth1 and provided you have kerberos libraries
installed the plugin will be available to you. The last release of
python-keystoneclient-kerberos  was on May 23rd 2016, which included a
deprecation warning. Note that the last release was version 0.3, so we're
talking very pre-1.0.

AFAICT, nothing uses the library any longer [2]. The only consumer that did
use it is django-openstack-auth-kerberos, which is switched over to
keystoneauth1, but has not been released in quite some time (Jun 9, 2015).
[3]

Selfishly, from a keystone perspective, I think we're in the clear and can
retire the repo. But I'm tagging horizon here to see what their plans are
for the django-openstack-auth-kerberos repo.

I think we need another release of django-openstack-auth-kerberos or a new
release of django_openstack_auth that also uses setuptools to optionally
install the kerberos libraries (this is what we did in keystoneauth).

Thoughts?
stevemar

[1] https://github.com/openstack/python-keystoneclient-kerberos
[2]
http://codesearch.openstack.org/?q=keystoneclient_kerberos=nope==
[3]
https://github.com/openstack/django-openstack-auth-kerberos/blob/master/requirements.txt#L9
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2016-09-28 Thread gordon chung


On 28/09/2016 12:41 PM, Chris Dent wrote:
>
> * Information is not always clear nor clearly available, despite
>   valiant efforts to maintain a transparent environment for the
>   discussion of policy and process. There is more that can be done
>   to improve engagement and communication. Maybe the TC needs
>   release notes?

+1 to release notes or something of that like. i was asked to give an 
update on the TC internally and it seems the only information out there 
is to read through backlog of meeting logs or track the items that do 
get raised to ML. even then, it's hard to define what deliverables were 
achieved in the cycle.

i found this updates page[1] but it seemed a little sparse so i imagine 
other stuff happened. i think more of this would help explain what the 
TC does and where it hopes to go because to be frank, i'm not sure what 
is in the scope of TC and what is not and i've been following the list 
and meetings for a while.

[1] http://www.openstack.org/blog/category/technical-committee-updates/

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Integration Tests Status

2016-09-28 Thread Rob Cresswell
Integration tests are now non-voting, so feel free to recheck failing builds.

Rob

On 28 September 2016 at 14:23, Rob Cresswell 
> wrote:
Hi all,

So the integration tests have started failing all over the place again. Given 
the number of failures over the past few weeks I've put up a patch to make them 
non-voting [1]. Please stop rechecking patches until this merges.

Reviewers, please be sure to check the test logs when they fail and be diligent 
about checking the UI.

1. https://review.openstack.org/#/c/378406/

Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][heat][octavia] Heat engine doesn't detect lbaas listener failures

2016-09-28 Thread Jiahao Liang
But in the lbaas db, all lbaas resources have the provisioning_status and
operating_status fields[1].
[1]
http://git.openstack.org/cgit/openstack/neutron-lbaas/tree/neutron_lbaas/db/loadbalancer/models.py#n352

Also there are apis which allows drivers to maintain them[2].
[2]
http://git.openstack.org/cgit/openstack/neutron-lbaas/tree/neutron_lbaas/agent/agent_manager.py#n255
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] The Ironic Grenade gate job is currently broken. Please don't do recheck on patches

2016-09-28 Thread Villalovos, John L
Currently the Ironic Grenade gate job is broken. Until the issue is resolved 
please don't do recheck on openstack/ironic patch.

Work is ongoing to figure out why it stopped working.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2016-09-28 Thread Chris Dent


Despite its name, the Technical Committee has become the part of the
OpenStack contributor community that enshrines, defines, and -- in some
rare cases -- enforces what it means to be "OpenStack". Meanwhile,
the community has seen a great deal of growth and change.

Some of these changes have led to progress and clarity, others have left
people confused about how they can best make a contribution and what
constraints their contributions must meet (for example, do we all know
what it means to be an "official" project?).

Much of the confusion, I think, can be traced to two things:

* Information is not always clear nor clearly available, despite
  valiant efforts to maintain a transparent environment for the
  discussion of policy and process. There is more that can be done
  to improve engagement and communication. Maybe the TC needs
  release notes?

* Agreements are made without the full meaning and implications of those
  agreements being collectively shared. Most involved think they agree,
  but there is limited shared understanding, so there is limited
  effective collaboration. We see this, for example, in the ongoing
  discussions on "What is OpenStack?". Agreement is claimed without
  actually existing.

We can fix this, but we need a TC that has a diversity of ideas and
experiences. Other candidates will have dramatically different opinions
from me. This is good because we must rigorously and vigorously question
the status quo and our assumptions. Not to tear things down, but to
ensure our ideas are based on present day truths and clear visions of
the future. And we must do this, always, where it can be seen and
joined and later discovered; gerrit and IRC are not enough.

To have legitimate representation on the Technical Committee we must
have voices that bring new ideas, are well informed about history, that
protect the needs of existing users and developers, encourage new users
and developers, that want to know how, that want to know why. No single
person can speak with all these voices.

Several people have encouraged me to run for the TC, wanting my
willingness to ask questions, to challenge the status quo and to drive
discourse. What I want is to use my voice to bring about frequent and
positive reevaluation.

We have a lot of challenges ahead. We want to remain a pleasant,
progressive and relevant place to participate. That will require
discovering ways to build bridges with other communities and within our
own. We need to make greater use of technologies which were not invented
here and be more willing to think about the future users, developers and
use cases we don't yet have (as there will always be more of those). We
need to keep looking and pushing forward.

To that end I'm nominating myself to be a member of the Technical
Committee.

If you have specific questions about my goals, my background or anything
else, please feel free to ask. I'm on IRC as cdent or send some email.
Thank you for your consideration.

--
Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-28 Thread Davanum Srinivas
Here you go Flavio, Sergey and team collected some information from
fuel-ccp efforts.

Design for OpenStack Containerized Control Plane :
https://review.openstack.org/#/c/378266/
Design document for clustering services on k8s :
https://review.openstack.org/#/c/378244/
Add test plan/results for fuel-ccp : https://review.openstack.org/#/c/378271/

Thanks,
Dims

On Tue, Sep 27, 2016 at 4:23 AM, Flavio Percoco  wrote:
> On 27/09/16 00:41 +, Fox, Kevin M wrote:
>>
>> I think some of the disconnect here is a potential misunderstanding about
>> what kolla-kubernetes is
>>
>> Ultimately, to me, kolla-kubernetes is a database of architecture bits to
>> successfully deploy and manage OpenStack on k8s. Its building blocks. Pretty
>> much what you asked for.
>>
>> There are a bunch of ways of building openstacks. There is no one true
>> way. It really depends on what the operator wants the cloud to do. Is a
>> daemonset or a petset the best way to deploy a cinder volume pod in k8s? The
>> answer is, it depends. (We have an example where one or the other is better
>> now)
>>
>> kolla-kubernetes is taking the building block approach. It takes a bit of
>> information in from the operator or other tool, along with their main
>> openstack configs, and generates k8s templates that are optimized for that
>> case.
>>
>> Who builds the configs, who tells it when to build what templates, and in
>> what order they are started is a separate thing.
>>
>> You should be able to do a 'kollakube template pod nova-api' and just see
>> what it thinks is best.
>>
>> If you want a nice set of documents, it should be easy to loop across them
>> and dump them to html.
>>
>> I think doing them in a machine readable way rather then a document makes
>> much more sense, as it can be reused in multiple projects such as tripleo,
>> fuel, and others and we all can share a common database. We're trying to
>> build a community around this database.
>>
>> Asking to basically make a new project, that does just a human only
>> readable version of the same database seems like a lot of work, with many
>> fewer useful outcomes.
>
>
> I just want to point out that I'm not asking anyone to make a new project
> and
> that my intention is to collect info from other projects too, not just
> kolla-kubernetes. This is a pure documentation effort. I understand you
> don't
> think this is useful and I appreciate your feedback.
>
> Flavio
>
>
>> Please help the community make a great machine and human readable
>> reference architecture system by contributing to the kolla-kubernetes
>> project. There are plenty of opportunity to help out.
>>
>> Maybe making some tools to make the data contained in the database more
>> human friendly would suit your interests? Maybe a nice web frontend that
>> asks a few questions and renders templates out in nice human friendly ways?
>>
>> Thanks,
>> Kevin
>> 
>> From: Flavio Percoco [fla...@redhat.com]
>> Sent: Monday, September 26, 2016 9:42 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture
>> to deploy OpenStack on k8s
>>
>> On 23/09/16 17:47 +, Steven Dake (stdake) wrote:
>>>
>>> Flavio,
>>>
>>> Forgive the top post and lack of responding inline – I am dealing with
>>> lookout 2016 which apparently has a bug here [0].
>>>
>>> Your question:
>>>
>>> I can contribute to kolla-kubernetes all you want but that won't give me
>>> what I
>>> asked for in my original email and I'm pretty sure there are opinions
>>> about the
>>> "recommended" way for running OpenStack on kubernetes. Questions like:
>>> Should I
>>> run rabbit in a container? Should I put my database in there too? Now
>>> with
>>> PetSets it might be possible. Can we be smarter on how we place the
>>> services in
>>> the cluster? Or should we go with the traditional
>>> controller/compute/storage
>>> architecture.
>>>
>>> You may argue that I should just read the yaml files from
>>> kolla-kubernetes and
>>> start from there. May be true but that's why I asked if there was
>>> something
>>> written already.
>>> Your question ^
>>>
>>> My answer:
>>> I think what you are really after is why kolla-kubernetes has made the
>>> choices we have made.  I would not argue that reading the code would answer
>>> that question because it does not.  Instead it answers how those choices
>>> were implemented.
>>>
>>> You are mistaken in thinking that contributing to kolla-kubernetes won’t
>>> give you what you really want.  Participation in the Kolla community will
>>> answer for you *why* choices were made as they were.  Many choices are left
>>> unanswered as of yet and Red Hat can make a big impact in the future of the
>>> decision making about *why*.  You have to participate to have your voice
>>> heard.  If you are expecting the Kolla team to write a bunch of
>>> documentation to explain *why* we have made the 

[openstack-dev] TC Candidacy

2016-09-28 Thread Ed Leafe
Hello! I am announcing my candidacy for a position on the OpenStack Technical
Committee.

For those who do not know me, I have been involved with OpenStack since the
very beginning, working for Rackspace as a core member of the Nova team. An
internal job change took me away from active development after Essex, but since
being hired by IBM, I've been back working on Nova since Kilo. As a result of
this long involvement, I have always had a strong interest in helping to shape
the direction of OpenStack, and if there is one thing people will agree about
me, is that I'm never shy about voicing my opinion, whether the majority agree
with me or not. Many of the earliest design decisions were very contentious,
and while I didn't always prevail in those discussions, I felt that I helped
move the conversation forward. More recently, I have participated in nearly all
TC meetings for the last two years, and now would like to join the TC as a
member.

There seems to be a lot of concern about the impact of the Big Tent, and how
all these new projects are diluting OpenStack, or somehow leading us astray
from what we should be doing. In my opinion, this is all a distraction.
Determining whether a project is "official" is simply a matter of controlling
the branding of OpenStack, and not changing what OpenStack is. If there is room
for improvement, it is in communicating what this means so that we eliminate
the confusion for those who are coming to OpenStack without this historical
knowledge.

One thing I feel strongly about is that since the Mission Statement for
OpenStack is "to produce the ubiquitous Open Source Cloud Computing
platform...", that what we do should always advance cloud *computing*. So while
I applaud the work being done by many of the telecommunication companies to
push the limits of network virtualization, unless it is useful to making
virtual machines communicate better, it really should be outside of OpenStack.
I do recognize that this is not a clear distinction, since someone can always
come up with a remote edge case where it could possibly be used, but we cannot
be all things to all people (or all companies). Having a clear focus is
important to success.

OpenStack is now over 6 years old, and that is forever in technology terms. And
while it has been continuously updated, these updates are restricted by the
requirement that they remain compatible with previous versions, and,
increasingly, that the updates are made with zero downtime. These are important
goals, and some very amazing work has been done to make them a reality. But one
of the consequences of this focus is that there is little serious discussion
about potential architectural changes that would greatly improve OpenStack, if
it requires downtime or breaking backwards compatibility. Suggestions for
experiments along these lines are usually met with the (very valid, in my
opinion) statement that we already have more development work than we can
handle, so diverting some of our resources to explore other possibilities would
set us further back. Unfortunately, this is the same argument that is used to
justify the build-up of technical debt. I would like to see us begin to think
about this, and have the TC direct this conversation, with input from
operators, the recently-formed Architecture Working Group, developers from the
various OpenStack projects, and any other interested parties. Yes, this is a
"moonshot" idea [0], but I believe that it is essential for the long-term
technical viability of OpenStack that we never stop looking ahead.

I have a great deal of respect for the other candidates who are seeking a
position on the TC, and thus understand that you, as a voter, have a difficult
job in selecting only six. I would indeed be honored if you would support me.

Thank you,
Ed

Email: e...@leafe.com
Foundation Profile: http://www.openstack.org/community/members/profile/280
Freenode: edleafe
Website: https://blog.leafe.com
Twitter: @edleafe

[0] https://en.wiktionary.org/wiki/moon_shot (definition 3)


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Getting the UI to talk with Undercloud API service endpoints

2016-09-28 Thread Dan Trainor
Hi -

I want to bring up a subject that needs a little more attention.  There are
a few ideas floating around but it's important that we get this done right.

UI is unique in the sense that it operates almost entirely in a browser,
talking directly to service API endpoints which it either figures out from
they Keystone service catalog as using the publicURL endpoint for each
service, or by specifying these API endpoints in a configuration file.
Though overriding the API endpoints in the UI's local configuration file is
an option that's available, I understand that we want to move towards
relying exclusively on Keystone for accurate and correct endpoint
configuration.

Right now, all of the service API endpoints that UI needs to talk with are
only listening on the ctlplane network.

We've had several iterations of testing and development of the UI over time
and as a result of that, three different solutions that work - depending on
the exact circumstances - have been created which all can achieve the same
goal of allowing the UI to talk to these endpoints:

- Local SSH port tunneling of the required ports that UI talks to, from the
system running the UI to the Undercloud, and configuring the UI to talk to
localhost:. This method "works", but it's not a solution we can
recommend
- Making the interface on which these services already listen on - the
ctlplane network - routable.  Again, this method "works", but we treat this
interface in a very special manner on purpose, not the least of which
because of it's ability to facilitate pxebooting
- Change the public endpoints in the Keystone catalog to be that of the
existing external, routable interface of the Undercloud for each service
required by the UI.  This also requires configuring each service that UI
needs to talk with, to listen on the existing, external, routable interface
on the Undercloud.  Some services support a list of interfaces and IPs to
listen on; others require exactly one argument, in which case the address
of 0.0.0.0 would need to be used

According to the API Endpoint Configuration Recommendation guide[1], the
third option seems most viable and one that we can recommend.  The document
briefly describes the security implications of having these services open
on a public interface but recommends the use of a stringent network policy
- something we're used to recommending and helping with.  The first two
options, not so much.

Based on discussions I've had with other people, it's my impression that
the third option is likely the one that we should proceed with.

This concern is largely transparent to how we're currently testing and
developing the UI because most of that work is done on local, virtualized
environments.  When this happens, libvirt does the heavy lifting of
creating a network that's easily routable from the host system.  If not
that, then the evolution of instructions for setting up these local
environments over time have recommended using SSH port forwarding.
However, neither of these options should be recommended.

Thoughts?

Thanks
-dant

--

P.S. and full disclosure:  I'm biased towards the third option.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposing David Moreau Simard part of Puppet OpenStack CI core team

2016-09-28 Thread Iury Gregory
+1 from me, David is doing an awesome job in p-o-i =)

2016-09-28 13:08 GMT-03:00 Rich Megginson :

> On 09/28/2016 10:06 AM, Emilien Macchi wrote:
>
>> Until now, we had no specific team for dealing with Puppet OpenStack
>> CI (aka openstack/puppet-openstack-integration project).
>> But we have noticed that David was doing consistent work to contribute
>> to Puppet OpenStack CI by adding more coverage, but also helping when
>> things are broken.
>> David is always here to help us to make testing better.
>>
>> David is working on RDO Infra and re-use Puppet OpenStack CI tooling
>> to test OpenStack, so he has a perfect knowledge at how Puppet
>> OpenStack CI is working.
>>
>> I would like to request feedback from our community about creating
>> this new Gerrit group (where we would include existing Puppet
>> OpenStack core groups into it), and also include David into it.
>>
>
> +1 for David, whatever he's working on
>
>
>> Thanks,
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

~


*Att[]'sIury Gregory Melo Ferreira **Master student in Computer Science at
UFCG*
*E-mail:  iurygreg...@gmail.com *
~
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2016-09-28 Thread Jim Rollenhagen
I'd like to throw in my hat to serve the community as a TC member.

My name is Jim Rollenhagen, but I'm better known in the community as jroll.
I've worked in many environments, from 20-person startup to massive
corporations. For the past three years, I've been working on OpenStack at
Rackspace. I primarily work on ironic (where I just started my third term as
PTL), but also dabble heavily in Nova, and try to contribute to cross-project
teams (mostly infra, QA, and Oslo) when I can.

I believe the primary objective of the TC should be to serve the community.
There's a few things we can immediately do to improve. First is the ongoing
effort to document principles and expectations. There's a massive amount of
shared understanding among the leaders in our community (and especially the
current TC) that isn't necessarily known or shared by the rest of the
community. We need to write down the current state of that. The principles
document does this well; but that's only the start. We need to continue to
document expectations for projects in the big tent, expectations for PTLs and
liaisons, and where we want OpenStack to be long term. We often focus on the
short term without thinking about how things support our longer-term goals, and
I'd like to fix that by writing down our vision for the future.

Over the last year, folks keep talking about the big tent, and how it has
watered down the meaning or focus of OpenStack. This is true today, at some
level. However, I believe this is short-term pain while we are moving to a
better place. I don't believe the solution is to go back to the old way of
life. Rather, we should roll forward and help to make the big tent better.
Going back will only create more confusion, and will bring the TC back to the
days of evaluating the usefulness and technical excellence of projects - which
we already have found is untenable. We have common ways of doing many things,
but those aren't well-documented and so newer projects simply do things the way
they think is best, or fastest, or the way it's done in the first project they
look to source ideas from. For example, I know of at least two or three ways
that microversioning is implemented.  There are two ways projects are
implementing rolling upgrades. And that might be okay; but they need to be
documented somewhere that all projects can benefit from. We should even go
further, and build frameworks for common things like these that OpenStack
projects tend to value. I believe the TC (working with folks like the
architecture WG, etc) could (and should!) be the body to help implement and
drive this sort of work. The new goals process is one step toward this, and I
think it's a great start. If we can truly make the big tent a more coherent set
of projects, I think it will be a huge win for everyone - not just developers
that need a home for their project.

The ironic project went through incubation just before the big tent went into
effect, and as such was one of the first projects to need to work with some of
the constraints (i.e., not be a first-class member of many of the cross-project
teams). We've implemented devstack and tempest plugins, in-tree API reference,
and in-tree install guide. To accomplish some of these, we needed to contribute
both code and documentation into these projects. I think my experience there
helps me relate to newer big tent projects that struggle with some of these
initiatives. I look forward to leading efforts to make this less of a burden on
projects.

I would be honored to serve the community from a TC seat, if elected. Whether
or not I am elected, I hope to work on some or all of these items over the next
two cycles, but I believe I will be in a better position to get these done from
within the TC.


// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposing David Moreau Simard part of Puppet OpenStack CI core team

2016-09-28 Thread Rich Megginson

On 09/28/2016 10:06 AM, Emilien Macchi wrote:

Until now, we had no specific team for dealing with Puppet OpenStack
CI (aka openstack/puppet-openstack-integration project).
But we have noticed that David was doing consistent work to contribute
to Puppet OpenStack CI by adding more coverage, but also helping when
things are broken.
David is always here to help us to make testing better.

David is working on RDO Infra and re-use Puppet OpenStack CI tooling
to test OpenStack, so he has a perfect knowledge at how Puppet
OpenStack CI is working.

I would like to request feedback from our community about creating
this new Gerrit group (where we would include existing Puppet
OpenStack core groups into it), and also include David into it.


+1 for David, whatever he's working on



Thanks,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Proposing David Moreau Simard part of Puppet OpenStack CI core team

2016-09-28 Thread Emilien Macchi
Until now, we had no specific team for dealing with Puppet OpenStack
CI (aka openstack/puppet-openstack-integration project).
But we have noticed that David was doing consistent work to contribute
to Puppet OpenStack CI by adding more coverage, but also helping when
things are broken.
David is always here to help us to make testing better.

David is working on RDO Infra and re-use Puppet OpenStack CI tooling
to test OpenStack, so he has a perfect knowledge at how Puppet
OpenStack CI is working.

I would like to request feedback from our community about creating
this new Gerrit group (where we would include existing Puppet
OpenStack core groups into it), and also include David into it.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Nominating Zhu Rong for Solum core

2016-09-28 Thread James Y. Li
+1

Look forward to more contributions from you!


On Wed, Sep 28, 2016 at 8:32 AM, Devdatta Kulkarni <
kulkarni.devda...@gmail.com> wrote:

> Hi team,
>
> I would like to propose Zhu Rong (irc: zhurong) to be included as Solum
> core.
>
> Zhu Rong has been very active in different parts of Solum over several
> months now.
> His primary contribution has been moving Solum to use Oslo libraries,
> thereby
> making our project satisfy one of the project-wide goals suggested
> by TC for this cycle [1]. He has also been deeply involved in guiding
> and contributing to the work on Solum's horizon dashboard, which was
> started by
> Swati Dewan as part of her Outreachy internship this summer.
> Zhu Rong also actively participates on Solum IRC channel, regularly
> attends IRC meetings,
> and provides great feedback on patches.
>
> You can find Zhu Rong's activity here:
>
> http://stackalytics.com/?module=solum-group_id=zhu-rong
> http://stackalytics.com/?module=solum-dashboard_id=zhu-rong
>
> Please respond with your votes.
>
> Regards,
> Devdatta
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-
> August/101348.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections][TC] TC Candidacy

2016-09-28 Thread Nikhil Komawar
I noticed a good statement, marked it inline.


On 9/28/16 7:24 AM, John Davidge wrote:
> Hi Zane,
>
> Thanks for pointing this out! My interpretation of the StackForge
> Retirement page[1] was wrong on that point. I've updated the blog post to
> reflect that (without removing the original interpretation).
>
> The discussion about renaming git repos is a bit of a red herring, because
> what we're really talking about is what it *means* to be in Stackforge vs.
> OpenStack vs. OpenStack Family, not which git namespace a project should
> live in. Apologies if I didn't make that clear.
>
> Like many of us, I do my best to keep up with historical context, but when
> there is so much contradictory information/opinion out there about what
> OpenStack is/isn't was/wasn't it can be a struggle at times. The crux of

"""
> my proposal is aiming to solve that by not trying to be everything to
> everyone under one tent - by defining sensible boundaries to separate the
> different goals of the community.
"""

+1000 to sensible boundaries. the key factor is to strike the balance.
>
> All the best,
>
> John
>
> [1] https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement
>
> On 9/27/16, 5:13 PM, Zane Bitter wrote:
>
>> On 27/09/16 06:19, John Davidge wrote:
 Having Stackforge as a separate Github organization and set of
> repositories was a maintenance nightmare due to the awkwardness of
> renaming projects when they "moved into OpenStack".
>>> There's no reason that this would need a separate github structure, just
>>> separate messaging and rules.
>> That's exactly what we have now.
>>
>> This statement on your blog:
>>
>> "[StackForge] was retired in October 2015, at which point all projects
>> had to move into the OpenStack Big Tent or leave entirely."
>>
>> is completely false. That never happened. There are still plenty of
>> repos on git.openstack.org that are not part of the Big Tent. At no time
>> has any project been required to join the Big Tent in order to continue
>> being hosted.
>>
>> Maybe you should consider reading up on the historical background to
>> these changes. There are a lot of constraints that have to be met - from
>> technical ones like the fact that it's not feasible to rename git repos
>> when they move into or out of the official OpenStack project, to legal
>> ones like how the TC has to designate projects in order to trigger
>> certain rights and responsibilities in the (effectively immutable)
>> Foundation by-laws. Rehashing all of the same old discussions without
>> reference to these constraints is unlikely to be productive.
>>
>> cheers,
>> Zane.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> 
> Rackspace Limited is a company registered in England & Wales (company 
> registered number 03897010) whose registered office is at 5 Millington Road, 
> Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
> viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
> contain confidential or privileged information intended for the recipient. 
> Any dissemination, distribution or copying of the enclosed material is 
> prohibited. If you receive this transmission in error, please notify us 
> immediately by e-mail at ab...@rackspace.com and delete the original message. 
> Your cooperation is appreciated.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Nominating Zhu Rong for Solum core

2016-09-28 Thread Vijendar Komalla
+1

From: Devdatta Kulkarni 
Date: Wed, Sep 28, 2016 at 8:32 AM
Subject: [Solum] Nominating Zhu Rong for Solum core
To: openstack-dev@lists.openstack.org

Hi team,

I would like to propose Zhu Rong (irc: zhurong) to be included as Solum core.

Zhu Rong has been very active in different parts of Solum over several months 
now.
His primary contribution has been moving Solum to use Oslo libraries, thereby
making our project satisfy one of the project-wide goals suggested
by TC for this cycle [1]. He has also been deeply involved in guiding
and contributing to the work on Solum's horizon dashboard, which was started by 
Swati Dewan as part of her Outreachy internship this summer.
Zhu Rong also actively participates on Solum IRC channel, regularly attends IRC 
meetings, 
and provides great feedback on patches.

You can find Zhu Rong's activity here:

http://stackalytics.com/?module=solum-group_id=zhu-rong
http://stackalytics.com/?module=solum-dashboard_id=zhu-rong

Please respond with your votes.

Regards,
Devdatta

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-August/101348.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-28 Thread Joshua Harlow

Matt Riedemann wrote:

On 9/28/2016 12:10 AM, Joshua Harlow wrote:

ACTION: we should make sure workarounds are advertised better
ACTION: we should have some document about "when cells"?

This is a difficult question to answer because "it depends." It's akin
to asking "how many nova-api/nova-conductor processes should I run?"
Well, what hardware is being used, how much traffic do you get, is it
bursty or sustained, are instances created and left alone or are they
torn down regularly, do you prune your database, what version of rabbit
are you using, etc...

I would expect the best answer(s) to this question are going to come
from the operators themselves. What I've seen with cellsv1 is that
someone will decide for themselves that they should put no more than X
computes in a cell and that information filters out to other operators.
That provides a starting point for a new deployment to tune from.


I don't think we need "don't go larger than N nodes" kind of advice. But
we should probably know what kinds of things we expect to be hot spots.
Like mysql load, possibly indicated by system load or high level of db
conflicts. Or rabbit mq load. Or something along those lines.

Basically the things to look out for that indicate your are approaching
a scale point where cells is going to help. That also helps in defining
what kind of scaling issues cells won't help on, which need to be
addressed in other ways (such as optimizations).


Big +1 if we can really get out of the behavior/pattern of
thinking/thought of guessing at the overall system characteristics
*somehow* I think it would be great for our own communities maturity and
for each project/s. Even though I know such things are hard, it scares
the bejeezus out of me when we (as a group) create software but can't
give recommendations on its behavioral characteristics (we aren't doing
quantum physics here the last time I checked).

Just some ideas:

* Rally maybe can help here?
* Fixing a standard set of configuration options and testing that at
scale (using the intel lab?) - and then possibly using rally (or other)
to probe the system characteristics and then giving recommendations
before releasing the software for general consumption based on observed
system characteristics (this is basically what operators are going to
have to do anyway to qualify a release, especially if the community
isn't doing it and/or is shying away from doing it).

I just have a hard time accepting that tribal knowledge about scale that
has to filter from operators to operator (yes I know from personal
experience this is how things trickled down) is a good way to go. It
reminds me of the medicine and practices in the late 1800s where all
sorts of quackery science was happening; and IMHO we can do better than
this :)


Hmm, that reminds me that I'm running low on leeches...



Don't forget your mercury and radioactive toothpaste[1] also, they 
perform miracles I tell you (or that's what I've heard) :)


[1] https://en.wikipedia.org/wiki/Doramad_Radioactive_Toothpaste

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Skip next meeting

2016-09-28 Thread Vitaly Gridnev
Hi team,

Since all of us are preparing for future summit and there is no specific
topics to cover, I think that we should skip meeting tomorrow at Sept 29.
If there is topic to discuss, please write email to mailing list about your
topic.

PS: Please, continue sharing your ideas about future summit, so I'm
recommending to spend time of meeting for filling up etherpad [0] with your
ideas.

[0] https://etherpad.openstack.org/p/sahara-ocata-summit

-- 
Best Regards,
Vitaly Gridnev,
Project Technical Lead of OpenStack DataProcessing Program (Sahara)
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-28 Thread Matt Riedemann

On 9/28/2016 12:10 AM, Joshua Harlow wrote:

ACTION: we should make sure workarounds are advertised better
ACTION: we should have some document about "when cells"?

This is a difficult question to answer because "it depends." It's akin
to asking "how many nova-api/nova-conductor processes should I run?"
Well, what hardware is being used, how much traffic do you get, is it
bursty or sustained, are instances created and left alone or are they
torn down regularly, do you prune your database, what version of rabbit
are you using, etc...

I would expect the best answer(s) to this question are going to come
from the operators themselves. What I've seen with cellsv1 is that
someone will decide for themselves that they should put no more than X
computes in a cell and that information filters out to other operators.
That provides a starting point for a new deployment to tune from.


I don't think we need "don't go larger than N nodes" kind of advice. But
we should probably know what kinds of things we expect to be hot spots.
Like mysql load, possibly indicated by system load or high level of db
conflicts. Or rabbit mq load. Or something along those lines.

Basically the things to look out for that indicate your are approaching
a scale point where cells is going to help. That also helps in defining
what kind of scaling issues cells won't help on, which need to be
addressed in other ways (such as optimizations).


Big +1 if we can really get out of the behavior/pattern of
thinking/thought of guessing at the overall system characteristics
*somehow* I think it would be great for our own communities maturity and
for each project/s. Even though I know such things are hard, it scares
the bejeezus out of me when we (as a group) create software but can't
give recommendations on its behavioral characteristics (we aren't doing
quantum physics here the last time I checked).

Just some ideas:

* Rally maybe can help here?
* Fixing a standard set of configuration options and testing that at
scale (using the intel lab?) - and then possibly using rally (or other)
to probe the system characteristics and then giving recommendations
before releasing the software for general consumption based on observed
system characteristics (this is basically what operators are going to
have to do anyway to qualify a release, especially if the community
isn't doing it and/or is shying away from doing it).

I just have a hard time accepting that tribal knowledge about scale that
has to filter from operators to operator (yes I know from personal
experience this is how things trickled down) is a good way to go. It
reminds me of the medicine and practices in the late 1800s where all
sorts of quackery science was happening; and IMHO we can do better than
this :)


Hmm, that reminds me that I'm running low on leeches...



Anyway, back to your regularly scheduled programming,

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Newton post-mortem

2016-09-28 Thread Jim Rollenhagen
Similar to keystone/neutron/nova, I thought it would be healthy for
us to do a post-mortem on the Newton cycle. Here's the etherpad:
https://etherpad.openstack.org/p/ironic-newton-retrospective

Let's try to keep it troll-free (I know it's hard).

Also note I put a poll about having a summit session on the etherpad,
please do include that. We can always talk about it Friday afternoon if
we don't have an explicit session on it, too.

Here's the other teams' retros as an example:
https://etherpad.openstack.org/p/nova-newton-retrospective
https://etherpad.openstack.org/p/keystone-newton-retrospective
https://review.openstack.org/#/c/360207/12

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Nominating Zhu Rong for Solum core

2016-09-28 Thread Devdatta Kulkarni
Hi team,

I would like to propose Zhu Rong (irc: zhurong) to be included as Solum
core.

Zhu Rong has been very active in different parts of Solum over several
months now.
His primary contribution has been moving Solum to use Oslo libraries,
thereby
making our project satisfy one of the project-wide goals suggested
by TC for this cycle [1]. He has also been deeply involved in guiding
and contributing to the work on Solum's horizon dashboard, which was
started by
Swati Dewan as part of her Outreachy internship this summer.
Zhu Rong also actively participates on Solum IRC channel, regularly attends
IRC meetings,
and provides great feedback on patches.

You can find Zhu Rong's activity here:

http://stackalytics.com/?module=solum-group_id=zhu-rong
http://stackalytics.com/?module=solum-dashboard_id=zhu-rong

Please respond with your votes.

Regards,
Devdatta

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101348.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Barcelona design sessions

2016-09-28 Thread Afek, Ifat (Nokia - IL)
Hi,

There will be three Vitrage design sessions in Barcelona:

Wednesday 15:05-15:45: workroom
Wednesday 15:55-16:35: fishbowl 
Wednesday 17:55-18:35: workroom

I gathered the ideas that were raised in the etherpad below, and created a 
draft for the design session discussions. I’ll be happy to get your feedback:
https://etherpad.openstack.org/p/vitrage-barcelona-design-sessions

Thanks,
Ifat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Next API meeting cancelled

2016-09-28 Thread Jim Rollenhagen
On Thu, Sep 22, 2016 at 2:30 PM, Devananda van der Veen
 wrote:
> Considering I'm the only one currently working on it / bringing up new 
> topics, I
> think biweekly is a better match to the pace I'm working on this.
>
> However, I hope that changes after the summit / once we start implementing 
> these
> changes.

That's a good point, I'll just leave it weekly and we'll see how
things go post-summit.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Integration Tests Status

2016-09-28 Thread Rob Cresswell
Hi all,

So the integration tests have started failing all over the place again. Given 
the number of failures over the past few weeks I've put up a patch to make them 
non-voting [1]. Please stop rechecking patches until this merges.

Reviewers, please be sure to check the test logs when they fail and be diligent 
about checking the UI.

1. https://review.openstack.org/#/c/378406/

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-28 Thread Clint Byrum
Excerpts from Andrew Laski's message of 2016-09-27 11:36:07 -0400:
> Hello all,
> 
> Recently I noticed that people would look at logs from a Zuul née
> Jenkins CI run and comment something like "there seem to be more
> warnings in here than usual." And so I thought it might be nice to
> quantify that sort of thing so we didn't have to rely on gut feelings.
> 
> So I threw together https://review.openstack.org/#/c/376531 which is a
> script that lives in the Nova tree, gets called from a devstack-gate
> post_test_hook, and outputs an n-stats.json file which can be seen at
> http://logs.openstack.org/06/375106/8/check/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/e103612/logs/n-stats.json.
> This provides just a simple way to compare two runs and spot large
> changes between them. Perhaps later things could get fancy and these
> stats could be tracked over time. I am also interested in adding stats
> for things that are a bit project specific like how long (max, min, med)
> it took to boot an instance, or what's probably better to track is how
> many operations that took for some definition of an operation.
> 
> I received some initial feedback that this might be a better fit in the
> os-loganalyze project so I took a look over there. So I cloned the
> project to take a look and quickly noticed
> http://git.openstack.org/cgit/openstack-infra/os-loganalyze/tree/README.rst#n13.
> That makes me think it would not be a good fit there because what I'm
> looking to do relies on parsing the full file, or potentially multiple
> files, in order to get useful data.
> 
> So my questions: does this seem like a good fit for os-loganalyze? If
> not is there another infra/QA project that this would be a good fit for?
> Or would people be okay with a lone project like Nova implementing this
> in tree for their own use?
> 

I wonder if we could combine forces on this. You want to report log
stats. I want to report query/messaging stats:

http://specs.openstack.org/openstack/qa-specs/specs/devstack/counter-inspection.html

I've never finished this one completely, but basically I wrote a thing
that inspects performance counters before and after the tempest run,
and outputs JSON which gets picked up and inserted into the subunit
stream as a subunit attachment.

From there, the part that isn't finished, is a job that picks up that
attachment and sprays the numbers into statsd/graphite, allowing trend
analysis. One can also of course just go look at the json file to find
out useful things like how many SQL queries were run, or how many rows
were read out of the database.

Wherever your code lands, it feels like it would be useful to have those
numbers graphed.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] Multi-node controller testing

2016-09-28 Thread Miguel Angel Ajo Pelayo
I just found this one created recently, and I will try to build on top of it:

https://review.openstack.org/#/c/371807/12



On Wed, Sep 28, 2016 at 1:52 PM, Miguel Angel Ajo Pelayo
 wrote:
> Refloating this thread.
>
> I posted this rfe/bug [1], and I'm planning to come up with an
> experimental job that triggers one of the basic neutron/lbaas tests
> with octavia.
>
> I wonder if even picking up the scenario one for now could make sense,
> it's not very stable at the moment, but may be spreading the load of
> VM creations between two compute nodes could, may be, ease it ?
>
> [1] https://bugs.launchpad.net/octavia/+bug/1628481
>
> On Thu, Aug 11, 2016 at 4:24 PM, Roman Vasilets  
> wrote:
>> Hi,
>>   "need to have something (tempest-plugin) to make sure that integration
>> works with nova & neutron" - Its easy to write scenarios that will test that
>> octavia works with nova and neutron
>>   "I guess rally is more suited to make sure that things work at scale, to
>> uncover any sort of race conditions (This would be specially beneficial in
>> multinode controllers)" - Rally is suitable for many kind of tests=)
>> Especially for testing at scale! If you have any question how to use Rally
>> feel free to ask Rally team!
>>
>> - Best regards, Roman Vasylets. Rally team member
>>
>> On Thu, Aug 11, 2016 at 11:46 AM, Miguel Angel Ajo Pelayo
>>  wrote:
>>>
>>> On Wed, Aug 10, 2016 at 9:51 PM, Stephen Balukoff 
>>> wrote:
>>> > Miguel--
>>> >
>>> > There have been a number of tempest patches in the review queue for a
>>> > long
>>> > time now, but I think the reason they're not getting attention is that
>>> > we
>>> > don't want to have to import a massive amount of tempest code into our
>>> > repository (which will become stale and need hot-fixing, as has happened
>>> > with neutron-lbaas on many occasions), and it appears tempest-lib
>>> > doesn't
>>> > yet support all the stuff we would need to do with it.
>>>
>>> I guess you mean [1]
>>>
>>>
>>> > People have suggested Rally, but so far nobody has come forth with code,
>>> > or
>>> > a strong desire to push it through.
>>>
>>> I guess rally is more suited to make sure that things work at scale,
>>> to uncover any sort of race conditions (This would be specially
>>> beneficial in multinode controllers).
>>>
>>> But I understand (I can be wrong) that we still need to have something
>>> (tempest-plugin) to make sure that integration works with nova &
>>> neutron. I'm going to check those patches to see what was the
>>> discussion and issues over there (I see this one [1] to start with,
>>> which is probably the most important)
>>>
>>> [1]
>>> https://review.openstack.org/#/q/status:open+project:openstack/octavia+branch:master+topic:octavia_basic_lb_scenario
>>>
>>> [2] https://review.openstack.org/#/c/172199/66..75/.testr.conf
>>>
>>>
>>> > Stephen
>>> >
>>> > On Tue, Aug 9, 2016 at 5:40 AM, Miguel Angel Ajo Pelayo
>>> >  wrote:
>>> >>
>>> >> On Mon, Aug 8, 2016 at 4:56 PM, Kosnik, Lubosz
>>> >> 
>>> >> wrote:
>>> >> > Great work with that multi-node setup Miguel.
>>> >>
>>> >> Thanks, I have to get my hands dirtier with octavia, it's just a tiny
>>> >> thing.
>>> >>
>>> >> > About that multinode Infra is supporting two nodes setup used
>>> >> > currently
>>> >> > by grenade jobs but in my opinion we don’t have any tests which can
>>> >> > cover
>>> >> > that type of testing. We’re still struggling with selecting proper
>>> >> > tool to
>>> >> > test Octavia from integration/functional perspective so probably it’s
>>> >> > too
>>> >> > early to make it happen.
>>> >>
>>> >>
>>> >> Well, any current tests we run should pass equally well in a multi
>>> >> node controller, and that's the point, that, regardless of the
>>> >> deployment architecture the behaviour shall not change at all. We may
>>> >> not need any specific test.
>>> >>
>>> >>
>>> >> > Maybe it’s great start to finally make some decision about testing
>>> >> > tools
>>> >> > and there will be a lot of work for you after that also with setting
>>> >> > up an
>>> >> > infra multi-node job for that.
>>> >>
>>> >> I'm not fully aware of what are we running today for octavia, so if
>>> >> you can give me some pointers about where are those jobs configured,
>>> >> and what do they target, it could be a start, to provide feedback.
>>> >>
>>> >> What are the current options/tools we're considering?
>>> >>
>>> >>
>>> >> >
>>> >> > Cheers,
>>> >> > Lubosz Kosnik
>>> >> > Cloud Software Engineer OSIC
>>> >> > lubosz.kos...@intel.com
>>> >> >
>>> >> >> On Aug 8, 2016, at 7:04 AM, Miguel Angel Ajo Pelayo
>>> >> >>  wrote:
>>> >> >>
>>> >> >> Recently, I sent a series of patches [1] to make it easier for
>>> >> >> developers to deploy a multi node octavia controller with
>>> >> >> n_controllers x [api, cw, hm, hk] with an haproxy in front of the
>>> >> >> 

[openstack-dev] [networking-cisco] networking-cisco IRC meetings

2016-09-28 Thread Sam Betts (sambetts)
TL;DR networking-cisco IRC meetings will be held bi-weekly on 
#openstack-meeting-3 starting from 2016/10/11

With the results of the doodle poll, and some pointers from Steve and Jeremy 
(thanks guys for the info) I've successfully organised the time and place for 
the networking-cisco IRC meeting. Initially the plan was to hold the meeting 
weekly on the openstack-networking-cisco IRC channel, however it is encouraged 
that we use the openstack-meeting channels for these sort of meetings. So with 
that in mind I found us a slot on the openstack-meeting-3 channel when the 
majority of people who responded to the poll where available. The negative to 
having it at a time that is convenient for most people is that the it is a very 
popular time slot for holding meetings, so to fit around the other projects, we 
will hold our meeting bi-weekly instead of weekly. If this proves too irregular 
then we can reschedule at a later date.  I look forward to our first meeting, 
the details can be found here: 
https://wiki.openstack.org/wiki/Meetings/networking-cisco. Please add any 
topics you want to discuss to the agenda, and if you want to talk about 
networking-cisco between now and the first meeting I encourage people to join 
the #openstack-networking-cisco channel for day to day discussions about 
networking-cisco development.

Sam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-28 Thread Andrew Laski


On Wed, Sep 28, 2016, at 07:15 AM, Sean Dague wrote:
> On 09/27/2016 06:19 PM, Andrew Laski wrote:
> 
> > 
> > I totally understand where you're coming from. I just see it
> > differently.
> > 
> > The way it was done did not affect any other projects, and the plumbing
> > used is something that could have easily be left in. And as luck would
> > have it there's another patch up to use that same plumbing so it's
> > probably going in anyways.
> > 
> > I completely agree that before expanding this to be in any way cross
> > project it's worth figuring out a better way to do it. But at this point
> > I don't feel comfortable enough with a long term vision to tackle that.
> > I would much prefer to experiment in a small way before moving forward.
> 
> Ok, so what if we go forward with your existing patch, but agree only to
> replace the post_test_hook on Nova specific jobs, like the placement
> one. (The change proposed here -
> https://review.openstack.org/#/c/376537/6/jenkins/jobs/nova.yaml). That
> should give enough data and experimentation, and it show up for every
> nova patch.
> 
> Before migrating this kind of approach to integrated gate jobs, we
> revisit a way to make sure that multiple projects can plumb content like
> this. Be it, expanding the test hook model, or a common post hook
> project that could have core members from multiple teams, or some
> generic yaml thing (though I agree, that's a lot harder to wrap my head
> around).

Deal.

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] Multi-node controller testing

2016-09-28 Thread Miguel Angel Ajo Pelayo
Refloating this thread.

I posted this rfe/bug [1], and I'm planning to come up with an
experimental job that triggers one of the basic neutron/lbaas tests
with octavia.

I wonder if even picking up the scenario one for now could make sense,
it's not very stable at the moment, but may be spreading the load of
VM creations between two compute nodes could, may be, ease it ?

[1] https://bugs.launchpad.net/octavia/+bug/1628481

On Thu, Aug 11, 2016 at 4:24 PM, Roman Vasilets  wrote:
> Hi,
>   "need to have something (tempest-plugin) to make sure that integration
> works with nova & neutron" - Its easy to write scenarios that will test that
> octavia works with nova and neutron
>   "I guess rally is more suited to make sure that things work at scale, to
> uncover any sort of race conditions (This would be specially beneficial in
> multinode controllers)" - Rally is suitable for many kind of tests=)
> Especially for testing at scale! If you have any question how to use Rally
> feel free to ask Rally team!
>
> - Best regards, Roman Vasylets. Rally team member
>
> On Thu, Aug 11, 2016 at 11:46 AM, Miguel Angel Ajo Pelayo
>  wrote:
>>
>> On Wed, Aug 10, 2016 at 9:51 PM, Stephen Balukoff 
>> wrote:
>> > Miguel--
>> >
>> > There have been a number of tempest patches in the review queue for a
>> > long
>> > time now, but I think the reason they're not getting attention is that
>> > we
>> > don't want to have to import a massive amount of tempest code into our
>> > repository (which will become stale and need hot-fixing, as has happened
>> > with neutron-lbaas on many occasions), and it appears tempest-lib
>> > doesn't
>> > yet support all the stuff we would need to do with it.
>>
>> I guess you mean [1]
>>
>>
>> > People have suggested Rally, but so far nobody has come forth with code,
>> > or
>> > a strong desire to push it through.
>>
>> I guess rally is more suited to make sure that things work at scale,
>> to uncover any sort of race conditions (This would be specially
>> beneficial in multinode controllers).
>>
>> But I understand (I can be wrong) that we still need to have something
>> (tempest-plugin) to make sure that integration works with nova &
>> neutron. I'm going to check those patches to see what was the
>> discussion and issues over there (I see this one [1] to start with,
>> which is probably the most important)
>>
>> [1]
>> https://review.openstack.org/#/q/status:open+project:openstack/octavia+branch:master+topic:octavia_basic_lb_scenario
>>
>> [2] https://review.openstack.org/#/c/172199/66..75/.testr.conf
>>
>>
>> > Stephen
>> >
>> > On Tue, Aug 9, 2016 at 5:40 AM, Miguel Angel Ajo Pelayo
>> >  wrote:
>> >>
>> >> On Mon, Aug 8, 2016 at 4:56 PM, Kosnik, Lubosz
>> >> 
>> >> wrote:
>> >> > Great work with that multi-node setup Miguel.
>> >>
>> >> Thanks, I have to get my hands dirtier with octavia, it's just a tiny
>> >> thing.
>> >>
>> >> > About that multinode Infra is supporting two nodes setup used
>> >> > currently
>> >> > by grenade jobs but in my opinion we don’t have any tests which can
>> >> > cover
>> >> > that type of testing. We’re still struggling with selecting proper
>> >> > tool to
>> >> > test Octavia from integration/functional perspective so probably it’s
>> >> > too
>> >> > early to make it happen.
>> >>
>> >>
>> >> Well, any current tests we run should pass equally well in a multi
>> >> node controller, and that's the point, that, regardless of the
>> >> deployment architecture the behaviour shall not change at all. We may
>> >> not need any specific test.
>> >>
>> >>
>> >> > Maybe it’s great start to finally make some decision about testing
>> >> > tools
>> >> > and there will be a lot of work for you after that also with setting
>> >> > up an
>> >> > infra multi-node job for that.
>> >>
>> >> I'm not fully aware of what are we running today for octavia, so if
>> >> you can give me some pointers about where are those jobs configured,
>> >> and what do they target, it could be a start, to provide feedback.
>> >>
>> >> What are the current options/tools we're considering?
>> >>
>> >>
>> >> >
>> >> > Cheers,
>> >> > Lubosz Kosnik
>> >> > Cloud Software Engineer OSIC
>> >> > lubosz.kos...@intel.com
>> >> >
>> >> >> On Aug 8, 2016, at 7:04 AM, Miguel Angel Ajo Pelayo
>> >> >>  wrote:
>> >> >>
>> >> >> Recently, I sent a series of patches [1] to make it easier for
>> >> >> developers to deploy a multi node octavia controller with
>> >> >> n_controllers x [api, cw, hm, hk] with an haproxy in front of the
>> >> >> API.
>> >> >>
>> >> >> Since this is the way the service is designed to work (with
>> >> >> horizontal
>> >> >> scalability in mind), and we want to have a good guarantee that any
>> >> >> bug related to such configuration is found early, and addressed, I
>> >> >> was
>> >> >> thinking that an extra job that runs a two node controller
>> >> >> 

Re: [openstack-dev] [all][elections][TC] TC Candidacy

2016-09-28 Thread John Davidge
Hi Zane,

Thanks for pointing this out! My interpretation of the StackForge
Retirement page[1] was wrong on that point. I've updated the blog post to
reflect that (without removing the original interpretation).

The discussion about renaming git repos is a bit of a red herring, because
what we're really talking about is what it *means* to be in Stackforge vs.
OpenStack vs. OpenStack Family, not which git namespace a project should
live in. Apologies if I didn't make that clear.

Like many of us, I do my best to keep up with historical context, but when
there is so much contradictory information/opinion out there about what
OpenStack is/isn't was/wasn't it can be a struggle at times. The crux of
my proposal is aiming to solve that by not trying to be everything to
everyone under one tent - by defining sensible boundaries to separate the
different goals of the community.

All the best,

John

[1] https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement

On 9/27/16, 5:13 PM, Zane Bitter wrote:

>On 27/09/16 06:19, John Davidge wrote:
>>> Having Stackforge as a separate Github organization and set of
>>> >repositories was a maintenance nightmare due to the awkwardness of
>>> >renaming projects when they "moved into OpenStack".
>> There's no reason that this would need a separate github structure, just
>> separate messaging and rules.
>
>That's exactly what we have now.
>
>This statement on your blog:
>
>"[StackForge] was retired in October 2015, at which point all projects
>had to move into the OpenStack Big Tent or leave entirely."
>
>is completely false. That never happened. There are still plenty of
>repos on git.openstack.org that are not part of the Big Tent. At no time
>has any project been required to join the Big Tent in order to continue
>being hosted.
>
>Maybe you should consider reading up on the historical background to
>these changes. There are a lot of constraints that have to be met - from
>technical ones like the fact that it's not feasible to rename git repos
>when they move into or out of the official OpenStack project, to legal
>ones like how the TC has to designate projects in order to trigger
>certain rights and responsibilities in the (effectively immutable)
>Foundation by-laws. Rehashing all of the same old discussions without
>reference to these constraints is unlikely to be productive.
>
>cheers,
>Zane.
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-28 Thread Sean Dague
On 09/27/2016 06:19 PM, Andrew Laski wrote:

> 
> I totally understand where you're coming from. I just see it
> differently.
> 
> The way it was done did not affect any other projects, and the plumbing
> used is something that could have easily be left in. And as luck would
> have it there's another patch up to use that same plumbing so it's
> probably going in anyways.
> 
> I completely agree that before expanding this to be in any way cross
> project it's worth figuring out a better way to do it. But at this point
> I don't feel comfortable enough with a long term vision to tackle that.
> I would much prefer to experiment in a small way before moving forward.

Ok, so what if we go forward with your existing patch, but agree only to
replace the post_test_hook on Nova specific jobs, like the placement
one. (The change proposed here -
https://review.openstack.org/#/c/376537/6/jenkins/jobs/nova.yaml). That
should give enough data and experimentation, and it show up for every
nova patch.

Before migrating this kind of approach to integrated gate jobs, we
revisit a way to make sure that multiple projects can plumb content like
this. Be it, expanding the test hook model, or a common post hook
project that could have core members from multiple teams, or some
generic yaml thing (though I agree, that's a lot harder to wrap my head
around).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Adding new functionality to networking scenario tests in tempest

2016-09-28 Thread Barber, Ofer
Hi,

I'm working on adding new tests to the networking scenario tests.

I see that there is a class named "NetworkScenarioTest" in manager.py file.
In that class, there are some helpers functions like '_create_network', 
'create_subnet', ...
Is this the place to add more helpers functions like '_update_network', 
'_delete_network', '_update_subnet', '_delete_subnet', and similar functions in 
nature?

Thank you,
Ofer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][horizon] horizon Newton RC2 available

2016-09-28 Thread Thierry Carrez
Hello everyone,

A new release candidate for horizon for the end of the Newton cycle was
generated, to catch release-critical fixes and include recent
translations. You can find the source code tarball at:

[...]

Unless new release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the final
Newton release on 6 October. You are therefore strongly encouraged to
test and validate this tarball!

Alternatively, you can directly test the stable/newton release branch at:

http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/newton

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/horizon/+filebug

and tag it *newton-rc-potential* to bring it to the horizon release
crew's attention.

Thanks!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [devstack] nova-api did not start

2016-09-28 Thread edison xiang
Hi Tony,

As you suggest, I find the reason of this problem.
I have removed "API_WORKES=0" in local.conf.
It works.
Thanks very much.

Best Regards,
 xiangxinyong

Tony Breeds 于2016年9月28日周三 上午11:28写道:

> On Wed, Sep 28, 2016 at 11:10:52AM +0800, xiangxinyong wrote:
> > Hi guys,
> >
> >
> > When i setup OpenStack by devstack,
> > I have got an error message "nova-api did not start".
> >
> >
> > [Call Trace]
> > ./stack.sh:1242:start_nova_api
> > /home/edison/devstack/lib/nova:802:die
> > [ERROR] /home/edison/devstack/lib/nova:802 nova-api did not start
> > Error on exit
> > World dumping... see /opt/stack/logs/worlddump-2016-09-27-205614.txt for
> details
>
> Try looking in /opt/stack/logs/n-api*
>
> Yours Tony.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]agenda of weekly meeting Sept.28

2016-09-28 Thread joehuang
Agenda of Sept.28 weekly meeting, let's continue the topics:


# Newton cycle design summit planning: 
https://etherpad.openstack.org/p/ocata-tricircle-sessions-planning

# patch review before freeze date Sept.30

# open discussion


How to join:

#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on 
every Wednesday starting from UTC 13:00.


If you  have other topics to be discussed in the weekly meeting, please reply 
the mail.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]freeze date and patches to be merged for Newton release

2016-09-28 Thread joehuang
Thank you all for the great effort, now only two patches of MUST TO HAVE are on 
the way to be merged:

https://review.openstack.org/#/c/372414/ installation update
https://review.openstack.org/#/c/326192/ volume detach

Best Regards
Chaoyi Huang (joehuang)

From: joehuang
Sent: 22 September 2016 9:42
To: openstack-dev
Subject: [openstack-dev][tricircle]freeze date and patches to be merged for 
Newton release

Hello,

During yesterday's weekly meeting, patches were identified for these must be 
merged before freeze date for Newton release:

Freeze date: Sept.30, 2016

Must to have: must be merged before Sept.30, basic feature for Tricircle Newton 
release
https://review.openstack.org/#/c/356187/ framework for dynamic pod binding
https://review.openstack.org/354604 floating ip deletion
https://review.openstack.org/355847 subnet deletion
https://review.openstack.org/360848 router deletion
https://review.openstack.org/#/c/366606/ server action part1
https://review.openstack.org/#/c/369958/ server action part2
https://review.openstack.org/#/c/372414/ installation update
https://review.openstack.org/#/c/326192/ volume detach

Good to have: best effort
https://review.openstack.org/#/c/323687/ add resource_affinity_tag
https://review.openstack.org/368529 spec for local plugin
https://review.openstack.org/#/c/359561/
others not mentioned here

You can also refer to https://etherpad.openstack.org/p/TricircleNewtonFreeze  
for the discussion log.

Let's update patch and review in time to ensure "Must to have" get merged 
before freeze date Sept.30.

Thank you for your great effort. The Newton branch will be created at UTC 
9:00AM Sept.30.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitaka)

2016-09-28 Thread Ton Ngo
Thanks Steve.  We indeed have been using the image built by Yolanda's DIB
elements and things have been stable.  Dane and I have resolved the
problems with the load balancer at least for the LBaaS v1.  For LBaaS v2,
we need to build a new image with Kubernetes 1.3 and we just got one built
today.
Ton,



From:   "Steven Dake (stdake)" 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   09/27/2016 10:18 PM
Subject:Re: [openstack-dev] [magnum] Fedora Atomic image that supports
kubernetes external load balancer (for stable/mitaka)



Dane,

I’ve heard Yolanda has done good work on making disk image builder build
fedora atomic properly consistently.  This may work better than the current
image building tools available with atomic if you need to roll your own.
Might try pinging her on irc for advice if you get jammed up here.  Might
consider consulting tango as well as I handed off my knowledge in this area
to him first and he has distributed to the rest of the Magnum core reviewer
team.  I’m not sure if tango and Yolanda have synced on this – recommend
checking with them.

Seems important to have a working atomic image for both Mitaka and Newton.

Regards
-steve


 From: "Dane Leblanc (leblancd)" 
 Reply-To: "OpenStack Development Mailing List (not for usage questions)"
 
 Date: Thursday, September 8, 2016 at 2:18 PM
 To: "OpenStack Development Mailing List (not for usage questions)"
 
 Subject: [openstack-dev] [magnum] Fedora Atomic image that supports
 kubernetes external load balancer (for stable/mitaka)

 Does anyone have a pointer to a Fedora Atomic image that works with
 stable/mitaka Magnum, and supports the kubernetes external load balancer
 feature [1]?

 I’m trying to test the kubernetes external load balancer feature with
 stable/mitaka Magnum. However, when I try to bring up a load-balanced
 service, I’m seeing these errors in the kube-controller-manager logs:
   E0907 16:26:54.375286   1 servicecontroller.go:173] Failed to
   process service delta. Retrying: failed to create external load
   balancer for service default/nginx-service: SubnetID is required

 I verified that I have the subnet-id field set in the [LoadBalancer]
 section in /etc/sysconfig/kube_openstack_config.

 I’ve tried this using the following Fedora Atomic images from [2]:
   fedora-21-atomic-5.qcow2
   fedora-21-atomic-6.qcow2
   fedora-atomic-latest.qcow2

 According to the Magnum external load balancer blueprint [3], there were 3
 patches in kubernetes that are required to get the OpenStack provider
 plugin to work in kubernetes:
   https://github.com/GoogleCloudPlatform/kubernetes/pull/12203
   https://github.com/GoogleCloudPlatform/kubernetes/pull/12262
   https://github.com/GoogleCloudPlatform/kubernetes/pull/12288
 The first of these patches, “Pass SubnetID to vips.Create()”, is
 apparently necessary to fix the “SubnetID is required” error shown above.

 According to the Magnum external load balancer blueprint [3], the
 fedora-21-atomic-6 image should include the above 3 fixes:
   “Our work-around is to use our own custom Kubernetes build (version
   1.0.4 + 3 fixes) until the fixes are released. This is in image
   fedora-21-atomic-6.qcow2”
 However, I’m still seeing the “SubnetID is required” errors with this
 image downloaded from [2]. Here are the kube versions I’m seeing with this
 image:
   [minion@k8-64n4bna2v6-0-ffukgho7n7tf-kube-master-fif5b6pivdmy
   sysconfig]$ rpm -qa | grep kube
   kubernetes-node-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
   kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
   kubernetes-client-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
   kubernetes-master-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
   [minion@k8-64n4bna2v6-0-ffukgho7n7tf-kube-master-fif5b6pivdmy
   sysconfig]$

 Does anyone have a pointer to a Fedora Atomic image that contains the 3
 kubernetes fixes listed earlier (and works with stable/mitaka)?

 Thanks!
 -Dane

 [1] http://kubernetes.io/docs/user-guide/services/#type-loadbalancer
 [2] https://fedorapeople.org/groups/magnum/
 [3] https://blueprints.launchpad.net/magnum/+spec/external-lb

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev