Re: [openstack-dev] [Neutron] Being more aggressive with our defaults

2016-02-10 Thread Miguel Angel Ajo Pelayo

> On 09 Feb 2016, at 21:43, Sean M. Collins  wrote:
> 
> Kevin Benton wrote:
>> I agree with the mtu setting because there isn't much of a downside to
>> enabling it. However, the others do have reasons to be disabled.
>> 
>> csum - requires runtime detection of support for a feature and then auto
>> degradation for systems that don't support it. People were against those so
>> we have the whole sanity check framework instead. I wouldn't be opposed to
>> revisiting that decision, but it's definitely a blocker right now.
> 
> Agree - I think the work that can be done here is to do some
> self-discovery to see if the system supports it, and enable it.

The risk of doing such a thing, and this is why we stayed with sanity checks, 
is that we slow down agent startup. It could be trivial at the start, but as we 
keep piling on checks, it could become an excessive overhead.

We could cache the system discoveries, which are unlikely to change, but that
could bring other issues, like switching hardware/network settings requiring a
cleanup of the “facts” cache.
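
To make the caching idea concrete, here is a minimal sketch of runtime feature
detection with a persisted “facts” cache (all names are made up for
illustration; this is not an actual Neutron API):

    import json
    import os

    FACTS_CACHE = '/var/lib/neutron/facts.json'  # hypothetical location

    def probe_vxlan_csum():
        # Stub: a real probe would attempt the operation (e.g. creating a
        # VXLAN port with checksum offload) and catch any failure.
        return True

    def detect_features():
        # Each entry is one runtime discovery; more probes pile up here,
        # which is where the startup overhead comes from.
        return {'ovs_vxlan_csum': probe_vxlan_csum()}

    def get_features(refresh=False):
        # Reuse cached discoveries unless a refresh is requested, e.g.
        # after a hardware or network settings change.
        if not refresh and os.path.exists(FACTS_CACHE):
            with open(FACTS_CACHE) as f:
                return json.load(f)
        facts = detect_features()
        with open(FACTS_CACHE, 'w') as f:
            json.dump(facts, f)
        return facts

This keeps agent startup fast on the common path, at the cost of exactly the
cache invalidation problem described above.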

Another approach could be making the sanity checks generate configuration file
additions or modifications on request.

IMHO we should keep any setting which is an optimization OFF, and let the
administrator tune it up.

What do we want? A super performant Neutron reference implementation that
doesn’t work for 40% (random number) of the deployers, or a Neutron reference
implementation that works for all but can be tuned?



> 
>> dvr - doesn't work in non-VM cases (e.g. floating IP pointing to allowed
>> address pair or bare metal host) and consumes more public IPs than legacy
>> or HA.
> 
> Yes it does have tradeoffs currently. But I like to think back to
> Nova-Network. It was extremely common to run it in multi_host=True mode.
> 
> Despite the fact that the default is False.
> 
> https://github.com/openstack/nova/blob/da019e89976f9673c4f80575909dda3bab3e1a24/nova/network/rpcapi.py#L31
> 
> It's been a little while for me since I looked at nova-network (Essex,
> Folsom era) so things may have moved around a bit, but that's at least
> what I recall.
> 
> I'd like to see some grizzled Nova network veterans chime in, but at
> least from the operator standpoint the whole pain point for Neutron
> (which endangered Neutron's existence for a long time) was the fact that
> we didn't have an equivalent feature to multi_host - hence DVR being
> written.
> 
> So, even Nova may have a couple things turned off by default that probably a
> majority of deployers have to consciously turn the knob for.
> 
>> l2pop - this one is weird because it's an ML2 driver. It makes no sense to
>> have it always enabled because an operator could be using an l2pop
>> incompatible backend. We also don't have a notion of a driver enabled by
>> default so if we did want to do it, it would take a bunch of changes to
>> ML2.
> 
> I think in this case, the point is - enable L2Pop for things where it
> really makes sense. Meaning if you are using a tunnel protocol for
> tenant networking, and you do not have something like vxlan multicast
> group configured. I don't think Open vSwitch supports it, so in that
> deployment model I think we can bet that it should be enabled.
> 
> Linux Bridge supports l2pop and vxlan multicast, so even in that case
> I'd say - enable l2pop but put good docs in to say "hey if you have
> multicast vxlan set up, switch it over to use that instead" 
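
As a concrete illustration (describing the usual manual setup rather than a
proposed mechanism): enabling it today means adding "l2population" to the
mechanism_drivers list in ml2_conf.ini and setting l2_population = True in the
L2 agent's configuration, which is exactly the kind of knob most tunnel-based
deployments end up flipping by hand.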
> 
>> Whenever we have a knob, it usually stems from the fact that we don't use
>> runtime feature detection or the feature has a tradeoff that doesn't make
>> its use obvious in all cases.
> 
> Right, but I think we've been very cautious in the past, where we don't
> want to make any decision, so we just turn it all off and force
> operators to enable it. In some cases we've decided to do nothing and
> the result is forcing everyone to make the decision, where a high % of
> people end up making the same decision. Perhaps we can use the user
> survey and the ops meetups to find options where "80% of people use this 
> option
> and have to be proactive and enable it" - and think about turning them
> on by default.
> 
> It's not cut and dried, but maybe taking a stab at it will help us clarify
> which options really are a toss-up between on/off and which
> should be defaults.
> 
> 
> -- 
> Sean M. Collins
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Request to do stable point releases more often if there are going to be a lot of backports

2016-02-10 Thread Ihar Hrachyshka

Matt Riedemann  wrote:

While reviewing the neutron 7.0.2 stable/liberty point release, I noticed  
there were a lot of changes since 7.0.1. [1]


There are 48 non-merge commits by my count.

While there is no rule about how many backports should land before we cut
a point release, it would be easier on the reviewers of a release request
if it were fewer than 48. :)


I think the Neutron team is by far the most active in backporting changes  
to stable, which is good. We might want to consider releasing more often  
though if the backport rate is going to be this high.


I have in mind some stable release automation that would make sure we  
release often without a hassle on Kyle’s side.




I'd also be interested in hearing from deployers/operators (if any are  
reading this) to know how frequently they are picking up stable point  
releases, or if they are taking an approach of waiting to upgrade from  
kilo to liberty until there have at least been a few stable/liberty point  
releases across the projects.


[1]  
http://logs.openstack.org/88/272688/2/check/gate-releases-tox-list-changes/aa8e270/console.html.gz


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [puppet] move puppet-pacemaker

2016-02-10 Thread Juan Antonio Osorio
I like the idea of moving it to use the OpenStack infrastructure.

On Wed, Feb 10, 2016 at 12:13 AM, Ben Nemec  wrote:

> On 02/09/2016 08:05 AM, Emilien Macchi wrote:
> > Hi,
> >
> > TripleO is currently using puppet-pacemaker [1], which is a module hosted
> > & managed on GitHub.
> > The module was created and is mainly maintained by Red Hat. It tends to
> > break TripleO quite often since we don't have any gate.
> >
> > I propose to move the module to OpenStack so we'll use OpenStack Infra
> > benefits (Gerrit, Releases, Gating, etc). Another idea would be to gate
> > the module with TripleO HA jobs.
> >
> > The question is, under which umbrella should we put the module? Puppet? TripleO?
> >
> > Or no umbrella, like puppet-ceph. <-- I like this idea
>

I think the module not being under an umbrella makes sense.


> >
> > Any feedback is welcome,
> >
> > [1] https://github.com/redhat-openstack/puppet-pacemaker
>
> Seems like a module that would be useful outside of TripleO, so it
> doesn't seem like it should live under that.  Other than that I don't
> have enough knowledge of the organization of the puppet modules to comment.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage meeting minutes

2016-02-10 Thread Afek, Ifat (Nokia - IL)
Hi,

You can find the minutes of the Vitrage meeting here: 
http://eavesdrop.openstack.org/meetings/vitrage/2016/vitrage.2016-02-10-09.00.html
 
Meeting log: 
http://eavesdrop.openstack.org/meetings/vitrage/2016/vitrage.2016-02-10-09.00.log.html
 

See you next week,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][QA] What is the preferred way to bootstrap a baremetal node with Fuel on product CI?

2016-02-10 Thread Vladimir Kuklin
Folks

I think the easiest and best option here is to boot iPXE or pxelinux
with NFS and put the master node image onto an NFS mount. This one should work
seamlessly.

On Wed, Feb 10, 2016 at 1:36 AM, Andrew Woodward 
wrote:

> Unless we hope to gain some insight and specific testing by installing the
> ISO on a bare-metal node (like UEFI), I'd propose that we stop testing
> things that are well tested elsewhere (a given ISO produces a working fuel
> master) and just focus on what we want to test in this environment.
>
> Along this line, we could:
>
> a) keep the fuel master node as a VM that is set up with access to the networks
> with the BM nodes. We have a good set of tools to build the master node in
> a VM already; we can just re-use them.
>
> b) use cobbler to control PXE based ISO boot/install, then either create
> new profiles in cobbler for various fuel nodes with different ISOs or
> replace the single download link. (Make sure you transfer the image over
> HTTP as TFTP will be slow for such a size. We have some tools and knowledge
> around using cobbler as this is effectively what fuel does itself.)
>
> c) fuel on fuel: as an extension of b, we can just use cobbler on an
> existing fuel node to provision another fuel node, either from ISO or even
> its own repos (we just need to send a kickstart)
>
> d) you can find servers with a good BMC or DRAC to which we can issue remote
> mount commands for the virtual cd-rom
>
> e) consider using a live-cd approach (long implementation). I've been asked
> about supporting this in product, where we start an environment with a
> live-cd, the master node may make its own home, and then it can be moved
> off the live-cd when it's ready
>
>
> On Tue, Feb 9, 2016 at 10:25 AM Pavlo Shchelokovskyy <
> pshchelokovs...@mirantis.com> wrote:
>
>> Hi,
>>
>> Ironic also supports running as a standalone service, w/o
>> Keystone/Glance/Neutron/Nova etc. integration, deploying images from HTTP
>> links. Could that be an option too?
>>
>> BTW, there is already an official project under OpenStack Baremetal
>> program called Bifrost [0] that, quoting, "automates the task of deploying
>> a base image onto a set of known hardware using Ironic" by installing and
>> configuring Ironic in standalone mode.
>>
>> [0] https://github.com/openstack/bifrost
>>
>> Cheers,
>>
>>
>> On Tue, Feb 9, 2016 at 6:46 PM Dennis Dmitriev 
>> wrote:
>>
>>> Hi all!
>>>
>>> To run system tests on CI on a daily basis using baremetal servers
>>> instead of VMs, Fuel admin node also should be bootstrapped.
>>>
>>> There is no simple way to mount an ISO with Fuel as a CDROM or USB
>>> device to a baremetal server, so we chose provisioning with PXE.
>>>
>>> It could be done in different ways:
>>>
>>> - Configure a libvirt bridge as dnsmasq/tftp server for admin/PXE
>>> network.
>>>   Benefits: no additional services to be configured.
>>>   Doubts: ISO should be mounted on the CI host (via fusefs?); a HTTP
>>> or NFS server for basic provisioning should be started in the admin/PXE
>>> network (on the CI host);
>>>
>>> - Start a VM that is connected to admin/PXE network, and configure
>>> dnsmasq/tftp there.
>>>   Benefits: no additional configuration on the CI host should be
>>> performed
>>>   Doubts: starting the PXE service becomes a little complicated
>>>
>>> - Use Ironic for manage baremetal nodes.
>>>   Benefits: good support for different hardware, support for
>>> provisioning from ISO 'out of the box'.
>>>   Doubts: support for Ironic cannot be implemented in the short term,
>>> and additional investigation would be needed.
>>>
>>> My question is: what other benefits or doubts have I missed for the first two
>>> ways? Are there other ways to provision baremetal with Fuel that can be
>>> automated in the short term?
>>>
>>> Thanks for any suggestions!
>>>
>>>
>>> --
>>> Regards,
>>> Dennis Dmitriev
>>> QA Engineer,
>>> Mirantis Inc. http://www.mirantis.com
>>> e-mail/jabber: dis.x...@gmail.com
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> --
>> Dr. Pavlo Shchelokovskyy
>> Senior Software Engineer
>> Mirantis Inc
>> www.mirantis.com
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> --
> Andrew Woodward
> Mirantis
> Fuel Community Ambassador
> Ceph Community
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [NFV][Telco] Telco Working Group Meeting for February 10th 2016 - CANCELLED

2016-02-10 Thread Steve Gordon
Hi all,

Unfortunately today's meeting of the Telco Working Group [1] is canceled, my 
apologies for the late notice!

Thanks,

Steve

[1] https://wiki.openstack.org/wiki/TelcoWorkingGroup

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova gate blocked on eventlet release

2016-02-10 Thread Chris Dent

On Tue, 9 Feb 2016, Chris Dent wrote:


eventlet 0.18.2 has broken the nova unit tests at
'nova.tests.unit.test_wsgi.TestWSGIServerWithSSL' so the gate is
blocked.

sdague et al are working on it. Please hold off approving patches
in nova until they get it resolved.


Just in case it's not obvious, the fix for this merged near midnight
(UTC) last night and at least this aspect of things is okay again.

https://git.openstack.org/cgit/openstack/nova/commit/?id=d754a830861fb55b047e7b4d43ba7f485fc120dd


--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent   tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova gate blocked on eventlet release

2016-02-10 Thread Sean Dague
On 02/10/2016 05:54 AM, Chris Dent wrote:
> On Tue, 9 Feb 2016, Chris Dent wrote:
> 
>> eventlet 0.18.2 has broken the nova unit tests at
>> 'nova.tests.unit.test_wsgi.TestWSGIServerWithSSL' so the gate is
>> blocked.
>>
>> sdague et al are working on it. Please hold off approving patches
>> in nova until they get it resolved.
> 
> Just in case it's not obvious, the fix for this merged near midnight
> (UTC) last night and at least this aspect of things is okay again.
> 
> https://git.openstack.org/cgit/openstack/nova/commit/?id=d754a830861fb55b047e7b4d43ba7f485fc120dd

Thanks much for shepherding this, Chris.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] removing EC2 related code from nova

2016-02-10 Thread Andrey Pavlov
Could anyone (and especially Dan, Hans, or Ryan) check my changes?
There is no new code, only removal of old code and of the dependency on the 'boto' library.

https://review.openstack.org/#/c/266425/

-- 
Kind regards,
Andrey Pavlov.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Liberty backward compatibility jobs are bound to fail

2016-02-10 Thread Yuriy Taraday
Hello.

I've noticed once again that job
"gate-tempest-dsvm-neutron-src-oslo.concurrency-liberty" is always failing.
After looking at the failure I found that the core issue is
ContextualVersionConflict [0]. It seems that we have conflicting
requirements for oslo.utils here, and we do: in Liberty upper-constraints
set oslo.utils to 3.2.0 version [1] while in master oslo.concurrency
requires at least 3.4.0 which is stated in global-requirements [2].

Other projects have similar issues too:
- oslo.utils fails [3] because of debtcollector 1.1.0 [4] while it requires
at least 1.2.0 in master [5];
- oslo.messaging fails the same way because of debtcollector [6];
- etc.

Looks like a lot of wasted cycles to me.

It seems we need to either bump stable/liberty upper-constraints to match
current requirements of modern oslo libraries or somehow adapt backward
compatibility jobs to ignore upper-constraints for these libraries. Of
course we could also stop running these jobs altogether for projects that
have conflicting dependencies, but I think the reason we have them in the
first place is that we want to see that we can use new oslo libraries with
older OpenStack releases.

[0]
http://logs.openstack.org/83/273083/5/check/gate-tempest-dsvm-neutron-src-oslo.concurrency-liberty/369f8b7/logs/apache/keystone.txt.gz#_2016-01-28_14_49_01_352371
[1]
https://github.com/openstack/requirements/blob/stable/liberty/upper-constraints.txt#L202
[2]
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L110
[3]
http://logs.openstack.org/10/276510/2/check/gate-tempest-dsvm-neutron-src-oslo.utils-liberty/717ce34/logs/apache/keystone.txt.gz#_2016-02-05_02_11_35_72
[4]
https://github.com/openstack/requirements/blob/stable/liberty/upper-constraints.txt#L90
[5]
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L28
[6]
http://logs.openstack.org/76/278276/2/check/gate-tempest-dsvm-neutron-src-oslo.messaging-liberty/91cb3e4/logs/apache/keystone.txt.gz#_2016-02-10_10_05_29_293781
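
For anyone who wants to see the failure mode without a full tempest run, here
is a minimal sketch (assuming a virtualenv with oslo.utils pinned to 3.2.0 per
the Liberty constraint, plus a master oslo.concurrency that requires >=3.4.0):

    import pkg_resources

    try:
        # Resolves oslo.concurrency's declared requirements against what is
        # actually installed; the pinned oslo.utils 3.2.0 cannot satisfy the
        # >=3.4.0 requirement coming from master.
        pkg_resources.require('oslo.concurrency')
    except pkg_resources.VersionConflict as exc:
        # ContextualVersionConflict, as seen in the keystone logs at [0],
        # is a subclass of VersionConflict.
        print('conflict: %s' % exc)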
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon]horizon angular network QoS panel

2016-02-10 Thread masco


Hello All,

As most of you know, the 'QoS' feature was added in Neutron during the 
Liberty release.
It would be nice to have this feature in Horizon, so I have added a 
'network qos' panel for it in AngularJS.
It would be very helpful if you could review these patches and help 
land this feature in Horizon.


_gerrit links:_

https://review.openstack.org/#/c/247997/
https://review.openstack.org/#/c/259022/11
https://review.openstack.org/#/c/272928/4
https://review.openstack.org/#/c/277743/3


_To set up a test env:_
Here are some steps to enable QoS in Neutron; they will help if you want 
to test the patches.

To enable QoS in devstack, please add the below two lines to your 
local.conf and rebuild your stack (./stack.sh):

  enable_plugin neutron git://git.openstack.org/openstack/neutron
  enable_service q-qos


Thanks,
Masco.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread Sean Dague
The largeops tests at this point are mostly finding out that some of our
new cloud providers are slow - http://tinyurl.com/j5u4nf5

This is fundamentally a performance test, with timings having been tuned
to pass 98% of the time on two clouds that were very predictable in
performance. We're now running on 4 clouds, and the variance between
them all, and between every run on each can be as much as a factor of 2.

We could just bump all the timeouts again, but that's basically the same
thing as dropping them.

These tests are not instrumented in a way that any real solution can be
addressed in most cases. Tests without a path forward, that are failing
good patches a lot, are very much the kind of thing we should remove
from the system.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread Jay Pipes

On 02/10/2016 07:33 AM, Sean Dague wrote:

The largeops tests at this point are mostly finding out that some of our
new cloud providers are slow - http://tinyurl.com/j5u4nf5

This is fundamentally a performance test, with timings having been tuned
to pass 98% of the time on two clouds that were very predictable in
performance. We're now running on 4 clouds, and the variance between
them all, and between every run on each can be as much as a factor of 2.

We could just bump all the timeouts again, but that's basically the same
thing as dropping them.

These tests are not instrumented in a way that any real solution can be
addressed in most cases. Tests without a path forward, that are failing
good patches a lot, are very much the kind of thing we should remove
from the system.


+1 from me.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread Davanum Srinivas
+1 from me

On Wed, Feb 10, 2016 at 7:56 AM, Jay Pipes  wrote:
> On 02/10/2016 07:33 AM, Sean Dague wrote:
>>
>> The largeops tests at this point are mostly finding out that some of our
>> new cloud providers are slow - http://tinyurl.com/j5u4nf5
>>
>> This is fundamentally a performance test, with timings having been tuned
>> to pass 98% of the time on two clouds that were very predictable in
>> performance. We're now running on 4 clouds, and the variance between
>> them all, and between every run on each can be as much as a factor of 2.
>>
>> We could just bump all the timeouts again, but that's basically the same
>> thing as dropping them.
>>
>> These tests are not instrumented in a way that any real solution can be
>> addressed in most cases. Tests without a path forward, that are failing
>> good patches a lot, are very much the kind of thing we should remove
>> from the system.
>
>
> +1 from me.
>
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugins] Multi release packages

2016-02-10 Thread Alexey Shtokolov
Fuelers,

We are discussing the idea of extending the multi-release packages for plugins.

Fuel plugin builder (FPB) can create one rpm-package for all supported
releases (from metadata.yaml) but we can specify only deployment scripts
and repositories per release.

Current release definition (in metadata.yaml):
- os: ubuntu
  version: liberty-8.0
  mode: ['ha']
  deployment_scripts_path: deployment_scripts/
  repository_path: repositories/ubuntu

So the idea [0] is to make releases fully configurable.
Suggested changes for release definition (in metadata.yaml):
  components_path: components_liberty.yaml
  deployment_tasks_path: deployment_tasks_liberty/ # <- folder
  environment_config_path: environment_config_liberty.yaml
  network_roles_path: network_roles_liberty.yaml
  node_roles_path: node_roles_liberty.yaml
  volumes_path: volumes_liberty.yaml

I see one issue: if we change anything for one release (e.g. a
deployment_task typo), revalidation is needed for all releases.

Your pros and cons, please?

[0] https://review.openstack.org/#/c/271417/
---
WBR, Alexey Shtokolov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread Flavio Percoco

On 10/02/16 07:33 -0500, Sean Dague wrote:

The largeops tests at this point are mostly finding out that some of our
new cloud providers are slow - http://tinyurl.com/j5u4nf5

This is fundamentally a performance test, with timings having been tuned
to pass 98% of the time on two clouds that were very predictable in
performance. We're now running on 4 clouds, and the variance between
them all, and between every run on each can be as much as a factor of 2.

We could just bump all the timeouts again, but that's basically the same
thing as dropping them.

These tests are not instrumented in a way that any real solution can be
addressed in most cases. Tests without a path forward, that are failing
good patches a lot, are very much the kind of thing we should remove
from the system.


no objections here!

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread gordon chung
makes sense to me. thanks for concise update and tracking this.

On 10/02/2016 7:59 AM, Davanum Srinivas wrote:
> +1 from me
>
> On Wed, Feb 10, 2016 at 7:56 AM, Jay Pipes  wrote:
>> On 02/10/2016 07:33 AM, Sean Dague wrote:
>>>
>>> The largeops tests at this point are mostly finding out that some of our
>>> new cloud providers are slow - http://tinyurl.com/j5u4nf5
>>>
>>> This is fundamentally a performance test, with timings having been tuned
>>> to pass 98% of the time on two clouds that were very predictable in
>>> performance. We're now running on 4 clouds, and the variance between
>>> them all, and between every run on each can be as much as a factor of 2.
>>>
>>> We could just bump all the timeouts again, but that's basically the same
>>> thing as dropping them.
>>>
>>> These tests are not instrumented in a way that any real solution can be
>>> addressed in most cases. Tests without a path forward, that are failing
>>> good patches a lot, are very much the kind of thing we should remove
>>> from the system.
>>
>>
>> +1 from me.
>>
>> -jay
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread David Moreau Simard
Lots of questions, I'm sorry.

Are you planning to drop them indefinitely or is it temporary? Is it to
help alleviate the gate from its current misery?

Why were these tests introduced in the first place? To find issues or
bottlenecks relative to scale or the amount of operations? Was it a request
from the operator community?

I have a strong feeling there is a very real need for *something* that is
able to find silly issues that only manifest themselves beyond the scale of
one VM before we ship something to the operator community.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]
On Feb 10, 2016 7:37 AM, "Sean Dague"  wrote:

> The largeops tests at this point are mostly finding out that some of our
> new cloud providers are slow - http://tinyurl.com/j5u4nf5
>
> This is fundamentally a performance test, with timings having been tuned
> to pass 98% of the time on two clouds that were very predictable in
> performance. We're now running on 4 clouds, and the variance between
> them all, and between every run on each can be as much as a factor of 2.
>
> We could just bump all the timeouts again, but that's basically the same
> thing as dropping them.
>
> These tests are not instrumented in a way that any real solution can be
> addressed in most cases. Tests without a path forward, that are failing
> good patches a lot, are very much the kind of thing we should remove
> from the system.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Make "central logging" optional

2016-02-10 Thread Eric LEMOINE
On Fri, Feb 5, 2016 at 4:46 PM, Eric LEMOINE  wrote:
> Hi Kolla devs
>
> The other day inc0 said that we would like "central logging" to be
> optional in Mitaka, and still use Rsyslog and keep the current
> behavior if "central logging" is disabled.
>
> I would like to propose an alternative, where we do remove Rsyslog as
> planned in the spec.
>
> I like the idea of an enable_central_logging switch, because it makes
> sense to me that an operator have the possibility to NOT centralize
> his logs in Elasticsearch (or anywhere else).
>
> When enable_central_logging is false I suggest to just not deploy
> Heka, Elasticsearch and Kibana. (Rsyslog would not be deployed
> either.)
>
> So when enable_central_logging is false the OpenStack services will
> still write their logs in the "log" named volume
> (/var/lib/docker/volumes/log/_data), but no one will read/process
> these logs.  They will just sit there at the disposal of the operator.
>
> For services that log to Syslog (HAProxy and Keepalived) we won't
> collect their logs if enable_central_logging is false, but these
> services' logs are not collected today, so no regression there.
>
> I think this would make a simple alternative.



I implemented that option in  and .
More specifically see  and  and the use of the enable_elk variable.

So when enable_elk is false the Heka container will not be started,
and syslog logging will be disabled in HAProxy, but the OpenStack
containers will still write their logs in the "log" volume.

PS: I'd like to rename enable_elk to enable_central_logging, but this
can be done later, when both the Heka and Elasticsearch patches are
merged.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread Ghe Rivero
+1, although I would like to keep them to find scale bottlenecks. Maybe when 
the new infra-cloud is up (we'll have full control over it, including hw 
access), we can pin these tests just to it.

Ghe Rivero

Quoting Sean Dague (2016-02-10 13:33:44)
> The largeops tests at this point are mostly finding out that some of our
> new cloud providers are slow - http://tinyurl.com/j5u4nf5
> 
> This is fundamentally a performance test, with timings having been tuned
> to pass 98% of the time on two clouds that were very predictable in
> performance. We're now running on 4 clouds, and the variance between
> them all, and between every run on each can be as much as a factor of 2.
> 
> We could just bump all the timeouts again, but that's basically the same
> thing as dropping them.
> 
> These tests are not instrumented in a way that any real solution can be
> addressed in most cases. Tests without a path forward, that are failing
> good patches a lot, are very much the kind of thing we should remove
> from the system.
> 
> -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-10 Thread Victor Stinner

Hi,

I asked the eventlet devs to *not* remove releases from PyPI before they did 
it, but they ignored me and removed the 0.18.0 and 0.18.1 releases from PyPI :-(


0.18.0 fixed a bug in Python 3:
https://github.com/eventlet/eventlet/issues/274

But 0.18.0 introduced a regression on Python 3 in WSGI:
https://github.com/eventlet/eventlet/issues/295

0.18.2 was supposed to fix the WSGI bug, but introduced a different bug 
in Keystone:

https://github.com/eventlet/eventlet/issues/296

Yeah, it's funny to work on eventlet :-) A new bug everyday :-D

At least the eventlet test suite gets more complete with each bugfix.

Victor

Le 09/02/2016 17:44, Markus Zoeller a écrit :

For the sake of completeness: The eventlet package version 0.18.1
seems to have disappeared from the PyPI servers, which is a bad thing,
as we use that version in the "upper-constraints.txt" of the
requirements project. There is a patch [1] in the queue which solves that.
Until it is merged, there is a chance that our CI (and your third-party
CI) will break once the locally cached version in the CI vanishes.

References:
[1] https://review.openstack.org/#/c/277912/

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] removing python 2.6 classifiers from package metadata

2016-02-10 Thread Doug Hellmann
We stopped running tests under python 2.6 a while back, and I submitted
a bunch of patches to projects that still had the python package
classifier indicating support for python 2.6. Most of those merged, but
quite a few are still open [1]. Please take a look at the list and if
you find any for your project merge them before the next milestone tag.

Thanks,
Doug

[1] https://review.openstack.org/#/q/status:open++topic:remove-py26-classifier

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][release] reno 1.5.0 release

2016-02-10 Thread no-reply
We are delighted to announce the release of:

reno 1.5.0: RElease NOtes manager

With source available at:

http://git.openstack.org/cgit/openstack/reno

With package available at:

https://pypi.python.org/pypi/reno

Please report issues through launchpad:

http://bugs.launchpad.net/reno

For more details, please see below.

1.5.0
^^^^^

New Features

* Add the ability to limit queries by stopping at an "earliest
  version". This is intended to be used when scanning a branch, for
  example, to stop at a point when the branch was created and not
  include all of the history from the parent branch.

Changes in reno 1.4.0..1.5.0
----------------------------

f4e2d66 add release note for earliest-version feature
75e06c5 add earliest_version option to scanner

Diffstat (except docs and test files)
-------------------------------------

.../add-earliest-version-6f3d634770e855d0.yaml |  6 ++
reno/lister.py |  1 +
reno/main.py   | 10 +
reno/report.py |  1 +
reno/scanner.py|  7 ++-
reno/sphinxext.py  |  3 +++
8 files changed, 58 insertions(+), 1 deletion(-)
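
If the new scanner option is also exposed on the command line (the change to
main.py suggests it is, though the flag name is assumed here rather than
verified), usage would presumably look like:

    reno report --earliest-version 1.4.0

to stop the scan at the point where a branch was created.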



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Distributed Multicast and IGMP Support in Dragonflow

2016-02-10 Thread Omer Anson
Hello,

We're in the process of adding IGMP and multicast support to Dragonflow [1]. The
added feature will route multicast packets only to relevant, registered VMs and
compute nodes, and will handle IGMP packets.

This feature has some configuration parameters for subnets and router 
interfaces.
Examples of these parameters are whether the subnet supports multicast, and the
Query Interval of the router.

Is anyone else working on such a feature? Are there any specific parameters
that should be included, in addition to multicast enable/disable on subnets,
and the parameters in Section 8 in [2]?
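
For discussion purposes, here is a sketch of what such a per-router
configuration could look like, with defaults taken from RFC 3376 Section 8
(the names are made up for illustration, not a proposed Neutron API):

    # Hypothetical per-router IGMP querier settings; defaults follow
    # RFC 3376 Section 8.
    IGMP_DEFAULTS = {
        'robustness_variable': 2,         # Section 8.1
        'query_interval': 125,            # seconds, Section 8.2
        'query_response_interval': 10,    # seconds, Section 8.3
        'last_member_query_interval': 1,  # seconds, Section 8.8
    }

    def group_membership_interval(cfg):
        # Derived per Section 8.4: Robustness Variable x Query Interval,
        # plus one Query Response Interval.
        return (cfg['robustness_variable'] * cfg['query_interval']
                + cfg['query_response_interval'])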

Thanks,
Omer Anson.

[1] https://review.openstack.org/#/c/278400/
[2] https://tools.ietf.org/html/rfc3376




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread Sean Dague
On 02/10/2016 08:42 AM, David Moreau Simard wrote:
> Lots of questions, I'm sorry.
> 
> Are you planning to drop them indefinitely or is it temporary ? Is it to
> help alleviate the gate from it's current misery ?
> 
> Why were these tests introduced in the first place ? To find issues or
> bottenecks relative to scale or amount of operations ? Was it a request
> from the operator community ?
> 
> I have a strong feeling there is a very real need for *something* that
> is able to find silly issues that only manifest themselves beyond the
> scale of one VM before we ship something to the operator community.

Permanently.

A test suite is only useful if it gives you a set of bread crumbs to go
from fail to fix, and it's predictable enough to believe the results are
real.

Macro performance testing is not possible in the environment we function
in, because a 10x performance regression in one operation under the
covers gets smoothed out to a 5 or 10% variance at the macro level.

Which we can't detect. Over time we find things failing a bit more and
people bump the timeouts or reduce the parallelism.

When this job was first created, no one was looking at performance at
all. It was a minor stop gap to catch a class of issues. Since then we've
grown db performance testing, rally, and the current performance team.
Lots more people are running performance analysis in their downstream QA
teams and providing feedback back.

The Neutron team stopped running this job a while ago because it was
just noise. And I agree with their call there. We should do the same
across OpenStack.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread Ihar Hrachyshka

Sean Dague  wrote:


The largeops tests at this point are mostly finding out that some of our
new cloud providers are slow - http://tinyurl.com/j5u4nf5

This is fundamentally a performance test, with timings having been tuned
to pass 98% of the time on two clouds that were very predictable in
performance. We're now running on 4 clouds, and the variance between
them all, and between every run on each can be as much as a factor of 2.

We could just bump all the timeouts again, but that's basically the same
thing as dropping them.

These tests are not instrumented in a way that any real solution can be
addressed in most cases. Tests without a path forward, that are failing
good patches a lot, are very much the kind of thing we should remove
from the system.


+1. Now that we have SLA rally scenarios [1] available to us, there is no  
real reason to keep a generic large-ops job that is hard to make sense of  
once it fails.


[1]  
http://rally.readthedocs.org/en/latest/tutorial/step_4_adding_success_criteria_for_benchmarks.html


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Distributed Multicast and IGMP Support in Dragonflow

2016-02-10 Thread Gal Sagie
This is work we are doing in Dragonflow; it is relatively simple to implement
leveraging Dragonflow's current infrastructure and reactive local controller.

This will significantly reduce packet duplication for IGMP-aware VMs, and
hence reduce the multicast load.

We would love to help bring this API to Neutron core; anyone who is
interested is welcome to check the spec Omer published. Feel free to join us
in our next IRC meeting, where we will talk about this design.

Thanks
Gal.

On Wed, Feb 10, 2016 at 4:59 PM, Omer Anson 
wrote:

> Hello,
>
> We're in the process of adding IGMP and multicast support to
> Dragonflow[1]. The
> added feature will route multicast packets only to relevant and registered
> VMs,
> compute nodes, and handle IGMP packets.
>
> This feature has some configuration parameters for subnets and router
> interfaces.
> Some of these parameters are, e.g. if the subnet supports multicast, and
> the
> Query Interval of the router.
>
> Is anyone else working on such a feature? Are there any specific parameters
> that should be included, in addition to multicast enable/disable on
> subnets,
> and the parameters in Section 8 in [2]?
>
> Thanks,
> Omer Anson.
>
> [1] https://review.openstack.org/#/c/278400/
> [2] https://tools.ietf.org/html/rfc3376
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] HDFS CI broken

2016-02-10 Thread Ben Swartzlander
The gate-manila-tempest-dsvm-hdfs jenkins job has been failing for a 
long time now. It appears to be a config issue that's probably not hard 
to fix, but nobody is actively maintaining this code.


Since it's a waste of resources to continue running this broken job, I 
plan to disable it, and if nobody wants to volunteer to get it working 
again, we will need to take the HDFS driver out of the tree in Mitaka, 
since we can't ensure its quality without the CI job.


I really don't like removing drivers, especially fully open-source 
drivers, but we have too many other priorities this release to be 
distracted by fixing this kind of thing. If this driver is something 
people actively use and find valuable, then it should not be hard to 
find a volunteer to fix it.


-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Stable branch policy for Mitaka

2016-02-10 Thread Steven Hardy
Hi all,

We discussed this in our meeting[1] this week, and agreed a ML discussion
to gain consensus and give folks visibility of the outcome would be a good
idea.

In summary, we adopted a more permissive "release branch" policy[2] for our
stable/liberty branches, where feature backports would be allowed, provided
they worked with liberty and didn't break backwards compatibility.

The original idea was really to provide a mechanism to "catch up" where
features are added e.g to liberty OpenStack components late in the cycle
and TripleO requires changes to integrate with them.

However, the reality has been that the permissive backport policy has been
somewhat abused (IMHO) with a large number of major features being proposed
for backport, and in a few cases this has broken downstream (RDO) consumers
of TripleO.

Thus, I would propose that from Mitaka, we revise our backport policy to
simply align with the standard stable branch model observed by all
projects[3].

Hopefully this will allow us to retain the benefits of the stable branch
process, but provide better stability for downstream consumers of these
branches, and minimise confusion regarding what is a permissable backport.

If we do this, only backports that can reasonably be considered
"Appropriate fixes"[4] will be valid backports - in the majority of cases
this will mean bugfixes only, and large features where the risk of
regression is significant will not be allowed.

What are people's thoughts on this?

Thanks,

Steve

[1] 
http://eavesdrop.openstack.org/meetings/tripleo/2016/tripleo.2016-02-09-14.01.log.html
[2] 
https://github.com/openstack/tripleo-specs/blob/master/specs/liberty/release-branch.rst
[3] http://docs.openstack.org/project-team-guide/stable-branches.html
[4] 
http://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-10 Thread Thierry Carrez

Chris Dent wrote:

[...]
Observing this thread and "the trouble with names"[1] one I get
concerned that we're trending in the direction of expecting
projects/servers/APIs to be done and perfect before they will ever
be OpenStack. This, of course, runs entirely contrary to the spirit
of open source where people release a solution to their itch and
people join with them to make it better.

If we start thinking of projects as needing to have "production-grade"
implementations and APIs as needing to be stable and correct from
the start we're backing ourselves into corners that are very difficult
to get out of, distracting ourselves from the questions we ought to be
asking, and putting barriers in the way of doing new but necessary
stuff and evolving.


I certainly didn't intend to mean that projects need to have a final API 
or perfect implementation before they can join the tent. I meant that 
projects need to have a reference implementation using open source tools 
that has a chance of being used in production one day. Imagine a project 
which uses sqlite in testing but requires Oracle DB to achieve full 
functionality or scaling beyond one user: the sqlite backend would be a 
token open backend for testing purposes but real usage would need you to 
buy into proprietary options. That would certainly be considered "open 
core": a project that pretends to be open but requires proprietary 
technology to be "really used".


Now it's not that clear cut and a lot of things fall in the grey area: 
on one side you have proprietary backends that may offer better 
performance -- at which point should we consider that "better 
performance" means nobody would seriously use the open source backend ? 
On the other side you have corner cases like Poppy where the 
"proprietary service" it plugs into is difficult to replicate since it's 
as much physical infrastructure than software.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Multi release packages

2016-02-10 Thread Stanislaw Bogatkin
It changes mostly nothing for the case of furious plugin development, where
big parts of the code change from one release to another.

You will have 6 different deployment_tasks directories and 30 slightly
different files in the root directory of the plugin. Also, you forgot about the
repositories directory (+6 at least), pre_build hooks (also 6) and so on.
It will look like hell after just 3 years of development.

Also, I can't imagine how to deal with plugin licensing if you have Apache
for the liberty release but BSD for the mitaka release, for example.

A much easier way to develop a plugin is to keep its source in a VCS like Git
and just make a branch for every Fuel release. That gives us the
opportunity to not store a bunch of similar but slightly different
files in the repo. There is no reason to drag along all the different versions
of the code for a specific release.


On the other hand, there is a pro: your plugin can survive an upgrade if it
supports the new release, with no changes needed.

On Wed, Feb 10, 2016 at 4:04 PM, Alexey Shtokolov 
wrote:

> Fuelers,
>
> We are discussing the idea to extend the multi release packages for
> plugins.
>
> Fuel plugin builder (FPB) can create one rpm-package for all supported
> releases (from metadata.yaml) but we can specify only deployment scripts
> and repositories per release.
>
> Current release definition (in metadata.yaml):
> - os: ubuntu
>   version: liberty-8.0
>   mode: ['ha']
>   deployment_scripts_path: deployment_scripts/
>   repository_path: repositories/ubuntu
>
> So the idea [0] is to make releases fully configurable.
> Suggested changes for release definition (in metadata.yaml):
>   components_path: components_liberty.yaml
>   deployment_tasks_path: deployment_tasks_liberty/ # <- folder
>   environment_config_path: environment_config_liberty.yaml
>   network_roles_path: network_roles_liberty.yaml
>   node_roles_path: node_roles_liberty.yaml
>   volumes_path: volumes_liberty.yaml
>
> I see the issue: if we change anything for one release (e.g.
> deployment_task typo) revalidation is needed for all releases.
>
> Your Pros and cons please?
>
> [0] https://review.openstack.org/#/c/271417/
> ---
> WBR, Alexey Shtokolov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
with best regards,
Stan.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack][Neutron][Monasca]

2016-02-10 Thread Rubab Syed
Hi,

I'm doing a university project in OpenStack. The aim is to monitor virtual
routers per tenant with Monasca (which, to my knowledge, hasn't been
done yet). The initial features would include monitoring of in/out
traffic per interface. I'm writing a plugin in Monasca for that purpose. If
I'm not wrong, I can fetch the data about routers sitting on different
compute nodes (in the DVR case) running monasca-agent from the Neutron
database, but I will have to devise a mechanism to filter traffic based on
tenants and subnets. Is there something already implemented in Neutron that I
can use for this purpose?
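
Something like the following is what I had in mind for the Neutron side (a
rough sketch only; the credentials and tenant ID are placeholders):

    from neutronclient.v2_0 import client

    TENANT_ID = 'TENANT_UUID_HERE'  # placeholder

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # Routers are tenant-scoped, so they can be filtered directly.
    for router in neutron.list_routers(tenant_id=TENANT_ID)['routers']:
        # Router interfaces are Neutron ports with the router as their
        # device, so per-interface traffic could be attributed via the
        # port IDs below.
        for port in neutron.list_ports(device_id=router['id'])['ports']:
            print(router['name'], port['id'], port['fixed_ips'])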

Also, I would really appreciate it if you could tell me some use cases for
per-tenant monitoring of OpenStack's virtual routers with Monasca.

Thanks,
Rubab
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-10 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2016-02-10 08:35:19 -0800:
> Chris Dent wrote:
> > [...]
> > Observing this thread and "the trouble with names"[1] one I get
> > concerned that we're trending in the direction of expecting
> > projects/servers/APIs to be done and perfect before they will ever
> > be OpenStack. This, of course, runs entirely contrary to the spirit
> > of open source where people release a solution to their itch and
> > people join with them to make it better.
> >
> > If we start thinking of projects as needing to have "production-grade"
> > implementations and APIs as needing to be stable and correct from
> > the start we're backing ourselves into corners that are very difficult
> > to get out of, distracting ourselves from the questions we ought to be
> > asking, and putting barriers in the way of doing new but necessary
> > stuff and evolving.
> 
> I certainly didn't intend to mean that projects need to have a final API 
> or perfect implementation before they can join the tent. I meant that 
> projects need to have a reference implementation using open source tools 
> that has a chance of being used in production one day. Imagine a project 
> which uses sqlite in testing but requires Oracle DB to achieve full 
> functionality or scaling beyond one user: the sqlite backend would be a 
> token open backend for testing purposes but real usage would need you to 
> buy into proprietary options. That would certainly be considered "open 
> core": a project that pretends to be open but requires proprietary 
> technology to be "really used".
> 
> Now it's not that clear cut and a lot of things fall in the grey area: 
> on one side you have proprietary backends that may offer better 
> performance -- at which point should we consider that "better 
> performance" means nobody would seriously use the open source backend ? 
> On the other side you have corner cases like Poppy where the 
> "proprietary service" it plugs into is difficult to replicate since it's 
> as much physical infrastructure than software.
> 

What you say above resonates with me Thierry, I think this is a gray area,
and I think we have a TC to provide well informed value judgments for
this gray area while we seek to bring the dark and light sides closer
together so there are fewer shades of gray to judge.

For what it's worth, I think Poppy is perhaps soundly in the very middle
of the gray area. There's nothing about it that smells like a poorly
behaved free rider grab for free development resources, nor does it feel
like a lock-in attempt. It legitimately feels like something that
enables OpenStack users to consume services that sit outside of OpenStack.

So, for me, the users are served by this project developing closely with
OpenStack, even if there's no viable free way to consume it. I think
Neutron tap danced through this gray area early on, and now has reached
a point where it is clearly in the light side of things, with _multiple_
free drivers that are completely viable. So, let's let Poppy tap dance,
and let's keep making sure our TC is well informed and makes well-thought-out
decisions like this one, even if they are hard.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][stable] Inappropriate changes backported to stable/liberty

2016-02-10 Thread Matt Riedemann



On 1/29/2016 12:08 AM, Renat Akhmerov wrote:



On 28 Jan 2016, at 17:40, Matt Riedemann  wrote:

With regards to the trove 'plugin' stuff, it adds a new dependency on 
python-troveclient which was not in mistral 1.0.0 in liberty GA, so IMO it's not 
valid to release that in 1.0.1 and expect people to have to start packaging and 
picking up python-troveclient in a point release. That should be reverted.


Ok Matt, I agree, we’ll revert it.


WRT the security stuff, I guess I wouldn't consider new functionality for 
security as a feature, but it depends on the implementation, I suppose; I don't 
know the details.


We'll see how it looks once it's implemented and make a decision on how to 
handle it. If needed, I'll bring it up here.

Renat Akhmerov
@ Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Here is another revert for stable/liberty:

https://review.openstack.org/#/c/278521/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS bootstrap image retirement

2016-02-10 Thread Vladimir Kozhukalov
Colleagues,

The CentOS bootstrap image code (which we used to build together with the ISO)
has been removed from fuel-main. Now the only available option is the
Ubuntu-based bootstrap image, which is built on the master node at run time.
From this moment on, we are ready to stop building Fuel packages at ISO
build time and instead download them from the Fuel Packaging CI.




Vladimir Kozhukalov

On Wed, Feb 3, 2016 at 6:03 PM, Vladimir Kuklin 
wrote:

> +1
>
>
> On Wed, Feb 3, 2016 at 4:45 PM, Igor Kalnitsky 
> wrote:
>
>> No objections from my side. Let's do it.
>>
>> On Tue, Feb 2, 2016 at 8:35 PM, Dmitry Klenov 
>> wrote:
>> > Hi Sergey,
>> >
>> > I fully support this idea. It was our plan as well when we were
>> developing
>> > Ubuntu Bootstrap feature. So let's proceed with CentOS bootstrap
>> removal.
>> >
>> > BR,
>> > Dmitry.
>> >
>> > On Tue, Feb 2, 2016 at 2:55 PM, Sergey Kulanov 
>> > wrote:
>> >>
>> >> Hi Folks,
>> >>
>> >> I think it's time to declare CentOS bootstrap image retirement.
>> >> Since Fuel 8.0 we've switched to Ubuntu bootstrap image usage [1, 2],
>> >> and the CentOS one became deprecated, so in Fuel 9.0 we can freely
>> >> remove it [2].
>> >> For now we are building the CentOS bootstrap image together with the
>> >> ISO and then packaging it into an rpm [3], so by removing
>> >> fuel-bootstrap-image [3] we:
>> >>
>> >> * simplify the patching/update story, since we don't need to
>> >>   rebuild/deliver this package on changes in dependent packages [4].
>> >>
>> >> * speed up the ISO build process, since building the CentOS bootstrap
>> >>   image takes ~20% of build-iso time.
>> >>
>> >> We've prepared a related blueprint for this change [5] and a spec [6].
>> >> We also have some draft patchsets [7] which passed BVT tests.
>> >>
>> >> So the next steps are:
>> >> * get feedback by reviewing the spec/patches;
>> >> * remove related code from the rest of the Fuel projects (fuel-menu,
>> >>   fuel-devops, fuel-qa).
>> >>
>> >>
>> >> Thank you
>> >>
>> >>
>> >> [1]
>> >>
>> https://specs.openstack.org/openstack/fuel-specs/specs/7.0/fuel-bootstrap-on-ubuntu.html
>> >> [2]
>> >>
>> https://specs.openstack.org/openstack/fuel-specs/specs/8.0/dynamically-build-bootstrap.html
>> >> [3]
>> >>
>> https://github.com/openstack/fuel-main/blob/master/packages/rpm/specs/fuel-bootstrap-image.spec
>> >> [4]
>> >>
>> https://github.com/openstack/fuel-main/blob/master/bootstrap/module.mk#L12-L50
>> >> [5]
>> >>
>> https://blueprints.launchpad.net/fuel/+spec/remove-centos-bootstrap-from-fuel
>> >> [6] https://review.openstack.org/#/c/273159/
>> >> [7]
>> >>
>> https://review.openstack.org/#/q/topic:bp/remove-centos-bootstrap-from-fuel
>> >>
>> >>
>> >> --
>> >> Sergey
>> >> DevOps Engineer
>> >> IRC: SergK
>> >> Skype: Sergey_kul
>> >>
>> >>
>> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-02-10 Thread Sean M. Collins
Ihar Hrachyshka wrote:
> UPD: seems like enforcing instance mtu to 1400 indeed makes us pass forward
> into tempest:
> 
> http://logs.openstack.org/59/265759/3/experimental/gate-grenade-dsvm-neutron-multinode/a167a59/console.html
> 
> And there are only three failures there:
> 
> http://logs.openstack.org/59/265759/3/experimental/gate-grenade-dsvm-neutron-multinode/a167a59/console.html#_2016-01-11_11_58_47_945
> 
> I also don’t see any RPC versioning related traces in service logs, which is
> a good sign.
> 

Just an update - we are still stuck on those three tempest tests.

I was able to dig a bit and it looks like it's still an MTU issue.


http://logs.openstack.org/35/187235/14/experimental/gate-grenade-dsvm-neutron-multinode/c5eda62/logs/tempest.txt.gz#_2016-02-09_20_37_40_044

"SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by 
peer"

I tried pushing down a patch to cram network_device_mtu down to 1450 in
the hopes that it would do the trick - but sadly that didn't fix it. I'm
going to have to keep digging. I am almost certain it's something that
Matt K (Sam-I-Am) has already made note of in his research.


-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-02-10 Thread Ihar Hrachyshka

Sean M. Collins  wrote:


Ihar Hrachyshka wrote:
UPD: seems like enforcing instance mtu to 1400 indeed makes us pass forward
into tempest:

http://logs.openstack.org/59/265759/3/experimental/gate-grenade-dsvm-neutron-multinode/a167a59/console.html

And there are only three failures there:

http://logs.openstack.org/59/265759/3/experimental/gate-grenade-dsvm-neutron-multinode/a167a59/console.html#_2016-01-11_11_58_47_945

I also don't see any RPC versioning related traces in service logs, which is
a good sign.


Just an update - we are still stuck on those three tempest tests.

I was able to dig a bit and it looks like it's still an MTU issue.


http://logs.openstack.org/35/187235/14/experimental/gate-grenade-dsvm-neutron-multinode/c5eda62/logs/tempest.txt.gz#_2016-02-09_20_37_40_044

"SSHException: Error reading SSH protocol banner[Errno 104] Connection  
reset by peer”


Note that this time we get reset immediately instead of being stuck there  
until timeout.




I tried pushing down a patch to cram network_device_mtu down to 1450 in
the hopes that it would do the trick - but sadly that didn't fix it. I'm


Actually, we already have 1450 for network_device_mtu for the job since:

https://review.openstack.org/#/c/267847/4/devstack-vm-gate.sh

Also, I added some interface state dumping to worlddump, and here is what
the main node networking setup looks like:


http://logs.openstack.org/59/265759/20/experimental/gate-grenade-dsvm-neutron-multinode/d64a6e6/logs/worlddump-2016-01-30-164508.txt.gz

br-ex: mtu = 1450
inside router: qg mtu = 1450, qr = 1450

So we should be fine in this regard. I also set up devstack locally enforcing
network_device_mtu, and it seems to pass packets of size 1450 through. So it's
probably the tunneling of packets to the subnode that fails for us, not the
local router-to-tap bits.
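
For anyone who wants to poke at the subnode path directly, a throwaway
probe (Linux-only sketch; the helper is invented): send DF-flagged UDP
packets and watch for EMSGSIZE, which the kernel raises once the IP packet
exceeds the discovered path MTU (possibly only on a second send, after the
ICMP too-big message comes back):

    import socket

    def fits(host, size):
        # True if an IPv4 UDP packet of `size` bytes (including 28 bytes
        # of IP + UDP headers) can leave with DF set without EMSGSIZE.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP,
                     socket.IP_MTU_DISCOVER, socket.IP_PMTUDISC_DO)
        s.connect((host, 9))  # discard port; only send() success matters
        try:
            s.send(b'x' * (size - 28))
            return True
        except OSError:       # EMSGSIZE: too big for the discovered PMTU
            return False

    # e.g. compare fits(subnode_ip, 1450) vs fits(subnode_ip, 1500)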


I also see br-tun having 1500. Is it a problem? Probably not, but I admit I'm
still missing a lot of context on this topic.


Also, I see a qg-2c68fb65-21 device in the worlddump output from above in the
global namespace. The device has mtu = 1500. Which router does it belong to?




going to have to keep digging. I am almost certain it's something that
Matt K (Sam-I-Am) has already made note of in his research.


Actually, I don't think Matt ran any tests with an MTU reduced from the
'standard' 1500 size. It would be interesting to see how it goes in his lab
with the limited MTU size we use in the gate.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread Clint Byrum
Excerpts from Sean Dague's message of 2016-02-10 04:33:44 -0800:
> The largeops tests at this point are mostly finding out that some of our
> new cloud providers are slow - http://tinyurl.com/j5u4nf5
> 
> This is fundamentally a performance test, with timings having been tuned
> to pass 98% of the time on two clouds that were very predictable in
> performance. We're now running on 4 clouds, and the variance between
> them all, and between every run on each can be as much as a factor of 2.
> 
> We could just bump all the timeouts again, but that's basically the same
> thing as dropping them.
> 
> These tests are not instrumented in a way that any real solution can be
> addressed in most cases. Tests without a path forward, that are failing
> good patches a lot, are very much the kind of thing we should remove
> from the system.
> 

I think we need to replace this with something that measures work
counters, not clock time. As you say, some of the other test suites
out there already pick up a lot of this slack. I'm also working on this
with the counter-inspection spec, so hopefully dropping largeops now
won't leave too much of a gap in coverage while we ramp up
counter-inspection.
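
To make that concrete, the shape I have in mind is roughly this (sketch
only; the counter source and the numbers are invented stand-ins for
whatever instrumentation we end up with):

    # Sketch: fail on counted work regressing, not on elapsed seconds,
    # since operation counts stay stable across fast and slow clouds.
    BASELINE = {'db_queries': 120, 'rpc_calls': 35}  # invented numbers
    TOLERANCE = 0.10  # allow 10% growth before failing

    def check_counters(observed):
        for name, baseline in BASELINE.items():
            if observed[name] > baseline * (1 + TOLERANCE):
                raise AssertionError('%s regressed: %d > %d'
                                     % (name, observed[name], baseline))

    # observed = get_counters()  # hypothetical instrumentation hook
    # check_counters(observed)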

+1 to getting rid of it now, as instability in the test suites, which
slows down development velocity, is worse than missing a few performance
regressions in the corners.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-02-10 Thread Sean M. Collins
Ihar Hrachyshka wrote:
> Actually, we already have 1450 for network_device_mtu for the job since:
> 
> https://review.openstack.org/#/c/267847/4/devstack-vm-gate.sh
> 

Ah! Forgot about that one. Cool.

> Also, I added some interface state dump for worlddump, and here is how the
> main node networking setup looks like:
> 
> http://logs.openstack.org/59/265759/20/experimental/gate-grenade-dsvm-neutron-multinode/d64a6e6/logs/worlddump-2016-01-30-164508.txt.gz
> 
> br-ex: mtu = 1450
> inside router: qg mtu = 1450, qr = 1450
> 
> So should be fine in this regard. I also set devstack locally enforcing
> network_device_mtu, and it seems to pass packets of 1450 size through. So
> it’s probably something tunneling packets to the subnode that fails for us,
> not local router-to-tap bits.

Yeah! That's right. So is it the case that we need to do 1500 less the
GRE overhead less the VXLAN overhead? So 1446? Since the traffic gets
encapsulated in VXLAN, then encapsulated in GRE (yo dawg, I heard u like
tunneling).

http://baturin.org/tools/encapcalc/
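
For reference, that calculator's arithmetic looks like this (the per-layer
overheads below are the textbook numbers; the exact answer depends on which
headers actually stack in the gate):

    # VXLAN: 14 outer Ethernet + 20 outer IPv4 + 8 UDP + 8 VXLAN = 50
    # GRE over IPv4: 20 outer IPv4 + 4-byte base GRE header = 24
    #                (more if GRE options like keys are in use)
    OVERHEAD = {'vxlan': 50, 'gre': 24}

    def inner_mtu(outer_mtu, *encaps):
        for e in encaps:
            outer_mtu -= OVERHEAD[e]
        return outer_mtu

    print(inner_mtu(1500, 'vxlan'))         # 1450
    print(inner_mtu(1500, 'vxlan', 'gre'))  # 1426, if both layers stack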


> 
> I also see br-tun having 1500. Is it a problem? Probably not, but I admit I
> miss a lot in this topic so far.

Dunno. Maybe?

> Also I see some qg-2c68fb65-21 device in the worlddump output from above in
> global namespace. The device has mtu = 1500. Which router does the device
> belong to?..

Good question.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] ALLOWED_EXTRA_MISSING in cover.sh

2016-02-10 Thread John Spray
Hi,

I noticed that the coverage script is enforcing a hard limit of 4 on
the number of extra missing lines introduced.  We have a requirement
that new drivers have 90% unit test coverage, which the ceph driver
meets [1], but it's tripping up on that absolute 4-line limit.

What do folks think about tweaking the script to do a different
calculation, like identifying new files and permitting 10% of the line
count of the new files to be missed?  Otherwise I think the 90% target
is going to continually conflict with the manila-coverage CI task.
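
Roughly, the calculation I'm picturing is the following (a sketch only;
it assumes the per-file totals that the coverage report already has):

    ALLOWED_NEW_FILE_MISS = 0.10  # permit 10% of a new file uncovered

    def coverage_ok(new_files):
        # new_files: {filename: (total_lines, missed_lines)}, taken
        # from the coverage report for files introduced by the change.
        ok = True
        for name, (total, missed) in sorted(new_files.items()):
            if missed > total * ALLOWED_NEW_FILE_MISS:
                print('%s: %d of %d lines uncovered'
                      % (name, missed, total))
                ok = False
        return ok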

Cheers,
John

1. 
http://logs.openstack.org/11/270211/19/check/manila-coverage/47b79d2/cover/manila_share_drivers_cephfs_py.html
2. 
http://logs.openstack.org/11/270211/19/check/manila-coverage/47b79d2/console.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Stable branch policy for Mitaka

2016-02-10 Thread James Slagle
On Wed, Feb 10, 2016 at 4:57 PM, Steven Hardy  wrote:

> Hi all,
>
> We discussed this in our meeting[1] this week, and agreed a ML discussion
> to gain consensus and give folks visibility of the outcome would be a good
> idea.
>
> In summary, we adopted a more permissive "release branch" policy[2] for our
> stable/liberty branches, where feature backports would be allowed, provided
> they worked with liberty and didn't break backwards compatibility.
>
> The original idea was really to provide a mechanism to "catch up" where
> features are added e.g to liberty OpenStack components late in the cycle
> and TripleO requires changes to integrate with them.
>
> However, the reality has been that the permissive backport policy has been
> somewhat abused (IMHO) with a large number of major features being proposed
> for backport, and in a few cases this has broken downstream (RDO) consumers
> of TripleO.
>
> Thus, I would propose that from Mitaka, we revise our backport policy to
> simply align with the standard stable branch model observed by all
> projects[3].
>
> Hopefully this will allow us to retain the benefits of the stable branch
> process, but provide better stability for downstream consumers of these
> branches, and minimise confusion regarding what is a permissible backport.
>
> If we do this, only backports that can reasonably be considered
> "Appropriate fixes"[4] will be valid backports - in the majority of cases
> this will mean bugfixes only, and large features where the risk of
> regression is significant will not be allowed.
>
> What are peoples thoughts on this?
>

I'm in agreement. I think this change is needed and will help set better
expectations around what will be included in which release.

If we adopt this as the new policy, then the immediate followup is to set
and communicate when we'll be cutting the stable branches, so that it's
understood when the features have to be done/committed. I'd suggest that we
more or less completely adopt the integrated release schedule [1], which I
believe means cutting the stable/mitaka branches the week of RC1,
March 14th-18th.

It seems to follow logically that we'd then want to be more aggressively
aligned with other integrated release events, such as the feature freeze
date, Feb 29th - March 4th.

An alternative to strictly following the schedule would be to say that
TripleO lags the integrated release dates by some number of weeks (1 or 2,
I'd think), to allow for some "catch-up" time, since TripleO is often
consuming features from projects that are part of the integrated release.


[1] http://releases.openstack.org/mitaka/schedule.html



>
> Thanks,
>
> Steve
>
> [1]
> http://eavesdrop.openstack.org/meetings/tripleo/2016/tripleo.2016-02-09-14.01.log.html
> [2]
> https://github.com/openstack/tripleo-specs/blob/master/specs/liberty/release-branch.rst
> [3] http://docs.openstack.org/project-team-guide/stable-branches.html
> [4]
> http://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- James Slagle
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread Tim Bell

On 10/02/16 18:48, "Clint Byrum"  wrote:

>Excerpts from Sean Dague's message of 2016-02-10 04:33:44 -0800:
>> The largeops tests at this point are mostly finding out that some of our
>> new cloud providers are slow - http://tinyurl.com/j5u4nf5
>> 
>> This is fundamentally a performance test, with timings having been tuned
>> to pass 98% of the time on two clouds that were very predictable in
>> performance. We're now running on 4 clouds, and the variance between
>> them all, and between every run on each can be as much as a factor of 2.
>> 
>> We could just bump all the timeouts again, but that's basically the same
>> thing as dropping them.
>> 
>> These tests are not instrumented in a way that any real solution can be
>> addressed in most cases. Tests without a path forward, that are failing
>> good patches a lot, are very much the kind of thing we should remove
>> from the system.
>> 
>
>I think we need to replace this with something that measures work
>counters, and not clock time. As you say, some of the other test suites
>out there already pick up a lot of this slack too. Also, I'm working at
>this as well with the counter-inspection spec, so hopefully dropping
>this now won't leave too much of a gap in coverage while we ramp up
>counter-inspection.
>
>+1 to getting rid of it now, as instability in the test suites, which
>slows down development velocity, is worse than missing a few performance
>regressions in the corners.

I believe there is now a performance working group as part of the large
deployments team (https://wiki.openstack.org/wiki/Performance_Team). Have
they been contacted to determine the alternative scenarios and how to
strengthen the testing?

Tim

>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Use of restricted and multiverse in the gate

2016-02-10 Thread Jeremy Stanley
On 2016-02-09 01:32:12 +0800 (+0800), Thomas Goirand wrote:
> While it is a good idea to enhance the current Ubuntu image, at the same
> time, I'd like to draw your attention that we need review for adding the
> Debian image too:
> https://review.openstack.org/#/c/264726
> 
> Igor Belikov did an amazing job at it, let's please not get this stuck
> because no core reviewers are helping.

Absolutely! I'm excited about this, but was holding off reviewing
for the past week while waiting for Igor to make sure it's booting
correctly in Rackspace.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ALLOWED_EXTRA_MISSING in cover.sh

2016-02-10 Thread Knight, Clinton
Hi, John.  This is but one reason the coverage job doesn't vote; it has
other known issues.  It is primarily a convenience tool that lets core
reviewers know if they should look more deeply into unit test coverage.
For a new driver such as yours, I typically pull the code and check
coverage for each new file in PyCharm rather than relying on the coverage
job.  Feel free to propose enhancements to the job, though.

Clinton


On 2/10/16, 1:02 PM, "John Spray"  wrote:

>Hi,
>
>I noticed that the coverage script is enforcing a hard limit of 4 on
>the number of extra missing lines introduced.  We have a requirement
>that new drivers have 90% unit test coverage, which the ceph driver
>meets[1], but it's tripping up on that absolute 4 line limit.
>
>What do folks think about tweaking the script to do a different
>calculation, like identifying new files and permitting 10% of the line
>count of the new files to be missed?  Otherwise I think the 90% target
>is going to continually conflict with the manila-coverage CI task.
>
>Cheers,
>John
>
>1. 
>http://logs.openstack.org/11/270211/19/check/manila-coverage/47b79d2/cover
>/manila_share_drivers_cephfs_py.html
>2. 
>http://logs.openstack.org/11/270211/19/check/manila-coverage/47b79d2/conso
>le.html
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ALLOWED_EXTRA_MISSING in cover.sh

2016-02-10 Thread Valeriy Ponomaryov
Hello, John

Note that the number "4" counts "python code blocks", not "python code
lines". So you could have an uncovered log message that spans 100 lines,
but it would be counted as just 1.
Also, who is the "we" that has a requirement that new drivers have 90% unit
test coverage?
And the Manila CI coverage job is non-voting, so you are not blocked by it.
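
A quick illustration of the block-vs-line distinction (invented code):

    import logging
    LOG = logging.getLogger(__name__)

    def report(values, rare_condition=False):
        if rare_condition:
            # If tests never hit this branch, coverage reports ONE missed
            # block here, even though the statement spans four source lines.
            LOG.error('something went wrong: %(a)s %(b)s %(c)s',
                      {'a': values[0],
                       'b': values[1],
                       'c': values[2]})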

On Wed, Feb 10, 2016 at 8:30 PM, Knight, Clinton 
wrote:

> Hi, John.  This is but one reason the coverage job doesn¹t vote; it has
> other known issues.  It is primarily a convenience tool that lets core
> reviewers know if they should look more deeply into unit test coverage.
> For a new driver such as yours, I typically pull the code and check
> coverage for each new file in PyCharm rather than relying on the coverage
> job.  Feel free to propose enhancements to the job, though.
>
> Clinton
>
>
> On 2/10/16, 1:02 PM, "John Spray"  wrote:
>
> >Hi,
> >
> >I noticed that the coverage script is enforcing a hard limit of 4 on
> >the number of extra missing lines introduced.  We have a requirement
> >that new drivers have 90% unit test coverage, which the ceph driver
> >meets[1], but it's tripping up on that absolute 4 line limit.
> >
> >What do folks think about tweaking the script to do a different
> >calculation, like identifying new files and permitting 10% of the line
> >count of the new files to be missed?  Otherwise I think the 90% target
> >is going to continually conflict with the manila-coverage CI task.
> >
> >Cheers,
> >John
> >
> >1.
> >
> http://logs.openstack.org/11/270211/19/check/manila-coverage/47b79d2/cover
> >/manila_share_drivers_cephfs_py.html
> >2.
> >
> http://logs.openstack.org/11/270211/19/check/manila-coverage/47b79d2/conso
> >le.html
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [FwaaS] Meeting occurred early today by accident

2016-02-10 Thread Sean M. Collins
Hi,

I accidentally ran the meeting early because I had not re-downloaded a
fresh iCal export, so I had the wrong time.

For background: 
https://github.com/openstack-infra/yaml2ical/commit/4def663a8d5259962d5c2239266ebfaa19082bf6

So anyway, please ensure that your calendar is updated with a fresh
event from 
http://eavesdrop.openstack.org/calendars/firewall-as-a-service-fwaas-team-meeting.ics

We'll get the correct time next week. My apologies.
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][keystone][kolla][bandit] linters jobs

2016-02-10 Thread Andreas Jaeger
Hi,

The pep8 target is our usual target for style and lint checks, and thus
is used not only for pep8 but also for doc8, bashate, bandit, etc., as
documented in the PTI (Python Test Interface,
http://governance.openstack.org/reference/cti/python_cti.html).

We've had some discussions about introducing a new target called linters
as a better name for this, and when I mentioned it in a few discussions,
the idea resonated with these projects. Unfortunately, I missed the
relevance of the PTI for such a change - and changing the PTI to replace
pep8 with linters and then pushing that through to all projects is more
than I can commit to right now.

I apologize for being too eager. I will send patches for official
projects moving them back to pep8, so consider this a heads-up and
background for my incoming patches with topic "pti-pep8-linters".

If somebody else wants to do the whole conversion in the future, I can
give pointers on what to do.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] RFC - service naming registry under API-WG

2016-02-10 Thread Sean Dague
On 02/04/2016 06:38 AM, Sean Dague wrote:
> 2) Have a registry of "common" names.
> 
> Upside, we can safely use common names everywhere and not fear collision
> down the road.
> 
> Downside, yet another contention point.
> 
> A registry would clearly be under TC administration, though all the
> heavy lifting might be handed over to the API working group. I still
> imagine collision around some areas might be contentious.

We had a good discussion last week here on the list, and I think the
consensus was that:

1) We should use option #2 and have standard service types

2) The API Working Group was probably as good a place as any to own /
drive this.

I'd like to follow on with the following recommendations:

3) This be a dedicated repository 'openstack/service-registry'. The API
WG will have votes on it (I would also suggest that the folks who have been
working on Service Catalog TNG - myself, Anne Gentle, Brant Knudson, and
Chris Dent - be added to this). The actual registry will be some
structured file that supports comments (probably yaml).

4) We seed it with the 'well known' service types from current devstack.
Then we patch in services one at a time after that as requested.
Basically, sift through all the non-controversial stuff first. Let debate
happen on the more contentious ones later.

5) We'll build up guidelines in this repo about the kinds of service
type names which we think are good. We may designate some reserved words
that are too confusing in the OpenStack space to be used (policy comes
to mind).

If there are concerns with this approach, let me know. Otherwise I'll
propose the repo tomorrow and try to keep this ball rolling.
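
For a feel of the shape, something like this (purely illustrative; the
entries and comments are invented, and the real seed list would come from
devstack as described above):

    # openstack/service-registry (sketch)
    services:
      - type: compute        # reserved for the Nova API
        project: nova
      - type: image          # Glance
        project: glance
      - type: volume         # Cinder; versioned variants TBD
        project: cinder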

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-10 Thread Ed Leafe

On 02/05/2016 01:27 PM, Mike Perez wrote:

>>> So while Poppy may not fully qualify for the open core label,
>>> it still fails some of the tests that we want to see, such as a
>>> usable open source implementation.

>> From a QA perspective in gate, if we have to rely on a commercial
>> solution (even if donated) to test it, then that's bad.

Wasn't that the exact argument regarding needing to be able to boot a
Linux image? We can't reasonably test it without getting into
payment/licensing issues.

-- 
Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-10 Thread Ed Leafe

On 02/05/2016 01:16 PM, Sean Dague wrote:

> Whether or not it is, I'm not sure how it is part of a Ubiquitous
> Open Source Cloud Platform. Because it only enables the use of
> commerical services.
> 
> It's fine that it's open source software. I just don't think it's
> OpenStack.

Agreed. I don't think that everything that may be useful to consumers
of OpenStack has to be part of OpenStack, either.

-- 
Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] dropping KEYSTONE_CATALOG_BACKEND - plus update your devstack plugins

2016-02-10 Thread Sean Dague
Devstack has some half-baked support for the keystone templated service
catalog. In an effort to clean up parts of devstack, we're dropping that
- https://review.openstack.org/#/c/278333

However... this unfortunately led to some cargo-culting, and everyone's
devstack plugin is going to fail if we remove the
KEYSTONE_CATALOG_BACKEND variable -
http://codesearch.openstack.org/?q=KEYSTONE_CATALOG_BACKEND&i=nope&files=&repos=

We'll keep the variable around until Newton opens up. In Newton, we will
drop it. This is your warning to remove that if block and create your
service entries unconditionally.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - please review the neutron security guide

2016-02-10 Thread Carl Baldwin
On Tue, Feb 9, 2016 at 1:21 PM, Kevin Benton  wrote:
> If you see any issues, either propose a patch directly or file a bug against
> https://bugs.launchpad.net/openstack-manuals/+filebug with the tag
> 'seg-guide'

Did you want 'sec-guide'?  That would seem more intuitive to me.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-10 Thread gordon chung


On 10/02/2016 11:35 AM, Thierry Carrez wrote:
> Chris Dent wrote:
>> [...]
>> Observing this thread and "the trouble with names"[1] one I get
>> concerned that we're trending in the direction of expecting
>> projects/servers/APIs to be done and perfect before they will ever
>> be OpenStack. This, of course, runs entirely contrary to the spirit
>> of open source where people release a solution to their itch and
>> people join with them to make it better.
>>
>> If we start thinking of projects as needing to have "production-grade"
>> implementations and APIs as needing to be stable and correct from
>> the start we're backing ourselves into corners that are very difficult
>> to get out of, distracting ourselves from the questions we ought to be
>> asking, and putting barriers in the way of doing new but necessary
>> stuff and evolving.
>
> I certainly didn't intend to mean that projects need to have a final API
> or perfect implementation before they can join the tent. I meant that
> projects need to have a reference implementation using open source tools
> that has a chance of being used in production one day. Imagine a project
> which uses sqlite in testing but requires Oracle DB to achieve full
> functionality or scaling beyond one user: the sqlite backend would be a
> token open backend for testing purposes but real usage would need you to
> buy into proprietary options. That would certainly be considered "open
> core": a project that pretends to be open but requires proprietary
> technology to be "really used".

apologies if this was asked somewhere else in the thread, but should we try 
to define "production" scale, or can we even? based on the last survey, 
the vast majority of deployments are under 100 nodes [1]. that said, a few 
years ago, one company was dreaming of 100,000 nodes.

i'd imagine the 50-node solution won't satisfy the 1000-node solution, 
let alone the 10k-node one. similarly, the opposite direction will probably 
give an overkill solution. it seems somewhat difficult to define 
something against the 'production' term unless we scope it somehow (e.g. # of 
node ranges)?

[1] http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-10 Thread Fox, Kevin M
There are two main types of services in OpenStack. First, those that are a 
multitenant-aware implementation of some kind of data plane protocol with an 
"openstack" api: Swift/radosgw, Zaqar, MagnetoDB, etc. I think we can ignore 
these in this discussion.

Then there's what I consider the more "Operating System" style OpenStack 
services. They are like the Linux kernel subsystems: they provide a standard 
API and provide plugins to actually implement the guts of the requests, 
abstracting the request away from how it gets done.

(A few services do both; Neutron, for example, is a pure pluggable API but 
also provides a reference implementation driver that does SDN.)

So, the question, I think, is really for those "Operating System" style 
services: is it all right to have a standard, open source API with no free 
backing implementation currently available?

While not a perfectly direct comparison, let's look to the leading open 
source operating system for guidance. Are there any examples of an OS 
subsystem whose API has no purely open implementation behind it? Just off 
the top of my head, I'd say the InfiniBand subsystem. You can't use the API 
unless you buy hardware from someone and use the appropriate driver and 
proprietary firmware.

So there is precedent for it in the open source world. While being completely 
open is a great goal, I think there are rare cases where it makes sense to 
allow the abstraction and pluggable drivers without a current open backend.

Poppy is a very interesting edge case. CDNs are useful to users mostly because 
of the vast network of machines spread across the world that you can push 
content to. It's not the software users care about but the whole worldwide 
system made up of hardware, sysadmins, software, etc. You can almost think of 
it as a single piece of hardware provided by a vendor in this instance. I'm 
guessing that at present it's unlikely that any reference implementation of an 
open source piece of software would ever be that widely deployed. But as an 
optional API, it would be great if there were a single standard, vendor-neutral 
open source API, so that it doesn't just break down to each vendor providing 
their own proprietary api. The open/standard api is a much better thing for 
everyone. Most OSes accept this level of tradeoff.

Just my 2 cents.

Thanks,
Kevin

From: Ed Leafe [e...@leafe.com]
Sent: Wednesday, February 10, 2016 12:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] "No Open Core" in 2016


On 02/05/2016 01:16 PM, Sean Dague wrote:

> Whether or not it is, I'm not sure how it is part of a Ubiquitous
> Open Source Cloud Platform. Because it only enables the use of
> commerical services.
>
> It's fine that it's open source software. I just don't think it's
> OpenStack.

Agreed. I don't think that everything that may be useful to consumers
of OpenStack has to be part of OpenStack, either.

-- 
Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Being more aggressive with our defaults

2016-02-10 Thread Carl Baldwin
On Mon, Feb 8, 2016 at 9:40 AM, Assaf Muller  wrote:
> As for DVR, I'm searching for someone to pick up the gauntlet and
> contribute some L3 fullstack tests. I'd be more than happy to review
> it! I even have an abandoned patch that gets the ball rolling (The
> idea is to test L3 east/west, north/south with FIP and north/south
> without FIP for all four router types: Legacy, HA, DVR and DVR HA. You
> run the same test in four different configurations; fullstack is
> basically purpose-built for this).
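
For reference, the four-way matrix described above might look roughly like
this in fullstack's scenario style (a sketch only; the attribute names are
invented stand-ins for the real fullstack knobs):

    # One test body, four router configurations (testscenarios-style).
    scenarios = [
        ('legacy', {'agent_mode': 'legacy', 'l3_ha': False}),
        ('ha',     {'agent_mode': 'legacy', 'l3_ha': True}),
        ('dvr',    {'agent_mode': 'dvr',    'l3_ha': False}),
        ('dvr_ha', {'agent_mode': 'dvr',    'l3_ha': True}),
    ]
    # The same east/west and north/south (with and without FIP) checks
    # would then run once per entry above.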

This is something that we can discuss with the DVR team.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - please review the neutron security guide

2016-02-10 Thread Kevin Benton
Yes, sorry.
On Feb 10, 2016 12:52, "Carl Baldwin"  wrote:

> On Tue, Feb 9, 2016 at 1:21 PM, Kevin Benton  wrote:
> > If you see any issues, either propose a patch directly or file a bug
> against
> > https://bugs.launchpad.net/openstack-manuals/+filebug with the tag
> > 'seg-guide'
>
> Did you want 'sec-guide'?  That would seem more intuitive to me.
>
> Carl
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-10 Thread Tim Bell

On 10/02/16 21:53, "gordon chung"  wrote:

>
>
>On 10/02/2016 11:35 AM, Thierry Carrez wrote:
>> Chris Dent wrote:
>>> [...]
>>> Observing this thread and "the trouble with names"[1] one I get
>>> concerned that we're trending in the direction of expecting
>>> projects/servers/APIs to be done and perfect before they will ever
>>> be OpenStack. This, of course, runs entirely contrary to the spirit
>>> of open source where people release a solution to their itch and
>>> people join with them to make it better.
>>>
>>> If we start thinking of projects as needing to have "production-grade"
>>> implementations and APIs as needing to be stable and correct from
>>> the start we're backing ourselves into corners that are very difficult
>>> to get out of, distracting ourselves from the questions we ought to be
>>> asking, and putting barriers in the way of doing new but necessary
>>> stuff and evolving.
>>
>> I certainly didn't intend to mean that projects need to have a final API
>> or perfect implementation before they can join the tent. I meant that
>> projects need to have a reference implementation using open source tools
>> that has a chance of being used in production one day. Imagine a project
>> which uses sqlite in testing but requires Oracle DB to achieve full
>> functionality or scaling beyond one user: the sqlite backend would be a
>> token open backend for testing purposes but real usage would need you to
>> buy into proprietary options. That would certainly be considered "open
>> core": a project that pretends to be open but requires proprietary
>> technology to be "really used".
>
>apologies if this was asked somewhere else in thread, but should we try 
>to define "production" scale or can we even? based on the last survey, 
>the vast majority of deployments are under 100nodes[1]. that said, a few 
>years ago, one company was dreaming 100,000 nodes.
>
>i'd imagine the 50 node solution won't satisfy the 1000 node solution 
>let alone the 10k node. similarly, the opposite direction will probably 
>give an overkill solution. it seems somewhat difficult to define 
>something against 'production' term unless we scope it somehow (e.g # of 
>node ranges)?
>
>[1] http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf


As always, scale is relative. However, projects have shown major difficulties 
scaling to even 10% of the larger deployments. Scaling beyond that, even with 
commercial solutions, has required major investments in custom configurations 
by the deployers.

There are two risks I see

A. Use sqlite and then change to proprietary solution X for scale
B. Works at a small scale, but scalability has not been considered as a design 
criterion or demonstrated

I think it is important that the community is informed of these constraints 
before concluding that a particular project is the solution for them, and that 
the TC factor these questions into its approval criteria.

Tim


>
>-- 
>gord
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] RFC - service naming registry under API-WG

2016-02-10 Thread michael mccune

On 02/10/2016 03:03 PM, Sean Dague wrote:

On 02/04/2016 06:38 AM, Sean Dague wrote:

2) Have a registry of "common" names.

Upside, we can safely use common names everywhere and not fear collision
down the road.

Downside, yet another contention point.

A registry would clearly be under TC administration, though all the
heavy lifting might be handed over to the API working group. I still
imagine collision around some areas might be contentious.


We had a good discussion last week here on the list, and I think the
consensus was that:

1) We should use option #2 and have standard service types

2) The API Working Group was probably as good a place as any to own /
drive this.

I'd like to follow on with the following recommendations:

3) This be a dedicated repository 'openstack/service-registry'. The API
WG will have votes on it (I would also suggest the folks that have been
working on Service Catalog TNG - myself, Anne Gentle, Brant Knudson, and
Chris Dent be added to this). The actual registry will be some
structured file that supports comments (probably yaml).

4) We seed it with the 'well known' service types from current devstack.
Then we patch in services one at a time after that as requested.
Basically sift through all the non controversial stuff first. Let debate
happen on the more contentious ones later.

5) We'll build up guidelines in this repo about the kinds of service
types names which we think are good. We may dedicate some reserve words
that are too highly confusing in the OpenStack space to be used (policy
comes to mind).

If there are concerns with this approach let me know. Otherwise I'll
propose the repo tomorrow and try to keep this ball rolling.

-Sean



i think this sounds like a fine idea. +1

mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Multi release packages

2016-02-10 Thread Dmitry Borodaenko
+1 to Stas, supplanting VCS branches with code duplication is a path to
madness and despair. The dubious benefits of a cross-release backwards
compatible plugin binary are not worth the code and infra technical debt
that such an approach would accrue over time.

On Wed, Feb 10, 2016 at 07:36:30PM +0300, Stanislaw Bogatkin wrote:
> It changes mostly nothing for the case of rapid plugin development, when
> big parts of the code change from one release to another.
> 
> You will have 6 different deployment_tasks directories and 30 slightly
> different files in the root directory of the plugin. Also, you forgot about
> the repositories directory (+6 at least), pre_build hooks (also 6), and so
> on. It will look like hell after just 3 years of development.
> 
> Also, I can't imagine how to deal with plugin licensing if you have Apache
> for liberty but BSD for the mitaka release, for example.
> 
> A much easier way to develop a plugin is to keep its source in a VCS like
> Git and just make a branch for every Fuel release. That lets us avoid
> storing a bunch of similar but slightly different files in the repo. There
> is no reason to drag around all the different versions of the code for a
> specific release.
> 
> 
> On the other hand, there is a pro: your plugin can survive an upgrade if it
> supports the new release, with no changes needed.
> 
> On Wed, Feb 10, 2016 at 4:04 PM, Alexey Shtokolov 
> wrote:
> 
> > Fuelers,
> >
> > We are discussing the idea of extending multi-release packages for
> > plugins.
> >
> > Fuel plugin builder (FPB) can create one rpm package for all supported
> > releases (from metadata.yaml), but we can only specify deployment scripts
> > and repositories per release.
> >
> > Current release definition (in metadata.yaml):
> > - os: ubuntu
> >   version: liberty-8.0
> >   mode: ['ha']
> >   deployment_scripts_path: deployment_scripts/
> >   repository_path: repositories/ubuntu
> >
> > So the idea [0] is to make releases fully configurable.
> > Suggested changes for release definition (in metadata.yaml):
> >   components_path: components_liberty.yaml
> >   deployment_tasks_path: deployment_tasks_liberty/ # <- folder
> >   environment_config_path: environment_config_liberty.yaml
> >   network_roles_path: network_roles_liberty.yaml
> >   node_roles_path: node_roles_liberty.yaml
> >   volumes_path: volumes_liberty.yaml
> >
> > I see the issue: if we change anything for one release (e.g. a
> > deployment_task typo), revalidation is needed for all releases.
> >
> > Your pros and cons, please?
> >
> > [0] https://review.openstack.org/#/c/271417/
> > ---
> > WBR, Alexey Shtokolov
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 
> 
> -- 
> with best regards,
> Stan.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread John Griffith
On Tue, Feb 9, 2016 at 3:23 PM, Ildikó Váncsa 
wrote:

> Hi Walt,
>
> > -Original Message-
> > From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
> > Sent: February 09, 2016 23:15
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining
> when to call os-brick's connector.disconnect_volume
> >
> > On 02/09/2016 02:04 PM, Ildikó Váncsa wrote:
> > > Hi Walt,
> > >
> > > Thanks for starting this thread. It is a good summary of the issue and
> the proposal also looks feasible to me.
> > >
> > > I have a quick, hopefully not too wild idea based on the earlier
> discussions we had. We were considering earlier to store the target
> > identifier together with the other items of the attachment info. The
> problem with this idea is that when we call initialize_connection
> > from Nova, Cinder does not get the relevant information, like
> instance_id, to be able to do this. This means we cannot do that using
> > the functionality we have today.
> > >
> > > My idea here is to extend the Cinder API so that Nova can send the
> missing information after a successful attach. Nova should have
> > all the information including the 'target', which means that it could
> update the attachment information through the new Cinder API.
> > I think what we need to do is to allow the connector to be passed at
> > os-attach time.  Then cinder can save it in the attachment's table
> > entry.
> >
> > We will also need a new cinder API to allow that attachment to be
> updated during live migration, or the connector for the attachment
> > will get stale and incorrect.
>
> When I said below that it will be good for live migration as well, I meant
> that the update is part of the API.
>
> Ildikó
>
> >
> > Walt
> > >
> > > It would mean that when we request for the volume info from Cinder at
> detach time the 'attachments' list would contain all the
> > required information for each attachments the volume has. If we don't
> have the 'target' information because of any reason we can
> > still use the approach described below as fallback. This approach could
> even be used in case of live migration I think.
> > >
> > > The Cinder API extension would need to be added with a new
> microversion to avoid problems with older Cinder versions talking to
> > new Nova.
> > >
> > > The advantage of this direction is that we can reduce the round trips
> to Cinder at detach time. The round trip after a successful
> > attach should not have an impact on the normal operation as if that
> fails the only issue we have is we need to use the fall back method
> > to be able to detach properly. This would still affect only
> multiattached volumes, where we have more than one attachments on the
> > same host. By having the information stored in Cinder as well we can
> also avoid removing a target when there are still active
> > attachments connected to it.
> > >
> > > What do you think?
> > >
> > > Thanks,
> > > Ildikó
> > >
> > >
> > >> -Original Message-
> > >> From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
> > >> Sent: February 09, 2016 20:50
> > >> To: OpenStack Development Mailing List (not for usage questions)
> > >> Subject: [openstack-dev] [Nova][Cinder] Multi-attach, determining
> > >> when to call os-brick's connector.disconnect_volume
> > >>
> > >> Hey folks,
> > >>  One of the challenges we have faced with the ability to attach a
> > >> single volume to multiple instances, is how to correctly detach that
> > >> volume.  The issue is a bit complex, but I'll try and explain the
> problem, and then describe one approach to solving one part of the
> > detach puzzle.
> > >>
> > >> Problem:
> > >> When a volume is attached to multiple instances on the same host.
> > >> There are 2 scenarios here.
> > >>
> > >> 1) Some Cinder drivers export a new target for every attachment
> > >> on a compute host.  This means that you will get a new unique volume
> path on a host, which is then handed off to the VM
> > instance.
> > >>
> > >> 2) Other Cinder drivers export a single target for all instances
> > >> on a compute host.  This means that every instance on a single host,
> will reuse the same host volume path.
> > >>
> > >>
> > >> When a user issues a request to detach a volume, the workflow boils
> > >> down to first calling os-brick's connector.disconnect_volume before
> > >> calling Cinder's terminate_connection and detach. disconnect_volume's
> job is to remove the local volume from the host OS and
> > close any sessions.
> > >>
> > >> There is no problem under scenario 1.  Each disconnect_volume only
> > >> affects the attached volume in question and doesn't affect any other
> > >> VM using that same volume, because they are using a different path
> that has shown up on the host.  It's a different target
> > exported from the Cinder backend/array.
> > >>
> > >> The problem comes under scenario 2, where that single volume is
> > >> shared for every instance on 

Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread Sean McGinnis
On Wed, Feb 10, 2016 at 03:30:42PM -0700, John Griffith wrote:
> On Tue, Feb 9, 2016 at 3:23 PM, Ildikó Váncsa 
> wrote:
> 
> >
> This may still be in fact the easiest way to handle this.  The only other
> thing I am still somewhat torn on here is that maybe Nova should be doing
> ref-counting WRT shared connections and NOT send the detach in that
> scenario to begin with?
> 
> In the case of unique targets per-attach we already just "work", but if you
> are using the same target/attachment on a compute node for multiple
> instances, then you probably should keep track of that on the users end and
> not remove it while in use.  That seems like the more "correct" way to deal
> with this, but maybe that's just me.  Keep in mind we could also do the
> same ref-counting on the Cinder side if we so choose.

This is where I've been pushing too. It seems odd to me that the storage
domain should need to track how the volume is being used by the
consumer. Whether it is attached to one instance, 100 instances, or the
host just likes to keep it around as a pet, from the storage perspective
I don't know why we should care.

Looking beyond Nova usage, does Cinder now need to start tracking
information about containers? Bare metal hosts? Apps that are associated
with LUNs? These just seem like concepts that the storage component
shouldn't need to know or care about.

I know there's some history here and it may not be as easy as that. But
just wanted to state my opinion that in an ideal world (which I
recognize we don't live in) this should not be Cinder's concern.
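
To make the ref-counting idea concrete, here is a minimal sketch (purely
illustrative; the class and its names are my invention, not Nova code) of
consumer-side tracking that only releases a shared target once the last
attachment is gone:

    import collections
    import threading

    class SharedTargetTracker(object):
        """Hypothetical consumer-side ref-counting for shared targets."""
        def __init__(self):
            self._lock = threading.Lock()
            self._counts = collections.Counter()

        def attach(self, host, target):
            with self._lock:
                self._counts[(host, target)] += 1

        def detach(self, host, target):
            # True only when the last user is gone, i.e. when it is
            # safe to call os-brick's connector.disconnect_volume.
            with self._lock:
                self._counts[(host, target)] -= 1
                if self._counts[(host, target)] <= 0:
                    del self._counts[(host, target)]
                    return True
                return False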

> 
> We talked about this at mid-cycle with the Nova team and I proposed
> independent targets for each connection on Cinder's side.  We can still do
> that IMO but that doesn't seem to be a very popular idea.

John, I don't think folks are against this idea as a concept. I think
the problem is I don't believe all storage vendors can support exposing
new targets for the same volume for each attachment.

> 
> My point here is just that it seems like there might be a way to fix this
> without breaking compatibility in the API.  Thoughts?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing a simple new tool: git-restack

2016-02-10 Thread Carl Baldwin
Jim,

I've had this reply queued up for a week now.  Sorry for the delay.
The problem that I run in to when I work with multiple dependent
changes doesn't seem to be covered by your description.

For me, the trouble with this workflow comes when there is more than
one contributor working on a chain of patches like this one [1].  In
these cases, I find it is particularly dangerous to rebase the whole
chain at once.

Imagine that I'm working on the fourth patch set and another
contributor updates the second one.  My work depends on that change
but I now have an older copy of it in my working copy.

Now, if I rebase my out-of-date copy of that change and then upload to
gerrit then it will appear to gerrit to be a replacement for the most
recent version of the change that it has.  The work that the other
contributor did is clobbered.  I've gotten really good at spotting
this problem because it happens quite often.  In these cases, I dive
in and manually tease them apart and ensure that all of the latest
stuff is reflected in all of the changes.  I've found it difficult to
teach others about this problem, how to detect when it happens, and
how to properly recover from it.

For this reason, I never rebase more than one patch set at a time.  I
also have taken measures to ensure that "git review" will never rebase
my changes, ever.  My workflow for rebasing a chain of changes looks
something like the following which I have not taken the time to
automate:

  for change in changes_listed_from_least_to_most_dependent:
      if change_depends_on_other_change:
          parent_version = version_of_parents_latest_patch_set_in_gerrit
          git rebase HEAD^ --onto parent_version
      git review

Any merge conflict in any of the chain's changes essentially requires
that I rebase the bottom (least dependent) change to master before
doing this.  All the contributors to such a chain of changes must be
very careful to work from the most recent version of a change and be
vigilant.
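
For what it's worth, a rough sketch of automating that loop (the change
numbers are placeholders, and leaning on "git review -d" to fetch the
latest patch set is an assumption, not a recommendation):

    import subprocess

    def run(*cmd):
        return subprocess.check_output(cmd).strip()

    def restack(changes):
        # changes: Gerrit change numbers, least to most dependent
        parent = None
        for change in changes:
            run('git', 'review', '-d', str(change))  # latest patch set
            if parent is not None:
                run('git', 'rebase', 'HEAD^', '--onto', parent)
            parent = run('git', 'rev-parse', 'HEAD')
            run('git', 'review', '-y')               # upload, no prompts

    restack([123456, 123457, 123458])                # placeholder numbers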

Any thoughts on this?

Carl

[1] https://review.openstack.org/#/q/status:open+topic:bp/bgp-dynamic-routing

On Tue, Feb 2, 2016 at 10:53 AM, James E. Blair  wrote:
> Hi,
>
> I'm pleased to announce a new and very simple tool to help with managing
> large patch series with our Gerrit workflow.
>
> In our workflow we often find it necessary to create a series of
> dependent changes in order to make a larger change in manageable chunks,
> or because we have a series of related changes.  Because these are part
> of larger efforts, it often seems like they are even more likely to have
> to go through many revisions before they are finally merged.  Each step
> along the way reviewers look at the patches in Gerrit and leave
> comments.  As a reviewer, I rely heavily on looking at the difference
> between patchsets to see how the series evolves over time.
>
> Occasionally we also find it necessary to re-order the patch series, or
> to include or exclude a particular patch from the series.  Of course the
> interactive git rebase command makes this easy -- but in order to use
> it, you need to supply a base upon which to "rebase".  A simple choice
> would be to rebase the series on master, however, that creates
> difficulties for reviewers if master has moved on since the series was
> begun.  It is very difficult to see any actual intended changes between
> different patch sets when they have different bases which include
> unrelated changes.
>
> The best thing to do to make it easy for reviewers (and yourself as you
> try to follow your own changes) is to keep the same "base" for the
> entire patch series even as you "rebase" it.  If you know how long your
> patch series is, you can simply run "git rebase -i HEAD~N" where N is
> the patch series depth.  But if you're like me and have trouble with
> numbers other than 0 and 1, then you'll like this new command.
>
> The git-restack command is very simple -- it looks for the most recent
> commit that is both in your current branch history and in the branch it
> was based on.  It uses that as the base for an interactive rebase
> command.  This means that any time you are editing a patch series, you
> can simply run:
>
>   git restack
>
> and you will be placed in an interactive rebase session with all of the
> commits in that patch series staged.  Git-restack is somewhat
> branch-aware as well -- it will read a .gitreview file to find the
> remote branch to compare against.  If your stack was based on a
> different branch, simply run:
>
>   git restack <branch>
>
> and it will use that branch for comparison instead.
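>
> For the curious, the comparison described above is essentially git's
> merge-base; a hand-rolled sketch of the same idea (not git-restack's
> actual code) would be:
>
>     import subprocess
>
>     # Newest commit shared between HEAD and the target branch:
>     base = subprocess.check_output(
>         ['git', 'merge-base', 'HEAD', 'origin/master']).strip()
>     # Interactively rebase just the commits on top of that base:
>     subprocess.call(['git', 'rebase', '-i', base])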
>
> Git-restack is on pypi so you can install it with:
>
>   pip install git-restack
>
> The source code is based heavily on git-review and is in Gerrit under
> openstack-infra/git-restack.
>
> https://pypi.python.org/pypi/git-restack/1.0.0
> https://git.openstack.org/cgit/openstack-infra/git-restack
>
> I hope you find this useful,
>
> Jim
>
> 

Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread John Griffith
On Wed, Feb 10, 2016 at 3:59 PM, Sean McGinnis 
wrote:

> On Wed, Feb 10, 2016 at 03:30:42PM -0700, John Griffith wrote:
> > On Tue, Feb 9, 2016 at 3:23 PM, Ildikó Váncsa <
> ildiko.van...@ericsson.com>
> > wrote:
> >
> > >
> > This may still be in fact the easiest way to handle this.  The only
> other
> > thing I am still somewhat torn on here is that maybe Nova should be doing
> > ref-counting WRT shared connections and NOT send the detach in that
> > scenario to begin with?
> >
> > In the case of unique targets per-attach we already just "work", but if
> you
> > are using the same target/attachment on a compute node for multiple
> > instances, then you probably should keep track of that on the users end
> and
> > not remove it while in use.  That seems like the more "correct" way to
> deal
> > with this, but maybe that's just me.  Keep in mind we could also do the
> > same ref-counting on the Cinder side if we so choose.
>
> This is where I've been pushing too. It seems odd to me that the storage
> domain should need to track how the volume is being used by the
> consumer. Whether it is attached to one instance, 100 instances, or the
> host just likes to keep it around as a pet, from the storage perspective
> I don't know why we should care.
>
> Looking beyond Nova usage, does Cinder now need to start tracking
> information about containers? Bare metal hosts? Apps that are associated
> with LUNs? These just seem like concepts that the storage component
> shouldn't need to know or care about.
>
>
Well said, I agree.


> I know there's some history here and it may not be as easy as that. But
> just wanted to state my opinion that in an ideal world (which I
> recognize we don't live in) this should not be Cinder's concern.
>
> >
> > We talked about this at mid-cycle with the Nova team and I proposed
> > independent targets for each connection on Cinder's side.  We can still
> do
> > that IMO but that doesn't seem to be a very popular idea.
>
> John, I don't think folks are against this idea as a concept. I think
> the problem is I don't believe all storage vendors can support exposing
> new targets for the same volume for each attachment.
>
Ahh, well that's a very valid reason to take a different approach.


>
> >
> > My point here is just that it seems like there might be a way to fix this
> > without breaking compatibility in the API.  Thoughts?
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-10 Thread Carl Baldwin
On Thu, Feb 4, 2016 at 8:12 PM, Armando M.  wrote:
> as for c) I think it's a little late to make pluggable ipam default in
> Mitaka; I'd rather switch defaults early in the cycle (depending on the
> entity of the config) and this one seems serious enough that I'd rather have
> enough exercising in the gate to prove it solid. In a nutshell: let's defer
> the driver switch to N. When we do, we'll have to worry about grenade, but
> Grenade can run scripts and we can 'emulate' the operator hand.

Yes, it is too late.  It was wishful thinking to think that we could
do it this late.  You're good for calling us on that.  I hope that we
can have something teed up to be merged as soon as Newton development
is open so that we can put this to rest.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread Fox, Kevin M
I think part of the issue is that whether to count or not is cinder-driver
specific, and only cinder knows if it should be done or not.

But if cinder told nova that particular multiattach endpoints must be 
refcounted, that might resolve the issue?
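
One hypothetical shape for that hint (invented here to illustrate; no such
field exists today) would be a flag in the connection info cinder already
hands back from initialize_connection:

    # Illustrative only: 'shared_targets' is a made-up field.
    connection_info = {
        'driver_volume_type': 'iscsi',
        'data': {
            'target_iqn': 'iqn.2016-02.org.example:volume-1',
            # Backend reuses one target per host, so the consumer
            # must ref-count before disconnecting:
            'shared_targets': True,
        },
    }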

Thanks,
Kevin

From: Sean McGinnis [sean.mcgin...@gmx.com]
Sent: Wednesday, February 10, 2016 2:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to 
call os-brick's connector.disconnect_volume

On Wed, Feb 10, 2016 at 03:30:42PM -0700, John Griffith wrote:
> On Tue, Feb 9, 2016 at 3:23 PM, Ildikó Váncsa 
> wrote:
>
> >
> This may still be in fact the easiest way to handle this.  The only other
> thing I am still somewhat torn on here is that maybe Nova should be doing
> ref-counting WRT shared connections and NOT send the detach in that
> scenario to begin with?
>
> In the case of unique targets per-attach we already just "work", but if you
> are using the same target/attachment on a compute node for multiple
> instances, then you probably should keep track of that on the users end and
> not remove it while in use.  That seems like the more "correct" way to deal
> with this, but maybe that's just me.  Keep in mind we could also do the
> same ref-counting on the Cinder side if we so choose.

This is where I've been pushing too. It seems odd to me that the storage
domain should need to track how the volume is being used by the
consumer. Whether it is attached to one instance, 100 instances, or the
host just likes to keep it around as a pet, from the storage perspective
I don't know why we should care.

Looking beyond Nova usage, does Cinder now need to start tracking
information about containers? Bare metal hosts? Apps that are associated
with LUNs? These just seem like concepts that the storage component
shouldn't need to know or care about.

I know there's some history here and it may not be as easy as that. But
just wanted to state my opinion that in an ideal world (which I
recognize we don't live in) this should not be Cinder's concern.

>
> We talked about this at mid-cycle with the Nova team and I proposed
> independent targets for each connection on Cinder's side.  We can still do
> that IMO but that doesn't seem to be a very popular idea.

John, I don't think folks are against this idea as a concept. I think
the problem is I don't believe all storage vendors can support exposing
new targets for the same volume for each attachment.

>
> My point here is just that it seems like there might be a way to fix this
> without breaking compatibility in the API.  Thoughts?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-10 Thread Carl Baldwin
On Thu, Feb 4, 2016 at 8:12 PM, Armando M.  wrote:
> Technically we can make this as sophisticated and seamless as we want, but
> this is a one-off, once it's done the pain goes away, and we won't be doing
> another migration like this ever again. So I wouldn't over engineer it.

Frankly, I was worried that going the other way was over-engineering
it.  It will be more difficult for us to manage this transition.

I'm still struggling to see what makes this particular migration
different than other cases where we change the database schema and the
code a bit and we automatically migrate everyone to it as part of the
routine migration.  What is it about this case that necessitates
giving the operator the option?

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread Sean McGinnis
On Wed, Feb 10, 2016 at 11:16:28PM +, Fox, Kevin M wrote:
> I think part of the issue is that whether to count or not is cinder-driver 
> specific, and only cinder knows if it should be done or not.
> 
> But if cinder told nova that particular multiattach endpoints must be 
> refcounted, that might resolve the issue?
> 
> Thanks,
> Kevin

In this case (the point John and I were making at least) it doesn't
matter. Nothing is driver specific, so it wouldn't matter which backend
is being used.

If a volume is needed, request it to be attached. When it is no longer
needed, tell Cinder to take it away. Simple as that.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-10 Thread gordon chung


On 10/02/2016 4:28 PM, Tim Bell wrote:
>
> On 10/02/16 21:53, "gordon chung"  wrote:
>
>> apologies if this was asked somewhere else in thread, but should we try
>> to define "production" scale or can we even? based on the last survey,
>> the vast majority of deployments are under 100nodes[1]. that said, a few
>> years ago, one company was dreaming 100,000 nodes.
>>
>> i'd imagine the 50 node solution won't satisfy the 1000 node solution
>> let alone the 10k node. similarly, the opposite direction will probably
>> give an overkill solution. it seems somewhat difficult to define
>> something against the 'production' term unless we scope it somehow (e.g. # of
>> node ranges)?
>>
>> [1] http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf
>
>
> As always, scale is relative. However, projects have shown major difficulty 
> scaling to 10% of the larger deployments. Scaling beyond that, even with 
> commercial solutions, has required major investments in custom configurations 
> by the deployers.
>
> There are two risks I see
>
> A. Use sqlite and then change to proprietary solution X for scale
> B. Works at a small scale but scalability has not been considered as a design 
> criteria or demonstrated
>
> I think it is important that the community is informed on these constraints 
> before feeling that a particular project is the solution for them and that 
> the TC factors these questions into their approval criteria.
>

is there a source for this? a place where people list their reference 
architectures and deployment scales?

i'm not a deployer but as an outsider, i've found that there isn't a lot 
of transparency in regards to how projects have been made to scale. 
maybe this is a side effect of OpenStack being hard as hell to use, but 
it seems configurations are the secret sauce people use to sell so we 
have a lot of failure stories (bottom-end constraints) in the community 
rather than successes (upper-end constraints).

is there a collection of fully transparent deployers out there to be 
our 'production' baseline? to help vet scalability? just CERN?

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Multi release packages

2016-02-10 Thread Andrew Woodward
On Wed, Feb 10, 2016 at 2:23 PM Dmitry Borodaenko 
wrote:

> +1 to Stas, supplanting VCS branches with code duplication is a path to
> madness and despair. The dubious benefits of a cross-release backwards
> compatible plugin binary are not worth the code and infra technical debt
> that such approach would accrue over time.
>

Supporting multiple fuel releases will likely result in madness as
discussed, however as we look to support multiple OpenStack releases from
the same version of fuel, this methodology becomes much more important.


> On Wed, Feb 10, 2016 at 07:36:30PM +0300, Stanislaw Bogatkin wrote:
> > It changes mostly nothing for case of furious plugin development when big
> > parts of code changed from one release to another.
> >
> > You will have 6 different deployment_tasks directories and 30 a little
> bit
> > different files in root directory of plugin. Also you forgot about
> > repositories directory (+6 at least), pre_build hooks (also 6) and so on.
> > It will look as hell after just 3 years of development.
> >
> > Also I can't imagine how to deal with plugin licensing if you have Apache
> > for liberty but BSD for mitaka release, for example.
> >
> > Much easier way to develop a plugin is to keep it's source in VCS like
> Git
> > and just make a branches for every fuel release. It will give us
> > opportunity to not store a bunch of similar but a little bit different
> > files in repo. There is no reason to drag all different versions of code
> > for specific release.
> >
> >
> > On other hand there is a pros - your plugin can survive after upgrade if
> it
> > supports new release, no changes needed here.
> >
> > On Wed, Feb 10, 2016 at 4:04 PM, Alexey Shtokolov <
> ashtoko...@mirantis.com>
> > wrote:
> >
> > > Fuelers,
> > >
> > > We are discussing the idea to extend the multi release packages for
> > > plugins.
> > >
> > > Fuel plugin builder (FPB) can create one rpm-package for all supported
> > > releases (from metadata.yaml) but we can specify only deployment
> scripts
> > > and repositories per release.
> > >
> > > Current release definition (in metadata.yaml):
> > > - os: ubuntu
> > >   version: liberty-8.0
> > >   mode: ['ha']
> > >   deployment_scripts_path: deployment_scripts/
> > >   repository_path: repositories/ubuntu
> > >
>

This will result in far too much clutter.
For starters we should support nested overrides. For example, the author
may have already accounted for the changes from one OpenStack version to
another. In this case they should only need to define the releases they
support and not specify any additional locations. Later they may determine
that they only need to replace packages, or one other file; they should not
be required to spell out every location for each release.
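
A sketch of what such a nested override could look like in metadata.yaml
(the keys here are invented to illustrate the idea, not an implemented
format):

    - os: ubuntu
      version: mitaka-9.0
      base: liberty-8.0                # hypothetical: inherit everything
      overrides:                       # hypothetical: replace only this
        repository_path: repositories/ubuntu-mitaka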

Also, at the same time we MUST clean up importing various yaml files.
Specifically: tasks, volumes, node roles, and network roles. Requiring that
they all be maintained in a single file doesn't scale; we don't require it
for tasks.yaml in fuel library, and we should not require it in plugins. We
should simply do the same thing as tasks.yaml in library: scan the subtree
for specific file names and just merge them all together. (This has been
expressed multiple times by people with larger plugins.)

> > So the idea [0] is to make releases fully configurable.
> > > Suggested changes for release definition (in metadata.yaml):
> > >   components_path: components_liberty.yaml
> > >   deployment_tasks_path: deployment_tasks_liberty/ # <- folder

> >   environment_config_path: environment_config_liberty.yaml
> > >   network_roles_path: network_roles_liberty.yaml
> > >   node_roles_path: node_roles_liberty.yaml
> > >   volumes_path: volumes_liberty.yaml
> > >
> > > I see the issue: if we change anything for one release (e.g.
> > > deployment_task typo) revalidation is needed for all releases.
> > >
> > > Your Pros and cons please?
> > >
> > > [0] https://review.openstack.org/#/c/271417/
> > > ---
> > > WBR, Alexey Shtokolov
> > >

Re: [openstack-dev] [fuel] Fuel Community ISO 8.0

2016-02-10 Thread Andrew Woodward
Was a bug ever filed for this? It's still not on the landing page.

On Thu, Feb 4, 2016 at 4:19 AM Ivan Kolodyazhny  wrote:

> Thanks, Igor.
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Thu, Feb 4, 2016 at 1:21 PM, Igor Belikov 
> wrote:
>
>> Hi Ivan,
>>
>> I think this counts as a bug in our community page, thanks for noticing.
>> You can get 8.0 Community ISO using links in status dashboard on
>> https://ci.fuel-infra.org
>> --
>> Igor Belikov
>> Fuel CI Engineer
>> ibeli...@mirantis.com
>>
>> On 04 Feb 2016, at 13:53, Ivan Kolodyazhny  wrote:
>>
>> Hi team,
>>
>> I've tried to download Fuel Community ISO 8.0 from [1] and failed. We've
>> got 2 options there: the latest stable (7.0) and nightly build (9.0). Where
>> can I download 8.0 build?
>>
>> [1] https://www.fuel-infra.org/#fuelget
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread Fox, Kevin M
But the issue is, when told to detach, some of the drivers do bad things. Then, 
is it the driver's job to refcount to fix the issue, or is it nova's to 
refcount so that it doesn't call the release before all users are done with it? 
I think solving it in the middle, in cinder, is probably not the right place to 
track it, but if it's to be solved on nova's side, nova needs to know when it 
needs to do it. But cinder might have to relay some extra info from the backend.

Either way, on the driver side, there probably needs to be a mechanism for the 
driver to say either that it can refcount properly, so it's multiattach compatible (or 
that nova should refcount), or to default to never allowing multiattach, so 
existing drivers don't break.

Thanks,
Kevin

From: Sean McGinnis [sean.mcgin...@gmx.com]
Sent: Wednesday, February 10, 2016 3:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to 
call os-brick's connector.disconnect_volume

On Wed, Feb 10, 2016 at 11:16:28PM +, Fox, Kevin M wrote:
> I think part of the issue is that whether to count or not is cinder-driver 
> specific, and only cinder knows if it should be done or not.
>
> But if cinder told nova that particular multiattach endpoints must be 
> refcounted, that might resolve the issue?
>
> Thanks,
> Kevin

In this case (the point John and I were making at least) it doesn't
matter. Nothing is driver specific, so it wouldn't matter which backend
is being used.

If a volume is needed, request it to be attached. When it is no longer
needed, tell Cinder to take it away. Simple as that.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Stable branch policy for Mitaka

2016-02-10 Thread Derek Higgins



On 10/02/16 18:05, James Slagle wrote:



On Wed, Feb 10, 2016 at 4:57 PM, Steven Hardy <sha...@redhat.com> wrote:

Hi all,

We discussed this in our meeting[1] this week, and agreed a ML
discussion
to gain consensus and give folks visibility of the outcome would be
a good
idea.

In summary, we adopted a more permissive "release branch" policy[2]
for our
stable/liberty branches, where feature backports would be allowed,
provided
they worked with liberty and didn't break backwards compatibility.

The original idea was really to provide a mechanism to "catch up" where
features are added e.g to liberty OpenStack components late in the cycle
and TripleO requires changes to integrate with them.

However, the reality has been that the permissive backport policy
has been
somewhat abused (IMHO) with a large number of major features being
proposed
for backport, and in a few cases this has broken downstream (RDO)
consumers
of TripleO.

Thus, I would propose that from Mitaka, we revise our backport policy to
simply align with the standard stable branch model observed by all
projects[3].

Hopefully this will allow us to retain the benefits of the stable branch
process, but provide better stability for downstream consumers of these
branches, and minimise confusion regarding what is a permissable
backport.

If we do this, only backports that can reasonably be considered
"Appropriate fixes"[4] will be valid backports - in the majority of
cases
this will mean bugfixes only, and large features where the risk of
regression is significant will not be allowed.

What are peoples thoughts on this?


I'm in agreement. I think this change is needed and will help set
better expectations around what will be included in which release.

If we adopt this as the new policy, then the immediate followup is to
set and communicate when we'll be cutting the stable branches, so that
it's understood when the features have to be done/committed. I'd suggest
that we more or less completely adopt the integrated release
schedule[1]. Which I believe means the week of RC1 for cutting the
stable/mitaka branches, which is March 14th-18th.

It seems to follow logically then that we'd then want to also be more
aggresively aligned with other integrated release events such as the
feature freeze date, Feb 29th - March 4th.

An alternative to strictly following the schedule, would be to say that
TripleO lags the integrated release dates by some number of weeks (1 or
2 I'd think), to allow for some "catchup" time since TripleO is often
consuming features from projects part of the integrated release.


This is where my vote would lie; given that we are consumers of the 
other projects, we may need a little time to support a feature that is 
merged late in the cycle. Of course we can also have patches lined up 
ready to merge, so the lag shouldn't need to be excessive.


If we don't lag we could achieve the same thing by allowing a short 
window in the stable branch where features may be allowed based on group 
opinion.





[1] http://releases.openstack.org/mitaka/schedule.html


Thanks,

Steve

[1]

http://eavesdrop.openstack.org/meetings/tripleo/2016/tripleo.2016-02-09-14.01.log.html
[2]

https://github.com/openstack/tripleo-specs/blob/master/specs/liberty/release-branch.rst
[3] http://docs.openstack.org/project-team-guide/stable-branches.html
[4]

http://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes





-- 
James Slagle





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][heat] Bug 1544227

2016-02-10 Thread Hongbin Lu
Hi Heat team,

As mentioned in IRC, the magnum gate broke with bug 1544227. Rabi submitted a 
fix (https://review.openstack.org/#/c/278576/), but it doesn't seem to be 
enough to unlock the broken gate. In particular, it seems templates with a 
SoftwareDeploymentGroup resource fail to complete (I have commented on the 
review above with how to reproduce).

Right now, I prefer to merge the revert patch 
(https://review.openstack.org/#/c/278575/) to unlock our gate immediately, 
unless someone can work on a quick fix. We appreciate the help.

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] All hail the new per-region pypi, wheel and apt mirrors

2016-02-10 Thread Monty Taylor

Hey everybody,

tl;dr - We have new AFS-based consistent per-region mirrors of PyPI and 
APT repos with additional wheel repos containing pre-built wheels for 
all the modules in global-requirements


We've just rolled out a new change that you should mostly never notice - 
except that jobs should be a bit faster and more reliable.


The underpinning of the new mirrors is AFS, which is a global 
distributed filesystem developed by Carnegie Mellon back in the 1980's. 
In a lovely fit of old-is-new-again, the challenges that software had to 
deal with in the 80s (flaky networks, frequent computer failures) mirror 
life in the cloud pretty nicely, and the engineering work to solve them 
winds up being quite relevant.


One of the nice things we get from AFS is the ability to do atomic 
consistent releases of new filesystem snapshots to read-only replicas. 
That means we can build a new version of our mirror content, check it 
for consistency, and then release it for consumption to all of the 
consumers at the same time. That's important for the gate, because our 
"package not found" errors are usually about the mirror state shifting 
during a test job run.


We've had per-region PyPI mirrors for quite some time (and indeed the 
gate would largely be dead in the water without them). The improvement 
from this work for them is that they're now AFS based, so we should 
never have a visible mirror state that's wonky or inconsistent between 
regions, and we can more easily expand into new cloud regions.


We've added per-region apt mirrors (with yum to come soon) to the mix 
based on the same concept - we build the new mirror state then release 
it. There is one additional way that apt can fail even with consistent 
mirror states, which is that apt repos purge old versions of packages 
that are no longer referenced. If a new mirror state rolls out between 
the time devstack runs apt-get update and the time it tries to do 
apt-get install of something, you can get a situation where apt is 
trying to install a version of a package that is no longer present in 
the archive. To mitigate this, we're purging our mirror on a delay ... 
in our mirror runs every 2 hours we add new packages and update the 
index, and then in the next mirror run we'll delete the packages the 
previous run made unreferenced. This should make apt errors about 
package not found go away.


Last but certainly not least, there are now also wheel repositories of 
wheels built for all of our python packages from global-requirements. 
This is a speed increase and shaves 1.8 tens of minutes off of a normal 
devstack run.


With these changes, it means we're writing not only pip.conf but now 
sources.list files into the test nodes. If you happen to be doing extra 
special things with either of those in your jobs, you'll want to make 
sure you consume the config files we're laying down.
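
For reference, a pip.conf pointing at one of these mirrors looks roughly
like this (illustrative only; the real files are generated per region,
using mirrors such as the one that shows up in job logs):

    [global]
    index-url = http://mirror.iad.rax.openstack.org/pypi/simple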


Finally, although all Infra projects are a team effort - a big shout out 
to Michael Krotschek and Jim Blair for diving in and getting this 
finished over the past couple of weeks.


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][requirements] Why do we use pip install -U as our install_command

2016-02-10 Thread Tony Breeds
Hi All,
I confess up front that I'm pretty green in this area and there is a lot
of history that I just don't have.  That won't stop me from asking/opening the
discussion.

As I ask in $subject: why do we install with --upgrade in our tox environments?
Are there issues this is fixing/hiding?

I've been seeing a few failures with requirements updates on stable/kilo.  I
expect that this applies to liberty and master BUT we're seeing less of it as
constraints are a good thing on those branches.

I'll use glance_store as an example[2] https://review.openstack.org/#/c/265182
I want to be clear this isn't about this specific library (glance_store or
requests), I'm seeing something similar with testtools, fixtures and other 
libraries.

Looking at:
 
http://logs.openstack.org/18/277018/2/check/gate-glance_store-python27/1691013/tox/py27-1.log.txt
 
http://logs.openstack.org/18/277018/2/check/gate-glance_store-python27/1691013/tox/py27-2.log.txt

In py27-1.log we ran (edited for clarity):
pip install --allow-all-external --allow-insecure netaddr -U -rrequirements.txt 
-rtest-requirements.txt

---
Collecting python-cinderclient<1.2.0,>=1.1.0 (from -r 
/home/jenkins/workspace/gate-glance_store-python27/requirements.txt (line 8))
  Downloading 
http://mirror.iad.rax.openstack.org/pypi/packages/py2.py3/p/python-cinderclient/python_cinderclient-1.1.2-py2.py3-none-any.whl
 (202kB)
...
Collecting requests!=2.4.0,<2.8.0,>=2.2.0 (from -r 
/home/jenkins/workspace/gate-glance_store-python27/test-requirements.txt (line 
6))
  Downloading 
http://mirror.iad.rax.openstack.org/pypi/packages/2.7/r/requests/requests-2.7.0-py2.py3-none-any.whl
 (470kB)
---

So we installed requests 2.7.0 as per our current g-r specification.  IIUC we
use the spec from test-requirements, as all the requirements+specs from
requirements.txt and test-requirements.txt are processed before looking at the
requirements of each library.  So when we look for the requests library while
processing python-cinderclient requirements we already have a spec that's been
satisfied and move on.

Then in py27-2.log we ran (edited for clarity):
pip install --allow-all-external --allow-insecure netaddr -U -e <path to repo>
---
Requirement already up-to-date: python-cinderclient<1.2.0,>=1.1.0 in 
./.tox/py27/lib/python2.7/site-packages (from glance-store==0.4.1.dev16)
...
Collecting requests!=2.4.0,>=2.2.0 (from 
python-cinderclient<1.2.0,>=1.1.0->glance-store==0.4.1.dev16)
  Downloading 
http://mirror.iad.rax.openstack.org/pypi/packages/2.7/r/requests/requests-2.9.1-py2.py3-none-any.whl
 (501kB)
---

Here we upgrade requests because python-cinderclient's spec is less restrictive[3].
Here we're only looking at requirements.txt, which doesn't have a requests
specification, so when we process python-cinderclient's requirements (with -U)
we see a "better" requests library, install that, and then "go bang" [4].

I *think* this particular failure would be "fixed" if we didn't install our
packages with -U.
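
Concretely, the first install step above would then become (same flags,
just minus -U; a sketch):

    pip install --allow-all-external --allow-insecure netaddr \
        -rrequirements.txt -rtest-requirements.txt

With requests 2.7.0 already satisfying the test-requirements spec, the
second pass would leave it alone rather than "upgrading" past
python-cinderclient's looser bound.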

I know that people are working on enhancing the pip dependency resolver but
that isn't work we can use today.

Again there are alternate solutions for this specific issue but I feel like
removing -U would fix a class of problems, perhaps it'll create another I don't
know.

Discuss :)

Yours Tony.

[1] Footnote deleted in editing and I'm too lazy to renumber the rest :D
[2] Just because it's the one I have open in my browser
[3] See https://review.openstack.org/#/c/265182
[4] 
http://logs.openstack.org/18/277018/2/check/gate-glance_store-python27/1691013/console.html#_2016-02-10_18_35_39_818


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] All hail the new per-region pypi, wheel and apt mirrors

2016-02-10 Thread Davanum Srinivas
w00t! thanks infra team

On Wed, Feb 10, 2016 at 7:45 PM, Monty Taylor  wrote:
> Hey everybody,
>
> tl;dr - We have new AFS-based consistent per-region mirrors of PyPI and APT
> repos with additional wheel repos containing pre-built wheels for all the
> modules in global-requirements
>
> We've just rolled out a new change that you should mostly never notice -
> except that jobs should be a bit faster and more reliable.
>
> The underpinning of the new mirrors is AFS, which is a global distributed
> filesystem developed by Carnegie Mellon back in the 1980's. In a lovely fit
> of old-is-new-again, the challenges that software had to deal with in the
> 80s (flaky networks, frequent computer failures) mirror life in the cloud
> pretty nicely, and the engineering work to solve them winds up being quite
> relevant.
>
> One of the nice things we get from AFS is the ability to do atomic
> consistent releases of new filesystem snapshots to read-only replicas. That
> means we can build a new version of our mirror content, check it for
> consistency, and then release it for consumption to all of the consumers at
> the same time. That's important for the gate, because our "package not
> found" errors are usually about the mirror state shifting during a test job
> run.
>
> We've had per-region PyPI mirrors for quite some time (and indeed the gate
> would largely be dead in the water without them). The improvement from this
> work for them is that they're now AFS based, so we should never have a
> visible mirror state that's wonky or inconsistent between regions, and we
> can more easily expand into new cloud regions.
>
> We've added per-region apt mirrors (with yum to come soon) to the mix based
> on the same concept - we build the new mirror state then release it. There
> is one additional way that apt can fail even with consistent mirror states,
> which is that apt repos purge old versions of packages that are no longer
> referenced. If a new mirror state rolls out between the time devstack runs
> apt-get update and the time it tries to do apt-get install of something, you
> can get a situation where apt is trying to install a version of a package
> that is no longer present in the archive. To mitigate this, we're purging
> our mirror on a delay ... in our mirror runs every 2 hours we add new
> packages and update the index, and then in the next mirror run we'll delete
> the packages the previous run made unreferenced. This should make apt errors
> about package not found go away.
>
> Last but certainly not least, there are now also wheel repositories of
> wheels built for all of our python packages from global-requirements. This
> is a speed increase and shaves 1.8 tens of minutes off of a normal devstack
> run.
>
> With these changes, it means we're writing not only pip.conf but now
> sources.list files into the test nodes. If you happen to be doing extra
> special things with either of those in your jobs, you'll want to make sure
> you consume the config files we're laying down.
>
> Finally, although all Infra projects are a team effort - a big shout out to
> Michael Krotschek and Jim Blair for diving in and getting this finished over
> the past couple of weeks.
>
> Monty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][keystone][kolla][bandit] linters jobs

2016-02-10 Thread Joshua Hesketh
Hey Andreas,

Why not keep pep8 as an alias for the new linters target? Would this allow
for a transition path while work on updating the PTI is done?

Cheers,
Josh

On Thu, Feb 11, 2016 at 6:55 AM, Andreas Jaeger  wrote:

> Hi,
>
> the pep8 target is our usual target to include style and lint checks and
> thus is used besides pep8 also for doc8, bashate, bandit, etc as
> documented in the PTI (=Python Test Interface,
> http://governance.openstack.org/reference/cti/python_cti.html).
>
> We've had some discussions to introduce a new target called linters as
> better name for this and when I mentioned this in a few discussions, it
> resonated with these projects. Unfortunately, I missed the relevance of
> the PTI for such a change - and changing the PTI to replace pep8 with
> linters and then pushing that one through to all projects is more than I
> can commit to right now.
>
> I apologize for being too eager and will send patches for official
> projects moving them back to pep8, so consider this a heads up and
> background about my incoming patches with topic "pti-pep8-linters".
>
> If somebody else wants to do the whole conversion in the future, I can
> give pointers on what to do,
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] All hail the new per-region pypi, wheel and apt mirrors

2016-02-10 Thread Tony Breeds
On Wed, Feb 10, 2016 at 06:45:25PM -0600, Monty Taylor wrote:
> Hey everybody,
> 
> tl;dr - We have new AFS-based consistent per-region mirrors of PyPI and APT
> repos with additional wheel repos containing pre-built wheels for all the
> modules in global-requirements

Woot!

I do have a couple of questions about the pre-built wheels:

1) You say global-requirements, I assume this includes upper-constraints as
well.  Do you check that the version of each library as listed in
upper-constraints does exist on the mirror?  How many versions of each
library do you build wheels for?

2) How does this work on stable branches?  I'm guessing you look at the g-r for
   each branch, build the wheels and then upload/snapshot the whole bunch.
   Verify that and release it for consumption.

3) Do you mirror all matches for a requirements spec or just the highest one
   that matches?

4) Will we see version selection vary between the gate and tests run outside the
   gate?

Actually all of those questions can probably be answer by linking to the code
that builds the wheels.



> Finally, although all Infra projects are a team effort - a big shout out to
> Michael Krotschek and Jim Blair for diving in and getting this finished over
> the past couple of weeks.

Thanks to everyone involved.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mock and the stdlib

2016-02-10 Thread Robert Collins
We've just had a mass gate breakage due to
https://review.openstack.org/#/c/268945/ going through, so I thought I'd
try to get ahead of anyone trying this again.

unittest.mock in the stdlib is not static, it evolves over time. We're
currently writing to - and depending on - the unittest.mock version
approximately == that in python 3.5. Until we have that as our minimum
python version - no 2.7 - we can't just use 'unittest.mock'.

'mock', the original code that became unittest.mock, is still
maintained. It's a rolling backport of the features that land in
Python's stdlib, which lets us use newer features on older pythons. So
- until the hypothetical date when our minimum Python version's stdlib
carries the newest capability we need from unittest.mock - we're
going to be using 'mock', not 'unittest.mock'.

-> no one should be importing 'unittest.mock', and 'mock' is the
dependency, not conditional on any given version of Python.
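
In other words, test code should keep doing this (a trivial example):

    import mock   # the rolling backport from PyPI, not unittest.mock

    def test_fake_call():
        fake = mock.Mock(return_value=42)
        assert fake() == 42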

I'm sorry I didn't spot 268945 going through, or I would have -2'd it :(.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] All hail the new per-region pypi, wheel and apt mirrors

2016-02-10 Thread Clark Boylan
On Wed, Feb 10, 2016, at 06:02 PM, Tony Breeds wrote:
> On Wed, Feb 10, 2016 at 06:45:25PM -0600, Monty Taylor wrote:
> > Hey everybody,
> > 
> > tl;dr - We have new AFS-based consistent per-region mirrors of PyPI and APT
> > repos with additional wheel repos containing pre-built wheels for all the
> > modules in global-requirements
> 
> Woot!
> 
> I do have a couple of questions about the pre-built wheels:
> 
> 1) You say global-requirements, I assume this includes upper-constraints
> as
> well.  Do you check that the version of each library as listed in
> upper-constraints does exist on the mirror?  How many versions of
> each
> library do you build wheels for?

It explicitly builds the wheels using upper constraints.
> 
> 2) How does this work on stable branches?  I'm guessing you look at the
> g-r for
>each branch, build the wheels and then upload snapshot the whole
>bunch.  Verify
>that and release it for consumption.
It iterates through the stable branches and builds wheels for the upper
constraints that it can find. It looks like it should no-op if no
constraints are present.

> 
> 3) Do you mirror all matches for a requirements spec or just the highest
> one
>that matches?
We mirror all of the wheels that we have built over time. As upper
constraints move new wheels will be added. There isn't currently a
delete step but we may add one in the future if necessary.

> 
> 4) Will we see version selection vary between the gate and tests run
> outside the
>gate?
Not if you use constraints. Pip's selection of deps is: constraints win;
if there are no constraints, take the highest version available, and if that
version is available as a wheel, use the wheel.

> 
> Actually all of those questions can probably be answer by linking to the
> code
> that builds the wheels.
> 
Code is at
https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/wheel-build.sh.

Hope that helps,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] Why do we use pip install -U as our install_command

2016-02-10 Thread Clark Boylan
On Wed, Feb 10, 2016, at 05:46 PM, Tony Breeds wrote:
> Hi All,
> I confess up front that I'm pretty green in this area and there is
> a lot
> of history that I just don't have.  That won't stop me from asking/opening
> the
> discussion.
> 
> As I ask in $subject: why do we install with --upgrade in our tox
> environments?
> Are there issues this is fixing/hiding?
> 
> I've been seeing a few failures with requirements updates on stable/kilo.
>  I
> expect that this applies to liberty and master BUT we're seeing less of
> it as
> constraints are a good thing on those branches.
> 
> I'll use glance_store as an example[2]
> https://review.openstack.org/#/c/265182
> I want to be clear this isn't about this specific library (glance_store
> or
> requests), I'm seeing something similar with testtools, fixtures and
> other libraries.
> 
> Looking at:
>  
> http://logs.openstack.org/18/277018/2/check/gate-glance_store-python27/1691013/tox/py27-1.log.txt
>  
> http://logs.openstack.org/18/277018/2/check/gate-glance_store-python27/1691013/tox/py27-2.log.txt
> 
> In py27-1.log we ran (edited for clarity):
> pip install --allow-all-external --allow-insecure netaddr -U
> -rrequirements.txt -rtest-requirements.txt
> 
> ---
> Collecting python-cinderclient<1.2.0,>=1.1.0 (from -r
> /home/jenkins/workspace/gate-glance_store-python27/requirements.txt (line
> 8))
>   Downloading
>   
> http://mirror.iad.rax.openstack.org/pypi/packages/py2.py3/p/python-cinderclient/python_cinderclient-1.1.2-py2.py3-none-any.whl
>   (202kB)
> ...
> Collecting requests!=2.4.0,<2.8.0,>=2.2.0 (from -r
> /home/jenkins/workspace/gate-glance_store-python27/test-requirements.txt
> (line 6))
>   Downloading
>   
> http://mirror.iad.rax.openstack.org/pypi/packages/2.7/r/requests/requests-2.7.0-py2.py3-none-any.whl
>   (470kB)
> ---
> 
> So we installed requests 2.7.0 as per our current g-r specification.
> IIUC we use the spec from test-requirements because all the
> requirements+specs from requirements.txt and test-requirements.txt are
> processed before pip looks at the requirements of each library.  So when
> we look for the requests library while processing python-cinderclient's
> requirements we already have a spec that's been satisfied and move on.
> 
> Then in py27-2.log we run (edited for clarity):
> pip install --allow-all-external --allow-insecure netaddr -U -e <path to repo>
> ---
> Requirement already up-to-date: python-cinderclient<1.2.0,>=1.1.0 in
> ./.tox/py27/lib/python2.7/site-packages (from glance-store==0.4.1.dev16)
> ...
> Collecting requests!=2.4.0,>=2.2.0 (from
> python-cinderclient<1.2.0,>=1.1.0->glance-store==0.4.1.dev16)
>   Downloading
>   
> http://mirror.iad.rax.openstack.org/pypi/packages/2.7/r/requests/requests-2.9.1-py2.py3-none-any.whl
>   (501kB)
> ---
> 
> Here we upgrade requests because python-cinderclient's spec is less
> restrictive[3].  Here we're only looking at requirements.txt, which
> doesn't have a requests specification, so when we process
> python-cinderclient's requirements (with -U) we see a "better" requests
> library, install that, and then "go bang" [4]
> 
> I *think* this particular failure would be "fixed" if we didn't install
> our
> packages with -U.
> 
> I know that people are working on enhancing the pip dependency resolver
> but
> that isn't work we can use today.
> 
> Again there are alternate solutions for this specific issue, but I feel
> like removing -U would fix a class of problems; perhaps it'll create
> another, I don't know.
> 
> Discuss :)
> 
> Yours Tony.
> 
> [1] Footnote deleted in editing and I'm too lazy to renumber the rest :D
> [2] Just because it's the one I have open in my browser
> [3] See https://review.openstack.org/#/c/265182
> [4]
> http://logs.openstack.org/18/277018/2/check/gate-glance_store-python27/1691013/console.html#_2016-02-10_18_35_39_818

The reason I remember off the top of my head is that we spent far too
much time telling people to run `tox -r` when their code failed during
Jenkins testing but ran just fine locally. Installing with -U removes a
significant amount of debugging overhead by keeping everyone on a
relatively consistent set of packages whenever they rerun tests.
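
For reference, the knob in question is tox's install_command; the usual
stanza looks roughly like this (sketched from memory, individual repos
vary):

  [testenv]
  install_command = pip install -U {opts} {packages}

Drop the -U and an existing .tox virtualenv keeps whatever versions it
first resolved until someone recreates it with `tox -r`, which is
exactly the inconsistency described above.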

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cross_project] Ensuring there is an admin project

2016-02-10 Thread Adam Young
We have a fix for one of the most egregious bugs in the history of 
Keystone: https://bugs.launchpad.net/keystone/+bug/968696  The only 
problem is, it requires a configuration file change. A deployer needs to 
set the values:


CONF.resource.admin_project_name
CONF.resource.admin_domain_name
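
In keystone.conf terms that should be (assuming the options land in a
[resource] group, as the CONF names suggest):

  [resource]
  admin_project_name = <name of the real admin project>
  admin_domain_name = <its domain>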

How can we ensure that happens upon upgrade?  Otherwise, we are stuck 
with the existing brokenness.


For devstack, we can do

CONF.resource.admin_project_name = 'admin'
CONF.resource.admin_domain_name = 'Default'

And then, if we want, we would change the default policy files like this:


-"admin_required":"role:admin or is_admin:1",
+"admin_required":"role:admin and token.is_admin_project:True",

How do we make this happen?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mock and the stdlib

2016-02-10 Thread Davanum Srinivas
I've fast tracked the revert - https://review.openstack.org/#/c/278814/

On Wed, Feb 10, 2016 at 9:38 PM, Robert Collins
 wrote:
> We've just had a mass gate breakage due to
> https://review.openstack.org/#/c/268945/ going through, so I thought I'd
> try to get ahead of anyone trying this again.
>
> unittest.mock in the stdlib is not static, it evolves over time. We're
> currently writing to - and depending on - the unittest.mock version
> approximately == that in python 3.5. Until we have that as our minimum
> python version - no 2.7 - we can't just use 'unittest.mock'.
>
> 'mock', the original code that became unittest.mock, is still
> maintained. It's a rolling backport of the features that land in
> Python's stdlib, which lets us use newer features on older pythons. So
> - until the hypothetical date where our minimum Python version is
> newer than the oldest capability we need from unittest.mock, we're
> going to be using 'mock', not 'unittest.mock'.
>
> -> no one should be importing 'unittest.mock', and 'mock' is the
> dependency, not conditional on any given version of Python.
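>
> A minimal sketch of that pattern (module and test names invented here):
>
>     import mock  # the rolling backport from PyPI, never unittest.mock
>
>     import mylib  # hypothetical module under test
>
>     @mock.patch('mylib.connect')  # hypothetical patch target
>     def test_connect_called(fake_connect):
>         mylib.do_something()
>         fake_connect.assert_called_once_with()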
>
> I'm sorry I didn't spot 268945 going through, or I would have -2'd it :(.
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] All hail the new per-region pypi, wheel and apt mirrors

2016-02-10 Thread Tony Breeds
On Wed, Feb 10, 2016 at 07:07:22PM -0800, Clark Boylan wrote:
> On Wed, Feb 10, 2016, at 06:02 PM, Tony Breeds wrote:
> > On Wed, Feb 10, 2016 at 06:45:25PM -0600, Monty Taylor wrote:
> > > Hey everybody,
> > > 
> > > tl;dr - We have new AFS-based consistent per-region mirrors of PyPI and 
> > > APT
> > > repos with additional wheel repos containing pre-built wheels for all the
> > > modules in global-requirements
> > 
> > Woot!
> > 
> > I do have a couple of questions about the pre-built wheels:
> > 
> > 1) You say global-requirements, I assume this includes upper-constraints
> > as
> > well.  Do you check that the version of each library as listed in
> > upper-constraints does exist on the mirror?  How many versions of
> > each
> > library do you build wheels for?
> 
> It explicitly builds the wheels using upper constraints.

Ahh okay.

> > 2) How does this work on stable branches?  I'm guessing you look at the
> > g-r for each branch, build the wheels and then upload/snapshot the
> > whole bunch.  Verify that and release it for consumption.
> It iterates through the stable branches and builds wheels for the upper
> constraints that it can find. Looks like it should noop if no
> constraints are present.

Okay, we'll need to think about that one as the constraints in stable/kilo can
be bogus; sometimes we have a version in constraints that isn't valid compared
to g-r.

We don't enforce constraints on kilo so that causes different pain :D

> > 3) Do you mirror all matches for a requirements spec or just the highest
> > one that matches?
> We mirror all of the wheels that we have built over time. As upper
> constraints move new wheels will be added. There isn't currently a
> delete step but we may add one in the future if necessary.

Ok.

> > 4) Will we see version selection vary between the gate and tests run
> > outside the gate?
> Not if you use constraints. Pip's selection of deps is: constraints win;
> if there are no constraints, take the highest version available, and if
> that version is available as a wheel, use the wheel.

Right, I was thinking of: $library adds a release, I run tox (unconstrained)
at home and get that new release, then I run the same (unconstrained) test in
the gate and get the wheel from the cache.  Just something to keep in mind,
not a problem as such.

> Code is at
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/wheel-build.sh.

Wow okay that is remarkably simple :)

Thanks Clark

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread John Griffith
On Wed, Feb 10, 2016 at 5:12 PM, Fox, Kevin M  wrote:

> But the issue is, when told to detach, some of the drivers do bad things.
> Then, is it the driver's issue to refcount to fix the issue, or is it
> nova's to refcount so that it doesn't call the release before all users are
> done with it? I think solving it in the middle, in cinder, is probably not
> the right place to track it, but if it's to be solved on nova's side, nova
> needs to know when it needs to do it. But cinder might have to relay some
> extra info from the backend.
>
> Either way, on the driver side, there probably needs to be a mechanism on
> the driver to say it either can refcount properly so it's multiattach
> compatible (or that nova should refcount), or to default to not allowing
> multiattach ever, so existing drivers don't break.
>
> Thanks,
> Kevin
> 
> From: Sean McGinnis [sean.mcgin...@gmx.com]
> Sent: Wednesday, February 10, 2016 3:25 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when
> to call os-brick's connector.disconnect_volume
>
> On Wed, Feb 10, 2016 at 11:16:28PM +, Fox, Kevin M wrote:
> > I think part of the issue is that whether to count or not is cinder driver
> specific and only cinder knows if it should be done or not.
> >
> > But if cinder told nova that particular multiattach endpoints must be
> refcounted, that might resolve the issue?
> >
> > Thanks,
> > Kevin
>
> In this case (the point John and I were making at least) it doesn't
> matter. Nothing is driver specific, so it wouldn't matter which backend
> is being used.
>
> If a volume is needed, request it to be attached. When it is no longer
> needed, tell Cinder to take it away. Simple as that.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

​Hey Kevin,

So I think what Sean M pointed out is still valid in your case.  It's not
really that some drivers do bad things, the problem is actually the way
attach/detach works in OpenStack as a whole.  The original design (which we
haven't strayed very far from) was that you could only attach a single
resource to a single compute node.  That was it, there was no concept of
multi-attach etc.

Now however folks want to introduce multi-attach, which means all of the
old assumptions that the code was written on and designed around are kinda
"bad assumptions" now.  It's true, as you pointed out however that there
are some drivers that behave or deal with targets in a way that makes
things complicated, but they're completely inline with the scsi standards
and aren't doing anything *wrong*.

The point Sean M and I were trying to make is that for the specific use
case of a single volume being attached to a compute node, BUT being passed
through to more than one Instance it might be worth looking at just
ensuring that Compute Node doesn't call detach unless it's *done* with all
of the Instances that it was passing that volume through to.
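
A minimal sketch of that bookkeeping on the compute node (names invented
for illustration; this is not Nova or os-brick code):

    # volume target -> set of instance uuids still using it on this host
    _users = {}

    def attach(volume_id, instance_uuid, connect_volume):
        if volume_id not in _users:
            connect_volume(volume_id)        # first user: connect once
        _users.setdefault(volume_id, set()).add(instance_uuid)

    def detach(volume_id, instance_uuid, disconnect_volume):
        instances = _users.get(volume_id, set())
        instances.discard(instance_uuid)
        if not instances:                    # last user: now it's *done*
            disconnect_volume(volume_id)
            _users.pop(volume_id, None)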

You're absolutely right, there are some *weird* things that a couple of
vendors do with targets in cases like replication, where they may
actually create a new target and attach; those sorts of things are
ABSOLUTELY Cinder's problem and Nova should not have to know anything about
that as a consumer of the Target.

My view is that maybe we should look at addressing the multiple use of a
single target case in Nova, and then absolutely figure out how to make
things work correctly on the Cinder side for all the different behaviors
that may occur on the Cinder side from the various vendors.

Make sense?

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] All hail the new per-region pypi, wheel and apt mirrors

2016-02-10 Thread Jeremy Stanley
On 2016-02-11 14:41:03 +1100 (+1100), Tony Breeds wrote:
[...]
> Okay, we'll need to think about that one as the constraints in
> stable/kilo can be bogus; sometimes we have a version in constraints
> that isn't valid compared to g-r.
[...]

Oh, right since there's an upper-constraints.txt in stable/kilo of
openstack/requirements we're building and serving wheels of
whatever's listed in that too. If that file isn't actually relevant
on the branch, it may make more sense to delete it from the repo?
Regardless, if jobs on stable/kilo of projects aren't using those
versions of packages, then the wheels of them aren't hurting
anything; they're just not going to help either.

> Right, I was thinking of: $library adds a release, I run tox
> (unconstrained) at home and get that new release, then I run the
> same (unconstrained) test in the gate and get the wheel from the
> cache.  Just something to keep in mind, not a problem as such.
[...]

Unconstrained jobs will mostly benefit from our custom wheel mirror
if upper-constraints.txt in openstack/requirements and the
requirements files in individual repos are being kept current. If
there's a newer version of some dependency than is listed in the
constraints file but is still within the valid range in a project's
requirements file then unconstrained jobs for that repo will end up
grabbing the newer sdist or (if available) wheel from our full PyPI
mirrors instead of the custom wheel mirror.

In short, constrained jobs will benefit greatly, especially if they
have dependencies which link external C libs and would normally take
a long time to compile. Unconstrained jobs for repos participating
in global requirements sync/enforcement will benefit a lot of the
time as long as their requirements and the constraints list updates
are being kept up with. Jobs for repos not participating in global
requirements may still benefit if they share some requirement
versions with things which are listed in the constraints file on
some branch (or were at some point in recent history).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] Bug 1544227

2016-02-10 Thread Rabi Mishra
Hi,

We did some analysis of the issue you are facing.

One of the issues on the heat side is that we convert None (singleton)
resource references to 'None' (string) and the translation logic was not
ignoring them, even though we don't apply translation rules to resource
references[1]. We don't see this issue after this patch[2].

The issue you mentioned below with respect to SoftwareDeployment (SD) and
SoftwareDeploymentGroup (SDG) does not look like it has anything to do with
this patch. I also see similar issues when you tested with the reverted
patch[3].

I also noticed that there are some 404s from neutron in the engine logs[4]
for the test patch. I did not notice them when I tested locally with the
templates you had provided.


Having said that, we can still revert the patch, if that resolves your issue. 

[1] 
https://github.com/openstack/heat/blob/master/heat/engine/translation.py#L234
[2] https://review.openstack.org/#/c/278576/
[3] http://logs.openstack.org/78/278778/1/check/gate-functional-dsvm-magnum-k8s/ea48ba2/console.html#_2016-02-11_03_07_49_039
[4] 
http://logs.openstack.org/78/278578/1/check/gate-functional-dsvm-magnum-swarm/51eeb3b/logs/screen-h-eng.txt


Regards,
Rabi

> Hi Heat team,
> 
> As mentioned in IRC, the magnum gate broke with bug 1544227. Rabi submitted a
> fix (https://review.openstack.org/#/c/278576/), but it doesn't seem to be
> enough to unlock the broken gate. In particular, it seems templates with
> SoftwareDeploymentGroup resource failed to complete (I have commented on the
> review above for how to reproduce).
> 
> Right now, I prefer to merge the reverted patch
> (https://review.openstack.org/#/c/278575/) to unlock our gate immediately,
> unless someone can work on a quick fix. We appreciate the help.
> 
> Best regards,
> Hongbin
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Supporting multiple Openstack versions

2016-02-10 Thread Andrew Woodward
Right now master (targeting 9.0) is still deploying liberty and there is
active work going on to support both Kilo and Mitaka. On the review queue
are changes that would make fuel-library incompatible with the prior
(liberty) openstack release. However, I think if we expend a little bit of
effort we can keep some semblance of "support" while creating a pattern for
the Kilo support to continue to use. At the same time this pattern can help
us test parallel versions as we move through openstack releases, and should
reduce occurrences of our CI freeze/merge parties.

What is this magic pattern? Well, it's already present, and albeit not
designed for this, I think we could quickly make it work. We use the release
fixture already present in fuel. Originally designed for upgrades, it could
be reused within the same fuel release to control the various aspects needed
to deploy a separate openstack version.

What we need to support multiple OpenStack versions:
1) Package repos that contain the relevant bits. CHECK, this can be toggled
with a new release [1][2]
2) The ability to point to different Puppet modules. CHECK, also toggled in
the release [3]
3) A composition layer that supports calls to different puppet-openstack
modules. WIP, it still needs work, but can be done [4]

So what is open? The composition layer.

Currently, we just abandon support for previous versions in the composition
layer and leave them as monuments in the stable/<series> branches for
maintenance. If we instead started making changes (forwards or backwards)
that switch the calls based on the openstack version [5], then we would be
able to adjust the calls based on the needs of that release and the
puppet-openstack modules we are working with.

Given that most of the time we would be supporting the previous release
(liberty) (which we should avoid dropping until after dev releases) and the
release currently under development (Mitaka), this will give us some
magical powers.

Testing master while keeping stable. Given the ability to condition which
source of openstack bits and which versions of manifests are used, we can
start testing master while keeping stable healthy. This would help
accelerate fuel development as well as deploying and testing development
versions of openstack.

Deploying stable and upgrading later. Again, given the ability to deploy
multiple OpenStack versions within the same Fuel version, teams focused on
upgrades can take advantage of the latest enhancements in fuel to work the
upgrade process more easily. As an added benefit, this would eventually lead
to better support for end-user upgrades too.

Deploying older versions. In the odd case that we need to take advantage of
an older OpenStack release, like Kilo with a newer version of Fuel, we can
easily maintain that version too, as we can keep the older cases around in
the composition layer without adding much burden on the other components.

[1]
https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L1957
[2]
https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L1906

[3]
https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L1371

[4] https://github.com/xarses/fuel-library/tree/9-Kilo

[5]
https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L1948

-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] All hail the new per-region pypi, wheel and apt mirrors

2016-02-10 Thread Tony Breeds
On Thu, Feb 11, 2016 at 04:46:39AM +, Jeremy Stanley wrote:
> On 2016-02-11 14:41:03 +1100 (+1100), Tony Breeds wrote:
> [...]
> > Okay, we'll need to think about that one as the constraints in
> > stable/kilo can be bogus; sometimes we have a version in constraints
> > that isn't valid compared to g-r.
> [...]
> 
> Oh, right since there's an upper-constraints.txt in stable/kilo of
> openstack/requirements we're building and serving wheels of
> whatever's listed in that too. If that file isn't actually relevant
> on the branch, it may make more sense to delete it from the repo?

Possibly deleting it would make sense; I'll leave that for another thread ;P
but a quick grep indicates it's probably safe.

> Regardless, if jobs on stable/kilo of projects aren't using those
> versions of packages, then the wheels of them aren't hurting
> anything; they're just not going to help either.

Thanks for the explanation.

> > Right, I was thinking of: $library adds a release, I run tox
> > (unconstrained) at home and get that new release, then I run the
> > same (unconstrained) test in the gate and get the wheel from the
> > cache.  Just something to keep in mind, not a problem as such.
> [...]
> 
> Unconstrained jobs will mostly benefit from our custom wheel mirror
> if upper-constraints.txt in openstack/requirements and the
> requirements files in individual repos are being kept current. If
> there's a newer version of some dependency than is listed in the
> constraints file but is still within the valid range in a project's
> requirements file then unconstrained jobs for that repo will end up
> grabbing the newer sdist or (if available) wheel from our full PyPI
> mirrors instead of the custom wheel mirror.

Ok, thanks for clarifying that.
 
> In short, constrained jobs will benefit greatly, especially if they
> have dependencies which link external C libs and would normally take
> a long time to compile. Unconstrained jobs for repos participating
> in global requirements sync/enforcement will benefit a lot of the
> time as long as their requirements and the constraints list updates
> are being kept up with. Jobs for repos not participating in global
> requirements may still benefit if they share some requirement
> versions with things which are listed in the constraints file on
> some branch (or were at some point in recent history).

Woot!  I don't want any of these questions to imply dissatisfaction.  Quite the
opposite: I think it's a great thing, and I'm just trying to understand how it
all works.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread Avishay Traeger
I think Sean and John are in the right direction.  Nova and Cinder need to
be more decoupled in the area of volume attachments.

I think some of the mess here is due to different Cinder backend behavior -
with some Cinder backends you actually attach volumes to a host (e.g., FC,
iSCSI), with some you attach to a VM (e.g., Ceph), and with some you attach
an entire pool of volumes to a host (e.g., NFS).  I think this difference
should all be contained in the Nova drivers that do the attachments.

On Thu, Feb 11, 2016 at 6:06 AM, John Griffith 
wrote:

>
>
> On Wed, Feb 10, 2016 at 5:12 PM, Fox, Kevin M  wrote:
>
>> But the issue is, when told to detach, some of the drivers do bad things.
>> Then, is it the driver's issue to refcount to fix the issue, or is it
>> nova's to refcount so that it doesn't call the release before all users are
>> done with it? I think solving it in the middle, in cinder, is probably not
>> the right place to track it, but if it's to be solved on nova's side, nova
>> needs to know when it needs to do it. But cinder might have to relay some
>> extra info from the backend.
>>
>> Either way, on the driver side, there probably needs to be a mechanism on
>> the driver to say it either can refcount properly so it's multiattach
>> compatible (or that nova should refcount), or to default to not allowing
>> multiattach ever, so existing drivers don't break.
>>
>> Thanks,
>> Kevin
>> 
>> From: Sean McGinnis [sean.mcgin...@gmx.com]
>> Sent: Wednesday, February 10, 2016 3:25 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining
>> when to call os-brick's connector.disconnect_volume
>>
>> On Wed, Feb 10, 2016 at 11:16:28PM +, Fox, Kevin M wrote:
>> > I think part of the issue is that whether to count or not is cinder driver
>> specific and only cinder knows if it should be done or not.
>> >
>> > But if cinder told nova that particular multiattach endpoints must be
>> refcounted, that might resolve the issue?
>> >
>> > Thanks,
>> > Kevin
>>
>> In this case (the point John and I were making at least) it doesn't
>> matter. Nothing is driver specific, so it wouldn't matter which backend
>> is being used.
>>
>> If a volume is needed, request it to be attached. When it is no longer
>> needed, tell Cinder to take it away. Simple as that.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ​Hey Kevin,
>
> So I think what Sean M pointed out is still valid in your case.  It's not
> really that some drivers do bad things, the problem is actually the way
> attach/detach works in OpenStack as a whole.  The original design (which we
> haven't strayed very far from) was that you could only attach a single
> resource to a single compute node.  That was it, there was no concept of
> multi-attach etc.
>
> Now however folks want to introduce multi-attach, which means all of the
> old assumptions that the code was written on and designed around are kinda
> "bad assumptions" now.  It's true, as you pointed out however that there
> are some drivers that behave or deal with targets in a way that makes
> things complicated, but they're completely inline with the scsi standards
> and aren't doing anything *wrong*.
>
> The point Sean M and I were trying to make is that for the specific use
> case of a single volume being attached to a compute node, BUT being passed
> through to more than one Instance it might be worth looking at just
> ensuring that Compute Node doesn't call detach unless it's *done* with all
> of the Instances that it was passing that volume through to.
>
> You're absolutely right, there are some *weird* things that a couple of
> vendors do with targets in cases like replication, where they may
> actually create a new target and attach; those sorts of things are
> ABSOLUTELY Cinder's problem and Nova should not have to know anything about
> that as a consumer of the Target.
>
> My view is that maybe we should look at addressing the multiple use of a
> single target case in Nova, and then absolutely figure out how to make
> things work correctly on the Cinder side for all the different behaviors
> that may occur on the Cinder side from the various vendors.
>
> Make sense?
>
> John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [infra] [trove] gate jobs failing with ovh apt mirrors

2016-02-10 Thread Craig Vyvial
I started noticing more of the Trove gate jobs failing in the last 24 hours
and I think I've tracked it down to this mirror specifically:
http://mirror.bhs1.ovh.openstack.org/ubuntu/pool/main/p/
It looks like it's missing the python-software-properties package, which is
causing our gate job to fail. I found a job that passed in the last 24
hours and it wasn't using this same mirror.

[job pass]
http://logs.openstack.org/50/278050/1/check/gate-trove-functional-dsvm-mysql/067e81c/console.html#_2016-02-10_18_39_14_494
[job fail]
http://logs.openstack.org/50/278050/1/check/gate-trove-functional-dsvm-mysql/e70f5c0/logs/devstack-gate-post_test_hook.txt.gz#_2016-02-11_05_12_01_023

Can someone help us resolve this?

Thanks,
Craig Vyvial
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Multi release packages

2016-02-10 Thread Bulat Gaifullin
I agree with Stas, one rpm - one version.

But the plugin builder allows specifying several releases as compatible. The
deployment tasks and repositories can be specified per release, while at the
same time the deployment graph is one for all releases.
Currently it looks like a half-implemented feature.  Should we drop it, or
should we finish implementing it?


Regards,
Bulat Gaifullin
Mirantis Inc.



> On 11 Feb 2016, at 02:41, Andrew Woodward  wrote:
> 
> 
> 
> On Wed, Feb 10, 2016 at 2:23 PM, Dmitry Borodaenko wrote:
> +1 to Stas, supplanting VCS branches with code duplication is a path to
> madness and despair. The dubious benefits of a cross-release backwards
> compatible plugin binary are not worth the code and infra technical debt
> that such approach would accrue over time.
> 
> Supporting multiple fuel releases will likely result in madness as discussed;
> however, as we look to support multiple OpenStack releases from the same
> version of fuel, this methodology becomes much more important.
>  
> On Wed, Feb 10, 2016 at 07:36:30PM +0300, Stanislaw Bogatkin wrote:
> > It changes mostly nothing for the case of furious plugin development when
> > big parts of the code change from one release to another.
> >
> > You will have 6 different deployment_tasks directories and 30 slightly
> > different files in the root directory of the plugin. Also you forgot about
> > the repositories directory (+6 at least), pre_build hooks (also 6) and so
> > on. It will look like hell after just 3 years of development.
> >
> > Also I can't imagine how to deal with plugin licensing if you have Apache
> > for liberty but BSD for mitaka release, for example.
> >
> > A much easier way to develop a plugin is to keep its source in a VCS like
> > Git and just make a branch for every fuel release. That gives us the
> > opportunity to not store a bunch of similar but slightly different files
> > in the repo. There is no reason to drag around all the different versions
> > of code for a specific release.
> >
> >
> > On the other hand there is a pro - your plugin can survive an upgrade if
> > it supports the new release, no changes needed here.
> >
> > On Wed, Feb 10, 2016 at 4:04 PM, Alexey Shtokolov wrote:
> >
> > > Fuelers,
> > >
> > > We are discussing the idea to extend the multi release packages for
> > > plugins.
> > >
> > > Fuel plugin builder (FPB) can create one rpm-package for all supported
> > > releases (from metadata.yaml) but we can specify only deployment scripts
> > > and repositories per release.
> > >
> > > Current release definition (in metadata.yaml):
> > > - os: ubuntu
> > >   version: liberty-8.0
> > >   mode: ['ha']
> > >   deployment_scripts_path: deployment_scripts/
> > >   repository_path: repositories/ubuntu
> > >
> 
> This will result in far too much clutter.
> For starters we should support nested overrides. For example, the author may
> have already accounted for the changes from one openstack version to
> another. In this case they should only need to define the releases they
> support and not specify any additional locations. Later they may determine
> that they only need to replace packages, or one other file; they should not
> be required to code every location for each release.
> 
> Also, at the same time we MUST clean up importing various yaml files. 
> Specifically, tasks, volumes, node roles, and network roles. Requiring that 
> they all be maintained in a single file doesn't scale; we don't require it
> for tasks.yaml in fuel library, and we should not require it in plugins. We 
> should simply do the same thing as tasks.yaml in library, scan the subtree 
> for specific file names and just merge them all together. (This has been 
> expressed multiple times by people with larger plugins)
> 
> > > So the idea [0] is to make releases fully configurable.
> > > Suggested changes for release definition (in metadata.yaml):
> > >   components_path: components_liberty.yaml
> > >   deployment_tasks_path: deployment_tasks_liberty/ # <- folder 
> > >   environment_config_path: environment_config_liberty.yaml
> > >   network_roles_path: network_roles_liberty.yaml
> > >   node_roles_path: node_roles_liberty.yaml
> > >   volumes_path: volumes_liberty.yaml
> > >
> > > I see the issue: if we change anything for one release (e.g.
> > > deployment_task typo) revalidation is needed for all releases.
> > >
> > > Your pros and cons, please?
> > >
> > > [0] https://review.openstack.org/#/c/271417/
> > > ---
> > > WBR, Alexey Shtokolov
> > >
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: 
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> > > 
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all] RFC - service naming registry under API-WG

2016-02-10 Thread Thierry Carrez

Sean Dague wrote:

> [...]
> 3) This be a dedicated repository 'openstack/service-registry'. The API
> WG will have votes on it (I would also suggest the folks that have been
> working on Service Catalog TNG - myself, Anne Gentle, Brant Knudson, and
> Chris Dent be added to this). The actual registry will be some
> structured file that supports comments (probably yaml).
> [...]
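
For illustration, an entry in such a registry might look like this
(purely invented; no schema had been proposed at this point):

  - service_type: compute
    project: nova
    aliases: []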


The alternative would be to dump that information in the projects.yaml 
file from the governance repository. I think I prefer your approach of a 
separate repository, since it's more consistent with my goal of having 
the TC step out of the way of getting things done. Decisions from that 
group can easily be appealed to the TC if a conflict arises. The only 
issue is the consistency between the two repositories, but I think it's 
manageable.


TL;DR: +1

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-10 Thread Tim Bell

On 11/02/16 00:33, "gordon chung"  wrote:

>
>
>On 10/02/2016 4:28 PM, Tim Bell wrote:
>>
>> On 10/02/16 21:53, "gordon chung"  wrote:
>>
>>> apologies if this was asked somewhere else in thread, but should we try
>>> to define "production" scale or can we even? based on the last survey,
>>> the vast majority of deployments are under 100nodes[1]. that said, a few
>>> years ago, one company was dreaming 100,000 nodes.
>>>
>>> i'd imagine the 50 node solution won't satisfy the 1000 node solution
>>> let alone the 10k node. similarly, the opposite direction will probably
>>> give an overkill solution. it seems somewhat difficult to define
>>> something against 'production' term unless we scope it somehow (e.g # of
>>> node ranges)?
>>>
>>> [1] http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf
>>
>>
>> As always, scale is relative. However, projects have shown major
>> difficulties scaling to 10% of the size of the larger deployments. Scaling beyond that,
>> even with commercial solutions, has required major investments in custom 
>> configurations by the deployers.
>>
>> There are two risks I see
>>
>> A. Use sqlite and then change to proprietary solution X for scale
>> B. Works at a small scale but scalability has not been considered as a 
>> design criteria or demonstrated
>>
>> I think it is important that the community is informed on these constraints 
>> before feeling that a particular project is the solution for them and that 
>> the TC factors these questions into their approval criteria.
>>
>
>is there a source for this? a place where people list their reference 
>architectures and deployment scales?
>
>i'm not a deployer but as an outsider, i've found that there isn't a lot 
>of transparency in regards to how projects have been made to scale. 
>maybe this is a side effect of OpenStack being hard as hell to use, but 
>it seems configurations are the secret sauce people use to sell so we 
>have a lot of failure stories (bottom-end constraints) in the community 
>rather than successes (upper-end constraints).
>
>are there a collection of fully transparent deployers out there to be 
>our 'production' baseline? to help vet scalability? just CERN?

The large deployment team 
(https://wiki.openstack.org/wiki/Large_Deployment_Team) meets regularly and 
presents architectures at the summits and ops mid cycle meetup ‘show&tell’ 
sessions. I can remember presentations from Walmart, Paypal/eBay, Rackspace and 
Yahoo! recently. The LDT etherpads also contain a lot of information. This is 
then regularly put into the ops manual 
(http://docs.openstack.org/openstack-ops/content/architecture.html).

>
>-- 
>gord
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev