Re: [openstack-dev] [openstack-ansible][security] Creating a CA for openstack-ansible deployments?

2015-10-29 Thread Clark, Robert Graham
On 29/10/2015 21:43, "Major Hayden"  wrote:



>On 10/29/2015 04:33 AM, McPeak, Travis wrote:
>> The only potential security drawback is that we are introducing a new
>> asset to protect.  If we create the tools that enable a deployer to
>> easily create and administer a lightweight CA, that should add
>> significant value to OpenStack, especially for smaller organizations
>> that don't have experience running a CA.
>
>This is certainly true.  However, I'd like to solve for the use of self-signed 
>SSL certificates in openstack-ansible first.
>
>At the moment, each self-signed certificate for various services is generated 
>within each role.  The goal would be to make a CA at the beginning and then 
>allow roles to utilize another role/task to issue certificates from that CA.  
>The CA would most likely be located on the deployment host.
>
>Deployers who are very security conscious can provide keys, certificates, and 
>CA certificates in the deployment configuration and those will be used instead 
>of generating self-signed certificates.
>
>--
>Major Hayden

It sounds like what you probably need is a lightweight CA, without revocation, 
that gives you some basic constraints by which you can restrict certificate 
issuance to just your ansible tasks and that could potentially be thrown away 
when it’s no longer required. Particularly something light enough that it could 
live on any deployment/installer node.

This sounds like it _might_ be a good fit for Anchor[1], though possibly not if 
I’ve misunderstood your use-case.

[1] https://wiki.openstack.org/wiki/Security#Anchor_-_Ephemeral_PKI

Cheers
-Rob


Re: [openstack-dev] [openstack-ansible][security] Creating a CA for openstack-ansible deployments?

2015-10-29 Thread Major Hayden
On 10/29/2015 04:33 AM, McPeak, Travis wrote:
> The only potential security drawback is that we are introducing a new
> asset to protect.  If we create the tools that enable a deployer to
> easily create and administer a lightweight CA, that should add
> significant value to OpenStack, especially for smaller organizations
> that don't have experience running a CA.

This is certainly true.  However, I'd like to solve for the use of self-signed 
SSL certificates in openstack-ansible first.

At the moment, each self-signed certificate for various services is generated 
within each role.  The goal would be to make a CA at the beginning and then 
allow roles to utilize another role/task to issue certificates from that CA.  
The CA would most likely be located on the deployment host.

Deployers who are very security conscious can provide keys, certificates, and 
CA certificates in the deployment configuration and those will be used instead 
of generating self-signed certificates.
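
For illustration only - this is not the actual openstack-ansible code, and
the names, lifetimes and key size below are placeholders - creating such a
CA and issuing one service certificate from it could look roughly like this
with the Python cryptography library:

    from datetime import datetime, timedelta

    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    def make_key():
        # 2048-bit RSA key for both the CA and the issued certificate
        return rsa.generate_private_key(65537, 2048, default_backend())

    # 1) create the deployment CA (self-signed, constrained to leaf certs)
    ca_key = make_key()
    ca_name = x509.Name([x509.NameAttribute(
        NameOID.COMMON_NAME, u'openstack-ansible-deploy-ca')])
    ca_cert = (x509.CertificateBuilder()
               .subject_name(ca_name)
               .issuer_name(ca_name)
               .public_key(ca_key.public_key())
               .serial_number(1)
               .not_valid_before(datetime.utcnow())
               .not_valid_after(datetime.utcnow() + timedelta(days=3650))
               .add_extension(x509.BasicConstraints(ca=True, path_length=0),
                              critical=True)
               .sign(ca_key, hashes.SHA256(), default_backend()))

    # 2) issue a certificate for one service, signed by that CA
    svc_key = make_key()
    svc_cert = (x509.CertificateBuilder()
                .subject_name(x509.Name([x509.NameAttribute(
                    NameOID.COMMON_NAME, u'horizon.example.com')]))
                .issuer_name(ca_cert.subject)
                .public_key(svc_key.public_key())
                .serial_number(2)
                .not_valid_before(datetime.utcnow())
                .not_valid_after(datetime.utcnow() + timedelta(days=365))
                .sign(ca_key, hashes.SHA256(), default_backend()))

The role issuing certificates would then serialize the keys and
certificates to PEM and hand them to the service roles; deployer-provided
keys and certificates would simply bypass this step.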

--
Major Hayden



Re: [openstack-dev] [cinder][nova] gate-cinder-python34 failing test_nova_timeout after novaclient 2.33 release

2015-10-29 Thread Matt Riedemann



On 10/29/2015 5:55 AM, Andrey Kurilin wrote:

> But it was released with 2961e82 which was the backward incompatible
> requests exception change, which we now have a fix for that we want to
> release, but would include 0cd5812.

I suppose we need to revert the 0cd5812 change too, cut a new release and
then revert the revert of 0cd5812 :)

On Wed, Oct 28, 2015 at 8:44 PM, Matt Riedemann wrote:



On 10/28/2015 12:28 PM, Matt Riedemann wrote:



On 10/28/2015 10:41 AM, Ivan Kolodyazhny wrote:

Matt,

Thank you for bringing this topic to the ML.

In cinder, we've merged patch [1] to unblock the gates. I've proposed
another patch [2] to fix global-requirements for the stable/liberty
branch.

[1] https://review.openstack.org/#/c/239837/
[2] https://review.openstack.org/#/c/239799/

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Thu, Oct 29, 2015 at 12:13 AM, Matt Riedemann wrote:



 On 10/28/2015 9:22 AM, Matt Riedemann wrote:



On 10/28/2015 9:06 AM, Yuriy Nesenenko wrote:

Hi. Look at https://review.openstack.org/#/c/239837/

On Wed, Oct 28, 2015 at 3:52 PM, Matt Riedemann wrote:
[quoted message and mailing list footers trimmed]
Heh, well that's 3 bugs then, I didn't see that one. jgriffith and I
were talking in IRC about just handling both exceptions in cinder to fix
this, but we also agreed that this is a backward incompatible change on
the novaclient side, which was also discussed in the original novaclient
wishlist bug that prompted the breaking change.

Given the backward compat issues, we might not just be breaking cinder
here, so I've proposed a revert of the novaclient change with
justification in the commit message:

Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-29 Thread Jay Lau
Hi Steve,

It is really a big loss for Magnum, and thank you very much for your help in
my Magnum journey. I wish you good luck in Kolla!


On Tue, Oct 27, 2015 at 2:29 PM, 大塚元央  wrote:

> Hi Steve,
>
> I'm very sad about your stepping down from Magnum core. Without your help,
> I couldn't have contributed to the magnum project.
> But kolla is also a fantastic project.
> I wish you the best of luck in kolla.
>
> Best regards.
> - Yuanying Otsuka
>
> On Tue, Oct 27, 2015 at 00:39 Baohua Yang  wrote:
>
>> Really a pity!
>>
>> We need more resources on the container part in OpenStack indeed, as so
>> many new projects are just initiated.
>>
>> Community is not only about putting technologies together, but also
>> about putting technical guys together.
>>
>> Happy to see so many guys at the Tokyo Summit this afternoon.
>>
>> Let's take the opportunities to communicate well with
>> each other.
>>
>> On Mon, Oct 26, 2015 at 8:17 AM, Steven Dake (stdake) 
>> wrote:
>>
>>> Hey folks,
>>>
>>> It is with sadness that I find myself under the situation to have to
>>> write this message.  I have the privilege of being involved in two of the
>>> most successful and growing projects (Magnum, Kolla) in OpenStack.  I chose
>>> getting involved in two major initiatives on purpose, to see if I could do
>>> the job; to see if I could deliver two major initiatives at the same
>>> time.  I also wanted it to be a length of time that was significant – 1+
>>> year.  I found indeed I was able to deliver both Magnum and Kolla, however,
>>> the impact on my personal life has not been ideal.
>>>
>>> The Magnum engineering team is truly a world class example of how an
>>> Open Source project should be constructed and organized.  I hope some young
>>> academic writes a case study on it some day but until then, my gratitude to
>>> the Magnum core reviewer team is warranted by the level of their sheer
>>> commitment.
>>>
>>> I am officially focusing all of my energy on Kolla going forward.  The
>>> Kolla core team elected me as PTL (or more accurately didn’t elect anyone
>>> else;) and I really want to be effective for them, especially in what I
>>> feel is Kolla’s most critical phase of growth.
>>>
>>> I will continue to fight  for engineering resources for Magnum
>>> internally in Cisco.  Some of these have born fruit already including the
>>> Heat resources, the Horizon plugin, and of course the Networking plugin
>>> system.  I will also continue to support Magnum from a resources POV where
>>> I can do so (like the fedora image storage for example).  What I won’t be
>>> doing is reviewing Magnum code (serving as a gate), or likely making much
>>> technical contribution to Magnum in the future.  On the plus side I’ve
>>> replaced myself with many, many more engineers from Cisco who should be much
>>> more productive combined than I could have been alone ;)
>>>
>>> Just to be clear, I am not abandoning Magnum because I dislike the
>>> people or the technology.  I think the people are fantastic! And the
>>> technology – well I helped design the entire architecture!  I am letting
>>> Magnum grow up without me as I have other children that need more direct
>>> attention.  I think this viewpoint shows trust in the core reviewer team,
>>> but feel free to make your own judgements ;)
>>>
>>> Finally I want to thank Perry Myers for influencing me to excel at
>>> multiple disciplines at once.  Without Perry as a role model, Magnum may
>>> have never happened (or would certainly be much different than it is
>>> today). Being a solid hybrid engineer has a long ramp up time and is really
>>> difficult, but also very rewarding.  The community has Perry to blame for
>>> that ;)
>>>
>>> Regards
>>> -steve
>>>
>>>
>>
>>
>> --
>> Best wishes!
>> Baohua
>


-- 
Thanks,

Jay Lau (Guangya Liu)

[openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-29 Thread Bogdan Dobrelya
Hello.
There are a few types of deployment regressions possible: when changing
the version of a module consumed from upstream (or from an internal
module repo), for example from Liberty to Mitaka, or when changing the
composition layer (modular tasks in Fuel), specifically
adding/removing/changing classes and class parameters.

An example is a regression in the swift deployment data [0]: something
was changed unnoticed by the existing noop tests, and as a result the
swift data ended up being stored in the root partition.

The suggested per-commit regression detection [1] for deployment data is
intended to automatically detect whether a class in a noop catalog run
has gained or lost a parameter, or had one updated to another value, by
a patch under test. Later, this check could even replace the existing
noop tests and everything would be checked automatically, provided every
deployment scenario is covered by a corresponding template; these are
represented as YAML files [2] in Fuel.
Note: The tool [3] can help to get all deployment cases (-Y) and all
deployment tasks (-S) as well.

I propose to review the patch [1], understand how it works (see the
tl;dr section below) and start using it ASAP. The earlier we commit the
"initial" data layer state, the fewer regressions will pop up.

(tl;dr)
The check should be done for every modular component (aka deployment
task). Data generated in the noop catalog run for all classes and
defines of a given deployment task should be verified against its
"acknowledged" (committed) state.
The test gate should fail if changes are found, such as a new parameter
with a defined value, a removed parameter, or a changed parameter value.

In order to remove a regression, a patch author will have to add (and
reviewers should acknowledge) the detected changes in the committed
state of the deployment data. This may be done manually, with a tool
like [3], by a pre-commit hook, or even on the CI side!
The regression check should show the diff between the committed state
and the new state proposed in a patch. The changed state should be
*reviewed* and accepted with the patch, to become the committed one. So
the deployment data will evolve with *only* approved changes, and those
changes will be very easy to discover for each patch under review!
No more regressions, everyone happy.
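
To make the comparison at the heart of such a check concrete, here is a
minimal sketch in Python (the actual tool [3] is Ruby; the file names
and the flattened {class: {parameter: value}} layout of the YAML
templates are assumptions made for this illustration):

    import sys

    import yaml

    committed = yaml.safe_load(open('committed/apache.yaml'))
    proposed = yaml.safe_load(open('proposed/apache.yaml'))

    failed = False
    for klass in sorted(set(committed) | set(proposed)):
        old = committed.get(klass, {})
        new = proposed.get(klass, {})
        # a parameter counts as changed if it appeared, disappeared,
        # or got another value
        for param in sorted(set(old) | set(new)):
            if old.get(param) != new.get(param):
                print('%s: %s: %r -> %r' % (klass, param,
                                            old.get(param),
                                            new.get(param)))
                failed = True
    if failed:
        sys.exit(1)  # fail the gate until the diff is acknowledged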

Examples:

- A. A patch author removed the mpm_module parameter from the
composition layer (apache modular task). The test should fail with a

Diff:
  @@ -90,7 +90,7 @@
 manage_user=> 'true',
 max_keepalive_requests => '100',
 mod_dir=> '/etc/httpd/conf.d',
  -  mpm_module => 'false',
  +  mpm_module => 'prefork',
 name   => 'Apache',
 package_ensure => 'installed',
 ports_file => '/etc/httpd/conf/ports.conf',

It illustrates that the mpm_module's committed value was 'false', but
the new one came as 'prefork', likely from the apache class defaults.
The solution:
Follow the failed build link and look at the detected changes (a diff).
Acknowledge the changes and include the rebuilt templates in the patch
as a new revision. An example command for the tool [3] (use -h for help):
./utils/jenkins/fuel_noop_tests.rb -q -b -s api-proxy/api-proxy_spec.rb

Or edit the committed templates manually and include data changes in the
patch as well.

- B. An upstream module author added a new parameter mpm_mode with a
default of '123'. The test should fail with a

Diff:
   @@ -90,6 +90,7 @@
  manage_user=> 'true',
  max_keepalive_requests => '100',
  mod_dir=> '/etc/httpd/conf.d',
   +  mpm_mode   => '123',
  mpm_module => 'false',
  name   => 'Apache',
  package_ensure => 'installed',

It illustrates that the composition layer is not consistent with the
upstream module's data schema, which could be a potential regression in
deployment (a new parameter added upstream goes with its default value,
being ignored by the composition manifest).
The solution is the same as for case A.

[0] https://bugs.launchpad.net/fuel/+bug/1508482
[1] https://review.openstack.org/240015
[2]
https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
[3]
https://review.openstack.org/#/c/240015/7/utils/jenkins/fuel_noop_tests.rb

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-29 Thread Bogdan Dobrelya
On 29.10.2015 15:24, Bogdan Dobrelya wrote:
> [snip]
> [0] https://bugs.launchpad.net/fuel/+bug/1508482
> [1] https://review.openstack.org/240015

Please use the 6th revision to catch the idea. The next revisions are
filled with tons of auto-generated templates representing the committed
state of the deployment data plane. This is still WIP...

We are considering whether we should check the data state for *all*
classes and defines of each deployment task, or only for some of the
classes. The latter would drastically decrease the amount of
auto-generated data templates, but the former would allow us to catch
more regressions.


Re: [openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-29 Thread Matthew Mosesohn
Bogdan,

I don't want to maintain catalog resources 10 (or 20) times over; that
is an incredible amount of work to keep up. There has to be a better way
to ensure we don't lose some things. The approach I had in mind would
have covered these cases:
1 - track all hiera lookups and make sure we catch new/lost lookups
2 - ensure all classes called from top-level tasks are passed these
values from hiera

In this case we lost a hiera lookup. Such a test would cover this,
rather than comparing the resulting puppet resources.
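
For example, assuming the hiera keys looked up during a noop catalog
compile can be dumped to plain text files, one key per line (how to
collect them - a custom hiera backend or log scraping - is left aside,
and the file names here are hypothetical), check (1) boils down to a
set comparison in Python:

    # compare hiera lookups between the committed state and a patch
    # under test; fail if any lookup appeared or disappeared
    committed = set(open('committed_hiera_keys.txt').read().split())
    proposed = set(open('proposed_hiera_keys.txt').read().split())

    new_keys = proposed - committed
    lost_keys = committed - proposed
    if new_keys or lost_keys:
        raise SystemExit('hiera lookups changed: new=%s lost=%s'
                         % (sorted(new_keys), sorted(lost_keys)))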

On Thu, Oct 29, 2015 at 5:39 PM, Bogdan Dobrelya wrote:

> On 29.10.2015 15:24, Bogdan Dobrelya wrote:
> > [snip]

[openstack-dev] New [puppet] module for Magnum project

2015-10-29 Thread Potter, Nathaniel
Hi everyone,

I'm interested in starting up a puppet module that will handle the Magnum 
containers project. Would this be something the community might want? Thanks!

Best,
Nate Potter


Re: [openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-29 Thread Bogdan Dobrelya
> [snip]


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] New [puppet] module for Magnum project

2015-10-29 Thread Potter, Nathaniel
Hi Adrian,

Basically it would fall under the same umbrella as all of the other
puppet-openstack projects, which use puppet automation to configure and
manage various OpenStack services. An example of a mature one is the
module for the Cinder project: https://github.com/openstack/puppet-cinder.
Right now there are about 35-40 such puppet modules for different
OpenStack projects, so one example of who might make use of this module
is deployers who have already used the existing puppet modules to set up
their cloud and wish to incorporate Magnum using the same tool.

Thanks,
Nate

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: Thursday, October 29, 2015 10:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] New [puppet] module for Magnum project

Nate,

On Oct 29, 2015, at 11:26 PM, Potter, Nathaniel wrote:

Hi everyone,

I’m interested in starting up a puppet module that will handle the Magnum 
containers project. Would this be something the community might want? Thanks!

Best,
Nate Potter

Can you elaborate a bit more about your concept? Who would use this? What 
function would it provide? My guess is that you are suggesting a puppet config 
for adding the Magnum service to an OpenStack cloud. Is that what you meant? If 
so, could you share a reference to an existing one that we could see as an 
example of what you had in mind?

Thanks,

Adrian



[openstack-dev] Performance Team summit session results

2015-10-29 Thread Dina Belova
Hey folks!

On Tuesday we had a great summit session about the performance team
kick-off, and yesterday there was a great LDT session as well; I'm really
glad to see how important the OpenStack performance topic is for all of
us. A 40-minute session surely was not enough to analyse everyone's
feedback and the bottlenecks people usually see, so I'll try to summarise
what has been discussed and the next steps in this email.

Performance team kick-off session
(https://etherpad.openstack.org/p/mitaka-cross-project-performance-team-kick-off)
can be shortly described with the following points:

- IBM, Intel, HP, Mirantis, Rackspace, Red Hat, Yahoo! and others were
  taking part in the session
- Various tools are currently used for OpenStack benchmarking and
  profiling:
  - Rally (IBM, HP, Mirantis, Yahoo!)
  - Shaker (Mirantis, merging its functionality into Rally right now)
  - Gatling (Rackspace)
  - Zipkin (Yahoo!)
  - JMeter (Yandex)
  - and others…
- Various issues have been seen during OpenStack cloud operation (the
  full list can be found here -
  https://etherpad.openstack.org/p/openstack-performance-issues). The
  most mentioned issues were:
  - performance of DB-related layers (the DB itself and oslo.db) - there
    are about 7 DB abstraction layers in Nova; performance of the Nova
    conductor was mentioned several times
  - performance of MQ-related layers (the MQ itself and oslo.messaging)
- Different companies are using different standards for performance
  benchmarking (both control plane and data plane testing)
- Based on the comments, the most wanted outputs from the team are:
  - agree on a "performance testing standard", including answers to the
    following questions:
    - what tools need to be used for OpenStack performance benchmarking?
    - what benchmarking meters need to be covered? what would we like to
      compare?
    - what scenarios need to be covered?
    - how can we compare performance of different cloud deployments?
    - what performance deployment patterns can be used for various
      workloads?
  - share test plans and perform benchmarking tests
  - create methodologies and documentation about the best OpenStack
    deployment and performance testing practices


We're going to cover all these topics further. First of all, an IRC
channel for the discussions was created: *#openstack-performance*. We're
going to have a weekly meeting about current progress on that channel; a
doodle for voting on the timeslot can be found here:
http://doodle.com/poll/wv6qt8eqtc3mdkuz#table
(I was brave enough not to include timeslots that overlap with some of my
really hard-to-move activities :))

Let's have next week as the voting time, and have the first IRC meeting
in our channel the week after next. We can start our further discussions
by defining the terms "performance" and "performance testing" and
analysing the benchmarking tools.

Cheers,
Dina


Re: [openstack-dev] New [puppet] module for Magnum project

2015-10-29 Thread Emilien Macchi


On 10/29/2015 11:26 PM, Potter, Nathaniel wrote:
> Hi everyone,
> 
>  
> 
> I’m interested in starting up a puppet module that will handle the
> Magnum containers project. Would this be something the community might
> want? Thanks!

If this module is about deploying Magnum service(s) and configuration,
it's a great idea to create puppet-magnum (we currently don't have it).

Please look at https://wiki.openstack.org/wiki/Puppet/New_module and come
to us on IRC if you have any questions. I'm glad someone is taking care
of it!

Emilien



Re: [openstack-dev] Performance Team summit session results

2015-10-29 Thread Matt Riedemann



On 10/29/2015 9:30 AM, Dina Belova wrote:

[snip]



Thanks for writing this up, it's great to see people getting together
and sharing info on performance issues and trying to pinpoint the big
ones.

I poked through the performance issues etherpad and was wondering how
many people with DB issues, particularly for nova-conductor, are using a
level of oslo.db that's new enough to be using pymysql rather than
mysql-python, because from what I remember there were eventlet issues
without pymysql. pymysql support was added in oslo.db 1.12.0 [1].
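
For reference, the driver switch shows up in the [database] connection
URL in nova.conf; a minimal sketch with placeholder credentials (the
mysql+pymysql:// dialect is what selects PyMySQL in SQLAlchemy):

    [database]
    # old MySQL-Python C driver (blocks the whole process under eventlet):
    # connection = mysql://nova:secret@127.0.0.1/nova
    # pure-python PyMySQL driver (available with oslo.db >= 1.12.0):
    connection = mysql+pymysql://nova:secret@127.0.0.1/nova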


The nova-conductor workers / CPU usage is also a known issue in the
large ops gate job [2], but I'm not aware of anyone spending the time
drilling into what exactly is causing a lot of that overhead and whether
any of it is abnormal.


Finally, wrt DB, I'd also be interested to know if Rackspace, or anyone 
else, is still running with the direct-to-sql stuff that comstud wrote 
for nova [3] and if that still shows significant performance 
improvements over using sqlalchemy ORM. Not to open that can of worms in 
the -dev list here again, but it'd be an interesting data point.


[1] https://review.openstack.org/#/c/184392/
[2] https://review.openstack.org/#/c/228636/
[3] https://blueprints.launchpad.net/nova/+spec/db-mysqldb-impl

--

Thanks,

Matt Riedemann




[openstack-dev] [Rally][Meeting][Agenda]

2015-10-29 Thread Roman Vasilets
Hi, this is a friendly reminder that if you want to discuss some topics
at the Rally meetings, please add your topic to our meeting agenda:
https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
specify who will lead the topic discussion, and add some information
about the topic (links, etc.). Thank you for your attention.

- Best regards, Vasilets Roman.


Re: [openstack-dev] [Fuel] network_checker code freeze

2015-10-29 Thread Vladimir Kozhukalov
Dear colleagues,

We still cannot say that network-checker is a separate project yet; I'm
still working on related issues. The current status is:

Network-checker

   - Launchpad bug https://bugs.launchpad.net/fuel/+bug/1506896
   - project-config patch https://review.openstack.org/235822 (DONE)
   - pypi (DONE)
   - run_tests.sh https://review.openstack.org/#/c/235829/ (DONE)
   - rpm/deb specs https://review.openstack.org/#/c/235966/ (DONE)
   - fuel-ci verification jobs https://review.fuel-infra.org/12923 (DONE)
   - label jenkins slaves for verification (DONE)
   - directory freeze (DONE)
   - prepare upstream (DONE)
   - wait for project-config patch to be merged (DONE)
   - .gitreview https://review.openstack.org/#/c/238500/ (DONE)
   - .gitignore https://review.openstack.org/#/c/238519/ (ON REVIEW)
   - custom jobs parameters https://review.fuel-infra.org/13272 (DONE)
   - fix core group (DONE)
   - fuel-main
  - fuel-main: use network-checker repository
  https://review.openstack.org/238992 (ON REVIEW)
  - fuel-menu: rename nailgun-net-check -> network-checker
  https://review.openstack.org/#/c/240225 (ON REVIEW)
  - network-checker: fix package spec
  https://review.openstack.org/#/c/240191/ (ON REVIEW)
   - packaging-ci  https://review.fuel-infra.org/13181 (DONE)
   - deprecate network_checker directory https://review.openstack.org/23
   (ON REVIEW) (once fuel-main patch is merged)
   - fix unit tests https://review.openstack.org/#/c/239425/ (DONE)
   - libpcap-dev package and fix tests (patches have been merged but not
   deployed yet)
  - https://review.openstack.org/#/c/239421/ openstack-ci libpcap-dev
  package (DONE)
  - https://review.openstack.org/239463 openstack-ci libpcap-dev
  package for puppet (DONE)
  - https://review.fuel-infra.org/13173 fuel-ci libpcap-dev package
  (DONE)
   - remove old nailgun-net-check package (TODO)

Network-checker tests are still red because the libpcap-dev package is
not installed on either OpenStack CI or Fuel CI.

If you can help review the patches marked (ON REVIEW), please do.



Vladimir Kozhukalov

On Tue, Oct 20, 2015 at 6:45 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> As you might know I'm working on splitting multiproject fuel-web
> repository into several sub-projects. network_checker is one of directories
> that are to be moved to a separate git project.
>
> Checklist for this to happen is as follows:
>
>
>- Launchpad bug: https://bugs.launchpad.net/fuel/+bug/1506896
>- project-config patch https://review.openstack.org/235822 (ON REVIEW)
>- pypi
>https://pypi.python.org/pypi?%3Aaction=pkg_edit=Network-checker
>(DONE)
>- run_tests.sh https://review.openstack.org/#/c/235829/ (DONE)
>- rpm/deb specs https://review.openstack.org/#/c/235966/ (DONE)
>- fuel-ci verification jobs https://review.fuel-infra.org/12923 (ON
>REVIEW)
>- label jenkins slaves for verification jobs (ci team)
>- directory freeze (WE ARE HERE)
>- prepare upstream (TODO)
>- project-config (ON REVIEW)
>- fuel-main patch (TODO)
>- packaging-ci patch (TODO)
>- deprecate network_checker directory (TODO)
>
>
> Now we are at the point where we need to freeze fuel-web/network_checker
> directory. So, I'd like to announce code freeze for this directory and all
> patches that make changes in the directory and are currently on review will
> need to be backported to the new git repository.
>
>
>
> Vladimir Kozhukalov
>


Re: [openstack-dev] What's up with functional testing?

2015-10-29 Thread Matt Riedemann



On 10/27/2015 9:55 PM, Emilien Macchi wrote:

As a user[1], I would like to functionally test OpenStack services.

I'm using Tempest (which is AFAIK [2] the official OpenStack project for
functional testing) and am able to validate that Puppet OpenStack module
actually deploy services & make them work together which is the goal of
Puppet OpenStack Integration testing [3].

Until now I was happy - until this bug [4] (TL;DR: Aodh can't be tested
with Tempest, which is a bug I'm working on and not really related to
this thread).
I realized Aodh [5] (and apparently some other projects like Ceilometer)
were using something else (gabbi [6]) for testing.

How come some big tent projects do not use Tempest anymore for
functional testing? I thought there was/is a move with tempest plugins
that will help projects to host their tempest tests in their repos.

Am I missing something? Any official decision taken?
Is gabbi supported by OpenStack?

I feel like there are currently two paths that try to do the same thing,
and as a user I'm not happy.

Please help me to understand,
Thank you.

[1] a user who deploy Puppet OpenStack modules in OpenStack Infra for CI
purposes
[2] http://goo.gl/sgI2D8
 http://goo.gl/DTR1cL
[3] https://github.com/openstack/puppet-openstack-integration#overview
[4] https://bugs.launchpad.net/tempest/+bug/1509885
[5] https://github.com/openstack/aodh
[6] https://pypi.python.org/pypi/gabbi/






This might be helpful. I noticed recently that nova didn't have any API
functional testing for a wrinkle in one of its APIs. I opened a nova bug
to add the functional testing (in nova's tree) since it only relies on
having a wsgi server for the API call and, I think, a sqlite database
for stubbing out some resources.


A patch was proposed to Tempest [1] which I -1'ed because I thought we
could just contain that code in nova's tree, since there wasn't any
cross-project interaction needed (nova functional tests don't run
against a live devstack, so we don't have glance/cinder/neutron).


mtreinish elaborated on the reason for not having it in Tempest in the 
review:


"Yeah, for something like this I feel it's more appropriate inside nova 
itself. It'll be far more efficient to test it there. This only really 
needs to be a tempest test if you want it to be externally covered.


Since it can be more effectively tested inside of nova, and is self 
contained in nova, the rule of thumb I use for a case like this is 
basically: if this is something you want to enforce every cloud provider 
to do correctly for all time. (in both directions, since deployments 
aren't always running the latest) You also have to weigh that against 
the cost of enforcing this on a change to every project's commits.


This is because the primary advantage tempest gives you over an in tree 
functional test is external visibility. Tempest is used a ton of 
different places to test real clouds, not just for gating. (things like 
defcore, CD setups, etc) So if we can test it more effectively inside of 
nova's functional tests the only major reason to add it to tempest would 
to take advantage of that."


Hopefully that helps. I think that statement is something that we should 
get into the Tempest docs, i.e. a section on 'when is a test appropriate 
for tempest'.


[1] https://review.openstack.org/#/c/233808/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-29 Thread Emilien Macchi
Why do you use the [puppet] tag?
Is there anything related to the Puppet OpenStack modules that we should
take care of?

Good luck,

On 10/29/2015 11:24 PM, Bogdan Dobrelya wrote:
> [snip]


Re: [openstack-dev] New [puppet] module for Magnum project

2015-10-29 Thread Adrian Otto
Nate,

On Oct 29, 2015, at 11:26 PM, Potter, Nathaniel wrote:

Hi everyone,

I’m interested in starting up a puppet module that will handle the Magnum 
containers project. Would this be something the community might want? Thanks!

Best,
Nate Potter

Can you elaborate a bit more about your concept? Who would use this? What 
function would it provide? My guess is that you are suggesting a puppet config 
for adding the Magnum service to an OpenStack cloud. Is that what you meant? If 
so, could you share a reference to an existing one that we could see as an 
example of what you had in mind?

Thanks,

Adrian



[openstack-dev] [Fuel] Bugs status

2015-10-29 Thread Dmitry Pyzhov
Here are our current stats. The overall situation looks OK: we have a
manageable number of high-priority bugs, and the number of medium bugs
is going down. It doesn't look like we will fix all medium bugs by the
end of the release, but that seems acceptable. High-priority technical
debt is under control. The features backlog is huge and nobody expects
it to be closed by the end of the release.

Again I'll show it in the format "total (UI / python / library)":

Critical and high bugs: 43 (5/15/23). Last week it was 52 (6/28/18)
Medium, low and wishlist bugs: 181 (43/102/36). Last week it was 196
(41/112/43)
Features tracked as bug reports: 143. 111 are marked with the ‘feature’
tag and 32 are covered by blueprints. Last week it was 147 in total, 115
with the ‘feature’ tag and 32 covered by blueprints.
Technical debt bugs: 105 (2/82/21). Last week it was 106 (2/80/24) in
total. We've marked some of the tech-debt bugs as High because we think
they are pretty important; there are 11 such bugs (0/8/3). Last week we
had 10 (0/6/4).

This is going to be the last report based on 'assignee' field. Next week
I'm going to split bugs into areas according to our 'area' tags described
here: https://wiki.openstack.org/wiki/Fuel/Bug_tags#Area_tags


Re: [openstack-dev] Performance Team summit session results

2015-10-29 Thread Matt Riedemann



On 10/29/2015 10:55 AM, Matt Riedemann wrote:



On 10/29/2015 9:30 AM, Dina Belova wrote:

Hey folks!

On Tuesday we had great summit session about performance team kick-off
and yesterday it was a great LDT session as well and I’m really glad to
see how much does the OpenStack performance topic is important for all
of us. 40 minutes session surely was not enough to analyse everyone’s
feedback and bottlenecks people usually see, so I’ll try to finalise
what have been discussed and the next steps in this email.

Performance team kick-off session
(https://etherpad.openstack.org/p/mitaka-cross-project-performance-team-kick-off)

can be shortly described with the following points:

  * IBM, Intel, HP, Mirantis, Rackspace, Red Hat, Yahoo! and others were
taking part in the session
  * Various tools are used right now for OpenStack benchmarking and
profiling right now:
  o Rally (IBM, HP, Mirantis, Yahoo!)
  o Shaker (Mirantis, merging its functionality to Rally right now)
  o Gatling (Rackspace)
  o Zipkin (Yahoo!)
  o JMeter (Yandex)
  o and others…
  * Various issues have been seen during the OpenStack cloud operating
(full list can be found here -
https://etherpad.openstack.org/p/openstack-performance-issues). Most
mentioned issues were the following:
  o performance of DB-related layers (DB itself and oslo.db) - it is
about 7 abstraction DB layers in Nova; performance of Nova
conductor was mentioned several times
  o performance of MQ-related layers (MQ itself and oslo.messaging)
  * Different companies are using different standards for performance
benchmarking (both control plane and data plane testing)
  * The most wished output from the team due to the comments will be:
  o agree on the “performance testing standard”, including answers
on the following questions:
  + what tools need to be used for OpenStack performance
benchmarking?
  + what benchmarking meters need to be covered? what we would
like to compare?
  + what scenarios need to be covered?
  + how can we compare performance of different cloud
deployments?
  + what performance deployment patterns can be used for various
workloads?
  o share test plans and perform benchmarking tests
  o create methodologies and documentation about best OpenStack
deployment and performance testing practices


We’re going to cover all these topics further. First of all IRC channel
for the discussions was created: *#openstack-performance*. We’re going
to have weekly meeting related to current progress on that channel,
doodle with the voting can be found here:
http://doodle.com/poll/wv6qt8eqtc3mdkuz#table
  (I was brave enough not to include timeslots that were overlapping
with some of mine really hard-to-move activities :))

Let’s have next week as a voting time, and have first IRC meeting in our
channel the week after next. We can start our further discussions with
“performance” and “performance testing” terms definition and
benchmarking tools analysis.

Cheers,
Dina


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Thanks for writing this up, it's great to see people getting together
and sharing info on performance issues and trying to pinpoint the big ones.

I poked through the performance issues etherpad and was wondering how
many people with DB issues, particularly for nova-conductor, are using a
level of oslo.db that's new enough to use pymysql rather than
mysql-python, because from what I remember there were eventlet issues
without pymysql. That was added to oslo.db 1.12.0 [1].
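
For reference, the driver switch is just a different SQLAlchemy connection
URL; a minimal sketch (hostname and credentials are placeholders, and the
respective driver has to be installed):

    from sqlalchemy import create_engine

    # mysql-python (MySQLdb) is a C extension whose blocking calls don't
    # yield to eventlet's green threads:
    legacy = create_engine('mysql://nova:secret@127.0.0.1/nova')

    # PyMySQL is pure Python, so eventlet monkey-patching can make its
    # socket I/O cooperative:
    modern = create_engine('mysql+pymysql://nova:secret@127.0.0.1/nova')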

The nova-conductor workers / CPU usage is also a known issue in the
large ops gate job [2] but I'm not aware of anyone spending the time
drilling into what exactly is causing a lot of that overhead and if any
of it is abnormal.

Finally, wrt DB, I'd also be interested to know if Rackspace, or anyone
else, is still running with the direct-to-sql stuff that comstud wrote
for nova [3] and if that still shows significant performance
improvements over using sqlalchemy ORM. Not to open that can of worms in
the -dev list here again, but it'd be an interesting data point.

[1] https://review.openstack.org/#/c/184392/
[2] https://review.openstack.org/#/c/228636/
[3] https://blueprints.launchpad.net/nova/+spec/db-mysqldb-impl



Oops, forgot to copy the ops list on the last reply.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Nova][Neutron] 6WIND Networking CI - approval for posting comments

2015-10-29 Thread Matt Riedemann



On 10/29/2015 12:13 PM, Francesco Santoro wrote:

Dear Infra team,

According to the requirements specified in [1] posting comments on
patches needs approval from core maintainers of projects.

Here at 6WIND we deployed and successfully tested [2] (using ci-sandbox
project) our third party CI system [4] following all the steps defined
in [1].
We also run our CI on nova (and neutron) patches without posting
comments, just to test a bigger job load.
Example artifacts are available at [3].

For this reason we would like to get your official approval for posting
non voting comments to both nova and neutron.

Kind regards,
Francesco

[1]
http://docs.openstack.org/infra/system-config/third_party.html#requirements
[2] https://review.openstack.org/#/c/238139/ or
https://review.openstack.org/#/c/226956/
[3] http://openstack-ci.6wind.com/networking-6wind-ci/230537 or
http://openstack-ci.6wind.com/networking-6wind-ci/202098
[4] https://wiki.openstack.org/wiki/ThirdPartySystems/6WIND_Networking_CI


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Do you have any code in nova that's specific to your configuration?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra][Nova][Neutron] 6WIND Networking CI - approval for posting comments

2015-10-29 Thread Francesco Santoro
Dear Infra team,

According to the requirements specified in [1] posting comments on patches
needs approval from core maintainers of projects.

Here at 6WIND we deployed and successfully tested [2] (using ci-sandbox
project) our third party CI system [4] following all the steps defined in
[1].
We also run our CI on nova (and neutron) patches without posting comments,
just to test a bigger job load.
Example artifacts are available at [3].

For this reason we would like to get your official approval for posting non
voting comments to both nova and neutron.

Kind regards,
Francesco

[1]
http://docs.openstack.org/infra/system-config/third_party.html#requirements
[2] https://review.openstack.org/#/c/238139/ or
https://review.openstack.org/#/c/226956/
[3] http://openstack-ci.6wind.com/networking-6wind-ci/230537 or
http://openstack-ci.6wind.com/networking-6wind-ci/202098
[4] https://wiki.openstack.org/wiki/ThirdPartySystems/6WIND_Networking_CI
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] re-implementation in golang - hummingbird details

2015-10-29 Thread Rahul Nair
Hi All,

I was reading about the "hummingbird" re-implementation of some parts of
swift in golang; can someone kindly point me to documentation/blogs on the
changes made, so I can understand the new implementation before going
into the code?

Thanks,
Rahul U Nair
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron sub-project release association with Liberty?

2015-10-29 Thread Vadivel Poonathan
Hi,

Once a Neutron sub-project is released to PyPI, where does it show that
this sub-project release is associated with the Liberty release of OpenStack?
I tried to find it in the Liberty release notes, but they don't have any
info about the supported/released vendor plug-ins/drivers.

I see the plug-in is listed here in the official sub-project list, but
again it is not specific to a particular release.
http://docs.openstack.org/developer/neutron/devref/sub_projects.html



At the following link, I see only some vendor plug-ins; not all are
listed there! Why are only some of the vendor drivers shipped with OpenStack
and not all?
https://www.openstack.org/marketplace/drivers/


So what are the criteria to get a vendor plug-in listed on this page? Or
where can I see the supported vendor plug-ins/drivers for a given OpenStack
release (specifically Liberty)?

Any info/link on this would be very helpful and appreciated.

Thanks,
Vad
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Change from Mitaka: Expected UNIX signal to generate Guru Meditation (error) Reports

2015-10-29 Thread Matt Riedemann



On 10/21/2015 7:43 AM, Kashyap Chamarthy wrote:

Background
--

Oslo Guru Meditation (error) Reports (GMR)[*] are a useful debugging
mechanism that allows one to capture the current state of a Nova
process/executable (e.g. `nova-compute`, `nova-api`, etc).

The way to generate the error report is to supply the 'User-defined
signal', SIGUSR1, when killing a Nova process.  E.g.

 $ kill -USR1 `pgrep nova-compute`

which results in GMR being printed to your standard error ('stderr')
stream, wherever it ends up being redirected to (e.g. to a corresponding
Nova process-specific log file, otherwise, on systemd-enabled systems,
to its journal).


Change in Mitaka (and above)


 From the upcoming Mitaka release onwards, the default expected UNIX
signal to generate GMR has been changed[1] from USR1 to USR2 (another
User-defined signal), because the USR1 is reserved by Apache 'mod_wsgi'
for its own purpose.

So, to generate GMR, from Mitaka release:

 $ kill -USR2 `pgrep nova-compute`

A corresponding Nova documentation change[2] has been submitted to
reflect this new reality.


[1] https://review.openstack.org/#/c/223133/ -- guru_meditation_report:
 Use SIGUSR2 instead of SIGUSR1
[2] https://review.openstack.org/#/c/227779/ -- doc: gmr: Update
 instructions to generate GMR error reports


[*] References
--

Related reading:

- http://docs.openstack.org/developer/nova/gmr.html
- http://docs.openstack.org/developer/oslo.reports/usage.html
- https://wiki.openstack.org/wiki/GuruMeditationReport
- 
https://www.berrange.com/posts/2015/02/19/nova-and-its-use-of-olso-incubator-guru-meditation-reports/
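
For illustration, the mechanism behind a GMR is an ordinary UNIX signal
handler; a stdlib-only sketch (not the actual oslo.reports implementation)
that dumps every thread's stack to stderr on SIGUSR2:

    from __future__ import print_function
    import signal
    import sys
    import traceback

    def dump_state(signum, frame):
        # print the stack of each thread, roughly what a GMR contains
        for thread_id, stack in sys._current_frames().items():
            print('Thread %s' % thread_id, file=sys.stderr)
            traceback.print_stack(stack, file=sys.stderr)

    signal.signal(signal.SIGUSR2, dump_state)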



Looks like this broke some tooling in the gate job runs where gmr's are 
created at the end of the service logs when the services exit. Here is a 
mitaka change with grenade on the liberty side with the gmr at the end:


http://logs.openstack.org/97/227897/13/check/gate-grenade-dsvm/27723a9/logs/old/screen-n-cpu.txt.gz

And on the new side it's gone:

http://logs.openstack.org/97/227897/13/check/gate-grenade-dsvm/27723a9/logs/new/screen-n-cpu.txt.gz

So obviously an upgrade impact, I'm hoping we get this into the liberty 
release notes as something to change when people move up to oslo.reports 
1.6.0.


We should also get the gate tooling fixed around this, I'm not sure 
where that was configured/triggered though, sdague probably knows.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack disaster recover with CloudFerry, other tools?

2015-10-29 Thread Marzi, Fausto
Hi Jonathan,
We are using Freezer for backup, restore and disaster recovery 
(http://wiki.openstack.org/wiki/Freezer). It gives us flexibility, as multiple 
storage media are supported (swift, ssh, local fs).
So, for example, you may want to use ssh to recover in case keystone or swift 
is not available.

We are also working on parallel storage media for backups and restores, so the 
user can use two swift backends with independent credentials, or ssh + swift, and so on.

We are very actively involved in the development.  Please let us know if 
there's anything we can do for you here or on #openstack-freezer.

Many thanks,
Fausto

-Original Message-
From: Jonathan Brownell [mailto:cadenza...@gmail.com] 
Sent: 23 October 2015 20:04
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Openstack disaster recover with CloudFerry, other 
tools?

Hello, I'm interested in using technology like CloudFerry 
[https://github.com/MirantisWorkloadMobility/CloudFerry] to migrate OpenStack 
resources from one cloud to another in the use case of disaster recovery.

I can deal with the storage replication necessary to make sure that the storage 
backend(s) are regularly freshened in the failover cloud, and its files will 
just need to be reattached to Cinder volume and Glance image objects during 
reconstruction (in preparation for association with new, failed-over compute 
instances).

CloudFerry is designed to migrate resources from one cloud to another while 
both environments are accessible and operable (i.e. its primary "Openstack 
version upgrade" scenario). So, for my use case, I expect to have to define 
metadata that would be regularly collected (via APIs and DB), transmitted, and 
cached on the failover side in order to perform a recovery if the primary cloud 
goes completely offline.

I can see a number of OpenStack Summit presentations over the years that 
describe this kind of method for failing over resources from one cloud to 
another to address disaster recovery, but have not found any other projects or 
tools that help accomplish this. Is there work in the community that targets 
this kind of functionality today that I should familiarize myself with? Or any 
huge red flags I'm missing that would prohibit this kind of solution from 
working?

Thanks,

-JB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Routed Networks meetup in Tokyo

2015-10-29 Thread Carl Baldwin
Sorry for the late notice.  I fell asleep yesterday before sending this out.

I'd like to get together at the afternoon session of the Neutron
contributors meetup today to discuss the next steps for addressing
operators' routed networks use case during the Mitaka cycle.  If you
are interested in working on this, please come find me.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Change from Mitaka: Expected UNIX signal to generate Guru Meditation (error) Reports

2015-10-29 Thread Davanum Srinivas
Matt,

Yes, Sean already opened a critical bug -
https://bugs.launchpad.net/oslo.reports/+bug/1510740

Please note that this change was added to oslo.reports *after* the liberty
release in support of Mitaka. So I am not sure why we need to add it to the
Liberty release notes. Also, this is a consequence of NOT having version caps,
which was a decision made a while ago as well.

-- DIms

On Fri, Oct 30, 2015 at 4:47 AM, Matt Riedemann 
wrote:

>
>
> On 10/21/2015 7:43 AM, Kashyap Chamarthy wrote:
>
>> Background
>> --
>>
>> Oslo Guru Meditation (error) Reports (GMR)[*] are a useful debugging
>> mechanism that allows one to capture the current state of a Nova
>> process/executable (e.g. `nova-compute`, `nova-api`, etc).
>>
>> The way to generate the error report is to supply the 'User-defined
>> signal', SIGUSR1, when killing a Nova process.  E.g.
>>
>>  $ kill -USR1 `pgrep nova-compute`
>>
>> which results in GMR being printed to your standard error ('stderr')
>> stream, wherever it ends up being redirected to (e.g. to a corresponding
>> Nova process-specific log file, otherwise, on systemd-enabled systems,
>> to its journal).
>>
>>
>> Change in Mitaka (and above)
>> 
>>
>>  From the upcoming Mitaka release onwards, the default expected UNIX
>> signal to generate GMR has been changed[1] from USR1 to USR2 (another
>> User-defined signal), because the USR1 is reserved by Apache 'mod_wsgi'
>> for its own purpose.
>>
>> So, to generate GMR, from Mitaka release:
>>
>>  $ kill -USR2 `pgrep nova-compute`
>>
>> A corresponding Nova documentation change[2] has been submitted to
>> reflect this new reality.
>>
>>
>> [1] https://review.openstack.org/#/c/223133/ -- guru_meditation_report:
>>  Use SIGUSR2 instead of SIGUSR1
>> [2] https://review.openstack.org/#/c/227779/ -- doc: gmr: Update
>>  instructions to generate GMR error reports
>>
>>
>> [*] References
>> --
>>
>> Related reading:
>>
>> - http://docs.openstack.org/developer/nova/gmr.html
>> - http://docs.openstack.org/developer/oslo.reports/usage.html
>> - https://wiki.openstack.org/wiki/GuruMeditationReport
>> -
>> https://www.berrange.com/posts/2015/02/19/nova-and-its-use-of-olso-incubator-guru-meditation-reports/
>>
>>
> Looks like this broke some tooling in the gate job runs where gmr's are
> created at the end of the service logs when the services exit. Here is a
> mitaka change with grenade on the liberty side with the gmr at the end:
>
>
> http://logs.openstack.org/97/227897/13/check/gate-grenade-dsvm/27723a9/logs/old/screen-n-cpu.txt.gz
>
> And on the new side it's gone:
>
>
> http://logs.openstack.org/97/227897/13/check/gate-grenade-dsvm/27723a9/logs/new/screen-n-cpu.txt.gz
>
> So obviously an upgrade impact, I'm hoping we get this into the liberty
> release notes as something to change when people move up to oslo.reports
> 1.6.0.
>
> We should also get the gate tooling fixed around this, I'm not sure where
> that was configured/triggered though, sdague probably knows.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Change from Mitaka: Expected UNIX signal to generate Guru Meditation (error) Reports

2015-10-29 Thread Sean Dague
Right, the crux of the problem is the move by the library from SIGUSR1
-> SIGUSR2 with no overlap and deprecation period breaks the ability to
have any tooling use this without atomically updating that tooling, and
this library, in all environments, all at the same time.

We need the SIGUSR1 handler added back in, deprecated, and not removed
for a couple cycles.

-Sean
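
A minimal sketch of that kind of backwards-compatible registration
(hypothetical code, not the actual oslo.reports fix):

    import signal
    import warnings

    def generate_report(signum, frame):
        pass  # build and emit the report here

    def deprecated_handler(signum, frame):
        # keep the old signal working, but tell operators it will go away
        warnings.warn('SIGUSR1 report generation is deprecated, use SIGUSR2',
                      DeprecationWarning)
        generate_report(signum, frame)

    signal.signal(signal.SIGUSR2, generate_report)
    signal.signal(signal.SIGUSR1, deprecated_handler)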

On 10/30/2015 06:46 AM, Davanum Srinivas wrote:
> Matt,
> 
> Yes, Sean already opened a critical bug
> - https://bugs.launchpad.net/oslo.reports/+bug/1510740
> 
> Please note that this change was added to oslo.reports *after* the
> liberty release in support of Mitaka. So I am not sure why we need to
> add it to the Liberty release notes. Also, this is a consequence of NOT
> having version caps, which was a decision made a while ago as well.
> 
> -- DIms
> 
> On Fri, Oct 30, 2015 at 4:47 AM, Matt Riedemann
> > wrote:
> 
> 
> 
> On 10/21/2015 7:43 AM, Kashyap Chamarthy wrote:
> 
> Background
> --
> 
> Oslo Guru Meditation (error) Reports (GMR)[*] are a useful debugging
> mechanism that allows one to capture the current state of a Nova
> process/executable (e.g. `nova-compute`, `nova-api`, etc).
> 
> The way to generate the error report is to supply the 'User-defined
> signal', SIGUSR1, when killing a Nova process.  E.g.
> 
>  $ kill -USR1 `pgrep nova-compute`
> 
> which results in GMR being printed to your standard error ('stderr')
> stream, wherever it ends up being redirected to (e.g. to a
> corresponding
> Nova process-specific log file, otherwise, on systemd-enabled
> systems,
> to its journal).
> 
> 
> Change in Mitaka (and above)
> 
> 
>  From the upcoming Mitaka release onwards, the default expected UNIX
> signal to generate GMR has been changed[1] from USR1 to USR2
> (another
> User-defined signal), because the USR1 is reserved by Apache
> 'mod_wsgi'
> for its own purpose.
> 
> So, to generate GMR, from Mitaka release:
> 
>  $ kill -USR2 `pgrep nova-compute`
> 
> A corresponding Nova documentation change[2] has been submitted to
> reflect this new reality.
> 
> 
> [1] https://review.openstack.org/#/c/223133/ --
> guru_meditation_report:
>  Use SIGUSR2 instead of SIGUSR1
> [2] https://review.openstack.org/#/c/227779/ -- doc: gmr: Update
>  instructions to generate GMR error reports
> 
> 
> [*] References
> --
> 
> Related reading:
> 
> - http://docs.openstack.org/developer/nova/gmr.html
> - http://docs.openstack.org/developer/oslo.reports/usage.html
> - https://wiki.openstack.org/wiki/GuruMeditationReport
> -
> 
> https://www.berrange.com/posts/2015/02/19/nova-and-its-use-of-olso-incubator-guru-meditation-reports/
> 
> 
> Looks like this broke some tooling in the gate job runs where gmr's
> are created at the end of the service logs when the services exit.
> Here is a mitaka change with grenade on the liberty side with the
> gmr at the end:
> 
> 
> http://logs.openstack.org/97/227897/13/check/gate-grenade-dsvm/27723a9/logs/old/screen-n-cpu.txt.gz
> 
> And on the new side it's gone:
> 
> 
> http://logs.openstack.org/97/227897/13/check/gate-grenade-dsvm/27723a9/logs/new/screen-n-cpu.txt.gz
> 
> So obviously an upgrade impact, I'm hoping we get this into the
> liberty release notes as something to change when people move up to
> oslo.reports 1.6.0.
> 
> We should also get the gate tooling fixed around this, I'm not sure
> where that was configured/triggered though, sdague probably knows.
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Fuel][fuel-library] CI gate for regressions detection in deployment data

2015-10-29 Thread Dmitry Borodaenko
Good point, replaced the tag with fuel-library.
On Oct 30, 2015 12:53 AM, "Emilien Macchi"  wrote:

> Why do you use [puppet] tag?
> Is there anything related to Puppet OpenStack modules we should take
> care of?
>
> Good luck,
>
> On 10/29/2015 11:24 PM, Bogdan Dobrelya wrote:
> > Hello.
> > There are a few types of deployment regressions possible: when changing
> > a module version to be used from upstream (or an internal module repo), for
> > example from Liberty to Mitaka; or when changing the composition layer
> > (modular tasks in Fuel), specifically adding/removing/changing classes
> > and class parameters.
> >
> > An example is a regression in the swift deployment data [0]. Something was
> > changed unnoticed by the existing noop tests, and as a result
> > the swift data ended up being stored in the root partition.
> >
> > The suggested per-commit regression detection [1] for deployment data
> > is meant to automatically detect whether a class in a noop catalog run has
> > gained or lost a parameter, or whether one has been updated to another value
> > by a patch under test. Later, this check could even replace the existing noop
> > tests; everything would be checked automatically, once every deployment
> > scenario is covered by a corresponding template, represented
> > as YAML files [2] in Fuel.
> > Note: The tool [3] can help to get all deployment cases (-Y) and all
> > deployment tasks (-S) as well.
> >
> > I propose to review the patch [1], understand how it works (see tl;dr
> > section below) and to start using it ASAP. The earlier we commit the
> > "initial" data layer state, less regressions would pop up.
> >
> > (tl;dr)
> > The check should be done for every modular component (aka deployment
> > task). Data generated in the noop catalog run for all classes and
> > defines of a given deployment task should be verified against its
> > "acknowledged" (committed) state.
> > The test gate should fail if changes have been found, like a new parameter
> > with a defined value, a removed parameter, or a changed parameter value.
> >
> > In order to remove a regression, a patch author will have to add (and
> > reviewers should acknowledge) detected changes in the committed state of
> > the deployment data. This may be done manually, with a tool like [3] or
> > by a pre-commit hook, or even at the CI side!
> > The regression check should show the diff between committed state and a
> > new state proposed in a patch. A changed state should be *reviewed* and
> > accepted with a patch, to become the committed one. So the deployment data
> > will evolve with *only* approved changes. And those changes would be
> > very easy to discover for each patch under review!
> > No more regressions, everyone happy.
> >
> > Examples:
> >
> > - A. A patch author removed the mpm_module parameter from the
> > composition layer (apache modular task). The test should fail with a
> >
> > Diff:
> >   @@ -90,7 +90,7 @@
> >  manage_user=> 'true',
> >  max_keepalive_requests => '100',
> >  mod_dir=> '/etc/httpd/conf.d',
> >   -  mpm_module => 'false',
> >   +  mpm_module => 'prefork',
> >  name   => 'Apache',
> >  package_ensure => 'installed',
> >  ports_file => '/etc/httpd/conf/ports.conf',
> >
> > It illustrates that the mpm_module's committed value was "false",
> > but the new one came as 'prefork', likely from the apache class
> > defaults.
> > The solution:
> > Follow the failed build link and see for detected changes (a diff).
> > Acknowledge the changes and include rebuilt templates in the patch as a
> > new revision. The tool [3] (use -h for help) example command:
> > ./utils/jenkins/fuel_noop_tests.rb -q -b -s api-proxy/api-proxy_spec.rb
> >
> > Or edit the committed templates manually and include data changes in the
> > patch as well.
> >
> > - B. An upstream module author added a new parameter mpm_mode with a
> > default of '123'. The test should fail with a
> >
> > Diff:
> >@@ -90,6 +90,7 @@
> >   manage_user=> 'true',
> >   max_keepalive_requests => '100',
> >   mod_dir=> '/etc/httpd/conf.d',
> >+  mpm_mode   => '123',
> >   mpm_module => 'false',
> >   name   => 'Apache',
> >   package_ensure => 'installed',
> >
> > It illustrates that the composition layer is not consistent with the
> > upstream module data schema, which could be a potential regression in
> > deployment (a new parameter added upstream goes with its default, being
> > ignored by the composition manifest).
> > The solution is the same as for the case A.
> >
> > [0] https://bugs.launchpad.net/fuel/+bug/1508482
> > [1] https://review.openstack.org/240015
> > [2]
> >
> https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
> > [3]
> >
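
As an illustration of the check itself, a rough sketch (hypothetical, not
the fuel_noop_tests.rb tool) of diffing a class's committed parameters
against the state produced by a new catalog run:

    # committed state vs. the state produced by the patch under test
    committed = {'mpm_module': 'false', 'manage_user': 'true'}
    proposed = {'mpm_module': 'prefork', 'manage_user': 'true',
                'mpm_mode': '123'}

    for key in sorted(set(committed) | set(proposed)):
        old, new = committed.get(key), proposed.get(key)
        if old != new:
            # an added, removed or changed parameter fails the gate
            print('-  %s => %r' % (key, old))
            print('+  %s => %r' % (key, new))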

[openstack-dev] [Neutron] IRC weekly meeting

2015-10-29 Thread Armando M.
A reminder that we won't have the meeting next week.

Safe journey back from Tokyo for those who have travelled to the Summit.

Cheers,
Armando
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Nova][Neutron] 6WIND Networking CI - approval for posting comments

2015-10-29 Thread Armando M.
On 30 October 2015 at 02:49, Matt Riedemann 
wrote:

>
>
> On 10/29/2015 12:13 PM, Francesco Santoro wrote:
>
>> Dear Infra team,
>>
>> According to the requirements specified in [1] posting comments on
>> patches needs approval from core maintainers of projects.
>>
>> Here at 6WIND we deployed and successfully tested [2] (using ci-sandbox
>> project) our third party CI system [4] following all the steps defined
>> in [1].
>> We also run our CI on nova (and neutron) patches without posting
>> comments just to test a bigger jobs load.
>> Example artifacts are available at [3]
>>
>> For this reason we would like to get your official approval for posting
>> non voting comments to both nova and neutron.
>>
>
The CI hasn't been doing this long enough [1] to really see how reliable it
is, but it's been promising so far.

[1]
https://review.openstack.org/#/q/reviewer:%226WIND+Networking+CI+%253Copenstack-networking-ci%25406wind.com%253E%22+project:openstack/neutron,n,z


>
>> Kind regards,
>> Francesco
>>
>> [1]
>>
>> http://docs.openstack.org/infra/system-config/third_party.html#requirements
>> [2] https://review.openstack.org/#/c/238139/ or
>> https://review.openstack.org/#/c/226956/
>> [3] http://openstack-ci.6wind.com/networking-6wind-ci/230537 or
>> http://openstack-ci.6wind.com/networking-6wind-ci/202098
>> [4] https://wiki.openstack.org/wiki/ThirdPartySystems/6WIND_Networking_CI
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> Do you have any code in nova that's specific to your configuration?
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] userdata empty when using software deployment/config in Kilo

2015-10-29 Thread Steve Baker

On 29/10/15 06:12, Gabe Black wrote:

Using my own template or the example template:
https://github.com/openstack/heat-templates/blob/master/hot/software-config/example-templates/example-deploy-sequence.yaml

results in the VM's /var/lib/cloud/instance/scripts/userdata being empty.

The only warnings during the cloud-init boot sequence are:
[   14.470601] cloud-init[775]: 2015-10-28 17:48:15,104 - util.py[WARNING]: 
Failed running /var/lib/cloud/instance/scripts/userdata [-]
[   15.051625] cloud-init[775]: 2015-10-28 17:48:15,685 - 
cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in 
/var/lib/cloud/instance/scripts)
[   15.057189] cloud-init[775]: 2015-10-28 17:48:15,690 - util.py[WARNING]: Running 
module scripts-user () failed

I believe those warnings are simply because the userdata file is empty

I googled and searched and couldn't find why it wasn't working for me.

The nova.api logs show the transfer of the files, no problem there.  It is 
really sending empty userdata and it thinks it should be doing that.

To verify, I added some debug prints in 
heat/engine/resources/openstack/nova/server.py:612 in the handle_create() method.  
Below is the first part of the method for reference:

 def handle_create(self):
 security_groups = self.properties.get(self.SECURITY_GROUPS)

 user_data_format = self.properties.get(self.USER_DATA_FORMAT)
 ud_content = self.properties.get(self.USER_DATA)  #<---

 if self.user_data_software_config() or self.user_data_raw(): #<---
 if uuidutils.is_uuid_like(ud_content):
 # attempt to load the userdata from software config
 ud_content = self.get_software_config(ud_content) #<--- 

I added some debug log prints after the #<--- markers above to see what it was getting 
for user_data, and it turns out it is empty (e.g. I don't even see the third debug 
print I put in).  Spending more time looking through the code, it appears to me 
that self.properties.get(self.USER_DATA) should be returning the uuid for the 
software config resource associated with the deployment, but I could be wrong.  
Either way, it is empty, which I think is not right.

Does anyone have an idea what I might be doing wrong?  I've been struggling for 
the past couple of days on this one!  Or is deployment just not stable in Kilo? 
 Documentation seems to indicate it has been supported even before Kilo.

Thanks in advance!
Gabe


Hi Gabe

It is expected that userdata is empty, because the server resources do 
not specify any script in their user_data properties.


There is other data in the initial cloud-init package which bootstraps 
polling for deployment data. The actual deployment data comes from 
requests to the heat metadata API, not in cloud-init userdata.


An appropriately built custom image will configure 
/etc/os-collect-config.conf on boot so that it can start polling for 
deployment data from heat.
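
A rough sketch of what such a configuration can look like (all values are
placeholders, and the exact option set depends on the os-collect-config
version):

    [DEFAULT]
    collectors = heat

    [heat]
    auth_url = http://192.0.2.10:5000/v3
    user_id = <deployment user uuid>
    password = <deployment password>
    project_id = <project uuid>
    stack_id = <stack uuid>
    resource_name = deployment_server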


Please take a look at the documentation for this:
http://docs.openstack.org/developer/heat/template_guide/software_deployment.html

cheers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Sessions on openstack-cinder YouTube channel

2015-10-29 Thread Sean McGinnis
Hey everyone,

We recently created a YouTube channel for Cinder related activity. We
are hoping to use this to build up a library of recordings over time.

https://www.youtube.com/channel/UCJ8Koy4gsISMy0qW3CWZmaQ

Starting with this Summit, Walt Boring has done a lot of work making
sure each design summit session is recorded and posted to the channel.

Take a look and subscribe to the channel if you would like to get
notified of future activity.

Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] meetup on friday 10am in Prince room Tokyo Design Summit

2015-10-29 Thread Antoine Cabot
Hi,

We will have a contributors meetup on Watcher at 10am tomorrow in Prince
room.

Thanks,

Antoine
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-29 Thread Gilles Dubreuil


On 16/10/15 00:14, Emilien Macchi wrote:
> This thread is really huge and only 3 people are talking.
> Why don't you continue on an etherpad and do some brainstorm on it?
> If you do so, please share the link here.
> 
> It would be much more effective in my opinion.

I think we're almost there (please read on).
It's harder at this stage to summarize this in an etherpad,
but we'll certainly do that or start a new thread/topic if needed.

> 
> On 10/15/2015 08:26 AM, Sofer Athlan-Guyot wrote:
>> Gilles Dubreuil  writes:
>>
>>> On 08/10/15 03:40, Rich Megginson wrote:
 On 10/07/2015 09:08 AM, Sofer Athlan-Guyot wrote:
> Rich Megginson  writes:
>
>> On 10/06/2015 02:36 PM, Sofer Athlan-Guyot wrote:
>>> Rich Megginson  writes:
>>>
 On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:
> Gilles Dubreuil  writes:
>
>> On 30/09/15 03:43, Rich Megginson wrote:
>>> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
 On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
> Gilles Dubreuil  writes:
>
>> On 15/09/15 06:53, Rich Megginson wrote:
>>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
 Hi,

 Gilles Dubreuil  writes:

> A. The 'composite namevar' approach:
>
> keystone_tenant {'projectX::domainY': ... }
>   B. The 'meaningless name' approach:
>
>keystone_tenant {'myproject': name='projectX',
> domain=>'domainY',
> ...}
>
> Notes:
>   - Actually using both combined should work too with
> the domain
> supposedly overriding the name part of the domain.
>   - Please look at [1] this for some background
> between the two
> approaches:
>
> The question
> -
> Decide between the two approaches, the one we would like to
> retain for
> puppet-keystone.
>
> Why it matters?
> ---
> 1. Domain names are mandatory in every user, group or
> project.
> Besides
> the backward compatibility period mentioned earlier, where
> no domain
> means using the default one.
> 2. Long term impact
> 3. Both approaches are not completely equivalent which
> different
> consequences on the future usage.
 I can't see why they couldn't be equivalent, but I may be
 missing
 something here.
>>> I think we could support both.  I don't see it as an either/or
>>> situation.
>>>
> 4. Being consistent
> 5. Therefore the community to decide
>
> Pros/Cons
> --
> A.
 I think it's the B: meaningless approach here.

>Pros
>  - Easier names
 That's subjective, creating unique and meaningful name
 don't look
 easy
 to me.
>>> The point is that this allows choice - maybe the user
>>> already has some
>>> naming scheme, or wants to use a more "natural" meaningful
>>> name -
>>> rather
>>> than being forced into a possibly "awkward" naming scheme
>>> with "::"
>>>
>>>   keystone_user { 'heat domain admin user':
>>> name => 'admin',
>>> domain => 'HeatDomain',
>>> ...
>>>   }
>>>
>>>   keystone_user_role {'heat domain admin
>>> user@::HeatDomain':
>>> roles => ['admin']
>>> ...
>>>   }
>>>
>Cons
>  - Titles have no meaning!
>>> They have meaning to the user, not necessarily to Puppet.
>>>
>  - Cases where 2 or more resources could exists
>>> This seems to be the hardest part - I still cannot figure
>>> out how
>>> to use
>>> "compound" names with Puppet.
>>>
>  - More difficult to debug
>>> More difficult than it is already? :P
>>>
>  

Re: [openstack-dev] [Neutron] Neutron Social Meetup in Tokyo

2015-10-29 Thread Hirofumi Ichihara
Although the restaurant has been announced, the place is hard to get to.
Akihiro and I will take you there.
Please gather in the registration hall at 6:30.
Don't worry about the RSVP; you can come freely.

IMPORTANT: Don't forget your wallet. We don't have a sponsor ;)


> On 2015/10/27, at 15:07, Takashi Yamamoto  wrote:
> 
> hi,
> 
> On Tue, Oct 27, 2015 at 10:31 AM, Sukhdev Kapur  > wrote:
>> Hey Akihiro,
>> 
>> Thanks for arranging this. I did not see any link to RSVP.
>> I would love to attend this event - please add me to the list.
> 
> at this point just going to the venue is fine.
> 
> here's a RSVP link in case you still want to register for some reason.
> http://neutrontokyo.app.rsvpify.com/ 
> 
>> 
>> Thanks
>> -Sukhdev
>> 
>> 
>> On Fri, Oct 23, 2015 at 9:23 AM, Akihiro Motoki  wrote:
>>> 
>>> Hi Neutron folks,
>>> 
>>> We are pleased to announce Neutron social meet-up in Tokyo.
>>> Thanks Takashi and Hirofumi for the big help.
>>> 
>>> I hope many of you will be there and enjoy the time.
>>> If you have made RSVP, don't miss it!
>>> We recommend  to join the beginning, but come and join us even if you
>>> arrive late.
>>> 
>>> 
>>> 
>>> Thursday, Oct 29 19:00-22:00
>>> Hokkaido (Shinagawa Intercity branch)
>>> 
>>> Location:
>>> 
>>> https://www.google.com/maps/d/edit?mid=zBFFkY6dvVno.kOTkyNjZ2oU0=sharing
>>> 5th floor at the "shop and restaurant building" (between A and B
>>> buildings).
>>> It is at the opposite side of JR Shinagawa Station from the Summit side.)
>>> 
>>> Approximately it costs ~5000 (Japanese) Yen depending on the number of
>>> folks who join.
>>> Note that we have no sponsors.
>>> 
>>> If you have any trouble in reaching there or some question, reach me
>>> @ritchey98 on Twitter.
>>> 
>>> 
>>> 
>>> See you in Tokyo!
>>> Akihiro
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Creating a CA for openstack-ansible deployments?

2015-10-29 Thread McPeak, Travis
This does seem to make a lot of sense.  Basically what we will get is
an improved lowest common denominator when it comes to intra-node TLS.
This probably also fits in nicely with work others in OpenStack
Security have recently discussed regarding the creation of a
super-lightweight CA.

The only potential security drawback is that we are introducing a new
asset to protect.  If we create the tools that enable a deployer to
easily create and administer a lightweight CA, that should add
significant value to OpenStack, especially for smaller organizations
that don't have experience running a CA.

I'd be curious to hear what the more crypto/CA focused members of
OpenStack Security have to say as well.

Thanks,
-Travis
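
For illustration, a rough sketch (using the Python 'cryptography' library;
the subject name, serial and lifetime are assumptions, not a recommended
setup) of the kind of lightweight CA bootstrap being discussed:

    import datetime

    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                   backend=default_backend())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                         u'openstack-ansible deploy CA')])
    now = datetime.datetime.utcnow()
    ca_cert = (x509.CertificateBuilder()
               .subject_name(name)
               .issuer_name(name)          # self-signed root
               .public_key(key.public_key())
               .serial_number(1)           # fixed serial, illustration only
               .not_valid_before(now)
               .not_valid_after(now + datetime.timedelta(days=365))
               .add_extension(x509.BasicConstraints(ca=True, path_length=0),
                              critical=True)
               .sign(key, hashes.SHA256(), default_backend()))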


>Hello there,
>
>I've been researching some additional ways to secure openstack-ansible
>deployments and I backed myself into a corner with secure log
>transport.  The rsyslog client requires a trusted CA certificate to be
>able to send encrypted logs to rsyslog servers.  That's not a problem
>if users bring their own certificates, but it does become a problem if
>we use the self-signed certificates that we're creating within the
>various roles.
>
>I'm wondering if we could create a role that creates a CA on the
>deployment host and then uses that CA to issue certificates for various
>services *if* user doesn't specify that they want to bring their own
>certificates.  We could build the CA very early in the installation
>process and then use it to sign certificates for each individual
>service.  That would allow to have some additional trust in
>environments where deployers don't choose to bring their own
>certificates.
>
>Does this approach make sense?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Puppet] Potential critical issue, due Puppet mix stderr and stdout while execute commands

2015-10-29 Thread Martin Mágr



On 10/23/2015 02:17 PM, Dmitry Ilyin wrote:
Here is the implementation of the puppet "command" that outputs only 
stdout and drops the stderr unless an error has happened.

https://github.com/dmitryilyin/puppet-neutron/commit/b55f36a8da62fc207a91b358c396c03c8c58981b

+1

I believe such logic should be in puppet-openstacklib and all providers 
in puppet-openstack should inherit from it.


Regards,
Martin
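
For illustration, the "collect separately, surface stderr only on failure"
pattern discussed in the quoted thread below, sketched in Python (the real
providers are Ruby, so this only shows the logic):

    import subprocess

    def run(cmd):
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode != 0:
            # stderr is only attached to real failures
            raise RuntimeError('%s failed: %s' % (cmd[0], err))
        # warnings on stderr are dropped, so parsers see clean stdout
        return out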



2015-10-22 17:59 GMT+03:00 Matt Fischer >:


On Thu, Oct 22, 2015 at 12:52 AM, Sergey Vasilenko
> wrote:


On Thu, Oct 22, 2015 at 6:16 AM, Matt Fischer
> wrote:

I thought we had code in other places that split out
stderr and only logged it if there was an actual error but
I cannot find the reference now. I think that matches the
original proposal. Not sure I like idea #3.


Matthew, this topic is not about SSL. ANY warnings, ANY output to
stderr from the CLI may corrupt the work of providers from the puppet-*
modules for OpenStack components.

IMHO it's a very serious bug that potentially affects the OpenStack
puppet modules.

I see 3 ways to fix it:

 1. Long way. A big patch to puppet core to add the ability to
collect stderr and stdout separately. But most existing
puppet providers expect stderr and stdout to be mixed
when handling errors of execution (non-zero return code).
Such a patch will break backward compatibility if it is
enabled by default.
 2. Middle way. We should write code to redefine the 'commands'
method. The new commands should collect stderr and stdout
separately, but if an error happens return stderr (with
the ability to access stdout too).
 3. Short way. Modify existing providers to use JSON output
instead of plain text or CSV. JSON output can easily be
separated from any garbage (warnings). I made this patch as
an example of this way:
https://review.openstack.org/#/c/238156/ . Anyway, JSON is a
more formalized format for data exchange than plain text.

IMHO way #1 is the best solution, but not easy.


I must confess that I'm a bit confused about this. It wasn't a
secret that we're calling out to commands and parsing the output.
It's been discussed over and over on this list as recently as last
week, so this has been a known possible issue for quite a long
time. In my original email I was agreeing with you, so I'm not
sure why we're arguing now. Anyway...

I think we need to split stderr and stdout and log stderr on
errors, your idea #2. Using JSON, as openstack-client can do, does
not solve this problem for us; you can still end up with a bunch
of junk on stderr.

This would be a good quick discussion in Tokyo if you guys will be
there.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's up with functional testing?

2015-10-29 Thread Emilien Macchi


On 10/28/2015 01:12 PM, Chris Dent wrote:
> On Wed, 28 Oct 2015, Emilien Macchi wrote:
> 
>> As a user[1], I would like to functionally test OpenStack services.
> 
> Bless you. Let's all be more like that.
> 
>> Until now I was happy, until this bug [4] (TL;DR: Aodh can't be testing
>> with Tempest which is a bug I'm working on, and not really related to
>> this thread).
>> I realized Aodh [5] (and apparently some other projects like Ceilometer)
>> were using something else (gabbi [6]) for testing.
> 
> There's a fair amount of history here for which I don't know all the
> details so I'll just try to address the gabbi related questions below
> after this initial comment:
> 
> aodh is not yet in tempest, but that's simply because it is not yet
> done, not because there are not plans to do it. Tomorrow morning at
> 9am in the Kiri room we're having a design session about functional
> and integration testing in all three of ceilometer, aodh and gnocchi
> and one of the primary topics is: getting all three to have tempest
> plugins. One of the other primary topics is making new and existing
> in-tree functional test as good and useful as possible. A lot of those
> tests are not tempest because they've been extracted from pre-existing
> so-called unit tests but were determined to not be because they use a
> database. A small segment are API tests driven by gabbi as a result of
> this spec[1]
> 
> If you or anyone else have time to show up to that session, that would
> be great.

I was here and I'm happy to see the efforts [1] that are proposed for
the next cycle.
Having a Tempest plugin is really what I was asking for in my initial
request; thank you for considering that work.

[1] https://etherpad.openstack.org/p/mitaka-telemetry-testing
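
For context on the gabbi discussion below: the tests themselves are
declarative YAML files, loaded through a small Python shim; a minimal
sketch (the directory layout, host and port here are assumptions for
illustration):

    import os
    from gabbi import driver

    TESTS_DIR = os.path.join(os.path.dirname(__file__), 'gabbits')

    def load_tests(loader, tests, pattern):
        # each YAML file holds entries like:
        #   tests:
        #   - name: list alarms
        #     GET: /v2/alarms
        #     status: 200
        return driver.build_tests(TESTS_DIR, loader,
                                  host='127.0.0.1', port=8042)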

> 
>> How come some big tent projects do not use Tempest anymore for
>> functional testing? I thought there was/is a move with tempest plugins
>> that will help projects to host their tempest tests in their repos.
> 
> Ceilometer projects started a migration to more in-tree functional
> tests in advance of tempest-lib existing. These manifest as tests that
> require a storage backend and have their own non-tempest job
> description in project-config.
> 
>> Am I missing something? Any official decision taken?
>> Is gabbi supported by OpenStack?
> 
> When I first released gabbi there was talk about it becoming either
> part of QA or oslo but after showing it around the world a bit I had
> feedback from several non-openstack people that they would be _far_
> more likely to contribute to it if it was not subject to the openstack
> development model[0]. So it lives now as a project on github to which
> people submit pull requests and travis takes care of CI. To avoid bus
> termination errors, there are two other admins in addition to me who
> have all the same rights with regard to merging and releasing and
> maintaining on python. One of them is another OpenStack contributor
> (jasonamyers).
> 
> I'm obviously biased, but I think gabbi is teh ossum and makes writing
> API tests easy and perhaps more importantly makes reading them later
> fantastic.
> 
> Within gnocchi and aodh, gabbi has been so successful at making it easy
> to write API tests that it is now being used for writing integration
> tests of aodh+gnocchi+ceilometer+heat.
> 
>> I feel like there is currently 2 paths that try to do the same thing and
>> as a user, I'm not happy.
> 
> Yeah, that's perhaps a problem. One thing I'm hoping to explore (or
> hoping someone else will explore) is making it easy to do gabbi
> tests within a tempest plugin. This ought to be possible but there
> are some humps to deal with regarding how gabbi orders and groups
> tests.
> 
> Again, I'm biased, but I think that gabbi would be a _huge_ asset for
> API tests in tempest because by its very nature it is very close to
> HTTP without a notion of a (python) client being involved. I think
> this is an excellent guard for ensuring that OpenStack APIs are
> sufficiently agnostic about their context.
> 
>> Please help me to understand,
> 
> I hope that adds a bit more info. I know it doesn't actually answer
> the real question though. I hope some other voices will come along.

Thanks for taking care of this topic, I'm glad you replied during the
Summit so we can move forward on this topic and we can make sure our
puppet-aodh module will be tested (one day) with Tempest.

> 
> [0] I'm happy to discuss this elsewhere (in person, on another thread,
> whatever) but it would be bad to let it distract this current thread.
> [1]
> http://specs.openstack.org/openstack/ceilometer-specs/specs/kilo/declarative-http-tests.html
> 
> 

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [cinder][nova] gate-cinder-python34 failing test_nova_timeout after novaclient 2.33 release

2015-10-29 Thread Andrey Kurilin
>But it was released with 2961e82 which was the backward incompatible
requests exception change, which we now have a fix for that we want to
release, but would include 0cd5812.

I suppose we need to revert the 0cd5812 change too, cut a new release, and then
revert the revert of 0cd5812 :)
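
For illustration, the compatibility alternative mentioned in the quoted
thread below relies on multiple inheritance, so the new exception is still
caught by old handlers; a simplified sketch (class names abbreviated, not
the exact novaclient patch):

    import requests.exceptions

    class ClientException(Exception):
        """Simplified stand-in for the novaclient base exception."""

    class RequestTimeout(ClientException, requests.exceptions.Timeout):
        """Raised on HTTP 408, but still a requests.Timeout."""

    # cinder's pre-existing handler keeps working unchanged:
    try:
        raise RequestTimeout('request timed out')
    except requests.exceptions.Timeout:
        pass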

On Wed, Oct 28, 2015 at 8:44 PM, Matt Riedemann 
wrote:

>
>
> On 10/28/2015 12:28 PM, Matt Riedemann wrote:
>
>>
>>
>> On 10/28/2015 10:41 AM, Ivan Kolodyazhny wrote:
>>
>>> Matt,
>>>
>>> Thank you for bring this topic to the ML.
>>>
>>> In cinder, we've merged [1] patch to unblock gates. I've proposed other
>>> patch [2] to fix global-requirements for the stable/liberty branch.
>>>
>>>
>>> [1] https://review.openstack.org/#/c/239837/
>>> [2] https://review.openstack.org/#/c/239799/
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>
>>> On Thu, Oct 29, 2015 at 12:13 AM, Matt Riedemann
>>> > wrote:
>>>
>>>
>>>
>>> On 10/28/2015 9:22 AM, Matt Riedemann wrote:
>>>
>>>
>>>
>>> On 10/28/2015 9:06 AM, Yuriy Nesenenko wrote:
>>>
>>> Hi. Look at https://review.openstack.org/#/c/239837/
>>>
>>> On Wed, Oct 28, 2015 at 3:52 PM, Matt Riedemann
>>> >> 
>>> >> >> wrote:
>>>
>>>  That job is failing at a decent rate, tracking with bug:
>>>
>>> https://bugs.launchpad.net/cinder/+bug/1510656
>>>
>>>  It lines up with the novaclient 2.33 release on 10/27,
>>> I'm checking
>>>  out what the change was that caused the regression.
>>>
>>>  This is a heads up that rechecks on this failure
>>> probably won't help.
>>>
>>>  So far I haven't seen any related patches up to fix it
>>> although
>>>  there were already 2 bugs reported when I got in this
>>> morning.
>>>
>>>  --
>>>
>>>  Thanks,
>>>
>>>  Matt Riedemann
>>>
>>>
>>>
>>>
>>>
>>> __
>>>
>>>
>>>  OpenStack Development Mailing List (not for usage
>>> questions)
>>>  Unsubscribe:
>>>
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>>
>>>
>>> 
>>>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>>
>>>
>>> __
>>>
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>>
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> Heh, well that's 3 bugs then, I didn't see that one. jgriffith
>>> and I
>>> were talking in IRC about just handling both exceptions in
>>> cinder to fix
>>> this but we also agreed that this is a backward incompatible
>>> change on
>>> the novaclient side, which was also discussed in the original
>>> novaclient
>>> wishlist bug that prompted the breaking change.
>>>
>>> Given the backward compat issues, we might not just be breaking
>>> cinder
>>> here, so I've proposed a revert of the novaclient change with
>>> justification in the commit message:
>>>
>>> https://review.openstack.org/#/c/239941/
>>>
>>> At least with the cinder change above we're OK for mitaka, and
>>> logstash
>>> isn't yet showing failures for cinder in stable/liberty, but
>>> given the
>>> requirements there it will be a failure in cinder python34
>>> tests in
>>> stalbe/liberty also - so we can backport the cinder fix or
>>> block the
>>> 2.33 novaclient version on stable/liberty global-requirements
>>> depending
>>> on what we do with the proposed novaclient revert.
>>>
>>>
>>> I have an alternative to the revert here:
>>>
>>> https://review.openstack.org/#/c/239963/
>>>
>>> That makes novaclient.exceptions.RequestTimeout extend
>>> requests.Timeout so that older cinder continues to work.
>>>
>>> I also have changes to block novaclient 2.33.0 in g-r on master and
>>> stable/liberty:
>>>
>>>
>>>
>>> https://review.openstack.org/#/q/I6e7657b60308b30eed89b269810c1f37cce43063,n,z
>>>
>>>
>>> I personally think we need to block 2.33.0 since it breaks cinder,
>>> then release a 

[openstack-dev] [Neutron] Weekly DVR Meeting starting next week

2015-10-29 Thread Brian Haley
A few of us had a discussion this week at Summit and decided to re-start 
the weekly Neutron Distributed Virtual Router (DVR) meeting.  The goal 
is to help:


- Stabilize DVR - fix the bugs
- Address performance/scalability issues
- Get the DVR jobs voting again

Meetings will be on Wednesdays starting next week at 15:00 UTC.  I'm in 
the process of updating 
http://eavesdrop.openstack.org/#Neutron_Distributed_Virtual_Router_Meeting 
with a link to the meeting page and agenda, which is currently at 
https://wiki.openstack.org/wiki/Meetings/Neutron-DVR


-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Meetings Canceled for Nov 4

2015-10-29 Thread David Lyle
Due to the recent summit and many folks being on vacation, both the
Horizon and Horizon Drivers meetings are canceled for Nov 4.  We will
resume on November 11.

David

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-29 Thread Gilles Dubreuil


On 29/10/15 17:32, Gilles Dubreuil wrote:
> 
> 
> On 16/10/15 00:14, Emilien Macchi wrote:
>> This thread is really huge and only 3 people are talking.
>> Why don't you continue on an etherpad and do some brainstorm on it?
>> If you do so, please share the link here.
>>
>> It would be much more effective in my opinion.
> 
> I think we're almost there (please read on).
> It's harder at this stage to summarize this in an etherpad,
> but we'll certainly do that or start a new thread/topic if needed.

For those interested, the discussion is now happening here
https://etherpad.openstack.org/p/keystone_no_domain

> 
>>
>> On 10/15/2015 08:26 AM, Sofer Athlan-Guyot wrote:
>>> Gilles Dubreuil  writes:
>>>
 On 08/10/15 03:40, Rich Megginson wrote:
> On 10/07/2015 09:08 AM, Sofer Athlan-Guyot wrote:
>> Rich Megginson  writes:
>>
>>> On 10/06/2015 02:36 PM, Sofer Athlan-Guyot wrote:
 Rich Megginson  writes:

> On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:
>> Gilles Dubreuil  writes:
>>
>>> On 30/09/15 03:43, Rich Megginson wrote:
 On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
>> Gilles Dubreuil  writes:
>>
>>> On 15/09/15 06:53, Rich Megginson wrote:
 On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
> Hi,
>
> Gilles Dubreuil  writes:
>
>> A. The 'composite namevar' approach:
>>
>> keystone_tenant {'projectX::domainY': ... }
>>   B. The 'meaningless name' approach:
>>
>>keystone_tenant {'myproject': name='projectX',
>> domain=>'domainY',
>> ...}
>>
>> Notes:
>>   - Actually using both combined should work too with
>> the domain
>> supposedly overriding the name part of the domain.
>>   - Please look at [1] this for some background
>> between the two
>> approaches:
>>
>> The question
>> -
>> Decide between the two approaches, the one we would like to
>> retain for
>> puppet-keystone.
>>
>> Why it matters?
>> ---
>> 1. Domain names are mandatory in every user, group or
>> project.
>> Besides
>> the backward compatibility period mentioned earlier, where
>> no domain
>> means using the default one.
>> 2. Long term impact
>> 3. Both approaches are not completely equivalent which
>> different
>> consequences on the future usage.
> I can't see why they couldn't be equivalent, but I may be
> missing
> something here.
 I think we could support both.  I don't see it as an either/or
 situation.

>> 4. Being consistent
>> 5. Therefore the community to decide
>>
>> Pros/Cons
>> --
>> A.
> I think it's the B: meaningless approach here.
>
>>Pros
>>  - Easier names
> That's subjective, creating unique and meaningful name
> don't look
> easy
> to me.
 The point is that this allows choice - maybe the user
 already has some
 naming scheme, or wants to use a more "natural" meaningful
 name -
 rather
 than being forced into a possibly "awkward" naming scheme
 with "::"

   keystone_user { 'heat domain admin user':
 name => 'admin',
 domain => 'HeatDomain',
 ...
   }

   keystone_user_role {'heat domain admin
 user@::HeatDomain':
 roles => ['admin']
 ...
   }

>>Cons
>>  - Titles have no meaning!
 They have meaning to the user, not necessarily to Puppet.

>>  - Cases where 2 or more resources could exist
 This seems to be the hardest part 

[openstack-dev] [swift] Plan to add Python 3 support to Swift

2015-10-29 Thread Victor Stinner

Hi,

We talked about Python 3 with Christian Schwede, Alistair Coles, Samuel
Merritt, Jaivish Kothari and others (sorry, I don't recall all names :-/)
during the Swift contributor meetup. It looks like we reached an agreement on
how to add Python 3 support to Swift. The plan is:


1) Fix the gate-swift-python34 check job

2) Make the gate-swift-python34 check job voting

3) Port remaining code step by step (incremental development)

Python 3 issues have been fixed in Swift in the past, but regressions crept
back in. So it's important to prevent that from happening again by making the
gate voting.


Christian said that he will explain the plan at the next Swift meeting
(Wednesday). I don't think I will be able to attend this meeting; I have
another one at the same time with my team :-/


I can put this plan in a blueprint if you want, so we can refer to it in
Python 3 changes. It's up to you.



Plan in detail.

(1) To fix the Python 3 job, the idea is to only run a subset of tests
on Python 3. For example, if we fix the Python 3 issues with the dnspython
(dnspython3) and PyEClib dependencies, we can run
"nosetests test/unit/common/test_exceptions.py" on Python 3 (those tests
pass on Python 3).


We need these two changes:

* "py3: Update pbr and dnspython requirements"
  https://review.openstack.org/#/c/217423/

* "py3: Add py34 test environment to tox"
  https://review.openstack.org/#/c/199034/
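
For illustration, the py34 environment in tox.ini could start out by
whitelisting only the tests that already pass (a sketch; the exact test list
is simply whatever is green at the time, not a fixed set):

    [tox]
    envlist = py27,py34,pep8

    [testenv:py34]
    # Run only the subset of unit tests known to pass on Python 3;
    # grow this list as more modules are ported.
    commands = nosetests test/unit/common/test_exceptions.py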


(2) Once the gate-swift-python34 check job passes and has been stable for
long enough, we can make it voting. From that point on, Python 3 regressions
can no longer slip into the code that is tested on Python 3. The idea is then
to run more and more tests on Python 3.



(3) Ok, now the interesting part. To port the remaining code, each following
change will enlarge the code coverage of the Python 3 tests by adding new
tests to tox.ini. For example, port utils.py to Python 3 and add
test_utils.py to tox.ini.
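
Much of that work is making text/bytes handling explicit. A minimal sketch of
the kind of helper involved (illustrative only, using six; this is not actual
Swift code):

    import six

    def to_native_str(value):
        # Return a "native" str: bytes on Python 2, unicode on Python 3.
        # Making the encoding explicit is what lets the same call site
        # behave correctly on both versions.
        if isinstance(value, six.text_type):
            return value.encode('utf-8') if six.PY2 else value
        return value.decode('utf-8') if six.PY3 else value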



Misc questions.

Q: "Is it possible to port Swift to Python 3 in a single patch?"

A: Yes, it is technically possible. But it would be one giant patch that
would be practically impossible to review and would conflict with every
merged change. Some changes required by Python 3 need discussion and
technical choices. It's more convenient to work on smaller patches.


Q: "How much changes do we need to port Swift to Python ?"

A: Sorry, I don't know. Since we cannot run all tests on Python 3 right 
now, we cannot see all issues. It's really hard to estimate the number 
of required changes. Anyway, the plan is to port the code step by step.


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Nova][Neutron] 6WIND Networking CI - approval for posting comments

2015-10-29 Thread Maxime Leroy
Hi Matt, Armando,

On Fri, Oct 30, 2015 at 1:17 AM, Armando M.  wrote:
>>
>> Do you have any code in nova that's specific to your configuration?
>>

This CI tests against our ML2 mechanism driver, which uses the
vhostuser vif_type, so the Nova-specific code involved is
get_config_vhostuser in the libvirt driver.

We also plan to extend the vhostuser vif_type for our mechanism
driver with plug/unplug methods in the Mitaka release (see
https://blueprints.launchpad.net/nova/+spec/libvirt-vif-vhostuser-ovs-fp)

For now, we are testing this new modification with a monkey patch on
top of Nova.

Before enabling the comments, we should, I presume:
- nova: have our modifications to the vhost-user vif_type reviewed and
merged in the Nova tree
- neutron/nova: prove that the CI is reliable over a long period.

Thanks,

Maxime

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Friday afternoon heat social

2015-10-29 Thread Steve Baker
The heat team is planning to meet this afternoon from 5pm for drinks,
food and chat.


The Craftsman beer bistro is a 15-minute walk from here:
https://goo.gl/maps/hviv4HK1Wor
http://craftsman-craftbeerbistro.jp/

It opens at 5pm; the food is a variety of small plates to share. It
would be great to see you there.


cheers
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Plan to add Python 3 support to Swift

2015-10-29 Thread John Dickinson
Thanks for the update. This seems like a reasonable way forward, but also one 
that will take a long time. Thank you for your work.

I think this will result in larger and larger chunks of work, eventually
producing large patches to move whole components to py3. So you'll be able to
start small, but the work will grow as you go.

You're right about needing the voting gate job. That should be the first 
priority for py3 work.

--John




On 30 Oct 2015, at 12:47, Victor Stinner wrote:

> Hi,
>
> We talked about Python 3 with Christian Schwede, Alistair Coles, Samuel
> Merritt, Jaivish Kothari and others during the Swift contributor meetup.
>
> [full plan snipped; see Victor's original message above]




Re: [openstack-dev] Learning to Debug the Gate

2015-10-29 Thread Anita Kuno
On 10/29/2015 08:27 AM, Anita Kuno wrote:
> On 10/28/2015 12:14 AM, Matt Riedemann wrote:
>>
>>
>> On 10/27/2015 4:08 AM, Anita Kuno wrote:
>>> Learning how to debug the gate was identified as a theme at the
>>> "Establish Key Themes for the Mitaka Cycle" cross-project session:
>>> https://etherpad.openstack.org/p/mitaka-crossproject-themes
>>>
>>> I agreed to take on this item and facilitate the process.
>>>
>>> Part one of the conversation includes referencing this video created by
>>> Sean Dague and Dan Smith:
>>> https://www.youtube.com/watch?v=fowBDdLGBlU
>>>
>>> Please consume this as you are able.
>>>
>>> Other suggestions for how to build on this resource were mentioned and
>>> will be coming in the future but this was an easy, actionable first step.
>>>
>>> Thank you,
>>> Anita.
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/tales-from-the-gate-how-debugging-the-gate-helps-your-enterprise
>>
>>
> 
> The source for the definition of "the gate":
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n34
> 
> Thanks for following along,
> Anita.
> 

This status page shows our running jobs,
including patches in the gate pipeline: http://status.openstack.org/zuul/

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] correction: Re: [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-10-29 Thread Gilles Dubreuil


On 02/09/15 12:26, Rich Megginson wrote:
> Slight correction below:
> 
> On 09/01/2015 10:56 AM, Rich Megginson wrote:
>> To close this thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072878.html
>>
>>
>> puppet-openstack will support Keystone domain scoped resource names
>> without a '::domain' in the name, only if the 'default_domain_id'
>> parameter in Keystone has _not_ been set.
> 
> Or if the 'default_domain_id' parameter has been set to 'default'.
> 
>> That is, if the default domain is 'Default'.  This means that if the
>> user/operator doesn't care about domains at all, the operator doesn't
>> have to deal with them.  However, once the user/operator uses
>> `keystone_domain`, and uses `is_default => true`, this means the
>> user/operator _must_ use '::domain' with _all_ domain scoped Keystone
>> resource names.
> 
> Note that the domain named 'Default' with the UUID 'default' is created
> automatically by Keystone, so no need for puppet to create it or ensure
> that it exists.
> 
>>
>> In addition:
>>
>> * In the OpenStack L release:
>>If 'default_domain_id' is set,
> or if 'default_domain_id' is not 'default',
>> puppet will issue a warning if a name is used without '::domain'. I
>> think this is a good thing to do, just in case someone sets the
>> default_domain_id by mistake.
>>
>> * In OpenStack M release:
>>Puppet will issue a warning if a name is used without '::domain'.
>>
>> * From Openstack N release:
>>A name must be used with '::domain'.
>>
>>

In light of the composite namevar solution, things have evolved a bit.

The rule has changed slightly, but the deprecation warnings should still be put
in place.
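
To illustrate the rule as it stands (a sketch only, not syntax taken from
the etherpad):

    # Explicit domain in the title: always accepted.
    keystone_user { 'bob::SomeDomain':
      ensure => present,
    }

    # Bare name: only safe while the default domain is 'Default'; per the
    # deprecation plan above it will first warn, and eventually fail.
    keystone_user { 'bob':
      ensure => present,
    }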

For those interested, the discussion is now happening here
https://etherpad.openstack.org/p/keystone_no_domain

Thanks,
Gilles

>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Update on Reseller

2015-10-29 Thread Henry Nash
Hi

At the design summit we had a number of discussions on how to complete the
reseller functionality (original spec:
https://review.openstack.org/#/c/139824). We all agreed that we should split
the implementation into:

1) Refactor the way domains are stored, so that they are actually projects
with the "is_domain" attribute. At the end of this phase, all top-level
projects would be acting as domains. The domain API is not removed, but
references the appropriate project (see the API sketch below).

2) Investigate alternatives to the original proposal of using nested projects
acting as domains to model the reseller use case. One proposed alternative was
to try to use federated mapping to provide isolation between customers within
a reseller.
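
For phase 1, the API might look something like this (a sketch only, assuming
the "is_domain" flag lands as proposed in the spec; field values are
illustrative):

    POST /v3/projects
    {
        "project": {
            "name": "acme",
            "is_domain": true,
            "description": "a top-level project acting as a domain"
        }
    }

The existing domain API would then simply reference the project carrying the
flag.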

The second part of this has undergone some intensive analysis over the last few 
days - here’s a summary of what we looked at:

This alternative proposal was intended to work like this:

1) The Cloud provider would create a domain for the reseller.
2) The reseller would on-board their customers by creating IdPs and mapping
rules that would land a given customer's users in a customer-specific
project, or tree of projects, within the reseller's domain (a rough example
follows).
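
For instance, a Keystone federation mapping rule of the kind envisaged might
look roughly like this (illustrative only; the group ID and the remote
attribute names are made up):

    {
        "rules": [
            {
                "local": [
                    {"user": {"name": "{0}"}},
                    {"group": {"id": "CUSTOMER_A_ADMIN_GROUP_ID"}}
                ],
                "remote": [
                    {"type": "REMOTE_USER"},
                    {"type": "CUSTOMER_ID", "any_one_of": ["customer-a"]}
                ]
            }
        ]
    }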

A number of issues come out of this:

a) Most serious is how we provide sufficient isolation between the customers -
the key being ensuring that admin actions that a customer needs to carry out
can be protected by generic policy file rules, written by the cloud provider
without any knowledge of the specific reseller and their customers.
Things that immediately seem hard in this area:
- CRUD of customer-specific roles (the analogue of what we had been calling
domain-specific roles)
- CRUD of the mapping rules for the customers' IdPs (i.e. a customer's admin
would want to be able to change what groups/assignments would be used against
attributes in the IdP's assertion)
One could imagine doing the above by having some kind of "special project" for
each customer (although one could say that this is no different from that
"special project" being a project with the "is_domain" flag!)

b) Today, at least, all project names within a domain must be unique - which
would be overly restrictive in this case (since all customer projects of the
reseller are in the same domain). So we'd need to move to the "project name
must be unique within its parent" model - which we have discussed before (and
solve the issues with referring to projects by name, etc.)

c) This solution really only works for one level of reseller (which would
probably be OK for now, although it is a concern for the future)

d) This solution only works if all of a reseller's customers are ready to use
a federated model, i.e. it won't work if they want to use their corporate
LDAP. Support for LDAP via the Apache plugin would help, but I think the
issue is more one of the customer's operating model than of whether you can
technically federate with their LDAP.

After discussing this with a few of the other cores (including Morgan), it was 
agreed that you really should use a domain per customer to ensure we have the 
correct isolation. But perhaps we could just create all the domains at the top 
level (i.e. avoiding the need for nested domains)? Analysis of this throws up
a few additional issues:

- How do we maintain some kind of link/ownership/breadcrumb-trail from
a customer domain back to their reseller? We might need this to, for instance, 
ensure that reseller A can only see their own customers' domains, and not those 
of reseller B. Interestingly, this linkage did exist in the solution when each 
customer had a special project in the reseller’s domain.
- At some point in the future, we probably won’t want the domain names to be 
all globally unique - rather you would want them unique within the reseller. 
Probably not an issue to start, but eventually this might become a problem. 
It’s not clear how you would provide such a restriction with the domain names 
at the top level. 

It is possible we could somehow use role assignments to provide the above - but
this seems tenuous at best. This leads us all the way back to the original
proposal of nested projects acting as domains, which was designed to solve the
problems above. However, I think there are a couple of reasons why this
solution has seemed so complicated and concerning:

i) Trying to implement this in one go meant changing multiple concepts at once 
- doing this in two phases (as discussed) solves most of these issues.
ii) The original discussions on nested domains were all very general and
theoretical. I don't think it had been explained well enough that the ONLY
thing we were trying to achieve with the nesting of projects acting as
domains for resellers was the idea of ownership & segregation.  We shouldn't
allow/attempt any of the other things that come with project hierarchies (e.g.
inherited role

[openstack-dev] What's Up, Doc? 三鷹 Summit Edition

2015-10-29 Thread Lana Brindley
Hi everyone,

Welcome to the Summit edition of the docs newsletter! Docs have had a 
wonderfully productive week here in Tokyo, and we've now established a plan for 
Mitaka. This newsletter covers the more important announcements, and the main 
points we want to hit in the next release.

Karin has kindly written up the summary of the Design Summit sessions here: 
https://etherpad.openstack.org/p/Mitaka-Docs-Meetup

== Mitaka Goals ==

This is the list at the moment. There are more notes in the docs etherpads, and 
not everything has blueprints yet.

* Switch API docs over to swagger 
http://specs.openstack.org/openstack/docs-specs/specs/liberty/api-site.html 
(???)
* Implement new docimpact plan: 
https://blueprints.launchpad.net/openstack-manuals/+spec/review-docimpact
* RST Conversions:
** Arch Guide: 
https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-rst
** Ops Guide: 
https://blueprints.launchpad.net/openstack-manuals/+spec/ops-guide-rst (pending 
O'Reilly conversation)
** Config Ref
* Reorganisations:
** Arch Guide: 
https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-reorg
** Ops Guide: 
https://blueprints.launchpad.net/openstack-manuals/+spec/improve-ops-guide
** User Guides: 
https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised
* Training
** Labs: https://blueprints.launchpad.net/openstack-manuals/+spec/training-labs
** Guides: Upstream University & 'core' component updates, EOL Upstream Wiki 
page.
* First App Guide: testing & improvements
* Reorganise the index page
* Document the openstack-doc-tools

== Speciality Team Changes ==

More detail on the Speciality Team wiki page: 
https://wiki.openstack.org/wiki/Documentation/SpecialityTeams

* Install Guide is now with Christian Berendt
* HA Guide is now with Bogdan Dobrelya
* Networking Guide is now with Edgar Magana
* New Hypervisor Tuning Guide team with Joe Topjian

Thank you, Tokyo!

Lana


Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia







[openstack-dev] [Nova] Details for the Nova Mitaka mid-cycle

2015-10-29 Thread Michael Still
Hi,

It has been decided that the Nova Mitaka mid-cycle will be held in Bristol, UK,
from 26 to 28 January. You can register at:


https://www.eventbrite.com.au/e/openstack-mitaka-nova-mid-cycle-meetup-tickets-19326224257

Details of hotel discounts etc. will be added to this thread when they are
known.

Cheers,
Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev