Re: [openstack-dev] Barbican : Unable to store the secret when Barbican was Integrated with SafeNet HSM

2015-07-18 Thread John Vrbanac
Asha,

It looks like you don't have your mkek label correctly configured. Make sure 
that the mkek_label and hmac_label values in your config correctly reflect the 
keys that you've generated on your HSM.

The plugin will cache the key handle to the mkek and hmac when the plugin 
starts, so if it cannot find them, it'll fail to load the plugin altogether.


If you need help generating your mkek and hmac, refer to 
http://docs.openstack.org/developer/barbican/api/quickstart/pkcs11keygeneration.html
 for instructions on how to create them using a script.
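
If you want to double-check the labels independently of Barbican (Barbican uses 
its own ctypes-based PKCS#11 wrapper internally), a small PyKCS11 script can 
confirm whether keys with the configured labels actually exist in the slot. This 
is only a diagnostic sketch; the library path, PIN, slot and labels are the ones 
from your barbican.conf:

# Diagnostic sketch using the PyKCS11 library (not part of Barbican) to check
# that the labels in barbican.conf match objects on the HSM partition.
import PyKCS11

LIB = '/usr/lib/libCryptoki2_64.so'    # library_path from barbican.conf
PIN = 'test123'                        # login from barbican.conf
SLOT = 1                               # slot_id from barbican.conf
LABELS = ['an_mkek', 'my_hmac_label']  # mkek_label and hmac_label

lib = PyKCS11.PyKCS11Lib()
lib.load(LIB)
session = lib.openSession(SLOT, PyKCS11.CKF_SERIAL_SESSION)
session.login(PIN)
for label in LABELS:
    objects = session.findObjects([(PyKCS11.CKA_LABEL, label)])
    print("%s: %d object(s) found" % (label, len(objects)))
session.logout()
session.closeSession()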


As far as who uses HSMs, I know we (Rackspace) use them with Barbican.


John Vrbanac

From: Asha Seshagiri 
Sent: Saturday, July 18, 2015 8:47 PM
To: openstack-dev
Cc: Reller, Nathan S.
Subject: [openstack-dev] Barbican : Unable to store the secret when Barbican 
was Integrated with SafeNet HSM

Hi All ,

I have configured Barbican to integrate with the SafeNet HSM:
installed the SafeNet client libraries, registered the Barbican machine to point 
to the HSM server, and assigned an HSM partition.

The following were the changes done in barbican.conf file


# = Secret Store Plugin ===
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = store_crypto

# = Crypto plugin ===
[crypto]
namespace = barbican.crypto.plugin
enabled_crypto_plugins = p11_crypto

[p11_crypto_plugin]
# Path to vendor PKCS11 library
library_path = '/usr/lib/libCryptoki2_64.so'
# Password to login to PKCS11 session
login = 'test123'
# Label to identify master KEK in the HSM (must not be the same as HMAC label)
mkek_label = 'an_mkek'
# Length in bytes of master KEK
mkek_length = 32
# Label to identify HMAC key in the HSM (must not be the same as MKEK label)
hmac_label = 'my_hmac_label'
# HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
slot_id = 1

Unable to store the secret when Barbican was integrated with HSM.

[root@HSM-Client crypto]# curl -X POST -H 'content-type:application/json' -H 
'X-Project-Id:12345' -d '{"payload": "my-secret-here", "payload_content_type": 
"text/plain"}' http://localhost:9311/v1/secrets
{"code": 500, "description": "Secret creation failure seen - please contact 
site administrator.", "title": "Internal Server Error"}[root@HSM-Client crypto]#


Please find the logs below :

2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils 
[req-354affce-b3d6-41fd-b050-5e5c604004eb - 12345 - - -] Problem seen creating 
plugin: 'p11_crypto'
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils Traceback (most 
recent call last):
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils   File 
"/root/barbican/barbican/plugin/util/utils.py", line 42, in instantiate_plugins
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils 
plugin_instance = ext.plugin(*invoke_args, **invoke_kwargs)
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils   File 
"/root/barbican/barbican/plugin/crypto/p11_crypto.py", line 70, in __init__
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils 
conf.p11_crypto_plugin.hmac_label)
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils   File 
"/root/barbican/barbican/plugin/crypto/pkcs11.py", line 344, in 
cache_mkek_and_hmac
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils 
self.get_mkek(self.current_mkek_label, session)
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils   File 
"/root/barbican/barbican/plugin/crypto/pkcs11.py", line 426, in get_mkek
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils raise 
P11CryptoKeyHandleException()
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils 
P11CryptoKeyHandleException: No key handle was found
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers 
[req-354affce-b3d6-41fd-b050-5e5c604004eb - 12345 - - -] Secret creation 
failure seen - please contact site administrator.


(I am not sure why we are getting the CryptoPluginNotFound: Crypto plugin not found 
exception, since the change is able to hit the p11_crypto.py code.)

2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers Traceback (most 
recent call last):
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers   File 
"/root/barbican/barbican/api/controllers/__init__.py", line 104, in handler
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers return 
fn(inst, *args, **kwargs)
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers   File 
"/root/barbican/barbican/api/controllers/__init__.py", line 90, in enforcer
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers return 
fn(inst, *args, **kwargs)
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers   File 
"/root/barbican/barbican/api/controllers/__init__.py", line 146, in 
content_types_enforcer
2015-07-18 17:15:32.64

Re: [openstack-dev] [Neutron]How to configure lbaas api v2 on devstack?

2015-07-18 Thread Al Miller
The quick answer is “enable_service q-lbaasv2”.

A longer answer is that the neutron-lbaas repository contains sample local.conf 
and local.sh files that set up a working load balancer devstack environment using 
LBaaS V2.  They are in neutron-lbaas/devstack/samples.   Putting those files, 
along with webserver.sh, into your devstack directory and running stack.sh will 
set up a V2 loadbalancer with two member instances.
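
For orientation, a minimal local.conf fragment along those lines looks roughly 
like this (a sketch only; the samples under neutron-lbaas/devstack/samples are 
the authoritative version):

[[local|localrc]]
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
disable_service q-lbaas
enable_service q-lbaasv2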

More details are at 
https://chapter60.wordpress.com/2015/04/14/sample-scripts-to-automatically-set-up-lbaas-v2-loadbalancers-in-devstack/

Al


On Jul 18, 2015, at 5:13 PM, 姚威 
mailto:wilence@gmail.com>> wrote:

Hi all,
To enable lbaas on devstack with haproxy, I added `enabled_service q-lbaas` in 
devstack/local.conf. But I find that the lbaas api is v1, which means I can only use 
`neutron lb-xxx` instead of `neutron lbaas-xxx`. If I want to use api v2, what 
should I do with devstack/local.conf?

Wilence Yao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-18 Thread Jay Lau
Hong Bin,

I have some online discussion with Peng, seems hyper is now integrating
with Kubernetes and also have plan integrate with mesos for scheduling.
Once mesos integration finished, we can treat mesos+hyper as another kind
of bay.

Thanks

2015-07-19 4:15 GMT+08:00 Hongbin Lu :

>  Peng,
>
>
>
> Several questions Here. You mentioned that HyperStack is a single big
> “bay”. Then, who is doing the multi-host scheduling, Hyper or something
> else? Were you suggesting to integrate Hyper with Magnum directly? Or you
> were suggesting to integrate Hyper with Magnum indirectly (i.e. through
> k8s, mesos and/or Nova)?
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Peng Zhao [mailto:p...@hyper.sh]
> *Sent:* July-17-15 12:34 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal
> with Hyper
>
>
>
> Hi, Adrian, Jay and all,
>
>
>
> There could be a much longer version of this, but let me try to explain in
> a minimalist way.
>
>
>
> Bay currently has two modes: VM-based, BM-based. In both cases, Bay helps
> to isolate different tenants' containers. In other words, bay is
> single-tenancy. For BM-based bay, the single tenancy is a worthy tradeoff,
> given the performance merits of LXC vs VM. However, for a VM-based bay,
> there is no performance gain, but single tenancy seems a must, due to the
> lack of isolation in container. Hyper, as a hypervisor-based substitute for
> container, brings the much-needed isolation, and therefore enables
> multi-tenancy. In HyperStack, we don't really need Ironic to provision
> multiple Hyper bays. On the other hand,  the entire HyperStack cluster is a
> single big "bay". Pretty similar to how Nova works.
>
>
>
> Also, HyperStack is able to leverage Cinder, Neutron for SDS/SDN
> functionality. So when someone submits a Docker Compose app, HyperStack
> would launch HyperVMs and call Cinder/Neutron to setup the volumes and
> network. The architecture is quite simple.
>
>
>
> Here is a blog post I'd like to recommend:
> https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html
>
>
>
> Let me know your questions.
>
>
>
> Thanks,
>
> Peng
>
>
>
> -- Original --
>
> *From: * "Adrian Otto";
>
> *Date: * Thu, Jul 16, 2015 11:02 PM
>
> *To: * "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
>
> *Subject: * Re: [openstack-dev] [magnum][bp] Power Magnum to run
> onmetalwith Hyper
>
>
>
> Jay,
>
>
>
> Hyper is a substitute for a Docker host, so I expect it could work equally
> well for all of the current bay types. Hyper’s idea of a “pod” and a
> Kubernetes “pod” are similar, but different. I’m not yet convinced that
> integrating Hyper host creation direct with Magnum (and completely
> bypassing nova) is a good idea. It probably makes more sense to use nova
> with the ironic virt driver to provision Hyper hosts so we can use
> those as substitutes for Bay nodes in our various Bay types. This would fit
> in the place where we use Fedora Atomic today. We could still rely on nova
> to do all of the machine instance management and accounting like we do
> today, but produce bays that use Hyper instead of a Docker host. Everywhere
> we currently offer CoreOS as an option we could also offer Hyper as an
> alternative, with some caveats.
>
>
>
> There may be some caveats/drawbacks to consider before committing to a
> Hyper integration. I’ll be asking those of Peng also on this thread, so
> keep an eye out.
>
>
>
> Thanks,
>
>
>
> Adrian
>
>
>
>  On Jul 16, 2015, at 3:23 AM, Jay Lau  wrote:
>
>
>
> Thanks Peng, then I can see two integration points for Magnum and Hyper:
>
> 1) Once Hyper and k8s integration finished, we can deploy k8s in two mode:
> docker and hyper mode, the end user can select which mode they want to use.
> For such case, we do not need to create a new bay but may need some
> enhancement for current k8s bay
>
> 2) After mesos and hyper integration,  we can treat mesos and hyper as a
> new bay to magnum. Just like what we are doing now for mesos+marathon.
>
> Thanks!
>
>
>
> 2015-07-16 17:38 GMT+08:00 Peng Zhao :
>
> Hi Jay,
>
>
>
> Yes, we are working with the community to integrate Hyper with Mesos and
> K8S. Since Hyper uses Pod as the default job unit, it is quite easy to
> integrate with K8S. Mesos takes a bit more efforts, but still
> straightforward.
>
>
>
> We expect to finish both integration in v0.4 early August.
>
>
>
> Best,
>
> Peng
>
>
>
> -
>
> Hyper - Make VM run like Container
>
>
>
>
>
>
>
> On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau  wrote:
>
>Hi Peng,
>
>   Just want to get more for Hyper. If we create a hyper bay, then can I
> set up multiple hosts in a hyper bay? If so, who will do the scheduling,
> does mesos or some others integrate with hyper?
>
> I did not find much info for hyper cluster manageme

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-18 Thread Jay Lau
Hi Peng,

Please check some of my understandings in line.

Thanks


2015-07-18 0:33 GMT+08:00 Peng Zhao :

> Hi, Adrian, Jay and all,
>
> There could be a much longer version of this, but let me try to explain in
> a minimalist way.
>
> Bay currently has two modes: VM-based, BM-based. In both cases, Bay helps
> to isolate different tenants' containers. In other words, bay is
> single-tenancy. For BM-based bay, the single tenancy is a worthy tradeoff,
> given the performance merits of LXC vs VM. However, for a VM-based bay,
> there is no performance gain, but single tenancy seems a must, due to the
> lack of isolation in container. Hyper, as a hypervisor-based substitute for
> container, brings the much-needed isolation, and therefore enables
> multi-tenancy. In HyperStack, we don't really need Ironic to provision
> multiple Hyper bays. On the other hand,  the entire HyperStack cluster is a
> single big "bay". Pretty similar to how Nova works.
>
IMHO, only creating one big bay might not fit the Magnum user scenario
well; putting the entire HyperStack into a single big bay, as you mentioned,
is more like a public cloud case. But in some private cloud cases, there
are different users and tenants, and different tenants might want to set
up their own HyperStack bay on their own resources.

>
> Also, HyperStack is able to leverage Cinder, Neutron for SDS/SDN
> functionality. So when someone submits a Docker Compose app, HyperStack
> would launch HyperVMs and call Cinder/Neutron to setup the volumes and
> network. The architecture is quite simple.
>
This is cool!

>
> Here is a blog post I'd like to recommend:
> https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html
>
> Let me know your questions.
>
> Thanks,
> Peng
>
> -- Original --
> *From: * "Adrian Otto";
> *Date: * Thu, Jul 16, 2015 11:02 PM
> *To: * "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject: * Re: [openstack-dev] [magnum][bp] Power Magnum to run
> onmetalwith Hyper
>
> Jay,
>
>  Hyper is a substitute for a Docker host, so I expect it could work
> equally well for all of the current bay types. Hyper’s idea of a “pod” and
> a Kubernetes “pod” are similar, but different. I’m not yet convinced that
> integrating Hyper host creation direct with Magnum (and completely
> bypassing nova) is a good idea. It probably makes more sense to use nova
> with the ironic virt driver to provision Hyper hosts so we can use
> those as substitutes for Bay nodes in our various Bay types. This would fit
> in the place where we use Fedora Atomic today. We could still rely on nova
> to do all of the machine instance management and accounting like we do
> today, but produce bays that use Hyper instead of a Docker host. Everywhere
> we currently offer CoreOS as an option we could also offer Hyper as an
> alternative, with some caveats.
>
>  There may be some caveats/drawbacks to consider before committing to a
> Hyper integration. I’ll be asking those of Peng also on this thread, so
> keep an eye out.
>
>  Thanks,
>
>  Adrian
>
>  On Jul 16, 2015, at 3:23 AM, Jay Lau  wrote:
>
>   Thanks Peng, then I can see two integration points for Magnum and Hyper:
>
>  1) Once Hyper and k8s integration finished, we can deploy k8s in two
> mode: docker and hyper mode, the end user can select which mode they want
> to use. For such case, we do not need to create a new bay but may need some
> enhancement for current k8s bay
>
>  2) After mesos and hyper integration,  we can treat mesos and hyper as a
> new bay to magnum. Just like what we are doing now for mesos+marathon.
>
>  Thanks!
>
> 2015-07-16 17:38 GMT+08:00 Peng Zhao :
>
>>Hi Jay,
>>
>>  Yes, we are working with the community to integrate Hyper with Mesos
>> and K8S. Since Hyper uses Pod as the default job unit, it is quite easy to
>> integrate with K8S. Mesos takes a bit more efforts, but still
>> straightforward.
>>
>>  We expect to finish both integration in v0.4 early August.
>>
>>  Best,
>>  Peng
>>
>>   -
>> Hyper - Make VM run like Container
>>
>>
>>
>>  On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau  wrote:
>>
>>>   Hi Peng,
>>>
>>>
>>>  Just want to get more for Hyper. If we create a hyper bay, then can I
>>> set up multiple hosts in a hyper bay? If so, who will do the scheduling,
>>> does mesos or some others integrate with hyper?
>>>
>>>  I did not find much info for hyper cluster management.
>>>
>>>  Thanks.
>>>
>>>  2015-07-16 9:54 GMT+08:00 Peng Zhao :
>>>




>
>
>   -- Original --
>  *From: * “Adrian Otto”;
> *Date: * Wed, Jul 15, 2015 02:31 AM
> *To: * “OpenStack Development Mailing List (not for usage questions)“<
> openstack-dev@lists.openstack.org>;
>
>  *Subject: * Re: [openstack-dev] [magnum][bp] Power Magnum to run
> onmetal withHyper
>
>  P

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metalwith Hyper

2015-07-18 Thread Jay Lau
Thanks Adrian, I think that we are on the same page for this: using the nova
ironic driver to provision hyper machines as a bay. ;-)  In my previous email, I
also mentioned two integration proposals with the assumption of using the ironic
driver to provision those hyper machines.

1) Once Hyper and k8s integration finished, we can deploy k8s in two mode:
docker and hyper mode, the end user can select which mode they want to use.
For such case, we do not need to create a new bay but may need some
enhancement for current k8s bay

2) After mesos and hyper integration,  we can treat mesos and hyper as a
new bay to magnum. Just like what we are doing now for mesos+marathon.

2015-07-16 23:02 GMT+08:00 Adrian Otto :

>  Jay,
>
>  Hyper is a substitute for a Docker host, so I expect it could work
> equally well for all of the current bay types. Hyper’s idea of a “pod” and
> a Kubernetes “pod” are similar, but different. I’m not yet convinced that
> integrating Hyper host creation direct with Magnum (and completely
> bypassing nova) is a good idea. It probably makes more sense to use nova
> with the ironic virt driver to provision Hyper hosts so we can use
> those as substitutes for Bay nodes in our various Bay types. This would fit
> in the place where we use Fedora Atomic today. We could still rely on nova
> to do all of the machine instance management and accounting like we do
> today, but produce bays that use Hyper instead of a Docker host. Everywhere
> we currently offer CoreOS as an option we could also offer Hyper as an
> alternative, with some caveats.
>
>  There may be some caveats/drawbacks to consider before committing to a
> Hyper integration. I’ll be asking those of Peng also on this thread, so
> keep an eye out.
>
>  Thanks,
>
>  Adrian
>
>  On Jul 16, 2015, at 3:23 AM, Jay Lau  wrote:
>
>   Thanks Peng, then I can see two integration points for Magnum and Hyper:
>
>  1) Once Hyper and k8s integration finished, we can deploy k8s in two
> mode: docker and hyper mode, the end user can select which mode they want
> to use. For such case, we do not need to create a new bay but may need some
> enhancement for current k8s bay
>
>  2) After mesos and hyper integration,  we can treat mesos and hyper as a
> new bay to magnum. Just like what we are doing now for mesos+marathon.
>
>  Thanks!
>
> 2015-07-16 17:38 GMT+08:00 Peng Zhao :
>
>>Hi Jay,
>>
>>  Yes, we are working with the community to integrate Hyper with Mesos
>> and K8S. Since Hyper uses Pod as the default job unit, it is quite easy to
>> integrate with K8S. Mesos takes a bit more efforts, but still
>> straightforward.
>>
>>  We expect to finish both integration in v0.4 early August.
>>
>>  Best,
>>  Peng
>>
>>   -
>> Hyper - Make VM run like Container
>>
>>
>>
>>  On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau  wrote:
>>
>>>   Hi Peng,
>>>
>>>
>>>  Just want to get more for Hyper. If we create a hyper bay, then can I
>>> set up multiple hosts in a hyper bay? If so, who will do the scheduling,
>>> does mesos or some others integrate with hyper?
>>>
>>>  I did not find much info for hyper cluster management.
>>>
>>>  Thanks.
>>>
>>>  2015-07-16 9:54 GMT+08:00 Peng Zhao :
>>>




>
>
>   -- Original --
>  *From: * “Adrian Otto”;
> *Date: * Wed, Jul 15, 2015 02:31 AM
> *To: * “OpenStack Development Mailing List (not for usage questions)“<
> openstack-dev@lists.openstack.org>;
>
>  *Subject: * Re: [openstack-dev] [magnum][bp] Power Magnum to run
> onmetal withHyper
>
>  Peng,
>
>  On Jul 13, 2015, at 8:37 PM, Peng Zhao  wrote:
>
>  Thanks Adrian!
>
>  Hi, all,
>
>  Let me recap what is hyper and the idea of hyperstack.
>
>  Hyper is a single-host runtime engine. Technically,
> Docker = LXC + AUFS
> Hyper = Hypervisor + AUFS
> where AUFS is the Docker image.
>
>
>  I do not understand the last line above. My understanding is that
> AUFS == UnionFS, which is used to implement a storage driver for Docker.
> Others exist for btrfs, and devicemapper. You select which one you want by
> setting an option like this:
>
>  DOCKEROPTS=”-s devicemapper”
>
>  Are you trying to say that with Hyper, AUFS is used to provide
> layered Docker image capability that are shared by multiple hypervisor
> guests?
>
>Peng >>> Yes, AUFS implies the Docker images here.

My guess is that you are trying to articulate that a host running
> Hyper is a 1:1 substitute for a host running Docker, and will respond 
> using
> the Docker remote API. This would result in containers running on the same
> host that have a superior security isolation than they would if LXC was
> used as the backend to Docker. Is this correct?
>
>   Peng>>> Exactly
>
>
>  Due to the shared-kernel nature of LXC, Docker la

[openstack-dev] [cinder]Question for availability_zone of cinder

2015-07-18 Thread hao wang
Hi  stackers,

I found that cinder can currently configure only one storage_availability_zone for
cinder-volume. If using multi-backend on one cinder-volume node, could we
have a different AZ for each backend? That way we could expose each backend as
an AZ and create volumes in that AZ.
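
For reference, the setup being described looks roughly like this (option names 
as in cinder.conf; the driver paths are just examples). Today the AZ can only be 
set once, in [DEFAULT], for the whole cinder-volume service:

[DEFAULT]
storage_availability_zone = az1        # one AZ for the entire service
enabled_backends = lvm-1,ceph-1

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-1

[ceph-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-1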

Regards,
Wang Hao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Barbican : Unable to store the secret when Barbican was Integrated with SafeNet HSM

2015-07-18 Thread Asha Seshagiri
Hi All ,

I have configured Barbican to integrate with the SafeNet HSM:
installed the SafeNet client libraries, registered the Barbican machine to
point to the HSM server, and assigned an HSM partition.

The following were the changes done in barbican.conf file


# = Secret Store Plugin ===
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = store_crypto

# = Crypto plugin ===
[crypto]
namespace = barbican.crypto.plugin
enabled_crypto_plugins = p11_crypto

[p11_crypto_plugin]
# Path to vendor PKCS11 library
library_path = '/usr/lib/libCryptoki2_64.so'
# Password to login to PKCS11 session
login = 'test123'
# Label to identify master KEK in the HSM (must not be the same as HMAC label)
mkek_label = 'an_mkek'
# Length in bytes of master KEK
mkek_length = 32
# Label to identify HMAC key in the HSM (must not be the same as MKEK label)
hmac_label = 'my_hmac_label'
# HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
slot_id = 1

Unable to store the secret when Barbican was integrated with HSM.

[root@HSM-Client crypto]# curl -X POST -H 'content-type:application/json'
-H 'X-Project-Id:12345' -d '{"payload": "my-secret-here",
"payload_content_type": "text/plain"}' http://localhost:9311/v1/secrets
*{"code": 500, "description": "Secret creation failure seen - please
contact site administrator.", "title": "Internal Server
Error"}[root@HSM-Client crypto]#*


Please find the logs below :

2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils
[req-354affce-b3d6-41fd-b050-5e5c604004eb - 12345 - - -] Problem seen
creating plugin: 'p11_crypto'
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils Traceback
(most recent call last):
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils   File
"/root/barbican/barbican/plugin/util/utils.py", line 42, in
instantiate_plugins
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils
plugin_instance = ext.plugin(*invoke_args, **invoke_kwargs)
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils   File
"/root/barbican/barbican/plugin/crypto/p11_crypto.py", line 70, in __init__
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils
conf.p11_crypto_plugin.hmac_label)
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils   File
"/root/barbican/barbican/plugin/crypto/pkcs11.py", line 344, in
cache_mkek_and_hmac
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils
self.get_mkek(self.current_mkek_label, session)
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils   File
"/root/barbican/barbican/plugin/crypto/pkcs11.py", line 426, in get_mkek
2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils raise
P11CryptoKeyHandleException()
*2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils
P11CryptoKeyHandleException: No key handle was found*
*2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils*
*2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers
[req-354affce-b3d6-41fd-b050-5e5c604004eb - 12345 - - -] Secret creation
failure seen - please contact site administrator.*


(I am not sure why we are getting the CryptoPluginNotFound: Crypto plugin not
found exception, since the change is able to hit the p11_crypto.py code.)

2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers Traceback
(most recent call last):
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers   File
"/root/barbican/barbican/api/controllers/__init__.py", line 104, in handler
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers return
fn(inst, *args, **kwargs)
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers   File
"/root/barbican/barbican/api/controllers/__init__.py", line 90, in enforcer
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers return
fn(inst, *args, **kwargs)
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers   File
"/root/barbican/barbican/api/controllers/__init__.py", line 146, in
content_types_enforcer
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers return
fn(inst, *args, **kwargs)
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers   File
"/root/barbican/barbican/api/controllers/secrets.py", line 329, in on_post
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers
transport_key_id=data.get('transport_key_id'))
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers   File
"/root/barbican/barbican/plugin/resources.py", line 104, in store_secret
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers
secret_model, project_model)
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers   File
"/root/barbican/barbican/plugin/resources.py", line 267, in
_store_secret_using_plugin
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers
secret_metadata = store_plugin.store_secret(secret_dto, context)
2015-07-18 17:15:32.643 29838 ERROR barbican.api.controllers   File
"/root/barbican

Re: [openstack-dev] [openstack][cinder]A discussion about quota update lower than current usage.

2015-07-18 Thread hao wang
hi Duncan
Sorry for the long delay.   IMHO, the purpose of this change is to let
cinder have the ability to control the admin's action. If the admin only wants
to stop users from creating any more resources, fine: use the optional argument
"skip_validation=False" to update the quota limit, allowing it to go down to the
current usage but not below it.  On the other hand, if the
admin clearly knows what he is doing and wants to set a new quota limit for the
tenant, ok: use "skip_validation=True" to update the quota lower than the
current usage and tell the user "I've updated a new quota limit for you, now
delete some resources to get under it."
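
To illustrate the proposal, the quota update request would look roughly like 
this (the "force"/"skip_validation" key is the field under discussion, not an 
existing API field; the ids are placeholders):

PUT /v2/{admin_project_id}/os-quota-sets/{project_id}
{"quota_set": {"volumes": 5, "gigabytes": 500, "force": false}}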

I agree that admins should be educated to know what they want to do and to use
the proper argument, to reduce the confusion that this brings to end users.

2015-07-13 18:21 GMT+08:00 Duncan Thomas :

> The problem is, if you reject the request to lower quota unless the usage
> is under the new quota, you've got an inherently racy process where the
> admin needs to communicate with the tenant to say 'Stop using some of your
> quota while I reduce it', which is no less complicated than 'I've reduced
> your quota, now delete some resources to get under it'. It honestly sounds
> like the right thing to do here is to educate the admins who are surprised
> by the current behaviour, rather than to introduce a new behaviour that is
> fundamentally no better.
>
> On 13 July 2015 at 12:14, hao wang  wrote:
>
>> Hi, Mike
>>
>> I'm not sure we really don't need any change about this feature. At
>> least, some end users I faced to think there should be changed
>>
>> IMHO, the main problem is one that some users I have talked to can't
>> understand: what is the purpose of the admin reducing quota lower than the
>> existing usage? To stop the user from creating any more resources? Reducing
>> the quota to exactly the current usage does the same thing. To make the user
>> delete resources down to the new limit? That is weak if the user doesn't
>> want to do that deletion, and it also brings the confusion to other users
>> that I have mentioned.
>>
>> I understand there may be 100 reasons why an admin might reduce the quota
>> lower than usage, and I don't want to object to them either. But I hope
>> this change can bring a new workflow for updating quota: 1. When the admin
>> uses a client (could be third party) to update the quota limit, they should
>> check the quota usage first, as winston mentioned; if they don't or forget,
>> the update will simply fail when the new quota is lower than the usage,
>> since giving cinder this ability stops that change and sends the admin back
>> to check the quota usage. 2. If the admin knows what they are doing and just
>> needs to reduce the limit for some reason, fine: pass the optional argument
>> '--force' or '--skip_validation' to update the quota.
>>
>> Personally, I feel this routine is an improvement and brings little
>> confusion with it. I know Eric said that of course we can implement this
>> purpose by using the current APIs; that is an alternative, but it depends on
>> the application on top of cinder, I think, and is hard to keep consistent.
>>
>> 2015-07-11 7:24 GMT+08:00 Mike Perez :
>>
>>> On 12:30 Jul 10, hao wang wrote:
>>> > Cinder now doesn't check the existing resource when user lower the
>>> quota.
>>> > It's reasonable for admin can adjust the quota limit to lower level
>>> than
>>> > current usage.
>>> > But it also bring confusion that I have received to end user, they saw
>>> the
>>> > current usage
>>> > was more than limit, but they can't create resources any more.
>>> >
>>> > So there have been 'bug' reported[1] and code patch[2] committed, I
>>> knew it
>>> > may be
>>> > inappropriate as 'bug fix', but just want to optimize this API of
>>> updating
>>> > quota.
>>> >
>>> > We are proposing to add an option argument which is named 'force' in
>>> > request body.
>>> > Of course the default value is True that means admin can adjust the
>>> quota
>>> > lower then
>>> > current usage as same as what we did now. When the force is False, that
>>> > will occur
>>> > a Validation and return 400 Bad Request if the update value is lower
>>> than
>>> > current usage.
>>> >
>>> > I wonder to know folks' opinions and suggestions about this change to
>>> see
>>> > if this is value to merge this patch.
>>>
>>> Based on the feedback received in the bug and review, it seems like
>>> there is
>>> a clear consensus that people don't want this, even if it can be
>>> bypassed with
>>> a force option.
>>>
>>> --
>>> Mike Perez
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>>
>> Best Wishes For You!
>>
>>
>> __
>> OpenStack Developm

[openstack-dev] [Neutron]How to configure lbaas api v2 on devstack?

2015-07-18 Thread 姚威
Hi all,
To enable lbaas on devstack with haproxy, I added `enabled_service q-lbaas` in 
devstack/local.conf. But I find that the lbaas api is v1, which means I can only use 
`neutron lb-xxx` instead of `neutron lbaas-xxx`. If I want to use api v2, what 
should I do with devstack/local.conf?

Wilence Yao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel][puppet] The state of collaboration: 5 weeks

2015-07-18 Thread Dmitry Borodaenko

It has been 5 weeks since Emilien has asked Fuel developers to contribute more
actively to Puppet OpenStack project [0]. We had a lively discussion on
openstack-dev, myself and other Fuel developers proposed some concrete steps
that could be done to reconcile the two projects, the whole thread was reported
and discussed on Linux Weekly News [1], and the things were looking up.

[0] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066544.html
[1] https://lwn.net/Articles/648331/

And now, 5 weeks later, Emilien has reported to the Technical Committee that
there has been no progress on reconciliation between Fuel and Puppet OpenStack,
and used his authority as a PTL to request that Fuel's proposal to join
OpenStack [2] be rejected.

[2] https://review.openstack.org/199232

In further comments on the same review, Emilien has claimed that there's
"clearly less" contribution to Puppet OpenStack from Fuel developers than
before, and even brought up an example of a review in puppet-horizon that was
proposed and then abandoned by Fuel team [3]. Jay went as far as calling that
example "an obvious failure of working with the upstream Puppet-OpenStack
community".

[3] https://review.openstack.org/198119

Needless to say, I found all these claims deeply disturbing, and had to look
closely into what's going on.

The case of the puppet-horizon commit has turned out to be surprisingly
obvious.

Even before looking into the review comments, I could see a technical reason
for abandoning the commit: if there is a bug in a component, fixing that bug in
the package is preferrable to fixing it in puppet, because it allows anybody to
benefit from the fix, not just the people deploying that package with puppet.

And if you do look at the review in question, you'll find that immediately (14
minutes, and that at 6pm on Friday night!) after Jay has asked in the comments
to the review why it was abandoned, the commit author from the Fuel team has
explained that this patch was a workaround for a packaging problem, and that
this was pointed out in the review by a Horizon core reviewer more than a week
ago, and later corroborated by a Puppet OpenStack core reviewer. Further
confirming that fixing this in the package instead of in puppet-horizon was an
example of Fuel developers agreeing with other Puppet OpenStack contributors
and doing the right thing.

Emilien has clearly found this case important enough to report to the TC, and
yet didn't find it important enough to simply ask Fuel developers why they
chose to abandon the commit. I guess you can call that an obvious failure to
work together.

Looking further into Fuel team's reviews for puppet-horizon, I found another
equally disturbing example [4].

[4] https://review.openstack.org/190548

Here's what I see in this review:

a) Fuel team has spent more than a month (since June 11) on trying to land this
patch.

b) 2 out of 5 negative reviews are from Fuel team, proving that Fuel developers
are not "rubberstamping" each other's commits as was implied by Emilien's
comments on the TC review above.

c) There was one patch set that was pushed 3 business days after a negative
review, all other patch sets (11 total) were pushed no later than next day
after a negative review.

All in all, I see great commitment and effort on behalf of Fuel team,
eventually awarded by a +2 from Michael Chapman.

On the same day (June 30), Emilien votes -2 for a correction in a comment, and
walks away from the review for good. 18 days and 4 patch sets and 2 outstanding
+1's later, the review remains blocked by that -2. Reaching out on
#puppet-openstack [5] didn't help, either.

[5] http://irclog.perlgeek.de/puppet-openstack/2015-07-08#i_10867124

At the same time, Emilien has commented on the TC review that the only metric
he considers reflective of Fuel's contribution to Puppet OpenStack is merged
commits. Isn't that a Catch-22 situation, requesting more merged commits and
refusing to merge them?

When I compare the trends in the number of patch sets pushed by Fuel developers
[6] against the number of merged commits [7], I see the same picture. It's
really obvious when comparing both graphs side by side.

[6] 
http://stackalytics.com/?module=puppetopenstack-group&metric=patches&company=mirantis
[7] 
http://stackalytics.com/?module=puppetopenstack-group&metric=commits&company=mirantis

Between May and July, the peak number of patchsets in Puppet OpenStack from all
contributors increased by a factor of 1.78, while patchsets associated with
Mirantis jumped up by a factor of 6.75. At the same time, merged commits from
all contributors increased by 2.44 (meaning 30% less patch sets per commit than
in May), for Mirantis the factor is only 2.0 (meaning 3 times as many patch
sets per commit as in May).

Just look at the attached picture to see how obviously massive is the increase
in the effort that Fuel team contributes to Puppet OpenStack. One can argue for
different ways to explain why this effor

Re: [openstack-dev] [fuel] Plan to implement the OpenStack Testing Interface for Fuel

2015-07-18 Thread Boris Pavlovic
Dmitry,


Am I missing any major risks or additional requirements here?


Syncing requirements with the global OpenStack requirements can produce issues
that require changes in code.

I would strongly recommend syncing the requirements by hand and testing
everything before starting to split repos and add openstack-ci jobs.

-1 risk.


Best regards,
Boris Pavlovic

On Sat, Jul 18, 2015 at 12:16 AM, Dmitry Borodaenko <
dborodae...@mirantis.com> wrote:

> One of the requirements for all OpenStack projects is to use the same
> Testing
> Interface [0]. In response to the Fuel application [1], the Technical
> Committee
> has clarified that this includes running gate jobs on the OpenStack
> Infrastructure [2][3].
>
> [0]
> http://governance.openstack.org/reference/project-testing-interface.html
> [1] https://review.openstack.org/199232
> [2]
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-07-14-20.02.log.html#l-150
> [3] https://review.openstack.org/201766
>
> Although the proposed formal requirement could use some clarification,
> according to the meeting log linked above, TC has acknowledged that
> OpenStack
> Infrastructure can't currently host deployment tests for projects like
> Fuel and
> TripleO. This narrows the requirement down to codestyle checks, unit tests,
> coverage report, source tarball generation, and docs generation for all
> Python
> components of Fuel.
>
> As I mentioned in my previous email [4], we're days away from Feature
> Freeze
> for Fuel 7.0, so we need to plan a gradual transition instead of making the
> testing interface a hard requirement for all repositories.
>
> [4]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/069906.html
>
> I propose the following stages for transition of Fuel CI to OpenStack
> Infrastructure:
>
> Stage 1: Enable non-voting jobs compliant with the testing interface for a
> single Python component of Fuel. This has no impact on Fuel schedule and
> should
> be done immediately. Boris Pavlovic has kindly agreed to be our code fairy
> and
> magicked together a request that enables such jobs for nailgun in fuel-web
> [5].
>
> [5] https://review.openstack.org/202892
>
> As it turns out, OpenStack CI imposes strict limits on a project's
> directory
> structure, and fuel-web doesn't fit those since it contains a long list of
> components besides nailgun, some of them not even in Python. Making the
> above
> tests pass would involve a major restructuring of fuel-web repository,
> which
> once again is for now blocked by the 7.0 FF. We have a blueprint to split
> fuel-web [6], but so far we've only managed to extract fuel-agent, the rest
> will probably have to wait until 8.0.
>
> [6] https://blueprints.launchpad.net/fuel/+spec/split-fuel-web-repo
>
> Because of that, I think fuel-agent is a better candidate for the first
> Fuel
> component to get CI jobs on OpenStack Infrastructure.
>
> Stage 2: Get the non-voting jobs on the first component to pass, and make
> them
> voting and gating the commits to that component. Assuming that we pick a
> component that doesn't need major restructuring to pass OpenStack CI, we
> should
> be able to complete this stage before 7.0 soft code freeze on August 13
> [7].
>
> [7] https://wiki.openstack.org/wiki/Fuel/7.0_Release_Schedule
>
> Stage 3: Enable non-voting jobs for all other Python components of Fuel
> outside
> of fuel-web. We will have until 7.0 GA release on September 24, and we
> won't be
> able to proceed to following stages until 7.0 is released.
>
> Stage 4: Everything else that is too disruptive for 7.0 but doesn't require
> changes on the side of OpenStack Infrastructure can all start in parallel
> after
> Fuel 7.0 is released:
>
> a) Finish splitting fuel-web.
> b) Get all Python components of Fuel to pass OpenStack CI.
> c) Set up unit test gates for non-Python components of Fuel (e.g.
> fuel-astute).
> d) Finish the transition of upstream modules in fuel-library to librarian.
> e) Set up rspec based gates for non-upstream modules in fuel-library.
>
> I think completing all these can be done by 8.0 SCF in December, and if
> not,
> must become a blocker requirement for 9.0 (Q3 2016).
>
> Stage 5: Bonus objectives that are not required to meet TC requirements for
> joining OpenStack, but still achievable based on current state of OpenStack
> Infrastructure:
>
> a) functional tests for Fuel UI
> b) beaker tests for non-upstream parts of fuel-library
>
> Stage 6: Stretch goal for the distant future is to actually make it
> possible to
> run multi-node deploy tests on OpenStack Infrastructure. I guess we can at
> least start that discussion in Tokyo...
>
> Am I missing any major risks or additional requirements here? Do the dates
> make
> sense?
>
> Thanks,
>
> --
> Dmitry Borodaenko
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> ht

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-18 Thread Hongbin Lu
Peng,

Several questions Here. You mentioned that HyperStack is a single big “bay”. 
Then, who is doing the multi-host scheduling, Hyper or something else? Were you 
suggesting to integrate Hyper with Magnum directly? Or you were suggesting to 
integrate Hyper with Magnum indirectly (i.e. through k8s, mesos and/or Nova)?

Best regards,
Hongbin

From: Peng Zhao [mailto:p...@hyper.sh]
Sent: July-17-15 12:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with 
Hyper

Hi, Adrian, Jay and all,

There could be a much longer version of this, but let me try to explain in a 
minimalist way.

Bay currently has two modes: VM-based, BM-based. In both cases, Bay helps to 
isolate different tenants' containers. In other words, bay is single-tenancy. 
For BM-based bay, the single tenancy is a worthy tradeoff, given the 
performance merits of LXC vs VM. However, for a VM-based bay, there is no 
performance gain, but single tenancy seems a must, due to the lack of isolation 
in container. Hyper, as a hypervisor-based substitute for container, brings the 
much-needed isolation, and therefore enables multi-tenancy. In HyperStack, we 
don't really need Ironic to provision multiple Hyper bays. On the other hand,  
the entire HyperStack cluster is a single big "bay". Pretty similar to how Nova 
works.

Also, HyperStack is able to leverage Cinder, Neutron for SDS/SDN functionality. 
So when someone submits a Docker Compose app, HyperStack would launch HyperVMs 
and call Cinder/Neutron to setup the volumes and network. The architecture is 
quite simple.

Here is a blog post I'd like to recommend: 
https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html

Let me know your questions.

Thanks,
Peng

-- Original --
From:  "Adrian 
Otto"mailto:adrian.o...@rackspace.com>>;
Date:  Thu, Jul 16, 2015 11:02 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"mailto:openstack-dev@lists.openstack.org>>;
Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run onmetalwith Hyper

Jay,

Hyper is a substitute for a Docker host, so I expect it could work equally well 
for all of the current bay types. Hyper’s idea of a “pod” and a Kubernetes 
“pod” are similar, but different. I’m not yet convinced that integrating Hyper 
host creation direct with Magnum (and completely bypassing nova) is a good 
idea. It probably makes more sense to use nova with the ironic virt driver to 
provision Hyper hosts so we can use those as substitutes for Bay nodes in our 
various Bay types. This would fit in the place where we use Fedora 
Atomic today. We could still rely on nova to do all of the machine instance 
management and accounting like we do today, but produce bays that use Hyper 
instead of a Docker host. Everywhere we currently offer CoreOS as an option we 
could also offer Hyper as an alternative, with some caveats.

There may be some caveats/drawbacks to consider before committing to a Hyper 
integration. I’ll be asking those of Peng also on this thread, so keep an eye 
out.

Thanks,

Adrian

On Jul 16, 2015, at 3:23 AM, Jay Lau 
mailto:jay.lau@gmail.com>> wrote:

Thanks Peng, then I can see two integration points for Magnum and Hyper:
1) Once Hyper and k8s integration finished, we can deploy k8s in two mode: 
docker and hyper mode, the end user can select which mode they want to use. For 
such case, we do not need to create a new bay but may need some enhancement for 
current k8s bay
2) After mesos and hyper integration,  we can treat mesos and hyper as a new 
bay to magnum. Just like what we are doing now for mesos+marathon.
Thanks!

2015-07-16 17:38 GMT+08:00 Peng Zhao mailto:p...@hyper.sh>>:
Hi Jay,

Yes, we are working with the community to integrate Hyper with Mesos and K8S. 
Since Hyper uses Pod as the default job unit, it is quite easy to integrate 
with K8S. Mesos takes a bit more efforts, but still straightforward.

We expect to finish both integration in v0.4 early August.

Best,
Peng

-
Hyper - Make VM run like Container



On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau 
mailto:jay.lau@gmail.com>> wrote:
Hi Peng,

Just want to get more for Hyper. If we create a hyper bay, then can I set up 
multiple hosts in a hyper bay? If so, who will do the scheduling, does mesos or 
some others integrate with hyper?
I did not find much info for hyper cluster management.

Thanks.

2015-07-16 9:54 GMT+08:00 Peng Zhao mailto:p...@hyper.sh>>:






-- Original --
From:  “Adrian 
Otto”mailto:adrian.o...@rackspace.com>>;
Date:  Wed, Jul 15, 2015 02:31 AM
To:  “OpenStack Development Mailing List (not for usage 
questions)“mailto:openstack-dev@lists.openstack.org>>;

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run onmetal withHyper

Peng,

On Jul 13, 2015, at 8:37 PM, Peng Zhao mailto:p...@h

Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-18 Thread Asselin, Ramy
HI Tang,

I just pushed the file [1]. Please read the migration instructions in the 
README.md.

Let me know of any issues [2].

[1] 
https://github.com/rasselin/os-ext-testing/commit/9b39bb3c9a3532e753763d5e65241eeff2a20a06
[2] https://github.com/rasselin/os-ext-testing/issues/new

Thanks,
Ramy


From: Asselin, Ramy
Sent: Saturday, July 18, 2015 11:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [CI] How to set a proxy for zuul.

HI Abhi & Tang,

Sorry I missed this thread. Let me know if you’ve resolved your issues.

My repo is undergoing migrations to reuse components in 
openstack-infra/puppet-openstackci.

For single-use-nodes, the file you need has been removed here [1]: But I see 
now that it is still needed, or a different function is needed based on this 
version used by infra: [2]. I will explore a solution.
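
For reference, the kind of parameter function zuul 2.x expects is a small Python 
function pulled into the layout via an "includes" stanza, roughly like this 
(modeled on project-config's openstack_functions.py; the exact body is 
illustrative):

# In layout.yaml:
#   includes:
#     - python-file: openstack_functions.py
#
# openstack_functions.py:
def single_use_node(item, job, params):
    # Mark the node to be taken offline once the job completes.
    params['OFFLINE_NODE_WHEN_COMPLETE'] = '1'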

A couple other notes, please use ci-sandbox  [3] instead of sandbox.

Zuul use behind a proxy: seems you got past this? Could you share your solution?

Also, feel free to join 3rd party ci IRC  meetings on freenode [4]. It’s a 
great place to ask questions and meet others setting up or maintaining these 
systems.

Thanks,
Ramy
IRC: asselin

[1] 
https://github.com/rasselin/os-ext-testing/commit/dafe822be7813522a6c7361993169da20b37ffb7
[2] 
https://github.com/openstack-infra/project-config/blob/master/zuul/openstack_functions.py
[3] http://git.openstack.org/cgit/openstack-dev/ci-sandbox/
[4] http://eavesdrop.openstack.org/#Third_Party_Meeting



From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Monday, July 13, 2015 11:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI] How to set a proxy for zuul.

Also if you want to change it you will need to talk with Asselin Ramy who is 
the owner of the repo you followed.

On Tue, Jul 14, 2015 at 12:18 PM, Abhishek Shrivastava 
mailto:abhis...@cloudbyte.com>> wrote:
​Basically it is not required, and if you see /etc/jenkins_jobs/config folder 
you will find one dsvm-cinder-tempest.yaml which is to be used basically not 
examples.yaml. So its not an issue.​

On Tue, Jul 14, 2015 at 12:07 PM, Tang Chen 
mailto:tangc...@cn.fujitsu.com>> wrote:

On 07/14/2015 01:46 PM, Abhishek Shrivastava wrote:
Instead of it use reusable_node option.

Thanks. Problem resolved. :)

BTW, single_use_node is written in layout.yaml by default.
If it doesn't exist anymore, do we need a patch to fix it ?

For someone who uses CI for the first time, it is really a problem..

And also, if I want to post patch for zuul, where should I post the patch ?

Thanks.



On Tue, Jul 14, 2015 at 9:12 AM, Tang Chen 
mailto:tangc...@cn.fujitsu.com>> wrote:
Hi Abhishek, All,

I found the problem.

My /etc/zuul/layout/layout.yaml has the following config:

jobs:
  - name: ^dsvm-tempest.*$
    parameter-function: single_use_node

But in _parseConfig() in zuul/scheduler.py, it failed to find single_use_node().

fname = config_job.get('parameter-function', None)
if fname:
    func = config_env.get(fname, None)
    if not func:
        raise Exception("Unable to find function %s" % fname)

So projects section was not parsed.

Does anyone know why ?

Thanks.


On 07/14/2015 10:54 AM, Tang Chen wrote:
Hi Abhishek,

I printed the self.layout.projects in zuul/scheduler.py, it is empty.
So the project was not found.

But I did do the jenkins-jobs --flush-cache update /etc/jenkins_jobs/config/
And I did configure openstack-dev/sandbox in layout.yaml.

Do you have any idea what's wrong here ?

Thanks.
On 07/13/2015 05:58 PM, Tang Chen wrote:

On 07/13/2015 04:35 PM, Abhishek Shrivastava wrote:
Updating jobs using "sudo jenkins-jobs --flush-cache update 
/etc/jenkins_jobs/config/". Also update the myvendor in examples.yaml


Sorry, I updated the jobs, restart the whole machine. But it still doesn't work.

By the way, there is no vendor in examples.yaml.

It is still this error: Project openstack-dev/sandbox not found

Anything else should I pay attention to?

Thanks.


On Mon, Jul 13, 2015 at 1:45 PM, Tang Chen 
mailto:tangc...@cn.fujitsu.com>> wrote:

On 07/13/2015 03:50 PM, Abhishek Shrivastava wrote:
​ Use tester or something, also are you updating the jobs or not?​

I used tester as my vendor. It doesn't work.

And what do you mean by updating the jobs ? I built the 
noop-check-communication
job once in Jenkins UI. Does it matter ? All the others are not touched.

And referring to the error, "Project openstack-dev/sandbox not found", it seems 
like
somewhere the project name was wrong.

right ?


Thanks.


On Mon, Jul 13, 2015 at 1:16 PM, Tang Chen 
mailto:tangc...@cn.fujitsu.com>> wrote:
Hi Abhishek,

Thanks for the quick reply.
On 07/13/2015 03:16 PM, Abhishek Shrivastava wrote:
Also check that Gearman is connecting or not through Jenkins UI.

On Mon, Jul 13, 2015 at 12:45 PM, Abhishek Shrivastava 
mailto:abhis...@clo

Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-18 Thread Asselin, Ramy
HI Abhi & Tang,

Sorry I missed this thread. Let me know if you’ve resolved your issues.

My repo is undergoing migrations to reuse components in 
openstack-infra/puppet-openstackci.

For single-use-nodes, the file you need has been removed here [1]: But I see 
now that it is still needed, or a different function is needed based on this 
version used by infra: [2]. I will explore a solution.

A couple other notes, please use ci-sandbox  [3] instead of sandbox.

Zuul use behind a proxy: seems you got past this? Could you share your solution?

Also, feel free to join 3rd party ci IRC  meetings on freenode [4]. It’s a 
great place to ask questions and meet others setting up or maintaining these 
systems.

Thanks,
Ramy
IRC: asselin

[1] 
https://github.com/rasselin/os-ext-testing/commit/dafe822be7813522a6c7361993169da20b37ffb7
[2] 
https://github.com/openstack-infra/project-config/blob/master/zuul/openstack_functions.py
[3] http://git.openstack.org/cgit/openstack-dev/ci-sandbox/
[4] http://eavesdrop.openstack.org/#Third_Party_Meeting



From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Monday, July 13, 2015 11:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI] How to set a proxy for zuul.

Also if you want to change it you will need to talk with Asselin Ramy who is 
the owner of the repo you followed.

On Tue, Jul 14, 2015 at 12:18 PM, Abhishek Shrivastava 
mailto:abhis...@cloudbyte.com>> wrote:
​Basically it is not required, and if you see /etc/jenkins_jobs/config folder 
you will find one dsvm-cinder-tempest.yaml which is to be used basically not 
examples.yaml. So its not an issue.​

On Tue, Jul 14, 2015 at 12:07 PM, Tang Chen 
mailto:tangc...@cn.fujitsu.com>> wrote:

On 07/14/2015 01:46 PM, Abhishek Shrivastava wrote:
Instead of it use reusable_node option.

Thanks. Problem resolved. :)

BTW, single_use_node is written in layout.yaml by default.
If it doesn't exist anymore, do we need a patch to fix it ?

For someone who uses CI for the first time, it is really a problem..

And also, if I want to post patch for zuul, where should I post the patch ?

Thanks.




On Tue, Jul 14, 2015 at 9:12 AM, Tang Chen 
mailto:tangc...@cn.fujitsu.com>> wrote:
Hi Abhishek, All,

I found the problem.

My /etc/zuul/layout/layout.yaml has the following config:

jobs:
  - name: ^dsvm-tempest.*$
    parameter-function: single_use_node

But in _parseConfig() in zuul/scheduler.py, it failed to find single_use_node().

fname = config_job.get('parameter-function', None)
if fname:
    func = config_env.get(fname, None)
    if not func:
        raise Exception("Unable to find function %s" % fname)

So projects section was not parsed.

Does anyone know why ?

Thanks.



On 07/14/2015 10:54 AM, Tang Chen wrote:
Hi Abhishek,

I printed the self.layout.projects in zuul/scheduler.py, it is empty.
So the project was not found.

But I did do the jenkins-jobs --flush-cache update /etc/jenkins_jobs/config/
And I did configure openstack-dev/sandbox in layout.yaml.

Do you have any idea what's wrong here ?

Thanks.

On 07/13/2015 05:58 PM, Tang Chen wrote:

On 07/13/2015 04:35 PM, Abhishek Shrivastava wrote:
Updating the jobs means running "sudo jenkins-jobs --flush-cache update
/etc/jenkins_jobs/config/". Also update myvendor in examples.yaml.


Sorry, I updated the jobs and restarted the whole machine. But it still doesn't work.

By the way, there is no vendor in examples.yaml.

It is still this error: Project openstack-dev/sandbox not found

Is there anything else I should pay attention to?

Thanks.



On Mon, Jul 13, 2015 at 1:45 PM, Tang Chen <tangc...@cn.fujitsu.com> wrote:

On 07/13/2015 03:50 PM, Abhishek Shrivastava wrote:
Use tester or something. Also, are you updating the jobs or not?

I used tester as my vendor. It doesn't work.

And what do you mean by updating the jobs? I built the noop-check-communication
job once in the Jenkins UI. Does it matter? All the others are not touched.

And referring to the error, "Project openstack-dev/sandbox not found", it seems
like the project name is wrong somewhere, right?


Thanks.



On Mon, Jul 13, 2015 at 1:16 PM, Tang Chen <tangc...@cn.fujitsu.com> wrote:
Hi Abhishek,

Thanks for the quick reply.
On 07/13/2015 03:16 PM, Abhishek Shrivastava wrote:
Also check that Gearman is connecting or not through Jenkins UI.

On Mon, Jul 13, 2015 at 12:45 PM, Abhishek Shrivastava <abhis...@cloudbyte.com> wrote:
First of all, change "vendor" to your vendor name in the
/etc/jenkins_jobs/config/projects.yaml file. Also, restart zuul and the zuul
merger.

I did the check. The Gearman plugin works fine. In the Jenkins UI, I tested the
connection, and it succeeded.

And also, I restarted zuul and the zuul merger every time I modified the yaml files.

But it doesn't work.

And the vendor, does that matter? And what vendor name should I provide?
I cannot find any vendor info in 

Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing with SSH timeout.

2015-07-18 Thread Asselin, Ramy
We ran into this issue as well. I never found the root cause, but I found a
work-around: use Neutron networking instead of the default nova networking.

If you’re using devstack-gate, it’s as simple as:
export DEVSTACK_GATE_NEUTRON=1

Then run the job as usual.

Ramy

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Friday, July 17, 2015 9:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing 
with SSH timeout.

Hi Folks,

In my CI I have been seeing the following tempest test failures for the past couple of days.

· tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario [361.274316s] ... FAILED

· tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern [320.122458s] ... FAILED

· tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern [317.399342s] ... FAILED

· tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes [257.858272s] ... FAILED

The failure logs are always the same every time, i.e.:



03:34:09 2015-07-17 03:21:13,256 9505 ERROR[tempest.scenario.manager] 
(TestVolumeBootPattern:test_volume_boot_pattern) Initializing SSH connection to 
172.24.5.1 failed. Error: Connection to the 172.24.5.1 via SSH timed out.

03:34:09 User: cirros, Password: None

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
Traceback (most recent call last):

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
"tempest/scenario/manager.py", line 312, in get_remote_client

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
linux_client.validate_authentication()

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
"tempest/common/utils/linux/remote_client.py", line 62, in 
validate_authentication

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
self.ssh_client.test_connection_auth()

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py",
 line 151, in test_connection_auth

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
connection = self._get_ssh_connection()

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py",
 line 87, in _get_ssh_connection

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
password=self.password)

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
SSHTimeout: Connection to the 172.24.5.1 via SSH timed out.

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager User: 
cirros, Password: None

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager

03:34:09 2015-07-17 03:21:14,377 9505 INFO 
[tempest_lib.common.re



Because of these, every job is failing, so if someone can help me with this,
please do reply.

--
Thanks & Regards,
Abhishek
Cloudbyte Inc.


Re: [openstack-dev] 7/17 state of the gate (you know, fires)

2015-07-18 Thread Matt Riedemann



On 7/18/2015 10:48 AM, Louis Taylor wrote:

On Fri, Jul 17, 2015 at 07:51:35PM -0500, Matt Riedemann wrote:

We started the day with a mock 1.1.4 release breaking unit tests for a few
projects (nova, cinder, ironic at least).

The nova blocker was tracked with bug 1475661.

We got a g-r change up to block mock!=1.1.4 but that didn't fix the issue
because oslo.versionedobjects pulls mock in before pip processes the !=.
Luckily lifeless reverted the regression and released mock 1.2 so we should
be OK on that front for now.

However, cinder/glance/nova, which gate on the ceph job, are blocked by bug
1475811 which is a regression with the rbd driver in glance_store 0.7.0,
which was put into upper-constraints.txt today.

I have a patch up for glance_store here:

https://review.openstack.org/#/c/203294/

And an exclusion of 0.7.0 in g-r here:

https://review.openstack.org/#/c/203295/

If the glance core team could get around to approving that fix and releasing
0.7.1 it would unblock the gate for glance/cinder/nova and make me happy.


Hi Matt, thanks for working on this.

I've approved the glance_store fix and have submitted a request for a new
release:

 https://review.openstack.org/#/c/203362/

Cheers,
Louis






I think we're good now, let the rechecks begin!

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-18 Thread Steven Dake (stdake)


From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, July 16, 2015 at 12:35 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

To be clear we have two pursuits on this thread:

1) What to rename bay.platform to.
2) How we might eliminate the attribute, or replace it with something more
intuitive.

We have a consensus now on how to address #1. My direction to Kannan is to 
proceed using server_type as the new attribute name. If anyone disagrees, you 
can let us know now, or submit a subsequent patch to address that concern, and 
we can vote on it in Gerrit.

On the subject of potentially eliminating or replacing this attribute with
something else, let’s continue to discuss that.

One key issue is that our current HOT file format does not have any facility 
for conditional logic evaluation, so if the Bay orchestration differs between 
various server_type values, we need to select the appropriate value based on 
the way the bay is created. I’m open to hearing suggestions for implementing 
any needed conditional logic, if we can put it into a better place.

Conditional logic could be done easily by having separate templates for each
variant, naming the template based upon the 3 properties, and loading the
correct template by name.  In other words, do the conditional logic in our heat
template launching, and leave the templates without conditional logic.
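
To make that concrete, here is a rough sketch of the launch-time selection
(the template names and helper below are made up for illustration, not the
actual Magnum code):

TEMPLATE_MAP = {
    ('kubernetes', 'vm'): 'kubecluster-vm.yaml',
    ('kubernetes', 'baremetal'): 'kubecluster-baremetal.yaml',
}


def pick_template(coe, server_type):
    # All the conditional logic lives here, at launch time; the HOT files
    # themselves stay free of conditionals.
    try:
        return TEMPLATE_MAP[(coe, server_type)]
    except KeyError:
        raise ValueError("no template for coe=%s, server_type=%s"
                         % (coe, server_type))

Each template then stays a straight-line HOT file with no conditional logic in it.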

Regards
-steve


Adrian

On Jul 16, 2015, at 8:54 AM, Fox, Kevin M <kevin@pnnl.gov> wrote:

Wait... so the issue is if you were to just use nova flavor, you don't have 
enough information to choose a set of templates that may be more optimal for 
that flavor type (like vm's or bare metal)? Is this a NaaS vs flatdhcp kind of 
thing? I just took a quick skim of the heat templates and it wasn't really 
clear why the template needs to know.

If that sort of thing is needed, maybe allow a heat environment or the template 
set to be tagged onto nova flavors in Magnum by the admin, and then the user 
can be concerned only with nova flavors? They are used to dealing with them.
Sahara and Trove do some similar things, I think.

Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Wednesday, July 15, 2015 8:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Kai,

Sorry for the confusion. To clarify, I was thinking about how to name the field you
proposed in the baymodel [1]. I prefer to drop it and use the existing field
‘flavor’ to map to the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Hi HongBin,

I think flavor introduces more confusion than nova_instance_type or
instance_type.


Flavors have no binding to 'vm' or 'baremetal'.

Let me summarize the initial question:
  We have two kinds of templates for Kubernetes now
(templates in Heat are not as flexible as a programming language with if/else etc., and
separate templates are easier to maintain).
Of the two kinds of Kubernetes templates, one boots a VM and the other boots bare metal.
'VM' or 'baremetal' here is just used for Heat template selection.


1> If we used flavor, it is a Nova-specific concept. Take two as an example:
m1.small and m1.middle.
   m1.small -> 'VM', m1.middle -> 'VM'
   Both m1.small and m1.middle can be used in a 'VM' environment.
So we should not use m1.small as a template identifier. That's why I think
flavor is not a good choice.


2> @Adrian, we have the --flavor-id field for baymodel now; it would be picked up by
the heat templates and used to boot instances with that flavor.


3> Finally, I think instance_type is better.  instance_type can be used as the heat
template identification parameter.

instance_type = 'vm' means such templates fit a normal 'VM' heat stack
deploy.

instance_type = 'baremetal' means such templates fit an ironic baremetal
heat stack deploy.





Thanks!


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Hon

Re: [openstack-dev] 7/17 state of the gate (you know, fires)

2015-07-18 Thread Louis Taylor
On Fri, Jul 17, 2015 at 07:51:35PM -0500, Matt Riedemann wrote:
> We started the day with a mock 1.1.4 release breaking unit tests for a few
> projects (nova, cinder, ironic at least).
> 
> The nova blocker was tracked with bug 1475661.
> 
> We got a g-r change up to block mock!=1.1.4 but that didn't fix the issue
> because oslo.versionedobjects pulls mock in before pip processes the !=.
> Luckily lifeless reverted the regression and released mock 1.2 so we should
> be OK on that front for now.
> 
> However, cinder/glance/nova, which gate on the ceph job, are blocked by bug
> 1475811 which is a regression with the rbd driver in glance_store 0.7.0,
> which was put into upper-constraints.txt today.
> 
> I have a patch up for glance_store here:
> 
> https://review.openstack.org/#/c/203294/
> 
> And an exclusion of 0.7.0 in g-r here:
>
> https://review.openstack.org/#/c/203295/
>
> If the glance core team could get around to approving that fix and releasing
> 0.7.1 it would unblock the gate for glance/cinder/nova and make me happy.

Hi Matt, thanks for working on this.

I've approved the glance_store fix and have submitted a request for a new
release:

https://review.openstack.org/#/c/203362/

Cheers,
Louis




Re: [openstack-dev] 7/17 state of the gate (you know, fires)

2015-07-18 Thread Matt Riedemann



On 7/18/2015 9:31 AM, Matt Riedemann wrote:



On 7/18/2015 8:54 AM, Matt Riedemann wrote:



On 7/17/2015 7:51 PM, Matt Riedemann wrote:

We started the day with a mock 1.1.4 release breaking unit tests for a
few projects (nova, cinder, ironic at least).

The nova blocker was tracked with bug 1475661.

We got a g-r change up to block mock!=1.1.4 but that didn't fix the
issue because oslo.versionedobjects pulls mock in before pip processes
the !=.  Luckily lifeless reverted the regression and released mock 1.2
so we should be OK on that front for now.

However, cinder/glance/nova, which gate on the ceph job, are blocked by
bug 1475811 which is a regression with the rbd driver in glance_store
0.7.0, which was put into upper-constraints.txt today.

I have a patch up for glance_store here:

https://review.openstack.org/#/c/203294/

And an exclusion of 0.7.0 in g-r here:

https://review.openstack.org/#/c/203295/

If the glance core team could get around to approving that fix and
releasing 0.7.1 it would unblock the gate for glance/cinder/nova and
make me happy.



The g-r block on glance_store 0.7.0 is approved but can't merge because
one of the jobs keeps failing to build qpid-proton:

http://logs.openstack.org/95/203295/1/check/gate-requirements-integration-dsvm/10073dd/console.html#_2015-07-18_13_16_32_766



So if anyone from oslo.messaging or qpid can help with that it'd be good.



Got a change up here for devstack:

https://review.openstack.org/#/c/203354/



devstack was the wrong place, we found the regression from yesterday, 
and have a revert posted:


https://review.openstack.org/#/c/203355/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [fuel][puppet] Managing upstream puppet modules in fuel-library under librarian

2015-07-18 Thread Jay Pipes

On 07/17/2015 11:27 PM, Dmitry Borodaenko wrote:

One of the concerns raised by TC in response to the proposal to add
Fuel to OpenStack Projects [0] is the fact that fuel-library includes
copies of upstream Puppet modules, most importantly from OpenStack
Puppet [1].

[0] https://review.openstack.org/199232
[1] https://lwn.net/Articles/648331/

I have brought this up in the Fuel weekly meeting [2], and we have
agreed to start implementing the proposal from Alex Schultz to use
puppet-librarian to manage upstream modules [3].

[2] http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-07-16-16.00.log.html#l-245
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-June/067806.html

 We are days away from Feature Freeze for Fuel 7.0 scheduled for July
23 [4], so it's not reasonable to make librarian mandatory for all
upstream modules just yet, but I believe that we should make full
migration to librarian a blocker requirement for Fuel 8.0.


Thanks for stepping up to address this requested change, Dima. I've 
given a preliminary review to Alex's puppet-librarian patch and things 
look fine from my perspective.


I'm quite interested to know Emilien and Chris Hoge's opinions on this 
direction and proposed timeline.



[4] https://wiki.openstack.org/wiki/Fuel/7.0_Release_Schedule

In the meanwhile, we're going to try to implement integration of the
newly added puppet-ironic module under librarian [5][6][7].
[5] https://review.openstack.org/194184
[6] https://review.openstack.org/202763
[7] https://review.openstack.org/202767

To make sure this effort does not block integration of Ironic support
in Fuel 7.0 and does not impact the Feature Freeze, we've set a
deadline of Tuesday, July 21 for this effort. Fuelers and other
Puppet experts, please help Alex and Pavlo get this done on time by
reviewing their changes and offering advice. It is a small but
crucial step towards full convergence with upstream, and it would help a
lot if we could confirm now that this approach is viable.


Agreed, reviews from fuel-core folks would be very much appreciated.

Best,
-jay



Re: [openstack-dev] 7/17 state of the gate (you know, fires)

2015-07-18 Thread Matt Riedemann



On 7/18/2015 8:54 AM, Matt Riedemann wrote:



On 7/17/2015 7:51 PM, Matt Riedemann wrote:

We started the day with a mock 1.1.4 release breaking unit tests for a
few projects (nova, cinder, ironic at least).

The nova blocker was tracked with bug 1475661.

We got a g-r change up to block mock!=1.1.4 but that didn't fix the
issue because oslo.versionedobjects pulls mock in before pip processes
the !=.  Luckily lifeless reverted the regression and released mock 1.2
so we should be OK on that front for now.

However, cinder/glance/nova, which gate on the ceph job, are blocked by
bug 1475811 which is a regression with the rbd driver in glance_store
0.7.0, which was put into upper-constraints.txt today.

I have a patch up for glance_store here:

https://review.openstack.org/#/c/203294/

And an exclusion of 0.7.0 in g-r here:

https://review.openstack.org/#/c/203295/

If the glance core team could get around to approving that fix and
releasing 0.7.1 it would unblock the gate for glance/cinder/nova and
make me happy.



The g-r block on glance_store 0.7.0 is approved but can't merge because
one of the jobs keeps failing to build qpid-proton:

http://logs.openstack.org/95/203295/1/check/gate-requirements-integration-dsvm/10073dd/console.html#_2015-07-18_13_16_32_766


So if anyone from oslo.messaging or qpid can help with that it'd be good.



Got a change up here for devstack:

https://review.openstack.org/#/c/203354/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] 7/17 state of the gate (you know, fires)

2015-07-18 Thread Matt Riedemann



On 7/17/2015 7:51 PM, Matt Riedemann wrote:

We started the day with a mock 1.1.4 release breaking unit tests for a
few projects (nova, cinder, ironic at least).

The nova blocker was tracked with bug 1475661.

We got a g-r change up to block mock!=1.1.4 but that didn't fix the
issue because oslo.versionedobjects pulls mock in before pip processes
the !=.  Luckily lifeless reverted the regression and released mock 1.2
so we should be OK on that front for now.

However, cinder/glance/nova, which gate on the ceph job, are blocked by
bug 1475811 which is a regression with the rbd driver in glance_store
0.7.0, which was put into upper-constraints.txt today.

I have a patch up for glance_store here:

https://review.openstack.org/#/c/203294/

And an exclusion of 0.7.0 in g-r here:

https://review.openstack.org/#/c/203295/

If the glance core team could get around to approving that fix and
releasing 0.7.1 it would unblock the gate for glance/cinder/nova and
make me happy.



The g-r block on glance_store 0.7.0 is approved but can't merge because 
one of the jobs keeps failing to build qpid-proton:


http://logs.openstack.org/95/203295/1/check/gate-requirements-integration-dsvm/10073dd/console.html#_2015-07-18_13_16_32_766

So if anyone from oslo.messaging or qpid can help with that it'd be good.

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [fuel][puppet] Managing upstream puppet modules in fuel-library under librarian

2015-07-18 Thread Davanum Srinivas
Dmitry,

+1 to "full migration to librarian a blocker requirement for Fuel 8.0."

thanks,
dims

On Fri, Jul 17, 2015 at 11:27 PM, Dmitry Borodaenko
 wrote:
> One of the concerns raised by TC in response to the proposal to add Fuel to
> OpenStack Projects [0] is the fact that fuel-library includes copies of
> upstream Puppet modules, most importantly from OpenStack Puppet [1].
>
> [0] https://review.openstack.org/199232
> [1] https://lwn.net/Articles/648331/
>
> I have brought this up in the Fuel weekly meeting [2], and we have agreed to
> start implementing the proposal from Alex Schultz to use puppet-librarian to
> manage upstream modules [3].
>
> [2]
> http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-07-16-16.00.log.html#l-245
> [3] http://lists.openstack.org/pipermail/openstack-dev/2015-June/067806.html
>
> We are days away from Feature Freeze for Fuel 7.0 scheduled for July 23 [4],
> so it's not reasonable to make librarian mandatory for all upstream modules
> just
> yet, but I believe that we should make full migration to librarian a blocker
> requirement for Fuel 8.0.
>
> [4] https://wiki.openstack.org/wiki/Fuel/7.0_Release_Schedule
>
> In the meanwhile, we're going to try to implement integration of the newly
> added puppet-ironic module under librarian [5][6][7].
> [5] https://review.openstack.org/194184
> [6] https://review.openstack.org/202763
> [7] https://review.openstack.org/202767
>
> To make sure this effort does not block integration of Ironic support in
> Fuel
> 7.0 and does not impact the Feature Freeze, we've set a deadline for this
> effort until Tuesday July 21. Fuelers and other Puppet experts, please help
> Alex and Pavlo get this done on time by reviewing their changes and offering
> advice. It is a small but crucial step towards full convergence with
> upstream,
> it would help a lot if we could confirm now that this approach is viable.
>
> Thank you,
>
> --
> Dmitry Borodaenko
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-18 Thread Sam Stoelinga
+1 on Kevin Benton's comments.
Ironic should have integration with switches where the switches are SDN
compatible. The individual bare metal node should not care which VLAN,
VXLAN or other translation is programmed at the switch. The individual bare
metal node just knows "I have 2 NICs and these are on Neutron network
x." The SDN controller is responsible for making sure the bare metal node
only has access to Neutron network x by changing the switch
configuration dynamically.

Making an individual bare metal node have access to several VLANs and letting the
node configure a VLAN tag itself is a big
security risk and should not be supported, unless an operator specifically
configures a bare metal node to be a VLAN trunk.

Sam Stoelinga

On Sat, Jul 18, 2015 at 5:10 AM, Kevin Benton  wrote:

> > which requires VLAN info to be pushed to the host. I keep hearing "bare
> metal will never need to know about VLANs" so I want to quash that ASAP.
>
> That's leaking implementation details though if the bare metal host only
> needs to be on one network. It also creates a security risk if the bare
> metal node is untrusted.
>
> If the tagging is to make it so it can access multiple networks, then that
> makes sense for now but it should ultimately be replaced by the vlan trunk
> ports extension being worked on this cycle that decouples the underlying
> network transport from what gets tagged to the VM/bare metal.
> On Jul 17, 2015 11:47 AM, "Jim Rollenhagen" 
> wrote:
>
>> On Fri, Jul 17, 2015 at 10:56:36AM -0600, Kevin Benton wrote:
>> > Check out my comments on the review. Only Neutron knows whether or not
>> an
>> > instance needs to do manual tagging based on the plugin/driver loaded.
>> >
>> > For example, Ironic/bare metal ports can be bound by neutron with a
>> correct
>> > driver so they shouldn't get the VLAN information at the instance level
>> in
>> > those cases. Nova has no way to know whether Neutron is configured this
>> way
>> > so Neutron should have an explicit response in the port binding
>> information
>> > indicating that an instance needs to tag.
>>
>> Agree. However, I just want to point out that there are neutron drivers
>> that exist today[0] that support bonded NICs with trunked VLANs, which
>> requires VLAN info to be pushed to the host. I keep hearing "bare metal
>> will never need to know about VLANs" so I want to quash that ASAP.
>>
>> As far as Neutron sending the flag to decide whether the instance should
>> tag packets, +1, I think that should work.
>>
>> // jim
>> >
>> > On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen <
>> j...@jimrollenhagen.com>
>> > wrote:
>> >
>> > > On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
>> > > > On 17 July 2015 at 11:23, Sean Dague  wrote:
>> > > > > On 07/16/2015 06:06 PM, Sean M. Collins wrote:
>> > > > >> On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
>> > > > >>> So it looks like there is a missing part in this feature. There
>> > > should
>> > > > >>> be a way to "hide" this information if the instance does not
>> require
>> > > to
>> > > > >>> configure vlan interfaces to make network functional.
>> > > > >>
>> > > > >> I just commented on the review, but the provider network API
>> extension
>> > > > >> is admin only, most likely for the reasons that I think someone
>> has
>> > > > >> already mentioned, that it exposes details of the phyiscal
>> network
>> > > > >> layout that should not be exposed to tenants.
>> > > > >
>> > > > > So, clearly, under some circumstances the network operator wants
>> to
>> > > > > expose this information, because there was the request for that
>> > > feature.
>> > > > > The question in my mind is what circumstances are those, and what
>> > > > > additional information needs to be provided here.
>> > > > >
>> > > > > There is always a balance between the private cloud case which
>> wants to
>> > > > > enable more self service from users (and where the users are
>> often also
>> > > > > the operators), and the public cloud case where the users are
>> outsiders
>> > > > > and we want to hide as much as possible from them.
>> > > > >
>> > > > > For instance, would an additional attribute on a provider network
>> that
>> > > > > says "this is cool to tell people about" be an acceptable
>> approach? Is
>> > > > > there some other creative way to tell our infrastructure that
>> these
>> > > > > artifacts are meant to be exposed in this installation?
>> > > > >
>> > > > > Just kicking around ideas, because I know a pile of gate hardware
>> for
>> > > > > everyone to use is at the other side of answers to these
>> questions. And
>> > > > > given that we've been running full capacity for days now, keeping
>> this
>> > > > > ball moving forward would be great.
>> > > >
>> > > > Maybe we just need to add policy around who gets to see that extra
>> > > > detail, and maybe hide it by default?
>> > > >
>> > > > Would that deal with the concerns here?
>> > >
>> > > I'm not so 

Re: [openstack-dev] [Fuel][Fuel-library] Using librarian-puppet to manage upstream fuel-library modules

2015-07-18 Thread Igor Kalnitsky
Sergii,

It's not about tests, it's about how we want to retrieve upstream
packages. Directly from Git? Or package them and add deps to our
fuel-library package?

Thanks,
Igor

On Sat, Jul 18, 2015 at 1:14 AM, Sergii Golovatiuk
 wrote:
> Igor,
>
> There shouldn't be any holy wars, as we are going to add our tests to the Puppet
> manifest projects. We'll be able to resolve issues fast enough. In case of
> problems we can pin librarian to a particular commit in the upstream repo.
>
>
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Fri, Jul 17, 2015 at 8:19 AM, Igor Kalnitsky 
> wrote:
>>
>> Hello guys,
>>
>> > Update 'make iso' scripts:
>> >   * Make them use 'r10k' (or other tool) to download upstream modules
>> > based on 'Puppetfile'
>>
>> I foresee holy wars with our Build team. AFAIK they are deeply
>> concerned about Internet access during the ISO build process. Hence,
>> they'll propose to package the upstream puppet manifests, and that can
>> complicate our ability to work with upstream flexibly.
>>
>> Thanks,
>> Igor
>>
>> On Thu, Jul 16, 2015 at 11:55 PM, Sergii Golovatiuk
>>  wrote:
>> > Hi,
>> >
>> >
>> > On Thu, Jul 16, 2015 at 9:01 AM, Aleksandr Didenko
>> > 
>>
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> guys, what if we "simplify" things a bit? All we need is:
>> >>
>> >> Remove all the community modules from fuel-library.
>> >> Create 'Puppetfile' with list of community modules and their versions
>> >> that
>> >> we currently use.
>> >> Make sure all our customizations are proposed to the upstream modules
>> >> (via
>> >> gerrit or github pull-requests).
>> >> Create a separate file with list of patches for each module we need to
>> >> cherry-pick (we need to support gerrit reviews and github
>> >> pull-requests).
>> >> Update 'make iso' scripts:
>> >>
>> >> Make them use 'r10k' (or other tool) to download upstream modules based
>> >> on
>> >> 'Puppetfile'
>> >
>> > I am giving +1 to librarian here.
>> >>
>> >> Iterate over list of patches for each module and cherry-pick them (just
>> >> like we do for custom ISO build. I'm not sure if librarian provides
>> >> such
>> >> possibility)
>> >
>> >
>> > Puppetlabs is in transition, moving all modules to openstack. We may
>> > use pull-requests here, just specifying the repository. However, I am thinking
>> > about hacking librarian to add a cherry-pick option.
>> >
>> >>
>> >> Eventually, when all the functionality we rely on is accepted in upstream
>> >> modules, we'll get rid of the file with the list of patches for modules. But
>> >> meanwhile it should be much easier to manage modules and customizations
>> >> this way.
>> >>
>> >> Regards,
>> >>
>> >> Alex
>> >>
>> >>
>> >>
>> >> On Fri, Jul 10, 2015 at 5:25 PM, Alex Schultz 
>> >> wrote:
>> >>>
>> >>> Done. Sorry about that.
>> >>>
>> >>> -Alex
>> >>>
>> >>> On Fri, Jul 10, 2015 at 9:22 AM, Simon Pasquier
>> >>> 
>> >>> wrote:
>> 
>>  Alex, could you enable the comments for all on your document?
>>  Thanks!
>>  Simon
>> 
>>  On Thu, Jul 9, 2015 at 11:07 AM, Bogdan Dobrelya
>>   wrote:
>> >
>> > > Hello everyone,
>> > >
>> > > I took some time this morning to write out a document[0] that
>> > > outlines
>> > > one possible ways for us to manage our upstream modules in a more
>> > > consistent fashion. I know we've had a few emails bouncing around
>> > > lately around this topic of our use of upstream modules and how
>> > > can
>> > > we
>> > > improve this. I thought I would throw out my idea of leveraging
>> > > librarian-puppet to manage the upstream modules within our
>> > > fuel-library repository. Ideally, all upstream modules should come
>> > > from upstream sources and be removed from the fuel-library itself.
>> > > Unfortunately because of the way our repository sits today, this
>> > > is a
>> > > very large undertaking and we do not currently have a way to
>> > > manage
>> > > the inclusion of the modules in an automated way. I believe this
>> > > is
>> > > where librarian-puppet can come in handy and provide a way to
>> > > manage
>> > > the modules. Please take a look at my document[0] and let me know
>> > > if
>> > > there are any questions.
>> > >
>> > > Thanks,
>> > > -Alex
>> > >
>> > > [0]
>> > >
>> > > https://docs.google.com/document/d/13aK1QOujp2leuHmbGMwNeZIRDr1bFgJi88nxE642xLA/edit?usp=sharing
>> >
>> > The document is great, Alex!
>> > I fully support the idea of starting to adapt fuel-library to
>> > the suggested scheme. The "monitoring" feature of librarian looks
>> > non-intrusive, and we have no blockers to start using librarian
>> > immediately.
>> >
>> > --
>> > Best regards,
>> > Bogdan Dobrelya,
>> > Irc #bogdando
>> >
>> >
>> >

Re: [openstack-dev] [keystone] Request for spec freeze execption

2015-07-18 Thread Morgan Fainberg
This is a relatively small change in scope and seems to be worth including in
Liberty. I'm OK with this exception if there are no other red flags from the
core team.

Sent via mobile

> On Jul 17, 2015, at 11:15, Henry Nash  wrote:
> 
> Hi
> 
> Role assignment inheritance has been an extension in Keystone for a number of 
> cycles.  With the introduction of project hierarchies (which also support 
> assignment inheritance), we’d like to move inheritance into core.
> 
> At the same time as the move to core, we’d like to modify the way inheritance 
> rules are applied.  When assignment inheritance was first introduced, it was 
> designed for domain->project inheritance - and under that model, the rule was 
> that an inherited role assigned to the domain would be applied to all the 
> projects in the domain, but not to the domain itself.  Now that we have 
> generalised project hierarchies, this seems to make a lot less sense…and the 
> more standard model of the assignment being applied to its target and all the 
> target’s sub-projects makes more sense.
> 
> The proposal is, therefore, that the API we support in core (which is, by 
> definition, different from the one in the OS-INHERIT extension) will support 
> this new model, but we will, for backward compatibility, continue to support 
> the old extension (with the old model of inheritance), marked as deprecated, 
> with removal no sooner than 4 cycles. Although probably not recommended from 
> an operations usability point of view, there would be no issue mixing and 
> matching assignments from both the core API and the extension API during the 
> deprecation period.
> 
> The spec for this change can be found here:  
> https://review.openstack.org/#/c/200434
> 
> At the next keystone IRC meeting, I’d like to discuss granting a spec freeze 
> exception to allow this move to core.
> 
> Henry
> 



[openstack-dev] [fuel] Plan to implement the OpenStack Testing Interface for Fuel

2015-07-18 Thread Dmitry Borodaenko

One of the requirements for all OpenStack projects is to use the same Testing
Interface [0]. In response to the Fuel application [1], the Technical Committee
has clarified that this includes running gate jobs on the OpenStack
Infrastructure [2][3].

[0] http://governance.openstack.org/reference/project-testing-interface.html
[1] https://review.openstack.org/199232
[2] 
http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-07-14-20.02.log.html#l-150
[3] https://review.openstack.org/201766

Although the proposed formal requirement could use some clarification,
according to the meeting log linked above, TC has acknowledged that OpenStack
Infrastructure can't currently host deployment tests for projects like Fuel and
TripleO. This narrows the requirement down to codestyle checks, unit tests,
coverage report, source tarball generation, and docs generation for all Python
components of Fuel.

As I mentioned in my previous email [4], we're days away from Feature Freeze
for Fuel 7.0, so we need to plan a gradual transition instead of making the
testing interface a hard requirement for all repositories.

[4] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069906.html

I propose the following stages for transition of Fuel CI to OpenStack
Infrastructure:

Stage 1: Enable non-voting jobs compliant with the testing interface for a
single Python component of Fuel. This has no impact on Fuel schedule and should
be done immediately. Boris Pavlovic has kindly agreed to be our code fairy and
magicked together a request that enables such jobs for nailgun in fuel-web [5].

[5] https://review.openstack.org/202892

As it turns out, OpenStack CI imposes strict limits on a project's directory
structure, and fuel-web doesn't fit those since it contains a long list of
components besides nailgun, some of them not even in Python. Making the above
tests pass would involve a major restructuring of fuel-web repository, which
once again is for now blocked by the 7.0 FF. We have a blueprint to split
fuel-web [6], but so far we've only managed to extract fuel-agent, the rest
will probably have to wait until 8.0.

[6] https://blueprints.launchpad.net/fuel/+spec/split-fuel-web-repo

Because of that, I think fuel-agent is a better candidate for the first Fuel
component to get CI jobs on OpenStack Infrastructure.

Stage 2: Get the non-voting jobs on the first component to pass, and make them
voting and gating the commits to that component. Assuming that we pick a
component that doesn't need major restructuring to pass OpenStack CI, we should
be able to complete this stage before 7.0 soft code freeze on August 13 [7].

[7] https://wiki.openstack.org/wiki/Fuel/7.0_Release_Schedule

Stage 3: Enable non-voting jobs for all other Python components of Fuel outside
of fuel-web. We will have until 7.0 GA release on September 24, and we won't be
able to proceed to following stages until 7.0 is released.

Stage 4: Everything else that is too disruptive for 7.0 but doesn't require
changes on the side of OpenStack Infrastructure can all start in parallel after
Fuel 7.0 is released:

a) Finish splitting fuel-web.
b) Get all Python components of Fuel to pass OpenStack CI.
c) Set up unit test gates for non-Python components of Fuel (e.g. fuel-astute).
d) Finish the transition of upstream modules in fuel-library to librarian.
e) Set up rspec based gates for non-upstream modules in fuel-library.

I think completing all these can be done by 8.0 SCF in December, and if not,
must become a blocker requirement for 9.0 (Q3 2016).

Stage 5: Bonus objectives that are not required to meet TC requirements for
joining OpenStack, but still achievable based on current state of OpenStack
Infrastructure:

a) functional tests for Fuel UI
b) beaker tests for non-upstream parts of fuel-library

Stage 6: Stretch goal for the distant future is to actually make it possible to
run multi-node deploy tests on OpenStack Infrastructure. I guess we can at
least start that discussion in Tokyo...

Am I missing any major risks or additional requirements here? Do the dates make
sense?

Thanks,

--
Dmitry Borodaenko
