Re: [openstack-dev] [Trove] Resource not found when creating db instances.

2017-01-18 Thread Wang Sen
Hi Matt,

Thanks for your kind reply. I misspelled Newton in my last email.

I did not run into any problems with RabbitMQ, and the nova instances work
well. From the error log I pasted, it seems something goes wrong when
novaclient is trying to communicate with the nova API.
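
For what it's worth, the traceback shows the 404 is raised inside
novaclient's legacy _v2_auth() call, i.e. while authenticating against the
identity endpoint the taskmanager is configured with, before any request
reaches nova itself; that often points at an auth URL that does not serve
Keystone v2.0 requests, or one that is simply wrong. A minimal sketch
(placeholder credentials and an assumed auth URL/port) to replay the same
call outside Trove with an explicit keystoneauth session:

# Hypothetical values below; substitute the credentials and auth URL that
# trove-taskmanager actually uses.
from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://9.181.129.215:5000/v3',   # assumed controller endpoint
    username='admin', password='secret', project_name='admin',
    user_domain_name='Default', project_domain_name='Default')
nova = client.Client('2', session=session.Session(auth=auth))

# The same call that raises NotFound in load_mgmt_instances():
print(nova.servers.list(search_opts={'all_tenants': 1}))

If this succeeds, the problem is in Trove's nova/auth configuration rather
than in nova or neutron.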

On Wed, Jan 18, 2017 at 09:22:04PM -0700, Matt Fischer wrote:
> Trove works fine with neutron. I would look deeper into your logs. Do you
> have any errors about issues with Rabbit message timeouts? If so your guest
> may have issues talking to Rabbit. That seems to be a common issue.
> 
> On Wed, Jan 18, 2017 at 8:59 PM, Amrith Kumar 
> wrote:
> 
> > Sorry Wang Sen, why do you say Trove is "not ready for Neutron"? It has
> > worked with Neutron for some releases now.
> >
> > This does not appear to be at all related to Neutron.
> >
> > -amrith
> >
> > --
> > amrith.ku...@gmail.com
> > On Jan 18, 2017 10:56 PM, "Wang Sen"  wrote:
> >
> >> Hi all,
> >>
> >> I met the resource not found error when I'm creating a database
> >> instance. The instance stays on build status and turns to error status
> >> after timeout.
> >>
> >> I know trove is not ready for neuton. Is there a work around for this
> >> issue ? Thanks in advance.
> >>
> >> Below is the detailed information.
> >>
> >> Error Log
> >> =
> >>
> >> /var/log/trove/trove-taskmanager.log:
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task [-] Error
> >> during Manager.publish_exists_event
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task Traceback
> >> (most recent call last):
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line
> >> 220, in run_periodic_tasks
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >>  task(self, context)
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/trove/taskmanager/manager.py", line
> >> 429, in publish_exists_event
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >>  self.admin_context)
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
> >> line 178, in publish_exist_events
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >>  notifications = transformer()
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
> >> line 271, in __call__
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >>  client=self.nova_client)
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
> >> line 40, in load_mgmt_instances
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >>  mgmt_servers = client.servers.list(search_opts={'all_tenants': 1})
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 835,
> >> in list
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >>  "servers")
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 249, in _list
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp,
> >> body = self.api.client.get(url)
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 480, in get
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return
> >> self._cs_request(url, 'GET', **kwargs)
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 436, in
> >> _cs_request
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >>  self.authenticate()
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 619, in
> >> authenticate
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >>  self._v2_auth(auth_url)
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 684, in
> >> _v2_auth
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return
> >> self._authenticate(url, body)
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> >> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 697, in
> >> _authenticate
> >> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >>  **kwargs)
> >> 2017-01-19 11:27:31.666 22795 ERROR 

Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem in devstack install - No Network found for private

2017-01-18 Thread Andreas Scheuring
Excellent!

BTW, the following options are obsolete (they have become the defaults in
the meantime):


disable_service n-net


enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service q-l3


-- 
-
Andreas 
IRC: andreas_s



On Mi, 2017-01-18 at 10:19 +, nidhi.h...@wipro.com wrote:
> Hi Andreas,
> 
> As you suggested in between, I tried with the default devstack neutron
> config params: I set no config options for the neutron part, all defaults.
> 
> This local.conf is working well.
> 
> For others who are facing this problem, I'm pasting the working local.conf
> here: http://paste.openstack.org/show/595339/
> 
> Attaching it too.
> 
> Thanks
> Nidhi
> 
> __
> From: Nidhi Mittal Hada (Product Engineering Service)
> Sent: Wednesday, January 18, 2017 2:44 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing
> problem in devstack install - No Network found for private 
>  
> Andreas,
> 
> I require nothing specific from the neutron side, just a basic working
> setup, because my work is mostly on the storage side of OpenStack.
> 
> Can you please suggest a working configuration if you have tried one
> recently.
> 
> Thanks
> nidhi
> 
> 
> 
> __
> From: Nidhi Mittal Hada (Product Engineering Service)
> Sent: Wednesday, January 18, 2017 2:35:13 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing
> problem in devstack install - No Network found for private 
>  
> Hi Andreas,
> 
> Thanks for your reply.
> 
> I have no specific reason for using this network configuration in
> local.conf; I have only basic knowledge of these config options.
> 
> These local.conf network settings used to work well with earlier
> devstack/OpenStack versions, so I did not change them. Just this time
> it is creating trouble.
> 
> I have not created any OVS bridge manually before running devstack. I
> just created this local.conf and ran ./stack.sh in the devstack folder.
> 
> Can you please suggest changes, given that I have not created the OVS
> bridge manually.
> 
> At present my settings are as follows, from local.conf, for reference:
> 
> FIXED_RANGE=10.11.12.0/24
> NETWORK_GATEWAY=10.11.12.1
> FIXED_NETWORK_SIZE=256
> 
> 
> FLOATING_RANGE=10.0.2.0/24
> Q_FLOATING_ALLOCATION_POOL=start=10.0.2.104,end=10.0.2.111
> PUBLIC_NETWORK_GATEWAY=10.0.2.1
> HOST_IP=10.0.2.15
> 
> 
> PUBLIC_INTERFACE=eth0
> 
> 
> Q_USE_SECGROUP=True
> ENABLE_TENANT_VLANS=True
> TENANT_VLAN_RANGE=1000:1999
> PHYSICAL_NETWORK=default
> OVS_PHYSICAL_BRIDGE=br-ex
> 
> 
> 
> 
> Q_USE_PROVIDER_NETWORKING=True
> Q_L3_ENABLED=False
> 
> 
> PROVIDER_SUBNET_NAME="provider_net"
> PROVIDER_NETWORK_TYPE="vlan"
> SEGMENTATION_ID=2010
> 
> 
> 
> 
> 
> 
> Thanks
> 
> Nidhi
> 
> 
> 
> 
> __
> From: Andreas Scheuring 
> Sent: Wednesday, January 18, 2017 1:09:17 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing
> problem in devstack install - No Network found for private 
>  
> 
> Without looking into the details
> 
> you're specifying
> Q_USE_PROVIDER_NETWORKING=True
> in your local.conf - usually this results in the creation of a single
> provider network called "public". But the manila devstack plugin seems
> not to be able to deal with provider networks as it's expecting a
> network named "private" to be present.
> 
> 
> Why are you using provider networks? Just for the sake of VLANs? You can
> also configure devstack to use VLANs with the default setup. This has
> worked for me in the past and results in a private network using VLANs
> (assuming you have created the OVS bridge br-data manually):
> 
> 
> OVS_PHYSICAL_BRIDGE=br-data
> PHYSICAL_NETWORK=phys-data
> 
> ENABLE_TENANT_TUNNELS=False
> Q_ML2_TENANT_NETWORK_TYPE=vlan
> ENABLE_TENANT_VLANS=True
> TENANT_VLAN_RANGE=1000:1000
> 
> 
> 
> 
> --
> -
> Andreas
> IRC: andreas_s
> 
> 
> 
> On Mi, 2017-01-18 at 06:59 +, nidhi.h...@wipro.com wrote:
> > Hi All,
> >
> > I was trying to install the latest Newton version of OpenStack using
> > devstack on my laptop, all in one machine, using a VirtualBox VM.
> > Lately I have been facing the same problem in my last few tries and
> > the installation doesn't succeed.
> >
> > My VM network adapter configuration is as below.
> >
> > Adapter1
> >
> > and the 2nd adapter is as
> >
> > Adapter2
> >
> > That's the detail of the Host Only Networking.
> >
> > That's my local.conf 

Re: [OpenStack-Infra] [Fuel Plugin] [nimblestorage-cinder] Please add me to groups.

2017-01-18 Thread Andrey Nikitin
Hello, Infra team!

As far as I can see, I'm still not a member of the group.

Could you please add me there?


On 16.01.17 12:48, Andrey Nikitin wrote:
> Hello!
>
> A few days ago I created the following request to create one more
> repository to store a Fuel Plugin code -
> https://review.openstack.org/#/c/413651/.
>
> As far as I can see, the request is merged and the project is created. Could
> you please add me to the following groups to add other members there:
> - https://review.openstack.org/#/admin/groups/1691,members
> - https://review.openstack.org/#/admin/groups/1692,members ?
>

-- 
Andrey Nikitin
aniki...@mirantis.com





Re: [openstack-dev] [Trove] Resource not found when creating db instances.

2017-01-18 Thread Wang Sen
Sorry Amrith, I misspelled Newton as neuton ... :(

On Wed, Jan 18, 2017 at 10:59:06PM -0500, Amrith Kumar wrote:
> Sorry Wang Sen, why do you say Trove is "not ready for Neutron"? It has
> worked with Neutron for some releases now.
> 
> This does not appear to be at all related to Neutron.
> 
> -amrith
> 
> --
> amrith.ku...@gmail.com
> On Jan 18, 2017 10:56 PM, "Wang Sen"  wrote:
> 
> > Hi all,
> >
> > I met the resource not found error when I'm creating a database
> > instance. The instance stays on build status and turns to error status
> > after timeout.
> >
> > I know trove is not ready for neuton. Is there a work around for this
> > issue ? Thanks in advance.
> >
> > Below is the detailed information.
> >
> > Error Log
> > =
> >
> > /var/log/trove/trove-taskmanager.log:
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task [-] Error
> > during Manager.publish_exists_event
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task Traceback
> > (most recent call last):
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line
> > 220, in run_periodic_tasks
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >  task(self, context)
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/trove/taskmanager/manager.py", line
> > 429, in publish_exists_event
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >  self.admin_context)
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
> > line 178, in publish_exist_events
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >  notifications = transformer()
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
> > line 271, in __call__
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >  client=self.nova_client)
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
> > line 40, in load_mgmt_instances
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >  mgmt_servers = client.servers.list(search_opts={'all_tenants': 1})
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 835, in
> > list
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >  "servers")
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 249, in _list
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp,
> > body = self.api.client.get(url)
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 480, in get
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return
> > self._cs_request(url, 'GET', **kwargs)
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 436, in
> > _cs_request
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >  self.authenticate()
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 619, in
> > authenticate
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >  self._v2_auth(auth_url)
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 684, in
> > _v2_auth
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return
> > self._authenticate(url, body)
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 697, in
> > _authenticate
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
> >  **kwargs)
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 431, in
> > _time_request
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp,
> > body = self.request(url, method, **kwargs)
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> > "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 425, in
> > request
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task raise
> > exceptions.from_response(resp, body, url, method)
> > 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task NotFound:
> > The resource could 

[Openstack] How to troubleshoot Security Group rules

2017-01-18 Thread Vimal Kumar
Hi!

How can I troubleshoot issues related to security groups? They are probably
implemented via iptables, but where? In the host's iptables, inside a
network namespace, or inside the instance itself? I am running a single-node
Newton.

I am looking for a way to check whether the rules in my security group are
actually being applied or not.

Thank you!

Regards,

Vimal
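
One way to check (a sketch, assuming the default iptables-based firewall
drivers): the rules are rendered into the host's iptables as per-port
chains, prefixed neutron-openvswi- (OVS hybrid driver) or neutron-linuxbri-
(linux bridge driver), not inside the router namespace or the guest.

import subprocess

# Dump the host's iptables and keep only the Neutron security-group chains.
# The per-port chains embed the first characters of the Neutron port UUID,
# so you can also filter on a specific port's ID prefix.  Run as root or
# with passwordless sudo.
PREFIXES = ('neutron-openvswi-', 'neutron-linuxbri-')

rules = subprocess.check_output(['sudo', 'iptables-save']).decode()
for line in rules.splitlines():
    if any(p in line for p in PREFIXES):
        print(line)

Packet counters from 'iptables -L -n -v' on those chains then show whether
traffic is actually hitting a given rule.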


[openstack-dev] [networking-sfc] Does SFC support chaining of Layer 2 devices?

2017-01-18 Thread Vikash Kumar
All,

   I am exploring SFC for chaining an IDS device (strictly in L2 mode). As
of now, it looks like SFC supports only L3 devices by default. The SFC APIs
don't have any way to specify the nature of the device, and without that it
seems there is no way an operator can spin up any device/VNF except L3-mode
VNFs. Is there anything I am missing here? Can one still spin up an L2 IDS
with SFC?


-- 
Regards,
Vikash


[Openstack] [OpenStack] VM start up with no route rules

2017-01-18 Thread Xu, Rongjie (Nokia - CN/Hangzhou)
Hi,

I am launching a heat stack on top of Mirantis OpenStack Mitaka. However, I
see no route rules (the output of 'ip route' is empty) inside the VM, which
means the VM cannot get its metadata from the metadata server. Basically, the
VM is connected to a management network (192.168.1.0/24, DHCP enabled).

How can I debug this problem? Is it something wrong with Neutron? Thanks.



Best Regards
Xu Rongjie (Max)
Mobile:18658176819





Re: [openstack-dev] [Trove] Resource not found when creating db instances.

2017-01-18 Thread Matt Fischer
Trove works fine with neutron. I would look deeper into your logs. Do you
have any errors about issues with Rabbit message timeouts? If so your guest
may have issues talking to Rabbit. That seems to be a common issue.

On Wed, Jan 18, 2017 at 8:59 PM, Amrith Kumar 
wrote:

> Sorry Wang Sen, why do you say Trove is "not ready for Neutron"? It has
> worked with Neutron for some releases now.
>
> This does not appear to be at all related to Neutron.
>
> -amrith
>
> --
> amrith.ku...@gmail.com
> On Jan 18, 2017 10:56 PM, "Wang Sen"  wrote:
>
>> Hi all,
>>
>> I met the resource not found error when I'm creating a database
>> instance. The instance stays on build status and turns to error status
>> after timeout.
>>
>> I know trove is not ready for neuton. Is there a work around for this
>> issue ? Thanks in advance.
>>
>> Below is the detailed information.
>>
>> Error Log
>> =
>>
>> /var/log/trove/trove-taskmanager.log:
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task [-] Error
>> during Manager.publish_exists_event
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task Traceback
>> (most recent call last):
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line
>> 220, in run_periodic_tasks
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  task(self, context)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/trove/taskmanager/manager.py", line
>> 429, in publish_exists_event
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  self.admin_context)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
>> line 178, in publish_exist_events
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  notifications = transformer()
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
>> line 271, in __call__
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  client=self.nova_client)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
>> line 40, in load_mgmt_instances
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  mgmt_servers = client.servers.list(search_opts={'all_tenants': 1})
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 835,
>> in list
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  "servers")
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 249, in _list
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp,
>> body = self.api.client.get(url)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 480, in get
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return
>> self._cs_request(url, 'GET', **kwargs)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 436, in
>> _cs_request
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  self.authenticate()
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 619, in
>> authenticate
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  self._v2_auth(auth_url)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 684, in
>> _v2_auth
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return
>> self._authenticate(url, body)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 697, in
>> _authenticate
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  **kwargs)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 431, in
>> _time_request
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp,
>> body = self.request(url, method, **kwargs)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 425, in
>> request
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task raise
>> exceptions.from_response(resp, body, url, method)
>> 

Re: [openstack-dev] [Trove] Resource not found when creating db instances.

2017-01-18 Thread Amrith Kumar
Sorry Wang Sen, why do you say Trove is "not ready for Neutron"? It has
worked with Neutron for some releases now.

This does not appear to be at all related to Neutron.

-amrith

--
amrith.ku...@gmail.com
On Jan 18, 2017 10:56 PM, "Wang Sen"  wrote:

> Hi all,
>
> I met the resource not found error when I'm creating a database
> instance. The instance stays on build status and turns to error status
> after timeout.
>
> I know trove is not ready for neuton. Is there a work around for this
> issue ? Thanks in advance.
>
> Below is the detailed information.
>
> Error Log
> =
>
> /var/log/trove/trove-taskmanager.log:
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task [-] Error
> during Manager.publish_exists_event
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task Traceback
> (most recent call last):
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line
> 220, in run_periodic_tasks
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>  task(self, context)
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/trove/taskmanager/manager.py", line
> 429, in publish_exists_event
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>  self.admin_context)
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
> line 178, in publish_exist_events
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>  notifications = transformer()
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
> line 271, in __call__
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>  client=self.nova_client)
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
> line 40, in load_mgmt_instances
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>  mgmt_servers = client.servers.list(search_opts={'all_tenants': 1})
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 835, in
> list
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>  "servers")
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 249, in _list
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp,
> body = self.api.client.get(url)
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 480, in get
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return
> self._cs_request(url, 'GET', **kwargs)
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 436, in
> _cs_request
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>  self.authenticate()
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 619, in
> authenticate
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>  self._v2_auth(auth_url)
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 684, in
> _v2_auth
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return
> self._authenticate(url, body)
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 697, in
> _authenticate
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>  **kwargs)
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 431, in
> _time_request
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp,
> body = self.request(url, method, **kwargs)
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 425, in
> request
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task raise
> exceptions.from_response(resp, body, url, method)
> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task NotFound:
> The resource could not be found. (HTTP 404)
>
> Openstack Cluster
> =
>
> openstack version: Neuton
> trove version: 2.5.0
> $ root@kvm-215:~# trove --version
> 2.5.0
> $ root@kvm-215:~# openstack --version
> openstack 3.2.0
>
> Controller Node: ubuntu 16.04, 9.181.129.215
> Compute Node: ubuntu 

[openstack-dev] [Elections][PTL][Cinder] Sean McGinnis Candidacy for Pike

2017-01-18 Thread Sean McGinnis
Hello everyone,

I have been the Cinder PTL since the Mitaka release and I would love the
opportunity to be the PTL one more time.

I work for Dell EMC, working with storage for the last 14+ years. I am
lucky enough that I have the support to focus on OpenStack for my job.

I also have the support I've needed to both get out and get in front of
developers and users to support learning and using OpenStack. Over the
last year I have been able to attend a few OpenStack Days events and
attend the last Operators Midcycle. As PTL I can continue to spend the
time educating and evangelizing OpenStack.

More importantly, as PTL I can do whatever I can to eliminate
distractions and ease the way for the incredible work that members of
the community are doing. Through code reviews and helping coordinate
support within the group and across projects, I hope I can help
everyone get things accomplished.

Now that we are past the shorter Ocata cycle, I think we can start
focusing again on the larger efforts that we've had outstanding for a
few releases now and getting some of the important specs implemented.
I hope to help get capabilities like multiattach working across Cinder
and Nova. I also hope to work with the vendors that are such a large
part of the usefulness of Cinder to both make sure that they are
properly engaging with the OpenStack community, and that the community
is also aware of and able to work with the vendors to help them be
involved.

I hope you will consider me for the Pike PTL.

Thanks!
Sean



[openstack-dev] [Trove] Resource not found when creating db instances.

2017-01-18 Thread Wang Sen
Hi all,

I met the resource not found error when I'm creating a database
instance. The instance stays on build status and turns to error status
after timeout.

I know trove is not ready for neuton. Is there a work around for this
issue ? Thanks in advance.

Below is the detailed information.

Error Log
=

/var/log/trove/trove-taskmanager.log:
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task [-] Error during 
Manager.publish_exists_event
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task Traceback (most 
recent call last):
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line 220, in 
run_periodic_tasks
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task task(self, 
context)
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/trove/taskmanager/manager.py", line 429, in 
publish_exists_event
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task 
self.admin_context)
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py", 
line 178, in publish_exist_events
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task 
notifications = transformer()
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py", 
line 271, in __call__
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task 
client=self.nova_client)
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py", 
line 40, in load_mgmt_instances
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task mgmt_servers 
= client.servers.list(search_opts={'all_tenants': 1})
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 835, in list
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task "servers")
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/novaclient/base.py", line 249, in _list
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp, body = 
self.api.client.get(url)
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 480, in get
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return 
self._cs_request(url, 'GET', **kwargs)
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 436, in 
_cs_request
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task 
self.authenticate()
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 619, in 
authenticate
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task 
self._v2_auth(auth_url)
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 684, in _v2_auth
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return 
self._authenticate(url, body)
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 697, in 
_authenticate
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task **kwargs)
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 431, in 
_time_request
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp, body = 
self.request(url, method, **kwargs)
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/novaclient/client.py", line 425, in request
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task raise 
exceptions.from_response(resp, body, url, method)
2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task NotFound: The 
resource could not be found. (HTTP 404)

Openstack Cluster
=

openstack version: Neuton
trove version: 2.5.0
$ root@kvm-215:~# trove --version
2.5.0
$ root@kvm-215:~# openstack --version
openstack 3.2.0

Controller Node: ubuntu 16.04, 9.181.129.215
Compute Node: ubuntu 16.04, 9.181.129.213

Manage network: 192.168.1.0/24
Provider network: 9.181.129.0/24

endpoints:

$ openstack endpoint list

+--+---+--+--+-+---+--+
| ID   | Region| Service Name | Service Type | 
Enabled | Interface | URL

Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-18 Thread Lingxian Kong
On Thu, Jan 19, 2017 at 5:54 AM, Mikhail Fedosin  wrote:

> And here I want to ask the community - how exactly Glare may be useful in
> OpenStack? Glare was developed as a repository for all possible data types,
> and it has many possible applications. For example, it's a storage of vm
> images for Nova. Currently Glance is used for this, but Glare has much more
> features and this transition is easy to implement. Then it's a storage of
> Tosca templates. We were discussing integration with Heat and storing
> templates and environments in Glare, also it may be interesting for TripleO
> project. Mistral will store its workflows in Glare, it has already been
> decided. I'm not sure if Murano project is still alive, but they already
> use Glare 0.1 from Glance repo and it will be removed soon (in Pike afaik),
> so they have no other options except to start using Glare v1. Finally there
> were rumors about storing torrent files from Ironic.


Seems Swift could already do such things.


Cheers,
Lingxian Kong (Larry)


[openstack-dev] [Gluon] F2F Meeting on Feb 6 and Feb 7 in Sunnyvale, CA to Complete Deliverables

2017-01-18 Thread HU, BIN
Hello team,

During the IRC meeting on January 11 [1] and today [2], the group agreed to
have a face-to-face meeting on February 6th & 7th (Monday and Tuesday) in
Silicon Valley to complete coding and testing ahead of the Ocata release
schedule.

The meeting logistics are available at [3], and a tentative agenda can be
found at [4]. If you plan to attend the meeting, please follow the protocol
outlined in [3].

Please join us, and we look forward to meeting everyone.

Thank you
Bin

[1] 
http://eavesdrop.openstack.org/meetings/gluon/2017/gluon.2017-01-11-18.00.html
[2] 
http://eavesdrop.openstack.org/meetings/gluon/2017/gluon.2017-01-18-18.01.html
[3] https://wiki.openstack.org/wiki/Meetings/Gluon/Logistics-2017020607
[4] https://wiki.openstack.org/wiki/Meetings/Gluon/Agenda-2017020607



Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Morgan Fainberg
On Wed, Jan 18, 2017 at 5:18 PM, Clint Byrum  wrote:

> Excerpts from Morgan Fainberg's message of 2017-01-18 15:33:00 -0800:
> > On Wed, Jan 18, 2017 at 11:23 AM, Brant Knudson  wrote:
> >
> > >
> > >
> > > On Wed, Jan 18, 2017 at 9:58 AM, Dave McCowan (dmccowan) <
> > > dmcco...@cisco.com> wrote:
> > >
> > >>
> > >> On Mon, Jan 16, 2017 at 7:35 AM, Ian Cordasco  >
> > >> wrote:
> > >>
> > >>> Hi everyone,
> > >>>
> > >>> I've seen a few nascent projects wanting to implement their own
> secret
> > >>> storage to either replace Barbican or avoid adding a dependency on
> it.
> > >>> When I've pressed the developers on this point, the only answer I've
> > >>> received is to make the operator's lives simpler.
> > >>>
> > >>>
> > >> This is my opinion, but I'd like to see Keystone use Barbican for
> storing
> > >> credentials. It hasn't happened yet because nobody's had the time or
> > >> inclination (what we have works). If this happened, we could
> deprecate the
> > >> current way of storing credentials and require Barbican in a couple of
> > >> releases. Then Barbican would be a required service. The Barbican team
> > >> might find this to be the easiest route towards convincing other
> projects
> > >> to also use Barbican.
> > >>
> > >> - Brant
> > >>
> > >>
> > >> Can you provides some details on how you'd see this work?
> > >> Since Barbican typically uses Keystone to authenticate users before
> > >> determining which secrets they have access to, this leads to a
> circular
> > >> logic.
> > >>
> > >> Barbican's main purpose is a secret manager.  It supports a variety of
> > >> RBAC and ACL access control methods to determine if a request to
> > >> read/write/delete a secret should be allowed or not.  For secret
> storage,
> > >> Barbican itself needs a secure backend for storage.  There is a
> > >> customizable plugin interface to access secure storage.  The current
> > >> implementations can support a database with encryption, an HSM via
> KMIP,
> > >> and Dogtag.
> > >>
> > >>
> > > I haven't thought about it much so don't have details figured out.
> > > Keystone stores many types of secrets for users, and maybe you're
> thinking
> > > about the user password being tricky. I'm thinking about the users' EC2
> > > credentials (for example). I don't think this would be difficult and
> would
> > > involve creating a credentials backend for keystone that supports
> barbican.
> > > Maybe have a 'keystone' project for credentials keystone is storing? If
> > > you're familiar with the Barbican interface, compare with keystone's
> > > credential interface[0].
> > >
> > > [0] http://git.openstack.org/cgit/openstack/keystone/tree/
> > > keystone/credential/backends/base.py#n26
> > >
> > > - Brant
> > >
> > >
> > The user passwords and the MFA tokens would be particularly difficult as
> > they are to be used for authentication purposes. Anything tied to the
> main
> > AuthN path would require something akin to a "service wide" secret store
> > that could be accessed/controlled by keystone itself and not "on behalf
> of
> > user" where the user still owns the data stored in barbican.
> >
> > I can noodle over this a bit more and see if I can come up with a
> mechanism
> > that (without too much pain) utilizes barbican for the AuthN paths in the
> > current architecture.
> >
> > I think it is doable, but I hesitate to make Keystone's AuthN path rely
> on
> > any external service so we don't run into a circular dependency of
> services
> > causing headaches for users. Keystone has provided a fairly stable base
> for
> > other projects including Barbican to be built on.
> >
> > Now... If the underlying tech behind Barbican could be pushed into
> keystone
> > as the credential driver (and possibly store for passwords?) without
> > needing to lean on Barbican's Server APIs (restful), I think that is
> quite
> > viable and could be of value since we could offload the credentials to a
> > more secure store without needing a "restful service" that uses keystone
> as
> > an AuthN/AuthZ source to determine who has access to what secret.
>
> Things like Barbican are there for the times where it's worth it to
> try and minimize exposure for something _ever_ leaking, so you can't do
> something like record all encrypted traffic and then compromise a key
> later, decrypt the traffic, and gain access to still-secret data.
>
> I'm not sure passwords would fall into that category. You'd be adding
> quite a bit of overhead for something that can be mitigated simply by
> rotating accounts and/or passwords.


I totally agree. Most everything in Keystone falls into this category. We
could use the same tech Barbican uses to be smarter about storing the data,
but I don't think we can use the REST APIs for the reasons you outlined.


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Clint Byrum
Excerpts from Morgan Fainberg's message of 2017-01-18 15:33:00 -0800:
> On Wed, Jan 18, 2017 at 11:23 AM, Brant Knudson  wrote:
> 
> >
> >
> > On Wed, Jan 18, 2017 at 9:58 AM, Dave McCowan (dmccowan) <
> > dmcco...@cisco.com> wrote:
> >
> >>
> >> On Mon, Jan 16, 2017 at 7:35 AM, Ian Cordasco 
> >> wrote:
> >>
> >>> Hi everyone,
> >>>
> >>> I've seen a few nascent projects wanting to implement their own secret
> >>> storage to either replace Barbican or avoid adding a dependency on it.
> >>> When I've pressed the developers on this point, the only answer I've
> >>> received is to make the operator's lives simpler.
> >>>
> >>>
> >> This is my opinion, but I'd like to see Keystone use Barbican for storing
> >> credentials. It hasn't happened yet because nobody's had the time or
> >> inclination (what we have works). If this happened, we could deprecate the
> >> current way of storing credentials and require Barbican in a couple of
> >> releases. Then Barbican would be a required service. The Barbican team
> >> might find this to be the easiest route towards convincing other projects
> >> to also use Barbican.
> >>
> >> - Brant
> >>
> >>
> >> Can you provides some details on how you'd see this work?
> >> Since Barbican typically uses Keystone to authenticate users before
> >> determining which secrets they have access to, this leads to a circular
> >> logic.
> >>
> >> Barbican's main purpose is a secret manager.  It supports a variety of
> >> RBAC and ACL access control methods to determine if a request to
> >> read/write/delete a secret should be allowed or not.  For secret storage,
> >> Barbican itself needs a secure backend for storage.  There is a
> >> customizable plugin interface to access secure storage.  The current
> >> implementations can support a database with encryption, an HSM via KMIP,
> >> and Dogtag.
> >>
> >>
> > I haven't thought about it much so don't have details figured out.
> > Keystone stores many types of secrets for users, and maybe you're thinking
> > about the user password being tricky. I'm thinking about the users' EC2
> > credentials (for example). I don't think this would be difficult and would
> > involve creating a credentials backend for keystone that supports barbican.
> > Maybe have a 'keystone' project for credentials keystone is storing? If
> > you're familiar with the Barbican interface, compare with keystone's
> > credential interface[0].
> >
> > [0] http://git.openstack.org/cgit/openstack/keystone/tree/
> > keystone/credential/backends/base.py#n26
> >
> > - Brant
> >
> >
> The user passwords and the MFA tokens would be particularly difficult as
> they are to be used for authentication purposes. Anything tied to the main
> AuthN path would require something akin to a "service wide" secret store
> that could be accessed/controlled by keystone itself and not "on behalf of
> user" where the user still owns the data stored in barbican.
> 
> I can noodle over this a bit more and see if I can come up with a mechanism
> that (without too much pain) utilizes barbican for the AuthN paths in the
> current architecture.
> 
> I think it is doable, but I hesitate to make Keystone's AuthN path rely on
> any external service so we don't run into a circular dependency of services
> causing headaches for users. Keystone has provided a fairly stable base for
> other projects including Barbican to be built on.
> 
> Now... If the underlying tech behind Barbican could be pushed into keystone
> as the credential driver (and possibly store for passwords?) without
> needing to lean on Barbican's Server APIs (restful), I think that is quite
> viable and could be of value since we could offload the credentials to a
> more secure store without needing a "restful service" that uses keystone as
> an AuthN/AuthZ source to determine who has access to what secret.

Things like Barbican are there for the times where it's worth it to
try and minimize exposure for something _ever_ leaking, so you can't do
something like record all encrypted traffic and then compromise a key
later, decrypt the traffic, and gain access to still-secret data.

I'm not sure passwords would fall into that category. You'd be adding
quite a bit of overhead for something that can be mitigated simply by
rotating accounts and/or passwords.



Re: [openstack-dev] [oslo] Not running for Oslo PTL for Pike

2017-01-18 Thread Joshua Harlow

gordon chung wrote:


On 03/01/17 03:03 PM, Joshua Harlow wrote:

Hi Oslo folks (and others),

Happy new year!

After serving for about a year I think it's a good opportunity for
myself to let another qualified individual run for Oslo PTL (seems
common to only go for two terms and hand-off to another).

So I just wanted to let folks know that I will be doing this, so that we
can grow others in the community that wish to try out being a PTL.

I don't plan on leaving the Oslo community btw, just want to make sure
we spread the knowledge (and the fun!) of being a PTL.

Hopefully I've been a decent PTL (with  room to improve) during
this time :-)



thanks for leading the oslo community and being the glue in OpenStack, Josh!



Anytime! More power to the glue!

https://www.youtube.com/watch?v=f_SwD7RveNE

'Oslo apply directly to the forehead' (maybe our new slogan!)

-Josh



Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-18 Thread Joe Gordon
On Thu, Jan 19, 2017 at 10:27 AM, Matt Riedemann  wrote:

> On 1/18/2017 4:53 AM, Jens Rosenboom wrote:
>
>> To me it looks like the times of 2G are long gone, Nova is using
>> almost 2G all by itself. And 8G may be getting tight if additional
>> stuff like Ceph is being added.
>>
>>
> I'm not really surprised at all about Nova being a memory hog with the
> versioned object stuff we have, which does its own nesting of objects.
>
> What tools do people use to be able to profile the memory usage by the
> types of objects in memory while this is running?


objgraph and guppy/heapy

http://smira.ru/wp-content/uploads/2011/08/heapy.html

https://www.huyng.com/posts/python-performance-analysis

You can also use gc.get_objects() (
https://docs.python.org/2/library/gc.html#gc.get_objects) to get a list of
all objects in memory and go from there.

Slots (https://docs.python.org/2/reference/datamodel.html#slots) are useful
for reducing the memory usage of objects.
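
For instance, a rough, nova-agnostic sketch of the gc.get_objects() approach
(count live objects by type to find the biggest offenders), together with a
minimal __slots__ class for comparison:

import gc
from collections import Counter


class Point(object):
    # __slots__ removes the per-instance __dict__, which can cut per-object
    # overhead noticeably for classes with very many instances.
    __slots__ = ('x', 'y')

    def __init__(self, x, y):
        self.x = x
        self.y = y


def top_types(n=15):
    # Count objects currently tracked by the garbage collector, by type name.
    counts = Counter(type(o).__name__ for o in gc.get_objects())
    return counts.most_common(n)


if __name__ == '__main__':
    # Keep a bunch of instances alive so they show up in the listing.
    points = [Point(i, i) for i in range(100000)]
    for name, count in top_types():
        print('%8d  %s' % (count, name))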


>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>


Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-18 Thread Ian Wienand
On 01/14/2017 02:48 AM, Jakub Libosvar wrote:
> recently I noticed we got oom-killer in action in one of our jobs [1]. 

> Any other ideas?

I spent quite a while chasing down similar things with centos a while
ago.  I do have some ideas :)

The symptom is probably that mysql gets chosen by the OOM killer but
it's unlikely to be mysql's fault, it's just big and a good target.

If the system is going offline, I added the ability to turn on the
netconsole in devstack-gate with [1].  As the comment mentions, you
can put little tests that stream data in /dev/kmsg and they will
generally get off the host, even if ssh has been killed.  I found this
very useful for getting the initial oops data (i've used this several
times for other gate oopses, including other kernel issues we've
seen).
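
As an example of the kind of little test I mean, a sketch (assumes root and
a reasonably modern kernel where /dev/kmsg accepts writes) that streams
memory stats into the kernel log every 30 seconds, so the numbers get off
the host via netconsole even while userspace is being OOM-killed:

import time

while True:
    # Grab the Mem* lines from /proc/meminfo and inject them into the
    # kernel log; each write to /dev/kmsg should be a single line.
    with open('/proc/meminfo') as f:
        mem = [line.split()[0:2] for line in f if line.startswith('Mem')]
    summary = ' '.join('%s %s' % (k, v) for k, v in mem)
    with open('/dev/kmsg', 'w') as kmsg:
        kmsg.write('memwatch: %s\n' % summary)
    time.sleep(30)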

For starting to pin down what is really consuming the memory, the
first thing I did was write a peak-memory usage tracker that gave me
stats on memory growth during the devstack run [2].  You have to
enable this with "enable_service peakmem_tracker".  This starts to
give you the big picture of where memory is starting to go.

At this point, you should have a rough idea of the real cause, and
you're going to want to start dumping /proc/pid/smaps of target
processes to get an idea of where the memory they're allocating is
going, or at the very least what libraries might be involved.  The
next step is going to depend on what you need to target...
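
As a first pass, a small sketch that aggregates Rss per mapping from
/proc/<pid>/smaps, which is usually enough to tell whether the growth is
anonymous heap or attributable to a particular library:

import collections
import re
import sys


def smaps_by_mapping(pid):
    # Sum the Rss of every mapping in /proc/<pid>/smaps, grouped by the
    # backing file (or '[anon]' when the mapping has no pathname).
    totals = collections.Counter()
    current = '[anon]'
    with open('/proc/%s/smaps' % pid) as f:
        for line in f:
            if re.match(r'^[0-9a-fA-F]+-[0-9a-fA-F]+ ', line):
                parts = line.split()
                current = parts[5] if len(parts) > 5 else '[anon]'
            elif line.startswith('Rss:'):
                totals[current] += int(line.split()[1])  # value is in kB
    return totals


if __name__ == '__main__':
    for name, kb in smaps_by_mapping(sys.argv[1]).most_common(15):
        print('%8d kB  %s' % (kb, name))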

If it's python, it can get a bit tricky to see where the memory is
going but there are a number of approaches.  At the time, despite it
being mostly unmaintained, I had some success with guppy [3].  In
my case, for example, I managed to hook into swift's wsgi startup and
run that under guppy, giving me the ability to get some heap stats.
From my notes [4] that looked something like

---
import signal
import sys

from guppy import hpy

# parse_options/run_wsgi/server come from swift's object-server startup
# script, which is what this was hooked into.
from swift.common.utils import parse_options
from swift.common.wsgi import run_wsgi
from swift.obj import server


def handler(signum, frame):
    # On SIGUSR1, dump guppy's heap statistics to a file.
    with open('/tmp/heap.txt', 'w+') as f:
        f.write("testing\n")
        hp = hpy()
        f.write(str(hp.heap()))


if __name__ == '__main__':
    conf_file, options = parse_options()
    signal.signal(signal.SIGUSR1, handler)

    sys.exit(run_wsgi(conf_file, 'object-server',
                      global_conf_callback=server.global_conf_callback,
                      **options))
---

There are of course other tools from gdb to malloc tracers, etc.

But that was enough that I could try different things and compare the
heap usage.  Once you've got the smoking gun ... well then the hard
work starts of fixing it :) In my case it was pycparser and we came up
with a good solution [5].

Hopefully that's some useful tips ... #openstack-infra can of course
help holding vms etc as required.

-i

[1] 
http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n438
[2] 
https://git.openstack.org/cgit/openstack-dev/devstack/tree/tools/peakmem_tracker.sh
[3] https://pypi.python.org/pypi/guppy/
[4] https://etherpad.openstack.org/p/oom-in-rax-centos7-CI-job
[5] https://github.com/eliben/pycparser/issues/72



[openstack-dev] [all][Elections] Nominations for OpenStack PTLs (Program Team Leads) are now open

2017-01-18 Thread Kendall Nelson
Hello All!

Nominations for OpenStack PTLs (Program Team Leads) are now open and will
remain open until Jan 29, 2017 23:45 UTC.

All candidates must be submitted as a text file to the openstack/election
repository as explained at
http://governance.openstack.org/election/#how-to-submit-your-candidacy
Please make sure to follow the new candidacy file naming convention:
$cycle_name/$project_name/$ircname.txt.

In order to be an eligible candidate (and be allowed to vote) in a given
PTL election, you need to have contributed an accepted patch to one of the
corresponding program's projects[0] during the Newton-Ocata timeframe (Apr
11, 2016 00:00 UTC to Jan 23, 2017 23:59 UTC).

Additional information about the nomination process can be found here:
https://governance.openstack.org/election/

As the election officials approve candidates, they will be listed here:
https://governance.openstack.org/election/#pike-ptl-candidates


The electorate is requested to confirm their email address in gerrit,
review.openstack.org > Settings > Contact Information >  Preferred Email,
prior to Jan 25, 2017 23:59 UTC so that the emailed ballots are mailed to
the correct email address.

Happy running,

-Kendall Nelson (diablo_rojo)

[0]
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Morgan Fainberg
On Wed, Jan 18, 2017 at 11:23 AM, Brant Knudson  wrote:

>
>
> On Wed, Jan 18, 2017 at 9:58 AM, Dave McCowan (dmccowan) <
> dmcco...@cisco.com> wrote:
>
>>
>> On Mon, Jan 16, 2017 at 7:35 AM, Ian Cordasco 
>> wrote:
>>
>>> Hi everyone,
>>>
>>> I've seen a few nascent projects wanting to implement their own secret
>>> storage to either replace Barbican or avoid adding a dependency on it.
>>> When I've pressed the developers on this point, the only answer I've
>>> received is to make the operator's lives simpler.
>>>
>>>
>> This is my opinion, but I'd like to see Keystone use Barbican for storing
>> credentials. It hasn't happened yet because nobody's had the time or
>> inclination (what we have works). If this happened, we could deprecate the
>> current way of storing credentials and require Barbican in a couple of
>> releases. Then Barbican would be a required service. The Barbican team
>> might find this to be the easiest route towards convincing other projects
>> to also use Barbican.
>>
>> - Brant
>>
>>
>> Can you provides some details on how you'd see this work?
>> Since Barbican typically uses Keystone to authenticate users before
>> determining which secrets they have access to, this leads to a circular
>> logic.
>>
>> Barbican's main purpose is a secret manager.  It supports a variety of
>> RBAC and ACL access control methods to determine if a request to
>> read/write/delete a secret should be allowed or not.  For secret storage,
>> Barbican itself needs a secure backend for storage.  There is a
>> customizable plugin interface to access secure storage.  The current
>> implementations can support a database with encryption, an HSM via KMIP,
>> and Dogtag.
>>
>>
> I haven't thought about it much so don't have details figured out.
> Keystone stores many types of secrets for users, and maybe you're thinking
> about the user password being tricky. I'm thinking about the users' EC2
> credentials (for example). I don't think this would be difficult and would
> involve creating a credentials backend for keystone that supports barbican.
> Maybe have a 'keystone' project for credentials keystone is storing? If
> you're familiar with the Barbican interface, compare with keystone's
> credential interface[0].
>
> [0] http://git.openstack.org/cgit/openstack/keystone/tree/
> keystone/credential/backends/base.py#n26
>
> - Brant
>
>
The user passwords and the MFA tokens would be particularly difficult as
they are to be used for authentication purposes. Anything tied to the main
AuthN path would require something akin to a "service wide" secret store
that could be accessed/controlled by keystone itself and not "on behalf of
user" where the user still owns the data stored in barbican.

I can noodle over this a bit more and see if I can come up with a mechanism
that (without too much pain) utilizes barbican for the AuthN paths in the
current architecture.

I think it is doable, but I hesitate to make Keystone's AuthN path rely on
any external service so we don't run into a circular dependency of services
causing headaches for users. Keystone has provided a fairly stable base for
other projects including Barbican to be built on.

Now... If the underlying tech behind Barbican could be pushed into keystone
as the credential driver (and possibly store for passwords?) without
needing to lean on Barbican's Server APIs (restful), I think that is quite
viable and could be of value since we could offload the credentials to a
more secure store without needing a "restful service" that uses keystone as
an AuthN/AuthZ source to determine who has access to what secret.

--Morgan
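
To make that last idea concrete - using Barbican's underlying storage plugins
as a keystone credential driver without going through the REST API - here is a
minimal sketch. Everything in it is hypothetical: the secret-store class and
the driver interface are stand-ins, not keystone's or Barbican's actual
classes.

    class SecretStore(object):
        """Stand-in for Barbican's underlying storage/crypto plugin layer."""

        def __init__(self):
            self._blobs = {}

        def store(self, key, plaintext):
            # Real code would encrypt with a KEK held in an HSM/KMIP/Dogtag
            # backend rather than keeping plaintext in memory.
            self._blobs[key] = plaintext

        def retrieve(self, key):
            return self._blobs[key]

    class SecretStoreCredentialDriver(object):
        """Keystone-side credential driver that keeps blobs in the store."""

        def __init__(self, store):
            self._store = store

        def create_credential(self, credential_id, credential):
            self._store.store(credential_id, credential['blob'])
            return credential

        def get_credential(self, credential_id):
            return {'id': credential_id,
                    'blob': self._store.retrieve(credential_id)}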
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-18 Thread Matt Riedemann

On 1/18/2017 4:53 AM, Jens Rosenboom wrote:

To me it looks like the times of 2G are long gone, Nova is using
almost 2G all by itself. And 8G may be getting tight if additional
stuff like Ceph is being added.



I'm not really surprised at all about Nova being a memory hog with the 
versioned object stuff we have, which does its own nesting of objects.


What tools do people use to profile the memory usage by the types of 
objects in memory while this is running?
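
One stdlib-only starting point (a minimal sketch, not an endorsement of any
particular tool; things like objgraph or tracemalloc give richer views):

    # Count live objects grouped by type; running this periodically inside
    # the service makes it easy to spot which object family is ballooning.
    import gc
    from collections import Counter

    def object_census(limit=20):
        counts = Counter(type(obj).__name__ for obj in gc.get_objects())
        return counts.most_common(limit)

    for type_name, count in object_census():
        print(count, type_name)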


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-18 Thread Matt Riedemann

On 1/13/2017 9:48 AM, Jakub Libosvar wrote:

Hi,

recently I noticed we got the oom-killer in action in one of our jobs [1]. I
have seen it several times, so far only with the linux bridge job. The
consequence is that mysqld, as the process consuming the most memory, usually
gets killed; sometimes even nova-api gets killed.

Does anybody know whether we can bump memory on nodes in the gate
without losing resources for running other jobs?
Has anybody experience with memory consumption being higher when using
linux bridge agents?

Any other ideas?

Thanks,
Jakub

[1]
http://logs.openstack.org/73/373973/13/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/295d92f/logs/syslog.txt.gz#_Jan_11_13_56_32


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I don't think it's just the linuxbridge job, see:

http://status.openstack.org//elastic-recheck/index.html#1656850

And the linked logstash query, then expand by build_name.

I also tracked in logstash that this started around 1/10, which is within 
our 10 days of retained logs, so something happened around then to start 
tipping us over. I had some leads in the bug report but I think the 
keystone team took over from there.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Ubuntu 14.04 support in Newton and on

2017-01-18 Thread Eric K
Hi all, Is there any community-wide policy on how long we strive to maintain
compatibility with Ubuntu 14.04? For example, by avoiding reliance on MySQL
5.7 features.
I've had a hard time finding it on openstack.org and in ML discussions. Thanks
a lot!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] What would you like in Pike?

2017-01-18 Thread Lance Bragstad
Hi Sam,

I've been trying to wrangle folks into discussions to see how we can
improve policy as a whole across OpenStack [0] [1]. So far, we've had some
involvement from a couple operators, but more feedback would be even better.

My goal is to try and generate a bunch of discussion prior to the PTG so
that in Atlanta we can start painting a picture of what policy/RBAC in
OpenStack needs to be. I think once that is clearly documented it will be
easier for us to break it into actionable goals that the projects can
pursue as a group.

Feel free to come find me (lbragstad) in #openstack-keystone. I'd be happy
to make some time to get specific feedback.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2017-January/109967.html
[1] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting


On Wed, Jan 18, 2017 at 4:20 PM, Matt Jarvis  wrote:

> I think one of the problems we're seeing now is that a lot of operators
> have actually already scratched some of these missing functionality itches
> like quota management and project nesting by handling those scenarios in
> external management systems. I know we certainly did at DataCentred. That
> probably means these things don't surface enough to upstream as
> requirements, whereas for new users who aren't necessarily in the loop with
> community communication they may well be creating friction to adoption.
>
> On Wed, Jan 18, 2017 at 10:06 PM, Sam Morrison  wrote:
>
>> I would love it if all the projects' policy.json files were actually usable.
>> Too many times policy.json isn't the only place where authN happens, with
>> lots of hard-coded is_admin etc.
>>
>> Just the ability to tie a certain role to a certain thing would be
>> amazing. It makes it really hard to have read-only users to generate
>> reports with, so that we can show our funders how much people use our
>> OpenStack cloud.
>>
>> Cheers,
>> Sam
>> (non-enterprise)
>>
>>
>>
>> On 18 Jan 2017, at 6:10 am, Melvin Hillsman  wrote:
>>
>> Well said, as a consequence of this thread being on the mailing list, I
>> hope that we can get *all* operators, end-users, and app-developers to
>> respond. If you are aware of folks who do not fall under the "enterprise"
>> label please encourage them directly to respond; I would encourage everyone
>> to do the same.
>>
>> On Tue, Jan 17, 2017 at 11:52 AM, Silence Dogood 
>> wrote:
>>
>>> I can see a huge problem with your contributing operators... all of them
>>> are enterprise.
>>>
>>> Enterprise needs are radically different from those of small-to-medium
>>> deployers, for whom OpenStack has traditionally failed to work well.
>>>
>>> On Tue, Jan 17, 2017 at 12:47 PM, Piet Kruithof 
>>> wrote:
>>>
 Sorry for the late reply, but wanted to add a few things.

 OpenStack UX did suggest to the foundation that the community needs a
 second survey that focuses exclusively on operators.  The rationale was
 that the user survey is primarily focused on marketing data and there isn't
 really a ton of space for additional questions that focuses exclusively on
 operators. We also recommended a second survey called a MaxDiff study that
 enabled operators to identify areas of improvement and also rate them in
 order of importance including distance.

 There is also an etherpad that asked operators three priorities for
 OpenStack:

 https://etherpad.openstack.org/p/mitaka-openstackux-enterprise-goals

 It was distributed about a year ago, so I'm not sure how much of it was
 relevant.  The list does include responses from folks at TWC, Walmart,
 Pacific Northwest Labs, BestBuy, Comcast, NTTi3 and the US government. It
 might be a good place for the group to add their own improvements as well
 as "+" other peoples suggestions.

 There is also a list of studies that have been conducted with operators
 on behalf of the community. The study included quotas, deployment and
 information needs. Note that the information needs study extended beyond
 docs to things like the need to easily google solutions and the need for
 SMEs.

 Hope this is helpful.

 Piet

 ___
 OPENSTACK USER EXPERIENCE STATUS
 The goal of this presentation is to provide an overview of research
 that was conducted on behalf of the OpenStack community.  All of the
 studies conducted on behalf of the OpenStack community were included in
 this presentation.

 Why this research matters:
 Consistency across projects has been identified as an issue in the user
 survey.

 Study design:
 This usability study, conducted at the OpenStack Austin Summit,
 observed 10 operators as they attempted to perform standard tasks in the
 OpenStack client.

 https://docs.google.com/presentation/d/1hZYCOADJ1gXiFHT1ahwv
 

[openstack-dev] [tacker] Tacker PTL Non-candidacy

2017-01-18 Thread Sridhar Ramaswamy
As I announced in the last Tacker weekly meeting, I'm not planning to run
for the Pike PTL position. Having served in this role for the last three
cycles (including the period before it was a big-tent project), I think it is
time for someone else to step in and take this forward. I'll continue to
contribute as a core-team member, and I'll be available to help the new PTL
in any way needed.

Personally, it has been a such a rewarding experience. I would like to
thank all the contributors - cores and non-core members - who supported
this project and me. We had an incredible amount of cross-project
collaboration in tacker, with the likes of tosca-parser / heat-translator,
neutron networking-sfc, senlin, and mistral - my sincere thanks to all the
PTLs and the members of those projects.

Now, going forward, we have tons to do in Tacker - towards making it a
leading, community-built TOSCA orchestrator service, and not just for the
current focus area of NFV, but also expanding into enterprise and container
use cases. Fun times!

thanks,
Sridhar
irc: sridhar_ram
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Feature-related changes that need a final +2 (1/18)

2017-01-18 Thread Matt Riedemann
Just to bring this to the awareness of other reviewers, here is a list 
of blueprint-related patches that have a +2 and need a final push:


1. 
https://blueprints.launchpad.net/nova/+spec/ironic-plug-unplug-vifs-update


https://review.openstack.org/#/c/364413/ - don't miss the bug fix change 
below it.


2. https://blueprints.launchpad.net/nova/+spec/ironic-portgroups-support

https://review.openstack.org/#/c/388756/ - builds on the change above.

3. 
https://blueprints.launchpad.net/nova/+spec/libvirt-os-vif-fastpath-vhostuser


https://review.openstack.org/#/c/410737/ - simple change, the one after 
it is close too.


4. 
https://blueprints.launchpad.net/nova/+spec/resource-providers-scheduler-db-filters


https://review.openstack.org/#/c/418134/ - the bottom change is simple.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Fuel] Fuel plugin for installing Congress

2017-01-18 Thread Eric K
Hi Serg,

That's awesome news. Thanks all for the great work!

Please do let us know how the Congress team can be helpful.

On 1/16/17, 11:33 AM, "Serg Melikyan"  wrote:

>I'd like to introduce to you a Fuel plugin for installing and configuring
>Congress for Fuel [0].
>
>This plugin was developed by the Fuel@Opnfv [1] community in order to be
>included in the next release of Fuel@Opnfv - Danube. We believe
>that this plugin might be helpful not only for us but also for the general
>OpenStack community, and decided to continue development of the plugin
>in the OpenStack community.
>
>Please join us in the development of the Congress plugin; your
>feedback is greatly appreciated.
>
>P.S. Right now the core team includes Fedor Zhadaev - the original developer
>of the plugin - and a couple of developers from Fuel@Opnfv, including me.
>We considered adding congress-core to the list but decided to see the
>amount of interest and feedback from the Congress team first.
>
>References:
>[0] http://git.openstack.org/cgit/openstack/fuel-plugin-congress/
>[1] https://wiki.opnfv.org/display/Fuel/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Douglas Mendizábal

We've also talked about fancier non-keystone-auth like x.509 certificates.

- Douglas

On 1/18/17 11:52 AM, Clint Byrum wrote:
> Excerpts from Dave McCowan (dmccowan)'s message of 2017-01-18
> 15:58:19 +:
>> 
>> On Mon, Jan 16, 2017 at 7:35 AM, Ian Cordasco
>> > wrote: Hi
>> everyone,
>> 
>> I've seen a few nascent projects wanting to implement their own
>> secret storage to either replace Barbican or avoid adding a
>> dependency on it. When I've pressed the developers on this point,
>> the only answer I've received is to make the operator's lives
>> simpler.
>> 
>> 
>> This is my opinion, but I'd like to see Keystone use Barbican for
>> storing credentials. It hasn't happened yet because nobody's had
>> the time or inclination (what we have works). If this happened,
>> we could deprecate the current way of storing credentials and
>> require Barbican in a couple of releases. Then Barbican would be
>> a required service. The Barbican team might find this to be the
>> easiest route towards convincing other projects to also use
>> Barbican.
>> 
>> - Brant
>> 
>> Can you provide some details on how you'd see this work? Since
>> Barbican typically uses Keystone to authenticate users before
>> determining which secrets they have access to, this leads to a
>> circular logic.
>> 
>> Barbican's main purpose is a secret manager.  It supports a
>> variety of RBAC and ACL access control methods to determine if a
>> request to read/write/delete a secret should be allowed or not.
>> For secret storage, Barbican itself needs a secure backend for
>> storage.  There is a customizable plugin interface to access
>> secure storage.  The current implementations can support a
>> database with encryption, an HSM via KMIP, and Dogtag.
> 
> Just bootstrap the genesis admin credentials into Barbican and
> Keystone the same way we bootstrap them into Keystone now. Once
> there are admin creds, they can be validated separately from updating
> them, and there's no circle anymore - just two one-way
> dependencies.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] What would you like in Pike?

2017-01-18 Thread Matt Jarvis
I think one of the problems we're seeing now is that a lot of operators
have actually already scratched some of these missing functionality itches
like quota management and project nesting by handling those scenarios in
external management systems. I know we certainly did at DataCentred. That
probably means these things don't surface enough to upstream as
requirements, whereas for new users who aren't necessarily in the loop with
community communication they may well be creating friction to adoption.

On Wed, Jan 18, 2017 at 10:06 PM, Sam Morrison  wrote:

> I would love it if all the projects' policy.json files were actually usable.
> Too many times policy.json isn't the only place where authN happens, with
> lots of hard-coded is_admin etc.
>
> Just the ability to tie a certain role to a certain thing would be
> amazing. It makes it really hard to have read-only users to generate
> reports with, so that we can show our funders how much people use our
> OpenStack cloud.
>
> Cheers,
> Sam
> (non-enterprise)
>
>
>
> On 18 Jan 2017, at 6:10 am, Melvin Hillsman  wrote:
>
> Well said, as a consequence of this thread being on the mailing list, I
> hope that we can get *all* operators, end-users, and app-developers to
> respond. If you are aware of folks who do not fall under the "enterprise"
> label please encourage them directly to respond; I would encourage everyone
> to do the same.
>
> On Tue, Jan 17, 2017 at 11:52 AM, Silence Dogood 
> wrote:
>
>> I can see a huge problem with your contributing operators... all of them
>> are enterprise.
>>
>> Enterprise needs are radically different from those of small-to-medium
>> deployers, for whom OpenStack has traditionally failed to work well.
>>
>> On Tue, Jan 17, 2017 at 12:47 PM, Piet Kruithof 
>> wrote:
>>
>>> Sorry for the late reply, but wanted to add a few things.
>>>
>>> OpenStack UX did suggest to the foundation that the community needs a
>>> second survey that focuses exclusively on operators.  The rationale was
>>> that the user survey is primarily focused on marketing data and there isn't
>>> really a ton of space for additional questions that focuses exclusively on
>>> operators. We also recommended a second survey called a MaxDiff study that
>>> enabled operators to identify areas of improvement and also rate them in
>>> order of importance including distance.
>>>
>>> There is also an etherpad that asked operators three priorities for
>>> OpenStack:
>>>
>>> https://etherpad.openstack.org/p/mitaka-openstackux-enterprise-goals
>>>
>>> It was distributed about a year ago, so I'm not sure how much of it was
>>> relevant.  The list does include responses from folks at TWC, Walmart,
>>> Pacific Northwest Labs, BestBuy, Comcast, NTTi3 and the US government. It
>>> might be a good place for the group to add their own improvements as well
>>> as "+" other peoples suggestions.
>>>
>>> There is also a list of studies that have been conducted with operators
>>> on behalf of the community. The study included quotas, deployment and
>>> information needs. Note that the information needs study extended beyond
>>> docs to things like the need to easily google solutions and the need for
>>> SMEs.
>>>
>>> Hope this is helpful.
>>>
>>> Piet
>>>
>>> ___
>>> OPENSTACK USER EXPERIENCE STATUS
>>> The goal of this presentation is to provide an overview of research that
>>> was conducted on behalf of the OpenStack community.  All of the studies
>>> conducted on behalf of the OpenStack community were included in this
>>> presentation.
>>>
>>> Why this research matters:
>>> Consistency across projects has been identified as an issue in the user
>>> survey.
>>>
>>> Study design:
>>> This usability study, conducted at the OpenStack Austin Summit, observed
>>> 10 operators as they attempted to perform standard tasks in the OpenStack
>>> client.
>>>
>>> https://docs.google.com/presentation/d/1hZYCOADJ1gXiFHT1ahwv
>>> 8-tDIQCSingu7zqSMbKFZ_Y/edit#slide=id.p
>>>
>>>
>>>
>>> ___
>>> USER RESEARCH RESULTS: SEARCHLIGHT/HORIZON INTEGRATION
>>> Why this research matters:
>>> The Searchlight plug-in for Horizon aims to provide a consistent search
>>> API across OpenStack resources. To validate its suitability and ease of
>>> use, we evaluated it with cloud operators who use Horizon in their role.
>>>
>>> Study design:
>>> Five operators performed tasks that explored Searchlight’s filters,
>>> full-text capability, and multi-term search.
>>>
>>> https://docs.google.com/presentation/d/1TfF2sm98Iha-bNwBJrCT
>>> Cp6k49zde1Z8I9Qthx1moIM/edit?usp=sharing
>>>
>>>
>>>
>>> ___
>>> CLOUD OPERATOR INTERVIEWS: QUOTA MANAGEMENT AT PRODUCTION SCALE
>>> Why this research matters:
>>> The study was initiated following operator feedback identifying quotas
>>> as a challenge to manage at scale.
>>>
>>> Study design:
>>> One-on-one interviews with cloud operators sought to understand their
>>> methods for managing quotas at production 

Re: [openstack-dev] [requirements] changing the order of values in upper-constraints.txt

2017-01-18 Thread Jeremy Stanley
On 2017-01-18 16:34:56 -0500 (-0500), Doug Hellmann wrote:
> When I tried to merge the upper-constraints updates for the library
> releases we did today, I ran into quite a lot of merge conflicts
> with the Oslo libraries. I'm exploring options for reducing the
> likelihood that those sorts of conflicts will occur with a few
> patches that change how we generate the constraints list.
[...original options omitted...]

Have you considered following a similar pattern to requirements
updates? Namely, have the job check Gerrit to find out if a
constraints update change is already open. If not, create one like
now. If so, retrieve the last patchset, amend it with the desired
edit, and then push it back into Gerrit.
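
A minimal sketch of that check (this assumes the public Gerrit REST API and an
illustrative topic name; it is not the actual proposal job):

    import json
    import requests

    GERRIT = 'https://review.openstack.org'

    def open_constraints_change(topic='constraints-updates'):
        resp = requests.get(
            GERRIT + '/changes/',
            params={'q': 'status:open project:openstack/requirements '
                         'topic:' + topic})
        # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI,
        # so drop the first line before parsing.
        changes = json.loads(resp.text.split('\n', 1)[1])
        return changes[0] if changes else None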

This approach may be too simplified since (unlike requirements
updates) these changes aren't idempotent and so could race with one
another if a lot of releases happen in one go. However, in practice
we only have one job node which creates these and it doesn't run any
jobs in parallel, so we'd never actually see the race. Odds are
we'll just implement a serializing solution for things like this (so
that two jobs with the same name in some specific pipeline won't run
concurrently) if we ever get to the point where we need to
horizontally scale our static nodes, since we probably have more
cases like this already that we'll need to solve for.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [keystone] 2017-1-11 policy meeting

2017-01-18 Thread Lance Bragstad
Looping this into the operator's list, too!

On Wed, Jan 18, 2017 at 2:13 PM, Lance Bragstad  wrote:

> Thanks to Morgan in today's policy meeting [0], we were able to shed some
> light on the reasons for keystone having two policy files. The main reason
> a second policy file was introduced was to recenter RBAC around concepts
> introduced in the V3 API. The problem was that the policy file that came
> later [1] wasn't a drop in replacement for the initial one because it
> required new roles in order to work properly. Switching to the newer policy
> file by default would break deployers who did nothing but implement the
> basic RBAC roles required by the initial version [2]. At the time there was
> no real way to "migrate" from one policy file to another, so two were
> maintained in tree.
>
> Consolidating to a single file, or set of defaults, has benefits for
> maintainers and deployers, so we covered paths to accomplish that. We were
> able to come up with three paths forward.
>
>1. Drop support for the original/initial policy file and only maintain
>policy.v3cloudsample.json
>2. Leverage `keystone-manage bootstrap` to create the new roles
>required by policy.v3cloudsample.json
>3. Codify the existing policy file using oslo.policy as a vehicle to
>introduce new defaults from policy.v3cloudsample.json
>
> Everyone seemed to agree the 1st option was the most painful for everyone.
> Option 2 (and maybe 3) would more than likely require some sort of upgrade
> documentation that describes the process.
>
> Without swaying anyone's opinion, I think I tend to lean towards option 3
> because it sounds similar to what nova has done, or is going to do. After
> talking to John Garbutt about some of their nova work, it sounded like one
> of their next steps was to re-evaluate all RBAC roles/rules now that they
> have them in code. If they come across an operation that would benefit from
> a different default value, they can use oslo.policy to deprecate or propose
> a new default (much like how we use oslo.config for changing or deprecating
> configuration values today). From a keystone perspective, this would
> effectively mean we would move what we have in policy.json into code, then
> do the same exercise with policy.v3cloudsample.json. The result would be 0
> policy files to maintain in tree and everything would be in code. From
> there - we can work with other projects to standardize on what various
> roles mean across OpenStack (hopefully following some sort of guide or
> document).
>
> I'm excited to hear what others think of the current options, or if there
> is another path forward we missed.
>
>
> [0] http://eavesdrop.openstack.org/meetings/policy/2017/
> policy.2017-01-18-16.00.log.html
> [1] https://github.com/openstack/keystone/blob/
> 7f2b7e58e74c79e5a09bd5c20e0de9c15d9eabd0/etc/policy.v3cloudsample.json
> [2] https://github.com/openstack/keystone/blob/
> 7f2b7e58e74c79e5a09bd5c20e0de9c15d9eabd0/etc/policy.json
>
> On Wed, Jan 11, 2017 at 11:28 AM, Lance Bragstad 
> wrote:
>
>> Hey folks,
>>
>> In case you missed the policy meeting today, we had a good discussion [0]
>> around incorporating keystone's policy into code using the Nova approach.
>>
>> Keystone is in a little bit of a unique position since we maintain two
>> different policy files [1] [2], and there were a lot of questions around
>> why we have two. This same topic came up in a recent keystone meeting, and
>> we wanted to loop Henry Nash into the conversation, since I believe he
>> spearheaded a lot of the original policy.v3cloudsample work.
>>
>> Let's see if we can air out some of that tribal knowledge and answer a
>> couple questions.
>>
>> What was the main initiative for introducing policy.v3cloudsample.json?
>>
>> Is it possible to consolidate the two?
>>
>>
>> [0] http://eavesdrop.openstack.org/meetings/policy/2017/
>> policy.2017-01-11-16.00.log.html
>> [1] https://github.com/openstack/keystone/blob/master/etc/
>> policy.v3cloudsample.json
>> [2] https://github.com/openstack/keystone/blob/master/etc/policy.json
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [keystone] 2017-1-11 policy meeting

2017-01-18 Thread Lance Bragstad
Looping this into the operator's list, too!

On Wed, Jan 18, 2017 at 2:13 PM, Lance Bragstad  wrote:

> Thanks to Morgan in today's policy meeting [0], we were able to shed some
> light on the reasons for keystone having two policy files. The main reason
> a second policy file was introduced was to recenter RBAC around concepts
> introduced in the V3 API. The problem was that the policy file that came
> later [1] wasn't a drop in replacement for the initial one because it
> required new roles in order to work properly. Switching to the newer policy
> file by default would break deployers who did nothing but implement the
> basic RBAC roles required by the initial version [2]. At the time there was
> no real way to "migrate" from one policy file to another, so two were
> maintained in tree.
>
> Consolidating to a single file, or set of defaults, has benefits for
> maintainers and deployers, so we covered paths to accomplish that. We were
> able to come up with three paths forward.
>
>1. Drop support for the original/initial policy file and only maintain
>policy.v3cloudsample.json
>2. Leverage `keystone-manage bootstrap` to create the new roles
>required by policy.v3cloudsample.json
>3. Codify the existing policy file using oslo.policy as a vehicle to
>introduce new defaults from policy.v3cloudsample.json
>
> Everyone seemed to agree the 1st option was the most painful for everyone.
> Option 2 (and maybe 3) would more than likely require some sort of upgrade
> documentation that describes the process.
>
> Without swaying anyone's opinion, I think I tend to lean towards option 3
> because it sounds similar to what nova has done, or is going to do. After
> talking to John Garbutt about some of their nova work, it sounded like one
> of their next steps was to re-evaluate all RBAC roles/rules now that they
> have them in code. If they come across an operation that would benefit from
> a different default value, they can use oslo.policy to deprecate or propose
> a new default (much like how we use oslo.config for changing or deprecating
> configuration values today). From a keystone perspective, this would
> effectively mean we would move what we have in policy.json into code, then
> do the same exercise with policy.v3cloudsample.json. The result would be 0
> policy files to maintain in tree and everything would be in code. From
> there - we can work with other projects to standardize on what various
> roles mean across OpenStack (hopefully following some sort of guide or
> document).
>
> I'm excited to hear what others think of the current options, or if there
> is another path forward we missed.
>
>
> [0] http://eavesdrop.openstack.org/meetings/policy/2017/
> policy.2017-01-18-16.00.log.html
> [1] https://github.com/openstack/keystone/blob/
> 7f2b7e58e74c79e5a09bd5c20e0de9c15d9eabd0/etc/policy.v3cloudsample.json
> [2] https://github.com/openstack/keystone/blob/
> 7f2b7e58e74c79e5a09bd5c20e0de9c15d9eabd0/etc/policy.json
>
> On Wed, Jan 11, 2017 at 11:28 AM, Lance Bragstad 
> wrote:
>
>> Hey folks,
>>
>> In case you missed the policy meeting today, we had a good discussion [0]
>> around incorporating keystone's policy into code using the Nova approach.
>>
>> Keystone is in a little bit of a unique position since we maintain two
>> different policy files [1] [2], and there were a lot of questions around
>> why we have two. This same topic came up in a recent keystone meeting, and
>> we wanted to loop Henry Nash into the conversation, since I believe he
>> spearheaded a lot of the original policy.v3cloudsample work.
>>
>> Let's see if we can air out some of that tribal knowledge and answer a
>> couple questions.
>>
>> What was the main initiative for introducing policy.v3cloudsample.json?
>>
>> Is it possible to consolidate the two?
>>
>>
>> [0] http://eavesdrop.openstack.org/meetings/policy/2017/
>> policy.2017-01-11-16.00.log.html
>> [1] https://github.com/openstack/keystone/blob/master/etc/
>> policy.v3cloudsample.json
>> [2] https://github.com/openstack/keystone/blob/master/etc/policy.json
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] What would you like in Pike?

2017-01-18 Thread Sam Morrison
I would love it if all the projects' policy.json files were actually usable. Too 
many times policy.json isn't the only place where authN happens, with lots of 
hard-coded is_admin etc.

Just the ability to tie a certain role to a certain thing would be amazing. 
It makes it really hard to have read-only users to generate reports with, so 
that we can show our funders how much people use our OpenStack cloud.

Cheers,
Sam
(non-enterprise)



> On 18 Jan 2017, at 6:10 am, Melvin Hillsman  wrote:
> 
> Well said, as a consequence of this thread being on the mailing list, I hope 
> that we can get all operators, end-users, and app-developers to respond. If 
> you are aware of folks who do not fall under the "enterprise" label please 
> encourage them directly to respond; I would encourage everyone to do the same.
> 
> On Tue, Jan 17, 2017 at 11:52 AM, Silence Dogood  > wrote:
> I can see a huge problem with your contributing operators... all of them are 
> enterprise.
> 
> Enterprise needs are radically different from those of small-to-medium 
> deployers, for whom OpenStack has traditionally failed to work well.
> 
> On Tue, Jan 17, 2017 at 12:47 PM, Piet Kruithof  > wrote:
> Sorry for the late reply, but wanted to add a few things.
> 
> OpenStack UX did suggest to the foundation that the community needs a second 
> survey that focuses exclusively on operators.  The rationale was that the 
> user survey is primarily focused on marketing data and there isn't really a 
> ton of space for additional questions that focuses exclusively on operators. 
> We also recommended a second survey called a MaxDiff study that enabled 
> operators to identify areas of improvement and also rate them in order of 
> importance including distance.
> 
> There is also an etherpad that asked operators three priorities for OpenStack:
> 
> https://etherpad.openstack.org/p/mitaka-openstackux-enterprise-goals 
> 
> 
> It was distributed about a year ago, so I'm not sure how much of it was 
> relevant.  The list does include responses from folks at TWC, Walmart, 
> Pacific Northwest Labs, BestBuy, Comcast, NTTi3 and the US government. It 
> might be a good place for the group to add their own improvements as well as 
> "+" other peoples suggestions.
> 
> There is also a list of studies that have been conducted with operators on 
> behalf of the community. The study included quotas, deployment and 
> information needs. Note that the information needs study extended beyond docs 
> to things like the need to easily google solutions and the need for SMEs.
> 
> Hope this is helpful.  
> 
> Piet
> 
> ___
> OPENSTACK USER EXPERIENCE STATUS
> The goal of this presentation is to provide an overview of research that was 
> conducted on behalf of the OpenStack community.  All of the studies conducted 
> on behalf of the OpenStack community were included in this presentation. 
> 
> Why this research matters:
> Consistency across projects has been identified as an issue in the user 
> survey.
> 
> Study design:
> This usability study, conducted at the OpenStack Austin Summit, observed 10 
> operators as they attempted to perform standard tasks in the OpenStack client.
> 
> https://docs.google.com/presentation/d/1hZYCOADJ1gXiFHT1ahwv8-tDIQCSingu7zqSMbKFZ_Y/edit#slide=id.p
>  
> 
>  
> 
> 
> 
> ___
> USER RESEARCH RESULTS: SEARCHLIGHT/HORIZON INTEGRATION
> Why this research matters:
> The Searchlight plug-in for Horizon aims to provide a consistent search API 
> across OpenStack resources. To validate its suitability and ease of use, we 
> evaluated it with cloud operators who use Horizon in their role.
> 
> Study design:
> Five operators performed tasks that explored Searchlight’s filters, full-text 
> capability, and multi-term search.
> 
> https://docs.google.com/presentation/d/1TfF2sm98Iha-bNwBJrCTCp6k49zde1Z8I9Qthx1moIM/edit?usp=sharing
>  
> 
>  
> 
> 
> 
> ___
> CLOUD OPERATOR INTERVIEWS: QUOTA MANAGEMENT AT PRODUCTION SCALE
> Why this research matters:
> The study was initiated following operator feedback identifying quotas as a 
> challenge to manage at scale.
> 
> Study design:
> One-on-one interviews with cloud operators sought to understand their methods 
> for managing quotas at production scale.
> 
> https://docs.google.com/presentation/d/1J6-8MwUGGOwy6-A_w1EaQcZQ1Bq2YWeB-kw4vCFxbwM/edit
>  
> 
> 
> 
> 
> ___
> CLOUD OPERATOR INTERVIEWS: INFORMATION NEEDS
> Why this research matters:
> Documentation has been consistently identified as an issue by operators 
> during the 

[openstack-dev] [neutron] "Setup firewall filters only for required ports" bug

2017-01-18 Thread Bernard Cafarelli
Hi neutrinos,

I would like your feedback on the changeset mentioned in the title [1]
(yes, it has been in since Liberty).

With this patch, we (should) skip ports with
port_security_enabled=False or with an empty list of security groups
when processing added ports [2]. But we found multiple problems here:

* Ports created with port_security_enabled=False

This is the original bug that started this mail: if the FORWARD
iptables chain has a REJECT default policy/last rule, the traffic is
still blocked [3]. There is also a launchpad bug with similar details
[4].
The problem here: these ports must not be skipped, as we add specific
firewall rules to allow all traffic. These iptables rules have the
following comment:
"/* Accept all packets when port security is disabled. */"

With the current code, any port created with port security disabled will
not have these rules (and updates do not work).
I initially sent a patch to process these ports again [5], but there
is more (as detailed by some in the launchpad bug).

* Ports with no security groups, current code

There is a bug in the current agent code [6]: even with no security
groups, the check will return true, as the security_groups key exists
in the port details (with value "[]").
So the port will not be skipped.
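
To illustrate (a minimal sketch with a hypothetical port dict, not the actual
agent code): checking for the presence of the security_groups key is not the
same as checking whether the list is empty.

    # A port as delivered in the port details: the security_groups key is
    # always present, even when the list is empty.
    port = {'port_security_enabled': True, 'security_groups': []}

    # Buggy check: True even though the port has no security groups,
    # so the port is never skipped.
    has_sg_buggy = 'security_groups' in port

    # Intended check: look at the contents of the list instead.
    has_sg_intended = bool(port.get('security_groups'))

    print(has_sg_buggy, has_sg_intended)  # True False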

* Ports with no security groups, updated code

The next step was to update the checks (security groups list not empty, port
security True or None) and test again. This time the port was
skipped, but this showed up in openvswitch-agent.log:
2017-01-18 16:19:56.780 7458 INFO
neutron.agent.linux.iptables_firewall
[req-c49ca24f-1df8-40d7-8c48-6aab842ba34a - - - - -] Attempted to
update port filter which is not filtered
c2c58f8f-3b76-4c00-b792-f1726b28d2fc
2017-01-18 16:19:56.853 7458 INFO
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-c49ca24f-1df8-40d7-8c48-6aab842ba34a - - - - -] Configuration for
devices up [u'c2c58f8f-3b76-4c00-b792-f1726b28d2fc'] and devices down
[] completed.

This is the kind of log we saw in the first bug report. So as an
additional test, I tried to update this port, adding a security group.
New log entries:
2017-01-18 17:36:53.164 7458 INFO neutron.agent.securitygroups_rpc
[req-c49ca24f-1df8-40d7-8c48-6aab842ba34a - - - - -] Refresh firewall
rules
2017-01-18 17:36:55.873 7458 INFO
neutron.agent.linux.iptables_firewall
[req-c49ca24f-1df8-40d7-8c48-6aab842ba34a - - - - -] Attempted to
update port filter which is not filtered
0f2eea88-0e6a-4ea9-819c-e26eb692cb25
2017-01-18 17:36:58.587 7458 INFO
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-c49ca24f-1df8-40d7-8c48-6aab842ba34a - - - - -] Configuration for
devices up [u'0f2eea88-0e6a-4ea9-819c-e26eb692cb25'] and devices down
[] completed.

And the iptables configuration did not change to show the newly allowed ports.

So with a fixed check, we end up back in the same buggy situation as the
first one.

* Feedback

So which course of action should we take? After checking these 3 cases
out, I am in favour of reverting this commit entirely, as in its
current state it does not help for ports without security groups, and
breaks ports with port security disabled.

Also, on the tests side, should we add more tests that only use create
calls (the port_security tests mostly update an existing port)? How do we
make sure these iptables rules are correctly applied (the ping tests
are not enough, especially if the host system does not reject packets
by default)?

[1] https://review.openstack.org/#/c/210321/
[2] 
https://github.com/openstack/neutron/blob/a66c27193573ce015c6c1234b0f2a1d86fb85a22/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1640
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1406263
[4] https://bugs.launchpad.net/neutron/+bug/1549443
[5] https://review.openstack.org/#/c/421832/
[6] 
https://github.com/openstack/neutron/blob/a66c27193573ce015c6c1234b0f2a1d86fb85a22/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1521

Thanks!

-- 
Bernard Cafarelli

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] changing the order of values in upper-constraints.txt

2017-01-18 Thread Matthew Thode
I actually like the last option (sha) the most; even as a packager, I can
just take the file and sort it if I want something more human-readable.

-- 
Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] changing the order of values in upper-constraints.txt

2017-01-18 Thread Doug Hellmann
When I tried to merge the upper-constraints updates for the library
releases we did today, I ran into quite a lot of merge conflicts
with the Oslo libraries. I'm exploring options for reducing the
likelihood that those sorts of conflicts will occur with a few
patches that change how we generate the constraints list.

The first option is to insert some blank lines into the file so
that we won’t be changing consecutive lines. I don't know if this
will actually work, because it's not clear we have sufficient control
over the git context range. Still, for the sake of argument see
https://review.openstack.org/422205 and https://review.openstack.org/422251
for the sample output.

The second option is one proposed by Dirk, which changes the name
to remove common prefix values. This one is rather simple, and it's
possible to explain to a human who wants to add a new line to the file
by hand how to figure out where it goes. See
https://review.openstack.org/45 and https://review.openstack.org/422239
for the sample.

The final option uses a SHA1 hash of the name as the sort key. It
wouldn't be easy for a human to update the file by hand, but we
could make a tool that does. I don't know how often that case comes
up, so I don't know how important it is. See
https://review.openstack.org/422245 and https://review.openstack.org/422246
for the sample output.
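
For illustration, a minimal sketch of that last idea (illustrative entries,
not the actual requirements tooling):

    # Order upper-constraints lines by the SHA1 of the package name so that
    # unrelated version bumps rarely touch adjacent lines (fewer conflicts).
    import hashlib

    constraints = [
        'oslo.config===3.22.0',
        'oslo.log===3.20.0',
        'requests===2.12.5',
    ]

    def sort_key(line):
        name = line.partition('===')[0]
        return hashlib.sha1(name.encode('utf-8')).hexdigest()

    for line in sorted(constraints, key=sort_key):
        print(line)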

I don't expect us to change this right now, but I had the time to
spend on it today and I thought it would be useful to have something
put together before we get to Atlanta.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [validations][ui] resetting state of validations

2017-01-18 Thread Dan Trainor
Hi -

Is there a way to reset the state of all the validations that have
previously run, back to the original state they were in prior to running?

Using the UI, for example, some validations (by design) run as soon as you
log in.  Others run after different actions are completed.  But there's a
state in which none of the validations have been run, prior to logging in
to the UI.  I want to re-run those validations as if I had logged in to the
UI for the first time, for testing purposes.

Thanks!
-dant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Douglas Mendizábal

I think that a Vault backend would only be valuable to folks who are
already using Vault.

For deployers who don't yet have a key management solution, a Vault
backend would not solve the problem of having to deploy yet another
service.  In fact it would make it worse since the deployer would have
to deploy both Vault AND Barbican to get a working solution.  It seems
to me that it would create the same concerns that folks are having
about deploying DogTag and Barbican to get a software-only solution.

I do like Vault, and I think that some of the things they've done with
the software-only configuration are pretty cool.  I spent some time
looking into what it would take to wire up Barbican to use Vault as a
backend, and the tricky part is being able to map Keystone auth to one
of Vault's many auth drivers.

For my use case, the effort of sorting out the auth mapping between
the two systems in addition to the overhead of running both Vault and
Barbican seemed like a bigger task than improving the Simple Crypto
driver to remove the encryption key from the conf file.

- Douglas

On 1/17/17 7:49 AM, Dave McCowan (dmccowan) wrote:
> 
> 
> On 1/16/17, 3:06 PM, "Ian Cordasco" 
> wrote:
> 
>> -Original Message- From: Dave McCowan (dmccowan)
>>  Reply: OpenStack Development Mailing List
>> (not for usage questions)  
>> Date: January 16, 2017 at 13:03:41 To: OpenStack Development
>> Mailing List (not for usage questions) 
>>  Subject:  Re: [openstack-dev]
>> [all] [barbican] [security] Why are projects trying to avoid
>> Barbican, still?
>>> Yep. Barbican supports four backend secret stores. [1]
>>> 
>>> The first (Simple Crypto) is easy to deploy, but not
>>> extraordinarily secure, since the secrets are encrypted using a
>>> static key defined in the barbican.conf file.
>>> 
>>> The second and third (PKCS#11 and KMIP) are secure, but require
>>> an HSM as a hardware base to encrypt and/or store the secrets. 
>>> The fourth (Dogtag) is secure, but requires a deployment of
>>> Dogtag to encrypt and store the secrets.
>>> 
>>> We do not currently have a secret store that is both highly
>>> secure and easy to deploy/manage.
>>> 
>>> We, the Barbican community, are very open to any ideas,
>>> blueprints, or patches on how to achieve this. In any of the
>>> homegrown per-project secret stores, has a solution been 
>>> developed that solves both of these?
>>> 
>>> 
>>> [1]
>>> 
>>> http://docs.openstack.org/project-install-guide/key-manager/draft/barbican-backend.html
>> 
>> So there seems to be a consensus that Vault is a good easy and
>> secure solution to deploy. Can Barbican use that as a backend
>> secret store?
> 
> Adding a new secret store plugin for Vault would be a welcome
> addition. We have documentation in our repo on how to write a new
> plugin. [1]   I can schedule some time at the PTG to plan for this
> in Pike if there are interested developers.
> 
> [1] 
> https://github.com/openstack/barbican/blob/master/doc/source/plugin/secret_store.rst
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [all] [goals] proposing a new goal: "Control Plane API endpoints deployment via WSGI"

2017-01-18 Thread Emilien Macchi
On Thu, Jan 12, 2017 at 8:40 PM, Emilien Macchi  wrote:
> Greetings OpenStack community,
>
> I have been looking for a Community Goal [1] that would directly help
> Operators and I found the "run API via WSGI" useful.
> So I've decided to propose this one as a goal for Pike, but I'll stay
> open to postponing it to Queens if our community thinks we already have
> too many goals for Pike.

Please provide feedback by next week; the TC will decide which goals we
pick between this one and the tempest-plugin one.
This one is more of a backup plan in case the tempest-plugin goal can't make
it for Pike.

Thanks for your time,

> Note that this goal might help to achieve 2 other goals later:
> - enable and test SSL everywhere
> - enable and test IPv6 everywhere
>
> Here's the draft:
> https://review.openstack.org/419706
>
> Any feedback is very welcome, thanks for reading so far.
>
> [1] https://etherpad.openstack.org/p/community-goals
> --
> Emilien Macchi



-- 
Emilien Macchi
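
For anyone wondering what the goal looks like in practice, a minimal sketch of
the idea (a generic WSGI app, not any real OpenStack service): the API exposes
a WSGI application object that a web server (for example Apache mod_wsgi or
uWSGI) hosts, instead of the service running its own standalone HTTP server.

    def application(environ, start_response):
        # Trivial stand-in for an API endpoint; a real service would route
        # the request through its normal middleware/router pipeline here.
        body = b'{"status": "ok"}'
        start_response('200 OK', [('Content-Type', 'application/json'),
                                  ('Content-Length', str(len(body)))])
        return [body]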

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all] [goals] proposing a new goal: "Control Plane API endpoints deployment via WSGI"

2017-01-18 Thread Emilien Macchi
On Thu, Jan 12, 2017 at 8:40 PM, Emilien Macchi  wrote:
> Greetings OpenStack community,
>
> I have been looking for a Community Goal [1] that would directly help
> Operators and I found the "run API via WSGI" useful.
> So I've decided to propose this one as a goal for Pike, but I'll stay
> open to postponing it to Queens if our community thinks we already have
> too many goals for Pike.

Please provide feedback by next week; the TC will decide which goals we
pick between this one and the tempest-plugin one.
This one is more of a backup plan in case the tempest-plugin goal can't make
it for Pike.

Thanks for your time,

> Note that this goal might help to achieve 2 other goals later:
> - enable and test SSL everywhere
> - enable and test IPv6 everywhere
>
> Here's the draft:
> https://review.openstack.org/419706
>
> Any feedback is very welcome, thanks for reading so far.
>
> [1] https://etherpad.openstack.org/p/community-goals
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Not running for Oslo PTL for Pike

2017-01-18 Thread gordon chung


On 03/01/17 03:03 PM, Joshua Harlow wrote:
> Hi Oslo folks (and others),
>
> Happy new year!
>
> After serving for about a year I think it's a good opportunity for
> myself to let another qualified individual run for Oslo PTL (seems
> common to only go for two terms and hand-off to another).
>
> So I just wanted to let folks know that I will be doing this, so that we
> can grow others in the community that wish to try out being a PTL.
>
> I don't plan on leaving the Oslo community btw, just want to make sure
> we spread the knowledge (and the fun!) of being a PTL.
>
> Hopefully I've been a decent PTL (with room to improve) during
> this time :-)
>

thanks for leading the oslo community and being the glue in OpenStack, Josh!

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Reminder: 1/19 is non-client library release freeze

2017-01-18 Thread Matt Riedemann
This is just a reminder that tomorrow (Thursday 1/19) is the non-client 
library release freeze for Ocata. This means if you have code that 
depends on something in a non-client library, like any of the oslo or 
os-* libraries, those changes have to be merged and released by EOD 
tomorrow or they won't get into Ocata.


If there is something you're dependent on for a feature in Nova in Ocata 
please let me know so I can keep an eye on it tomorrow.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] 2017-1-11 policy meeting

2017-01-18 Thread Lance Bragstad
Thanks to Morgan in today's policy meeting [0], we were able to shed some
light on the reasons for keystone having two policy files. The main reason
a second policy file was introduced was to recenter RBAC around concepts
introduced in the V3 API. The problem was that the policy file that came
later [1] wasn't a drop in replacement for the initial one because it
required new roles in order to work properly. Switching to the newer policy
file by default would break deployers who did nothing but implement the
basic RBAC roles required by the initial version [2]. At the time there was
no real way to "migrate" from one policy file to another, so two were
maintained in tree.

Consolidating to a single file, or set of defaults, has benefits for
maintainers and deployers, so we covered paths to accomplish that. We were
able to come up with three paths forward.

   1. Drop support for the original/initial policy file and only maintain
   policy.v3cloudsample.json
   2. Leverage `keystone-manage bootstrap` to create the new roles required
   by policy.v3cloudsample.json
   3. Codify the existing policy file using oslo.policy as a vehicle to
   introduce new defaults from policy.v3cloudsample.json

Everyone seemed to agree the 1st option was the most painful for everyone.
Option 2 (and maybe 3) would more than likely require some sort of upgrade
documentation that describes the process.

Without swaying anyone's opinion, I think I tend to lean towards option 3
because it sounds similar to what nova has done, or is going to do. After
talking to John Garbutt about some of their nova work, it sounded like one
of their next steps was to re-evaluate all RBAC roles/rules now that they
have them in code. If they come across an operation that would benefit from
a different default value, they can use oslo.policy to deprecate or propose
a new default (much like how we use oslo.config for changing or deprecating
configuration values today). From a keystone perspective, this would
effectively mean we would move what we have in policy.json into code, then
do the same exercise with policy.v3cloudsample.json. The result would be 0
policy files to maintain in tree and everything would be in code. From
there - we can work with other projects to standardize on what various
roles mean across OpenStack (hopefully following some sort of guide or
document).
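
As a concrete illustration of option 3, a minimal sketch of registering a
policy default in code with oslo.policy (the rule name and check string here
are only illustrative, not a statement of keystone's actual policy targets):

    from oslo_config import cfg
    from oslo_policy import policy

    # Defaults that would otherwise live in policy.json, now shipped in code.
    rules = [
        policy.RuleDefault(
            'identity:get_credential',
            'rule:admin_required',
            description='Retrieve a credential.'),
    ]

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_defaults(rules)

    # Anything still present in an on-disk policy file overrides these
    # defaults, which is what lets operators migrate gradually.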

I'm excited to hear what others think of the current options, or if there
is another path forward we missed.


[0] http://eavesdrop.openstack.org/meetings/policy/
2017/policy.2017-01-18-16.00.log.html
[1]
https://github.com/openstack/keystone/blob/7f2b7e58e74c79e5a09bd5c20e0de9c15d9eabd0/etc/policy.v3cloudsample.json
[2]
https://github.com/openstack/keystone/blob/7f2b7e58e74c79e5a09bd5c20e0de9c15d9eabd0/etc/policy.json

On Wed, Jan 11, 2017 at 11:28 AM, Lance Bragstad 
wrote:

> Hey folks,
>
> In case you missed the policy meeting today, we had a good discussion [0]
> around incorporating keystone's policy into code using the Nova approach.
>
> Keystone is in a little bit of a unique position since we maintain two
> different policy files [1] [2], and there were a lot of questions around
> why we have two. This same topic came up in a recent keystone meeting, and
> we wanted to loop Henry Nash into the conversation, since I believe he
> spearheaded a lot of the original policy.v3cloudsample work.
>
> Let's see if we can air out some of that tribal knowledge and answer a
> couple questions.
>
> What was the main initiative for introducing policy.v3cloudsample.json?
>
> Is it possible to consolidate the two?
>
>
> [0] http://eavesdrop.openstack.org/meetings/policy/
> 2017/policy.2017-01-11-16.00.log.html
> [1] https://github.com/openstack/keystone/blob/master/etc/policy.
> v3cloudsample.json
> [2] https://github.com/openstack/keystone/blob/master/etc/policy.json
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[OpenStack-Infra] Upcoming database maintenance affecting wiki

2017-01-18 Thread James E. Blair
Rackspace writes:

>Hello,
>A maintenance event has been scheduled.  Details below:
>
>Affected device(s)
>Wiki-MySQL
>Wiki-Dev-MySQL
>
>Maintenance window
>
>2017-01-19 0300-0400 CST
>
>Expected disruption
>
>Loss of access to database for 10-15 minutes
>
>To complete this maintenance and ensure the integrity of the
>environment, we will be rebooting listed devices during this time.
>Rackspace engineers will be on hand during and after the
>maintenance to make certain environments resume operations as
>expected.
>
>We apologize for any inconvenience this may cause.  Should you have
>any questions for Support, please don’t hesitate to reach out to us
>via ticket or phone.  Thank you.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Brant Knudson
On Wed, Jan 18, 2017 at 9:58 AM, Dave McCowan (dmccowan)  wrote:

>
> On Mon, Jan 16, 2017 at 7:35 AM, Ian Cordasco 
> wrote:
>
>> Hi everyone,
>>
>> I've seen a few nascent projects wanting to implement their own secret
>> storage to either replace Barbican or avoid adding a dependency on it.
>> When I've pressed the developers on this point, the only answer I've
>> received is to make the operator's lives simpler.
>>
>>
> This is my opinion, but I'd like to see Keystone use Barbican for storing
> credentials. It hasn't happened yet because nobody's had the time or
> inclination (what we have works). If this happened, we could deprecate the
> current way of storing credentials and require Barbican in a couple of
> releases. Then Barbican would be a required service. The Barbican team
> might find this to be the easiest route towards convincing other projects
> to also use Barbican.
>
> - Brant
>
>
> Can you provides some details on how you'd see this work?
> Since Barbican typically uses Keystone to authenticate users before
> determining which secrets they have access to, this leads to a circular
> logic.
>
> Barbican's main purpose is a secret manager.  It supports a variety of
> RBAC and ACL access control methods to determine if a request to
> read/write/delete a secret should be allowed or not.  For secret storage,
> Barbican itself needs a secure backend for storage.  There is a
> customizable plugin interface to access secure storage.  The current
> implementations can support a database with encryption, an HSM via KMIP,
> and Dogtag.
>
>
I haven't thought about it much, so I don't have the details figured out. Keystone
stores many types of secrets for users, and maybe you're thinking about the
user password being tricky. I'm thinking about the users' EC2 credentials
(for example). I don't think this would be difficult; it would involve
creating a credentials backend for keystone that supports barbican. Maybe
have a 'keystone' project for credentials keystone is storing? If you're
familiar with the Barbican interface, compare with keystone's credential
interface[0].

[0]
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/credential/backends/base.py#n26
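
As a very rough sketch of the shape I mean (mine, not a real driver, and the
method names only approximate the base interface linked above):

from keystone.credential.backends import base


class Barbican(base.CredentialDriverBase):
    """Hypothetical driver: keep credential blobs in Barbican secrets."""

    def __init__(self, barbican_client, ref_store):
        self.barbican = barbican_client  # e.g. barbicanclient.client.Client
        self.refs = ref_store            # maps credential_id -> secret href

    def create_credential(self, credential_id, credential):
        secret = self.barbican.secrets.create(name=credential_id,
                                              payload=credential['blob'])
        self.refs[credential_id] = secret.store()
        return credential

    def get_credential(self, credential_id):
        secret = self.barbican.secrets.get(self.refs[credential_id])
        return {'id': credential_id, 'blob': secret.payload}

How keystone itself authenticates to Barbican without a chicken-and-egg
problem is the part that still needs thought, per Dave's question.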

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-18 Thread Adrian Otto

On Jan 18, 2017, at 10:48 AM, Mark Baker 
> wrote:

Hi Adrian,

Let me know if you have similar questions or concerns about Ubuntu Core with 
Magnum.

Mark

Thanks Mark! Is there any chance you, or an Ubuntu Core representative, could 
join us for a discussion at the PTG, and/or an upcoming IRC team meeting? 
Supported operating system images for our cluster drivers are a current topic 
of team conversation, and it would be helpful to have clarity on what 
(support/dev/test) resources upstream Linux packagers may be able to offer to 
help guide that conversation.

To give you a sense, we have a Suse-specific k8s driver that has been 
maturing during the Ocata release cycle, our Mesos driver uses Ubuntu Server, 
our Swarm and k8s drivers use Fedora Atomic, and another newer k8s driver uses 
Fedora. The topic of Operating System (OS) support for cluster nodes (versus 
what OS the containers are based on) is confusing for many cloud operators, so 
it would be helpful if we worked on clarifying the options and involved 
stakeholders from various OS distributions, so that suitable options are 
available for those who prefer to form Magnum clusters from OS images composed 
from one particular OS or another.

Ideally we could have this discussion at the PTG in Atlanta with participants 
like our core reviewers, Josh Berkus, you, our Suse contributors, and any other 
representatives from OS distribution organizations who may have an interest in 
cluster drivers for their respective OS types. If that discussion proves 
productive, we could also engage our wider contributor base in a followup IRC 
team meeting with a dedicated agenda item to cover what’s possible, and 
summarize what various stakeholders provided to us as input at the PTG. This 
might give us a chance to source further input from a wider audience than our 
PTG attendees.

Thoughts?

Thanks,

Adrian


On 18 Jan 2017 8:36 p.m., "Adrian Otto" 
> wrote:
Josh,

> On Jan 18, 2017, at 10:18 AM, Josh Berkus 
> > wrote:
>
> Magnum Devs:
>
> Is there going to be a magnum team meeting around OpenStack Summit in
> Boston?
>
> I'm the community manager for Atomic Host, so if you're going to have
> Magnum meetings, I'd like to send you some Atomic engineers to field any
> questions/issues at the Summit.

Thanks for your question. We are planning to have our team design meetings at 
the upcoming PTG event in Atlanta. We are not currently planning to have any 
such meetings in Boston. With that said, we would very much like to involve you 
in an important Atomic related design decision that has recently surfaced, and 
would like to welcome you to an upcoming Magnum IRC team meeting to meet you 
and explain our interests and concerns. I do expect to attend the Boston summit 
myself, so I’m willing to meet you and your engineers on behalf of our team if 
you are unable to attend the PTG. I’ll reach out to you individually by email 
to explore our options for an Atomic Host meeting agenda item in the mean time.

Regards,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-18 Thread Mark Baker
Hi Adrian,

Let me know if you have similar questions or concerns about Ubuntu Core
with Magnum.

Mark

On 18 Jan 2017 8:36 p.m., "Adrian Otto"  wrote:

> Josh,
>
> > On Jan 18, 2017, at 10:18 AM, Josh Berkus  wrote:
> >
> > Magnum Devs:
> >
> > Is there going to be a magnum team meeting around OpenStack Summit in
> > Boston?
> >
> > I'm the community manager for Atomic Host, so if you're going to have
> > Magnum meetings, I'd like to send you some Atomic engineers to field any
> > questions/issues at the Summit.
>
> Thanks for your question. We are planning to have our team design meetings
> at the upcoming PTG event in Atlanta. We are not currently planning to have
> any such meetings in Boston. With that said, we would very much like to
> involve you in an important Atomic related design decision that has
> recently surfaced, and would like to welcome you to an upcoming Magnum IRC
> team meeting to meet you and explain our interests and concerns. I do
> expect to attend the Boston summit myself, so I’m willing to meet you and
> your engineers on behalf of our team if you are unable to attend the PTG.
> I’ll reach out to you individually by email to explore our options for an
> Atomic Host meeting agenda item in the mean time.
>
> Regards,
>
> Adrian
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-18 Thread Adrian Otto
Josh,

> On Jan 18, 2017, at 10:18 AM, Josh Berkus  wrote:
> 
> Magnum Devs:
> 
> Is there going to be a magnum team meeting around OpenStack Summit in
> Boston?
> 
> I'm the community manager for Atomic Host, so if you're going to have
> Magnum meetings, I'd like to send you some Atomic engineers to field any
> questions/issues at the Summit.

Thanks for your question. We are planning to have our team design meetings at 
the upcoming PTG event in Atlanta. We are not currently planning to have any 
such meetings in Boston. With that said, we would very much like to involve you 
in an important Atomic related design decision that has recently surfaced, and 
would like to welcome you to an upcoming Magnum IRC team meeting to meet you 
and explain our interests and concerns. I do expect to attend the Boston summit 
myself, so I’m willing to meet you and your engineers on behalf of our team if 
you are unable to attend the PTG. I’ll reach out to you individually by email 
to explore our options for an Atomic Host meeting agenda item in the mean time.

Regards,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Douglas Mendizábal

I'm very much interested in an out-of-the-box software-only backend
driver for Barbican.

I think that one of the reasons people have been hesitant to deploy
Barbican is that we claim that our Simple Crypto software-only driver
is "not secure in any way", when really we should be saying that it
provides minimal security which may or may not be acceptable to your
business.

I believe we could provide a level of security comparable to
software-only Vault with considerably less effort than it would take
to create a driver that can utilize Vault.

We could, for example, add a new API call to provide the encryption
key at runtime instead of requiring it to be present in the conf file.

- - Douglas

On 1/16/17 12:43 PM, Rob C wrote:
> 
> The last I checked, Rob, they also support DogTag IPA which is
> purely a Software based HSM. Hopefully the Barbican team can
> confirm this. -- Ian Cordasco
> 
> 
> Yup, that's my understanding too. However, that requires Barbican
> _and_ Dogtag, an even bigger overhead. Especially as at least
> historically Dogtag has been difficult to maintain. If you have a
> deployment already, there's a great synergy there. If you don't
> then it introduces a lot of overhead.
> 
> I'm interested to know if an out-of-the-box, stand-alone
> software-only version of Barbican would be any more appealing.
> 
> Cheers -Rob
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-18 Thread Doug Hellmann
Excerpts from Mikhail Fedosin's message of 2017-01-18 19:54:01 +0300:
> Hello!
> 
> In this letter I want to tell you the current status of Glare project and
> discuss its future development within the entire OpenStack community.
> 
> In the beginning I have to say a few words about myself - my name is Mike
> and I am the PTL of Glare. Currently I work as a consultant at Nokia, where
> we're developing the service as a universal catalog of binary data. As I
> understand it right, Nokia has big plans for this service, Moshe Elisha can
> tell you more about them.
> 
> And here I want to ask the community - how exactly Glare may be useful in
> OpenStack? Glare was developed as a repository for all possible data types,
> and it has many possible applications. For example, it's a storage of vm
> images for Nova. Currently Glance is used for this, but Glare has much more
> features and this transition is easy to implement. Then it's a storage of

Is there actually an upgrade path today, or is that something someone
would have to build?

> Tosca templates. We were discussing integration with Heat and storing
> templates and environments in Glare, also it may be interesting for TripleO
> project. Mistral will store its workflows in Glare, it has already been
> decided. I'm not sure if Murano project is still alive, but they already
> use Glare 0.1 from Glance repo and it will be removed soon (in Pike afaik),
> so they have no other options except to start using Glare v1. Finally there
> were rumors about storing torrent files from Ironic.

Glare is not currently an official project, and it seems to have
very few contributors during the Ocata time frame. Do either of
those things concern any of the project teams considering adding
it as a dependency? Do you have plans to address those?

Doug

> 
> Now let me briefly describe Glare features:
> 
>  * Versioning of artifacts - each artifact has a version in SemVer format
> and you can sort and filter by this field.
>  * Multiblob support - there can be several files and folders per one
> artifact.
>  * The ease of creating new artifact types with oslo_versionedobjects
> framework.
>  * Fair immutability - no one can change artifact when it's active.
>  * Multistore support - each artifact type data may be stored in different
> storages: images may go to Swift; heat templates may be stored directly in
> sql-database; for Docker Containers you can use Ceph, if you want.
>  * Advanced sorting and filtering with various operators.
>  * Uploaded data validation and conversion with hooks - for example, Glare
> may check if uploaded file was a valid Tosca template and return Bad
> Request if it's not.
> 
> If you're interested, I recorded several demos in asciinema, that describe
> how Glare works and present the most useful features. Another demo about
> uploading hooks will be recorded and published this week.
> 
> So, please tell me what you think and recommend in what direction we should
> develop the project. Thanks in advance!
> 
> Best,
> Mike
> 
> Useful links:
> [1] Api documentation in rst format:
> https://etherpad.openstack.org/p/glare-api
> [2] Basic artifact workflow on devstack: https://asciinema.org/a/97985
> [3] Listing of artifacts: https://asciinema.org/a/97986
> [4] Creating your own artifact type with oslo_vo:
> https://asciinema.org/a/97987
> [5] Locations, Tags, Links and Folders in Glare:
> https://asciinema.org/a/99771

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][nova][horizon][release] unable to update constraint for python-novaclient to 7.0.0

2017-01-18 Thread Doug Hellmann
Nice work tracking that down, thanks!

Excerpts from Diana Clarke's message of 2017-01-18 12:19:35 -0500:
> Fixed. I'm a big girl now, lol.
> 
> Cheers,
> 
> --diana
> 
> On Wed, Jan 18, 2017 at 9:42 AM, Doug Hellmann  wrote:
> > The automatically produced patch to update the constraints to include
> > python-novaclient 7.0.0 are failing on the horizon test job. Can someone
> > please look into whether that is still actually an issue so we can be
> > sure of including the client release in Ocata?
> >
> > Thanks,
> > Doug
> >
> > https://review.openstack.org/#/c/414170/
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-18 Thread Josh Berkus
Magnum Devs:

Is there going to be a magnum team meeting around OpenStack Summit in
Boston?

I'm the community manager for Atomic Host, so if you're going to have
Magnum meetings, I'd like to send you some Atomic engineers to field any
questions/issues at the Summit.

-- 
--
Josh Berkus
Project Atomic
Red Hat OSAS

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Clint Byrum
Excerpts from Dave McCowan (dmccowan)'s message of 2017-01-18 15:58:19 +:
> 
> On Mon, Jan 16, 2017 at 7:35 AM, Ian Cordasco 
> > wrote:
> Hi everyone,
> 
> I've seen a few nascent projects wanting to implement their own secret
> storage to either replace Barbican or avoid adding a dependency on it.
> When I've pressed the developers on this point, the only answer I've
> received is to make the operator's lives simpler.
> 
> 
> This is my opinion, but I'd like to see Keystone use Barbican for storing 
> credentials. It hasn't happened yet because nobody's had the time or 
> inclination (what we have works). If this happened, we could deprecate the 
> current way of storing credentials and require Barbican in a couple of 
> releases. Then Barbican would be a required service. The Barbican team might 
> find this to be the easiest route towards convincing other projects to also 
> use Barbican.
> 
> - Brant
> 
> Can you provides some details on how you'd see this work?
> Since Barbican typically uses Keystone to authenticate users before 
> determining which secrets they have access to, this leads to a circular logic.
> 
> Barbican's main purpose is a secret manager.  It supports a variety of RBAC 
> and ACL access control methods to determine if a request to read/write/delete 
> a secret should be allowed or not.  For secret storage, Barbican itself needs 
> a secure backend for storage.  There is a customizable plugin interface to 
> access secure storage.  The current implementations can support a database 
> with encryption, an HSM via KMIP, and Dogtag.

Just bootstrap the genesis admin credentials into Barbican and Keystone
the same way we bootstrap them into Keystone now. Once there's admin
creds, they can be validated separate from updating them, and there's
no circle anymore, Just two one-way dependencies.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Improving Vendor Driver Discoverability

2017-01-18 Thread Morales, Victor
Just an FYI, Ankur has been working on a Feature Classification Matrix in 
Neutron [1] which collects some of this information.

[1] https://review.openstack.org/#/c/318192/

Regards/Saludos
Victor Morales
Irc: electrocucaracha





On 1/13/17, 10:29 PM, "Mike Perez"  wrote:

>Hello all,
>
>In the spirit of recent Technical Committee discussions I would like to bring
>focus on how we're doing vendor driver discoverability. Today we're doing this
>with the OpenStack Foundation marketplace [1] which is powered by the driverlog
>project. In a nutshell, it is a big JSON file [2] that has information on which
>vendor solutions are supported by which projects in which releases. This
>information is then parsed to generate the marketplace so that users can
>discover them. As discussed in previous TC meetings [3] we need to recognize
>vendors that are trying to make great products work in OpenStack so that they
>can be successful, which allows our community to be successful and healthy.
>
>In the feedback I have received from various people in the community, some
>didn’t know how it worked, and were unhappy that the projects themselves
>weren’t owning this. I totally agree that project teams should own this and
>should be encouraged to be involved in the reviews. Today that’s not happening.
>I’d like to propose we come up with a way for the marketplace to be more
>community-driven by the projects that are validating these solutions.
>
>At the Barcelona Summit [4] we discussed ways to improve driverlog. Projects
>like Nova have a support matrix of hypervisors in their in-tree documentation.
>Various members of the Cinder project also expressed interest in using this
>solution. It was suggested in the session that the marketplace should just link
>to the projects' appropriate documentation. The problem with this solution is
>the information is not presented in a consistent way across projects, as
>driverlog does it today. We could accomplish this instead by using a parsable
>format that is stored in each appropriate project's git repository. I'm
>thinking of pretty much how driverlog works today, but broken up into
>individual projects.
>
>The marketplace can parse this information and present it in one place
>consistently. Projects may also continue to parse this information in their own
>documentation, and we can even write a common tool to do this. The way a vendor
>is listed here is based on being validated by the project team itself. Keeping
>things in the marketplace would also address the suggestions that came out of
>the recent feedback we received from various driver maintainers [4].
>
>The way validation works is completely up to the project team. In my research
>as shown in the Summit etherpad [5] there's a clear trend in projects doing
>continuous integration for validation. If we wanted to we could also have the
>marketplace give the current CI results, which was also requested in the
>feedback from driver maintainers.
>
>I would like to volunteer in creating the initial files for each project with
>what the marketplace says today.
>
>[1] - https://www.openstack.org/marketplace/drivers/
>[2] - 
>http://git.openstack.org/cgit/openstack/driverlog/tree/etc/default_data.json
>[3] - 
>http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-01-10-20.01.log.html#l-106
>[4] - 
>http://lists.openstack.org/pipermail/openstack-dev/2017-January/109855.html
>[5] - https://etherpad.openstack.org/p/driverlog-validation
>
>-- 
>Mike Perez
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] [I18n] Regarding Zanata upgrade plan to 3.9.6 with Xenial: Help is needed

2017-01-18 Thread Alex Eng
Yes. 4.0 has some major backend/feature changes

But it requires Wildfly 10 (I believe OpenStack is running on Wildfly 9)
and some configuration changes. I would be happy to help out if needed :)


-

Alex Eng
Senior Software Engineer
Globalisation Tools Engineering
DID: +61 3514 8262 
Mobile: +614 2335 3457 

Red Hat, Asia-Pacific Pty Ltd
Level 1, 193 North Quay
Brisbane 4000
Office: +61 7 3514 8100 
Fax: +61 7 3514 8199 
Website: www.redhat.com
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [I18n] Regarding Zanata upgrade plan to 3.9.6 with Xenial: Help is needed

2017-01-18 Thread Alex Eng
I think at the moment, it's best to upgrade to 3.9.6. We can worry about 4.0
later.


-

Alex Eng
Senior Software Engineer
Globalisation Tools Engineering
DID: +61 3514 8262 
Mobile: +614 2335 3457 

Red Hat, Asia-Pacific Pty Ltd
Level 1, 193 North Quay
Brisbane 4000
Office: +61 7 3514 8100 
Fax: +61 7 3514 8199 
Website: www.redhat.com

On Wed, Jan 18, 2017 at 10:45 AM, Clark Boylan  wrote:

> On Tue, Jan 17, 2017, at 01:55 PM, Alex Eng wrote:
> > Yes. 4.0 has some major backend/feature changes
> >
> > But it requires Wildfly 10 (i believe openstack is running on wildfly 9)
> > and some configuration changes. I would happy to help out if needed :)
>
> Yup we are running wildfly 9. That seems to be set in
> openstack-infra/system-config/manifests/site.pp.
>
> Maybe it is easier for now to do the jump to Xenial + java 8, then 3.9.6
> for the features, then figure out 4.0? Are the feature changes in 4.0
> worth skipping 3.9.6 for?
>
> Clark
>
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2017-01-18 Thread Julien Danjou
On Wed, Jan 18 2017, Mehdi Abaakouk wrote:

> So, I agree with gordc, perhaps you should stay with the old and
> unsupported lib. And let other to use the supported one.

The best option would be for the Monasca folks to actually participate
upstream in the Kafka driver development and help get the performance they
want. That would be more constructive than staying at an old version and
finger-pointing at the deficiencies of the new versions.

(Maybe they are doing that but I did not see any pointer toward that
direction).

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] [I18n] Regarding Zanata upgrade plan to 3.9.6 with Xenial: Help is needed

2017-01-18 Thread Clark Boylan
On Tue, Jan 17, 2017, at 09:11 PM, Alex Eng wrote:
> I think at the moment, its best to upgrade to 3.9.6. We can worry about
> 4.0
> later.

Sounds good, I have gone ahead and pushed up
https://review.openstack.org/422124 which should upgrade translate dev
for us. Ian, can you please check this review and +1 it when you are
ready to upgrade?

One thing I noticed when writing this change is that the puppet-zanata
manifest includes info for wildfly hibernate and wildfly mojarra
packages but those packages are for wildfly 8 and we are running 9 [1].
Checking in the sourceforge downloads dir I don't see any wildfly 9
equivalents. Does this mean that the wildfly 8 packages are fine to use
or maybe they are no longer necessary and we can clean this up?

[1]
https://git.openstack.org/cgit/openstack-infra/puppet-zanata/tree/manifests/init.pp#n28

Clark

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] [requirements][nova][horizon][release] unable to update constraint for python-novaclient to 7.0.0

2017-01-18 Thread Matt Riedemann

On 1/18/2017 11:19 AM, Diana Clarke wrote:

Fixed. I'm a big girl now, lol.

Cheers,

--diana



Nice, thanks for jumping on that.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-18 Thread Matt Riedemann

On 1/18/2017 10:54 AM, Mikhail Fedosin wrote:

Hello!

In this letter I want to tell you the current status of Glare project
and discuss its future development within the entire OpenStack community.

In the beginning I have to say a few words about myself - my name is
Mike and I am the PTL of Glare. Currently I work as a consultant at
Nokia, where we're developing the service as a universal catalog of
binary data. As I understand it right, Nokia has big plans for this
service, Moshe Elisha can tell you more about them.

And here I want to ask the community - how exactly Glare may be useful
in OpenStack? Glare was developed as a repository for all possible data
types, and it has many possible applications. For example, it's a
storage of vm images for Nova. Currently Glance is used for this, but
Glare has much more features and this transition is easy to implement.
Then it's a storage of Tosca templates. We were discussing integration
with Heat and storing templates and environments in Glare, also it may
be interesting for TripleO project. Mistral will store its workflows in
Glare, it has already been decided. I'm not sure if Murano project is
still alive, but they already use Glare 0.1 from Glance repo and it will
be removed soon (in Pike afaik), so they have no other options except to
start using Glare v1. Finally there were rumors about storing torrent
files from Ironic.

Now let me briefly describe Glare features:

 * Versioning of artifacts - each artifact has a version in SemVer
format and you can sort and filter by this field.
 * Multiblob support - there can be several files and folders per one
artifact.
 * The ease of creating new artifact types with oslo_versionedobjects
framework.
 * Fair immutability - no one can change artifact when it's active.
 * Multistore support - each artifact type data may be stored in
different storages: images may go to Swift; heat templates may be stored
directly in sql-database; for Docker Containers you can use Ceph, if
you want.
 * Advanced sorting and filtering with various operators.
 * Uploaded data validation and conversion with hooks - for example,
Glare may check if uploaded file was a valid Tosca template and return
Bad Request if it's not.

If you're interested, I recorded several demos in asciinema, that
describe how Glare works and present the most useful features. Another
demo about uploading hooks will be recorded and published this week.

So, please tell me what you think and recommend in what direction we
should develop the project. Thanks in advance!

Best,
Mike

Useful links:
[1] Api documentation in rst format:
https://etherpad.openstack.org/p/glare-api
[2] Basic artifact workflow on devstack: https://asciinema.org/a/97985
[3] Listing of artifacts: https://asciinema.org/a/97986
[4] Creating your own artifact type with oslo_vo:
https://asciinema.org/a/97987
[5] Locations, Tags, Links and Folders in Glare:
https://asciinema.org/a/99771


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



What use cases does Glare make available to Nova that Nova doesn't 
already get from Glance? In other words, what problems/missing features 
are there in Nova that can't be solved by Glance but can by Glare?


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2017-01-18 Thread Mehdi Abaakouk

Thanks Joe for all these details. I can see that Monasca is still
not able to switch to the new lib, for very good reasons.

But according to your comment on https://review.openstack.org/#/c/420579/ :


I don't think that anyone currently using Monasca wants to accept either
of those options so we need to find a way to maintain the current data
guarantees while using the async behaviour of the new client library.
That takes time and engineering effort to make that happen.  Is there
anyone in the community willing to put in the effort to help build and
test these new features at scale?


Nobody has plans to fix these issues soon.

And regarding this, from the same review:


On another topic I'm curious what new features are you looking to get
out of the new library.  Is there anything we can do to help you get the
capabilities you want with the existing client?


I don't think asking other projects to use a deprecated and unsupported
lib version in new code is good; it's just adding fresh technical debt.

So, I agree with gordc: perhaps you should stay with the old and
unsupported lib, and let others use the supported one.
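
For illustration, here is a minimal sketch (mine, not Monasca code, assuming
kafka-python >= 1.x) of using the newer producer while still blocking on
delivery, which is roughly the guarantee the old SimpleProducer path gives
today:

from kafka import KafkaProducer
from kafka.errors import KafkaError

producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         acks='all',   # wait for the full ISR to ack
                         retries=3)

def publish(topic, payload):
    # payload must be bytes here (no serializer configured).
    # send() is async; calling .get() on the returned future makes the
    # call synchronous, so failures are surfaced instead of dropped.
    try:
        future = producer.send(topic, payload)
        return future.get(timeout=10)
    except KafkaError:
        # the caller decides whether to retry, buffer, or raise
        raise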


On Tue, Jan 17, 2017 at 11:58:25PM +, Keen, Joe wrote:

Tony, I have some observations on the new client based on a short term
test and a long running test.

For short term use it uses 2x the memory compared to the older client.
The logic that deals with receiving partial messages from Kafka was
completely rewritten in the 1.x series and with logging enabled I see
continual warnings about truncated messages.  I don’t lose any data
because of this but I haven’t been able to verify if it’s doing more reads
than necessary.  I don’t know that either of these problems are really a
sticking point for Monasca but the increase in memory usage is potentially
a problem.



Long term testing showed some additional problems.  On a Kafka server that
has been running for a couple weeks I can write data in but the
kafka-python library is no longer able to read data from Kafka.  Clients
written in other languages are able to read successfully.  Profiling of
the python-kafka client shows that it's spending all its time in a loop
attempting to connect to Kafka:

     ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      27615    0.086    0.000    0.086    0.000 {method 'acquire' of 'thread.lock' objects}
      43152    0.250    0.000    0.385    0.000 types.py:15(_unpack)
      43153    0.135    0.000    0.135    0.000 {_struct.unpack}
48040/47798    0.164    0.000    0.165    0.000 {len}
      60351    0.201    0.000    0.201    0.000 {method 'read' of '_io.BytesIO' objects}
    7389962   23.985    0.000   23.985    0.000 {method 'keys' of 'dict' objects}
        738  104.931    0.000  395.654    0.000 conn.py:560(recv)
        738   58.342    0.000  100.005    0.000 conn.py:722(_requests_timed_out)
        738   97.787    0.000  167.568    0.000 conn.py:588(_recv)
    7390071   46.596    0.000   46.596    0.000 {method 'recv' of '_socket.socket' objects}
    7390145   23.151    0.000   23.151    0.000 conn.py:458(connected)
    7390266   21.417    0.000   21.417    0.000 {method 'tell' of '_io.BytesIO' objects}
    7395664   41.695    0.000   41.695    0.000 {time.time}



I also see additional problems with the use of the deprecated
SimpleConsumer and SimpleProducer clients.  We really do need to
investigate migrating to the new async only Producer objects while still
maintaining the reliability guarantees that Monasca requires.


On 12/13/16, 10:01 PM, "Tony Breeds"  wrote:


On Mon, Dec 05, 2016 at 04:03:13AM +, Keen, Joe wrote:


I don’t know, yet, that we can.  Unless we can find an answer to the
questions I had above I’m not sure that this new library will be
performant and durable enough for the use cases Monasca has.  I’m fairly
confident that we can make it work but the performance issues with
previous versions prevented us from even trying to integrate so it will
take us some time.  If you need an answer more quickly than a week or
so,
and if anyone in the community is willing, I can walk them through the
testing I’d expect to happen to validate the new library.


Any updates Joe?  It's been 10 days and we're running close to Christmas, so
at this rate it'll be next year before we know if this is workable.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][nova][horizon][release] unable to update constraint for python-novaclient to 7.0.0

2017-01-18 Thread Diana Clarke
Fixed. I'm a big girl now, lol.

Cheers,

--diana

On Wed, Jan 18, 2017 at 9:42 AM, Doug Hellmann  wrote:
> The automatically produced patch to update the constraints to include
> python-novaclient 7.0.0 are failing on the horizon test job. Can someone
> please look into whether that is still actually an issue so we can be
> sure of including the client release in Ocata?
>
> Thanks,
> Doug
>
> https://review.openstack.org/#/c/414170/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2017-01-18 Thread gordon chung


On 17/01/17 06:58 PM, Keen, Joe wrote:
>
> I also see additional problems with the use of the deprecated
> SimpleConsumer and SimpleProducer clients.  We really do need to
> investigate migrating to the new async only Producer objects while still
> maintaining the reliability guarantees that Monasca requires.
>

is there a reason why you are against bumping Kafka up for OpenStack? it 
seems Monasca requires 0.9.5 and is content with it. but the oslo team 
has developed something that works well for the use case of the broader 
OpenStack ecosystem.

it seems a better solution would be to just allow monasca to stay 
as is and let the openstack requirements progress, rather than get blocked 
by a service that may or may not be deployed.

the same thing was done when we bumped the elasticsearch requirements. there 
are multiple projects using elasticsearch. we didn't have anyone working 
on it in Ceilometer, so rather than block the entire community, we let it 
proceed and could catch up later if it was urgent.

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-18 Thread Mikhail Fedosin
Hello!

In this letter I want to tell you about the current status of the Glare project
and discuss its future development within the entire OpenStack community.

To begin, I should say a few words about myself - my name is Mike
and I am the PTL of Glare. Currently I work as a consultant at Nokia, where
we're developing the service as a universal catalog of binary data. As I
understand it, Nokia has big plans for this service; Moshe Elisha can
tell you more about them.

And here I want to ask the community - how exactly might Glare be useful in
OpenStack? Glare was developed as a repository for all possible data types,
and it has many possible applications. For example, it can serve as a store of
VM images for Nova. Currently Glance is used for this, but Glare has many more
features and the transition is easy to implement. It can also store
Tosca templates: we have been discussing integration with Heat and storing
templates and environments in Glare, and it may also be interesting for the
TripleO project. Mistral will store its workflows in Glare; that has already
been decided. I'm not sure if the Murano project is still alive, but they
already use Glare 0.1 from the Glance repo and it will be removed soon (in Pike
afaik), so they have no option but to start using Glare v1. Finally, there
were rumors about storing torrent files from Ironic.

Now let me briefly describe Glare features:

 * Versioning of artifacts - each artifact has a version in SemVer format
and you can sort and filter by this field.
 * Multiblob support - there can be several files and folders per one
artifact.
 * The ease of creating new artifact types with the oslo_versionedobjects
framework (see the sketch after this list).
 * Fair immutability - no one can change artifact when it's active.
 * Multistore support - each artifact type data may be stored in different
storages: images may go to Swift; heat templates may be stored directly in
sql-database; for Docker Containers you can use Ceph, if you want.
 * Advanced sorting and filtering with various operators.
 * Uploaded data validation and conversion with hooks - for example, Glare
may check whether an uploaded file is a valid Tosca template and return Bad
Request if it's not.
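
As a rough illustration of the oslo_versionedobjects point above (a plain
oslo.versionedobjects sketch, not Glare's actual base classes or fields):

from oslo_versionedobjects import base
from oslo_versionedobjects import fields


@base.VersionedObjectRegistry.register
class ToscaTemplate(base.VersionedObject):
    # Hypothetical artifact type: Glare wraps this kind of definition in
    # its own artifact base class, so treat the names as illustrative.
    VERSION = '1.0'

    fields = {
        'name': fields.StringField(),
        'version': fields.StringField(),   # SemVer string, e.g. '1.2.0'
        'status': fields.StringField(default='drafted'),
    }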

If you're interested, I recorded several demos in asciinema that describe
how Glare works and present the most useful features. Another demo, about
upload hooks, will be recorded and published this week.

So, please tell me what you think and recommend in what direction we should
develop the project. Thanks in advance!

Best,
Mike

Useful links:
[1] Api documentation in rst format:
https://etherpad.openstack.org/p/glare-api
[2] Basic artifact workflow on devstack: https://asciinema.org/a/97985
[3] Listing of artifacts: https://asciinema.org/a/97986
[4] Creating your own artifact type with oslo_vo:
https://asciinema.org/a/97987
[5] Locations, Tags, Links and Folders in Glare:
https://asciinema.org/a/99771
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [acceleration]Team Bi-weekly Meeting 2017.01.18 Agenda

2017-01-18 Thread Zhipeng Huang
Hi Team,

Thanks for attending today's meeting, please find the minutes at
https://docs.google.com/document/d/18Dw90FFNv4n0_y1JSg8k3_Duk7zBXE_lbr8wlhksBkY/pub


On Wed, Jan 18, 2017 at 10:43 PM, Harm Sluiman 
wrote:

> I am afraid I am stuck in an alternate reality meeting this week. My
> apologies.
> I will work to get the other meeting moved in the future.
>
> On Tue, Jan 17, 2017 at 10:26 PM, Zhipeng Huang 
> wrote:
>
>> Hi Team,
>>
>> Please find the agenda at https://wiki.openstack.org/
>> wiki/Meetings/CyborgTeamMeeting#Agenda_for_next_meeting
>>
>> our IRC channel is #openstack-cyborg
>>
>>
>> --
>> Zhipeng (Howard) Huang
>>
>> Standard Engineer
>> IT Standard & Patent/IT Product Line
>> Huawei Technologies Co,. Ltd
>> Email: huangzhip...@huawei.com
>> Office: Huawei Industrial Base, Longgang, Shenzhen
>>
>> (Previous)
>> Research Assistant
>> Mobile Ad-Hoc Network Lab, Calit2
>> University of California, Irvine
>> Email: zhipe...@uci.edu
>> Office: Calit2 Building Room 2402
>>
>> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>>
>
>
>
> --
> 宋慢
> Harm Sluiman
>
>
>
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Problem with lvm thin provisioning and snapshots

2017-01-18 Thread Marco Marino
OK, problem solved (it seems).
In /etc/lvm/lvm.conf I need to set

auto_set_activation_skip = 0

and, in my case, I set the same in /etc/cinder/lvm.conf (because I'm using
a cluster configuration for LVM).

Thank you


2017-01-18 16:38 GMT+01:00 Marco Marino :

> Hi, I'm trying to use lvm thin provisioning with openstack cinder (mitaka)
> but I have a problem with snapshots. I'm trying to create a snapshot from a
> volume (detached) and then create a new volume from the snapshot.
>
> 1) Snapshot creation works well and I have (with lvs)
>
>   LV                                              VG               Attr        LSize   Pool                  Origin                                       Data%  Meta%  Move Log Cpy%Sync Convert
>   *_snapshot-f702a3b0-e021-471b-80c3-56cab0c1c1e6 cinder-volumes2  Vwi---tz-k   1.00g  cinder-volumes2-pool  volume-a60699c5-55c2-4dbc-b74a-64b51f2f4dd2*
>   activationvol                                   cinder-volumes2  -wi-a-      16.00m
>   cinder-volumes2-pool                            cinder-volumes2  twi-aotz--  18.98g                                                                     0.00   0.59
>   *volume-a60699c5-55c2-4dbc-b74a-64b51f2f4dd2    cinder-volumes2  Vwi-a-tz--   1.00g  cinder-volumes2-pool                                               0.00*
>
>
> 2) When I try to create the new volume from the snapshot I have an error:
> /bin/dd: failed to open /dev/mapper/cinder--volumes2-_
> snapshot--f702a3b0--e021--471b--80c3--56cab0c1c1e6\xe2\x80\x99: No such
> file or directory\n'
> But the new logical volume is created in the pool:
>
> [root@mitaka-cinder-volume1-env3 ~]# lvs
>   LV                                              VG               Attr        LSize   Pool                  Origin                                       Data%  Meta%  Move Log Cpy%Sync Convert
>   _snapshot-f702a3b0-e021-471b-80c3-56cab0c1c1e6  cinder-volumes2  Vwi---tz-k   1.00g  cinder-volumes2-pool  volume-a60699c5-55c2-4dbc-b74a-64b51f2f4dd2
>   activationvol                                   cinder-volumes2  -wi-a-      16.00m
>   cinder-volumes2-pool                            cinder-volumes2  twi-aotz--  18.98g                                                                     0.00   0.59
>   volume-8116-660f-4bfc-a27e-fa5e689578ce         cinder-volumes2  Vwi-a-tz--   1.00g  cinder-volumes2-pool                                               0.00
>   volume-a60699c5-55c2-4dbc-b74a-64b51f2f4dd2     cinder-volumes2  Vwi-a-tz--   1.00g  cinder-volumes2-pool                                               0.00
>
>
> The problem is that in /dev/mapper I don't have the link associated with
> the "snapshot" device. Is this a problem related to the operating system
> configuration? Or I missing something in cinder.conf?? Should I modify some
> setting in /etc/lvm/lvm.conf?
>
> More details about my configuration:
> [root@mitaka-cinder-volume1-env3 ~]# cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [root@mitaka-cinder-volume1-env3 ~]# rpm -qa | grep lvm
> lvm2-libs-2.02.166-1.el7_3.1.x86_64
> lvm2-2.02.166-1.el7_3.1.x86_64
>
> My configuration in cinder.conf:
>
> [lvm2]
> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
> volume_group = cinder-volumes2
> volume_backend_name = LVM_iSCSI2
> iscsi_protocol = iscsi
> iscsi_helper = lioadm
> iscsi_ip_address= 192.168.203.4
> volume_clear=zero
> volume_clear_size=30
> lvm_type = thin
> lvm_conf_file = /etc/cinder/lvm.conf <-- locking_type = 1, use_lvmetad =
> 0, volume_list = [ "@pacemaker" ]   (I'm using an active/passive cluster
> with an LVM resource for cinder-volumes2 VG)
> max_over_subscription_ratio = 1.0
>
>
> It seems that if I use thin provisioning the snapshot device doesn't
> exists even though I see it with lvm so dd command fails.
> I'm a bit confused. Any help will be really appreciated
>
> Thank you
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] short term roadmap (actions required)

2017-01-18 Thread Emilien Macchi
On Wed, Jan 18, 2017 at 9:57 AM, John Trowbridge  wrote:
>
>
> On 01/17/2017 04:36 PM, Emilien Macchi wrote:
>> I'm trying to dress a list of things important to know so we can
>> successfully deliver Ocata release, please take some time to read and
>> comment if needed.
>>
>> == Triaging Ocata & Pike bugs
>>
>> As we discussed in our weekly meeting, we decided to:
>>
>> * move ocata-3 low/medium unassigned bugs to pike-1
>> * move ocata-3 high/critical unassigned bugs to ocata-rc1
>> * keep ocata-3 In Progress bugs to ocata-3 until next week and move
>> them to ocata-rc1 if not fixed on time.
>>
>> Which means, if you plan to file a new bug:
>>
>> * low/medium: target it for pike-1
>> * high/critical: target it for ocata-rc1
>>
>> We still have 66 bugs In Progress for ocata-3. The top priority for
>> this week is to make progress on those bugs and close it on time for
>> ocata final release.
>>
>>
>> == Releasing tripleoclient next week
>>
>> If you're working on tripleoclient, you might want to help in fixing
>> the bugs still targeted for Ocata:
>> https://goo.gl/R2hO4Z
>> We'll release python-tripleoclient final ocata by next week.
>>
>>
>> == Freezing features next week
>>
>> If you're working on a feature in TripleO which is part of a blueprint
>> targeted for ocata-3, keep in mind you have until next week to make it
>> merged.
>> After January 27th, We will block (by a -2 in Gerrit) any patch that
>> adds a feature in master until we release Ocata and branch
>> stable/ocata.
>> Some exceptions can be made, but they have to be requested on
>> openstack-dev and team + PTL will decide if whether or not we accept
>> it.
>> If your blueprint is not High or Critical, there are a few chances we accept 
>> it.
>>
>>
>> == Preparing Pike together
>>
>> In case you missed it, we're preparing Pike sessions for next PTG:
>> https://etherpad.openstack.org/p/tripleo-ptg-pike
>> Feel free to propose a session and announce/discuss it on the
>> openstack-dev mailing-list.
>>
>>
>> == CI freeze
>>
>> From January 27th until final Ocata release, we will freeze any chance
>> in our CI, except critical fixes but they need to be reported in
>> Launchpad and team + PTL needs to know (ML openstack-dev).
>>
>
> I think this is a really good idea. Could we have one exception for
> changes to only the tripleo-quickstart toci scripts and the
> scripts/quickstart directory in tripleo-ci? Those files are only
> relevant to the quickstart jobs in the experimental queue, and we want
> to continue making progress stabilizing them in the last weeks of Ocata.

Of course. Changes for quickstart-only things are highly welcome
anytime, as we need to make progress on the transition.

>>
>> If there is any question or feedback, please don't hesitate to use this 
>> thread.
>>
>> Thanks and let's make Ocata our best release ever ;-)
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [charms] monitoring interface

2017-01-18 Thread Brad Marshall
Hi all,

We're looking at adding the monitor interface to the openstack charms to
enable us to use the nagios charm, rather than via an external nagios
using nrpe-external-master.  

I believe this will just be a matter of adding in the interface, adding
an appropriate monitor.yaml that defines the checks, and updating
charmhelpers.contrib.charmsupport.nrpe so that when it adds checks, it
passes the appropriate information onto the relationship.
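
For context, here is a rough sketch (illustrative, not from any specific
charm) of how that helper is used today; the idea would be for add_check()
to also publish the same data on the monitors relation:

from charmhelpers.contrib.charmsupport import nrpe

def update_nrpe_checks():
    # Existing pattern for nrpe-external-master.
    checks = nrpe.NRPE()
    checks.add_check(
        shortname='keystone_api',
        description='Check the keystone API is responding',
        check_cmd='check_http -I 127.0.0.1 -p 5000 -e "HTTP/1.1"',
    )
    checks.write()
    # Proposed (hypothetical) addition: hand the same check definitions
    # to the 'monitors' relation so the nagios charm can consume them.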

Are there any concerns with this approach? Any suggestions on things to
watch out for?  It does mean touching every charm, but I can't see any
other way around it.

Thanks,
Brad
-- 
Brad Marshall
Cloud Reliability Engineer
Bootstack Squad, Canonical

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Dave McCowan (dmccowan)

On Mon, Jan 16, 2017 at 7:35 AM, Ian Cordasco 
> wrote:
Hi everyone,

I've seen a few nascent projects wanting to implement their own secret
storage to either replace Barbican or avoid adding a dependency on it.
When I've pressed the developers on this point, the only answer I've
received is to make the operator's lives simpler.


This is my opinion, but I'd like to see Keystone use Barbican for storing 
credentials. It hasn't happened yet because nobody's had the time or 
inclination (what we have works). If this happened, we could deprecate the 
current way of storing credentials and require Barbican in a couple of 
releases. Then Barbican would be a required service. The Barbican team might 
find this to be the easiest route towards convincing other projects to also use 
Barbican.

- Brant

Can you provides some details on how you'd see this work?
Since Barbican typically uses Keystone to authenticate users before determining 
which secrets they have access to, this leads to a circular logic.

Barbican's main purpose is a secret manager.  It supports a variety of RBAC and 
ACL access control methods to determine if a request to read/write/delete a 
secret should be allowed or not.  For secret storage, Barbican itself needs a 
secure backend for storage.  There is a customizable plugin interface to access 
secure storage.  The current implementations can support a database with 
encryption, an HSM via KMIP, and Dogtag.
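
For readers less familiar with the client side, the flow looks roughly like
this (an illustrative sketch, not from any project; the endpoint and
credentials are made up):

from keystoneauth1 import identity, session
from barbicanclient import client

# Authenticate to Keystone first - which is exactly the circular
# dependency concern if Keystone itself were the caller.
auth = identity.Password(auth_url='http://keystone:5000/v3',
                         username='demo', password='secret',
                         project_name='demo',
                         user_domain_name='Default',
                         project_domain_name='Default')
sess = session.Session(auth=auth)
barbican = client.Client(session=sess)

# Store a secret and read it back, subject to Barbican's RBAC/ACL checks.
secret = barbican.secrets.create(name='example', payload='s3kr1t')
ref = secret.store()
print(barbican.secrets.get(ref).payload)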

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Problem with lvm thin provisioning and snapshots

2017-01-18 Thread Marco Marino
Hi, I'm trying to use lvm thin provisioning with openstack cinder (mitaka)
but I have a problem with snapshots. I'm trying to create a snapshot from a
volume (detached) and then create a new volume from the snapshot.

1) Snapshot creation works well and I have (with lvs)

  LV                                              VG               Attr        LSize   Pool                  Origin                                       Data%  Meta%  Move Log Cpy%Sync Convert
  *_snapshot-f702a3b0-e021-471b-80c3-56cab0c1c1e6 cinder-volumes2  Vwi---tz-k   1.00g  cinder-volumes2-pool  volume-a60699c5-55c2-4dbc-b74a-64b51f2f4dd2*
  activationvol                                   cinder-volumes2  -wi-a-      16.00m
  cinder-volumes2-pool                            cinder-volumes2  twi-aotz--  18.98g                                                                     0.00   0.59
  *volume-a60699c5-55c2-4dbc-b74a-64b51f2f4dd2    cinder-volumes2  Vwi-a-tz--   1.00g  cinder-volumes2-pool                                               0.00*


2) When I try to create the new volume from the snapshot I have an error:
/bin/dd: failed to open
/dev/mapper/cinder--volumes2-_snapshot--f702a3b0--e021--471b--80c3--56cab0c1c1e6\xe2\x80\x99:
No such file or directory\n'
But the new logical volume is created in the pool:

[root@mitaka-cinder-volume1-env3 ~]# lvs
  LV                                              VG               Attr        LSize   Pool                  Origin                                       Data%  Meta%  Move Log Cpy%Sync Convert
  _snapshot-f702a3b0-e021-471b-80c3-56cab0c1c1e6  cinder-volumes2  Vwi---tz-k   1.00g  cinder-volumes2-pool  volume-a60699c5-55c2-4dbc-b74a-64b51f2f4dd2
  activationvol                                   cinder-volumes2  -wi-a-      16.00m
  cinder-volumes2-pool                            cinder-volumes2  twi-aotz--  18.98g                                                                     0.00   0.59
  volume-8116-660f-4bfc-a27e-fa5e689578ce         cinder-volumes2  Vwi-a-tz--   1.00g  cinder-volumes2-pool                                               0.00
  volume-a60699c5-55c2-4dbc-b74a-64b51f2f4dd2     cinder-volumes2  Vwi-a-tz--   1.00g  cinder-volumes2-pool                                               0.00


The problem is that in /dev/mapper I don't have the link associated with
the "snapshot" device. Is this a problem related to the operating system
configuration? Or am I missing something in cinder.conf? Should I modify some
setting in /etc/lvm/lvm.conf?
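
One thing worth checking: the trailing 'k' in the snapshot's attributes
(Vwi---tz-k above) is LVM's activation-skip flag, which thin snapshots get by
default; until such a volume is activated with -K, no /dev/mapper link is
created. A rough Python sketch of that check (run as root on the cinder-volume
node; names are taken from the lvs output above):

    import subprocess

    VG = "cinder-volumes2"
    SNAP = "_snapshot-f702a3b0-e021-471b-80c3-56cab0c1c1e6"

    attrs = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_attr", "%s/%s" % (VG, SNAP)]
    ).decode().strip()
    print("lv_attr:", attrs)  # a trailing 'k' means activation is skipped

    if attrs.endswith("k"):
        # -K / --ignoreactivationskip is needed to activate such volumes
        subprocess.check_call(["lvchange", "-ay", "-K", "%s/%s" % (VG, SNAP)])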

More details about my configuration:
[root@mitaka-cinder-volume1-env3 ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[root@mitaka-cinder-volume1-env3 ~]# rpm -qa | grep lvm
lvm2-libs-2.02.166-1.el7_3.1.x86_64
lvm2-2.02.166-1.el7_3.1.x86_64

My configuration in cinder.conf:

[lvm2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes2
volume_backend_name = LVM_iSCSI2
iscsi_protocol = iscsi
iscsi_helper = lioadm
iscsi_ip_address= 192.168.203.4
volume_clear=zero
volume_clear_size=30
lvm_type = thin
lvm_conf_file = /etc/cinder/lvm.conf <-- locking_type = 1, use_lvmetad = 0,
volume_list = [ "@pacemaker" ]   (I'm using an active/passive cluster with
an LVM resource for cinder-volumes2 VG)
max_over_subscription_ratio = 1.0


It seems that if I use thin provisioning the snapshot device doesn't exist,
even though I can see it with lvs, so the dd command fails.
I'm a bit confused. Any help will be really appreciated.

Thank you
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Brant Knudson
On Mon, Jan 16, 2017 at 7:35 AM, Ian Cordasco 
wrote:

> Hi everyone,
>
> I've seen a few nascent projects wanting to implement their own secret
> storage to either replace Barbican or avoid adding a dependency on it.
> When I've pressed the developers on this point, the only answer I've
> received is to make the operator's lives simpler.
>
>
This is my opinion, but I'd like to see Keystone use Barbican for storing
credentials. It hasn't happened yet because nobody's had the time or
inclination (what we have works). If this happened, we could deprecate the
current way of storing credentials and require Barbican in a couple of
releases. Then Barbican would be a required service. The Barbican team
might find this to be the easiest route towards convincing other projects
to also use Barbican.

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress][oslo.config][keystone] NoSuchOptError: no such option project_domain_name in group [keystone_authtoken]

2017-01-18 Thread Brant Knudson
On Thu, Jan 12, 2017 at 4:31 PM, Eric K  wrote:

> On a freshly stacked devstack (Jan 12), attempting to access
> `cfg.CONF.keystone_authtoken.project_domain_name` gave the
> error: NoSuchOptError: no such option project_domain_name in group
> [keystone_authtoken]
>
> I’m a little confused because it’s part of the [keystone_authtoken] config
> section generated by devstack. Could anyone point me to where these options
> are declared (I’ve searched several repos) and maybe why this option
> doesn’t exist? Thanks a lot!
>
>
These options are for the auth token middleware. Services shouldn't be
using them directly.
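
For reference, the usual pattern is for a service to register and load its own
auth options through keystoneauth1 rather than reading [keystone_authtoken]
directly; a minimal sketch (the "service_auth" section name is just an example,
not something devstack generates):

    from keystoneauth1 import loading
    from oslo_config import cfg

    CONF = cfg.CONF
    GROUP = 'service_auth'  # example section name

    # register the auth/session options under the service's own section ...
    loading.register_auth_conf_options(CONF, GROUP)
    loading.register_session_conf_options(CONF, GROUP)

    # ... and, once CONF has been parsed by the service, build the auth/session
    auth = loading.load_auth_from_conf_options(CONF, GROUP)
    sess = loading.load_session_from_conf_options(CONF, GROUP, auth=auth)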

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Ocata Priority Sprint

2017-01-18 Thread Sean McGinnis
Just to raise awareness - we've discussed this in channel and in the
weekly meeting.

Today and tomorrow we will be doing a short sprint to focus on getting
the Active/Active HA and new Attach/Detach API work reviewed and merged.

Lists of patches for both efforts can be found in our review tracking
etherpad:

https://etherpad.openstack.org/p/cinder-spec-review-tracking

We welcome anyone who has time to review and test these patches to help
us get this out of the way with enough time left in the cycle to make
sure there are not any unintended side effects.

My main priority at this point is that we are able to add this code
without breaking any existing functionality. Hopefully we can get things
in without any major bugs, but I do think we are better off getting these
in without breaking things and then iterating on them, rather than holding
them off until we are 100% confident the new features work correctly.

Not that we want to merge bad code. I just don't want to see these
efforts delayed for yet another cycle because we are looking for
perfection. ;)

Any help is greatly appreciated.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] short term roadmap (actions required)

2017-01-18 Thread John Trowbridge


On 01/17/2017 04:36 PM, Emilien Macchi wrote:
> I'm trying to draw up a list of things that are important to know so we can
> successfully deliver Ocata release, please take some time to read and
> comment if needed.
> 
> == Triaging Ocata & Pike bugs
> 
> As we discussed in our weekly meeting, we decided to:
> 
> * move ocata-3 low/medium unassigned bugs to pike-1
> * move ocata-3 high/critical unassigned bugs to ocata-rc1
> * keep ocata-3 In Progress bugs to ocata-3 until next week and move
> them to ocata-rc1 if not fixed on time.
> 
> Which means, if you plan to file a new bug:
> 
> * low/medium: target it for pike-1
> * high/critical: target it for ocata-rc1
> 
> We still have 66 bugs In Progress for ocata-3. The top priority for
> this week is to make progress on those bugs and close them in time for
> the ocata final release.
> 
> 
> == Releasing tripleoclient next week
> 
> If you're working on tripleoclient, you might want to help in fixing
> the bugs still targeted for Ocata:
> https://goo.gl/R2hO4Z
> We'll release python-tripleoclient final ocata by next week.
> 
> 
> == Freezing features next week
> 
> If you're working on a feature in TripleO which is part of a blueprint
> targeted for ocata-3, keep in mind you have until next week to make it
> merged.
> After January 27th, we will block (by a -2 in Gerrit) any patch that
> adds a feature in master until we release Ocata and branch
> stable/ocata.
> Some exceptions can be made, but they have to be requested on
> openstack-dev, and the team + PTL will decide whether or not we accept
> them.
> If your blueprint is not High or Critical, there is little chance we will
> accept it.
> 
> 
> == Preparing Pike together
> 
> In case you missed it, we're preparing Pike sessions for next PTG:
> https://etherpad.openstack.org/p/tripleo-ptg-pike
> Feel free to propose a session and announce/discuss it on the
> openstack-dev mailing-list.
> 
> 
> == CI freeze
> 
> From January 27th until the final Ocata release, we will freeze any change
> in our CI, except critical fixes, but they need to be reported in
> Launchpad and the team + PTL need to know (ML openstack-dev).
> 

I think this is a really good idea. Could we have one exception for
changes to only the tripleo-quickstart toci scripts and the
scripts/quickstart directory in tripleo-ci? Those files are only
relevant to the quickstart jobs in the experimental queue, and we want
to continue making progress stabilizing them in the last weeks of Ocata.

> 
> If there is any question or feedback, please don't hesitate to use this 
> thread.
> 
> Thanks and let's make Ocata our best release ever ;-)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stable] nominating Alan Pevec (apevec) for stable release core

2017-01-18 Thread Davanum Srinivas
+1 from me. welcome Alan

On Wed, Jan 18, 2017 at 9:16 AM, Doug Hellmann  wrote:
> Based on Tony's recommendation, and Alan's recent review work, I
> am nominating Alan Pevec (apevec) to have core reviewer rights on
> stable releases in the openstack/releases repository.
>
> Release team, please either +1 or raise any concerns you have.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [acceleration]Team Bi-weekly Meeting 2017.01.18 Agenda

2017-01-18 Thread Harm Sluiman
I am afraid I am stuck in an alternate reality meeting this week, my
apologies. I will work to get the other meeting moved in the future.

On Tue, Jan 17, 2017 at 10:26 PM, Zhipeng Huang 
wrote:

> Hi Team,
>
> Please find the agenda at https://wiki.openstack.org/
> wiki/Meetings/CyborgTeamMeeting#Agenda_for_next_meeting
>
> our IRC channel is #openstack-cyborg
>
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>



-- 
宋慢
Harm Sluiman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][nova][horizon][release] unable to update constraint for python-novaclient to 7.0.0

2017-01-18 Thread Doug Hellmann
The automatically produced patch to update the constraints to include
python-novaclient 7.0.0 is failing on the horizon test job. Can someone
please look into whether that is still actually an issue so we can be
sure of including the client release in Ocata?

Thanks,
Doug

https://review.openstack.org/#/c/414170/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [architecture] Base services

2017-01-18 Thread Thierry Carrez
Hi everyone,

In OpenStack all components can assume that a number of external
services will be present and available for them to use (think: a message
queue), but we never had a clear name to describe them or a clear list.

Work has started[0] within the Architecture working group[1] to prepare
a definition for those "base services", a current list and a process for
growing that list.

This definition step is a prerequisite before we can have more strategic
discussions about adding new base services (like a distributed lock
manager or a secrets vault) that all OpenStack components could take
advantage of.

You can weigh in on the currently-proposed review[2], propose your own
change to the document, or join in future Architecture WG meetings[3] to
help us make progress on that. Once solidified, the proposal will be
pushed to the Technical Committee for final discussion and approval.

[0] https://review.openstack.org/421956
[1]
https://git.openstack.org/cgit/openstack/arch-wg/tree/doc/source/index.rst
[2] https://review.openstack.org/421957
[3] http://eavesdrop.openstack.org/#Architecture_Working_Group

Thanks!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] webob 1.7

2017-01-18 Thread Sean Dague
With the release targeted for 5 weeks out -
https://releases.openstack.org/ I agree with Ian that this is a
distraction at this point.

It should be a Pike priority, but I think chasing and validating this
for Ocata is the wrong call, as it is likely to impact a bunch of projects.

-Sean


On 01/18/2017 09:22 AM, Chuck Short wrote:
> Hi Ian,
> 
> I just read the bug report and I don't think the correct fix for this
> issue is to blacklist webob 1.7.0.
> The reason for this is that multiple distros are already using webob
> 1.7. Also projects like keystonemiddleware have made backwards
> compatible changes
> to accommodate newer versions of webob.
> 
> Regards
> chuck
> 
> On Wed, Jan 18, 2017 at 9:08 AM, Ian Cordasco  > wrote:
> 
> -Original Message-
> From: Chuck Short  >
> Reply: OpenStack Development Mailing List (not for usage questions)
>  >
> Date: January 18, 2017 at 08:01:46
> To: OpenStack Development Mailing List
>  >
> Subject:  [openstack-dev] [keystone] webob 1.7
> 
> > Hi
> >
> > We have been experiencing problems with newer versions of webob (webob 1.7).
> > Reading the changelog, it seems that the upstream developers have
> > introduced some backwards incompatibilities with previous versions of webob
> > that seem to be hitting keystone and possibly other projects as well
> > (nova/glance in particular). For keystone this bug has been reported in bug
> > #1657452. I would just like to get more developers' eyes on this particular
> > issue and possibly get a fix. I suspect it's starting to hit other distros
> > as well, or already has.
> 
> Hey Chuck,
> 
> This is also affecting Glance
> (https://bugs.launchpad.net/glance/+bug/1657459
> ). I suspect what we'll
> do for now is blacklist the 1.7.x releases in openstack/requirements.
> It seems a bit late in the cycle to bump the minimum version to 1.7.0
> so we can safely fix this without having to deal with
> incompatibilities between versions.
> 
> --
> Ian Cordasco
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia] IRC weekly meeting of Jan.18

2017-01-18 Thread Abed Abu-Dbai
Hi,

Active Active Topology discussion items:


1. Merging commits up to "Distributor image creation"   
https://review.openstack.org/#/c/403594
 
Updated etherpad doc:   
https://etherpad.openstack.org/p/Active-_Active_Topology_commits

2. Bug #1655656: https://bugs.launchpad.net/devstack/+bug/1655656
Meanwhile affecting "Cluster DB Tasks" 
https://review.openstack.org/#/c/409764

IRC details:
https://webchat.freenode.net/
IRC channel: 
#openstack-meeting-alt

Best Regards,
Abed Abu dbai (abeda)
IBM contractor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable] nominating Alan Pevec (apevec) for stable release core

2017-01-18 Thread Doug Hellmann
Based on Tony's recommendation, and Alan's recent review work, I
am nominating Alan Pevec (apevec) to have core reviewer rights on
stable releases in the openstack/releases repository.

Release team, please either +1 or raise any concerns you have.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] webob 1.7

2017-01-18 Thread Chuck Short
Hi Ian,

I just read the bug report and I don't think the correct fix for this issue
is to blacklist webob 1.7.0.
The reason for this is that multiple distros are already using webob 1.7.
Also projects like keystonemiddleware have made backwards compatible changes
to accommodate newer versions of webob.

Regards
chuck

On Wed, Jan 18, 2017 at 9:08 AM, Ian Cordasco 
wrote:

> -Original Message-
> From: Chuck Short 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: January 18, 2017 at 08:01:46
> To: OpenStack Development Mailing List 
> Subject:  [openstack-dev] [keystone] webob 1.7
>
> > Hi
> >
> > We have been experiencing problems with newer versions of webob (webob 1.7).
> > Reading the changelog, it seems that the upstream developers have
> > introduced some backwards incompatibilities with previous versions of webob
> > that seem to be hitting keystone and possibly other projects as well
> > (nova/glance in particular). For keystone this bug has been reported in bug
> > #1657452. I would just like to get more developers' eyes on this particular
> > issue and possibly get a fix. I suspect it's starting to hit other distros
> > as well, or already has.
>
> Hey Chuck,
>
> This is also affecting Glance
> (https://bugs.launchpad.net/glance/+bug/1657459). I suspect what we'll
> do for now is blacklist the 1.7.x releases in openstack/requirements.
> It seems a bit late in the cycle to bump the minimum version to 1.7.0
> so we can safely fix this without having to deal with
> incompatibilities between versions.
>
> --
> Ian Cordasco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [Devstack] Hard code of SSL_ENABLED_SERVICES in stack.sh cause not able to add other service for SSL

2017-01-18 Thread Rob Crittenden
Xin YD He wrote:
> Greetings,
> 
> I try to enable Zun using SSL, and add 2 statments in my local.conf,
> USE_SSL=TRUE
> SSL_ENABLED_SERVICES+=,zun
> 
> but it does not work. I check the log file and found
> SSL_ENABLED_SERVICES=key,nova,cinder,glance,s-proxy,neutron, does not
> have Zun at all.
> later I found in stack.sh, SSL_ENABLED_SERVICES is hard code.
> # Service to enable with SSL if ``USE_SSL`` is True
> SSL_ENABLED_SERVICES="key,nova,cinder,glance,s-proxy,neutron"
> 
> 
> if I add Zun to the hard code, and reinstall, zun SSL is okay. so i
> wonder if this is a devstack bug or i made mistake on the local.conf?

It's a bug. I have fix under review but it's been idle for quite a
while, https://review.openstack.org/#/c/345072/

rob

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [keystone] webob 1.7

2017-01-18 Thread David Stanek
On 18-Jan 08:59, Chuck Short wrote:
> Hi
> 
> We have been experiencing problems with newer versions of webob (webob 1.7).
> Reading the changelog, it seems that the upstream developers have
> introduced some backwards incompatibilities with previous versions of webob
> that seem to be hitting keystone and possibly other projects as well
> (nova/glance in particular). For keystone this bug has been reported in bug
> #1657452. I would just like to get more developers' eyes on this particular
> issue and possibly get a fix. I suspect it's starting to hit other distros
> as well, or already has.
> 

I've confirmed that this is an issue. I'll work on a fix. We can take 
further discussion to the bug tracker.

--
david stanek
web: https://www.dstanek.com
twitter: https://twitter.com/dstanek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] webob 1.7

2017-01-18 Thread Ian Cordasco
-Original Message-
From: Chuck Short 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 18, 2017 at 08:01:46
To: OpenStack Development Mailing List 
Subject:  [openstack-dev] [keystone] webob 1.7

> Hi
>
> We have been experiencing problems with newer versions of webob (webob 1.7).
> Reading the changelog, it seems that the upstream developers have
> introduced some backwards incompatibilities with previous versions of webob
> that seem to be hitting keystone and possibly other projects as well
> (nova/glance in particular). For keystone this bug has been reported in bug
> #1657452. I would just like to get more developers' eyes on this particular
> issue and possibly get a fix. I suspect it's starting to hit other distros
> as well, or already has.

Hey Chuck,

This is also affecting Glance
(https://bugs.launchpad.net/glance/+bug/1657459). I suspect what we'll
do for now is blacklist the 1.7.x releases in openstack/requirements.
It seems a bit late in the cycle to bump the minimum version to 1.7.0
so we can safely fix this without having to deal with
incompatibilities between versions.
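
For anyone curious what such a blacklist looks like, it is just an exclusion
specifier on the existing WebOb entry in openstack/requirements'
global-requirements.txt, along these lines (the version floor shown is
illustrative, not necessarily the current one):

    WebOb>=1.2.3,!=1.7.0  # MIT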

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-18 Thread Chris Dent


Last night in the TC meeting a topic[1] was a review[2] to
introduce a new tag 'assert:supports-api-compatibility' which:

defines the base expectations and requirements for a stable REST
API provided by a service

The tag document uses an API guideline, "Evaluating API Changes"[3],
as the reference for those expectations. That guideline is out of
date (see below) and needs to be refreshed and revalidated with
attention to modern concerns. I've started a review

https://review.openstack.org/#/c/421846/

within the api-wg to evolve the existing document into a new one
that provides an effective guideline for what API compatibility and
stability mean and how to make it happen in a service.

The review starts with the original text. The hope is that
commentary here in this thread and on the review will eventually
lead to the best document.

There are reasons for doing it that way, instead of starting from a
fresh new proposal or doing piecemeal edits on the existing
document:

* API compatibility over time is a fundamental aspect of the
  OpenStack interoperability story. We not only need to get it
  right, we need to make sure we get it understandable.

* We can't write an accurate document if we don't first have the
  conversation which ensures we are all talking about the same thing
  and using the same meanings when we use the same words. Starting
  the evaluation from a new document authored by a single person or
  an existing document predisposes the discussion.

What are the next steps? If you are engaged by this topic then:

* Read all these linked things to get some context.
* Comment in response to this email or on the review[0] with your
  thoughts, concerns, ideas.

The questions we are trying to answer include (but are not limited
to):

* What are API compatibility, stability and maturity?
* Why do we want those things? Or, in other words, why is it bad to
  not have those things?
* What are the situations, if any, when a project can legitimately
  claim to not follow stability (e.g., being new, alpha, etc)?
* When an API claims stability, what changes are considered
  acceptable or unacceptable?
* [Anything else you think is important.]

It's not necessarily the case that the answers to all these
questions will end up in the document, but we do need to know the
answers in order to make the document good. It may seem to some
participants that we've already answered a lot of these questions in
the past. That's fine. The point here is to revalidate and refresh.

If you're confused about where to put your thoughts, default to here
in email where we should be working to build a consensus (through
fairly meandering conversation) about the overall topic. The review
should be more about concrete additions or changes to the document
or annotations to indicate where it is wrong or has failed to
reflect the discussion here correctly.

I will corral the responses and keep the document under review up to
date. I'm away from my computer tomorrow and Friday so I hope that
will provide some time for some content to build up without me
injecting too many of my own opinions.

For a little more background: In the discussion last night I pointed
out that (at least some members of) the API-WG have some concerns
with that document but haven't yet had an opportunity to address
them.

In part the concerns came from the document's use as the letter of
the law in the discussions related to the glance visibility
changes[4]. We need to make sure that if we are wielding the
document in that fashion it is correct with regard to modern
concerns and properly sets the stage for why API compatibility is
important. I committed to starting a process to clear things up.

The first thing I noticed was that the guideline was last changed in
2015, its initial commit into the api-wg repo. The content was fully
based on wiki content that had had no substantial changes since 2012
and reflects some things that were normal then (like extensions)
which are not now. The second thing I noticed was that the document
doesn't really contextualize why API compatibility is important.

[0] Collaborative review to make new guideline
https://review.openstack.org/#/c/421846/

[1] #topic Introduce assert:supports-api-compatibility

http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-01-17-20.00.log.html#l-181

[2] assert:supports-api-compatbility review
https://review.openstack.org/#/c/418010/

[3] "Evaluating API Changes" API guideline

http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html

[4] Email thread about glance visibility with links elsewhere
http://lists.openstack.org/pipermail/openstack-dev/2017-January/109678.html
Related tempest test
https://review.openstack.org/#/c/414261/

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: 

[openstack-dev] [ironic] release timelines

2017-01-18 Thread Jim Rollenhagen
Hi all,

As you probably know, ironic is a cycle-with-intermediary project that
doesn't strictly follow the OpenStack feature freeze.

Ocata is finishing up and we need to define some deadlines/goals around a
final release. Please refer to the schedule here:
https://releases.openstack.org/ocata/schedule.html

We're thinking that we'll do a (soft) feature freeze during week R-3. This
means we will avoid merging feature patches unless we agree they are a
priority item and not too risky.

We'll likely shoot for R-2 for a final release of ironic. This is early
enough that we aren't rushing, but late enough that we have wiggle room if
something comes up (the real deadline for our final release is R-1).

Of course, we are playing this by ear, and will adjust as needed.
Questions/comments welcome :)

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [telemetry][ceilometer][panko] ceilometer event API removal

2017-01-18 Thread gordon chung
hi,

unfortunately i noticed this thread never made it to the operators list. 
there is a thread regarding the removal of Ceilometer Event API in 
Ocata[1]. this is only related to the storage and access of events. the 
generation of events remains in Ceilometer.

as a quick summary: the telemetry upstream team is small and does not 
have resources to maintain duplicate code so we removed the deprecated 
event code from Ceilometer and it is supported in Panko only now. we 
decided not to drag our feet with removal since the Event code has been 
idle for over a year now upstream and seems no one has made an effort to 
support it.

the current gaps we have between code removed from Ceilometer and Panko is:
- ceilometerclient support only works if the Ceilometer API is enabled (but 
support for a client-only redirect to Panko is in the merge queue)
- there was a single event tempest test in Ceilometer which was not 
ported to Panko. this item still remains

if you have any concerns, i highly suggest you provide feedback, 
specifically if you notice any other gaps. there is a revert patch[2] 
just in case there is a scenario that warrants us putting it back into 
Ocata but again, we are a super small team so it's best to provide some 
resources to actually maintain it if you'd like Events to continue on.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/109839.html

cheers,
-- 
gord
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [keystone] webob 1.7

2017-01-18 Thread Chuck Short
Hi

We have been experiencing problems with newer versions of webob (webob 1.7).
Reading the changelog, it seems that the upstream developers have
introduced some backwards incompatibilities with previous versions of webob
that seem to be hitting keystone and possibly other projects as well
(nova/glance in particular). For keystone this bug has been reported in bug
#1657452. I would just like to get more developers' eyes on this particular
issue and possibly get a fix. I suspect it's starting to hit other distros
as well, or already has.

Thanks
chuck
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][dpm] multiple nova-compute services on *one* host?

2017-01-18 Thread Markus Zoeller
TL;DR:
Is it advisable to run multiple nova-compute services within the same
operating system while each nova-compute service manages a different
(remote) hypervisor?

The longer version:
Co-workers and I are working on a new (out-of-tree [1]) driver for a
system z hypervisor [2]. A model of what we came up with looks like this:

  compute-node(hostname=hansel)
+-+
| ++ ++++ |
| |nova1.conf  | |nova2.conf  ||nova3.conf  | |
| |  host=foo  | |  host=bar  ||  host=baz  | |
| +-^--+ +-^--++-^--+ |
|   |  | ||
| +-+--+ +-+--++-+--+ |
| || |||| |
| |nova-compute| |nova-compute||nova-compute| |
| || |||| |
| ++ ++++ |
+-+
|  | |
|   +--v--+  |
+---> HMC <--+
+--+--+
   |
++
cpc1|  | | cpc2
+---+  +--+
|   |  ||  | ||
| +-v--+ +-v--+ |  |  +--v-+  |
| || || |  |  ||  |
| | cpc-subset | | cpc-subset | |  |  | cpc-subset |  |
| | name=foo   | | name=bar   | |  |  | name=baz   |  |
| || || |  |  ||  |
| ++ ++ |  |  ++  |
|   |  |  |
+---+  +--+

The hypervisor itself is running inside a CPC [3]. All communication
with these hypervisors need to go via the REST API of a so-called "HMC".
A cpc-subset is a logical constraint of the overall available resources
inside a CPC. That's where the Nova instances will live in, as so-called
"partitions".

This sub-setting means there is no 1-to-1 relationship of nova-compute
service to host/hypervisor anymore. We already tested that this works
in a small testing environment.

The diagram above shows that we configured the `host` config option
with a value which is *not* related to the hostname of the compute node
(nor to its IP address or FQDN).
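
Concretely, that just means one config file per nova-compute process, each with
its own `host` value, started with oslo.config's standard --config-file option.
A minimal sketch of the first one from the diagram (paths and names are only
illustrative):

    # /etc/nova/nova1.conf (nova2.conf/nova3.conf look the same, with host=bar/baz)
    [DEFAULT]
    host = foo

    # started as:
    #   nova-compute --config-file /etc/nova/nova1.conf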

The docs of the config option `[DEFAULT].host` make me believe this is
*not* valid, as they say:

"Hostname, FQDN or IP address of this host. Must be valid within
AMQP key."

The first sentence is the one which raised doubts about whether our model is
valid. The second sentence "weakens" the first one a little. A valid
AMQP name could also be totally different from a hostname, FQDN or IP
address.

The functional tests (e.g. [4]) on the other hand, make me believe our
model is a valid one, as the tests have code like this:

self.start_service('compute', host='fake-host')
self.start_service('compute', host='fake-host2')

Also, the developer docs [5] say:

"The one major exception is nova-compute, where a single process
runs on the hypervisor it is managing (except when using the VMware
or Ironic drivers)."

Our model is close to the one of Ironic IMO.

We also thought about using one single nova-compute service for
all CPCs. We rejected that idea, as we came to the conclusion that this
would be a single point of failure which is also complicated to
configure. The interaction with the neutron networking agent was also
not straightforward.

Long story short, are we bending the rules here or did I overlook code
usages of `[DEFAULT].host` where it *has to be* a network related
attribute like IP address / FQDN / hostname?

References:
[1] https://github.com/openstack/nova-dpm
[2] https://blueprints.launchpad.net/nova/+spec/dpm-driver
[3]
https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_mfhwterms.htm
[4]
https://github.com/openstack/nova/blob/bcbfee183e74f696085fcd5c18aff333fc5f1403/nova/tests/unit/conductor/test_conductor.py#L1468-L1469
[5] http://docs.openstack.org/developer/nova/architecture.html


-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [ironic] [infra] Nested KVM + the gate

2017-01-18 Thread Amrith Kumar
Jay,

 

This is the Trove commit: I85364c6530058e964a8eba7fb515d7deadfd5d72

-amrith

From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com] 
Sent: Wednesday, January 18, 2017 7:57 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] [infra] Nested KVM + the gate

 

On Tue, Jan 17, 2017 at 6:41 PM, Jay Faulkner wrote:

Hi all,

Back in late October, Vasyl wrote support for devstack to auto detect, and when 
possible, use kvm to power Ironic gate jobs 
(0036d83b330d98e64d656b156001dd2209ab1903). This has lowered some job time when 
it works, but has caused failures — how many? It’s hard to quantify as the log 
messages that show the error don’t appear to be indexed by elastic search. It’s 
something seen often enough that the issue has become a permanent staple on our 
gate whiteboard, and doesn’t appear to be decreasing in quantity.

I pushed up a patch, https://review.openstack.org/#/c/421581, which keeps the 
auto detection behavior, but defaults devstack to use qemu emulation instead of 
kvm.

I have two questions:
1) Is there any way I’m not aware of we can quantify the number of failures 
this is causing? The key log message, "KVM: entry failed, hardware error 0x0”, 
shows up in logs/libvirt/qemu/node-*.txt.gz.
2) Are these failures avoidable or visible in any way?

IMO, if we can’t fix these failures, in my opinion, we have to do a change to 
avoid using nested KVM altogether. Lower reliability for our jobs is not worth 
a small decrease in job run time.

 

+2, especially this late in the cycle, we need our CI to be rock solid.

// jim

 

 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [infra] Nested KVM + the gate

2017-01-18 Thread Jim Rollenhagen
On Tue, Jan 17, 2017 at 6:41 PM, Jay Faulkner  wrote:

> Hi all,
>
> Back in late October, Vasyl wrote support for devstack to auto detect, and
> when possible, use kvm to power Ironic gate jobs (
> 0036d83b330d98e64d656b156001dd2209ab1903). This has lowered some job time
> when it works, but has caused failures — how many? It’s hard to quantify as
> the log messages that show the error don’t appear to be indexed by elastic
> search. It’s something seen often enough that the issue has become a
> permanent staple on our gate whiteboard, and doesn’t appear to be
> decreasing in quantity.
>
> I pushed up a patch, https://review.openstack.org/#/c/421581, which keeps
> the auto detection behavior, but defaults devstack to use qemu emulation
> instead of kvm.
>
> I have two questions:
> 1) Is there any way I’m not aware of we can quantify the number of
> failures this is causing? The key log message, "KVM: entry failed, hardware
> error 0x0”, shows up in logs/libvirt/qemu/node-*.txt.gz.
> 2) Are these failures avoidable or visible in any way?
>
> IMO, if we can’t fix these failures, in my opinion, we have to do a change
> to avoid using nested KVM altogether. Lower reliability for our jobs is not
> worth a small decrease in job run time.
>

+2, especially this late in the cycle, we need our CI to be rock solid.

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Mistral][Ansible] Calling Ansible from Mistral workflows

2017-01-18 Thread Renat Akhmerov
Ok to both answers.

Renat Akhmerov
@Nokia

> On 18 Jan 2017, at 16:37, Dougal Matthews  wrote:
> 
> 
> 
> On 17 January 2017 at 03:28, Renat Akhmerov wrote:
> Dougal, I looked at the source code. Seems like it’s already usable enough.
> Do you think we need to put a section about Ansible actions into Mistral docs?
> I’m also thinking if we need to move this code into the mistral repo or leave 
> it on github.
> 
> I'm happy for it to live on its own; it will give me a chance to test it out 
> and get feedback. Maybe if it proves useful and stabilises it can be moved 
> into Mistral. At that point I would want to document it.
> 
> 
> Maybe a better time for moving it under Mistral umbrella will be when we 
> finish our actions
> refactoring activity (when actions are moved into a separate repo, e.g. 
> mistral-extra).
> 
> Yup, that seems reasonable. I'm not convinced it ever needs to be moved into 
> Mistral - it would be good if we grew into a larger ecosystem of actions that 
> can be installed, they don't all need to be included with Mistral itself.
> 
> 
> Thoughts?
> 
> Renat Akhmerov
> @Nokia
> 
>> On 12 Jan 2017, at 22:27, Dougal Matthews wrote:
>> 
>> Hey all,
>> 
>> I just wanted to share a quick experiment that I tried out. I had heard 
>> there was some interest in native Ansible actions for Mistral. After much 
>> dragging my heels I decided to give it a go, and it turns out to be very 
>> easy.
>> 
>> This code is very raw and has only been lightly tested - I just wanted to 
>> make sure it was going in the right direction and see what everyone thought.
>> 
>> I wont duplicate it all again here, but you can see the details on either 
>> GitHub or a quick blog post that I put together.
>> 
>> https://github.com/d0ugal/mistral-ansible-actions 
>> 
>> http://www.dougalmatthews.com/2017/Jan/12/calling-ansible-from-mistral-workflows/
>>  
>> 
>> 
>> Cheers,
>> Dougal
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ui] FYI, the tripleo-ui package is currently broken

2017-01-18 Thread Julie Pichon
Hi all,

I'm sorry to report we're finding ourselves in the same situation
again - CI will fail on all the UI patches, please don't recheck until
we have a new dependencies package available.

On the plus side, with the help of amoralej on #rdo we figured out why
this is happening: the tripleo-ui rpm used in CI is being built from
the master branch, instead of using the patch under review. So,
instead of happening on the patch itself the CI failures only happen
after it merges. I filed [1] to track this. Any pointer from folks
familiar with TripleO CI as to where we might want to poke to resolve
this is appreciated :)

Thank you.

Julie

[1] https://bugs.launchpad.net/tripleo/+bug/1657416

On 10 January 2017 at 16:27, Julie Pichon  wrote:
> On 9 January 2017 at 13:20, Julie Pichon  wrote:
>> On 6 January 2017 at 14:52, Julie Pichon  wrote:
>>> Hi folks,
>>>
>>> Just a heads-up that the DLRN "current"/dev package for the Tripleo UI
>>> is broken in Ocata and will cause the UI to only show a blank page,
>>> until we resolve some dependencies issues within the -deps package.
>>>
>>> If I understand correctly, we ended up with an incomplete package
>>> because we were silently ignoring errors during builds [1] - many
>>> thanks to Honza for the debugging work, and the patch!!
>>
>> The good news: the 'stop on error' patch merged, meaning we will catch
>> such errors early in the future, and won't be able to merge patches
>> until the dependencies are properly sorted out. A backport was also
>> proposed at [1].
>>
>> The bad news: because currently we're already in a "missing
>> dependencies" state due to patches that merged with silent errors and
>> the older -deps package, no patch can merge on tripleo-ui until the
>> -deps package gets updated. I'm not sure about the ETA for the new
>> -deps package but the good folks on #rdo are looking into it (see
>> also [2]).
>
> Hi all,
>
> The -deps package has been sorted out, so the CI jobs for tripleo-ui
> are passing again. Feel free to recheck away! There is a patch going
> through the gate as well [1], once that's merged I expect a new
> tripleo-ui package will be available at [2] and updating your local dev
> repos to the latest dlrn to get it should be sufficient to have a
> working UI again.
>
> Thank you for your patience, and many many thanks to apevec, honza and
> number80 for resolving this!
>
> Julie
>
> [1] https://review.openstack.org/#/c/416261/
> [2] http://trunk.rdoproject.org/centos7/current/
>
>> Thanks,
>>
>> Julie
>>
>> [1] https://review.openstack.org/#/c/417866/
>> [2] https://review.rdoproject.org/r/#/c/4215/
>>
>>> In the meantime, if you want to work with the UI package you should
>>> get a version built before December 19th, e.g. [2], or you're probably
>>> better off using the UI from source for the time being [3].
>>>
>>> I'll update this thread when this is resolved.
>>>
>>> Thanks,
>>>
>>> Julie
>>>
>>> [1] https://bugs.launchpad.net/tripleo/+bug/1654051
>>> [2] 
>>> https://trunk.rdoproject.org/centos7-master/04/15/0415ee80b5c8354124290ac933a34823f2567800_c211fbe8/openstack-tripleo-ui-2.0.0-0.20161212153814.2dfbb0b.el7.centos.noarch.rpm
>>> [3] 
>>> https://github.com/openstack/tripleo-ui/blob/master/README.md#install-tripleo-ui

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-18 Thread Jens Rosenboom
2017-01-13 17:56 GMT+01:00 Clark Boylan :
> On Fri, Jan 13, 2017, at 07:48 AM, Jakub Libosvar wrote:
>> Does anybody know whether we can bump memory on nodes in the gate
>> without losing resources for running other jobs?
>> Has anybody experience with memory consumption being higher when using
>> linux bridge agents?
>>
>> Any other ideas?
>
> Ideally I think we would see more work to reduce memory consumption.
> Heat has been able to more than halve their memory usage recently [0].
> Perhaps start by identifying the biggest memory hogs and go from there?
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2017-January/109748.html

In order to have some real data, I've run reproduce.sh for a random
full tempest check and aggregated the memory usage from ps output
during the tempest run [1].
To me it looks like the times of 2G are long gone, Nova is using
almost 2G all by itself. And 8G may be getting tight if additional
stuff like Ceph is being added.
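
For anyone who wants to reproduce this kind of aggregation, a rough Python
sketch (summing resident set size per command name; note that RSS double-counts
pages shared between workers, so treat it as an upper bound) looks like:

    import subprocess
    from collections import defaultdict

    totals = defaultdict(int)
    out = subprocess.check_output(["ps", "-eo", "rss=,comm="]).decode()
    for line in out.splitlines():
        rss_kib, comm = line.split(None, 1)
        totals[comm.strip()] += int(rss_kib)

    for comm, kib in sorted(totals.items(), key=lambda kv: -kv[1])[:15]:
        print("%8.1f MiB  %s" % (kib / 1024.0, comm))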

As a side note, we are seeing consistent failures for the Chef
OpenStack Cookbook integration tests on infra. We have set up an
external CI now running on 12G instances and are getting successful
results there. [2]

[1] http://paste.openstack.org/show/595348/
[2] https://review.openstack.org/409900

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Project Virtual Gathering (PVG)

2017-01-18 Thread Antoni Segura Puimedon
Hi Kuryrs!

Due to traveling restrictions, we opted not to take part in the Atlanta
PTG. However, the design work sessions won't disappear :-)

Please go through its etherpad[0] and propose, vote, comment about
sessions, format and scheduling.

Regards,

Toni

[0] https://etherpad.openstack.org/p/kuryr_virtual_gathering_2017h1
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem in devstack install - No Network found for private

2017-01-18 Thread nidhi.hada
Hi Andreas,


As you suggested, I tried the default devstack neutron config params:
I set no config options for the neutron part, all defaults.

This local.conf is working well.

For others who are facing this problem, the working local.conf is pasted here:

http://paste.openstack.org/show/595339/

Attaching it too.


Thanks

Nidhi






From: Nidhi Mittal Hada (Product Engineering Service)
Sent: Wednesday, January 18, 2017 2:44 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private


Andreas,


I require nothing specific from the neutron side, just a basic working
setup, because my work is mostly on the storage side of OpenStack.

Can you please suggest a working configuration, if you have tried one
recently.


Thanks

nidhi



From: Nidhi Mittal Hada (Product Engineering Service)
Sent: Wednesday, January 18, 2017 2:35:13 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private


HI Andreas,


Thanks for your reply.


I have no specific reason for using this network configuration in local.conf;
I only have basic knowledge of these config options.

This local.conf network configuration used to work well with earlier
devstack/OpenStack versions, so I did not change it. Just this time it is
creating trouble.

I have not created any OVS bridge manually before running devstack. I just
created this local.conf and ran ./stack.sh in the devstack folder.

Can you please suggest changes, given that I have not created an OVS bridge
manually.


At present my settings are - from local.conf - for reference -

FIXED_RANGE=10.11.12.0/24
NETWORK_GATEWAY=10.11.12.1
FIXED_NETWORK_SIZE=256

FLOATING_RANGE=10.0.2.0/24
Q_FLOATING_ALLOCATION_POOL=start=10.0.2.104,end=10.0.2.111
PUBLIC_NETWORK_GATEWAY=10.0.2.1
HOST_IP=10.0.2.15

PUBLIC_INTERFACE=eth0


Q_USE_SECGROUP=True
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999
PHYSICAL_NETWORK=default
OVS_PHYSICAL_BRIDGE=br-ex


Q_USE_PROVIDER_NETWORKING=True
Q_L3_ENABLED=False

PROVIDER_SUBNET_NAME="provider_net"
PROVIDER_NETWORK_TYPE="vlan"
SEGMENTATION_ID=2010






Thanks

Nidhi




From: Andreas Scheuring 
Sent: Wednesday, January 18, 2017 1:09:17 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private

** This mail has been sent from an external source **

Without looking into the details

you're specifying
Q_USE_PROVIDER_NETWORKING=True
in your local.conf - usually this results in the creation of a single
provider network called "public". But the manila devstack plugin seems
not to be able to deal with provider networks as it's expecting a
network named "private" to be present.


Why are you using provider networks? Just for the sake of VLANs? You can
also configure devstack to use vlans with the default setup. This has
worked for me in the past - it results in a private network using vlans
(assuming you have created the ovs bridge br-data manually):


OVS_PHYSICAL_BRIDGE=br-data
PHYSICAL_NETWORK=phys-data

ENABLE_TENANT_TUNNELS=False
Q_ML2_TENANT_NETWORK_TYPE=vlan
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1000




--
-
Andreas
IRC: andreas_s



On Mi, 2017-01-18 at 06:59 +, nidhi.h...@wipro.com wrote:
> Hi All,
>
>
> I was trying to install latest Newton version of OpenStack using
> devstack on my laptop, all in one machine,
>
> using Virtualbox VM. Lately i have been facing same problem in last
> few tries and installation doesn't get successful.
>
>
> My VM network adapter configuration is as below.
>
>
> Adapter1  [screenshot omitted]
>
> and 2nd adapter is as
>
> Adapter2  [screenshot omitted]
>
> Thats detail of Host Only Networking  [screenshots omitted]
>
>
> Thats my local.conf for devstack
>
>
>
> http://paste.openstack.org/show/595313/
>
>
>
>
> excerpt is
>
> FIXED_RANGE=10.11.12.0/24
>
>
> NETWORK_GATEWAY=10.11.12.1
> FIXED_NETWORK_SIZE=256
>
>
> FLOATING_RANGE=10.0.2.0/24
> Q_FLOATING_ALLOCATION_POOL=start=10.0.2.104,end=10.0.2.111
> PUBLIC_NETWORK_GATEWAY=10.0.2.1
> HOST_IP=10.0.2.15
>
>
> PUBLIC_INTERFACE=eth0
>
>
>
> Thats ubuntu version on VM
> stack@ubuntu:~/devstack$ lsb_release -d
> Description: Ubuntu 14.04.5 LTS
> stack@ubuntu:~/devstack$
>
>
> Thats my machine's network interfaces file
>
>
> stack@ubuntu:~/devstack$ cat /etc/network/interfaces
>
>
> # This file describes the network interfaces available on your system
> # and how to activate them. For more information, see interfaces(5).
>
>
> # The loopback network interface
> auto lo
> iface lo inet loopback
>
>
> # The primary network interface
> auto eth1
> iface eth1 inet static
> address 192.168.56.150
> netmask 255.255.255.0
>
>
> auto 

Re: [openstack-dev] [Vitrage] About alarms reported by datasource and the alarms generated by vitrage evaluator

2017-01-18 Thread Afek, Ifat (Nokia - IL)


From: Yujun Zhang 
Date: Tuesday, 17 January 2017 at 02:41


Sounds good.

Have you created an etherpad page for collecting topics, Ifat?

Here: https://etherpad.openstack.org/p/vitrage-pike-design-sessions


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Mistral][Ansible] Calling Ansible from Mistral workflows

2017-01-18 Thread Dougal Matthews
On 17 January 2017 at 03:28, Renat Akhmerov 
wrote:

> Dougal, I looked at the source code. Seems like it’s already usable enough.
> Do you think we need to put a section about Ansible actions into Mistral
> docs?
> I’m also thinking if we need to move this code into the mistral repo or
> leave it on github.
>

I'm happy for it to live on its own; it will give me a chance to test it
out and get feedback. Maybe if it proves useful and stabilises it can be
moved into Mistral. At that point I would want to document it.


Maybe a better time for moving it under Mistral umbrella will be when we
> finish our actions
> refactoring activity (when actions are moved into a separate repo, e.g.
> mistral-extra).
>

Yup, that seems reasonable. I'm not convinced it ever needs to be moved
into Mistral - it would be good if we grew into a larger ecosystem of
actions that can be installed, they don't all need to be included with
Mistral itself.
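
For what it's worth, separately installed action packs already work through the
usual mechanism: the package exposes its action classes under the
mistral.actions entry-point namespace and the operator runs mistral-db-manage
populate to register them. A rough sketch of the setup.cfg side (the module and
class names here are made up for illustration; check the project README for the
real ones):

    [entry_points]
    mistral.actions =
        ansible = mistral_ansible_actions:AnsibleAction
        ansible-playbook = mistral_ansible_actions:AnsiblePlaybookAction

    # then, on the Mistral node:
    #   mistral-db-manage --config-file /etc/mistral/mistral.conf populate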


> Thoughts?
>
> Renat Akhmerov
> @Nokia
>
> On 12 Jan 2017, at 22:27, Dougal Matthews  wrote:
>
> Hey all,
>
> I just wanted to share a quick experiment that I tried out. I had heard
> there was some interest in native Ansible actions for Mistral. After much
> dragging my heels I decided to give it a go, and it turns out to be very
> easy.
>
> This code is very raw and has only been lightly tested - I just wanted to
> make sure it was going in the right direction and see what everyone thought.
>
> I wont duplicate it all again here, but you can see the details on either
> GitHub or a quick blog post that I put together.
>
> https://github.com/d0ugal/mistral-ansible-actions
> http://www.dougalmatthews.com/2017/Jan/12/calling-ansible-
> from-mistral-workflows/
>
> Cheers,
> Dougal
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Attempting to proxy websockets through Apache or HAProxy for Zaqar

2017-01-18 Thread Thomas Herve
On Tue, Jan 17, 2017 at 6:23 PM, Dan Trainor  wrote:
> Hi -
>
> In an attempt to work on [0], I've been playing around with proxying all the
> service API endpoints that the UI needs to communicate with, through either
> haproxy or Apache to avoid a bug[1] around how non-Chrome browsers handle
> SSL connections to different ports on the same domain.
>
> The blueprint suggests using haproxy for this, but we're currently using the
> "old" notation of listen/server, not frontend/backend.  The distinction is
> important because the ACLs that would allow any kind of proxying to
> facilitate this are only available in the latter notation.  In order to do
> this in haproxy, tripleo::haproxy would need a rewrite (looks pretty
> trivial, but likely out of scope for this).  So I'd really like to isolate
> this to UI, which is convenient since UI runs largely self-contained inside
> Apache.
>
> I've made some good progress with most all of the services, since they were
> pretty straight-forward - mod_proxy handles them just fine.  The one I'm not
> able to make work right now is the websocket service that UI uses.
> Ultimately, I see the Websocket connection get upgraded and the Websocket
> opens, but stays open indefinitely and will never see more than 0 bytes.  No
> data is transferred from the browser over the Websocket.  This connection
> hangs indefinitely, and UI does not complete any operations that depend on
> the Zaqar Websocket.
>
> Observing trace6[4] output, I can see mod_proxy_wstunnel (which relies on
> mod_proxy) make the connection, I can see Zaqar recognize the request in
> logs, the client (UI) doesn't send or receive any data from it.  It's as if
> immediately after the Upgrade[2], the persistent Websocket connection just
> dies.
>
> I've had limited success using a couple different implementations of this in
> Apache.  ProxyPass/ProxyPassReverse looks as if it should work (so long as
> mod_proxy_wstunnel is loaded), but this is not my experience.  Using a
> mod_rewrite rule[3] to force the specific Websocket proxy for a specific URI
> (/zaqar) has the same outcome.
>
> In its most simple form, the ProxyPass rule I'm attempting to use is:
>
>   ProxyPass "/zaqar""ws://192.0.2.1:9000/"
>   ProxyPassReverse  "/zaqar""ws://192.0.2.1:9000/"

I tried that configuration, and it worked fine with a python websocket
client. I created a queue, a subscription, and was able to retrieve
notifications properly. I used Apache 2.4.23.
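
In case it helps with debugging, a rough sketch of such a client using the
websocket-client package looks like this (the URL, project ID and TTL are
placeholders; with Keystone auth enabled you would first send an "authenticate"
action carrying an X-Auth-Token header):

    import json
    import uuid

    import websocket  # pip install websocket-client

    ws = websocket.create_connection("ws://192.0.2.1/zaqar")  # the proxied URL
    headers = {"Client-ID": str(uuid.uuid4()), "X-Project-ID": "admin"}

    ws.send(json.dumps({"action": "queue_create",
                        "headers": headers,
                        "body": {"queue_name": "test"}}))
    print(ws.recv())

    ws.send(json.dumps({"action": "subscription_create",
                        "headers": headers,
                        "body": {"queue_name": "test", "ttl": 600}}))
    print(ws.recv())
    ws.close()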

Is it possible that you're having a CORS issue?

> Here's Zaqar's Websocket transport answering the request, creating both a
> queue and a subscription but no data after that:

What do you mean by "no data after that"? What kind of data are you
expecting after? Just getting those 2 messages seems to indicate that
it works fine to me. Are you getting timeouts, unexpected closed
connections?

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >