Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-23 Thread Renat Akhmerov
Ok, thanks. That looks clearer now.

Renat Akhmerov
@Nokia

> On 24 Jan 2017, at 14:15, lương hữu tuấn  wrote:
> 
> Hi Renat,
> 
> In short, it is the expression: output: <% $.data %>
> 
> I would like to post the workflow too, since it would make it easier to 
> understand the whole picture (IMHO :)). In this case, the data is quite 
> big, around 2MB AFAIK. Therefore I would just like to know more 
> about the performance of YAQL (if we have such data); I myself do not judge 
> YAQL in this case.
> 
> Br,
> 
> Tuan
> 
> On Tue, Jan 24, 2017 at 6:09 AM, Renat Akhmerov wrote:
> While I’m in the loop regarding how this workflow works, others may not be. 
> Could you please just post your expression and the data that you use to evaluate this 
> expression? And times. The workflow itself has nothing to do with what we’re 
> discussing.
> 
> Renat Akhmerov
> @Nokia
> 
>> On 23 Jan 2017, at 21:44, lương hữu tuấn wrote:
>> 
>> Hi guys,
>> 
>> I am providing some information about the results of testing YAQL performance 
>> on my devstack stable/newton with 6GB of RAM. The workflow I created is 
>> below:
>> 
>> #
>> input:
>>   - size
>>   - number_of_handovers
>>
>> tasks:
>>   generate_input:
>>     action: std.javascript
>>     input:
>>       context:
>>         size: <% $.size %>
>>       script: |
>>         result = {}
>>         for(i=0; i < $.size; i++) {
>>           result["key_" + i] = {
>>             "alma": "korte"
>>           }
>>         }
>>         return result
>>     publish:
>>       data: <% task(generate_input).result %>
>>     on-success:
>>       - process
>>
>>   process:
>>     action: std.echo
>>     input:
>>       output: <% $.data %>
>>     publish:
>>       data: <% task(process).result %>
>>       number_of_handovers: <% $.number_of_handovers - 1 %>
>>     on-success:
>>       - process: <% $.number_of_handovers > 0 %>
>>
>> ##
>> 
>> I tested with size = 1 and number_of_handovers = 50. The results 
>> show that the time for evaluating <% $.data %> is quite long. I do not 
>> know whether this time is acceptable, but note that in our use case the value of 
>> $.data could be quite large. A couple of log entries are below:
>> 
>> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function 
>> evaluate finished in 11262.710 ms
>> 
>> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function 
>> evaluate finished in 8146.324 ms
>> 
>> ..
>> 
>> The average is around 10 s per evaluation.
>> 
>> Br,
>> 
>> Tuan
>> 
>> 
>> On Mon, Jan 23, 2017 at 11:48 AM, lương hữu tuấn wrote:
>> Hi Renat,
>> 
>> For more details, I will check on the CBAM machine and hope it has not been 
>> deleted yet, since we ran the test around a week ago.
>> Another thing: Jinja2 ran 2-3 times faster than YAQL on the same 
>> test. I will also provide more information later.
>> 
>> Br,
>> 
>> Tuan
>> 
>> On Mon, Jan 23, 2017 at 8:32 AM, Renat Akhmerov wrote:
>> Tuan,
>> 
>> I don’t think that Jinja is something that Kirill is responsible for. It’s 
>> just a coincidence that we in Mistral support both YAQL and Jinja. The 
>> latter has been requested by many people so we finally did it.
>> 
>> As far as performance goes, could you please provide some numbers? When you say 
>> “takes a lot of time”, how much time is it? For what kind of input? Why do 
>> you think it is slow? What are your expectations? Provide as much info as 
>> possible. After that we can ask the YAQL authors to comment and help if we 
>> realize that the problem really exists.
>> 
>> I’m interested in this too since I’m always looking for ways to speed 
>> Mistral up.
>> 
>> Thanks
>> 
>> Renat Akhmerov
>> @Nokia
>> 
>>> On 18 Jan 2017, at 16:25, lương hữu tuấn wrote:
>>> 
>>> Hi Kirill,
>>> 
>>> Do you have any information related to the performance of Jinja and YAQL 
>>> validation? With big input sizes, YAQL runs quite slowly in our case, 
>>> so we plan to switch to Jinja.
>>> 
>>> Br,
>>> 
>>> @Nokia/Tuan
>>> 
>>> On Tue, Jan 17, 2017 at 3:02 PM, lương hữu tuấn wrote:
>>> Hi Kirill,
>>> 
>>> Thank you for your information. I hope we will get more information about 
>>> it. Just keep in touch when you guys at Mirantis have some performance 
>>> results for YAQL.
>>> 
>>> Br,
>>> 
>>> @Nokia/Tuan 
>>> 
>>> On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev 

[openstack-dev] [storlets][ptl] PTL candidacy

2017-01-23 Thread Eran Rom

Hi All,

I have been leading the Storlets project from its infancy days as
a research project in IBM to its infancy days as a big-tent project :-)
This would not have been possible without the small yet top-notch and seasoned
group of developers in our community.

There is still very much I would like to do:
reach out to more users and hence to more developers;
expand our use-case portfolio by developing a
rich ecosystem;
continuously work on the project's maturity so that
it can be picked up by deployers; and, last but not least,
enjoy the spirit of open source while at it.

I believe that I can help drive the project to achieve all these
goals, and would be very happy to serve as the project's first PTL.

Thanks!
Eran




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Setting up another compute node

2017-01-23 Thread Artem Plakunov
Is there anything about the error in /var/log/neutron/server.log on the 
controller node?
Also verify that you've correctly set up the config file 
/etc/neutron/plugins/ml2/openvswitch_agent.ini on the compute node. Is there 
any difference in this file from your other, working compute nodes?
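For comparison, a minimal ML2/OVS agent configuration on a compute node typically looks like the sketch below. This is purely illustrative — the interface names, the local IP, the bridge mapping, and the firewall driver are assumptions, not values from this deployment; what matters is that these settings match your working compute nodes:

```ini
# /etc/neutron/plugins/ml2/openvswitch_agent.ini -- illustrative sketch only
[ovs]
# Must be a local IP of *this* compute node, reachable from the other nodes.
local_ip = 10.0.0.31
# Only needed for flat/VLAN provider networks; the bridge must exist in OVS.
bridge_mappings = provider:br-provider

[agent]
# Must match the tunnel types configured on the controller side.
tunnel_types = vxlan
l2_population = true

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```

A mismatch here (for example, a wrong or unreachable local_ip, or a missing tunnel type) is a common cause of PortBindingFailed, since the server cannot find an agent on the host that can bind the port.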


On 23.01.2017 23:32, Peter Kirby wrote:
I agree.  But I can't figure out why the port isn't getting created.  
Those lines are the only ones that show up in neutron logs.


Here's what shows up in the nova logs:

Jan 23 14:09:21 vhost2 nova-compute[8936]: Traceback (most recent call last):
Jan 23 14:09:21 vhost2 nova-compute[8936]:   File "/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 115, in wait
Jan 23 14:09:21 vhost2 nova-compute[8936]:     listener.cb(fileno)
Jan 23 14:09:21 vhost2 nova-compute[8936]:   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main
Jan 23 14:09:21 vhost2 nova-compute[8936]:     result = function(*args, **kwargs)
Jan 23 14:09:21 vhost2 nova-compute[8936]:   File "/usr/lib/python2.7/site-packages/nova/utils.py", line 1159, in context_wrapper
Jan 23 14:09:21 vhost2 nova-compute[8936]:     return func(*args, **kwargs)
Jan 23 14:09:21 vhost2 nova-compute[8936]:   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1587, in _allocate_network_async
Jan 23 14:09:21 vhost2 nova-compute[8936]:     six.reraise(*exc_info)
Jan 23 14:09:21 vhost2 nova-compute[8936]:   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1570, in _allocate_network_async
Jan 23 14:09:21 vhost2 nova-compute[8936]:     bind_host_id=bind_host_id)
Jan 23 14:09:21 vhost2 nova-compute[8936]:   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 685, in allocate_for_instance
Jan 23 14:09:21 vhost2 nova-compute[8936]:     self._delete_ports(neutron, instance, created_port_ids)
Jan 23 14:09:21 vhost2 nova-compute[8936]:   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Jan 23 14:09:21 vhost2 nova-compute[8936]:     self.force_reraise()
Jan 23 14:09:21 vhost2 nova-compute[8936]:   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
Jan 23 14:09:21 vhost2 nova-compute[8936]:     six.reraise(self.type_, self.value, self.tb)
Jan 23 14:09:21 vhost2 nova-compute[8936]:   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 674, in allocate_for_instance
Jan 23 14:09:21 vhost2 nova-compute[8936]:     security_group_ids, available_macs, dhcp_opts)
Jan 23 14:09:21 vhost2 nova-compute[8936]:   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 261, in _create_port
Jan 23 14:09:21 vhost2 nova-compute[8936]:     raise exception.PortBindingFailed(port_id=port_id)
Jan 23 14:09:21 vhost2 nova-compute[8936]: PortBindingFailed: Binding failed for port e1058d22-9a7b-4988-9644-d0f476a01015, please check neutron logs for more information.
Jan 23 14:09:21 vhost2 nova-compute[8936]: Removing descriptor: 21


Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer Plus 


peter.ki...@objectstream.com 

*Objectstream, Inc. *
Office: 405-942-4477  / Fax: 866-814-0174 


7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 2:21 PM, Trinath Somanchi wrote:


The port doesn't exist at all.

Port e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int

Get Outlook for iOS 


*From:* Peter Kirby
*Sent:* Tuesday, January 24, 2017 1:43:36 AM

*To:* Trinath Somanchi
*Cc:* OpenStack
*Subject:* Re: [Openstack] Setting up another compute node
I just did another attempt at this so I'd have fresh logs.

These are all the lines produced in the neutron
openvswitch-agent.log file when I attempt that previous command.

2017-01-23 14:09:20.918 8097 INFO neutron.agent.securitygroups_rpc
[req-a9ab1e05-cf41-44ce-8762-d7f0f72e7ba3
582643be48c04603a09250a1be6e6cf3 1dd7b6481aa34ef7ba105a7336845369
- - -] Security group member updated
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:21.132 8097 INFO neutron.agent.securitygroups_rpc
[req-b8cc3ab8-d4f3-4c96-820d-148ae6fd47af
582643be48c04603a09250a1be6e6cf3 1dd7b6481aa34ef7ba105a7336845369
- - -] Security group member updated
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:22.057 8097 INFO neutron.agent.common.ovs_lib
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Port
e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int
2017-01-23 14:09:22.058 8097 INFO

Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-23 Thread lương hữu tuấn
Hi Renat,

In short, it is the expression: output: <% $.data %>

I would like to post the workflow too, since it would make it easier to
understand the whole picture (IMHO :)). In this case, the data is quite
big, around 2MB AFAIK. Therefore I would just like to know more
about the performance of YAQL (if we have such data); I myself do not
judge YAQL in this case.

Br,

Tuan
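
To make the numbers easier to reason about outside of Mistral, here is a small self-contained sketch — plain Python, not yaql itself — that builds the same kind of payload as the generate_input task and times a full walk/convert of it, which is roughly the work an expression evaluator has to repeat for every <% $.data %> evaluation. The entry count of 60000 is an assumption chosen only to get a payload in the ~2MB range mentioned above:

```python
import json
import timeit

def generate_input(size):
    # Same shape as the std.javascript task in the workflow under discussion.
    return {"key_%d" % i: {"alma": "korte"} for i in range(size)}

def evaluate(data):
    # Stand-in for one <% $.data %> evaluation: the evaluator has to
    # walk and convert the entire published structure every time.
    return json.loads(json.dumps(data))

data = generate_input(60000)  # assumption: sized to reach roughly 2MB serialized
payload_mb = len(json.dumps(data)) / (1024.0 * 1024.0)
per_eval_s = timeit.timeit(lambda: evaluate(data), number=10) / 10
print("payload: %.2f MB, one evaluation: %.4f s" % (payload_mb, per_eval_s))
```

The cost grows linearly with the payload, and with 50 handovers it is paid on every pass — consistent with the per-evaluation times reported in this thread being dominated by data size rather than by expression complexity.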

On Tue, Jan 24, 2017 at 6:09 AM, Renat Akhmerov 
wrote:

> While I’m in the loop regarding how this workflow works, others may not be.
> Could you please just post your expression and the data that you use to evaluate
> this expression? And times. The workflow itself has nothing to do with what
> we’re discussing.
>
> Renat Akhmerov
> @Nokia
>
> On 23 Jan 2017, at 21:44, lương hữu tuấn  wrote:
>
> Hi guys,
>
> I am providing some information about the results of testing YAQL performance
> on my devstack stable/newton with 6GB of RAM. The workflow I created is
> below:
>
> #
> input:
>   - size
>   - number_of_handovers
>
> tasks:
>   generate_input:
>     action: std.javascript
>     input:
>       context:
>         size: <% $.size %>
>       script: |
>         result = {}
>         for(i=0; i < $.size; i++) {
>           result["key_" + i] = {
>             "alma": "korte"
>           }
>         }
>         return result
>     publish:
>       data: <% task(generate_input).result %>
>     on-success:
>       - process
>
>   process:
>     action: std.echo
>     input:
>       output: <% $.data %>
>     publish:
>       data: <% task(process).result %>
>       number_of_handovers: <% $.number_of_handovers - 1 %>
>     on-success:
>       - process: <% $.number_of_handovers > 0 %>
>
> ##
>
> I tested with size = 1 and number_of_handovers = 50. The results show that
> the time for evaluating <% $.data %> is quite long. I do not
> know whether this time is acceptable, but note that in our use case the value of
> $.data could be quite large. A couple of log entries are below:
>
> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]
>  Function evaluate finished in 11262.710 ms
>
> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]
>  Function evaluate finished in 8146.324 ms
>
> ..
>
> The average is around 10 s per evaluation.
>
> Br,
>
> Tuan
>
>
> On Mon, Jan 23, 2017 at 11:48 AM, lương hữu tuấn 
> wrote:
>
>> Hi Renat,
>>
>> For more details, I will check on the CBAM machine and hope it has not
>> been deleted yet, since we ran the test around a week ago.
>> Another thing: Jinja2 ran 2-3 times faster than YAQL on the same test.
>> I will also provide more information later.
>>
>> Br,
>>
>> Tuan
>>
>> On Mon, Jan 23, 2017 at 8:32 AM, Renat Akhmerov wrote:
>>
>>> Tuan,
>>>
>>> I don’t think that Jinja is something that Kirill is responsible for.
>>> It’s just a coincidence that we in Mistral support both YAQL and Jinja. The
>>> latter has been requested by many people so we finally did it.
>>>
>>> As far as performance goes, could you please provide some numbers? When you
>>> say “takes a lot of time”, how much time is it? For what kind of input? Why
>>> do you think it is slow? What are your expectations? Provide as much info as
>>> possible. After that we can ask the YAQL authors to comment and help if we
>>> realize that the problem really exists.
>>>
>>> I’m interested in this too since I’m always looking for ways to speed
>>> Mistral up.
>>>
>>> Thanks
>>>
>>> Renat Akhmerov
>>> @Nokia
>>>
>>> On 18 Jan 2017, at 16:25, lương hữu tuấn  wrote:
>>>
>>> Hi Kirill,
>>>
>>> Do you have any information related to the performance of Jinja and YAQL
>>> validation? With big input sizes, YAQL runs quite slowly in our case,
>>> so we plan to switch to Jinja.
>>>
>>> Br,
>>>
>>> @Nokia/Tuan
>>>
>>> On Tue, Jan 17, 2017 at 3:02 PM, lương hữu tuấn 
>>> wrote:
>>>
 Hi Kirill,

 Thank you for your information. I hope we will get more information
 about it. Just keep in touch when you guys at Mirantis have some
 performance results for YAQL.

 Br,

 @Nokia/Tuan

 On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev 
 wrote:

> I think the Fuel team encountered similar problems; I’d advise asking them
> around. Also Stan (the author of yaql) might shed some light on the problem =)
>
> --
> Kirill Zaitsev
> Murano Project Tech Lead
> Software Engineer at
> Mirantis, Inc
>
> On 17 January 2017 at 15:11:52, lương hữu tuấn (tuantulu...@gmail.com)
> wrote:
>
> Hi,
>
> We are now using yaql in mistral 

[openstack-dev] [nova][bugs] Nova Bugs Team Meeting this Tuesday Cancelled

2017-01-23 Thread Augustina Ragwitz
I've had a scheduling conflict and need to cancel the next Nova Bugs
Team meeting. If anyone is interested in running the meeting in my place,
since it's been a while, please feel free to reach out to me via email or
IRC.

-- 
Augustina Ragwitz
Señora Software Engineer
---
Waiting for your change to get through the gate? Clean up some Nova
bugs!
http://45.55.105.55:8082/bugs-dashboard.html
---
email: aragwitz+n...@pobox.com
irc: auggy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ python-novaclient][ python-glanceclient][ python-cinderclient][ python-neutronclient] Remove x-openstack-request-id logging code as it is logged twice

2017-01-23 Thread Kekane, Abhishek
Hi Dims,

Thank you for the update.

As of now, patches updating requirements.txt in the individual clients have been 
proposed by the bot, of which the python-novaclient patch is already merged. 
The following patches are still in the review queue:

Python-glanceclient: https://review.openstack.org/#/c/423678
Python-cinderclient: https://review.openstack.org/#/c/423674
Python-neutronclient: https://review.openstack.org/#/c/422968

I have submitted patches to python-glanceclient [1], python-cinderclient [2] 
and python-neutronclient [3] to address this issue, with a dependency on the 
above patches.

As the client library release is targeted for this week, we need to make sure these 
patches get through and are part of the release; otherwise we can hit the issue 
of logging the request-id mapping twice in the logs if SessionClient is used.

[1] https://review.openstack.org/422591
[2] https://review.openstack.org/#/c/423940 (one +2)
[3] https://review.openstack.org/#/c/423921


Thank you,

Abhishek Kekane


-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: Saturday, January 21, 2017 6:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ python-novaclient][ python-glanceclient][ 
python-cinderclient][ python-neutronclient] Remove x-openstack-request-id 
logging code as it is logged twice

"keystoneauth1 >= 2.17.0" implies that python-novaclient with your fix will work with 
any version including 2.17.0, which is not true. You need to either use 
"keystoneauth1 >= 2.18.0" or "keystoneauth1 > 2.17.0", and we prefer the ">=" 
notation, I think.
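
The semantics Dims describes can be sketched with a plain-Python version comparison — a simplified stand-in for real pip/PEP 440 specifier matching, handling only numeric dotted versions:

```python
def parse(version):
    # Simplified numeric-only parsing; real pip follows the full PEP 440 rules.
    return tuple(int(part) for part in version.split("."))

def satisfies(version, minimum):
    # Models a ">= minimum" requirement specifier.
    return parse(version) >= parse(minimum)

# ">= 2.17.0" admits 2.17.0 itself, which lacks the fix...
print(satisfies("2.17.0", "2.17.0"))  # True
# ...while ">= 2.18.0" rules out the broken minimum.
print(satisfies("2.17.0", "2.18.0"))  # False
print(satisfies("2.18.0", "2.18.0"))  # True
```

In other words, the specifier states the oldest version the code is claimed to work with; it does not prevent newer versions from being installed.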

Thanks,
Dims

On Fri, Jan 20, 2017 at 10:53 PM, Kekane, Abhishek 
 wrote:
> Hi Dims,
>
> Thank you for the reply. I will propose a patch soon. Just out of curiosity,
> will keystoneauth1 >= 2.17.0 not install 2.18.0?
>
> Abhishek
> 
> From: Davanum Srinivas 
> Sent: Saturday, January 21, 2017 8:27:56 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [ python-novaclient][ 
> python-glanceclient][ python-cinderclient][ python-neutronclient] 
> Remove x-openstack-request-id logging code as it is logged twice
>
> Abhishek,
>
> 1) requirements.txt for all 4 python-*client you mentioned have 
> "keystoneauth1>=2.17.0",
> 2) i do not see a review request to bump the minimum version in global 
> requirements for keystoneauth1 to "keystoneauth1>=2.18.0"
> (https://review.openstack.org/#/q/project:openstack/requirements+is:op
> en)
>
> Can you please file one?
>
> Thanks,
> Dims
>
>
> On Fri, Jan 20, 2017 at 12:52 AM, Kekane, Abhishek 
>  wrote:
>> Hi Devs,
>>
>>
>>
>> In the latest keystoneauth1 version 2.18.0, x-openstack-request-id is 
>> logged for every HTTP response. This keystoneauth1 version will be 
>> used for ocata.
>>
>> The same request id is also logged in 'request' method of 
>> SessionClient class for python-novaclient, python-glanceclient, 
>> python-cinderclient and python-neutronclient. Once requirements.txt 
>> is synced with global-requirements and it uses keystoneauth1 version 
>> 2.18.0 and above, x-openstack-request-id will be logged twice for these 
>> clients.
>>
>>
>>
>> I have submitted patches for python-novaclient [1] and 
>> python-glanceclient [2] and created patches for python-cinderclient 
>> and python-neutronclient, but the latter will not be reviewed unless and 
>> until requirements.txt is synced with global-requirements and it 
>> uses keystoneauth1 version 2.18.0.
>>
>>
>>
>> As final releases for client libraries are scheduled for next week 
>> (between Jan 23 and Jan 27), we want to address these issues in the 
>> above-mentioned clients.
>>
>>
>>
>> Please let us know your opinion about the same.
>>
>>
>>
>> [1] https://review.openstack.org/422602
>>
>> [2] https://review.openstack.org/422591
>>
>>
>> _
>> _
>> Disclaimer: This email and any attachments are sent in strictest 
>> confidence for the sole use of the addressee and may contain legally 
>> privileged, confidential, and proprietary data. If you are not the 
>> intended recipient, please advise the sender by replying promptly to 
>> this email and then delete and destroy this email and any attachments 
>> without any further use, copying or forwarding.
>>
>> _
>> _ OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> 

[Openstack-operators] Newton consoleauth HA tokens

2017-01-23 Thread Chris Apsey

All,

I attempted to deploy the nova service in HA, but when users attempt to 
connect via the console, it doesn't work about 30% of the time and they 
get the 1006 error. The nova-consoleauth service is reporting their 
token as invalid. I am running memcached, and have tried referencing it 
using both the legacy memcached_servers directive and the new [cache] 
configuration section. No dice. If I disable the nova-consoleauth 
service on one of the nodes, everything works fine. I see lots of bug 
reports floating around about this, but I can't quite get the solutions 
I have found working reliably. I'm on Ubuntu 16.04 LTS + Newton from UCA.
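
For reference, a hedged sketch of the [cache] setup that is normally needed so that all nova-consoleauth instances share token state. The hostnames and ports are placeholders; the key point is that every controller must list the same memcached servers, and nova must be restarted on all of them after the change:

```ini
# /etc/nova/nova.conf -- illustrative sketch, adjust hosts to your deployment
[cache]
enabled = true
backend = oslo_cache.memcache_pool
# Identical list on every controller running nova-consoleauth:
memcache_servers = controller1:11211,controller2:11211,controller3:11211
```

If each node silently falls back to a local cache (for example because the [cache] section is present but enabled is false, or the legacy option is set in the wrong section), tokens issued by one consoleauth instance are invisible to the others, which matches the ~30% failure pattern described above.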


Ideas?

--
v/r

Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Saravanan KR
Thanks, Giulio, for adding it to the PTG discussion pad. I am not yet sure
of my presence at the PTG. Hoping that things will fall into place soon.

We have spent a considerable amount of time moving from static roles
to composable roles. If we are planning to introduce static profiles,
then after a while we will end up with the same problem again;
ultimately, it depends on how the features will be composed
on a role. Looking forward.

Regards,
Saravanan KR

On Mon, Jan 23, 2017 at 6:25 PM, Giulio Fidente  wrote:
> On 01/23/2017 11:07 AM, Saravanan KR wrote:
>> Thanks John for the info.
>>
>> I am going through the spec in detail. And before that, I had few
>> thoughts about how I wanted to approach this, which I have drafted in
>> https://etherpad.openstack.org/p/tripleo-derive-params. And it is not
>> 100% ready yet, I was still working on it.
>
> I've linked this etherpad for the session we'll have at the PTG
>
>> As of now, there are few differences on top of my mind, which I want
>> to highlight, I am still going through the specs in detail:
>> * Profiles vs Features - Considering an overcloud node as a set of profiles,
>> rather than a node which can host these features, would have
>> limitations. For example, if I need a Compute node to host both
>> Ceph (OSD) and DPDK, then the node will have multiple profiles, or we
>> have to create a profile like
>> hci_enterprise_many_small_vms_with_dpdk? The first is not
>> appropriate and the latter is not scalable; maybe you have something else in
>> mind?
>> * Independent - The initial plan was for this to be an independent
>> execution; it can also be added to deploy if needed.
>> * Not to expose/duplicate parameters which are straightforward; for
>> example, the tuned profile name should be associated with the feature
>> internally - the workflows will decide it.
>
> for all of the above, I think we need to decide if we want the
> optimizations to be profile-based and gathered *before* the overcloud
> deployment is started, or if we want to set these values during the
> overcloud deployment based on the data we have at runtime
>
> seems like both approaches have pros and cons and this would be a good
> conversation to have with more people at the PTG
>
>> * And another thing, which I couldn't get is, where will the workflow
>> actions be defined, in THT or tripleo_common?
>
> to me it sounds like executing the workflows before stack creation is
> started would be fine, at least for the initial phase
>
> running workflows from Heat depends on the other blueprint/session we'll
> have about the WorkflowExecution resource and once that will be
> available, we could trigger the workflow execution from tht if beneficial
>
>> The requirements I thought of for the derive-params workflow are:
>> The parameter-deriving workflow should
>> * be independently runnable
>> * take basic parameter inputs - for easy deployment, keep a very minimal
>> set of mandatory parameters, and the rest as optional parameters
>> * read introspection data from the Ironic DB and the Swift-stored blob
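
As a purely hypothetical illustration of those requirements — none of these task or action names exist; they are placeholders for the derive-params design being discussed — an independently runnable workflow could look roughly like:

```yaml
# Hypothetical sketch only -- workflow, task, and action names are placeholders.
derive_params:
  input:
    - plan_name
    - role_name
    # Keep mandatory inputs minimal; everything else stays optional.
    - overrides: {}
  tasks:
    get_introspection_data:
      # Read the hardware data gathered by Ironic inspection.
      action: example.get_introspection_data role=<% $.role_name %>
      publish:
        hw_data: <% task(get_introspection_data).result %>
      on-success: derive_values
    derive_values:
      action: example.derive_role_parameters hw_data=<% $.hw_data %> overrides=<% $.overrides %>
      publish:
        derived_parameters: <% task(derive_values).result %>
```

Because it takes only a plan and role name, such a workflow could be run standalone by an operator or triggered as part of a deployment, matching the "independent to run" requirement above.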
>>
>> I will add these comments as a starting point on the spec. We will work
>> towards narrowing the differences, so that the operators' headache is
>> reduced to a greater extent.
>
> thanks
>
> --
> Giulio Fidente
> GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-23 Thread Renat Akhmerov
While I’m in the loop regarding how this workflow works, others may not be. 
Could you please just post your expression and the data that you use to evaluate this 
expression? And times. The workflow itself has nothing to do with what we’re 
discussing.

Renat Akhmerov
@Nokia

> On 23 Jan 2017, at 21:44, lương hữu tuấn  wrote:
> 
> Hi guys,
> 
> I am providing some information about the results of testing YAQL performance on 
> my devstack stable/newton with 6GB of RAM. The workflow I created is below:
> 
> #
> input:
>   - size
>   - number_of_handovers
>
> tasks:
>   generate_input:
>     action: std.javascript
>     input:
>       context:
>         size: <% $.size %>
>       script: |
>         result = {}
>         for(i=0; i < $.size; i++) {
>           result["key_" + i] = {
>             "alma": "korte"
>           }
>         }
>         return result
>     publish:
>       data: <% task(generate_input).result %>
>     on-success:
>       - process
>
>   process:
>     action: std.echo
>     input:
>       output: <% $.data %>
>     publish:
>       data: <% task(process).result %>
>       number_of_handovers: <% $.number_of_handovers - 1 %>
>     on-success:
>       - process: <% $.number_of_handovers > 0 %>
>
> ##
> 
> I tested with size = 1 and number_of_handovers = 50. The results 
> show that the time for evaluating <% $.data %> is quite long. I do not 
> know whether this time is acceptable, but note that in our use case the value of 
> $.data could be quite large. A couple of log entries are below:
> 
> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function 
> evaluate finished in 11262.710 ms
> 
> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function 
> evaluate finished in 8146.324 ms
> 
> ..
> 
> The average is around 10 s per evaluation.
> 
> Br,
> 
> Tuan
> 
> 
> On Mon, Jan 23, 2017 at 11:48 AM, lương hữu tuấn wrote:
> Hi Renat,
> 
> For more details, I will check on the CBAM machine and hope it has not been 
> deleted yet, since we ran the test around a week ago.
> Another thing: Jinja2 ran 2-3 times faster than YAQL on the same 
> test. I will also provide more information later.
> 
> Br,
> 
> Tuan
> 
> On Mon, Jan 23, 2017 at 8:32 AM, Renat Akhmerov wrote:
> Tuan,
> 
> I don’t think that Jinja is something that Kirill is responsible for. It’s 
> just a coincidence that we in Mistral support both YAQL and Jinja. The latter 
> has been requested by many people so we finally did it.
> 
> As far as performance goes, could you please provide some numbers? When you say 
> “takes a lot of time”, how much time is it? For what kind of input? Why do you 
> think it is slow? What are your expectations? Provide as much info as 
> possible. After that we can ask the YAQL authors to comment and help if we 
> realize that the problem really exists.
> 
> I’m interested in this too since I’m always looking for ways to speed Mistral 
> up.
> 
> Thanks
> 
> Renat Akhmerov
> @Nokia
> 
>> On 18 Jan 2017, at 16:25, lương hữu tuấn wrote:
>> 
>> Hi Kirill,
>> 
>> Do you have any information related to the performance of Jinja and YAQL 
>> validation? With big input sizes, YAQL runs quite slowly in our case, 
>> so we plan to switch to Jinja.
>> 
>> Br,
>> 
>> @Nokia/Tuan
>> 
>> On Tue, Jan 17, 2017 at 3:02 PM, lương hữu tuấn wrote:
>> Hi Kirill,
>> 
>> Thank you for your information. I hope we will get more information about 
>> it. Just keep in touch when you guys at Mirantis have some performance 
>> results for YAQL.
>> 
>> Br,
>> 
>> @Nokia/Tuan 
>> 
>> On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev wrote:
>> I think the Fuel team encountered similar problems; I’d advise asking them 
>> around. Also Stan (the author of yaql) might shed some light on the problem =)
>> 
>> -- 
>> Kirill Zaitsev
>> Murano Project Tech Lead
>> Software Engineer at
>> Mirantis, Inc
>> 
>> On 17 January 2017 at 15:11:52, lương hữu tuấn (tuantulu...@gmail.com) wrote:
>> 
>>> Hi,
>>> 
>>> We are now using yaql in Mistral, and what we see is that the process of 
>>> validating the yaql expressions of the input takes a lot of time, especially 
>>> with big inputs. Do you guys have any information about the performance of 
>>> yaql? 
>>> 
>>> Br,
>>> 
>>> @Nokia/Tuan
>>> 
>>> __ 
>>> OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-23 Thread Kevin Benton
What I don't understand is why the OOM killer is being invoked when there
is almost no swap space being used at all. Check out the memory output when
it's killed:

http://logs.openstack.org/59/382659/26/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/7de01d0/logs/syslog.txt.gz#_Jan_11_15_54_36

"Jan 11 15:54:36 ubuntu-xenial-rax-ord-6599274 kernel: Free swap  =
7994832kB
Jan 11 15:54:36 ubuntu-xenial-rax-ord-6599274 kernel: Total swap =
7999020kB"

Do we have something set that is effectively disabling the usage of swap
space?
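
The numbers in that log entry already answer part of the question: the box had essentially all of its swap free when the OOM killer fired. A quick check of the arithmetic, using the values from the log above:

```python
total_swap_kb = 7999020   # "Total swap" from the kernel log
free_swap_kb = 7994832    # "Free swap" from the kernel log

used_swap_kb = total_swap_kb - free_swap_kb
used_pct = 100.0 * used_swap_kb / total_swap_kb
print("swap used: %d kB (%.3f%% of %.1f GB total)"
      % (used_swap_kb, used_pct, total_swap_kb / (1024.0 * 1024.0)))
```

Roughly 4 MB of an ~8 GB swap device was in use, so whatever invoked the OOM killer — possible candidates include a cgroup memory limit, vm.swappiness/overcommit settings, or an atomic allocation that cannot wait for swap-out — was not ordinary swap exhaustion.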

On Wed, Jan 18, 2017 at 4:13 PM, Joe Gordon  wrote:

>
>
> On Thu, Jan 19, 2017 at 10:27 AM, Matt Riedemann <
> mrie...@linux.vnet.ibm.com> wrote:
>
>> On 1/18/2017 4:53 AM, Jens Rosenboom wrote:
>>
>>> To me it looks like the times of 2G are long gone, Nova is using
>>> almost 2G all by itself. And 8G may be getting tight if additional
>>> stuff like Ceph is being added.
>>>
>>>
>> I'm not really surprised at all about Nova being a memory hog with the
>> versioned object stuff we have, which does its own nesting of objects.
>>
>> What tools do people use to profile the memory usage by the
>> types of objects in memory while this is running?
>
>
> objgraph and guppy/heapy
>
> http://smira.ru/wp-content/uploads/2011/08/heapy.html
>
> https://www.huyng.com/posts/python-performance-analysis
>
> You can also use gc.get_objects() (https://docs.python.org/2/
> library/gc.html#gc.get_objects) to get a list of all objects in memory
> and go from there.
>
> Slots (https://docs.python.org/2/reference/datamodel.html#slots) are
> useful for reducing the memory usage of objects.
>
>
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
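
The __slots__ suggestion quoted above is easy to demonstrate with the stdlib alone: a slotted class drops the per-instance __dict__ entirely, which adds up when millions of small objects are alive at once. The class names here are purely illustrative:

```python
import sys

class Plain(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

class Slotted(object):
    __slots__ = ("a", "b")
    def __init__(self, a, b):
        self.a = a
        self.b = b

p, s = Plain(1, 2), Slotted(1, 2)
# A plain instance pays for the object itself plus its attribute dict.
plain_cost = sys.getsizeof(p) + sys.getsizeof(p.__dict__)
slotted_cost = sys.getsizeof(s)  # no per-instance __dict__ at all
print("plain: %d bytes, slotted: %d bytes" % (plain_cost, slotted_cost))
```

The trade-off is that slotted instances cannot grow arbitrary attributes at runtime, so this fits fixed-schema value objects rather than anything relying on dynamic attribute assignment.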


Re: [Openstack-operators] OsOps Reboot

2017-01-23 Thread Melvin Hillsman
Unfortunately I had not requested travel for the PTG, but I plan to be at the 
midcycle, should nothing change.

--
Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center
mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
Learner | Ideation | Belief | Responsibility | Command
http://osic.org

> On Jan 23, 2017, at 18:27, Matt Fischer  wrote:
> 
> Will there be enough of us at the PTG for an impromptu session there as well?
> 
>> On Mon, Jan 23, 2017 at 9:18 AM, Mike Dorman  wrote:
>> +1!  Thanks for driving this.
>> 
>>  
>> 
>>  
>> 
>> From: Edgar Magana 
>> Date: Friday, January 20, 2017 at 1:23 PM
>> To: "m...@mattjarvis.org.uk" , Melvin Hillsman 
>> 
>> 
>> 
>> Cc: OpenStack Operators 
>> Subject: Re: [Openstack-operators] OsOps Reboot
>>  
>> 
>> I super second this! Yes, looking forward to amazing contributions there.
>> 
>>  
>> 
>> Edgar
>> 
>>  
>> 
>> From: Matt Jarvis 
>> Reply-To: "m...@mattjarvis.org.uk" 
>> Date: Friday, January 20, 2017 at 12:33 AM
>> To: Melvin Hillsman 
>> Cc: OpenStack Operators 
>> Subject: Re: [Openstack-operators] OsOps Reboot
>> 
>>  
>> 
>> Great stuff Melvin ! Look forward to seeing this move forward. 
>> 
>>  
>> 
>> On Fri, Jan 20, 2017 at 6:32 AM, Melvin Hillsman  
>> wrote:
>> 
>> Good day everyone,
>> 
>>  
>> 
>> As operators we would like to reboot the efforts started around OsOps. 
>> Initial things that may make sense to work towards are starting back 
>> meetings, standardizing the repos (like having a lib or common folder, 
>> READMEs include release(s) tool works with, etc), increasing feedback loop 
>> from operators in general, actionable work items, identifying teams/people 
>> with resources for continuous testing/feedback, etc.
>> 
>>  
>> 
>> We have got to a great place so let's increase the momentum and maximize all 
>> the work that has been done for OsOps so far. Please visit the following 
>> link [ https://goo.gl/forms/eSvmMYGUgRK901533 ] to vote on day of the week 
>> and time (UTC) you would like to have OsOps meeting. And also visit this 
>> etherpad [ https://etherpad.openstack.org/p/osops-meeting ] to help shape 
>> the initial and ongoing agenda items.
>> 
>>  
>> 
>> Really appreciate you taking time to read through this email and looking 
>> forward to all the great things to come.
>> 
>>  
>> 
>> Also we started an etherpad for brainstorming around how OsOps could/would 
>> function; very rough draft/outline/ideas right now again please provide 
>> feedback: https://etherpad.openstack.org/p/osops-project-future
>> 
>> 
>> 
>> --
>> 
>> Kind regards,
>> 
>> Melvin Hillsman
>> Ops Technical Lead
>> OpenStack Innovation Center
>> 
>> mrhills...@gmail.com
>> phone: (210) 312-1267
>> mobile: (210) 413-1659
>> http://osic.org
>> 
>> Learner | Ideation | Belief | Responsibility | Command
>> 
>> 
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> 
>>  
>> 
>> 
>> 
> 




Re: [openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-23 Thread Pradeep Singh
+1, welcome Kevin. I appreciate your work.

On Tuesday, January 24, 2017, Yanyan Hu  wrote:

> +1 for the change.
>
> 2017-01-24 6:56 GMT+08:00 Hongbin Lu  >:
>
>> Hi Zun cores,
>>
>>
>>
>> I proposed a change of Zun core team membership as below:
>>
>>
>>
>> + Kevin Zhao (kevin-zhao)
>>
>> - Haiwei Xu (xu-haiwei)
>>
>>
>>
>> Kevin has been working for Zun for a while, and made significant
>> contribution. He submitted several non-trivial patches with high quality.
>> One of his challenging task is adding support of container interactive
>> mode, and it looks he is capable to handle this challenging task (his
>> patches are under reviews now). I think he is a good addition to the core
>> team. Haiwei is a member of the initial core team. Unfortunately, his
>> activity dropped down in the past a few months.
>>
>>
>>
>> According to the OpenStack Governance process [1], we require a minimum
>> of 4 +1 votes from Zun core reviewers within a 1 week voting window
>> (consider this proposal as a +1 vote from me). A vote of -1 is a veto. If
>> we cannot get enough votes or there is a veto vote prior to the end of the
>> voting window, this proposal is rejected.
>>
>>
>>
>> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> 
>
>
> --
> Best regards,
>
> Yanyan
>


Re: [openstack-dev] [neutron] grenade failures in the gate

2017-01-23 Thread Armando M.
On 23 January 2017 at 13:50, Jeremy Stanley  wrote:

> On 2017-01-23 13:38:58 -0800 (-0800), Armando M. wrote:
> > We spotted [1] in the gate. Please wait for its resolution until pushing
> > patches into the merge queue.
>
> https://review.openstack.org/424323 seems to be the fix, and will
> hopefully merge shortly along with its dependency (they're at the
> top of the gate pipeline now as I write this).
>

Yes, that's the one. It looks like we're out of the woods...for now!

Cheers,
Armando


> --
> Jeremy Stanley
>


Re: [openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-23 Thread Yanyan Hu
+1 for the change.

2017-01-24 6:56 GMT+08:00 Hongbin Lu :

> Hi Zun cores,
>
>
>
> I proposed a change of Zun core team membership as below:
>
>
>
> + Kevin Zhao (kevin-zhao)
>
> - Haiwei Xu (xu-haiwei)
>
>
>
> Kevin has been working for Zun for a while, and made significant
> contribution. He submitted several non-trivial patches with high quality.
> One of his challenging task is adding support of container interactive
> mode, and it looks he is capable to handle this challenging task (his
> patches are under reviews now). I think he is a good addition to the core
> team. Haiwei is a member of the initial core team. Unfortunately, his
> activity dropped down in the past a few months.
>
>
>
> According to the OpenStack Governance process [1], we require a minimum of
> 4 +1 votes from Zun core reviewers within a 1 week voting window (consider
> this proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot
> get enough votes or there is a veto vote prior to the end of the voting
> window, this proposal is rejected.
>
>
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
>
>
> Best regards,
>
> Hongbin
>
>
>
>
>


-- 
Best regards,

Yanyan


Re: [openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-23 Thread Eli Qiao
+1 for this change. 

-- 
Eli Qiao
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Tuesday, 24 January 2017 at 6:56 AM, Hongbin Lu wrote:

> Hi Zun cores,
>  
> I proposed a change of Zun core team membership as below:
>  
> + Kevin Zhao (kevin-zhao)
> - Haiwei Xu (xu-haiwei)
>  
> Kevin has been working for Zun for a while, and made significant 
> contribution. He submitted several non-trivial patches with high quality. One 
> of his challenging task is adding support of container interactive mode, and 
> it looks he is capable to handle this challenging task (his patches are under 
> reviews now). I think he is a good addition to the core team. Haiwei is a 
> member of the initial core team. Unfortunately, his activity dropped down in 
> the past a few months.
>  
> According to the OpenStack Governance process [1], we require a minimum of 4 
> +1 votes from Zun core reviewers within a 1 week voting window (consider this 
> proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get 
> enough votes or there is a veto vote prior to the end of the voting window, 
> this proposal is rejected.
>  
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>  
> Best regards,
> Hongbin
>  
> 
> 
> 
> 




Re: [openstack-dev] [tricircle]Pike design topics discussion

2017-01-23 Thread joehuang
Hello,

(Repost)

As discussed during the weekly meeting, let's discuss what to do in Pike in the
etherpad next Tuesday morning at 01:30 UTC (9:30 am Beijing time, 10:30 am
Korea/Japan time, Monday 5:30 pm PST).

The etherpad link: https://etherpad.openstack.org/p/tricircle-pike-design-topics

Please put your concerns about what to do in Pike into the etherpad, and let's
discuss them at that time; the duration is around 1.5 hours.

Best Regards
Chaoyi Huang (joehuang)


[openstack-dev] [neutron-lbaas][barbican][octavia]certs don't get deregistered in barbican after lbaas listener delete

2017-01-23 Thread Jiahao Liang (Frankie)
Hi community,

I created a load balancer with a listener whose protocol is
"TERMINATED_HTTPS", specifying --default-tls-container-ref with a ref to a
secret container from Barbican.
However, after I deleted the listener, lbaas wasn't removed from the
Barbican container's consumer list.

$openstack secret container get
http://192.168.20.24:9311/v1/containers/453e8905-d42b-43bd-9947-50e3acf499f4
+----------------+-----------------------------------------------------------------------------------------------------+
| Field          | Value                                                                                               |
+----------------+-----------------------------------------------------------------------------------------------------+
| Container href | http://192.168.20.24:9311/v1/containers/453e8905-d42b-43bd-9947-50e3acf499f4                        |
| Name           | tls_container2                                                                                      |
| Created        | 2017-01-19 12:44:07+00:00                                                                           |
| Status         | ACTIVE                                                                                              |
| Type           | certificate                                                                                         |
| Certificate    | http://192.168.20.24:9311/v1/secrets/bfc2bf01-0f23-4105-bf09-c75839b6b4cb                           |
| Intermediates  | None                                                                                                |
| Private Key    | http://192.168.20.24:9311/v1/secrets/c85d150e-ec84-42e0-a65f-9c9ec19767e1                           |
| PK Passphrase  | None                                                                                                |
| Consumers      | {u'URL': u'lbaas://RegionOne/loadbalancer/5e7768b9-7aa9-4146-8a71-6291353b447e', u'name': u'lbaas'} |
+----------------+-----------------------------------------------------------------------------------------------------+


I went through the neutron-lbaas code base. We do register a consumer during
the creation of a "TERMINATED_HTTPS" listener in [1], where get_cert()
registers lbaas as a consumer with the Barbican cert_manager (
https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/common/cert_manager/barbican_cert_manager.py#L177
), but we don't deregister it during the deletion in [2]; there we probably
need to call delete_cert() from the Barbican cert_manager to remove the
consumer (
https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/common/cert_manager/barbican_cert_manager.py#L187
)
[1]:
https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/services/loadbalancer/plugin.py#L642
[2]:
https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/services/loadbalancer/plugin.py#L805
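The register/deregister symmetry being argued for here can be sketched with a stub; StubCertManager is illustrative only, not the real neutron_lbaas barbican_cert_manager:

```python
class StubCertManager(object):
    """Toy stand-in for a cert manager that tracks Barbican consumers."""

    def __init__(self):
        self.consumers = {}  # cert_ref -> set of consumer service names

    def get_cert(self, cert_ref, service_name='lbaas'):
        # Create path: fetching the cert registers the caller as a
        # consumer of the Barbican container (what [1] does today).
        self.consumers.setdefault(cert_ref, set()).add(service_name)
        return 'cert-for-' + cert_ref

    def delete_cert(self, cert_ref, service_name='lbaas'):
        # Delete path: this is the call the listener-delete path appears
        # to be missing in [2]; it deregisters the consumer.
        self.consumers.get(cert_ref, set()).discard(service_name)

mgr = StubCertManager()
mgr.get_cert('ref-1')     # listener create
mgr.delete_cert('ref-1')  # listener delete; without this call the
                          # consumer entry lingers, as observed above
```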


My questions are:
1. Is this a bug?
2. Or is it an intentional design that leaves this to the vendor driver to handle?

It looks more like a bug to me.

Any thoughts?


Best,
Jiahao
-- 

*梁嘉豪/Jiahao LIANG (Frankie) *
Email: gzliangjia...@gmail.com


Re: [Openstack-operators] OsOps Reboot

2017-01-23 Thread Matt Fischer
Will there be enough of us at the PTG for an impromptu session there as
well?

On Mon, Jan 23, 2017 at 9:18 AM, Mike Dorman  wrote:

> +1!  Thanks for driving this.
>
>
>
>
>
> *From: *Edgar Magana 
> *Date: *Friday, January 20, 2017 at 1:23 PM
> *To: *"m...@mattjarvis.org.uk" , Melvin Hillsman <
> mrhills...@gmail.com>
>
> *Cc: *OpenStack Operators 
> *Subject: *Re: [Openstack-operators] OsOps Reboot
>
>
>
> I super second this! Yes, looking forward to amazing contributions there.
>
>
>
> Edgar
>
>
>
> *From: *Matt Jarvis 
> *Reply-To: *"m...@mattjarvis.org.uk" 
> *Date: *Friday, January 20, 2017 at 12:33 AM
> *To: *Melvin Hillsman 
> *Cc: *OpenStack Operators 
> *Subject: *Re: [Openstack-operators] OsOps Reboot
>
>
>
> Great stuff Melvin ! Look forward to seeing this move forward.
>
>
>
> On Fri, Jan 20, 2017 at 6:32 AM, Melvin Hillsman 
> wrote:
>
> Good day everyone,
>
>
>
> As operators we would like to reboot the efforts started around OsOps.
> Initial things that may make sense to work towards are starting back
> meetings, standardizing the repos (like having a lib or common folder,
> READMEs include release(s) tool works with, etc), increasing feedback loop
> from operators in general, actionable work items, identifying teams/people
> with resources for continuous testing/feedback, etc.
>
>
>
> We have got to a great place so let's increase the momentum and maximize
> all the work that has been done for OsOps so far. Please visit the
> following link [ https://goo.gl/forms/eSvmMYGUgRK901533
> 
> ] to vote on day of the week and time (UTC) you would like to have OsOps
> meeting. And also visit this etherpad [ https://etherpad.openstack.
> org/p/osops-meeting
> 
> ] to help shape the initial and ongoing agenda items.
>
>
>
> Really appreciate you taking time to read through this email and looking
> forward to all the great things to come.
>
>
>
> Also we started an etherpad for brainstorming around how OsOps could/would
> function; very rough draft/outline/ideas right now again please provide
> feedback: https://etherpad.openstack.org/p/osops-project-future
> 
>
>
>
> --
>
> Kind regards,
>
> Melvin Hillsman
> Ops Technical Lead
> OpenStack Innovation Center
>
> mrhills...@gmail.com
> phone: (210) 312-1267
> mobile: (210) 413-1659
> http://osic.org
> 
>
> Learner | Ideation | Belief | Responsibility | Command
>
>


Re: [openstack-dev] [congress] ocata client causes feature regression with pre-ocata server

2017-01-23 Thread Eric K
Thanks Tim and Monty!

I also agree with ( c ). Here’s a simple patch doing that:
https://review.openstack.org/#/c/424385/

From:  Tim Hinrichs 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Monday, January 23, 2017 at 7:55 AM
To:  "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [congress] ocata client causes feature
regression with pre-ocata server

> At some point the client sometimes made multiple API calls.  I think (c) seems
> right too.  
> 
> Tim 
> 
> On Sun, Jan 22, 2017 at 1:15 AM Monty Taylor  wrote:
>> On 01/21/2017 04:07 AM, Eric K wrote:
>>> > Hi all,
>>> >
>>> > I was getting ready to request release of congress client, but I
>>> > remembered that the new client causes feature regression if used with
>>> > older versions of congress. Specifically, new client with pre-Ocata
>>> > congress cannot refer to datasource by name, something that could be done
>>> > with pre-Ocata client.
>>> >
>>> > Here¹s the patch of interest: https://review.openstack.org/#/c/407329/
>>> > 
>>> >
>>> > A few questions:
>>> >
>>> > Are we okay with the regression? Seems like it could cause a fair bit of
>>> > annoyance for users.
>> 
>> This is right. New client lib should always continue to work with old
>> server. (A user should be able to just pip install python-congressclient
>> and have it work regardless of when their operator decides to upgrade or
>> not upgrade their cloud)
>> 
>>> >1. If we¹re okay with that, what¹s the best way to document that
>>> > pre-Ocata congress should be used with pre-Ocata client?
>>> >2. If not, how we avoid the regression? Here are some candidates I can
>>> > think of.
>>> >   a. Client detects congress version and act accordingly. I don¹t
>>> > think this is possible, nor desirable for client to be concerned with
>>> > congress version not just API version.
>>> >   b. Release backward compatible API version 1.1 that supports
>>> > getting datasource by name_or_id. Then client will take different paths
>>> > depending on API version.
>>> >   c. If datasource not found, client falls back on old method of
>>> > retrieving list of datasources to resolve name into UUID. This would work,
>>> > but causes extra API & DB call in many cases.
>>> >   d. Patch old versions of Congress to support getting datasource
>>> > by name_or_id. Essentially, it was always a bug that the API didn¹t
>>> > support name_or_id.
>> 
>> I'm a fan of d - but I don't believe it will help - since the problem
>> will still manifest for users who do not have control over the server
>> installation.
>> 
>> I'd suggest c is the most robust. It _is_ potentially more expensive -
>> but that's a good motivation for the deployer to upgrade their
>> installation of congress without negatively impacting the consumer in
>> the  meantime.
>> 
>> Monty
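Option (c) can be sketched as below; NotFound and FakeServer are illustrative stand-ins (not the real congressclient/server API), used only to show the shape of the fallback:

```python
class NotFound(Exception):
    pass

class FakeServer(object):
    """Stand-in for a pre-Ocata server: resolves UUIDs but not names."""
    datasources = [{'id': 'uuid-1', 'name': 'nova'},
                   {'id': 'uuid-2', 'name': 'neutron'}]

    def get_datasource(self, ds_id):
        for ds in self.datasources:
            if ds['id'] == ds_id:
                return ds
        raise NotFound(ds_id)

    def list_datasources(self):
        return self.datasources

def resolve_datasource(server, name_or_id):
    try:
        # Fast path: an Ocata server accepts a name or an id directly.
        return server.get_datasource(name_or_id)
    except NotFound:
        # Fallback (option c): one extra API call, but it keeps the new
        # client working against older servers.
        for ds in server.list_datasources():
            if ds['name'] == name_or_id:
                return ds
        raise
```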
>> 


Re: [openstack-dev] [nova] [placement] [operators] Optional resource asking or not?

2017-01-23 Thread Sylvain Bauza


On 23/01/2017 15:18, Sylvain Bauza wrote:
> 
> 
> On 23/01/2017 15:11, Jay Pipes wrote:
>> On 01/22/2017 04:40 PM, Sylvain Bauza wrote:
>>> Hey folks,
>>>
>>> tl;dr: should we GET /resource_providers for only the related resources
>>> that correspond to enabled filters ?
>>
>> No. Have administrators set the allocation ratios for the resources they
>> do not care about exceeding capacity to a very high number.
>>
>> If someone previously removed a filter, that doesn't mean that the
>> resources were not consumed on a host. It merely means the admin was
>> willing to accept a high amount of oversubscription. That's what the
>> allocation_ratio is for.
>>
>> The flavor should continue to have a consumed disk/vcpu/ram amount,
>> because the VM *does actually consume those resources*. If the operator
>> doesn't care about oversubscribing one or more of those resources, they
>> should set the allocation ratios of those inventories to a high value.
>>
>> No more adding configuration options for this kind of thing (or in this
>> case, looking at an old configuration option and parsing it to see if a
>> certain filter is listed in the list of enabled filters).
>>
>> We have a proper system of modeling these data-driven decisions now, so
>> my opinion is we should use it and ask operators to use the placement
>> REST API for what it was intended.
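For context, placement treats usable capacity roughly as (total - reserved) * allocation_ratio, which is why a very high ratio makes a resource effectively unlimited. A toy illustration (the numbers are made up):

```python
def usable_capacity(total, reserved, allocation_ratio):
    # Placement's capacity check is approximately:
    #   usage + requested <= (total - reserved) * allocation_ratio
    return int((total - reserved) * allocation_ratio)

# 16 physical cores, none reserved, with a 16.0 cpu_allocation_ratio
print(usable_capacity(16, 0, 16.0))      # 256 schedulable VCPUs
# A "don't care" ratio: VCPU stops being the limiting resource
print(usable_capacity(16, 0, 10000.0))   # 160000
```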
>>
> 
> I know your point, but please consider mine.
> What if an operator disabled CoreFilter in Newton and wants to upgrade
> to Ocata ?
> All of that implementation being very close to the deadline makes me
> nervous and I really want the seamless path for operators now using the
> placement service.
> 
> Also, like I said in my bigger explanation, we should need to modify a
> shit ton of assertions in our tests that can say "meh, don't use all the
> filters, but just these ones". Pretty risky so close to a FF.
> 

Oh, just discovered a related point : in Devstack, we don't set the
CoreFilter by default !
https://github.com/openstack-dev/devstack/blob/adcf0c50cd87c68abef7c3bb4785a07d3545be5d/lib/nova#L94

TBC, that means that the gate is not verifying the VCPUs by the filter,
just by the compute claims. Heh.

Honestly, I think we really need to make the filters optional for Ocata then.

-Sylvain

> -Sylvain
> 
> 
>> Best,
>> -jay
>>
>>> Explanation below why even if I
>>> know we have a current consensus, maybe we should discuss again about it.
>>>
>>>
>>> I'm still trying to implement https://review.openstack.org/#/c/417961/
>>> but when trying to get the functional job being +1, I discovered that we
>>> have at least one functional test [1] asking for just the RAMFilter (and
>>> not for VCPUs or disks).
>>>
>>> Given the current PS is asking for *all* both CPU, RAM and disk, it's
>>> trampling the current test by getting a NoValidHost.
>>>
>>> Okay, I could just modify the test and make sure we have enough
>>> resources for the flavors but I actually now wonder if that's all good
>>> for our operators.
>>>
>>> I know we have a consensus saying that we should still ask for both CPU,
>>> RAM and disk at the same time, but I imagine our users coming back to us
>>> saying "eh, look, I'm no longer able to create instances even if I'm not
>>> using the CoreFilter" for example. It could be a bad day for them and
>>> honestly, I'm not sure just adding documentation or release notes would
>>> help them.
>>>
>>> What are you thinking if we say that for only this cycle, we still try
>>> to only ask for resources that are related to the enabled filters ?
>>> For example, say someone is disabling CoreFilter in the conf opt, then
>>> the scheduler shouldn't ask for VCPUs to the Placement API.
>>>
>>> FWIW, we have another consensus about not removing
>>> CoreFilter/RAMFilter/MemoryFilter because the CachingScheduler is still
>>> using them (and not calling the Placement API).
>>>
>>> Thanks,
>>> -Sylvain
>>>
>>> [1]
>>> https://github.com/openstack/nova/blob/de0eff47f2cfa271735bb754637f979659a2d91a/nova/tests/functional/test_server_group.py#L48
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


[openstack-dev] Sabari Murugesan stepping down from Glance core

2017-01-23 Thread Brian Rosmaita
Sabari Murugesan has communicated to me that he's no longer able to
commit time to working on Glance, and he's stepping down from the core
reviewers' team.

This message isn't all bad news, however: I'm particularly grateful that
Sabari has agreed to continue as the VMware driver maintainer for the
glance_store [0].

Please join me in thanking Sabari for all his past service to Glance.
As anyone who's worked with him knows, he's a great colleague, and I'm
really sorry to see him step down.  I hope that he may find time in the
future to work on Glance again.

thanks,
brian

[0] http://docs.openstack.org/developer/glance_store/drivers/index.html




[openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-23 Thread Hongbin Lu
Hi Zun cores,

I proposed a change of Zun core team membership as below:

+ Kevin Zhao (kevin-zhao)
- Haiwei Xu (xu-haiwei)

Kevin has been working on Zun for a while and has made significant contributions. 
He submitted several non-trivial patches with high quality. One of his 
challenging tasks is adding support for container interactive mode, and it looks 
like he is capable of handling it (his patches are under review now). I think he 
is a good addition to the core team. Haiwei is a member of the initial core 
team. Unfortunately, his activity has dropped in the past few months.

According to the OpenStack Governance process [1], we require a minimum of 4 +1 
votes from Zun core reviewers within a 1 week voting window (consider this 
proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get enough 
votes or there is a veto vote prior to the end of the voting window, this 
proposal is rejected.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin



[openstack-dev] [tripleo] Release notes for THT (need help)

2017-01-23 Thread Emilien Macchi
I've made progress on Ocata release notes for TripleO Heat Templates:
https://review.openstack.org/424365

I need some help adding a few features whose wording I wasn't sure about;
please help (in a patch on top of it or in review) as soon as possible.
I'm looking at containers, split-stack-software-configuration,
upgrades, TLS and any feature I might have missed to document in THT.

Next on my TODO: puppet-tripleo.

Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] [Performance][Shaker]

2017-01-23 Thread Sai Sindhur Malleni
Thanks Ilya!

On Mon, Jan 23, 2017 at 6:56 AM, Ilya Shakhat  wrote:

> Hi Sai,
>
> In UDP testing, PPS represents the packets sent by the iperf client to the
> server. Loss is the percentage of packets that were not received by the
> server (more specifically, the server tracks packet sequence numbers and sums
> the gaps between them,
> https://github.com/esnet/iperf/blob/3.0.7/src/iperf_udp.c#L64).
>
> While reported PPS depends on bandwidth and concurrency it makes sense to
> increase them until loss starts going up, meaning that the communication
> channel is near the limit.
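A toy illustration of that accounting (this mirrors the gap-summing idea, not iperf's actual code):

```python
def udp_loss_percent(received_seqs, total_sent):
    # The server effectively counts missing sequence numbers (gaps) and
    # reports them as a percentage of everything the client sent.
    lost = total_sent - len(set(received_seqs))
    return 100.0 * lost / total_sent

# Client sent 10 packets; the server saw 8 distinct sequence numbers,
# so packets 4 and 8 were lost.
print(udp_loss_percent([1, 2, 3, 5, 6, 7, 9, 10], 10))  # 20.0
```

So the reported PPS counts what the client transmitted, and the loss percentage is computed independently on the server side.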
>
> Thanks,
> Ilya
>
> 2017-01-21 1:19 GMT+04:00 Sai Sindhur Malleni :
>
>> Hey,
>>
>> When using the "iperf3" class in shaker for looking at UDP small packet
>> performance, we see that as we scale up the concurrency the average PPS
>> goes up and also the loss % increases. Is the loss % a percentage of the
>> PPS or does the PPS only represent successful transmissions? Thanks!
>>
>> --
>> Sai Sindhur Malleni
>> Software Engineer
>> Red Hat Inc.
>> 100 East Davie Street
>> Raleigh, NC, USA
>> Work: (919) 754-4557 | Cell: (919) 985-1055
>>


-- 
Sai Sindhur Malleni
Software Engineer
Red Hat Inc.
100 East Davie Street
Raleigh, NC, USA
Work: (919) 754-4557 | Cell: (919) 985-1055


[openstack-dev] [openstack-ansible] [ptl] PTL Candidacy for Pike

2017-01-23 Thread Andy McCrae
Hi All,

I'm once again running for the PTL position for OpenStack-Ansible during
the Pike cycle.

Here is my candidacy statement: https://review.openstack.org/#/c/424348/

Thanks for all your support during the Ocata cycle, and looking forward to
Pike!

Andy


Re: [openstack-dev] [neutron] grenade failures in the gate

2017-01-23 Thread Jeremy Stanley
On 2017-01-23 13:38:58 -0800 (-0800), Armando M. wrote:
> We spotted [1] in the gate. Please wait for its resolution until pushing
> patches into the merge queue.

https://review.openstack.org/424323 seems to be the fix, and will
hopefully merge shortly along with its dependency (they're at the
top of the gate pipeline now as I write this).
-- 
Jeremy Stanley



[openstack-dev] [cinder][nova] Cinder-Nova API changes meeting

2017-01-23 Thread Ildiko Vancsa
Hi All,

Unfortunately, our current meeting slot (every Monday 1700 UTC) conflicts with 
other commitments for several of the regular attendees.

In an attempt to find a new slot, I checked the availability of the meeting 
channels for the same time slot on the other days of the week, and at least one 
is currently available for each day. So as a first try, let's see whether we can 
find another day during the week with the SAME (1700 UTC) time slot that works 
better.

You can share your preference on this Doodle poll: 
http://doodle.com/poll/9per237agrdy7rqz 


Thanks,
Ildikó


[openstack-dev] [neutron] grenade failures in the gate

2017-01-23 Thread Armando M.
Hi neutrinos,

We spotted [1] in the gate. Please wait for its resolution until pushing
patches into the merge queue.

Thanks,
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1658806


[openstack-dev] [magnum] PTL nomination is open until Jan 29

2017-01-23 Thread Hongbin Lu
Hi all,

I am sending this email to encourage you to run for Magnum PTL for Pike [1]. I 
think most of the audience is on this ML, so I am posting the message here.

First, I would like to thank you for your interest in the Magnum project. It has 
been great working with you to build the project and make it better and better. 
Second, I would like to relay a reminder that the Pike PTL nomination is open 
*now* and will close at Jan 29 23:45 UTC [1]. I hope more than one of you will 
step up to run for the Magnum PTL position; I think the community will be 
healthier if there is more than one PTL candidate. If you are considering 
running, the blog post below will help you understand more about this role.

  http://blog.flaper87.com/something-about-being-a-ptl

I strongly agree with the following key points of being a PTL:
* Make sure you will have enough time dedicated to the upstream.
* Prepare to step down in a cycle or two and create the next PTLs.
* Community decides: PTLs are not dictators.

If you have any questions while deciding, feel free to reach out to me; I am 
happy to share my past experience as a Magnum PTL. Below is the history of 
Magnum PTLs. I sincerely thank them for their leadership, but I would encourage 
a change in the upcoming cycles, simply to follow the convention of other 
OpenStack projects of rotating the PTL position, ideally to a new person of a 
different affiliation. I think this will let everyone feel ownership of the 
project and help the community in the long run.

Juno and earlier: Adrian Otto
Kilo: Adrian Otto
Liberty: Adrian Otto
Mitaka: Adrian Otto
Newton: Hongbin Lu
Ocata: Adrian Otto

[1] https://governance.openstack.org/election/

Best regards,
Hongbin


Re: [openstack-dev] [horizon] feature freeze exception request -- nova simple tenant usages api pagination

2017-01-23 Thread Richard Jones
[I'm on vacation, so can't look into this too deeply, sorry]

I'm not sure I follow Rob's point here. Does the patch
https://review.openstack.org/#/c/410337 just check the version to see
if it's >= 2.40 and take action appropriately? I don't see how that
forces requesting 2.40 with every request. Then again, I've not been
able to look into how the current clients' microversion code is
implemented/broken. Is it just that *declaring* the 2.40 version in
https://review.openstack.org/#/c/422642 results in all requests being
forced to use that version?


 Richard

On 23 January 2017 at 23:10, Radomir Dopieralski  wrote:
> Yes, to do it differently we need to add the microversion support patch that
> you are working on, and make use of it, or write a patch that has equivalent
> functionality.
>
> On Fri, Jan 20, 2017 at 6:57 PM, Rob Cresswell
>  wrote:
>>
>> Just a thought: with the way we currently do microversions, wouldn't this
>> request 2.40 for every request? There's a pretty good chance that would
>> break things.
>>
>> Rob
>>
>> On 20 January 2017 at 00:02, Richard Jones  wrote:
>>>
>>> FFE granted for the three patches. We need to support that nova API
>>> change.
>>>
>>> On 20 January 2017 at 01:28, Radomir Dopieralski 
>>> wrote:
>>> > I would like to request a feature freeze exception for the following
>>> > patch:
>>> >
>>> > https://review.openstack.org/#/c/410337
>>> >
>>> > This patch adds support for retrieving the simple tenant usages from
>>> > Nova in chunks, and it is necessary for correct data, given that the
>>> > related patches have already been merged in Nova. Without it, the data
>>> > received will be truncated.
>>> >
>>> > In order to actually use that patch, however, it is necessary to set the
>>> > Nova API version to at least version 2.40. For this, it's necessary to
>>> > also add this patch:
>>> >
>>> > https://review.openstack.org/422642
>>> >
>>> > However, that patch will not work because of a bug in the
>>> > VersionManager, which for some reason uses floating-point numbers for
>>> > specifying versions and thus understands 2.40 as 2.4. To fix that, it
>>> > is also necessary to merge this patch:
>>> >
>>> > https://review.openstack.org/#/c/410688
>>> >
>>> > I would like to request an exception for all those three patches.
>>> >
>>> > An alternative to this would be to finish and merge the microversion
>>> > support and modify the first patch to make use of it. Then we would
>>> > need exceptions for those two patches.
>>> >
>>> >
>>>
>>>
>>
>>
>>
>>
>
>
>
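The VersionManager issue Radomir describes — representing microversions as floating-point numbers, so that 2.40 collapses to 2.4 — is easy to demonstrate. The sketch below is illustrative only, not Horizon's actual code; the function names are made up:

```python
# Illustrative only -- not Horizon's VersionManager. Shows why floats are a
# poor representation for API microversions: float("2.40") == 2.4.

def float_supports(requested, maximum):
    # Buggy approach: compare version strings as floating-point numbers.
    return float(requested) <= float(maximum)

def tuple_supports(requested, maximum):
    # Safer approach: compare (major, minor) integer tuples.
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(requested) <= parse(maximum)

# With floats, microversion 2.40 is mistaken for 2.4, so it appears to be
# supported by a server whose maximum microversion is 2.7 -- which is wrong:
print(float_supports("2.40", "2.7"))  # True (incorrect)
print(tuple_supports("2.40", "2.7"))  # False (2.40 is newer than 2.7)
```

Comparing versions as integer tuples is the usual fix, since microversion minor numbers are ordinals, not decimal fractions.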



Re: [Openstack] Setting up another compute node

2017-01-23 Thread Trinath Somanchi

PortBindingFailed: Binding failed for port e1058d22-9a7b-4988-9644-d0f476a01015,


Please reverify neutron configuration


Get Outlook for iOS


From: Peter Kirby 
Sent: Tuesday, January 24, 2017 2:02:57 AM
To: Trinath Somanchi
Cc: OpenStack
Subject: Re: [Openstack] Setting up another compute node

I agree.  But I can't figure out why the port isn't getting created.  Those 
lines are the only ones that show up in neutron logs.

Here's what shows up in the nova logs:

Jan 23 14:09:21 vhost2 nova-compute[8936]: Traceback (most recent call last):
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 115, in wait
Jan 23 14:09:21 vhost2 nova-compute[8936]: listener.cb(fileno)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main
Jan 23 14:09:21 vhost2 nova-compute[8936]: result = function(*args, **kwargs)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/utils.py", line 1159, in context_wrapper
Jan 23 14:09:21 vhost2 nova-compute[8936]: return func(*args, **kwargs)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1587, in 
_allocate_network_async
Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(*exc_info)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1570, in 
_allocate_network_async
Jan 23 14:09:21 vhost2 nova-compute[8936]: bind_host_id=bind_host_id)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 685, in 
allocate_for_instance
Jan 23 14:09:21 vhost2 nova-compute[8936]: self._delete_ports(neutron, 
instance, created_port_ids)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Jan 23 14:09:21 vhost2 nova-compute[8936]: self.force_reraise()
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(self.type_, self.value, 
self.tb)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 674, in 
allocate_for_instance
Jan 23 14:09:21 vhost2 nova-compute[8936]: security_group_ids, available_macs, 
dhcp_opts)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 261, in 
_create_port
Jan 23 14:09:21 vhost2 nova-compute[8936]: raise 
exception.PortBindingFailed(port_id=port_id)
Jan 23 14:09:21 vhost2 nova-compute[8936]: PortBindingFailed: Binding failed 
for port e1058d22-9a7b-4988-9644-d0f476a01015, please check neutron logs for 
more information.
Jan 23 14:09:21 vhost2 nova-compute[8936]: Removing descriptor: 21



Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer 
Plus
peter.ki...@objectstream.com

Objectstream, Inc.
Office: 405-942-4477 / Fax: 866-814-0174
7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 2:21 PM, Trinath Somanchi 
> wrote:
The port doesn't exist at all.

Port e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int

Get Outlook for iOS


From: Peter Kirby 
>
Sent: Tuesday, January 24, 2017 1:43:36 AM

To: Trinath Somanchi
Cc: OpenStack
Subject: Re: [Openstack] Setting up another compute node

I just did another attempt at this so I'd have fresh logs.

These are all the lines produced in the neutron openvswitch-agent.log file when 
I attempt that previous command.

2017-01-23 14:09:20.918 8097 INFO neutron.agent.securitygroups_rpc 
[req-a9ab1e05-cf41-44ce-8762-d7f0f72e7ba3 582643be48c04603a09250a1be6e6cf3 
1dd7b6481aa34ef7ba105a7336845369 - - -] Security group member updated 
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:21.132 8097 INFO neutron.agent.securitygroups_rpc 
[req-b8cc3ab8-d4f3-4c96-820d-148ae6fd47af 582643be48c04603a09250a1be6e6cf3 
1dd7b6481aa34ef7ba105a7336845369 - - -] Security group member updated 
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:22.057 8097 INFO neutron.agent.common.ovs_lib 
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Port 
e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int
2017-01-23 14:09:22.058 8097 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] port_unbound(): net_uuid 

Re: [OpenStack-Infra] Unable to add myself to the python-redfish-core group I created

2017-01-23 Thread Jeremy Stanley
On 2017-01-23 15:53:59 -0500 (-0500), Paul Belanger wrote:
> On Mon, Jan 23, 2017 at 09:10:56PM +0100, Bruno Cornec wrote:
> > Hello,
> > 
> > I'm unable to add myself to the python-redfish-core group I created.
> > 
> > When using the Web interface at
> > https://review.openstack.org/#/admin/groups/99,members the
> > fields are greyed and I cannot follow the doc at
> > https://review.openstack.org/Documentation/access-control.html
> > to add myself to the group.
> > 
> > You cannot add yourself to a Gerrit group, so in the
> > example of trove-core, you need to ask the trove PTL for the
> > rights.
[...]

I believe he linked the trove-core group in error. I assumed he
meant https://review.openstack.org/#/admin/groups/1648,members
instead.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Unable to add myself to the python-redfish-core group I created

2017-01-23 Thread Paul Belanger
On Mon, Jan 23, 2017 at 09:10:56PM +0100, Bruno Cornec wrote:
> Hello,
> 
> I'm unable to add myself to the python-redfish-core group I created.
> 
> When using the Web interface at 
> https://review.openstack.org/#/admin/groups/99,members the fields are greyed 
> and I cannot follow the doc at 
> https://review.openstack.org/Documentation/access-control.html to add myself 
> to the group.
> 
You cannot add yourself to a Gerrit group, so in the example of
trove-core, you need to ask the trove PTL for the rights.

> I tried to use another method:
> 
> ssh -p 29418 bruno-cor...@review.openstack.org gerrit set-members 
> python-redfish-core --add bruno-cornec
> fatal: internal server error
> 
> So I think I need help from an admin to be able to modify that group.
> 
> I have the same issue with the other group python-redfish-release at 
> https://review.openstack.org/#/admin/groups/1649,members
> This is annoying as I cannot +2 our first patch 
> https://review.openstack.org/#/c/410852/7 since the integration of the 
> project!
> 
> Any help to solve this is welcome.
> Thanks in advance and best regards,
> Bruno.
> -- 

Adding members to the Gerrit group is currently a manual process [1]. I've gone
ahead and done that.

[1] 
http://docs.openstack.org/infra/manual/creators.html#update-the-gerrit-group-members



Re: [Openstack] Setting up another compute node

2017-01-23 Thread Peter Kirby
I agree.  But I can't figure out why the port isn't getting created.  Those
lines are the only ones that show up in neutron logs.

Here's what shows up in the nova logs:

Jan 23 14:09:21 vhost2 nova-compute[8936]: Traceback (most recent call
last):
Jan 23 14:09:21 vhost2 nova-compute[8936]: File
"/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 115, in wait
Jan 23 14:09:21 vhost2 nova-compute[8936]: listener.cb(fileno)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in
main
Jan 23 14:09:21 vhost2 nova-compute[8936]: result = function(*args,
**kwargs)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File
"/usr/lib/python2.7/site-packages/nova/utils.py", line 1159, in
context_wrapper
Jan 23 14:09:21 vhost2 nova-compute[8936]: return func(*args, **kwargs)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1587, in
_allocate_network_async
Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(*exc_info)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1570, in
_allocate_network_async
Jan 23 14:09:21 vhost2 nova-compute[8936]: bind_host_id=bind_host_id)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 685,
in allocate_for_instance
Jan 23 14:09:21 vhost2 nova-compute[8936]: self._delete_ports(neutron,
instance, created_port_ids)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
__exit__
Jan 23 14:09:21 vhost2 nova-compute[8936]: self.force_reraise()
Jan 23 14:09:21 vhost2 nova-compute[8936]: File
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
force_reraise
Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(self.type_,
self.value, self.tb)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 674,
in allocate_for_instance
Jan 23 14:09:21 vhost2 nova-compute[8936]: security_group_ids,
available_macs, dhcp_opts)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 261,
in _create_port
Jan 23 14:09:21 vhost2 nova-compute[8936]: raise
exception.PortBindingFailed(port_id=port_id)
Jan 23 14:09:21 vhost2 nova-compute[8936]: PortBindingFailed: Binding
failed for port e1058d22-9a7b-4988-9644-d0f476a01015, please check neutron
logs for more information.
Jan 23 14:09:21 vhost2 nova-compute[8936]: Removing descriptor: 21


Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer Plus

peter.ki...@objectstream.com
*Objectstream, Inc. *
Office: 405-942-4477 / Fax: 866-814-0174
7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 2:21 PM, Trinath Somanchi 
wrote:

> The port doesn't exist at all.
>
> Port e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int
>
> Get Outlook for iOS 
>
> --
> *From:* Peter Kirby 
> *Sent:* Tuesday, January 24, 2017 1:43:36 AM
>
> *To:* Trinath Somanchi
> *Cc:* OpenStack
> *Subject:* Re: [Openstack] Setting up another compute node
>
> I just did another attempt at this so I'd have fresh logs.
>
> These are all the lines produced in the neutron openvswitch-agent.log file
> when I attempt that previous command.
>
> 2017-01-23 14:09:20.918 8097 INFO neutron.agent.securitygroups_rpc
> [req-a9ab1e05-cf41-44ce-8762-d7f0f72e7ba3 582643be48c04603a09250a1be6e6cf3
> 1dd7b6481aa34ef7ba105a7336845369 - - -] Security group member updated
> [u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
> 2017-01-23 14:09:21.132 8097 INFO neutron.agent.securitygroups_rpc
> [req-b8cc3ab8-d4f3-4c96-820d-148ae6fd47af 582643be48c04603a09250a1be6e6cf3
> 1dd7b6481aa34ef7ba105a7336845369 - - -] Security group member updated
> [u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
> 2017-01-23 14:09:22.057 8097 INFO neutron.agent.common.ovs_lib
> [req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Port
> e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int
> 2017-01-23 14:09:22.058 8097 INFO neutron.plugins.ml2.drivers.
> openvswitch.agent.ovs_neutron_agent [req-d4d61032-5071-4792-a2a1-3d645d44ccfa
> - - - - -] port_unbound(): net_uuid None not in local_vlan_map
> 2017-01-23 14:09:22.059 8097 INFO neutron.agent.securitygroups_rpc
> [req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Remove device filter
> for [u'e1058d22-9a7b-4988-9644-d0f476a01015']
>
>
> When I attempt to check the status of the port mentioned there, it doesn't
> exist on either compute node.
>
> (neutron) port-show e1058d22-9a7b-4988-9644-d0f476a01015
> Unable to find port with name or id 

Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-23 Thread Jason Rist
On 01/23/2017 12:03 PM, Emilien Macchi wrote:
> Greeting folks,
>
> I would like to propose some changes in our core members:
>
> - Remove Jay Dobies who has not been active in TripleO for a while
> (thanks Jay for your hard work!).
> - Add Flavio Percoco core on tripleo-common and tripleo-heat-templates
> docker bits.
> - Add Steve Backer on os-collect-config and also docker bits in
> tripleo-common and tripleo-heat-templates.
>
> Indeed, both Flavio and Steve have been involved in deploying TripleO
> in containers, their contributions are very valuable. I would like to
> encourage them to keep doing more reviews in and out container bits.
>
> As usual, core members are welcome to vote on the changes.
>
> Thanks,
>
+1 - Related - can we get a review of some of the other 'sub teams' within 
TripleO, for instance UI?  We've had 2 core reviewers for a long time, and it 
would help to have one or two more.

-J
-- 
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
Freenode: jrist
github/twitter: knowncitizen



Re: [openstack-dev] [Neutron] PTL Candidacy

2017-01-23 Thread Mike Perez
On 18:35 Jan 22, Kevin Benton wrote:
> I would like to propose my candidacy for the Neutron PTL.
> 
> I have been contributing to Neutron since the Havana development
> cycle working for a network vendor and then a distribution vendor.
> I have been a core reviewer since the Kilo development cycle and
> I am on the Neutron stable maintenance team as well as the drivers
> team.
> 
> I have a few priorities that I would focus on as PTL:

Do you have any thoughts/plans with plugin validation? [1][2][3]

[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110151.html
[2] - https://review.openstack.org/#/c/391594/
[3] - https://etherpad.openstack.org/p/driverlog-validation

-- 
Mike Perez




Re: [Openstack] Setting up another compute node

2017-01-23 Thread Peter Kirby
I just did another attempt at this so I'd have fresh logs.

These are all the lines produced in the neutron openvswitch-agent.log file
when I attempt that previous command.

2017-01-23 14:09:20.918 8097 INFO neutron.agent.securitygroups_rpc
[req-a9ab1e05-cf41-44ce-8762-d7f0f72e7ba3 582643be48c04603a09250a1be6e6cf3
1dd7b6481aa34ef7ba105a7336845369 - - -] Security group member updated
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:21.132 8097 INFO neutron.agent.securitygroups_rpc
[req-b8cc3ab8-d4f3-4c96-820d-148ae6fd47af 582643be48c04603a09250a1be6e6cf3
1dd7b6481aa34ef7ba105a7336845369 - - -] Security group member updated
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:22.057 8097 INFO neutron.agent.common.ovs_lib
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Port
e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int
2017-01-23 14:09:22.058 8097 INFO
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] port_unbound():
net_uuid None not in local_vlan_map
2017-01-23 14:09:22.059 8097 INFO neutron.agent.securitygroups_rpc
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Remove device filter
for [u'e1058d22-9a7b-4988-9644-d0f476a01015']


When I attempt to check the status of the port mentioned there, it doesn't
exist on either compute node.

(neutron) port-show e1058d22-9a7b-4988-9644-d0f476a01015
Unable to find port with name or id 'e1058d22-9a7b-4988-9644-d0f476a01015'
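When chasing a failure like this across several nodes, it can help to pull the offending port IDs out of the agent logs mechanically before cross-checking them with `port-show`. The helper below is only a sketch (it is not part of neutron); it extracts the UUIDs of ports the OVS agent reported missing from br-int:

```python
import re

# Sketch only -- not part of neutron. Extracts the UUIDs of ports that the
# OVS agent logged as "not present in bridge br-int", so they can be
# cross-checked against `neutron port-show` on each node.
PORT_MISSING = re.compile(
    r"Port (?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}) "
    r"not present in bridge br-int")

def missing_ports(log_text):
    # Deduplicate and sort, since the same port may be logged repeatedly.
    return sorted({m.group("uuid") for m in PORT_MISSING.finditer(log_text)})

sample = ("2017-01-23 14:09:22.057 8097 INFO neutron.agent.common.ovs_lib "
          "[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Port "
          "e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int")
print(missing_ports(sample))  # ['e1058d22-9a7b-4988-9644-d0f476a01015']
```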


Thank you very much for your input.



Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer Plus

peter.ki...@objectstream.com
*Objectstream, Inc. *
Office: 405-942-4477 / Fax: 866-814-0174
7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 1:48 PM, Trinath Somanchi 
wrote:

> This is the error
>
>  port_unbound(): net_uuid None not in local_vlan_map
>
>
> Get Outlook for iOS 
>
> --
> *From:* Peter Kirby 
> *Sent:* Tuesday, January 24, 2017 12:45:01 AM
> *To:* Trinath Somanchi
> *Cc:* OpenStack
> *Subject:* Re: [Openstack] Setting up another compute node
>
> I am using the CLI so I can force the VM to create on the new host.  Using
> this command:
>
> openstack server create \
>   --image db090568-9500-4092-ac23-364f25940b2f \
>   --flavor m1.small \
>   --availability-zone nova:vhost2 \
>   --nic net-id=ce7f1bf3-b6b3-45c3-8251-2cbcdc9d4595 \
>   temptest
>
> If I do not specify vhost2, this command does successfully create the VM
> on vhost1.
>
>
> Peter Kirby / Infrastructure and Build Engineer
> Magento Certified Developer Plus
> 
> peter.ki...@objectstream.com
> *Objectstream, Inc. *
> Office: 405-942-4477 / Fax: 866-814-0174
> 7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
> http://www.objectstream.com/
>
> On Mon, Jan 23, 2017 at 12:27 PM, Trinath Somanchi <
> trinath.soman...@nxp.com> wrote:
>
>> Can you post how you spawned the VM? I guess the network is not added.
>>
>>
>> /Trinath
>> --
>> *From:* Peter Kirby 
>> *Sent:* Monday, January 23, 2017 9:22:10 PM
>> *To:* OpenStack
>> *Subject:* [Openstack] Setting up another compute node
>>
>> Hi,
>>
>> I'm currently running OpenStack Mitaka on CentOS 7.2 and I'm trying to
>> setup another compute node.
>>
>> I have nova installed and running and the following neutron packages:
>> openstack-neutron.noarch  1:8.3.0-1.el7
>> @openstack-mitaka
>> openstack-neutron-common.noarch   1:8.3.0-1.el7
>> @openstack-mitaka
>> openstack-neutron-ml2.noarch  1:8.3.0-1.el7
>> @openstack-mitaka
>> openstack-neutron-openvswitch.noarch  1:8.3.0-1.el7
>> @openstack-mitaka
>> python-neutron.noarch 1:8.3.0-1.el7
>> @openstack-mitaka
>> python-neutron-lib.noarch 0.0.3-1.el7
>> @openstack-mitaka
>> python2-neutronclient.noarch  4.1.2-1.el7
>> @openstack-mitaka
>>
>> The neutron-openvswitch-agent is up and running and I can see it and nova
>> from the OpenStack commandline.  Neutron agent-list says the new host has
>> the openvswitch agent and it's alive.
>>
>> However, when I try to deploy an instance to this new host, I get the
>> following error and the instance fails to deploy:
>>
>> 2017-01-20 10:51:21.132 24644 INFO neutron.agent.common.ovs_lib
>> [req-2be33822-4a69-4521-9267-a81315b20b6b - - - - -] Port
>> 67b72a38-c553-4f06-953c-92f43d5dea60 not present in bridge br-int
>> 2017-01-20 10:51:21.133 24644 INFO neutron.plugins.ml2.drivers.op
>> envswitch.agent.ovs_neutron_agent [req-2be33822-4a69-4521-9267-a81315b20b6b
>> - - - - -] port_unbound(): net_uuid None not in local_vlan_map
>>
>> Here is the output from ovs-vsctl show:
>> 

Re: [Openstack] Setting up another compute node

2017-01-23 Thread Trinath Somanchi
The port doesn't exist at all.

Port e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int

Get Outlook for iOS


From: Peter Kirby 
Sent: Tuesday, January 24, 2017 1:43:36 AM
To: Trinath Somanchi
Cc: OpenStack
Subject: Re: [Openstack] Setting up another compute node

I just did another attempt at this so I'd have fresh logs.

These are all the lines produced in the neutron openvswitch-agent.log file when 
I attempt that previous command.

2017-01-23 14:09:20.918 8097 INFO neutron.agent.securitygroups_rpc 
[req-a9ab1e05-cf41-44ce-8762-d7f0f72e7ba3 582643be48c04603a09250a1be6e6cf3 
1dd7b6481aa34ef7ba105a7336845369 - - -] Security group member updated 
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:21.132 8097 INFO neutron.agent.securitygroups_rpc 
[req-b8cc3ab8-d4f3-4c96-820d-148ae6fd47af 582643be48c04603a09250a1be6e6cf3 
1dd7b6481aa34ef7ba105a7336845369 - - -] Security group member updated 
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:22.057 8097 INFO neutron.agent.common.ovs_lib 
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Port 
e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int
2017-01-23 14:09:22.058 8097 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] port_unbound(): net_uuid 
None not in local_vlan_map
2017-01-23 14:09:22.059 8097 INFO neutron.agent.securitygroups_rpc 
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Remove device filter for 
[u'e1058d22-9a7b-4988-9644-d0f476a01015']


When I attempt to check the status of the port mentioned there, it doesn't 
exist on either compute node.

(neutron) port-show e1058d22-9a7b-4988-9644-d0f476a01015
Unable to find port with name or id 'e1058d22-9a7b-4988-9644-d0f476a01015'


Thank you very much for your input.




Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer 
Plus
peter.ki...@objectstream.com

Objectstream, Inc.
Office: 405-942-4477 / Fax: 866-814-0174
7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 1:48 PM, Trinath Somanchi 
> wrote:
This is the error

 port_unbound(): net_uuid None not in local_vlan_map


Get Outlook for iOS


From: Peter Kirby 
>
Sent: Tuesday, January 24, 2017 12:45:01 AM
To: Trinath Somanchi
Cc: OpenStack
Subject: Re: [Openstack] Setting up another compute node

I am using the CLI so I can force the VM to create on the new host.  Using this 
command:

openstack server create \
  --image db090568-9500-4092-ac23-364f25940b2f \
  --flavor m1.small \
  --availability-zone nova:vhost2 \
  --nic net-id=ce7f1bf3-b6b3-45c3-8251-2cbcdc9d4595 \
  temptest

If I do not specify vhost2, this command does successfully create the VM on 
vhost1.



Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer 
Plus
peter.ki...@objectstream.com

Objectstream, Inc.
Office: 405-942-4477 / Fax: 866-814-0174
7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 12:27 PM, Trinath Somanchi 
> wrote:

Can you post how you spawned the VM? I guess the network is not added.


/Trinath


From: Peter Kirby 
>
Sent: Monday, January 23, 2017 9:22:10 PM
To: OpenStack
Subject: [Openstack] Setting up another compute node

Hi,

I'm currently running OpenStack Mitaka on CentOS 7.2 and I'm trying to setup 
another compute node.

I have nova installed and running and the following neutron packages:
openstack-neutron.noarch  1:8.3.0-1.el7@openstack-mitaka
openstack-neutron-common.noarch   1:8.3.0-1.el7@openstack-mitaka
openstack-neutron-ml2.noarch  1:8.3.0-1.el7@openstack-mitaka
openstack-neutron-openvswitch.noarch  1:8.3.0-1.el7@openstack-mitaka
python-neutron.noarch 1:8.3.0-1.el7@openstack-mitaka
python-neutron-lib.noarch 0.0.3-1.el7  @openstack-mitaka
python2-neutronclient.noarch  4.1.2-1.el7  @openstack-mitaka

The neutron-openvswitch-agent is up and running and I can see it and nova from 
the OpenStack commandline.  Neutron agent-list says the new host has the 
openvswitch agent and it's alive.

However, when I try to deploy an instance to this new host, I get the following 
error and the instance

[OpenStack-Infra] Unable to add myself to the python-redfish-core group I created

2017-01-23 Thread Bruno Cornec

Hello,

I'm unable to add myself to the python-redfish-core group I created.

When using the Web interface at 
https://review.openstack.org/#/admin/groups/99,members the fields are greyed 
and I cannot follow the doc at 
https://review.openstack.org/Documentation/access-control.html to add myself to 
the group.

I tried to use another method:

ssh -p 29418 bruno-cor...@review.openstack.org gerrit set-members 
python-redfish-core --add bruno-cornec
fatal: internal server error

So I think I need help from an admin to be able to modify that group.

I have the same issue with the other group python-redfish-release at 
https://review.openstack.org/#/admin/groups/1649,members
This is annoying as I cannot +2 our first patch 
https://review.openstack.org/#/c/410852/7 since the integration of the project!

Any help to solve this is welcome.
Thanks in advance and best regards,
Bruno.
--
HPE EMEA EG FLOSS Technology Strategist http://www.hpe.com/engage/opensource
Open Source Profession, WW Linux Community Lead http://github.com/bcornec
FLOSS projects: http://mondorescue.org http://project-builder.org

Musique ancienne?   http://www.musique-ancienne.org  http://www.medieval.org



Re: [Openstack-operators] Encrypted Cinder Volume Deployment

2017-01-23 Thread Joe Topjian
Hi Kris,

I came across that as well, and I believe it has been fixed in a way that
ensures existing volumes remain accessible:

https://github.com/openstack/nova/blob/8c3f775743914fe083371a31433ef5563015b029/releasenotes/notes/bug-1633518-0646722faac1a4b9.yaml

Definitely worthwhile to bring up :)

Joe

On Mon, Jan 23, 2017 at 12:53 PM, Kris G. Lindgren 
wrote:

> Slightly off topic,
>
>
>
> But I remember a discussion involving encrypted volumes and nova(?) where
> there was an issue/bug where nova was using the wrong key –
> like it got hashed wrong and nova was using the badly hashed key/password
> versus what was configured.
>
>
>
>
>
> ___
>
> Kris Lindgren
>
> Senior Linux Systems Engineer
>
> GoDaddy
>
>
>
> *From: *Joe Topjian 
> *Date: *Monday, January 23, 2017 at 12:41 PM
> *To: *"openstack-operators@lists.openstack.org" <
> openstack-operators@lists.openstack.org>
> *Subject: *[Openstack-operators] Encrypted Cinder Volume Deployment
>
>
>
> Hi all,
>
>
>
> I'm investigating the options for configuring Cinder with encrypted
> volumes and have a few questions.
>
>
>
> The Cinder environment is currently running Kilo which will be upgraded to
> something between M-O later this year. The Kilo release supports the
> fixed_key setting. I see fixed_key is still supported, but has been
> abstracted into Castellan.
>
>
>
> Question: If I configure Kilo with a fixed key, will existing volumes
> still be able to work with that same fixed key in an M, N, O release?
>
>
>
> Next, fixed_key is discouraged because of it being a single key for all
> tenants. My understanding is that Barbican provides a way for each tenant
> to generate their own key.
>
>
>
> Question: If I deploy with fixed_key (either now or in a later release),
> can I move from a master key to Barbican without bricking all existing
> volumes?
>
>
>
> Are there any other issues to be aware of? I've done a bunch of Googling
> and searching on bugs.launchpad.net and am pretty satisfied with the
> current state of support. My intention is to provide users with simple
> native encrypted volume support - not so much supporting uploaded volumes,
> bootable volumes, etc.
>
>
>
> But what I want to make sure of is that I'm not in a position where in
> order to upgrade, a bunch of volumes become irrecoverable.
>
>
>
> Thanks,
>
> Joe
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] Setting up another compute node

2017-01-23 Thread Trinath Somanchi
This is the error

 port_unbound(): net_uuid None not in local_vlan_map


Get Outlook for iOS


From: Peter Kirby 
Sent: Tuesday, January 24, 2017 12:45:01 AM
To: Trinath Somanchi
Cc: OpenStack
Subject: Re: [Openstack] Setting up another compute node

I am using the CLI so I can force the VM to be created on the new host, using
this command:

openstack server create \
  --image db090568-9500-4092-ac23-364f25940b2f \
  --flavor m1.small \
  --availability-zone nova:vhost2 \
  --nic net-id=ce7f1bf3-b6b3-45c3-8251-2cbcdc9d4595 \
  temptest

If I do not specify vhost2, this command does successfully create the VM on 
vhost1.



Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer 
Plus
peter.ki...@objectstream.com

Objectstream, Inc.
Office: 405-942-4477 / Fax: 866-814-0174
7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 12:27 PM, Trinath Somanchi 
> wrote:

Can you post how you spawned the VM? I guess the network is not added.


/Trinath


From: Peter Kirby 
>
Sent: Monday, January 23, 2017 9:22:10 PM
To: OpenStack
Subject: [Openstack] Setting up another compute node

Hi,

I'm currently running OpenStack Mitaka on CentOS 7.2 and I'm trying to set up 
another compute node.

I have nova installed and running and the following neutron packages:
openstack-neutron.noarch  1:8.3.0-1.el7@openstack-mitaka
openstack-neutron-common.noarch   1:8.3.0-1.el7@openstack-mitaka
openstack-neutron-ml2.noarch  1:8.3.0-1.el7@openstack-mitaka
openstack-neutron-openvswitch.noarch  1:8.3.0-1.el7@openstack-mitaka
python-neutron.noarch 1:8.3.0-1.el7@openstack-mitaka
python-neutron-lib.noarch 0.0.3-1.el7  @openstack-mitaka
python2-neutronclient.noarch  4.1.2-1.el7  @openstack-mitaka

The neutron-openvswitch-agent is up and running and I can see it and nova from 
the OpenStack commandline.  Neutron agent-list says the new host has the 
openvswitch agent and it's alive.

However, when I try to deploy an instance to this new host, I get the following 
error and the instance fails to deploy:

2017-01-20 10:51:21.132 24644 INFO neutron.agent.common.ovs_lib 
[req-2be33822-4a69-4521-9267-a81315b20b6b - - - - -] Port 
67b72a38-c553-4f06-953c-92f43d5dea60 not present in bridge br-int
2017-01-20 10:51:21.133 24644 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-2be33822-4a69-4521-9267-a81315b20b6b - - - - -] port_unbound(): net_uuid 
None not in local_vlan_map

Here is the output from ovs-vsctl show:
2e5497fc-6f3a-4761-a99b-d4e95d0614f7
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port "eno1"
Interface "eno1"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.5.0"

I suspect I'm missing one small step, but I've been searching Google and logs 
for days now and I can't seem to nail down the problem.  Does anyone have any 
suggestions where I should look next?
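
One detail that stands out in the ovs-vsctl output above: there are no patch
ports (normally named int-br-ex/phy-br-ex) wiring br-int to br-ex, which the
OVS agent creates when bridge_mappings is configured. A minimal sketch of the
agent config to compare against the working vhost1 — the physical network
name "provider" here is an assumption and must match whatever physical_network
the network was created with:

# /etc/neutron/plugins/ml2/openvswitch_agent.ini (new compute node)
[ovs]
# "provider" is a placeholder; it must match the physical_network of
# network ce7f1bf3-... (check the config on the working compute node)
bridge_mappings = provider:br-ex

Restart neutron-openvswitch-agent after changing it and the patch ports
should appear in ovs-vsctl show.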

Thank you.




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-01-23 Thread Loo, Ruby
Hi,

We are jittery to present this week's priorities and subteam report for Ironic. 
As usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. nova patch for soft power/reboot: https://review.openstack.org/#/c/407977/
2. ironicclient queue: 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironicclient
3. ironic-inspector-client queue: 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironic-inspector-client
4. Continue reviewing driver composition things (see notes below, some of the 
WIP patches are ready other than docs/reno): 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1524745
5. Node tags: https://review.openstack.org/#/q/topic:bug/1526266


Bugs (dtantsur)
===
- Stats (diff between 16 Jan 2017 and 23 Jan 2017)
- Ironic: 227 bugs (-3) + 237 wishlist items (-1). 19 new, 190 in progress 
(-1), 0 critical, 28 high (-1) and 31 incomplete (+1)
- Inspector: 11 bugs (-1) + 24 wishlist items (+1). 0 new, 16 in progress (+2), 
0 critical, 3 high (+1) and 4 incomplete (-1)
- Nova bugs with Ironic tag: 10. 0 new, 0 critical, 0 high

Portgroups support (sambetts, vdrok)

* trello: https://trello.com/c/KvVjeK5j/29-portgroups-support
- status as of most recent weekly meeting:
- everything is done except for tempest tests and documentation (these 
still need to be written)

Interface attach/detach API (sambetts)
==
* trello: https://trello.com/c/nryU4w58/39-interface-attach-detach-api
- status as of most recent weekly meeting:
done

CI refactoring (dtantsur, lucasagomes)
==
* trello: https://trello.com/c/c96zb3dm/32-ci-refactoring
- status as of most recent weekly meeting:
- Two more patches to go to add support for deploying UEFI images with 
Ironic in devstack: 1) https://review.openstack.org/#/c/414604/ (DevStack) 2) 
https://review.openstack.org/#/c/374988/ BOTH MERGED
- focus (lucasagomes) is to get UEFI testing in gate. More patches needed 
for this.

Rolling upgrades and grenade-partial (rloo, jlvillal)
=
* trello: 
https://trello.com/c/GAlhSzLm/2-rolling-upgrades-and-grenade-with-multi-node
- status as of most recent weekly meeting:
- leaning towards moving this to Pike.
- patches need reviews: https://review.openstack.org/#/q/topic:bug/1526283.
- concerns about https://review.openstack.org/#/c/420728/ (Add 
compatibility with Newton when creating a node)
- had irc discussion about status: 
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2017-01-23.log.html#t2017-01-23T16:17:41
- Testing work:
- Great progress this last week! Able to fix issue that had blocked us 
for several weeks in the multi-tenant grenade job!
- Tempest smoke is now working for the multi-tenant grenade job during 
the initial pre-grenade run.
- The grenade portion passes for the multi-tenant grenade job
- 
http://logs.openstack.org/49/422149/5/experimental/gate-grenade-dsvm-ironic-multitenant-ubuntu-xenial-nv/74c9ed9/logs/grenade.sh.summary.txt.gz
- The final tempest "smoke" test is failing after the grenade run in 
the multi-tenant grenade job.
- 
http://logs.openstack.org/49/422149/5/experimental/gate-grenade-dsvm-ironic-multitenant-ubuntu-xenial-nv/74c9ed9/console.html
- Testing being done in: https://review.openstack.org/#/c/422149/
- This needs multi-node testing, and multi-node has a very low 
probability of working in Ocata

Generic boot-from-volume (TheJulia)
===
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- status as of most recent weekly meeting:
- API side changes for volume connector information have a procedural -2 
until we can begin making use of the data in the conductor, but should still be 
reviewed
- https://review.openstack.org/#/c/214586/
- This change has been rebased on top of the iPXE template update 
revision to support cinder/iscsi booting.
- Boot from volume/storage cinder interface is up for review
- Last patch set for cinder common client interface was reverted in a 
rebase.  TheJulia expects to address this Monday afternoon.
- 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691
- Original volume connection information client patches
- They need OSC support added into the revisions.
- These changes should be expected to land once Pike opens.
- 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironicclient+branch:master+topic:bug/1526231

Driver composition (dtantsur, jroll)

Re: [Openstack-operators] Encrypted Cinder Volume Deployment

2017-01-23 Thread Kris G. Lindgren
Slightly off topic,

But I remember a discussion involving encrypted volumes and nova(?) where 
there was an issue/bug in which nova was using the wrong key – it got hashed 
incorrectly and nova was using the badly hashed key/password versus what was 
configured.


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Joe Topjian 
Date: Monday, January 23, 2017 at 12:41 PM
To: "openstack-operators@lists.openstack.org" 

Subject: [Openstack-operators] Encrypted Cinder Volume Deployment

Hi all,

I'm investigating the options for configuring Cinder with encrypted volumes and 
have a few questions.

The Cinder environment is currently running Kilo which will be upgraded to 
something between M-O later this year. The Kilo release supports the fixed_key 
setting. I see fixed_key is still supported, but has been abstracted into 
Castellan.

Question: If I configure Kilo with a fixed key, will existing volumes still be 
able to work with that same fixed key in an M, N, O release?

Next, fixed_key is discouraged because of it being a single key for all 
tenants. My understanding is that Barbican provides a way for each tenant to 
generate their own key.

Question: If I deploy with fixed_key (either now or in a later release), can I 
move from a master key to Barbican without bricking all existing volumes?

Are there any other issues to be aware of? I've done a bunch of Googling and 
searching on bugs.launchpad.net and am pretty 
satisfied with the current state of support. My intention is to provide users 
with simple native encrypted volume support - not so much supporting uploaded 
volumes, bootable volumes, etc.

But what I want to make sure of is that I'm not in a position where in order to 
upgrade, a bunch of volumes become irrecoverable.

Thanks,
Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Encrypted Cinder Volume Deployment

2017-01-23 Thread Joe Topjian
Hi all,

I'm investigating the options for configuring Cinder with encrypted volumes
and have a few questions.

The Cinder environment is currently running Kilo which will be upgraded to
something between M-O later this year. The Kilo release supports the
fixed_key setting. I see fixed_key is still supported, but has been
abstracted into Castellan.
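
For context, the Kilo-era configuration being described is roughly the
following (a sketch; the key value is a placeholder, and the section was
renamed [key_manager] in the releases that moved to Castellan):

# cinder.conf (and nova.conf on computes) -- Kilo-era fixed-key setup
[keymgr]
# 64 hex digits; placeholder value, generate your own and keep it safe
fixed_key = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef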

Question: If I configure Kilo with a fixed key, will existing volumes still
be able to work with that same fixed key in an M, N, O release?

Next, fixed_key is discouraged because of it being a single key for all
tenants. My understanding is that Barbican provides a way for each tenant
to generate their own key.

Question: If I deploy with fixed_key (either now or in a later release),
can I move from a master key to Barbican without bricking all existing
volumes?

Are there any other issues to be aware of? I've done a bunch of Googling
and searching on bugs.launchpad.net and am pretty satisfied with the
current state of support. My intention is to provide users with simple
native encrypted volume support - not so much supporting uploaded volumes,
bootable volumes, etc.

But what I want to make sure of is that I'm not in a position where in
order to upgrade, a bunch of volumes become irrecoverable.

Thanks,
Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] Setting up another compute node

2017-01-23 Thread Peter Kirby
I am using the CLI so I can force the VM to be created on the new host, using
this command:

openstack server create \
  --image db090568-9500-4092-ac23-364f25940b2f \
  --flavor m1.small \
  --availability-zone nova:vhost2 \
  --nic net-id=ce7f1bf3-b6b3-45c3-8251-2cbcdc9d4595 \
  temptest

If I do not specify vhost2, this command does successfully create the VM on
vhost1.


Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer Plus

peter.ki...@objectstream.com
*Objectstream, Inc. *
Office: 405-942-4477 / Fax: 866-814-0174
7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 12:27 PM, Trinath Somanchi  wrote:

> Can you post how you spawned the VM? I guess the network is not added.
>
>
> /Trinath
> --
> *From:* Peter Kirby 
> *Sent:* Monday, January 23, 2017 9:22:10 PM
> *To:* OpenStack
> *Subject:* [Openstack] Setting up another compute node
>
> Hi,
>
> I'm currently running OpenStack Mitaka on CentOS 7.2 and I'm trying to
> set up another compute node.
>
> I have nova installed and running and the following neutron packages:
> openstack-neutron.noarch  1:8.3.0-1.el7
> @openstack-mitaka
> openstack-neutron-common.noarch   1:8.3.0-1.el7
> @openstack-mitaka
> openstack-neutron-ml2.noarch  1:8.3.0-1.el7
> @openstack-mitaka
> openstack-neutron-openvswitch.noarch  1:8.3.0-1.el7
> @openstack-mitaka
> python-neutron.noarch 1:8.3.0-1.el7
> @openstack-mitaka
> python-neutron-lib.noarch 0.0.3-1.el7
> @openstack-mitaka
> python2-neutronclient.noarch  4.1.2-1.el7
> @openstack-mitaka
>
> The neutron-openvswitch-agent is up and running and I can see it and nova
> from the OpenStack commandline.  Neutron agent-list says the new host has
> the openvswitch agent and it's alive.
>
> However, when I try to deploy an instance to this new host, I get the
> following error and the instance fails to deploy:
>
> 2017-01-20 10:51:21.132 24644 INFO neutron.agent.common.ovs_lib
> [req-2be33822-4a69-4521-9267-a81315b20b6b - - - - -] Port
> 67b72a38-c553-4f06-953c-92f43d5dea60 not present in bridge br-int
> 2017-01-20 10:51:21.133 24644 INFO neutron.plugins.ml2.drivers.
> openvswitch.agent.ovs_neutron_agent [req-2be33822-4a69-4521-9267-a81315b20b6b
> - - - - -] port_unbound(): net_uuid None not in local_vlan_map
>
> Here is the output from ovs-vsctl show:
> 2e5497fc-6f3a-4761-a99b-d4e95d0614f7
> Bridge br-int
> fail_mode: secure
> Port br-int
> Interface br-int
> type: internal
> Bridge br-ex
> Port "eno1"
> Interface "eno1"
> Port br-ex
> Interface br-ex
> type: internal
> ovs_version: "2.5.0"
>
> I suspect I'm missing one small step, but I've been searching Google and
> logs for days now and I can't seem to nail down the problem.  Does anyone
> have any suggestions where I should look next?
>
> Thank you.
>
>
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [tripleo] Update TripleO core members

2017-01-23 Thread Emilien Macchi
Greeting folks,

I would like to propose some changes in our core members:

- Remove Jay Dobies who has not been active in TripleO for a while
(thanks Jay for your hard work!).
- Add Flavio Percoco core on tripleo-common and tripleo-heat-templates
docker bits.
- Add Steve Baker on os-collect-config and also docker bits in
tripleo-common and tripleo-heat-templates.

Indeed, both Flavio and Steve have been involved in deploying TripleO
in containers, and their contributions are very valuable. I would like to
encourage them to keep doing more reviews in and outside of the container
bits.

As usual, core members are welcome to vote on the changes.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][swg] Sessions for Atlanta PTG - writing a TC Vision/Other options

2017-01-23 Thread Colette Alexander
Hello Stackers,

As we move into the last four weeks of work before the PTG in Atlanta, I
wanted to check in to talk about what the Stewardship Working Group has
planned and what we're looking to accomplish during our one day (Monday) at
the gathering.

Currently, discussion among SWG members has focused on a few things:

1) Assisting the TC with writing a vision
2) Assessing our previous work list [0] and prioritizing future work for
the SWG based on the community's interest in that list
3) Creating space for drop-in/community feedback on what we should work on
next, and how it should be prioritized.

1) The TC vision is on hold and won't happen at the PTG. Since so many
contributors would have been in other cross-project sessions throughout
Monday, we thought it would be wiser to hold off and run a facilitated
visioning session at another time, when the TC can fully focus on it for a
day. We're still working on timing for that, but I will keep everyone
posted once I know what we've settled on.

2 and 3 are listed on our PTG etherpad [1], with 2 currently scheduled for
the morning and 3 planned as general availability for drop-in feedback in
the afternoon (we assume more people will have more sessions and less
flexible schedules at that point, so drop-ins will be the easiest way for
anyone interested in what we do to stop by and participate).

I'd love feedback on that, scheduling wise, from any folks interested in
participating. I'd also love to hear any other ideas about what we could
cover during our day that might be useful.

I spent quite a bit of our cross-project session at the Ocata Summit doing
a quick recap of the concept of Servant Leadership, and it seemed like
plenty of attendees appreciated that. Would a series of quick recaps of
basic leadership concepts (Servant Leadership, Visioning, Principles &
Culture, and Change Management) be useful? If anyone is interested in
having a small discussion covering some of those topics, I'd love to hear
from you!

Thanks so much, everyone - can't wait to see you all in Atlanta!

-colette/gothicmindfood


[0] https://etherpad.openstack.org/p/swg-short-list-deliverables
[1] https://etherpad.openstack.org/p/AtlantaPTG-SWG
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All projects that use Alembic] Absence of pk on alembic_version table

2017-01-23 Thread Ihar Hrachyshka
An alternative, for Newton and earlier, could be to publish a release
note saying that operators should not run the code against ENFORCING
Galera mode. What are the reasons to enable that mode in an OpenStack
deployment that would not allow operators to live without it for another
cycle?

Ihar

On Mon, Jan 23, 2017 at 10:12 AM, Anna Taraday
 wrote:
> Hello everyone!
>
> Folks on our team hit an issue when they tried to run alembic migrations on
> Galera with ENFORCING mode. [1]
>
> This was an issue with Alembic [2], which was quickly fixed by Mike Bayer
> (many thanks!), and a new version of alembic was released [3].
> The global requirements are updated [4].
>
> I think it is desirable to fix this for Newton at least. We cannot bump
> requirements for Newton, so a hot fix could be putting a primary key on this
> table in the first migration, as proposed in [5].  Any other ideas?
>
> [1] - https://bugs.launchpad.net/neutron/+bug/1655610
> [2] - https://bitbucket.org/zzzeek/alembic/issues/406
> [3] - http://alembic.zzzcomputing.com/en/latest/changelog.html#change-0.8.10
> [4] - https://review.openstack.org/#/c/423118/
> [5] - https://review.openstack.org/#/c/419320/
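
For reference, the shape of the fix: alembic 0.8.10 now creates
alembic_version with a primary key on version_num, which is what the proposed
first-migration hot fix replicates. A sketch of the resulting DDL, shown on
SQLite purely for illustration (the constraint name is an assumption):

```python
import sqlite3

# alembic_version with the primary key that PK-enforcing Galera
# modes require; older alembic releases created it without one.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE alembic_version ("
    "  version_num VARCHAR(32) NOT NULL,"
    "  CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num))")
conn.execute("INSERT INTO alembic_version VALUES ('abc123def456')")

# With the primary key in place, a duplicate version row is rejected.
try:
    conn.execute("INSERT INTO alembic_version VALUES ('abc123def456')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True
```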
>
>
> --
> Regards,
> Ann Taraday
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Announcing my PTL candidacy for Pike

2017-01-23 Thread Davanum Srinivas
+1 to "mentoring people that are newer to Nova but are stepping
into leadership positions" Matt.

Thanks,
Dims

On Mon, Jan 23, 2017 at 1:54 PM, Matt Riedemann  wrote:
> Hi everyone,
>
> This is my self-nomination to continue running as Nova PTL for the Pike
> cycle.
>
> If elected, this would be a third term for me as Nova PTL. In Ocata I
> thought that I did a better job of keeping on top of a broader set of
> efforts than I was able to in Newton, including several non-priority
> vendor-specific blueprints.
>
> I have also tried to make regular communication a priority. The topics vary,
> but in general I try to keep people informed about the release schedule,
> upcoming deadlines, areas that need attention, and recaps of smaller group
> discussions back to the wider team. We have a widely distributed team and a
> lot of groups are impacted by decisions made within Nova so it's important
> to continue with that communication. Despite my best efforts I have also
> learned in Ocata that we need to get earlier feedback on changes which
> impact deployment tooling, and make documentation of such changes a high
> priority earlier in the development of new features so that people working
> on tooling are not left in the dark.
>
> Ocata has been a tough release, and I think we knew that was going to be the
> case going in. It was a shorter cycle but still had some very high-priority
> and high-visibility work items such as integrating the placement service
> with the scheduler and further integrating support for cells v2, along with
> making both of those required in a Nova deployment for Ocata. We also had to
> deal with losing some key people and filling those spots. But people have
> stepped up and we still made some incredible progress in Ocata despite the
> difficulties.
>
> For Pike I want to focus on the following:
>
> * Continue integration of the placement service into making scheduling
> decisions, including working with Neutron routed networks and work on
> defining traits for resource providers so we can model the qualitative
> aspects of resources in making placement decisions.
>
> * Continue working on cells v2 for multi-cell support including
> investigating the concept of auto-registration of compute nodes to simplify
> deployment automation, and also focus on multi-cell testing and Searchlight
> integration.
>
> * Work on volume multi-attach support with the new Cinder v3 APIs introduced
> in Ocata for creating and deleting volume attachments. I think we are
> finally at a place where we can make some solid progress on the Nova side
> with improved understanding between the Nova and Cinder teams.
>
> * There are going to be several efforts going on across several projects in
> the Pike release, including modeling capabilities in the REST API, and Nova
> is going to have to be a part of those efforts. We also need to get teams
> together to figure out what are the issues with hierarchical quotas and what
> progress can be made there since that is a high priority item that lots of
> operators have been requesting for a long time.
>
> In general, we are going to have to improve our review throughput,
> especially given the change in resources we experienced in Ocata. To me, a
> lot of this will have to do with mentoring people that are newer to Nova but
> are stepping into leadership positions, and having a shorter feedback loop
> on "leveling up".
>
> To summarize, I aim to be of service to those using and contributing to Nova
> and want to continue doing that in the PTL role for the project in the Pike
> release if you will have me for another round.
>
> Thank you for your consideration,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Announcing my PTL candidacy for Pike

2017-01-23 Thread Matt Riedemann

Hi everyone,

This is my self-nomination to continue running as Nova PTL for the Pike 
cycle.


If elected, this would be a third term for me as Nova PTL. In Ocata I 
thought that I did a better job of keeping on top of a broader set of 
efforts than I was able to in Newton, including several non-priority 
vendor-specific blueprints.


I have also tried to make regular communication a priority. The topics 
vary, but in general I try to keep people informed about the release 
schedule, upcoming deadlines, areas that need attention, and recaps of 
smaller group discussions back to the wider team. We have a widely 
distributed team and a lot of groups are impacted by decisions made 
within Nova so it's important to continue with that communication. 
Despite my best efforts I have also learned in Ocata that we need to get 
earlier feedback on changes which impact deployment tooling, and make 
documentation of such changes a high priority earlier in the development 
of new features so that people working on tooling are not left in the dark.


Ocata has been a tough release, and I think we knew that was going to be 
the case going in. It was a shorter cycle but still had some very 
high-priority and high-visibility work items such as integrating the 
placement service with the scheduler and further integrating support for 
cells v2, along with making both of those required in a Nova deployment 
for Ocata. We also had to deal with losing some key people and filling 
those spots. But people have stepped up and we still made some 
incredible progress in Ocata despite the difficulties.


For Pike I want to focus on the following:

* Continue integration of the placement service into making scheduling 
decisions, including working with Neutron routed networks and work on 
defining traits for resource providers so we can model the qualitative 
aspects of resources in making placement decisions.


* Continue working on cells v2 for multi-cell support including 
investigating the concept of auto-registration of compute nodes to 
simplify deployment automation, and also focus on multi-cell testing and 
Searchlight integration.


* Work on volume multi-attach support with the new Cinder v3 APIs 
introduced in Ocata for creating and deleting volume attachments. I 
think we are finally at a place where we can make some solid progress on 
the Nova side with improved understanding between the Nova and Cinder teams.


* There are going to be several efforts going on across several projects 
in the Pike release, including modeling capabilities in the REST API, 
and Nova is going to have to be a part of those efforts. We also need to 
get teams together to figure out what are the issues with hierarchical 
quotas and what progress can be made there since that is a high priority 
item that lots of operators have been requesting for a long time.


In general, we are going to have to improve our review throughput, 
especially given the change in resources we experienced in Ocata. To me, 
a lot of this will have to do with mentoring people that are newer to 
Nova but are stepping into leadership positions, and having a shorter 
feedback loop on "leveling up".


To summarize, I aim to be of service to those using and contributing to 
Nova and want to continue doing that in the PTL role for the project in 
the Pike release if you will have me for another round.


Thank you for your consideration,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] nova backup - instances unreachable

2017-01-23 Thread John Petrini
Adding disable_libvirt_livesnapshot = false to the nova.conf enabled live
snapshots.
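
For anyone landing here from a search, the option lives in the [workarounds]
group of nova.conf on the compute nodes (a sketch based on the thread below;
restart nova-compute after changing it):

# nova.conf on compute nodes
[workarounds]
# defaults to true in this era of nova, which forces cold snapshots
disable_libvirt_livesnapshot = false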

Thanks everyone for the help.

___

John Petrini

NOC Systems Administrator   //   *CoreDial, LLC*   //   coredial.com
Hillcrest I, 751 Arbor Way, Suite 150, Blue Bell PA, 19422
*P: *215.297.4400 x232   //   *F: *215.297.4401   //   *E: *
jpetr...@coredial.com



The information transmitted is intended only for the person or entity to
which it is addressed and may contain confidential and/or privileged
material. Any review, retransmission,  dissemination or other use of, or
taking of any action in reliance upon, this information by persons or
entities other than the intended recipient is prohibited. If you received
this in error, please contact the sender and delete the material from any
computer.

On Mon, Jan 23, 2017 at 3:55 AM, John Petrini  wrote:

> Hi Eugen,
>
> disable_libvirt_livesnapshot is not present in the nova.conf. During the
> snapshot the nova logs says "Beginning cold snapshot".
>
> I read about this option in the nova documentation but did not realize it
> was the default. In fact I assumed it wasn't since it's in the workarounds
> section. I'll try setting it to false.
>
> Thank You,
>
>
>
> ___
>
> The information transmitted is intended only for the person or entity to
> which it is addressed and may contain confidential and/or privileged
> material. Any review, retransmission,  dissemination or other use of, or
> taking of any action in reliance upon, this information by persons or
> entities other than the intended recipient is prohibited. If you received
> this in error, please contact the sender and delete the material from any
> computer.
>
> On Mon, Jan 23, 2017 at 3:16 AM, Eugen Block  wrote:
>
>> Have you enabled live snapshots in nova.conf?
>>
>> The default for this option is "true", so you should check that:
>>
>> disable_libvirt_livesnapshot = false
>>
>> Is it really a live snaphot? What's in the nova-compute.log? It should
>> say something like
>>
>> [instance: XXX] Beginning live snapshot process
>>
>>
>>
>> Regards,
>> Eugen
>>
>>
>>
>> Zitat von John Petrini :
>>
>> Hi All,
>>>
>>> Following up after making this change. Adding write permissions to the
>>> images pool in Ceph did the trick and RBD snapshots now work. However the
>>> instance is still paused for the duration of the snapshot. Is it possible
>>> to do a live snapshot without pausing the instance?
>>>
>>> Thanks,
>>>
>>> John
>>>
>>> On Fri, Jan 13, 2017 at 5:49 AM, Eugen Block  wrote:
>>>
>>> Thanks,

 for anyone interested in this issue, I filed a bug report:
 https://bugs.launchpad.net/nova/+bug/1656242


 Regards,
 Eugen


 Zitat von Mohammed Naser :

 It is likely because this has been tested with QEMU only. I think you

> might want to bring this up with the Nova team.
>
> Sent from my iPhone
>
> On Jan 12, 2017, at 11:28 AM, Eugen Block  wrote:
>
>>
>> I'm not sure if this is the right spot, but I added some log
>> statements
>> into driver.py.
>> First, there's this if-block:
>>
>>if (self._host.has_min_version(MIN_LIBVIRT_LIVESNAPSHOT_VERSION,
>>                               MIN_QEMU_LIVESNAPSHOT_VERSION,
>>                               host.HV_DRIVER_QEMU)
>>        and source_type not in ('lvm')
>>        and not CONF.ephemeral_storage_encryption.enabled
>>        and not CONF.workarounds.disable_libvirt_livesnapshot):
>>    live_snapshot = True
>>    [...]
>>else:
>>    live_snapshot = False
>>
>> And I know that it lands in the else branch. It turns out that
>> _host.has_min_version is "false" because of host.HV_DRIVER_QEMU. We are
>> running on Xen hypervisors. So I tried it with host.HV_DRIVER_XEN, and
>> now nova-compute says:
>>
>> [instance: 14b75237-7619-481f-9636-792b64d1be17] instance
>> snapshotting
>> [instance: 14b75237-7619-481f-9636-792b64d1be17] Beginning live
>> snapshot process
>>
>> Now I'm waiting for the result, but at least the VM is still running, so
>> it looks quite promising...
>>
>> And there it is:
>>
>> [instance: 14b75237-7619-481f-9636-792b64d1be17] Snapshot image
>> upload
>> complete
>>
>> I'm testing the image now, and it 

Re: [Openstack] Setting up another compute node

2017-01-23 Thread Trinath Somanchi
Can you post how you spawned the VM? I guess the network was not added.


/Trinath


From: Peter Kirby 
Sent: Monday, January 23, 2017 9:22:10 PM
To: OpenStack
Subject: [Openstack] Setting up another compute node

Hi,

I'm currently running OpenStack Mitaka on CentOS 7.2 and I'm trying to setup 
another compute node.

I have nova installed and running and the following neutron packages:
openstack-neutron.noarch  1:8.3.0-1.el7@openstack-mitaka
openstack-neutron-common.noarch   1:8.3.0-1.el7@openstack-mitaka
openstack-neutron-ml2.noarch  1:8.3.0-1.el7@openstack-mitaka
openstack-neutron-openvswitch.noarch  1:8.3.0-1.el7@openstack-mitaka
python-neutron.noarch 1:8.3.0-1.el7@openstack-mitaka
python-neutron-lib.noarch 0.0.3-1.el7  @openstack-mitaka
python2-neutronclient.noarch  4.1.2-1.el7  @openstack-mitaka

The neutron-openvswitch-agent is up and running and I can see it and nova from 
the OpenStack commandline.  Neutron agent-list says the new host has the 
openvswitch agent and it's alive.

However, when I try to deploy an instance to this new host, I get the following 
error and the instance fails to deploy:

2017-01-20 10:51:21.132 24644 INFO neutron.agent.common.ovs_lib 
[req-2be33822-4a69-4521-9267-a81315b20b6b - - - - -] Port 
67b72a38-c553-4f06-953c-92f43d5dea60 not present in bridge br-int
2017-01-20 10:51:21.133 24644 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-2be33822-4a69-4521-9267-a81315b20b6b - - - - -] port_unbound(): net_uuid 
None not in local_vlan_map

Here is the output from ovs-vsctl show:
2e5497fc-6f3a-4761-a99b-d4e95d0614f7
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port "eno1"
Interface "eno1"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.5.0"
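The "Port ... not present in bridge br-int" error above means the instance's port was never plugged into br-int. One quick way to confirm is `ovs-vsctl list-ports br-int` on the compute node. As an illustrative sketch only (not anything Neutron ships), a few lines of Python can parse `ovs-vsctl show`-style output like the one above and list the ports on each bridge:

```python
# Toy parser for `ovs-vsctl show`-style output: collect the ports
# attached to each bridge, so you can verify whether the instance's
# tap/port actually landed on br-int. Illustrative only.

SAMPLE = """\
2e5497fc-6f3a-4761-a99b-d4e95d0614f7
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port "eno1"
Interface "eno1"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.5.0"
"""

def ports_by_bridge(text):
    """Map bridge name -> list of port names found under it."""
    bridges = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Bridge "):
            current = line.split()[1].strip('"')
            bridges[current] = []
        elif line.startswith("Port ") and current is not None:
            bridges[current].append(line.split()[1].strip('"'))
    return bridges

print(ports_by_bridge(SAMPLE))
# -> {'br-int': ['br-int'], 'br-ex': ['eno1', 'br-ex']}
```

Here br-int holds only its internal port: no tap device for the instance, matching the error in the agent log.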

I suspect I'm missing one small step, but I've been searching Google and logs 
for days now and I can't seem to nail down the problem.  Does anyone have any 
suggestions on where I should look next?

Thank you.



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [OpenStack-Infra] Zuul DependentPipeline conceptual questions

2017-01-23 Thread James E. Blair
"Karpenko, Oleksandr (Nokia - DE/Ulm)" 
writes:

> Hello everyone,
>
> Sorry for bothering you but I am not able to find any zuul specific mailing 
> list.
> First of all if there is some better place to ask questions, please
> let me know and I will do it there.

You've come to the right place.

> We are considering using Zuul in a project split across many git
> repositories and we are in a proof-of-concept phase now. I have
> submitted the first problems we have found for review
> (https://review.openstack.org/#/c/423337/ ,
> https://review.openstack.org/#/c/424055/).

Thanks for those changes to the gearman-plugin for Jenkins.  I think the
first especially may be of use to a lot of people who have run into
permissions problems with newer Jenkins.

> Gerrit has a nice feature called 'Submit whole topic'. You can have
> reviews in several repositories within the same topic and gerrit
> allows to submit them together very close in time. When the first
> change is ready for Submit, 'Submit' button stays gray until all other
> reviews in the same topic are ready and then by pressing the button in
> any of those changes all of them are submitted.
>
> We have gate pipeline (DependentPipeline) in Zuul for Cross Project
> Testing which is triggered on code-review: 2 and on success does
> gerrit submit. The gate pipeline runs when the first change gets
> code-review: 2 and if all other changes have also code-review: 2, zuul
> submits all of them without testing the rest.
>
> Does Zuul have any support for 'Submit whole topic' gerrit feature? I
> am not able to find any.
>
> Does DependentPipeline have support for gerrit topic concept? It
> sounds at least reasonable to wait until all changes in the same topic
> get reviewed and then start DependentPipeline once (instead of 3 times
> in case of 2 repositories).
>
> Does our concept make sense at all, or are there other ways to do it?
> We would like to reduce the effect of the 'diamond dependency problem'
> by submitting changes related to each other close in time. For example,
> A depends on B and C, and B and C depend on D. We would like to test
> A with the related changes in B and C (e.g. D integration)
> and submit the changes in B and C very close in time. Because B may
> integrate D much faster than C, A cannot be built until C is ready
> with the D integration. This is not exactly what 'Depends-On:' does,
> because the changes in B and C do not depend on each other.
>
> Once more sorry for your time and looking forward for your answer.

Clint replied earlier with some good and correct information -- here is
a little more context and some thoughts about future development.

We implemented the Depends-On header feature in Zuul based on some
earlier work on cross-repo dependencies in Gerrit.  We have found the
'topic' to be useful in gerrit for far more things than just indicating
changes that should be merged together, and I personally am sad to see
it used for cross-repo dependencies as it robs Gerrit users of a useful
tool.

Regardless, you are correct that we only support one-way dependencies in
Zuul.  That is because we decided that, for the OpenStack project, we
wanted to avoid changing two repositories at once.  Many of our projects
are continuously deployed, and an OpenStack installation spans many
hosts (and also contains client-server relationships).  Therefore, we
wanted to ensure that developers changed each component in such a way
that it was backwards-compatible with other components.  This is
especially useful in the environment that I described, but also ends up
being good practice in more traditional application + library
situations.

However, I can see that some environments might prefer bidirectional
cross-repo dependencies.  Many of the building blocks needed to develop
that are already in zuul (we wrote the cross-repo-dependency code with
the idea that we might extend it to support this case in the future),
but there is still some work remaining to implement that.

If one were to implement it, I would recommend continuing to avoid using
the submit-whole-topic feature in Gerrit (as it is specific to Gerrit
and therefore not applicable to other systems with which we want to
interface Zuul in the future, and also, it is not necessary for this
work).  An implementation in Zuul should simply test both repos with the
state of the other, and then push the prepared repo states that it
tested up to Gerrit rather than 'submitting' the change using Gerrit.
This would allow Zuul to be certain that both merges will succeed (since
it has already performed them locally).

I think that we will end up implementing this eventually, but it is not
high on our list of priorities at the moment.  If you are interested in
working on this sooner, please let me know.

Finally, I would like to note that since you are in the exploration
stage that we are currently working heavily on version 3 of Zuul which
uses Ansible for running 

[openstack-dev] [All projects that use Alembic] Absence of pk on alembic_version table

2017-01-23 Thread Anna Taraday
Hello everyone!

Folks on our team hit an issue when they tried to run alembic migrations on
Galera in ENFORCING mode. [1]

This was an issue with Alembic [2], which was quickly fixed by Mike Bayer
(many thanks!), and a new version of alembic was released [3].
The global requirements have been updated [4].

I think it is desirable to fix this for Newton at least. We cannot bump
requirements for Newton, so a hot fix could be putting a pk on this table in
the first migration, as proposed in [5]. Any other ideas?

[1] - https://bugs.launchpad.net/neutron/+bug/1655610
[2] - https://bitbucket.org/zzzeek/alembic/issues/406
[3] - http://alembic.zzzcomputing.com/en/latest/changelog.html#change-0.8.10
[4] - https://review.openstack.org/#/c/423118/
[5] - https://review.openstack.org/#/c/419320/
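To illustrate the shape of the proposed hot fix, the sketch below (assumptions, not the actual migration in [5]) creates alembic_version with an explicit primary key from the start, which is what Galera in ENFORCING mode requires of every table. A real Alembic migration would use op.create_table() or op.create_primary_key(); here stdlib sqlite3 just demonstrates the resulting schema, and the constraint name is illustrative:

```python
# Sketch: create alembic_version with a primary key on version_num.
# Galera in ENFORCING mode refuses to replicate tables without a PK,
# which is why the stock alembic_version table failed. The constraint
# name below is illustrative, not necessarily what Alembic generates.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE alembic_version ("
    "  version_num VARCHAR(32) NOT NULL,"
    "  CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)"
    ")"
)
conn.execute("INSERT INTO alembic_version VALUES ('abc123')")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk);
# the pk flag confirms version_num is part of the primary key.
cols = conn.execute("PRAGMA table_info(alembic_version)").fetchall()
print(cols[0][1], bool(cols[0][5]))  # -> version_num True
```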


-- 
Regards,
Ann Taraday
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla-ansible] [kolla] Am I doing this wrong?

2017-01-23 Thread Kris G. Lindgren
Hi Paul,

Thanks for responding.

> The fact gathering on every server is a compromise taken by Kolla to
> work around limitations in Ansible. It works well for the majority of
> situations; for more detail and potential improvements on this please
> have a read of this post:
> http://lists.openstack.org/pipermail/openstack-dev/2016-November/107833.html

So my problem with this is the logging in to the compute nodes.  While this may 
be fine for a smaller deployment, logging into thousands, or even hundreds, of 
nodes via ansible to gather facts just to do a deployment against 2 or 3 of 
them is not tenable.  Additionally, doing this in our more heavily audited 
environments (PKI/PCI) will cause our auditors heartburn.

> I'm not quite following you here, the config templates from
> kolla-ansible are one of it's stronger pieces imo, they're reasonably
> well tested and maintained. What leads you to believe they shouldn't be
> used?
>
> > * Certain parts of it are 'reference only' (the config tasks),
>  > are not recommended
>
> This is untrue - kolla-ansible is designed to stand up a stable and
> usable OpenStack 'out of the box'. There are definitely gaps in the
> operator type tasks as you've highlighted, but I would not call it
> ‘reference only'.

http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2017-01-09.log.html#t2017-01-09T21:33:15

This is where we were told the config stuff was “reference only”?

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [kolla] How to re-run failed ansible tasks in kolla

2017-01-23 Thread pothuganti sridhar
Hello All,

I am trying to set up a kolla environment. When I ran the "vagrant up"
command, it brought up four kubes and then ran the ansible playbooks.
It failed while upgrading the OS with the following error.


*TASK [upgrade-os : upgrade the entire system in preparation for next
steps] *

The complete log is available in following link.
http://paste.openstack.org/show/596104/

Can anyone help me with re-running the ansible playbooks for the failed tasks?

Any pointer would be a great help.

Regards,
Sridhar
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [OpenStack-Infra] Zuul DependentPipeline conceptual questions

2017-01-23 Thread Clint Byrum
Excerpts from Karpenko, Oleksandr (Nokia - DE/Ulm)'s message of 2017-01-23 
13:24:08 +:
> Hello everyone,
> 
> Sorry for bothering you but I am not able to find any zuul specific mailing 
> list.
> First of all if there is some better place to ask questions, please let me 
> know and I will do it there.
> 
> We are considering using Zuul in a project split across many git 
> repositories and we are in a proof-of-concept phase now. I have submitted 
> the first problems we have found for review 
> (https://review.openstack.org/#/c/423337/ , 
> https://review.openstack.org/#/c/424055/).
> 
> Gerrit has a nice feature called 'Submit whole topic'. You can have reviews 
> in several repositories within the same topic and gerrit allows to submit 
> them together very close in time. When the first change is ready for Submit, 
> 'Submit' button stays gray until all other reviews in the same topic are 
> ready and then by pressing the button in any of those changes all of them are 
> submitted.
> 
> We have gate pipeline (DependentPipeline) in Zuul for Cross Project Testing 
> which is triggered on code-review: 2  and on success does gerrit submit. The 
> gate pipeline runs when the first change gets code-review: 2  and if all 
> other changes have also code-review: 2, zuul submits all of them without 
> testing the rest.
> 
> Does Zuul have any support for 'Submit whole topic' gerrit feature? I am not 
> able to find any.
> 
> Does DependentPipeline have support for gerrit topic concept? It sounds at 
> least reasonable to wait until all changes in the same topic get reviewed and 
> then start DependentPipeline once (instead of 3 times in case of 2 
> repositories).
> 
> Does our concept make sense at all, or are there other ways to do it? We 
> would like to reduce the effect of the 'diamond dependency problem' by 
> submitting changes related to each other close in time. For example, A 
> depends on B and C, and B and C depend on D. We would like to test A with 
> the related changes in B and C (e.g. D integration) and submit the changes 
> in B and C very close in time. Because B may integrate D much faster than 
> C, A cannot be built until C is ready with the D integration. This is not 
> exactly what 'Depends-On:' does, because the changes in B and C do not 
> depend on each other.
> 

The intention of DependentPipeline is to make sure every commit, of
every repository, works together with the commits that it claims to
depend on.

So what you have is this graph:


A-->B-->D
 \
  -->C-->D


If you approve D first, and A last, I believe the Dependent Pipeline
Manager will try to test them like this:

D
D+B+C
D+B+C+A

But that's only if you approve D first, and A last. Developers would
definitely be able to approve D, then B, and D would go in before C.

There's no good way to resolve this with DependentPipelineManager.
Because ideally you could make B and C depend on each other. But
DependentPipelineManager will fail because that creates a cycle.

Perhaps this a use case for making a pipeline manager that allows
for cycles.

For now, you're best off unrolling in your code. That means that you
likely need a "Z" that depends on A, B, and C and turns on whatever
behavior you wanted with all of them in place.
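The approval-order behavior described above can be sketched in a few lines of Python. This is a toy model, not Zuul's implementation; the function and variable names are mine:

```python
# Toy model of a dependent pipeline: each change enqueued behind
# others is tested on top of the cumulative repo state ahead of it,
# so the tested states grow with the approval order.

def build_test_states(approval_order):
    """Return the repo states tested as each change is enqueued."""
    queue = []
    states = []
    for change in approval_order:
        queue.append(change)
        states.append("+".join(queue))
    return states

# Approving D, then B, then C, then A yields states matching the
# example above (modulo B and C arriving together):
print(build_test_states(["D", "B", "C", "A"]))
# -> ['D', 'D+B', 'D+B+C', 'D+B+C+A']
```

Approving in a different order (say D, then B only) shows the failure mode Clint describes: D and B merge while C is still unapproved, so nothing ever tested A's full dependency set together.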

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] [cinder] Can I use lvm thin provisioning in mitaka?

2017-01-23 Thread Chris Friesen

On 01/23/2017 11:29 AM, Marco Marino wrote:

At the moment I have:
volume_clear=zero
volume_clear_size=30 <-- MBR will be deleted here!
with thick provisioning
I think this can be a good solution in my case. Let me know what you think
about this.


If security is not a concern then that's fine.

Chris
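As a rough illustration of what volume_clear=zero with volume_clear_size=30 amounts to: only the first N MiB of the volume are overwritten, which is enough to wipe the MBR/partition table without paying for a full wipe. This is a sketch under assumptions, not cinder's code (the LVM driver actually operates on block devices, typically by shelling out to dd); the function name is mine:

```python
# Sketch: zero only the first clear_size_mb MiB of a volume, the way
# volume_clear=zero + volume_clear_size=30 limits the wipe. A regular
# file stands in for the block device here.

import os
import tempfile

def clear_volume_head(path, clear_size_mb):
    """Overwrite the first clear_size_mb MiB of path with zeros."""
    chunk = b"\x00" * (1024 * 1024)
    with open(path, "r+b") as f:
        for _ in range(clear_size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())

# Demo on a 4 MiB scratch "volume": wipe only the first 2 MiB.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\xff" * (4 * 1024 * 1024))
    name = tmp.name
clear_volume_head(name, 2)
data = open(name, "rb").read()
print(data[: 2 * 1024 * 1024] == b"\x00" * (2 * 1024 * 1024))  # True
print(data[2 * 1024 * 1024 :] == b"\xff" * (2 * 1024 * 1024))  # True
os.unlink(name)
```

As Chris notes, the data beyond the cleared head survives, which is exactly why this is only acceptable when security is not a concern.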

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Can I use lvm thin provisioning in mitaka?

2017-01-23 Thread Marco Marino
At the moment I have:
volume_clear=zero
volume_clear_size=30 <-- MBR will be deleted here!
with thick provisioning
I think this can be a good solution in my case. Let me know what you
think about this.
Thank you
Marco



2017-01-23 17:21 GMT+01:00 Chris Friesen :

> On 01/21/2017 03:00 AM, Marco Marino wrote:
>
>> Thank you very much!! It's difficult for me to find help on cinder and I
>> think this is
>> the right place!
>> @Duncan, if my goal is to speed up bootable volume creation, I can
>> avoid
>> using thin provisioning. I can use the image cache, and in this way the
>> "retrieve from
>> glance" and the "qemu-img convert to RAW" parts will be skipped. Is this
>> correct? And with this method I don't have the performance penalty
>> mentioned by Chris.
>> @Chris: Yes, I'm using the volume_clear option and volume deletion is
>> very fast
>>
>
> Just to be clear, you should not use "volume_clear=none" unless you are
> using thin provisioning or you do not care about security.
>
> If you have "volume_clear=none" with thick LVM, then newly created cinder
> volumes may contain data written to the disk via other cinder volumes that
> were later deleted.
>
>
> Chris
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Glance swift multi-user storage backend survey

2017-01-23 Thread Brian Rosmaita
Hello operators,

The Glance team is conducting another survey about Glance usage.  This
one is for operators who are currently using (or contemplating using)
the swift multi-tenant storage backend for Glance, a feature which is
*not* enabled by default.

The survey is only 5 questions long, so it won't take much time to fill out.

https://goo.gl/forms/WJh6PvRw4f42mStc2

The survey will be open until 23:59 UTC on 31 January 2017.


thanks,
brian



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [octavia] Newton Octavia lbaas creation error

2017-01-23 Thread Santhosh Fernandes
Thanks Michael,

After adding the service_auth section in neutron.conf I was able to overcome
this error. Now I am getting a new exception: Unable to retrieve ready
devices. Here is the stack trace.

http://paste.openstack.org/show/596019/

Any clues on how to resolve this issue?

Thanks,
Santhosh


On Mon, Jan 23, 2017 at 9:48 PM, Michael Johnson 
wrote:

> Santhosh,
>
>
>
> From the traceback below it looks like the neutron process is unable to
> access keystone.
>
>
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource DriverError:
> Driver error: Unable to establish connection to
> http://127.0.0.1:5000/v2.0/tokens: HTTPConnectionPool(host='127.0.0.1',
> port=5000): Max retries exceeded with url: /v2.0/tokens (Caused by
> NewConnectionError(' object at 0x7f9f36b91790>: Failed to establish a new connection: [Errno
> 111] ECONNREFUSED',))
>
>
>
> So, I would check the neutron.conf settings for keystone like the
> user/password and that the neutron process can reach keystone on
> http://127.0.0.1:5000  Maybe there is a bad security group or keystone
> isn’t running?
>
>
>
> Michael
>
>
>
> *From:* Santhosh Fernandes [mailto:santhosh.fernan...@gmail.com]
> *Sent:* Sunday, January 22, 2017 10:48 AM
> *To:* openstack-dev@lists.openstack.org; Michael Johnson <
> johnso...@gmail.com>
> *Subject:* [openstack-dev][ocatvia]Newton Octavia lbaas creation error
>
>
>
> Hi all,
>
>
>
> I am getting a driver connection error while creating the LB from octavia.
>
>
>
> Stack trace -
>
>
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> [req-c6f19e4c-dfbd-4b1c-8198-925d05f9fcdf cf13e167c1884e7a8d63293a454ca774
> 48ab507e206741c4ba304efaf5209963 - - -] create failed: No details.
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource Traceback
> (most recent call last):
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/neutron/api/v2/resource.py", line 79, in resource
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource result =
> method(request=request, **args)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/neutron/api/v2/base.py", line 430, in create
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return
> self._create(request, body, **kwargs)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
> line 88, in wrapped
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource setattr(e,
> '_RETRY_EXCEEDED', True)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/oslo_utils/excutils.py", line 220, in __exit__
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> self.force_reraise()
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/oslo_utils/excutils.py", line 196, in force_reraise
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> six.reraise(self.type_, self.value, self.tb)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
> line 84, in wrapped
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return
> f(*args, **kwargs)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_db/api.py",
> line 151, in wrapper
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> ectxt.value = e.inner_exc
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/oslo_utils/excutils.py", line 220, in __exit__
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> self.force_reraise()
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/oslo_utils/excutils.py", line 196, in force_reraise
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> six.reraise(self.type_, self.value, self.tb)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_db/api.py",
> line 139, in wrapper
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return
> f(*args, **kwargs)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
> line 124, in wrapped
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> traceback.format_exc())
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> 

Re: [openstack-dev] [octavia] Nominating German Eichberger for Octavia core reviewer

2017-01-23 Thread Michael Johnson
With that vote we have quorum.  Welcome back German!

 

Michael

 

 

From: Kosnik, Lubosz [mailto:lubosz.kos...@intel.com] 
Sent: Sunday, January 22, 2017 12:24 PM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [octavia] Nominating German Eichberger for
Octavia core reviewer

 

+1, welcome back. 

 

Lubosz

 

On Jan 20, 2017, at 2:11 PM, Miguel Lavalle  > wrote:

 

Well, I don't vote here but it's nice to see German back in the community.
Welcome!

 

On Fri, Jan 20, 2017 at 1:26 PM, Brandon Logan  > wrote:

+1, yes welcome back German.

On Fri, 2017-01-20 at 09:41 -0800, Michael Johnson wrote:
> Hello Octavia Cores,
>
> I would like to nominate German Eichberger (xgerman) for
> reinstatement as an
> Octavia core reviewer.
>
> German was previously a core reviewer for Octavia and neutron-lbaas
> as well
> as a former co-PTL for Octavia.  Work dynamics required him to step
> away
> from the project for a period of time, but now he has moved back into
> a
> position that allows him to contribute to Octavia.  His review
> numbers are
> back in line with other core reviewers [1] and I feel he would be a
> solid
> asset to the core reviewing team.
>
> Current Octavia cores, please respond with your +1 vote or an
> objections.
>
> Michael
>
> [1] http://stackalytics.com/report/contribution/octavia-group/90
>
>
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
 
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org
 ?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] cinder volume create error, oslo service killed by signal 11

2017-01-23 Thread yang sheng
Hi ALL

Our testing environment (liberty with ceph) has been running for a while
and everything was working properly.

A cinder volume error happened yesterday.

from volume.log, it is showing:

2017-01-22 16:17:28.416 30027 INFO
cinder.volume.flows.manager.create_volume
[req-61267bd5-bfae-4f49-b21f-21eee2e49ea6 00b221e9dbac43c0b48b844e7ef1d835
42a0d7dfbe944f8b88964646ceccefa5 - - -] Volume
21cccf9f-53ab-47b4-a0f7-128a3a897c8d: being created as image with
specification: {'status': u'creating', 'image_location':
(u'rbd://cf59374d-9745-4274-a5c6-34fcea3203d7/eosimages/9e6cc09c-1ad5-4037-901a-4cf13135d2b8/snap',
None), 'volume_size': 20, 'volume_name':
u'volume-21cccf9f-53ab-47b4-a0f7-128a3a897c8d', 'image_id':
u'9e6cc09c-1ad5-4037-901a-4cf13135d2b8', 'image_service':
, 'image_meta':
{u'status': u'active', u'virtual_size': None, u'name': u'CentOS 6 64bit',
u'tags': [], u'container_format': u'bare', u'created_at':
datetime.datetime(2017, 1, 3, 23, 20, 30, tzinfo=),
u'disk_format': u'raw', u'updated_at': datetime.datetime(2017, 1, 9, 15,
19, 45, tzinfo=), u'visibility': u'public', 'properties': {},
u'owner': u'd0b1018826cb95e10fbe75e47cb2', u'protected': True, u'id':
u'9e6cc09c-1ad5-4037-901a-4cf13135d2b8', u'file':
u'/v2/images/9e6cc09c-1ad5-4037-901a-4cf13135d2b8/file', u'checksum':
u'9a44adfc62adf520e63298dac0bda27f', u'min_disk': 0, u'direct_url':
u'rbd://cf59374d-9745-4274-a5c6-34fcea3203d7/eosimages/9e6cc09c-1ad5-4037-901a-4cf13135d2b8/snap',
u'min_ram': 0, u'size': 8589934592}}
2017-01-22 16:17:28.526 3 INFO oslo_service.service
[req-f5ba453f-fcc3-4de4-959f-f62e06d9c425 - - - - -] Child 30027 killed by
signal 11
2017-01-22 16:17:28.529 3 INFO oslo_service.service
[req-f5ba453f-fcc3-4de4-959f-f62e06d9c425 - - - - -] Started child 14043
2017-01-22 16:17:28.531 14043 INFO cinder.service [-] Starting
cinder-volume node (version 7.0.3)
2017-01-22 16:17:28.532 14043 INFO cinder.volume.manager
[req-ee1c544d-44a2-461a-9ed8-cd1b2a1611f6 - - - - -] Starting volume driver
RBDDriver (1.2.0)
2017-01-22 16:17:34.724 14043 WARNING cinder.volume.manager
[req-ee1c544d-44a2-461a-9ed8-cd1b2a1611f6 - - - - -] Detected volume stuck
in {'curr_status': u'creating'}(curr_status)s status, setting to ERROR.

It seems the cinder volume service got killed and restarted.

Is there any suggestion for avoiding this error?
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [keystone]Error while setting up a keystone development environment

2017-01-23 Thread Steve Martinelli
why not use devstack [1] with a minimal local.conf (used to specify which
components to install)?

[1] http://docs.openstack.org/developer/devstack/

minimal local.conf:

[[local|localrc]]
RECLONE=yes

# Credentials
DATABASE_PASSWORD=openstack
ADMIN_PASSWORD=openstack
SERVICE_PASSWORD=openstack
RABBIT_PASSWORD=openstack

# Services
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,horizon

# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs

On Mon, Jan 23, 2017 at 8:39 AM, Daniel Gitu  wrote:

> Hello,
>
> I'm new to all this and I am in need of help to find out where I went
> wrong.
> This is a bit lengthy, I have left a blank space between the text and the
> error
> messages I received.
> I first set up and activated a virtual environment then cloned the keystone
> project into that environment.
> I then proceeded to cd into keystone and executed pip install -r
> requirements.txt and got the following errors:
>
> Failed building wheel for cryptography
> Failed cleaning build dir for cryptography
> Failed building wheel for netifaces
> Failed building wheel for pycrypto
> Command "/home/grenouille/openstack/bin/python -u -c "import
> setuptools, 
> tokenize;__file__='/tmp/pip-build-XqTJv_/cryptography/setup.py';f=getattr(tokenize,
> 'open', open)(__file__);code=f.read().replace('\r\n',
> '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record
> /tmp/pip-nFp6dT-record/install-record.txt --single-version-externally-managed
> --compile --install-headers /home/grenouille/openstack/inc
> lude/site/python2.7/cryptography" failed with error code 1 in
> /tmp/pip-build-XqTJv_/cryptography/
>
> The above errors were resolved by executing: sudo apt-get install
> build-essential libssl-dev libffi-dev python-dev, and re-running
> pip install -r requirements.txt
> I ran sudo apt install tox and executed tox in the keystone directory
> As tox was installing dependencies the first line read:
>
> ERROR: invocation failed (exit code 1), logfile:
> /home/grenouille/openstack/keystone/.tox/docs  /log/docs-1.log
> ERROR: actionid: docs
>
> The final error message read:
>
> ERROR: could not install deps [-r/home/grenouille/openstack/
> keystone/test-requirements.txt, .[ldap,memcache,mongodb]]; v =
> InvocationError('/home/grenouille/openstack/keystone/.tox/docs/bin/pip
> install -chttps://git.openstack.org/cgit/openstack/requirements/plai
> n/upper-constraints.txt 
> -r/home/grenouille/openstack/keystone/test-requirements.txt
> .[ldap,memcache,mongodb] (see /home/grenouille/openstack/key
> stone/.tox/docs/log/docs-1.log)', 1)
>
>
> Regards,
> Daniel.
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-23 Thread Mark Baker
Hi Adrian,

I'm unlikely to attend the PTG myself but James Page and other members of
our team will be there who can help cover. Certainly we'd like to better
understand what the cluster drivers need from the underlying operating
system and what we need to do to make sure Ubuntu does all those things
really well.



Best Regards


Mark Baker

On 18 January 2017 at 19:08, Adrian Otto  wrote:

>
> On Jan 18, 2017, at 10:48 AM, Mark Baker  wrote:
>
> Hi Adrian,
>
> Let me know if you have similar questions or concerns about Ubuntu Core
> with Magnum.
>
> Mark
>
>
> Thanks Mark! Is there any chance you, or an Ubuntu Core representative
> could join us for a discussion at the PTG, and/or an upcoming IRC team
> meeting? The topic of supported operating system images for our cluster
> drivers is a current topic of team conversation, and it would be helpful to
> have clarity on what (support/dev/test) resources upstream Linux packagers
> may be able to offer to help guide our conversation.
>
> To give you a sense, we do have a Suse specific k8s driver that has been
> maturing during the Ocata release cycle, our Mesos driver uses Ubuntu
> Server, our Swarm and k8s drivers use Fedora Atomic, and another newer k8s
> driver uses Fedora. The topic of Operating System (OS) support for cluster
> nodes (versus what OS containers are based on) is confusing for many cloud
> operators, so it would be helpful if we worked on clarifying the options and
> involved stakeholders from various OS distributions, so that suitable options
> are available for those who prefer to form Magnum clusters from OS images
> composed from one particular OS or another.
>
> Ideally we could have this discussion at the PTG in Atlanta with
> participants like our core reviewers, Josh Berkus, you, our Suse
> contributors, and any other representatives from OS distribution
> organizations who may have an interest in cluster drivers for their
> respective OS types. If that discussion proves productive, we could also
> engage our wider contributor base in a followup IRC team meeting with a
> dedicated agenda item to cover what’s possible, and summarize what various
> stakeholders provided to us as input at the PTG. This might give us a
> chance to source further input from a wider audience than our PTG attendees.
>
> Thoughts?
>
> Thanks,
>
> Adrian
>
>
> On 18 Jan 2017 8:36 p.m., "Adrian Otto"  wrote:
>
>> Josh,
>>
>> > On Jan 18, 2017, at 10:18 AM, Josh Berkus  wrote:
>> >
>> > Magnum Devs:
>> >
>> > Is there going to be a magnum team meeting around OpenStack Summit in
>> > Boston?
>> >
>> > I'm the community manager for Atomic Host, so if you're going to have
>> > Magnum meetings, I'd like to send you some Atomic engineers to field any
>> > questions/issues at the Summit.
>>
>> Thanks for your question. We are planning to have our team design
>> meetings at the upcoming PTG event in Atlanta. We are not currently
>> planning to have any such meetings in Boston. With that said, we would very
>> much like to involve you in an important Atomic related design decision
>> that has recently surfaced, and would like to welcome you to an upcoming
>> Magnum IRC team meeting to meet you and explain our interests and concerns.
>> I do expect to attend the Boston summit myself, so I’m willing to meet you
>> and your engineers on behalf of our team if you are unable to attend the
>> PTG. I’ll reach out to you individually by email to explore our options for
>> an Atomic Host meeting agenda item in the meantime.
>>
>> Regards,
>>
>> Adrian
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>
>


[openstack-dev] [octavia] PTL candidacy for Pike series

2017-01-23 Thread Michael Johnson

Hello Octavia folks,

I wanted to let you know that I am running for the PTL position again for
Pike.

My candidacy statement is available here:
https://git.openstack.org/cgit/openstack/election/plain/candidates/pike/Octavia/johnsom.txt

Thank you for your consideration,

Michael





Re: [openstack-dev] [ocatvia]Newton Octavia lbaas creation error

2017-01-23 Thread Michael Johnson
Santhosh,

 

From the traceback below it looks like the neutron process is unable to access
keystone.

 

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource DriverError: Driver 
error: Unable to establish connection to http://127.0.0.1:5000/v2.0/tokens: 
HTTPConnectionPool(host='127.0.0.1', port=5000): Max retries exceeded with url: 
/v2.0/tokens (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 111] 
ECONNREFUSED',))

 

So, I would check the neutron.conf settings for keystone, like the user/password,
and verify that the neutron process can reach keystone on http://127.0.0.1:5000.
Maybe there is a bad security group, or keystone isn't running?
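As a reference point only: for the Newton-era neutron-lbaas Octavia driver, these
credentials usually come from a [service_auth] section in neutron.conf. A hedged
sketch with placeholder values (the exact option names vary by release, so check
your release's configuration reference):

```ini
# neutron.conf (values below are placeholders, not the poster's settings)
[service_auth]
auth_url = http://127.0.0.1:5000/v2.0
auth_version = 2
admin_user = admin
admin_password = CHANGE_ME
admin_tenant_name = admin
```

If keystone listens on a different host or port than 127.0.0.1:5000, auth_url
must match it.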

 

Michael



 

From: Santhosh Fernandes [mailto:santhosh.fernan...@gmail.com] 
Sent: Sunday, January 22, 2017 10:48 AM
To: openstack-dev@lists.openstack.org; Michael Johnson 
Subject: [openstack-dev][ocatvia]Newton Octavia lbaas creation error

 

Hi all,

 

I am getting a driver connection error while creating the LB from Octavia.

 

Stack trace - 

 

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
[req-c6f19e4c-dfbd-4b1c-8198-925d05f9fcdf cf13e167c1884e7a8d63293a454ca774 
48ab507e206741c4ba304efaf5209963 - - -] create failed: No details.

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource Traceback (most 
recent call last):

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/api/v2/resource.py",
 line 79, in resource

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource result = 
method(request=request, **args)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/api/v2/base.py",
 line 430, in create

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
 line 88, in wrapped

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
self.force_reraise()

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
 line 84, in wrapped

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_db/api.py", 
line 151, in wrapper

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
self.force_reraise()

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_db/api.py", 
line 139, in wrapper

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
 line 124, in wrapped

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
traceback.format_exc())

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
self.force_reraise()

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)

2017-01-22 12:21:51.569 14448 ERROR 

Re: [openstack-dev] [cinder] Can I use lvm thin provisioning in mitaka?

2017-01-23 Thread Chris Friesen

On 01/21/2017 03:00 AM, Marco Marino wrote:

Really, thank you!! It's difficult for me to find help on cinder, and I think this
is the right place!
@Duncan, if my goal is to speed up bootable volume creation, I can avoid using
thin provisioning. I can use the image cache, and in this way the "retrieve from
glance" and the "qemu-img convert to RAW" parts will be skipped. Is this
correct? And with this method I don't have the performance penalty mentioned by
Chris.
@Chris: Yes, I'm using the volume_clear option and volume deletion is very fast.


Just to be clear, you should not use "volume_clear=none" unless you are using 
thin provisioning or you do not care about security.


If you have "volume_clear=none" with thick LVM, then newly created cinder 
volumes may contain data written to the disk via other cinder volumes that were 
later deleted.
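As a sketch, the relevant cinder.conf knobs for a thick-LVM backend look roughly
like this (the section name is deployment-specific, and option defaults vary by
release):

```ini
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
lvm_type = default        # thick LVM
volume_clear = zero       # wipe deleted volumes so data cannot leak
volume_clear_size = 0     # 0 = clear the entire volume
# With lvm_type = thin, volume_clear = none is safe and deletion stays fast.
```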


Chris



Re: [Openstack-operators] OsOps Reboot

2017-01-23 Thread Mike Dorman
+1!  Thanks for driving this.


From: Edgar Magana 
Date: Friday, January 20, 2017 at 1:23 PM
To: "m...@mattjarvis.org.uk" , Melvin Hillsman 

Cc: OpenStack Operators 
Subject: Re: [Openstack-operators] OsOps Reboot

I super second this! Yes, looking forward to amazing contributions there.

Edgar

From: Matt Jarvis 
Reply-To: "m...@mattjarvis.org.uk" 
Date: Friday, January 20, 2017 at 12:33 AM
To: Melvin Hillsman 
Cc: OpenStack Operators 
Subject: Re: [Openstack-operators] OsOps Reboot

Great stuff Melvin ! Look forward to seeing this move forward.

On Fri, Jan 20, 2017 at 6:32 AM, Melvin Hillsman 
> wrote:
Good day everyone,

As operators, we would like to reboot the efforts started around OsOps. Initial
things that may make sense to work towards are restarting meetings,
standardizing the repos (like having a lib or common folder, READMEs that include
the release(s) a tool works with, etc.), increasing the feedback loop from operators
in general, defining actionable work items, identifying teams/people with resources
for continuous testing/feedback, etc.

We have got to a great place, so let's increase the momentum and maximize all
the work that has been done for OsOps so far. Please visit the following link [
https://goo.gl/forms/eSvmMYGUgRK901533
 ] to vote on the day of the week and time (UTC) you would like to have the OsOps
meeting. And also visit this etherpad [
https://etherpad.openstack.org/p/osops-meeting
 ] to help shape the initial and ongoing agenda items.

Really appreciate you taking time to read through this email and looking 
forward to all the great things to come.

Also, we started an etherpad for brainstorming around how OsOps could/would
function; it is a very rough draft/outline of ideas right now, so again please
provide feedback:
https://etherpad.openstack.org/p/osops-project-future


--
Kind regards,

Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center

mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
http://osic.org

Learner | Ideation | Belief | Responsibility | Command

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



[openstack-dev] [Magnum] Feature freeze coming today

2017-01-23 Thread Adrian Otto
Team,

I will be starting our feature freeze today. We have a few more patches to 
consider for merge before we enter the freeze. I’ll let you all know when each 
has been considered, and we are ready to begin the freeze.

Thanks,

Adrian


[Openstack] [Keystone][Tempest]

2017-01-23 Thread Liam Young
Hi,

Basically my question is: Should I expect a tempest full run to pass
against a Newton deployment using the policy.json from
https://github.com/openstack/keystone/blob/stable/newton/etc/policy.v3cloudsample.json
?

What I'm seeing is that some tests (like
tempest.api.compute.admin.test_quotas) fail when they try to call list_domains.
This seems to be because the test creates:

1) A new project in the admin domain
2) A new user in the admin domain
3) Grants the admin role on the new project to the new user.

The test then authenticates with the new user's credentials and attempts to
call list_domains. The policy.json, however, has:


"cloud_admin": "role:admin and (token.is_admin_project:True or
domain_id:363ab68785c24c81a784edca1bceb935)",
...
"identity:list_domains": "rule:cloud_admin",

From tempest I see:

==
FAIL:
tempest.api.compute.admin.test_quotas.QuotasAdminTestJSON.test_delete_quota[id-389d04f0-3a41-405f-9317-e5f86e3c44f0]
tags: worker-0
--
Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{2017-01-23 15:57:09,806 2014 INFO
[tempest.lib.common.rest_client] Request
(QuotasAdminTestJSON:test_delete_quota): 403 GET
http://10.5.36.109:35357/v3/domains?name=admin_domain 0.066s}}}

Traceback (most recent call last):
  File "tempest/api/compute/admin/test_quotas.py", line 128, in
test_delete_quota
project = self.identity_utils.create_project(name=project_name,
  File "tempest/test.py", line 470, in identity_utils
project_domain_name=domain)
  File "tempest/lib/common/cred_client.py", line 210, in get_creds_client
roles_client, domains_client, project_domain_name)
  File "tempest/lib/common/cred_client.py", line 142, in __init__
name=domain_name)['domains'][0]
  File "tempest/lib/services/identity/v3/domains_client.py", line 57, in
list_domains
resp, body = self.get(url)
  File "tempest/lib/common/rest_client.py", line 290, in get
return self.request('GET', url, extra_headers, headers)
  File "tempest/lib/common/rest_client.py", line 663, in request
self._error_checker(resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 755, in _error_checker
raise exceptions.Forbidden(resp_body, resp=resp)
tempest.lib.exceptions.Forbidden: Forbidden
Details: {u'message': u'You are not authorized to perform the requested
action: identity:list_domains', u'code': 403, u'title': u'Forbidden'}

In the keystone log I see:

(keystone.policy.backends.rules): 2017-01-23 15:35:57,198 DEBUG enforce
identity:list_domains: {'is_delegated_auth': False,
'access_token_id': None,
'user_id': u'3fd9e70825d648d996080d855cf9c181',
'roles': [u'Admin'],
'user_domain_id': u'363ab68785c24c81a784edca1bceb935',
'consumer_id': None,
'trustee_id': None,
'is_domain': False,
'trustor_id': None,
'token': ,
'project_id': u'b48ba24e96d84de4a48077b9310faac7',
'trust_id': None,
'project_domain_id': u'363ab68785c24c81a784edca1bceb935'}
(keystone.common.wsgi): 2017-01-23 15:35:57,199 WARNING You are not
authorized to perform the requested action: identity:list_domains

This token appears to be project-scoped. If I update the policy.json to grant
cloud_admin when the project is in the admin domain, that seems to fix
things. The change I'm trying is:

 3c3,4
< "cloud_admin": "role:admin and (token.is_admin_project:True or
domain_id:admin_domain_id)",
---
> "bob": "project_domain_id:363ab68785c24c81a784edca1bceb935 or
domain_id:363ab68785c24c81a784edca1bceb935",
> "cloud_admin": "role:admin and (token.is_admin_project:True or
rule:bob)",
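Applied, that diff yields rules along these lines (the domain ID shown is
specific to this deployment):

```json
{
    "bob": "project_domain_id:363ab68785c24c81a784edca1bceb935 or domain_id:363ab68785c24c81a784edca1bceb935",
    "cloud_admin": "role:admin and (token.is_admin_project:True or rule:bob)",
    "identity:list_domains": "rule:cloud_admin"
}
```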

I did notice this comment on Bug #1451987:

If you see the following errors for all identity API v3 tests, then please be
aware that it's not a bug in tempest; rather, you need to change the keystone
v3 policy.json and make it more relaxed so tempest can authorize with the users
created for each test with separate projects (tenants), because we set
tenant_isolation to True in tempest.conf ...

( https://bugs.launchpad.net/tempest/+bug/1451987/comments/2 )

This suggests to me that it is expected for policy.json to need tweaking.

Regards
Liam
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[OpenStack-Infra] Zuul DependentPipeline conceptual questions

2017-01-23 Thread Karpenko, Oleksandr (Nokia - DE/Ulm)
Hello everyone,

Sorry for bothering you, but I was not able to find a Zuul-specific mailing
list.
First of all, if there is a better place to ask questions, please let me know
and I will ask there.

We are considering using Zuul in a project split across many git repositories,
and we are in a proof-of-concept phase now. I have submitted the first problems
we found for review (https://review.openstack.org/#/c/423337/ ,
https://review.openstack.org/#/c/424055/).

Gerrit has a nice feature called 'Submit whole topic'. You can have reviews in
several repositories within the same topic, and Gerrit allows submitting them
together, very close in time. When the first change is ready for submit, the
'Submit' button stays gray until all other reviews in the same topic are ready;
then, by pressing the button in any of those changes, all of them are
submitted.

We have a gate pipeline (DependentPipeline) in Zuul for cross-project testing,
which is triggered on code-review: 2 and submits to Gerrit on success. The
gate pipeline runs when the first change gets code-review: 2, and if all other
changes also have code-review: 2, Zuul submits all of them without testing the
rest.

Does Zuul have any support for Gerrit's 'Submit whole topic' feature? I am not
able to find any.

Does the DependentPipeline have support for the Gerrit topic concept? It seems
at least reasonable to wait until all changes in the same topic get reviewed and
then start the DependentPipeline once (instead of 3 times in the case of 2
repositories).

Does our concept make sense at all, or are there other ways to do it? We would
like to reduce the effect of the 'diamond dependency problem' by submitting
related changes close in time. For example, A depends on B and C, and B and C
depend on D. We would like to test A together with the related changes in B and
C (e.g. the D integration) and submit the changes in B and C very close in
time. When B integrates D much faster than C does, A cannot be built until
C is ready with the D integration. This is not exactly what 'Depends-On:' does,
because the changes in B and C do not depend on each other.
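The diamond above can be sketched as a toy dependency graph. This is purely
illustrative (Zuul's actual scheduling is far richer); it just shows why A must
wait for the slower of B and C:

```python
# Toy model of the diamond: A depends on B and C; B and C depend on D.
deps = {"A": {"B", "C"}, "B": {"D"}, "C": {"D"}, "D": set()}


def build_order(deps):
    """Kahn-style topological sort: a repo is ready only once all its
    dependencies are done, so A always comes last in the diamond."""
    done, order = set(), []
    pending = set(deps)
    while pending:
        ready = sorted(r for r in pending if deps[r] <= done)
        if not ready:
            raise ValueError("dependency cycle")
        order.extend(ready)
        done.update(ready)
        pending.difference_update(ready)
    return order


print(build_order(deps))  # ['D', 'B', 'C', 'A']
```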

Once more, sorry to take your time, and looking forward to your answer.

WBR,
Oleksandr

Nokia Solutions and Networks GmbH & Co. KG
Sitz der Gesellschaft: München / Registered office: Munich
Registergericht: München / Commercial registry: Munich, HRA 88537
WEEE-Reg.-Nr.: DE 52984304
Persönlich haftende Gesellschafterin / General Partner: Nokia Solutions and 
Networks Management GmbH
Geschäftsleitung / Board of Directors: Dr. Hermann Rodler, Gernot Kurfer
Vorsitzender des Aufsichtsrats / Chairman of supervisory board: Hans-Jürgen Bill
Sitz der Gesellschaft: München / Registered office: Munich
Registergericht: München / Commercial registry: Munich, HRB 163416


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[Openstack] Setting up another compute node

2017-01-23 Thread Peter Kirby
Hi,

I'm currently running OpenStack Mitaka on CentOS 7.2 and I'm trying to
set up another compute node.

I have nova installed and running and the following neutron packages:
openstack-neutron.noarch  1:8.3.0-1.el7
@openstack-mitaka
openstack-neutron-common.noarch   1:8.3.0-1.el7
@openstack-mitaka
openstack-neutron-ml2.noarch  1:8.3.0-1.el7
@openstack-mitaka
openstack-neutron-openvswitch.noarch  1:8.3.0-1.el7
@openstack-mitaka
python-neutron.noarch 1:8.3.0-1.el7
@openstack-mitaka
python-neutron-lib.noarch 0.0.3-1.el7
@openstack-mitaka
python2-neutronclient.noarch  4.1.2-1.el7
@openstack-mitaka

The neutron-openvswitch-agent is up and running, and I can see it and nova
from the OpenStack command line. neutron agent-list says the new host has
the openvswitch agent and that it is alive.

However, when I try to deploy an instance to this new host, I get the
following error and the instance fails to deploy:

2017-01-20 10:51:21.132 24644 INFO neutron.agent.common.ovs_lib
[req-2be33822-4a69-4521-9267-a81315b20b6b - - - - -] Port
67b72a38-c553-4f06-953c-92f43d5dea60 not present in bridge br-int
2017-01-20 10:51:21.133 24644 INFO
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-2be33822-4a69-4521-9267-a81315b20b6b - - - - -] port_unbound():
net_uuid None not in local_vlan_map

Here is the output from ovs-vsctl show:
2e5497fc-6f3a-4761-a99b-d4e95d0614f7
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port "eno1"
            Interface "eno1"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.5.0"
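One detail worth noting in this output: br-int has no patch port toward br-ex.
With the OVS agent, such patch ports (int-br-ex/phy-br-ex) are normally created
from the bridge_mappings setting, so one hedged guess is to check the agent
configuration. An illustrative fragment (the file path, section, and physical
network name all vary by deployment):

```ini
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = provider:br-ex
```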

I suspect I'm missing one small step, but I've been searching Google and the
logs for days now and I can't seem to nail down the problem. Does anyone
have any suggestions on where I should look next?

Thank you.


Re: [openstack-dev] [congress] ocata client causes feature regression with pre-ocata server

2017-01-23 Thread Tim Hinrichs
At some point the client sometimes made multiple API calls.  I think (c)
seems right too.

Tim
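Option (c) from the quoted mail below can be sketched as follows. This is a
minimal stand-in; the real python-congressclient method names differ, so treat
the client class and method names here as hypothetical:

```python
class NotFound(Exception):
    pass


class FakeCongressClient:
    """Stand-in for python-congressclient; real method names differ."""

    def __init__(self, datasources):
        self._datasources = datasources

    def show_datasource(self, datasource_id):
        # A pre-Ocata server only resolves UUIDs here, not names.
        for ds in self._datasources:
            if ds["id"] == datasource_id:
                return ds
        raise NotFound(datasource_id)

    def list_datasources(self):
        return list(self._datasources)


def resolve_datasource(client, name_or_id):
    """Option (c): try the direct lookup first; on failure, fall back to
    listing all datasources and matching by name (one extra API call)."""
    try:
        return client.show_datasource(name_or_id)
    except NotFound:
        for ds in client.list_datasources():
            if ds["name"] == name_or_id:
                return ds
        raise


client = FakeCongressClient([{"id": "uuid-1", "name": "nova"}])
print(resolve_datasource(client, "nova")["id"])  # resolved via the fallback path
```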

On Sun, Jan 22, 2017 at 1:15 AM Monty Taylor  wrote:

> On 01/21/2017 04:07 AM, Eric K wrote:
> > Hi all,
> >
> > I was getting ready to request release of congress client, but I
> > remembered that the new client causes feature regression if used with
> > older versions of congress. Specifically, new client with pre-Ocata
> > congress cannot refer to datasource by name, something that could be done
> > with pre-Ocata client.
> >
> > Here's the patch of interest: https://review.openstack.org/#/c/407329/
> > 
> >
> > A few questions:
> >
> > Are we okay with the regression? Seems like it could cause a fair bit of
> > annoyance for users.
>
> This is right. New client lib should always continue to work with old
> server. (A user should be able to just pip install python-congressclient
> and have it work regardless of when their operator decides to upgrade or
> not upgrade their cloud)
>
> >1. If we're okay with that, what's the best way to document that
> > pre-Ocata congress should be used with the pre-Ocata client?
> >2. If not, how do we avoid the regression? Here are some candidates I can
> > think of.
> >   a. Client detects the congress version and acts accordingly. I don't
> > think this is possible, nor desirable, for the client to be concerned with
> > the congress version rather than just the API version.
> >   b. Release backward compatible API version 1.1 that supports
> > getting datasource by name_or_id. Then client will take different paths
> > depending on API version.
> >   c. If the datasource is not found, the client falls back on the old method
> > of retrieving the list of datasources to resolve the name into a UUID. This
> > would work, but causes an extra API & DB call in many cases.
> >   d. Patch old versions of Congress to support getting datasource
> > by name_or_id. Essentially, it was always a bug that the API didn't
> > support name_or_id.
>
> I'm a fan of d - but I don't believe it will help - since the problem
> will still manifest for users who do not have control over the server
> installation.
>
> I'd suggest c is the most robust. It _is_ potentially more expensive -
> but that's a good motivation for the deployer to upgrade their
> installation of congress without negatively impacting the consumer in
> the  meantime.
>
> Monty
>
>


Re: [Openstack] Unable Upload Image

2017-01-23 Thread Eugen Block

Is [1] somehow applicable to your problem?

Have you configured  
/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py


OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}

to use image-api v2?

[1]  
https://ask.openstack.org/en/question/95828/mitaka-horizon-unable-to-retrieve-image-list/



Zitat von Trinath Somanchi :


Very good.

Here is the culprit: ServiceCatalogException: Invalid service catalog service: image

Please debug it.

/ Trinath

From: Bjorn Mork [mailto:bjron.m...@gmail.com]
Sent: Sunday, January 22, 2017 4:24 PM
To: Trinath Somanchi 
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Unable Upload Image

Thanks for sharing the link. Logs given below:

Error_log

[Sun Jan 22 10:44:40.804142 2017] [:error] [pid 4817]  
DEBUG:keystoneauth.session:REQ: curl -g -i -X GET  
http://controller:5000/v3/users/bac999c177474e08976c54a362ba6a70/projects -H  
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H  
"X-Auth-Token: {SHA1}cbccf767336b5e5e11dfd1eb113243f01893cc16"
[Sun Jan 22 10:44:40.882088 2017] [:error] [pid 4817]  
DEBUG:keystoneauth.session:RESP: [200] Date: Sun, 22 Jan 2017  
10:44:40 GMT Server: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5  
Vary: X-Auth-Token x-openstack-request-id:  
req-4cb2d8ce-8894-40b2-afcf-1ae68bfec228 Content-Length: 460  
Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type:  
application/json
[Sun Jan 22 10:44:40.882121 2017] [:error] [pid 4817] RESP BODY:  
{"links": {"self":  
"http://controller:5000/v3/users/bac999c177474e08976c54a362ba6a70/projects;,  
"previous": null, "next": null}, "projects": [{"is_domain": false,  
"description": "Admin Project", "links": {"self":  
"http://controller:5000/v3/projects/1c53d92609ca40a290a8a552b466b30a"},  
"enabled": true, "id": "1c53d92609ca40a290a8a552b466b30a",  
"parent_id": "183a01ba69194a9fac15d02b4c9aa118", "domain_id":  
"183a01ba69194a9fac15d02b4c9aa118", "name": "admin"}]}

[Sun Jan 22 10:44:40.882141 2017] [:error] [pid 4817]
[Sun Jan 22 10:44:42.707364 2017] [:error] [pid 4817] HTTP exception  
with no status/code
[Sun Jan 22 10:44:42.707418 2017] [:error] [pid 4817] Traceback  
(most recent call last):
[Sun Jan 22 10:44:42.707426 2017] [:error] [pid 4817]   File  
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/rest/utils.py", line 126, in  
_wrapped
[Sun Jan 22 10:44:42.707431 2017] [:error] [pid 4817] data =  
function(self, request, *args, **kw)
[Sun Jan 22 10:44:42.707435 2017] [:error] [pid 4817]   File  
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/rest/glance.py", line 163, in  
get
[Sun Jan 22 10:44:42.707440 2017] [:error] [pid 4817] request,  
filters=filters, **kwargs)
[Sun Jan 22 10:44:42.707444 2017] [:error] [pid 4817]   File  
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py", line 293, in  
image_list_detailed
[Sun Jan 22 10:44:42.707448 2017] [:error] [pid 4817]  
images_iter =  
glanceclient(request).images.list(page_size=request_size,
[Sun Jan 22 10:44:42.707452 2017] [:error] [pid 4817]   File  
"/usr/lib/python2.7/site-packages/horizon/utils/memoized.py", line  
90, in wrapped
[Sun Jan 22 10:44:42.707511 2017] [:error] [pid 4817] value =  
cache[key] = func(*args, **kwargs)
[Sun Jan 22 10:44:42.707519 2017] [:error] [pid 4817]   File  
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py", line 136, in  
glanceclient
[Sun Jan 22 10:44:42.707537 2017] [:error] [pid 4817] url =  
base.url_for(request, 'image')
[Sun Jan 22 10:44:42.707578 2017] [:error] [pid 4817]   File  
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/base.py", line 321, in  
url_for
[Sun Jan 22 10:44:42.707746 2017] [:error] [pid 4817] raise  
exceptions.ServiceCatalogException(service_type)
[Sun Jan 22 10:44:42.707753 2017] [:error] [pid 4817]  
ServiceCatalogException: Invalid service catalog service: image


Access_LOG

192.168.0.196 - - [22/Jan/2017:14:46:19 +0400] "GET  
/dashboard/project/images/ HTTP/1.1" 200 22767  
"http://192.168.0.172/dashboard/project/images; "Mozilla/5.0  
(Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0"
192.168.0.196 - - [22/Jan/2017:14:46:20 +0400] "GET  
/dashboard/i18n/js/horizon+openstack_dashboard/ HTTP/1.1" 200 2372  
"http://192.168.0.172/dashboard/project/images/; "Mozilla/5.0  
(Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0"
192.168.0.196 - - [22/Jan/2017:14:46:21 +0400] "POST  
/dashboard/api/policy/ HTTP/1.1" 200 17  
"http://192.168.0.172/dashboard/project/images; "Mozilla/5.0  
(Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0"
192.168.0.196 - - [22/Jan/2017:14:46:21 +0400] "GET  
/dashboard/api/keystone/user-session/ HTTP/1.1" 200 626  

[openstack-dev] [vitrage] Vitrage Hands-On Webinar

2017-01-23 Thread Afek, Ifat (Nokia - IL)
Hi,

Nokia is hosting a Webinar about Vitrage, tomorrow January 24th at 10:00 a.m. 
EDT/ 5:00 pm, Northern Europe Time.

This will be a hands-on lab, where Dan Offek (a Vitrage core developer) will 
present an overview of what Vitrage is all about, and guide you through the 
process of installing, configuring and experimenting with Vitrage.

You are welcome to register: http://go.nokia.com/UotW0pK01Y0JQR02d8000a4 

Best Regards,
Ifat.




Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-23 Thread lương hữu tuấn
Hi guys,

I am providing some information about the results of testing YAQL performance
on my devstack stable/newton with 6 GB of RAM. The workflow I created is
below:

#
input:
  - size
  - number_of_handovers

tasks:
  generate_input:
action: std.javascript
input:
  context:
size: <% $.size %>
  script: |
result = {}
for(i=0; i < $.size; i++) {
  result["key_" + i] = {
"alma": "korte"
  }
}
return result
publish:
  data: <% task(generate_input).result %>
on-success:
  - process

  process:
action: std.echo
input:
  output: <% $.data %>
publish:
  data: <% task(process).result %>
  number_of_handovers: <% $.number_of_handovers - 1 %>
on-success:
  - process: <% $.number_of_handovers > 0 %>

##

I tested with size = 1 and number_of_handovers = 50. The result
shows that the time for validating <% $.data %> is quite long. I do not
know whether this time is acceptable, but imagine that in our use case the
value of $.data could be quite large. A couple of log entries are below:

INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function
evaluate finished in 11262.710 ms

INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function
evaluate finished in 8146.324 ms

..

The average is around 10 s per evaluation.
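For anyone who wants a feel for the workload outside Mistral, the data generation and handover loop can be sketched in plain Python. This is a rough illustration only: the size value of 10000 is an assumption (the exact size used above is unclear), and a json serialize/parse round-trip merely approximates the traversal work the YAQL evaluator performs on each handover.

```python
import json
import time

def generate_input(size):
    # Mirrors the std.javascript task: a dict of `size` entries,
    # each mapping "key_<i>" to {"alma": "korte"}.
    return {"key_%d" % i: {"alma": "korte"} for i in range(size)}

data = generate_input(10000)          # assumed size, for illustration
payload = json.dumps(data)
print("payload: %.1f KB" % (len(payload) / 1024.0))

# Each handover re-publishes `data`, so the evaluator has to walk the
# whole structure again; one parse per handover is a crude stand-in
# for that per-evaluation cost.
start = time.time()
for _ in range(50):                   # number_of_handovers
    json.loads(payload)
print("50 handovers: %.3f s" % (time.time() - start))
```

Comparing the printed time against the ~10 s per evaluation reported above gives a sense of how much overhead the YAQL validation layer adds on top of raw data traversal.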

Br,

Tuan


On Mon, Jan 23, 2017 at 11:48 AM, lương hữu tuấn 
wrote:

> Hi Renat,
>
> For more details, I will check on the CBAM machine and hope the data has
> not been deleted yet, since we ran the test around a week ago.
> Another thing: Jinja2 ran 2-3 times faster than YAQL on the same test.
> I will provide more information later.
>
> Br,
>
> Tuan
>
> On Mon, Jan 23, 2017 at 8:32 AM, Renat Akhmerov 
> wrote:
>
>> Tuan,
>>
>> I don’t think that Jinja is something that Kirill is responsible for.
>> It’s just a coincidence that we in Mistral support both YAQL and Jinja. The
>> latter has been requested by many people so we finally did it.
>>
>> As for performance, could you please provide some numbers? When you
>> say “takes a lot of time”, how much time is it? For what kind of input? Why
>> do you think it is slow? What are your expectations? Provide as much info as
>> possible. After that we can ask the YAQL authors to comment and help if we
>> realize that the problem really exists.
>>
>> I’m interested in this too since I’m always looking for ways to speed
>> Mistral up.
>>
>> Thanks
>>
>> Renat Akhmerov
>> @Nokia
>>
>> On 18 Jan 2017, at 16:25, lương hữu tuấn  wrote:
>>
>> Hi Kirill,
>>
>> Do you have any information related to the performance of Jinja and YAQL
>> validation? With large input, YAQL runs quite slowly in our case,
>> so we plan to switch to Jinja.
>>
>> Br,
>>
>> @Nokia/Tuan
>>
>> On Tue, Jan 17, 2017 at 3:02 PM, lương hữu tuấn 
>> wrote:
>>
>>> Hi Kirill,
>>>
>>> Thank you for the information. I hope we will get more information
>>> about it. Please keep in touch when you guys at Mirantis have some
>>> performance results for YAQL.
>>>
>>> Br,
>>>
>>> @Nokia/Tuan
>>>
>>> On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev 
>>> wrote:
>>>
 I think the Fuel team encountered similar problems; I'd advise asking them.
 Also Stan (the author of yaql) might shed some light on the problem =)

 --
 Kirill Zaitsev
 Murano Project Tech Lead
 Software Engineer at
 Mirantis, Inc

 On 17 January 2017 at 15:11:52, lương hữu tuấn (tuantulu...@gmail.com)
 wrote:

 Hi,

 We are now using yaql in Mistral, and we see that the process of
 validating the YAQL expressions of an input takes a lot of time,
 especially with large input. Do you guys have any information about the
 performance of yaql?

 Br,

 @Nokia/Tuan



Re: [openstack-dev] [Neutron] PTL Candidacy

2017-01-23 Thread Jay Pipes

On 01/22/2017 09:35 PM, Kevin Benton wrote:

I would like to propose my candidacy for the Neutron PTL.

I have been contributing to Neutron since the Havana development
cycle working for a network vendor and then a distribution vendor.
I have been a core reviewer since the Kilo development cycle and
I am on the Neutron stable maintenance team as well as the drivers
team.

I have a few priorities that I would focus on as PTL:

* Cleanup and simplification of the existing code: In addition to
supporting the ongoing work of converting all data access into OVO
models, I would like the community to continue breaking down code using
the callback event system. We should eliminate as many
extension-specific mixins and special-cases from the core as possible so
it becomes very easy to reason about and stable from a code-churn
perspective. This approach forces us to add appropriate event
notifications to the core to build service plugins and drivers out of
tree without requiriing modifications to the core.


++ Great initiative, Kevin.

-jay



Re: [openstack-dev] [nova] [placement] [operators] Optional resource asking or not?

2017-01-23 Thread Sylvain Bauza


Le 23/01/2017 15:11, Jay Pipes a écrit :
> On 01/22/2017 04:40 PM, Sylvain Bauza wrote:
>> Hey folks,
>>
>> tl;dr: should we GET /resource_providers for only the related resources
>> that correspond to enabled filters?
> 
> No. Have administrators set the allocation ratios for the resources they
> do not care about exceeding capacity to a very high number.
> 
> If someone previously removed a filter, that doesn't mean that the
> resources were not consumed on a host. It merely means the admin was
> willing to accept a high amount of oversubscription. That's what the
> allocation_ratio is for.
> 
> The flavor should continue to have a consumed disk/vcpu/ram amount,
> because the VM *does actually consume those resources*. If the operator
> doesn't care about oversubscribing one or more of those resources, they
> should set the allocation ratios of those inventories to a high value.
> 
> No more adding configuration options for this kind of thing (or in this
> case, looking at an old configuration option and parsing it to see if a
> certain filter is listed in the list of enabled filters).
> 
> We have a proper system of modeling these data-driven decisions now, so
> my opinion is we should use it and ask operators to use the placement
> REST API for what it was intended.
> 

I see your point, but please consider mine.
What if an operator disabled CoreFilter in Newton and wants to upgrade
to Ocata?
All of that implementation landing very close to the deadline makes me
nervous, and I really want a seamless path for operators now using the
placement service.

Also, like I said in my longer explanation, we would need to modify a
huge number of assertions in our tests that say "meh, don't use all the
filters, just these ones". Pretty risky so close to a FF.

-Sylvain


> Best,
> -jay
> 
>> Explanation below why, even if I
>> know we have a current consensus, maybe we should discuss it again.
>>
>>
>> I'm still trying to implement https://review.openstack.org/#/c/417961/
>> but when trying to get the functional job being +1, I discovered that we
>> have at least one functional test [1] asking for just the RAMFilter (and
>> not for VCPUs or disks).
>>
>> Given the current PS is asking for all of CPU, RAM and disk, it's
>> trampling the current test by getting a NoValidHost.
>>
>> Okay, I could just modify the test and make sure we have enough
>> resources for the flavors but I actually now wonder if that's all good
>> for our operators.
>>
>> I know we have a consensus saying that we should still ask for both CPU,
>> RAM and disk at the same time, but I imagine our users coming back to us
>> saying "eh, look, I'm no longer able to create instances even if I'm not
>> using the CoreFilter" for example. It could be a bad day for them and
>> honestly, I'm not sure just adding documentation or release notes would
>> help them.
>>
>> What do you think if we say that, for this cycle only, we still ask
>> only for resources that are related to the enabled filters?
>> For example, say someone is disabling CoreFilter in the conf opt, then
>> the scheduler shouldn't ask for VCPUs to the Placement API.
>>
>> FWIW, we have another consensus about not removing
>> CoreFilter/RAMFilter/MemoryFilter because the CachingScheduler is still
>> using them (and not calling the Placement API).
>>
>> Thanks,
>> -Sylvain
>>
>> [1]
>> https://github.com/openstack/nova/blob/de0eff47f2cfa271735bb754637f979659a2d91a/nova/tests/functional/test_server_group.py#L48
>>
>>



Re: [openstack-dev] [nova] [placement] [operators] Optional resource asking or not?

2017-01-23 Thread Sylvain Bauza


Le 22/01/2017 22:40, Sylvain Bauza a écrit :
> Hey folks,
> 
> tl;dr: should we GET /resource_providers for only the related resources
> that correspond to enabled filters? Explanation below why, even if I
> know we have a current consensus, maybe we should discuss it again.
> 
> 
> I'm still trying to implement https://review.openstack.org/#/c/417961/
> but when trying to get the functional job being +1, I discovered that we
> have at least one functional test [1] asking for just the RAMFilter (and
> not for VCPUs or disks).
> 
> Given the current PS is asking for all of CPU, RAM and disk, it's
> trampling the current test by getting a NoValidHost.
> 
> Okay, I could just modify the test and make sure we have enough
> resources for the flavors but I actually now wonder if that's all good
> for our operators.
> 
> I know we have a consensus saying that we should still ask for both CPU,
> RAM and disk at the same time, but I imagine our users coming back to us
> saying "eh, look, I'm no longer able to create instances even if I'm not
> using the CoreFilter" for example. It could be a bad day for them and
> honestly, I'm not sure just adding documentation or release notes would
> help them.
> 
> What do you think if we say that, for this cycle only, we still ask
> only for resources that are related to the enabled filters?
> For example, say someone is disabling CoreFilter in the conf opt, then
> the scheduler shouldn't ask for VCPUs to the Placement API.
> 
> FWIW, we have another consensus about not removing
> CoreFilter/RAMFilter/MemoryFilter because the CachingScheduler is still
> using them (and not calling the Placement API).
> 

A quick follow-up:
First, I thought of some operators who already disable the DiskFilter
because they don't trust its calculations for shared disk.
We also have people who don't run the CoreFilter because they prefer
having only the compute claims do the math and don't care about
allocation ratios at all.

All those people would be trampled if we now began counting resources
based on things they explicitly disabled. That's why I updated my patch
series and wrote a quick verification of which filters are running:

https://review.openstack.org/#/c/417961/16/nova/scheduler/host_manager.py@640
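A minimal sketch of that idea, mapping each enabled filter to the resource class it implies and requesting only those amounts from placement, might look like this (all names here are hypothetical illustrations, not the actual nova code in the review above):

```python
# Hypothetical sketch of filter-aware resource requests; names are
# illustrative, not taken from the real patch.
FILTER_TO_RESOURCE = {
    'CoreFilter': 'VCPU',
    'RAMFilter': 'MEMORY_MB',
    'DiskFilter': 'DISK_GB',
}

def resources_from_flavor(flavor, enabled_filters):
    """Return only the resource amounts whose filters are enabled."""
    amounts = {
        'VCPU': flavor['vcpus'],
        'MEMORY_MB': flavor['memory_mb'],
        'DISK_GB': flavor['root_gb'],
    }
    wanted = {FILTER_TO_RESOURCE[f] for f in enabled_filters
              if f in FILTER_TO_RESOURCE}
    return {rc: amt for rc, amt in amounts.items() if rc in wanted}

flavor = {'vcpus': 4, 'memory_mb': 8192, 'root_gb': 40}
# CoreFilter disabled: the placement query should not ask for VCPU.
print(resources_from_flavor(flavor, ['RAMFilter', 'DiskFilter']))
# -> {'MEMORY_MB': 8192, 'DISK_GB': 40}
```

The resulting dict would then be turned into the `resources` portion of the GET /resource_providers request, so operators who disabled a filter keep their old behaviour through the upgrade.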

Ideally, I would refine that: modify the BaseFilter structure with a
method that returns the resource amounts needed by the RequestSpec, and
have it also disable the filter so that it always returns true (there is
no need to double-check the filter if the placement service already said
this compute is sane). That way, we could slowly but surely keep the
existing interface for optionally verifying resources (i.e. people would
still use filters) while the new logic is done by the Placement engine.

Given the very short window, that can be done in Pike, but at least
operators wouldn't be impacted in the upgrade path.

-Sylvain

> Thanks,
> -Sylvain
> 
> [1]
> https://github.com/openstack/nova/blob/de0eff47f2cfa271735bb754637f979659a2d91a/nova/tests/functional/test_server_group.py#L48
> 



Re: [openstack-dev] [nova] [placement] [operators] Optional resource asking or not?

2017-01-23 Thread Jay Pipes

On 01/22/2017 04:40 PM, Sylvain Bauza wrote:

Hey folks,

tl;dr: should we GET /resource_providers for only the related resources
that correspond to enabled filters?


No. Have administrators set the allocation ratios for the resources they 
do not care about exceeding capacity to a very high number.


If someone previously removed a filter, that doesn't mean that the 
resources were not consumed on a host. It merely means the admin was 
willing to accept a high amount of oversubscription. That's what the 
allocation_ratio is for.


The flavor should continue to have a consumed disk/vcpu/ram amount, 
because the VM *does actually consume those resources*. If the operator 
doesn't care about oversubscribing one or more of those resources, they 
should set the allocation ratios of those inventories to a high value.
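Concretely, an operator who does not care about CPU oversubscription would express that through the existing allocation-ratio options rather than by disabling a filter. The values below are purely illustrative:

```ini
[DEFAULT]
# Effectively "don't care" about CPU oversubscription: allow up to
# 1000 VCPUs per physical core. Illustrative value only.
cpu_allocation_ratio = 1000.0
# Keep RAM and disk at whatever ratios you actually want enforced.
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0
```

With this in place the flavor still consumes VCPU inventory in placement, but the inflated ratio means the host is never filtered out on CPU grounds.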


No more adding configuration options for this kind of thing (or in this 
case, looking at an old configuration option and parsing it to see if a 
certain filter is listed in the list of enabled filters).


We have a proper system of modeling these data-driven decisions now, so 
my opinion is we should use it and ask operators to use the placement 
REST API for what it was intended.


Best,
-jay

> Explanation below why even if I

know we have a current consensus, maybe we should discuss it again.


I'm still trying to implement https://review.openstack.org/#/c/417961/
but when trying to get the functional job being +1, I discovered that we
have at least one functional test [1] asking for just the RAMFilter (and
not for VCPUs or disks).

Given the current PS is asking for all of CPU, RAM and disk, it's
trampling the current test by getting a NoValidHost.

Okay, I could just modify the test and make sure we have enough
resources for the flavors but I actually now wonder if that's all good
for our operators.

I know we have a consensus saying that we should still ask for both CPU,
RAM and disk at the same time, but I imagine our users coming back to us
saying "eh, look, I'm no longer able to create instances even if I'm not
using the CoreFilter" for example. It could be a bad day for them and
honestly, I'm not sure just adding documentation or release notes would
help them.

What do you think if we say that, for this cycle only, we still ask
only for resources that are related to the enabled filters?
For example, say someone is disabling CoreFilter in the conf opt, then
the scheduler shouldn't ask for VCPUs to the Placement API.

FWIW, we have another consensus about not removing
CoreFilter/RAMFilter/MemoryFilter because the CachingScheduler is still
using them (and not calling the Placement API).

Thanks,
-Sylvain

[1]
https://github.com/openstack/nova/blob/de0eff47f2cfa271735bb754637f979659a2d91a/nova/tests/functional/test_server_group.py#L48






Re: [openstack-dev] [puppet] Nominating zhongshengping for core of the Puppet OpenStack modules

2017-01-23 Thread Iury Gregory
+1

2017-01-23 5:49 GMT-03:00 Ivan Berezovskiy :

> +1
>
> 2017-01-21 3:07 GMT+04:00 Emilien Macchi :
>
>> plus one
>>
>> On Fri, Jan 20, 2017 at 12:19 PM, Alex Schultz 
>> wrote:
>> > Hey Puppet Cores,
>> >
>> > I would like to nominate Zhong Shengping as a Core reviewer for the
>> > Puppet OpenStack modules. He has been an excellent contributor to our
>> > modules over the last several cycles. His stats for the last 90 days
>> > can be viewed here[0].
>> >
>> > Please respond with your +1 or any objections. If there are no
>> > objections by Jan 27 I will add him to the core list.
>> >
>> > Thanks,
>> > -Alex
>> >
>> > [0] http://stackalytics.com/report/contribution/puppet%20openstack-group/90
>> >
>>
>>
>>
>> --
>> Emilien Macchi
>>
>
>
>
> --
> Thanks, Ivan Berezovskiy
> Senior Deployment Engineer
> at Mirantis 
>
> slack: iberezovskiy
> skype: bouhforever
> phone: +7-960-343-42-46
>
>


-- 

Att[]'s
Iury Gregory Melo Ferreira
Master student in Computer Science at UFCG
Part of the puppet-manager-core team in OpenStack
E-mail: iurygreg...@gmail.com


Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-23 Thread Sean Dague
On 01/23/2017 08:11 AM, Chris Dent wrote:
> On Wed, 18 Jan 2017, Chris Dent wrote:
> 
>> The review starts with the original text. The hope is that
>> commentary here in this thread and on the review will eventually
>> lead to the best document.
> 
> https://review.openstack.org/#/c/421846
> 
> There's been a bit of commentary on the review which I'll try to
> summarize below. I hope people will join in. There have been plenty
> of people talking about this but unless you provide your input
> either here or on the review it will be lost.
> 
> Most of the people who have commented on the review are generally in
> favor of what's there with a few nits on details:
> 
> * Header changes should be noted as breaking compatibility/stability
> * Changing an error code should be signalled as a breaking change
> * The concept of extensions should be removed in favor of "version
>   boundaries"
> * The examples section needs to be modernized (notably getting rid
>   of XML)
> 
> There's some concern that "security fixes" (as a justification for a
> breaking change) is too broad and could be used too easily.
> 
> These all seem to be good practical comments that can be integrated
> into a future version but they are, as a whole, based upon a model
> of stability based around versioning and "signalling" largely in the
> form of microversions. This is not necessarily bad, but it doesn't
> address the need to come to mutual terms about what stability,
> compatibility and interoperability really mean for both users and
> developers. I hope we can figure that out.
> 
> If my read of what people have said in the past is correct at least
> one definition of HTTP API stability/compatibility is:
> 
>Any extant client code that works should continue working.
> 
> If that's correct then a stability guideline needs to serve two
> purposes:
> 
> * Enumerate the rare circumstances in which that rule may be broken
>   (catastrophic security/data integrity problems?).
> * Describe how to manage inevitable change (e.g., microversion,
>   macroversions, versioned media types) and what "version
>   boundaries" are.
> 
> And if that's correct then what we are really talking about is
> reaching consensus on how (or if) to manage versions. And that's
> where the real contention lies. Do we want to commit to
> microversions across the board? If we assert that versioning is
> something we need across the board then certainly we don't want to
> be using different techniques from service to service do we?
> 
> If you don't think those things above are correct or miss some
> nuance, I hope you will speak up.
> 
> Here's some internally-conflicting, hippy-dippy, personal opinion
> from me, just for the sake of grist for the mill because nobody else
> is yet coughing up:
> 
> I'm not sure I fully accept the original assertion. If extant client
> code is poor, perhaps because it allows the client to make an
> unhealthy demand upon a service, maybe it shouldn't be allowed? If
> way A to do something exists, but way B comes along that is better,
> are we doing a disservice to people's self-improvement by letting A
> continue? Breaking stuff can sometimes increase community
> engagement, whether that community is OpenStack at large or the
> community of users in any given deployment.

This counter-assertion seems a lot like blaming the consumer for trying
to use the software and getting something working, then pulling that
working thing out from under them with no warning.

We all inherited a bunch of odd and poorly defined behaviors in the
system we're using. They were made because at the time they seemed like
reasonable tradeoffs, and a couple of years later we learned more, or
needed to address a different use case that people didn't consider before.

If you don't guarantee that existing applications will work in the
future (for some reasonable window of time), it's a massive turn-off to
anyone deciding to use this interface at all. You suppress your user base.

If, when operators upgrade their OpenStack environments, their consumers
start complaining to them about things breaking, operators are going to
be much more reluctant to upgrade anything, ever.

If upgrades get harder for any reason, then getting security fixes
or features out to operators/users is not possible. They stop taking
them. And once they are far enough behind master, it's going to be
easier to move to something else entirely than to upgrade OpenStack,
which will effectively be something else entirely for their whole user
base anyway.

This is the spiral we are trying to avoid. It's the spiral we were in.
The one where people would show up to design summit sessions for years
saying "for the love of god can you people stop breaking everything
every release". The one where the only effective way to talk to two
"OpenStack Clouds" and get them to do the same thing for even
medium-complexity applications was to write your own intermediary layer.

This is a real 

Re: [openstack-dev] [tripleo] Atlanta PTG

2017-01-23 Thread John Trowbridge


On 01/21/2017 05:37 AM, Michele Baldessari wrote:
> Hi Emilien,
> 
> while not a design session per se, I would love to propose a short slot
> for TripleO CI Q&A, if we have some time left. In short, I'd like to be
> more useful around CI failures, but I lack the understanding of a few
> aspects of our current CI (promotion, when do images get built, etc.),
> that would benefit quite a bit from a short session where we have a few
> CI folks in the room that could answer questions or give some tips.
> I know of quite a few other people who are in the same boat, and maybe
> this will help a bit with our current issue where only a few folks always
> chase CI issues.
> 
> If there is consensus (and some CI folks willing to attend ;) and time
> for this, I'll be happy to organize this and prepare a bunch of
> questions ideas beforehand.
> 

Great idea. We have a room for three days, so it is not like summit
where there is really limited time.

> Thoughts?
> Michele
> 
> On Wed, Jan 04, 2017 at 07:26:52AM -0500, Emilien Macchi wrote:
>> I would like to bring this topic up on your inbox, so we can continue
>> to make progress on the agenda. Feel free to follow existing examples
>> in the etherpad and propose a design dession.
>>
>> Thanks,
>>
>> On Wed, Dec 21, 2016 at 9:06 AM, Emilien Macchi  wrote:
>>> General infos about PTG: https://www.openstack.org/ptg/
>>>
>>> Some useful informations about PTG/TripleO:
>>>
>>> * When? We have a room between Wednesday and Friday included.
>>> Important sessions will happen on Wednesday and Thursday. We'll
>>> probably have sessions on Friday, but it might be more hands-on and
>>> hackfest, where people can enjoy the day to work together.
>>>
>>> * Let's start to brainstorm our topics:
>>> https://etherpad.openstack.org/p/tripleo-ptg-pike
>>>   Feel free to add any topic, as soon as you can. We need to know asap
>>> which sessions will be share with other projects (eg: tripleo/mistral,
>>> tripleo/ironic, tripleo/heat, etc).
>>>
>>>
>>> Please let us know any question or feedback,
>>> Looking forward to seeing you there!
>>> --
>>> Emilien Macchi
>>
>>
>>
>> -- 
>> Emilien Macchi
>>
> 



[openstack-dev] [keystone]Error while setting up a keystone development environment

2017-01-23 Thread Daniel Gitu

Hello,

I'm new to all this and I need some help finding out where I went wrong.
This is a bit lengthy; I have left a blank line between the text and the
error messages I received.
I first set up and activated a virtual environment, then cloned the
keystone project into that environment.
I then cd'd into keystone, executed pip install -r requirements.txt,
and got the following errors:

Failed building wheel for cryptography
Failed cleaning build dir for cryptography
Failed building wheel for netifaces
Failed building wheel for pycrypto
Command "/home/grenouille/openstack/bin/python -u -c "import 
setuptools, 
tokenize;__file__='/tmp/pip-build-XqTJv_/cryptography/setup.py';f=getattr(tokenize, 
'open', open)(__file__);code=f.read().replace('\r\n', 
'\n');f.close();exec(compile(code, __file__, 'exec'))" install --record 
/tmp/pip-nFp6dT-record/install-record.txt 
--single-version-externally-managed --compile --install-headers 
/home/grenouille/openstack/include/site/python2.7/cryptography" failed 
with error code 1 in /tmp/pip-build-XqTJv_/cryptography/


The above errors were resolved by executing sudo apt-get install
build-essential libssl-dev libffi-dev python-dev and re-running
pip install -r requirements.txt.
I then ran sudo apt install tox and executed tox in the keystone directory.
As tox was installing dependencies, the first line read:

ERROR: invocation failed (exit code 1), logfile: 
/home/grenouille/openstack/keystone/.tox/docs/log/docs-1.log

ERROR: actionid: docs

The final error message read:

ERROR: could not install deps 
[-r/home/grenouille/openstack/keystone/test-requirements.txt, 
.[ldap,memcache,mongodb]]; v = 
InvocationError('/home/grenouille/openstack/keystone/.tox/docs/bin/pip 
install 
-chttps://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt 
-r/home/grenouille/openstack/keystone/test-requirements.txt 
.[ldap,memcache,mongodb] (see 
/home/grenouille/openstack/keystone/.tox/docs/log/docs-1.log)', 1)



Regards,
Daniel.






Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-23 Thread Chris Dent

On Wed, 18 Jan 2017, Chris Dent wrote:


The review starts with the original text. The hope is that
commentary here in this thread and on the review will eventually
lead to the best document.


https://review.openstack.org/#/c/421846

There's been a bit of commentary on the review which I'll try to
summarize below. I hope people will join in. There have been plenty
of people talking about this but unless you provide your input
either here or on the review it will be lost.

Most of the people who have commented on the review are generally in
favor of what's there with a few nits on details:

* Header changes should be noted as breaking compatibility/stability
* Changing an error code should be signalled as a breaking change
* The concept of extensions should be removed in favor of "version
  boundaries"
* The examples section needs to be modernized (notably getting rid
  of XML)

There's some concern that "security fixes" (as a justification for a
breaking change) is too broad and could be used too easily.

These all seem to be good practical comments that can be integrated
into a future version but they are, as a whole, based upon a model
of stability based around versioning and "signalling" largely in the
form of microversions. This is not necessarily bad, but it doesn't
address the need to come to mutual terms about what stability,
compatibility and interoperability really mean for both users and
developers. I hope we can figure that out.

If my read of what people have said in the past is correct at least
one definition of HTTP API stability/compatibility is:

   Any extant client code that works should continue working.

If that's correct then a stability guideline needs to serve two
purposes:

* Enumerate the rare circumstances in which that rule may be broken
  (catastrophic security/data integrity problems?).
* Describe how to manage inevitable change (e.g., microversion,
  macroversions, versioned media types) and what "version
  boundaries" are.

And if that's correct then what we are really talking about is
reaching consensus on how (or if) to manage versions. And that's
where the real contention lies. Do we want to commit to
microversions across the board? If we assert that versioning is
something we need across the board then certainly we don't want to
be using different techniques from service to service do we?

If you don't think those things above are correct or miss some
nuance, I hope you will speak up.

Here's some internally-conflicting, hippy-dippy, personal opinion
from me, just for the sake of grist for the mill because nobody else
is yet coughing up:

I'm not sure I fully accept the original assertion. If extant client
code is poor, perhaps because it allows the client to make an
unhealthy demand upon a service, maybe it shouldn't be allowed? If
way A to do something exists, but way B comes along that is better,
are we doing a disservice to people's self-improvement by letting A
continue? Breaking stuff can sometimes increase community
engagement, whether that community is OpenStack at large or the
community of users in any given deployment.

Many projects that do not currently have microversions (or another
system) need to manage change in some fashion. It seems backwards to
me that they must subscribe to eternal backwards compatibility when
they don't yet have a mechanism for managing forward motion. I
suppose the benefit of the tag being proposed is that it allows a
project to say "actually, for now, we're not worrying about that;
we'll let you know when we do". In which case they would then have
license to do what they like (and presumably adapt tempest as they
like).

Microversions are an interesting system. They allow for eternal
backwards compatibility by defaulting to being in the past unless
you actively choose a particular point in time or choose to be
always in the present with "latest". When I first started thinking
about this stability concept in the context of OpenStack I felt that
microversions were anti-stability because not only do they help
developers manage change, they give them license to change whenever
they are willing to create a new microversion. That seems contrary
to what I originally perceived as a desire to minimize change.

Further, microversions are a feature that is (as far as I know?)
implemented in a way unique to OpenStack. In other universes some
strategies for versioning are:

* don't ever change
* change aligned with semver of the "product"
* use macroversions in the URL or service definitions
* use versioned media-types (e.g.,
  'application/vnd.os.compute.servers+json; version=1.2') and
  content-negotiation (and keep urls always the same)
* hypermedia

I would guess we have enough commitment to microversions in
production that using something else would be nutbar, but it is
probably worth comparing with some of those systems so that we can
at least clearly state the benefits when making everyone settle in
the same place.
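Since much of the above turns on how a client picks a version, it may help to sketch the negotiation mechanic microversions rely on. This is an illustrative stdlib-only sketch under stated assumptions (the function names are mine, and the header convention in the comment is the general OpenStack-API-Version style, not any one project's actual implementation):

```python
# Illustrative sketch of microversion negotiation, loosely modelled on
# how a service might pick a version from a client's
# "OpenStack-API-Version" request header. Names are hypothetical.

def parse_version(text):
    """Parse '2.40' into the tuple (2, 40), so that 2.40 > 2.4."""
    major, minor = text.split(".")
    return int(major), int(minor)

def negotiate(requested, server_min, server_max):
    """Return the version to serve, or None if out of range.

    'latest' pins the client to the newest behaviour; omitting the
    header entirely defaults to server_min (i.e. "the stable past").
    """
    lo = parse_version(server_min)
    hi = parse_version(server_max)
    if requested is None:
        return lo
    if requested == "latest":
        return hi
    want = parse_version(requested)
    if lo <= want <= hi:
        return want
    return None

print(negotiate("2.40", "2.1", "2.42"))  # (2, 40)
print(negotiate(None, "2.1", "2.42"))    # (2, 1)
print(negotiate("2.50", "2.1", "2.42"))  # None
```

The tuple comparison is the whole trick: it keeps "eternal backwards compatibility by default" while still letting a client opt into the present.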

--
Chris Dent

Re: [openstack-dev] [tripleo] tripleoclient release : January 26th

2017-01-23 Thread Emilien Macchi
Reminder: we'll release tripleoclient this week.

Please let us know any blocker!
Thanks,

On Mon, Jan 16, 2017 at 9:32 AM, Emilien Macchi  wrote:
> One day I'll read calendars correctly :-)
> Client releases are next week, so we'll release tripleoclient by January 26th.
>
> Sorry for confusion.
>
> On Sun, Jan 15, 2017 at 6:41 PM, Emilien Macchi  wrote:
>> https://releases.openstack.org/ocata/schedule.html
>>
>> It's time to release python-tripleoclient this week.
>> We still have 15 bugs in progress targeted for ocata-3.
>> https://goo.gl/R2hO4Z
>>
>> Please triage them to pike-1 unless they are critical or high; those we
>> need to fix afterward and backport to stable/ocata.
>>
>> We'll release the client by Thursday 19th end of day.
>> Please let us know any blocker,
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Giulio Fidente
On 01/23/2017 11:07 AM, Saravanan KR wrote:
> Thanks John for the info.
> 
> I am going through the spec in detail. And before that, I had few
> thoughts about how I wanted to approach this, which I have drafted in
> https://etherpad.openstack.org/p/tripleo-derive-params. And it is not
> 100% ready yet, I was still working on it.

I've linked this etherpad for the session we'll have at the PTG

> As of now, there are few differences on top of my mind, which I want
> to highlight, I am still going through the specs in detail:
> * Profiles vs Features - Considering an overcloud node as a profile
> rather than a node which can host these features, would have
> limitations to it. For example, if i need a Compute node to host both
> Ceph (OSD) and DPDK, then the node will have multiple profiles or we
> have to create a profile like -
> hci_enterprise_many_small_vms_with_dpdk? The first one is not
> appropriate and the latter is not scalable; maybe something else is in
> your mind?
> * Independent - The initial plan of this was to be independent
> execution, also can be added to deploy if needed.
> * Not to expose/duplicate parameters which are straight forward, for
> example tuned-profile name should be associated with feature
> internally, Workflows will decide it.

for all of the above, I think we need to decide if we want the
optimizations to be profile-based and gathered *before* the overcloud
deployment is started or if we want to set these values during the
overcloud deployment basing on the data we have at runtime

seems like both approaches have pros and cons and this would be a good
conversation to have with more people at the PTG

> * And another thing, which I couldn't get is, where will the workflow
> actions be defined, in THT or tripleo_common?

to me it sounds like executing the workflows before stack creation is
started would be fine, at least for the initial phase

running workflows from Heat depends on the other blueprint/session we'll
have about the WorkflowExecution resource and once that will be
available, we could trigger the workflow execution from tht if beneficial

> The requirements which I thought of, for deriving workflow are:
> Parameter Deriving workflow should be
> * independent to run the workflow
> * take basic parameters inputs, for easy deployment, keep very minimal
> set of mandatory parameters, and rest as optional parameters
> * read introspection data from Ironic DB and Swift-stored blob
> 
> I will add these comments as starting point on the spec. We will work
> towards bringing down the differences, so that operators headache is
> reduced to a greater extent.

thanks

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [horizon] feature freeze exception request -- nova simple tenant usages api pagination

2017-01-23 Thread Radomir Dopieralski
Yes, to do it differently we need to add the microversion support patch
that you are working on, and make use of it, or write a patch that has
equivalent functionality.
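For readers skimming the quoted thread: the VersionManager bug discussed below boils down to parsing microversions as floating point numbers. A minimal illustration of why that fails (hypothetical helper names, not Horizon's actual code):

```python
# Why "2.40" must not be parsed as a float: microversion minors are
# integers, not decimal fractions, so 2.40 and 2.4 are different
# versions even though float("2.40") == float("2.4").

def as_float(text):          # the buggy approach
    return float(text)

def as_tuple(text):          # the correct approach
    major, minor = text.split(".")
    return int(major), int(minor)

print(as_float("2.40") == as_float("2.4"))   # True  -- versions collide
print(as_tuple("2.40") == as_tuple("2.4"))   # False -- kept distinct
print(as_tuple("2.40") > as_tuple("2.9"))    # True  -- minor 40 > minor 9
```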

On Fri, Jan 20, 2017 at 6:57 PM, Rob Cresswell  wrote:

> Just a thought: With the way we currently do microversions, wouldn't this
> request 2.40 for every request? There's a pretty good chance that would
> break things.
>
> Rob
>
> On 20 January 2017 at 00:02, Richard Jones  wrote:
>
>> FFE granted for the three patches. We need to support that nova API
>> change.
>>
>> On 20 January 2017 at 01:28, Radomir Dopieralski 
>> wrote:
>> > I would like to request a feature freeze exception for the following
>> patch:
>> >
>> > https://review.openstack.org/#/c/410337
>> >
>> > This patch adds support for retrieving the simple tenant usages from
>> Nova in
>> > chunks, and it is necessary for correct data given that related patches
>> have
>> > been already merged in Nova. Without
>> > it, the data received will be truncated.
>> >
>> > In order to actually use that patch, however, it is necessary to set the
>> > Nova API version to at least
>> > version 2.40. For this, it's necessary to also add this patch:
>> >
>> > https://review.openstack.org/422642
>> >
>> > However, that patch will not work, because of a bug in the
>> VersionManager,
>> > which for some reason
>> > uses floating point numbers for specifying versions, and thus
>> understands
>> > 2.40 as 2.4. To fix that, it
>> > is also necessary to merge this patch:
>> >
>> > https://review.openstack.org/#/c/410688
>> >
>> > I would like to request an exception for all those three patches.
>> >
>> > An alternative to this would be to finish and merge the microversion
>> > support, and modify the first patch to make use of it. Then we would
>> need
>> > exceptions for those two patches.
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [cloudkitty] [ptl] Candidacy for cloudkitty PTL

2017-01-23 Thread Christophe Sauthier

Hello everyone,

I would like to announce my candidacy for PTL of Cloudkitty.

During the Ocata cycle we have been able to open up our community with the
integration of some new contributors and new cores from different
companies (which was key from my point of view).
We have also been able to add many improvements, mainly to ease the usage
and configuration of cloudkitty.


During the Pike cycle the focus I am looking for is to extend the
spectrum of cloudkitty integration (being able to fetch more metrics for
more services), to continue to help developers when they want to
participate in cloudkitty (which is already ongoing work) and to
continue to extend the collaboration with other OpenStack projects.
Finally, I have also decided to continue working to support the wider
ecosystem adoption of Cloudkitty as the best solution for chargeback and
rating.


I would also like to take this opportunity to thank all members of the 
OpenStack community who helped our team during the last cycles.


Thank you,

Christophe Sauthier



Christophe Sauthier   Mail : 
christophe.sauth...@objectif-libre.com

CEO   Mob : +33 (0) 6 16 98 63 96
Objectif LibreURL : www.objectif-libre.com
Au service de votre Cloud Twitter : @objectiflibre

Suivez les actualités OpenStack en français en vous abonnant à la Pause 
OpenStack

http://olib.re/pause-openstack



Re: [openstack-dev] [Performance][Shaker]

2017-01-23 Thread Ilya Shakhat
Hi Sai,

In UDP testing, PPS represents packets sent by the iperf client to the
server. Loss is the percentage of packets that were not received by the
server (more specifically, the server tracks packets and sums the gaps
between them: https://github.com/esnet/iperf/blob/3.0.7/src/iperf_udp.c#L64).
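The gap-summing described above can be sketched in a few lines (an illustrative Python transcription of the idea, not iperf's actual C code):

```python
# Sketch of how an iperf3-style UDP server derives loss from sequence
# numbers: whenever a packet arrives with a higher number than
# expected, the gap is added to the lost-packet count.

def count_lost(sequence_numbers):
    """Count lost packets from in-order received sequence numbers (1-based)."""
    expected = 1
    lost = 0
    for seq in sequence_numbers:
        if seq > expected:
            lost += seq - expected   # the gap = packets that never arrived
        expected = seq + 1
    return lost

received = [1, 2, 5, 6, 10]          # packets 3, 4, 7, 8, 9 missing
lost = count_lost(received)
sent = received[-1]                  # the client reports how many it sent
print(lost, "%.0f%%" % (100.0 * lost / sent))  # 5 50%
```

So the reported PPS counts what the client sent, while loss is the fraction of those the server never saw.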

While reported PPS depends on bandwidth and concurrency it makes sense to
increase them until loss starts going up, meaning that the communication
channel is near the limit.

Thanks,
Ilya

2017-01-21 1:19 GMT+04:00 Sai Sindhur Malleni :

> Hey,
>
> When using the "iperf3" class in shaker for looking at UDP small packet
> performance, we see that as we scale up the concurrency the average PPS
> goes up and also the loss % increases. Is the loss % a percentage of the
> PPS or does the PPS only represent successful transmissions? Thanks!
>
> --
> Sai Sindhur Malleni
> Software Engineer
> Red Hat Inc.
> 100 East Davie Street
> Raleigh, NC, USA
> Work: (919) 754-4557 | Cell: (919) 985-1055
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [Openstack-operators] allowed_address_pairs for port in neutron

2017-01-23 Thread Dale Smith
Hi George,

It would be worth checking to see if that extension is available in your
installation:

$ openstack extension list --network -c Name -c Alias
+-----------------------+-----------------------+
| Name                  | Alias                 |
+-----------------------+-----------------------+
| ...                   | ...                   |
| Allowed Address Pairs | allowed-address-pairs |
| ...                   | ...                   |
+-----------------------+-----------------------+

We've documented a full working example here, which may be useful, but I
can't see anything incorrect with your request.
http://docs.catalystcloud.io/tutorials/deploying-highly-available-instances-with-keepalived.html#allowed-address-pairs
(see further down under 'Virtual Address Setup' for CLI command
examples)

Cheers,
Dale
On 23/01/17 10:41, George Shuklin wrote:
> Hello. I'm trying to allow more than one IP on interface for tenant,
> but neutron (Mitaka) rejects my requests: $ neutron port-update
> b59bc3bb-7d34-4fbb-8e55-a9f1c5c88411 --allowed-address-pairs type=dict
> list=true ip_address=10.254.15.4 Unrecognized attribute(s)
> 'allowed_address_pairs' Neutron server returns request_ids:
> ['req-9168f1f4-6e78-42fb-8521-c69b1cfd4f67'] Has someone done this? Can
> you show your commands to neutron and name the version you are using?
> Thanks. ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] allowed_address_pairs for port in neutron

2017-01-23 Thread shiva m
I have done it the same way on Juno; it worked for me.

neutron port-update 63c9933f-7ecb-4f29-9a34-faece384530d \
--allowed-address-pairs type=dict list=true \
mac_address='fa:16:3e:89:11:22',ip_address='10.0.2.0/24' \
mac_address='fa:16:3e:89:33:44',ip_address='10.0.3.0/24'

Thanks,
Shiva
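For reference, both CLI forms in this thread end up as a PUT against the Networking v2.0 ports resource. A stdlib-only sketch of the request body (the port UUID is taken from the thread; this assumes the allowed-address-pairs extension is loaded server-side, otherwise the API returns the "Unrecognized attribute(s)" error seen above):

```python
import json

# Body for: PUT /v2.0/ports/{port_id}
# This is what `neutron port-update ... --allowed-address-pairs
# type=dict list=true ip_address=...` serializes to.
port_id = "b59bc3bb-7d34-4fbb-8e55-a9f1c5c88411"
body = {
    "port": {
        "allowed_address_pairs": [
            {"ip_address": "10.254.15.4"},
            # mac_address is optional; it defaults to the port's own MAC
        ]
    }
}
print("PUT /v2.0/ports/%s" % port_id)
print(json.dumps(body, indent=2))
```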


On Mon, Jan 23, 2017 at 4:11 PM, George Shuklin 
wrote:

> Hello.
>
> I'm trying to allow more than one IP on interface for tenant, but neutron
> (Mitaka) rejects my requests:
>
> $ neutron port-update b59bc3bb-7d34-4fbb-8e55-a9f1c5c88411
> --allowed-address-pairs type=dict list=true ip_address=10.254.15.4
>
> Unrecognized attribute(s) 'allowed_address_pairs'
> Neutron server returns request_ids: ['req-9168f1f4-6e78-42fb-8521-
> c69b1cfd4f67']
>
> Has someone done this? Can you show your commands to neutron and name
> the version you are using?
>
>
> Thanks.
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-23 Thread lương hữu tuấn
Hi Renat,

For more details, I will check on the CBAM machine and hope the data is not
deleted yet, since we ran the test around a week ago.
Another thing: Jinja2 ran 2-3 times faster than YAQL on the same test.
I will also provide more information later.

Br,

Tuan
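To put a number on the "big input": the test workflow's std.javascript task can be transcribed to Python to reproduce the payload (a sketch; the ~2 MB figure mentioned in the thread depends on the chosen size):

```python
import json

def generate_input(size):
    # Python equivalent of the workflow's std.javascript task:
    #   result["key_" + i] = {"alma": "korte"}
    return {"key_%d" % i: {"alma": "korte"} for i in range(size)}

data = generate_input(10000)
payload = json.dumps(data)
print(len(data), "entries,", len(payload), "bytes serialized")
# Every <% $.data %> evaluation has to validate/traverse this whole
# structure, which is why the cost grows with input size.
```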

On Mon, Jan 23, 2017 at 8:32 AM, Renat Akhmerov 
wrote:

> Tuan,
>
> I don’t think that Jinja is something that Kirill is responsible for. It’s
> just a coincidence that we in Mistral support both YAQL and Jinja. The
> latter has been requested by many people so we finally did it.
>
> As far as performance, could you please provide some numbers? When you say
> “takes a lot of time” how much time is it? For what kind of input? Why do
> you think it is slow? What are your expectations?Provide as much info as
> possible. After that we can ask YAQL authors to comment and help if we
> realize that the problem really exists.
>
> I’m interested in this too since I’m always looking for ways to speed
> Mistral up.
>
> Thanks
>
> Renat Akhmerov
> @Nokia
>
> On 18 Jan 2017, at 16:25, lương hữu tuấn  wrote:
>
> Hi Kirill,
>
> Do you have any information related to the performance of Jinja and Yaql
> validating? With a big input, yaql runs quite slow in our case,
> therefore we plan to switch to jinja.
>
> Br,
>
> @Nokia/Tuan
>
> On Tue, Jan 17, 2017 at 3:02 PM, lương hữu tuấn 
> wrote:
>
>> Hi Kirill,
>>
>> Thank you for your information. I hope we will have more information about
>> it. Just keep in touch when you guys at Mirantis have some performance
>> results about Yaql.
>>
>> Br,
>>
>> @Nokia/Tuan
>>
>> On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev 
>> wrote:
>>
>>> I think the fuel team encountered similar problems, I'd advise asking them
>>> around. Also Stan (the author of yaql) might shed some light on the problem =)
>>>
>>> --
>>> Kirill Zaitsev
>>> Murano Project Tech Lead
>>> Software Engineer at
>>> Mirantis, Inc
>>>
>>> On 17 January 2017 at 15:11:52, lương hữu tuấn (tuantulu...@gmail.com)
>>> wrote:
>>>
>>> Hi,
>>>
>>> We are now using yaql in mistral and what we see that the process of
>>> validating yaql expression of input takes a lot of time, especially with
>>> the big size input. Do you guys have any information about performance of
>>> yaql?
>>>
>>> Br,
>>>
>>> @Nokia/Tuan
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [kolla-ansible] [kolla] Am I doing this wrong?

2017-01-23 Thread Paul Bourke

Hi Kris,

Thanks for the feedback, I think everyone involved in kolla-ansible 
should take the time to read through it, as it definitely highlights 
some areas that we need to improve.


There's a lot of questions here, so I haven't gone into too much detail 
on any specific one; my hope is that I can clear up the majority of it 
and then we can follow up on some of the topics that require more 
discussion.


Hope it helps,
-Paul

>  * I need to define a number of servers in my inventory outside of
> the specific servers that I want to perform actions against.  I need
> to define groups baremetal, rabbitmq, memcached, and control (in
> addition to the glance-specific groups); most of these seem to be
> gathering information for config? (Baremetal was needed solely to try
> to run the bootstrap play)

You only need to define the top level groups, i.e. control, network, 
storage, monitoring, etc. If you don't want or have dedicated nodes for 
each of these groups it's fine to put the same node into multiple 
groups. So for example, if you're not interested in monitoring right 
now, you can just put your control node(s) under this and forget about 
it. The groups marked with [*:children] (e.g. bootstrap) are "groups of 
groups" and you shouldn't need to modify these at all.
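As a sketch of that advice, a minimal multinode inventory where a single host fills every top-level group might look like the following (group names as in the stock kolla-ansible multinode inventory; the host name is illustrative):

```ini
# Top-level groups only; the [*:children] groups shipped in the stock
# inventory (bootstrap, baremetal, glance, ...) resolve from these.
[control]
node1

[network]
node1

[compute]
node1

[storage]
node1

[monitoring]
node1
```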


> Running a change specifically against
> "glance" causes fact gathering on a number of other servers not
> specifically where glance is running?  My concern here is that I
> want to be able to run kolla-ansible against a specific service and
> know that only those servers are being logged into.

The fact gathering on every server is a compromise taken by Kolla to 
work around limitations in Ansible. It works well for the majority of 
situations; for more detail and potential improvements on this please 
have a read of this post: 
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107833.html


> * I want to run a dry-run only, being able to see what will happen
> before it happens, not during; during makes it really hard to see
> what will happen until it happens. Also supporting  `ansible --diff`
> would really help in understanding what will be changed (before it
> happens).

Agree a dry run would be useful, I believe it came up during the 
Barcelona design summit but has not yet been looked at. The ansible 
--diff sounds like something we could easily do, if you could log a 
blueprint at blueprints.launchpad.net/kolla-ansible I think that would help.


> * Database tasks are run on every deploy, and the "change DB
> permissions" task always reports as changed? Even when nothing happens,
> which makes you wonder "what changed"?

This shouldn't be the case, I just double checked taking Glance as an 
example, it reports "ok" (no change) for all runs after the initial 
deploy. Perhaps you've come across a bug, if you think this is the case 
please log one.


> Also, Can someone tell me why the DB stuff is done on a
> deployment task?  Seems like the db checks/migration work should
> only be done on a upgrade or a bootstrap?

Deploy includes bootstrap, but bootstrap is only done if the database is 
not found (or on upgrade). Again it sounds like you're coming across 
some unusual behavior here, suggest checking in with us on 
#openstack-kolla or filing a bug.


> * Database services (at least the ones we have) are not managed by our
> team, so we don't want kolla-ansible touching those (since it won't be
> able to). No way to mark the DB as "externally managed"?  IE we dont
> have permissions to create databases or add users.  But we got all
> other permissions on the databases that are created, so normal
> db-manage tooling works.

This is definitely something we need - I'm pretty sure I saw something 
around this in the review queue very recently. I can't find it off hand 
so hopefully someone can chip in here on the status of this work.


> * Maintenance level operations; doesn't seem to be any built-in to
> say 'take a server out  of a production state, deploy to it, test
> it, put it back into production'  Seems like if kolla-ansible is
> doing haproxy for API's, it should be managing this?  Or an
> extension point to allow us to run our own maintenance/testing 
scripts?


Again, discussed, needs to happen, but not there as of yet.

> * Config must come from kolla-ansible and generated templates.  I
> know we have a patch up for externally managed service
> configuration.  But if we aren't suppose to use kolla-ansible for
> generating configs (see below), why cant we override this piece?

I'm not quite following you here; the config templates from 
kolla-ansible are one of its stronger pieces, IMO: they're reasonably 
well tested and maintained. What leads you to believe they shouldn't be 
used?


> * Certain parts of it are 'reference only' (the config tasks),
> are not recommended


[Openstack-operators] allowed_address_pairs for port in neutron

2017-01-23 Thread George Shuklin

Hello.

I'm trying to allow more than one IP on an interface for a tenant, but 
neutron (Mitaka) rejects my requests:


$ neutron port-update b59bc3bb-7d34-4fbb-8e55-a9f1c5c88411 
--allowed-address-pairs type=dict list=true ip_address=10.254.15.4


Unrecognized attribute(s) 'allowed_address_pairs'
Neutron server returns request_ids: 
['req-9168f1f4-6e78-42fb-8521-c69b1cfd4f67']


Has someone done this? Can you show your commands to neutron and name 
the version you are using?



Thanks.




[openstack-dev] [all] PTG deadline reminders

2017-01-23 Thread Thierry Carrez
Hi everyone,

The PTG is less than a month away ! We have two deadlines coming up this
week.

First, if you haven't registered yet but intend to come, you should
probably book now. There are less than 65 tickets left as of this
morning, so it is very likely to sell out soon. Prices will also
increase in two days, at the end of the day on Wednesday, January 25th
(from $100 to $150). So book now if you want to secure your attendance!

https://pikeptg.eventbrite.com/

Second, our hotel block in the hotel where the event happens is closing
this Friday, January 27th. Booking there ensures you can maximize your
time with other event attendees and make the most of the event. It also
helps supporting the financial model behind the event. Book before
Friday using this link:

https://www.starwoodmeeting.com/events/start.action?id=1609140999=381BF4AA

Thanks!

-- 
Thierry Carrez (ttx)



[openstack-dev] [charms][ptl] PTL candidacy

2017-01-23 Thread James Page
Hi All

I would like to announce my candidacy for PTL of the OpenStack Charms
project.

Over the Ocata cycle, we've been incubating the community of developers
around the Charms, with new charms for Murano, Trove, Mistral and
CloudKitty all due to be included in the release in February.

We've also started to engage successfully with the vendor ecosystem around
OpenStack, with PLUMgrid, Calico and 6wind all working towards alignment and
inclusion in the OpenStack Charm release.

This is all helping to diversify the development community around the
Charms.

I'll continue to work to support the wider ecosystem adoption of the Charms
as a great way to deploy and manage OpenStack.

We've made some in-roads into improving the developer experience for charm
authors, but I feel there is still progress to be made so I will continue to
focus on this aspect of the project during the Pike cycle.

I look forward to working with the team and steering the project for another
cycle!

Cheers

James


Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Saravanan KR
Thanks John for the info.

I am going through the spec in detail. And before that, I had few
thoughts about how I wanted to approach this, which I have drafted in
https://etherpad.openstack.org/p/tripleo-derive-params. And it is not
100% ready yet, I was still working on it.

As of now, there are few differences on top of my mind, which I want
to highlight, I am still going through the specs in detail:
* Profiles vs Features - Considering an overcloud node as a profile
rather than a node which can host these features, would have
limitations to it. For example, if i need a Compute node to host both
Ceph (OSD) and DPDK, then the node will have multiple profiles or we
have to create a profile like -
hci_enterprise_many_small_vms_with_dpdk? The first one is not
appropriate and the latter is not scalable; maybe something else is in
your mind?
* Independent - The initial plan of this was to be independent
execution, also can be added to deploy if needed.
* Not to expose/duplicate parameters which are straightforward; for
example, the tuned profile name should be associated with the feature
internally, and the workflows will decide it.
* And another thing, which I couldn't get is, where will the workflow
actions be defined, in THT or tripleo_common?


The requirements which I thought of, for deriving workflow are:
Parameter Deriving workflow should be
* independent to run the workflow
* take basic parameters inputs, for easy deployment, keep very minimal
set of mandatory parameters, and rest as optional parameters
* read introspection data from Ironic DB and Swift-stored blob

I will add these comments as starting point on the spec. We will work
towards bringing down the differences, so that operators' headache is
reduced to a greater extent.
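To make the deriving idea concrete, here is a hedged sketch of computing a few DPDK-related THT parameters from NUMA introspection data. The field names mimic ironic-inspector output, and the allocation policy (first core per node for the host, one sibling pair for PMD threads, the rest for VMs) is a simplified assumption of mine, not the spec's final logic:

```python
# Hypothetical derivation of DPDK THT parameters from introspection
# data. Both the NUMA layout below and the allocation policy are
# illustrative assumptions.

introspection = {
    "numa_topology": {
        "cpus": [
            {"cpu": 0, "numa_node": 0, "thread_siblings": [0, 8]},
            {"cpu": 1, "numa_node": 0, "thread_siblings": [1, 9]},
            {"cpu": 2, "numa_node": 0, "thread_siblings": [2, 10]},
            {"cpu": 3, "numa_node": 0, "thread_siblings": [3, 11]},
        ]
    }
}

def derive_dpdk_params(data):
    cpus = data["numa_topology"]["cpus"]
    host = cpus[0]["thread_siblings"]      # reserved for the host OS
    pmd = cpus[1]["thread_siblings"]       # PMD threads for OVS-DPDK
    vm = [t for c in cpus[2:] for t in c["thread_siblings"]]
    return {
        "NeutronDpdkCoreList": ",".join(map(str, pmd)),
        "ComputeHostCpusList": ",".join(map(str, host)),
        "NovaVcpuPinset": ",".join(map(str, vm)),
    }

params = derive_dpdk_params(introspection)
print(params)
# {'NeutronDpdkCoreList': '1,9', 'ComputeHostCpusList': '0,8',
#  'NovaVcpuPinset': '2,10,3,11'}
```

A workflow like this is what lets the operator supply only high-level inputs while the thread-sibling bookkeeping stays internal.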

Regards,
Saravanan KR

On Fri, Jan 20, 2017 at 9:56 PM, John Fulton  wrote:
> On 01/11/2017 11:34 PM, Saravanan KR wrote:
>>
>> Thanks John, I would really appreciate if you could tag me on the
>> reviews. I will do the same for mine too.
>
>
> Hi Saravanan,
>
> Following up on this, have a look at the OS::Mistral::WorflowExecution
> Heat spec [1] to trigger Mistral workflows. I'm hoping to use it for
> deriving THT parameters for optimal resource isolation in HCI
> deployments as I mentioned below. I have a spec [2] which describes
> the derivation of the values, but this is provided as an example for
> the more general problem of capturing the rules used to derive the
> values so that deployers may easily apply them.
>
> Thanks,
>   John
>
> [1] OS::Mistral::WorflowExecution https://review.openstack.org/#/c/267770/
> [2] TripleO Performance Profiles https://review.openstack.org/#/c/423304/
>
>> On Wed, Jan 11, 2017 at 8:03 PM, John Fulton  wrote:
>>>
>>> On 01/11/2017 12:56 AM, Saravanan KR wrote:


 Thanks Emilien and Giulio for your valuable feedback. I will start
 working towards finalizing the workbook and the actions required.
>>>
>>>
>>>
>>> Saravanan,
>>>
>>> If you can add me to the review for your workbook, I'd appreciate it. I'm
>>> trying to solve a similar problem, of computing THT params for HCI
>>> deployments in order to isolate resources between CephOSDs and
>>> NovaComputes,
>>> and I was also looking to use a Mistral workflow. I'll add you to the
>>> review
>>> of any related work, if you don't mind. Your proposal to get NUMA info
>>> into
>>> Ironic [1] helps me there too. Hope to see you at the PTG.
>>>
>>> Thanks,
>>>   John
>>>
>>> [1] https://review.openstack.org/396147
>>>
>>>
> would you be able to join the PTG to help us with the session on the
> overcloud settings optimization?


 I will come back on this, as I have not planned for it yet. If it
 works out, I will update the etherpad.

 Regards,
 Saravanan KR


 On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente 
 wrote:
>
>
> On 01/04/2017 09:13 AM, Saravanan KR wrote:
>>
>>
>>
>> Hello,
>>
>> The aim of this mail is to ease the DPDK deployment with TripleO. I
>> would like to see if the approach of deriving THT parameter based on
>> introspection data, with a high level input would be feasible.
>>
>> Let me brief on the complexity of certain parameters, which are
>> related to DPDK. Following parameters should be configured for a good
>> performing DPDK cluster:
>> * NeutronDpdkCoreList (puppet-vswitch)
>> * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
>> review)
>> * NovaVcpuPinset (puppet-nova)
>>
>> * NeutronDpdkSocketMemory (puppet-vswitch)
>> * NeutronDpdkMemoryChannels (puppet-vswitch)
>> * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
>> * Interface to bind DPDK driver (network config templates)
>>
>> The complexity of deciding some of these parameters is explained in
>> the blog [1], where the CPUs has to be chosen in accordance with the

Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-23 Thread Flavio Percoco

On 12/01/17 08:11 -0500, Zane Bitter wrote:

On 11/01/17 10:01, Thomas Herve wrote:

On Wed, Jan 11, 2017 at 3:34 PM, Emilien Macchi  wrote:

On Wed, Jan 11, 2017 at 2:50 AM, Thomas Herve  wrote:

I think this is going where I thought it would: let's not do anything.
The image resource is there for v1 compatibility, but there is no
point to have a v2 resource, at least right now.


If we do nothing, we force our heat-template users to keep Glance v1
API enabled in their cloud (+ running Glance Registry service), which
at some point doens't make sense, since Glance team asked to moved
forward with Glance v2 API.

I would really recommend to move forward and stop ignoring the new API version.


Emilien was right: by defaulting to Glance v1, we still required it
for the image constraint, which is used everywhere like servers and
volumes. We can easily switch to v2 for this, I'll do it right away.


For those following along at home, this merged: 
https://review.openstack.org/#/c/418987/


Patch to deprecate the resource type: 
https://review.openstack.org/#/c/419043/


Thanks for the work here, folks!
Flavio

--
@flaper87
Flavio Percoco



