Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-23 Thread Renat Akhmerov
Ok, thanks. That looks more clear now.

Renat Akhmerov
@Nokia

> On 24 Jan 2017, at 14:15, lương hữu tuấn  wrote:
> 
> Hi Renat,
> 
> In short, it is the expression: output: <% $.data %>
> 
> I would like to post the workflow too, since it makes it easier to 
> understand the whole picture (IMHO :)). In this case, the data is quite 
> big, AFAIK around 2MB. Therefore I would just like to know more about the 
> performance of YAQL (if we have such information); I myself do not judge 
> YAQL in this case.
> 
> Br,
> 
> Tuan
> 
> On Tue, Jan 24, 2017 at 6:09 AM, Renat Akhmerov wrote:
> While I’m in the loop regarding how this workflow works, others may not be. 
> Could you please just post the expression and data that you use to evaluate 
> this expression? And the times. The workflow itself has nothing to do with 
> what we’re discussing.
> 
> Renat Akhmerov
> @Nokia
> 
>> On 23 Jan 2017, at 21:44, lương hữu tuấn wrote:
>> 
>> Hi guys,
>> 
>> I am providing some information about the results of testing YAQL 
>> performance on my devstack stable/newton with 6GB of RAM. The workflow I 
>> created is below:
>> 
>> #
>> input:
>>   - size
>>   - number_of_handovers
>> 
>> tasks:
>>   generate_input:
>>     action: std.javascript
>>     input:
>>       context:
>>         size: <% $.size %>
>>       script: |
>>         result = {}
>>         for(i=0; i < $.size; i++) {
>>           result["key_" + i] = {
>>             "alma": "korte"
>>           }
>>         }
>>         return result
>>     publish:
>>       data: <% task(generate_input).result %>
>>     on-success:
>>       - process
>> 
>>   process:
>>     action: std.echo
>>     input:
>>       output: <% $.data %>
>>     publish:
>>       data: <% task(process).result %>
>>       number_of_handovers: <% $.number_of_handovers - 1 %>
>>     on-success:
>>       - process: <% $.number_of_handovers > 0 %>
>> 
>> ##
>> 
>> I tested with size set to 1 and number_of_handovers set to 50. The results 
>> show that the time for validating <% $.data %> is quite long. I do not know 
>> whether this time is acceptable, but imagine that in our use case the value 
>> of $.data could be large. A couple of log lines are below:
>> 
>> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function 
>> evaluate finished in 11262.710 ms
>> 
>> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function 
>> evaluate finished in 8146.324 ms
>> 
>> ..
>> 
>> The average is around 10s for each evaluation.
>> 
>> Br,
>> 
>> Tuan
>> 
>> 
>> On Mon, Jan 23, 2017 at 11:48 AM, lương hữu tuấn wrote:
>> Hi Renat,
>> 
>> For more details, I will check on the CBAM machine and hope it has not been 
>> deleted yet, since we ran it about a week ago.
>> Another thing: Jinja2 ran 2-3 times faster than YAQL in the same test. More 
>> information I will also provide later.
>> 
>> Br,
>> 
>> Tuan
>> 
>> On Mon, Jan 23, 2017 at 8:32 AM, Renat Akhmerov wrote:
>> Tuan,
>> 
>> I don’t think that Jinja is something that Kirill is responsible for. It’s 
>> just a coincidence that we in Mistral support both YAQL and Jinja. The 
>> latter has been requested by many people so we finally did it.
>> 
>> As far as performance, could you please provide some numbers? When you say 
>> “takes a lot of time” how much time is it? For what kind of input? Why do 
>> you think it is slow? What are your expectations? Provide as much info as 
>> possible. After that we can ask YAQL authors to comment and help if we 
>> realize that the problem really exists.
>> 
>> I’m interested in this too since I’m always looking for ways to speed 
>> Mistral up.
>> 
>> Thanks
>> 
>> Renat Akhmerov
>> @Nokia
>> 
>>> On 18 Jan 2017, at 16:25, lương hữu tuấn wrote:
>>> 
>>> Hi Kirill,
>>> 
>>> Do you have any information related to the performance of Jinja and YAQL 
>>> validation? With big input sizes, yaql runs quite slowly in our case; 
>>> therefore we plan to switch to jinja.
>>> 
>>> Br,
>>> 
>>> @Nokia/Tuan
>>> 
>>> On Tue, Jan 17, 2017 at 3:02 PM, lương hữu tuấn wrote:
>>> Hi Kirill,
>>> 
>>> Thank you for your information. I hope we will have more information about 
>>> it. Just keep in touch when you guys at Mirantis have some performance 
>>> results for Yaql.
>>> 
>>> Br,
>>> 
>>> @Nokia/Tuan 
>>> 
>>> On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev wrote:
>>> I think the fuel team encountered similar problems; I’d advise asking them 
>>> around. Also Stan (author of yaql) might shed some light on the problem =)

[openstack-dev] [storlets][ptl] PTL candidacy

2017-01-23 Thread Eran Rom

Hi All,

I have been leading the Storlets project from its infancy days as
a research project in IBM to its infancy days as a big-tent project :-)
This would not have been possible without the small yet top-notch and
seasoned group of developers in our community.

There is still very much I would like to do:
- Reach out to more users and hence to more developers.
- Expand our use-case portfolio by developing a rich ecosystem.
- Continuously work on the project's maturity so that it can be
  picked up by deployers.
- And last but not least, enjoy the spirit of open source while at it.

I believe that I can help drive the project to achieve all these
goals, and would be very happy to serve as the project's first PTL.

Thanks!
Eran




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-23 Thread lương hữu tuấn
Hi Renat,

In short, it is the expression: output: <% $.data %>

I would like to post the workflow too, since it makes it easier to
understand the whole picture (IMHO :)). In this case, the data is quite
big, AFAIK around 2MB. Therefore I would just like to know more about the
performance of YAQL (if we have such information); I myself do not judge
YAQL in this case.

Br,

Tuan

On Tue, Jan 24, 2017 at 6:09 AM, Renat Akhmerov 
wrote:

> While I’m in the loop regarding how this workflow works, others may not be.
> Could you please just post the expression and data that you use to evaluate
> this expression? And the times. The workflow itself has nothing to do with
> what we’re discussing.
>
> Renat Akhmerov
> @Nokia
>
> On 23 Jan 2017, at 21:44, lương hữu tuấn  wrote:
>
> Hi guys,
>
> I am providing some information about the results of testing YAQL
> performance on my devstack stable/newton with 6GB of RAM. The workflow I
> created is below:
>
> #
> input:
>   - size
>   - number_of_handovers
>
> tasks:
>   generate_input:
>     action: std.javascript
>     input:
>       context:
>         size: <% $.size %>
>       script: |
>         result = {}
>         for(i=0; i < $.size; i++) {
>           result["key_" + i] = {
>             "alma": "korte"
>           }
>         }
>         return result
>     publish:
>       data: <% task(generate_input).result %>
>     on-success:
>       - process
>
>   process:
>     action: std.echo
>     input:
>       output: <% $.data %>
>     publish:
>       data: <% task(process).result %>
>       number_of_handovers: <% $.number_of_handovers - 1 %>
>     on-success:
>       - process: <% $.number_of_handovers > 0 %>
>
> ##
>
> I tested with size set to 1 and number_of_handovers set to 50. The results
> show that the time for validating <% $.data %> is quite long. I do not know
> whether this time is acceptable, but imagine that in our use case the value
> of $.data could be large. A couple of log lines are below:
>
> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]
>  Function evaluate finished in 11262.710 ms
>
> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]
>  Function evaluate finished in 8146.324 ms
>
> ..
>
> The average is around 10s for each evaluation.
>
> Br,
>
> Tuan
>
>
> On Mon, Jan 23, 2017 at 11:48 AM, lương hữu tuấn 
> wrote:
>
>> Hi Renat,
>>
>> For more details, I will check on the CBAM machine and hope it has not
>> been deleted yet, since we ran it about a week ago.
>> Another thing: Jinja2 ran 2-3 times faster than YAQL in the same test.
>> More information I will also provide later.
>>
>> Br,
>>
>> Tuan
>>
>> On Mon, Jan 23, 2017 at 8:32 AM, Renat Akhmerov wrote:
>>
>>> Tuan,
>>>
>>> I don’t think that Jinja is something that Kirill is responsible for.
>>> It’s just a coincidence that we in Mistral support both YAQL and Jinja. The
>>> latter has been requested by many people so we finally did it.
>>>
>>> As far as performance, could you please provide some numbers? When you
>>> say “takes a lot of time” how much time is it? For what kind of input? Why
>>> do you think it is slow? What are your expectations? Provide as much info as
>>> possible. After that we can ask YAQL authors to comment and help if we
>>> realize that the problem really exists.
>>>
>>> I’m interested in this too since I’m always looking for ways to speed
>>> Mistral up.
>>>
>>> Thanks
>>>
>>> Renat Akhmerov
>>> @Nokia
>>>
>>> On 18 Jan 2017, at 16:25, lương hữu tuấn  wrote:
>>>
>>> Hi Kirill,
>>>
>>> Do you have any information related to the performance of Jinja and YAQL
>>> validation? With big input sizes, yaql runs quite slowly in our case;
>>> therefore we plan to switch to jinja.
>>>
>>> Br,
>>>
>>> @Nokia/Tuan
>>>
>>> On Tue, Jan 17, 2017 at 3:02 PM, lương hữu tuấn 
>>> wrote:
>>>
 Hi Kirill,

 Thank you for your information. I hope we will have more information
 about it. Just keep in touch when you guys at Mirantis have some
 performance results for Yaql.

 Br,

 @Nokia/Tuan

 On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev 
 wrote:

> I think the fuel team encountered similar problems; I’d advise asking them
> around. Also Stan (author of yaql) might shed some light on the problem =)
>
> --
> Kirill Zaitsev
> Murano Project Tech Lead
> Software Engineer at
> Mirantis, Inc
>
> On 17 January 2017 at 15:11:52, lương hữu tuấn (tuantulu...@gmail.com)
> wrote:
>
> Hi,
>
> We are now using yaql in mistral, and what we see is that the process of
> validating the yaql expression of the input takes a lot of time, especially
> with big input. Do you guys have any information about the performance of
> yaql?

[openstack-dev] [nova][bugs] Nova Bugs Team Meeting this Tuesday Cancelled

2017-01-23 Thread Augustina Ragwitz
I've had a scheduling conflict and need to cancel the next Nova Bugs
Team meeting. If anyone is interested in running the meeting in my place
since it's been awhile, please feel free to reach out to me via email or
IRC.

-- 
Augustina Ragwitz
Señora Software Engineer
---
Waiting for your change to get through the gate? Clean up some Nova
bugs!
http://45.55.105.55:8082/bugs-dashboard.html
---
email: aragwitz+n...@pobox.com
irc: auggy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ python-novaclient][ python-glanceclient][ python-cinderclient][ python-neutronclient] Remove x-openstack-request-id logging code as it is logged twice

2017-01-23 Thread Kekane, Abhishek
Hi Dims,

Thank you for the update.

As of now, patches for updating requirements.txt in the individual clients
have been proposed by the bot, of which the patch for python-novaclient is
already merged. The following patches are still in the review queue:

Python-glanceclient: https://review.openstack.org/#/c/423678
Python-cinderclient: https://review.openstack.org/#/c/423674
Python-neutronclient: https://review.openstack.org/#/c/422968

I have submitted patches in python-glanceclient [1], python-cinderclient [2] 
and python-neutronclient [3] to address this issue with dependency on above 
patches.

As the client library release is targeted for this week, we need to make sure
these patches get through and are part of the release; otherwise we can hit
the issue of logging the request-id mapping twice if SessionClient is used.

[1] https://review.openstack.org/422591
[2] https://review.openstack.org/#/c/423940 (one +2)
[3] https://review.openstack.org/#/c/423921


Thank you,

Abhishek Kekane


-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: Saturday, January 21, 2017 6:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ python-novaclient][ python-glanceclient][ 
python-cinderclient][ python-neutronclient] Remove x-openstack-request-id 
logging code as it is logged twice

"keystoneauth1 >= 2.17.0" implies that python-novaclient with your fix will
work with any version including 2.17.0, which is not true. You need either
"keystoneauth1 >= 2.18.0" or "keystoneauth1 > 2.17.0", and we prefer the
">=" notation, I think.
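To make the difference concrete, here is a quick check using the packaging
library (an illustration only, assuming packaging is available locally):

```python
# ">=2.17.0" still admits keystoneauth1 2.17.0 itself, which lacks the fix;
# ">=2.18.0" is the first lower bound that guarantees the fix is present.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

old_pin = SpecifierSet(">=2.17.0")
new_pin = SpecifierSet(">=2.18.0")

broken = Version("2.17.0")
fixed = Version("2.18.0")

assert broken in old_pin       # old pin allows the version without the fix
assert broken not in new_pin   # new pin rules it out
assert fixed in old_pin        # both pins allow the fixed version
assert fixed in new_pin
```

pip will normally pick the newest release that satisfies either specifier;
the pin only controls the minimum that is allowed, which is exactly why the
lower bound has to be raised.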

Thanks,
Dims

On Fri, Jan 20, 2017 at 10:53 PM, Kekane, Abhishek 
 wrote:
> Hi Dims,
>
> Thank you for the reply. I will propose a patch soon. Just out of curiosity,
> will keystoneauth1 >= 2.17.0 not install 2.18.0?
>
> Abhishek
> 
> From: Davanum Srinivas 
> Sent: Saturday, January 21, 2017 8:27:56 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [ python-novaclient][ 
> python-glanceclient][ python-cinderclient][ python-neutronclient] 
> Remove x-openstack-request-id logging code as it is logged twice
>
> Abhishek,
>
> 1) requirements.txt for all 4 python-*client you mentioned have 
> "keystoneauth1>=2.17.0",
> 2) i do not see a review request to bump the minimum version in global 
> requirements for keystoneauth1 to "keystoneauth1>=2.18.0"
> (https://review.openstack.org/#/q/project:openstack/requirements+is:op
> en)
>
> Can you please file one?
>
> Thanks,
> Dims
>
>
> On Fri, Jan 20, 2017 at 12:52 AM, Kekane, Abhishek 
>  wrote:
>> Hi Devs,
>>
>>
>>
>> In the latest keystoneauth1 version 2.18.0, x-openstack-request-id is 
>> logged for every HTTP response. This keystoneauth1 version will be 
>> used for ocata.
>>
>> The same request id is also logged in 'request' method of 
>> SessionClient class for python-novaclient, python-glanceclient, 
>> python-cinderclient and python-neutronclient. Once requirements.txt 
>> is synced with global-requirements and it uses keystoneauth1 version 
>> 2.18.0 and above, x-openstack-request-id will be logged twice for these 
>> clients.
>>
>>
>>
>> I have submitted patches for python-novaclient [1] and 
>> python-glanceclient [2] and created patches for python-cinderclient 
>> and python-neutronclient but same will not be reviewed unless and 
>> until the requirements.txt is synced with global-requirements and it 
>> uses keystoneauth1 version 2.18.0.
>>
>>
>>
>> As final releases for client libraries are scheduled in the next week 
>> (between Jan 23 - Jan 27) we want to address these issues in the 
>> above mentioned clients.
>>
>>
>>
>> Please let us know your opinion about the same.
>>
>>
>>
>> [1] https://review.openstack.org/422602
>>
>> [2] https://review.openstack.org/422591
>>
>>
>> _
>> _
>> Disclaimer: This email and any attachments are sent in strictest 
>> confidence for the sole use of the addressee and may contain legally 
>> privileged, confidential, and proprietary data. If you are not the 
>> intended recipient, please advise the sender by replying promptly to 
>> this email and then delete and destroy this email and any attachments 
>> without any further use, copying or forwarding.
>>
>> _
>> _ OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___

Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Saravanan KR
Thanks Giulio for adding it to PTG discussion pad. I am not yet sure
of my presence in PTG. Hoping that things will fall in place soon.

We have spent a considerable amount of time moving from static roles
to composable roles. If we are planning to introduce static profiles,
then after a while we will end up with the same problem; it really
depends on how the features will be composed on a role. Looking forward.

Regards,
Saravanan KR

On Mon, Jan 23, 2017 at 6:25 PM, Giulio Fidente  wrote:
> On 01/23/2017 11:07 AM, Saravanan KR wrote:
>> Thanks John for the info.
>>
>> I am going through the spec in detail. And before that, I had few
>> thoughts about how I wanted to approach this, which I have drafted in
>> https://etherpad.openstack.org/p/tripleo-derive-params. And it is not
>> 100% ready yet, I was still working on it.
>
> I've linked this etherpad for the session we'll have at the PTG
>
>> As of now, there are few differences on top of my mind, which I want
>> to highlight, I am still going through the specs in detail:
>> * Profiles vs Features - Considering an overcloud node as a profile
>> rather than as a node which can host these features would have
>> limitations. For example, if I need a Compute node to host both
>> Ceph (OSD) and DPDK, then the node will have multiple profiles, or we
>> have to create a profile like
>> hci_enterprise_many_small_vms_with_dpdk? The first is not
>> appropriate and the latter is not scalable; maybe you have something
>> else in mind?
>> * Independent - The initial plan of this was to be independent
>> execution, also can be added to deploy if needed.
>> * Not to expose/duplicate parameters which are straight forward, for
>> example tuned-profile name should be associated with feature
>> internally, Workflows will decide it.
>
> for all of the above, I think we need to decide if we want the
> optimizations to be profile-based and gathered *before* the overcloud
> deployment is started or if we want to set these values during the
> overcloud deployment basing on the data we have at runtime
>
> seems like both approaches have pros and cons and this would be a good
> conversation to have with more people at the PTG
>
>> * And another thing, which I couldn't get is, where will the workflow
>> actions be defined, in THT or tripleo_common?
>
> to me it sounds like executing the workflows before stack creation is
> started would be fine, at least for the initial phase
>
> running workflows from Heat depends on the other blueprint/session we'll
> have about the WorkflowExecution resource and once that will be
> available, we could trigger the workflow execution from tht if beneficial
>
>> The requirements which I thought of, for deriving workflow are:
>> Parameter Deriving workflow should be
>> * independent to run the workflow
>> * take basic parameters inputs, for easy deployment, keep very minimal
>> set of mandatory parameters, and rest as optional parameters
>> * read introspection data from Ironic DB and Swift-stored blob
>>
>> I will add these comments as starting point on the spec. We will work
>> towards bringing down the differences, so that operators headache is
>> reduced to a greater extent.
>
> thanks
>
> --
> Giulio Fidente
> GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-23 Thread Renat Akhmerov
While I’m in the loop regarding how this workflow works, others may not be. 
Could you please just post the expression and data that you use to evaluate 
this expression? And the times. The workflow itself has nothing to do with 
what we’re discussing.

Renat Akhmerov
@Nokia

> On 23 Jan 2017, at 21:44, lương hữu tuấn  wrote:
> 
> Hi guys,
> 
> I am providing some information about the results of testing YAQL performance 
> on my devstack stable/newton with 6GB of RAM. The workflow I created is below:
> 
> #
> input:
>   - size
>   - number_of_handovers
>
> tasks:
>   generate_input:
>     action: std.javascript
>     input:
>       context:
>         size: <% $.size %>
>       script: |
>         result = {}
>         for(i=0; i < $.size; i++) {
>           result["key_" + i] = {
>             "alma": "korte"
>           }
>         }
>         return result
>     publish:
>       data: <% task(generate_input).result %>
>     on-success:
>       - process
>
>   process:
>     action: std.echo
>     input:
>       output: <% $.data %>
>     publish:
>       data: <% task(process).result %>
>       number_of_handovers: <% $.number_of_handovers - 1 %>
>     on-success:
>       - process: <% $.number_of_handovers > 0 %>
>
> ##
> 
> I tested with size set to 1 and number_of_handovers set to 50. The results 
> show that the time for validating <% $.data %> is quite long. I do not know 
> whether this time is acceptable, but imagine that in our use case the value 
> of $.data could be large. A couple of log lines are below:
> 
> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function 
> evaluate finished in 11262.710 ms
> 
> INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function 
> evaluate finished in 8146.324 ms
> 
> ..
> 
> The average is around 10s for each evaluation.
> 
> Br,
> 
> Tuan
> 
> 
> On Mon, Jan 23, 2017 at 11:48 AM, lương hữu tuấn wrote:
> Hi Renat,
> 
> For more details, I will check on the CBAM machine and hope it has not been 
> deleted yet, since we ran it about a week ago.
> Another thing: Jinja2 ran 2-3 times faster than YAQL in the same test. More 
> information I will also provide later.
> 
> Br,
> 
> Tuan
> 
> On Mon, Jan 23, 2017 at 8:32 AM, Renat Akhmerov wrote:
> Tuan,
> 
> I don’t think that Jinja is something that Kirill is responsible for. It’s 
> just a coincidence that we in Mistral support both YAQL and Jinja. The latter 
> has been requested by many people so we finally did it.
> 
> As far as performance, could you please provide some numbers? When you say 
> “takes a lot of time” how much time is it? For what kind of input? Why do you 
> think it is slow? What are your expectations? Provide as much info as 
> possible. After that we can ask YAQL authors to comment and help if we 
> realize that the problem really exists.
> 
> I’m interested in this too since I’m always looking for ways to speed Mistral 
> up.
> 
> Thanks
> 
> Renat Akhmerov
> @Nokia
> 
>> On 18 Jan 2017, at 16:25, lương hữu tuấn wrote:
>> 
>> Hi Kirill,
>> 
>> Do you have any information related to the performance of Jinja and YAQL 
>> validation? With big input sizes, yaql runs quite slowly in our case; 
>> therefore we plan to switch to jinja.
>> 
>> Br,
>> 
>> @Nokia/Tuan
>> 
>> On Tue, Jan 17, 2017 at 3:02 PM, lương hữu tuấn wrote:
>> Hi Kirill,
>> 
>> Thank you for your information. I hope we will have more information about 
>> it. Just keep in touch when you guys at Mirantis have some performance 
>> results for Yaql.
>> 
>> Br,
>> 
>> @Nokia/Tuan 
>> 
>> On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev wrote:
>> I think the fuel team encountered similar problems; I’d advise asking them 
>> around. Also Stan (author of yaql) might shed some light on the problem =)
>> 
>> -- 
>> Kirill Zaitsev
>> Murano Project Tech Lead
>> Software Engineer at
>> Mirantis, Inc
>> 
>> On 17 January 2017 at 15:11:52, lương hữu tuấn (tuantulu...@gmail.com) wrote:
>> 
>>> Hi,
>>> 
>>> We are now using yaql in mistral, and what we see is that the process of 
>>> validating the yaql expression of the input takes a lot of time, especially 
>>> with big input. Do you guys have any information about the performance of 
>>> yaql? 
>>> 
>>> Br,
>>> 
>>> @Nokia/Tuan
>>> 
>>> __ 
>>> OpenStack Development Mailing List (not for usage questions) 
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
>>> 

Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-23 Thread Kevin Benton
What I don't understand is why the OOM killer is being invoked when there
is almost no swap space being used at all. Check out the memory output when
it's killed:

http://logs.openstack.org/59/382659/26/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/7de01d0/logs/syslog.txt.gz#_Jan_11_15_54_36

"Jan 11 15:54:36 ubuntu-xenial-rax-ord-6599274 kernel: Free swap  =
7994832kB
Jan 11 15:54:36 ubuntu-xenial-rax-ord-6599274 kernel: Total swap =
7999020kB"

Do we have something set that is effectively disabling the usage of swap
space?
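A few things that might be worth checking on the affected node (a sketch
assuming a Linux guest with the usual procfs layout; these sysctls are my
guesses at likely culprits, not something confirmed from the logs):

```shell
# vm.swappiness=0 makes the kernel avoid swapping anonymous pages almost
# entirely, so the OOM killer can fire while swap sits unused.
cat /proc/sys/vm/swappiness

# Strict overcommit accounting (mode 2) can fail allocations even with
# plenty of free swap left.
cat /proc/sys/vm/overcommit_memory

# Confirm the swap area is actually active.
cat /proc/swaps
```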

On Wed, Jan 18, 2017 at 4:13 PM, Joe Gordon  wrote:

>
>
> On Thu, Jan 19, 2017 at 10:27 AM, Matt Riedemann <
> mrie...@linux.vnet.ibm.com> wrote:
>
>> On 1/18/2017 4:53 AM, Jens Rosenboom wrote:
>>
>>> To me it looks like the times of 2G are long gone, Nova is using
>>> almost 2G all by itself. And 8G may be getting tight if additional
>>> stuff like Ceph is being added.
>>>
>>>
>> I'm not really surprised at all about Nova being a memory hog with the
>> versioned object stuff we have which does it's own nesting of objects.
>>
>> What tools to people use to be able to profile the memory usage by the
>> types of objects in memory while this is running?
>
>
> objgraph and guppy/heapy
>
> http://smira.ru/wp-content/uploads/2011/08/heapy.html
>
> https://www.huyng.com/posts/python-performance-analysis
>
> You can also use gc.get_objects() (https://docs.python.org/2/
> library/gc.html#gc.get_objects) to get a list of all objects in memory
> and go from there.
>
> Slots (https://docs.python.org/2/reference/datamodel.html#slots) are
> useful for reducing the memory usage of objects.
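To make the __slots__ point concrete, here is a stdlib-only sketch using
Python 3's tracemalloc (the class names are made up for illustration, not
taken from Nova):

```python
# Compare memory allocated for instances with and without __slots__.
import tracemalloc


class Plain(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b


class Slotted(object):
    __slots__ = ("a", "b")

    def __init__(self, a, b):
        self.a = a
        self.b = b


def allocated_bytes(cls, n=100000):
    # Measure only the allocations made while building the list.
    tracemalloc.start()
    objs = [cls(i, i) for i in range(n)]
    size, _peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    del objs
    return size


plain_bytes = allocated_bytes(Plain)
slotted_bytes = allocated_bytes(Slotted)
# Slotted instances carry no per-instance __dict__, so they come out
# substantially smaller.
print("plain: %d bytes, slotted: %d bytes" % (plain_bytes, slotted_bytes))
```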
>
>
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-23 Thread Pradeep Singh
+1, welcome Kevin. I appreciate your work.

On Tuesday, January 24, 2017, Yanyan Hu  wrote:

> +1 for the change.
>
> 2017-01-24 6:56 GMT+08:00 Hongbin Lu:
>
>> Hi Zun cores,
>>
>>
>>
>> I proposed a change of Zun core team membership as below:
>>
>>
>>
>> + Kevin Zhao (kevin-zhao)
>>
>> - Haiwei Xu (xu-haiwei)
>>
>>
>>
>> Kevin has been working on Zun for a while, and has made significant
>> contributions. He submitted several non-trivial patches of high quality.
>> One of his challenging tasks is adding support for container interactive
>> mode, and it looks like he is capable of handling it (his patches are
>> under review now). I think he is a good addition to the core team. Haiwei
>> is a member of the initial core team. Unfortunately, his activity has
>> dropped in the past few months.
>>
>>
>>
>> According to the OpenStack Governance process [1], we require a minimum
>> of 4 +1 votes from Zun core reviewers within a 1 week voting window
>> (consider this proposal as a +1 vote from me). A vote of -1 is a veto. If
>> we cannot get enough votes or there is a veto vote prior to the end of the
>> voting window, this proposal is rejected.
>>
>>
>>
>> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best regards,
>
> Yanyan
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] grenade failures in the gate

2017-01-23 Thread Armando M.
On 23 January 2017 at 13:50, Jeremy Stanley  wrote:

> On 2017-01-23 13:38:58 -0800 (-0800), Armando M. wrote:
> > We spotted [1] in the gate. Please wait for its resolution until pushing
> > patches into the merge queue.
>
> https://review.openstack.org/424323 seems to be the fix, and will
> hopefully merge shortly along with its dependency (they're at the
> top of the gate pipeline now as I write this).
>

Yes, that's the one. It looks like we're out of the woods...for now!

Cheers,
Armando


> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-23 Thread Yanyan Hu
+1 for the change.

2017-01-24 6:56 GMT+08:00 Hongbin Lu :

> Hi Zun cores,
>
>
>
> I proposed a change of Zun core team membership as below:
>
>
>
> + Kevin Zhao (kevin-zhao)
>
> - Haiwei Xu (xu-haiwei)
>
>
>
> Kevin has been working on Zun for a while, and has made significant
> contributions. He submitted several non-trivial patches of high quality.
> One of his challenging tasks is adding support for container interactive
> mode, and it looks like he is capable of handling it (his patches are
> under review now). I think he is a good addition to the core team. Haiwei
> is a member of the initial core team. Unfortunately, his activity has
> dropped in the past few months.
>
>
>
> According to the OpenStack Governance process [1], we require a minimum of
> 4 +1 votes from Zun core reviewers within a 1 week voting window (consider
> this proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot
> get enough votes or there is a veto vote prior to the end of the voting
> window, this proposal is rejected.
>
>
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,

Yanyan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-23 Thread Eli Qiao
+1 for this change. 

-- 
Eli Qiao
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Tuesday, 24 January 2017 at 6:56 AM, Hongbin Lu wrote:

> Hi Zun cores,
>  
> I proposed a change of Zun core team membership as below:
>  
> + Kevin Zhao (kevin-zhao)
> - Haiwei Xu (xu-haiwei)
>  
> Kevin has been working on Zun for a while, and has made significant 
> contributions. He submitted several non-trivial patches of high quality. One 
> of his challenging tasks is adding support for container interactive mode, and 
> it looks like he is capable of handling it (his patches are under review now). 
> I think he is a good addition to the core team. Haiwei is a member of the 
> initial core team. Unfortunately, his activity has dropped in the past few 
> months.
>  
> According to the OpenStack Governance process [1], we require a minimum of 4 
> +1 votes from Zun core reviewers within a 1 week voting window (consider this 
> proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get 
> enough votes or there is a veto vote prior to the end of the voting window, 
> this proposal is rejected.
>  
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>  
> Best regards,
> Hongbin
>  
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]Pike design topics discussion

2017-01-23 Thread joehuang
Hello,

(Repost)

As discussed during the weekly meeting, let's discuss what to do in Pike on an 
etherpad next Tuesday morning at 1:30 am UTC (9:30 am Beijing time, 10:30 am 
Korea/Japan time, 5:30 pm PST on Monday).

The etherpad link: https://etherpad.openstack.org/p/tricircle-pike-design-topics

Please enter your concerns about what to do in Pike into the etherpad, and 
let's discuss them then; the duration is around 1.5 hours.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron-lbaas][barbican][octavia]certs don't get deregistered in barbican after lbaas listener delete

2017-01-23 Thread Jiahao Liang (Frankie)
Hi community,

I created a loadbalancer with a listener whose protocol is
"TERMINATED_HTTPS", specifying --default-tls-container-ref with a ref of a
secret container from Barbican.
However, after I deleted the listener, the lbaas consumer wasn't removed
from the Barbican container's consumer list.

$openstack secret container get
http://192.168.20.24:9311/v1/containers/453e8905-d42b-43bd-9947-50e3acf499f4
+----------------+------------------------------------------------------------------------------+
| Field          | Value                                                                        |
+----------------+------------------------------------------------------------------------------+
| Container href | http://192.168.20.24:9311/v1/containers/453e8905-d42b-43bd-9947-50e3acf499f4 |
| Name           | tls_container2                                                               |
| Created        | 2017-01-19 12:44:07+00:00                                                    |
| Status         | ACTIVE                                                                       |
| Type           | certificate                                                                  |
| Certificate    | http://192.168.20.24:9311/v1/secrets/bfc2bf01-0f23-4105-bf09-c75839b6b4cb    |
| Intermediates  | None                                                                         |
| Private Key    | http://192.168.20.24:9311/v1/secrets/c85d150e-ec84-42e0-a65f-9c9ec19767e1    |
| PK Passphrase  | None                                                                         |
| Consumers      | {u'URL':                                                                     |
|                | u'lbaas://RegionOne/loadbalancer/5e7768b9-7aa9-4146-8a71-6291353b447e',      |
|                | u'name': u'lbaas'}                                                           |
+----------------+------------------------------------------------------------------------------+


I went through the neutron-lbaas code base. We do register a consumer during
the creation of a "TERMINATED_HTTPS" listener in [1]:
https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/services/loadbalancer/plugin.py#L642
where get_cert() registers lbaas as a consumer with the barbican cert_manager (
https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/common/cert_manager/barbican_cert_manager.py#L177
).
But we somehow don't deregister it during the deletion in [2]:
https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/services/loadbalancer/plugin.py#L805
where we probably need to call delete_cert from the barbican cert_manager to
remove the consumer (
https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/common/cert_manager/barbican_cert_manager.py#L187
).
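To make the suggested fix concrete, here is a minimal sketch in plain Python. The fake cert manager below only stands in for the real barbican_cert_manager; the class, method signatures, and the delete_listener() helper are all illustrative assumptions, not the actual neutron-lbaas API.

```python
# Sketch only: FakeCertManager stands in for the real barbican
# cert_manager; names and signatures are hypothetical.

class FakeCertManager:
    """Tracks consumer registrations the way a Barbican container does."""

    def __init__(self):
        # container ref -> set of consumer URLs
        self.consumers = {}

    def get_cert(self, cert_ref, resource_ref):
        # Listener creation registers lbaas as a consumer.
        self.consumers.setdefault(cert_ref, set()).add(resource_ref)

    def delete_cert(self, cert_ref, resource_ref):
        # The missing call: deregister the consumer on listener deletion.
        self.consumers.get(cert_ref, set()).discard(resource_ref)


def delete_listener(cert_manager, listener):
    """What the listener delete path could do (sketch)."""
    ref = listener.get("default_tls_container_ref")
    if ref:
        cert_manager.delete_cert(ref, listener["loadbalancer_ref"])
```

With this shape, deleting a TERMINATED_HTTPS listener would leave the Barbican container with no stale lbaas consumer entry.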


My questions are:
1. Is this a bug?
2. Or is it an intentional design that leaves it to the vendor driver to handle?

It looks more like a bug to me.

Any thoughts?


Best,
Jiahao
-- 

*梁嘉豪/Jiahao LIANG (Frankie) *
Email: gzliangjia...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] ocata client causes feature regression with pre-ocata server

2017-01-23 Thread Eric K
Thanks Tim and Monty!

I also agree with ( c ). Here’s a simple patch doing that:
https://review.openstack.org/#/c/424385/

From:  Tim Hinrichs 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Monday, January 23, 2017 at 7:55 AM
To:  "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [congress] ocata client causes feature
regression with pre-ocata server

> At some point the client sometimes made multiple API calls.  I think (c) seems
> right too.  
> 
> Tim 
> 
> On Sun, Jan 22, 2017 at 1:15 AM Monty Taylor  wrote:
>> On 01/21/2017 04:07 AM, Eric K wrote:
>>> > Hi all,
>>> >
>>> > I was getting ready to request release of congress client, but I
>>> > remembered that the new client causes feature regression if used with
>>> > older versions of congress. Specifically, new client with pre-Ocata
>>> > congress cannot refer to datasource by name, something that could be done
>>> > with pre-Ocata client.
>>> >
>>> > Here's the patch of interest: https://review.openstack.org/#/c/407329/
>>> > 
>>> >
>>> > A few questions:
>>> >
>>> > Are we okay with the regression? Seems like it could cause a fair bit of
>>> > annoyance for users.
>> 
>> This is right. New client lib should always continue to work with old
>> server. (A user should be able to just pip install python-congressclient
>> and have it work regardless of when their operator decides to upgrade or
>> not upgrade their cloud)
>> 
>>> >1. If we're okay with that, what's the best way to document that
>>> > pre-Ocata congress should be used with pre-Ocata client?
>>> >2. If not, how do we avoid the regression? Here are some candidates I can
>>> > think of.
>>> >   a. Client detects congress version and acts accordingly. I don't
>>> > think this is possible, nor desirable for the client to be concerned
>>> > with the congress version rather than just the API version.
>>> >   b. Release backward compatible API version 1.1 that supports
>>> > getting datasource by name_or_id. Then client will take different paths
>>> > depending on API version.
>>> >   c. If datasource not found, client falls back on old method of
>>> > retrieving the list of datasources to resolve name into UUID. This would
>>> > work, but causes extra API & DB calls in many cases.
>>> >   d. Patch old versions of Congress to support getting datasource
>>> > by name_or_id. Essentially, it was always a bug that the API didn't
>>> > support name_or_id.
>> 
>> I'm a fan of d - but I don't believe it will help - since the problem
>> will still manifest for users who do not have control over the server
>> installation.
>> 
>> I'd suggest c is the most robust. It _is_ potentially more expensive -
>> but that's a good motivation for the deployer to upgrade their
>> installation of congress without negatively impacting the consumer in
>> the  meantime.
>> 
>> Monty
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions) Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
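Option (c) from the thread can be sketched roughly as below. The fake client mimics a pre-Ocata server that only resolves UUIDs; all class and method names here are illustrative assumptions, not the real python-congressclient bindings.

```python
# Sketch of option (c): try the direct lookup first; on a miss, fall
# back to listing datasources and resolving the name client-side.
# FakeOldServerClient and its methods are hypothetical stand-ins.

class NotFoundError(Exception):
    pass


class FakeOldServerClient:
    def __init__(self, datasources):
        self._datasources = datasources

    def get_datasource(self, ds_id):
        # Pre-Ocata behaviour: only a UUID works here.
        for ds in self._datasources:
            if ds["id"] == ds_id:
                return ds
        raise NotFoundError(ds_id)

    def list_datasources(self):
        return list(self._datasources)


def resolve_datasource(client, name_or_id):
    """Fallback lookup per option (c); the except branch is the extra
    API call mentioned in the thread."""
    try:
        return client.get_datasource(name_or_id)
    except NotFoundError:
        for ds in client.list_datasources():
            if name_or_id in (ds["id"], ds["name"]):
                return ds
        raise
```

Against a new server the first call succeeds and no extra round trip happens; only old servers pay the list-and-resolve cost.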


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [operators] Optional resource asking or not?

2017-01-23 Thread Sylvain Bauza


Le 23/01/2017 15:18, Sylvain Bauza a écrit :
> 
> 
> Le 23/01/2017 15:11, Jay Pipes a écrit :
>> On 01/22/2017 04:40 PM, Sylvain Bauza wrote:
>>> Hey folks,
>>>
>>> tl;dr: should we GET /resource_providers for only the related resources
>>> that correspond to enabled filters ?
>>
>> No. Have administrators set the allocation ratios for the resources they
>> do not care about exceeding capacity to a very high number.
>>
>> If someone previously removed a filter, that doesn't mean that the
>> resources were not consumed on a host. It merely means the admin was
>> willing to accept a high amount of oversubscription. That's what the
>> allocation_ratio is for.
>>
>> The flavor should continue to have a consumed disk/vcpu/ram amount,
>> because the VM *does actually consume those resources*. If the operator
>> doesn't care about oversubscribing one or more of those resources, they
>> should set the allocation ratios of those inventories to a high value.
>>
>> No more adding configuration options for this kind of thing (or in this
>> case, looking at an old configuration option and parsing it to see if a
>> certain filter is listed in the list of enabled filters).
>>
>> We have a proper system of modeling these data-driven decisions now, so
>> my opinion is we should use it and ask operators to use the placement
>> REST API for what it was intended.
>>
> 
> I see your point, but please consider mine.
> What if an operator disabled CoreFilter in Newton and wants to upgrade
> to Ocata?
> All of that implementation being very close to the deadline makes me
> nervous and I really want the seamless path for operators now using the
> placement service.
> 
> Also, like I said in my bigger explanation, we would need to modify a
> shit ton of assertions in our tests that can say "meh, don't use all the
> filters, but just these ones". Pretty risky so close to a FF.
> 

Oh, I just discovered a related point: in Devstack, we don't set the
CoreFilter by default!
https://github.com/openstack-dev/devstack/blob/adcf0c50cd87c68abef7c3bb4785a07d3545be5d/lib/nova#L94

TBC, that means that the gate is not verifying the VCPUs by the filter,
just by the compute claims. Heh.

Honestly, I think we really need to make the filters optional for Ocata then.

-Sylvain

> -Sylvain
> 
> 
>> Best,
>> -jay
>>
>>> Explanation below why even if I
>>> know we have a current consensus, maybe we should discuss again about it.
>>>
>>>
>>> I'm still trying to implement https://review.openstack.org/#/c/417961/
>>> but when trying to get the functional job being +1, I discovered that we
>>> have at least one functional test [1] asking for just the RAMFilter (and
>>> not for VCPUs or disks).
>>>
>>> Given the current PS is asking for *all* of CPU, RAM and disk, it's
>>> trampling the current test by getting a NoValidHost.
>>>
>>> Okay, I could just modify the test and make sure we have enough
>>> resources for the flavors but I actually now wonder if that's all good
>>> for our operators.
>>>
>>> I know we have a consensus saying that we should still ask for both CPU,
>>> RAM and disk at the same time, but I imagine our users coming back to us
>>> saying "eh, look, I'm no longer able to create instances even if I'm not
>>> using the CoreFilter" for example. It could be a bad day for them and
>>> honestly, I'm not sure just adding documentation or release notes would
>>> help them.
>>>
>>> What are you thinking if we say that for only this cycle, we still try
>>> to only ask for resources that are related to the enabled filters ?
>>> For example, say someone is disabling CoreFilter in the conf opt, then
>>> the scheduler shouldn't ask for VCPUs to the Placement API.
>>>
>>> FWIW, we have another consensus about not removing
>>> CoreFilter/RAMFilter/MemoryFilter because the CachingScheduler is still
>>> using them (and not calling the Placement API).
>>>
>>> Thanks,
>>> -Sylvain
>>>
>>> [1]
>>> https://github.com/openstack/nova/blob/de0eff47f2cfa271735bb754637f979659a2d91a/nova/tests/functional/test_server_group.py#L48
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
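Jay's allocation-ratio suggestion above can be illustrated with the capacity rule the placement model applies to each inventory, usable = (total - reserved) * allocation_ratio. The sketch below uses made-up numbers; it is not real scheduler code.

```python
def capacity(total, reserved, allocation_ratio):
    """Usable capacity of one inventory class in the placement model:
    (total - reserved) * allocation_ratio."""
    return int((total - reserved) * allocation_ratio)

# With the default ratio, a 16-VCPU host fits 16 single-VCPU instances.
normal = capacity(16, 0, 1.0)

# An operator who used to disable CoreFilter can instead set a very
# high VCPU allocation_ratio, making CPU effectively unconstrained
# while the flavor still records the resources it actually consumes.
oversubscribed = capacity(16, 0, 1000.0)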

__

[openstack-dev] Sabari Murugesan stepping down from Glance core

2017-01-23 Thread Brian Rosmaita
Sabari Murugesan has communicated to me that he's no longer able to
commit time to working on Glance, and he's stepping down from the core
reviewers' team.

This message isn't all bad news, however: I'm particularly grateful that
Sabari has agreed to continue as the VMware driver maintainer for the
glance_store [0].

Please join me in thanking Sabari for all his past service to Glance.
As anyone who's worked with him knows, he's a great colleague, and I'm
really sorry to see him step down.  I hope that he may find time in the
future to work on Glance again.

thanks,
brian

[0] http://docs.openstack.org/developer/glance_store/drivers/index.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-23 Thread Hongbin Lu
Hi Zun cores,

I proposed a change of Zun core team membership as below:

+ Kevin Zhao (kevin-zhao)
- Haiwei Xu (xu-haiwei)

Kevin has been working on Zun for a while and has made significant 
contributions. He submitted several non-trivial patches of high quality. One of 
his challenging tasks is adding support for container interactive mode, and it 
looks like he is capable of handling it (his patches are under review now). I 
think he is a good addition to the core team. Haiwei is a member of the initial 
core team. Unfortunately, his activity has dropped over the past few months.

According to the OpenStack Governance process [1], we require a minimum of 4 +1 
votes from Zun core reviewers within a 1 week voting window (consider this 
proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get enough 
votes or there is a veto vote prior to the end of the voting window, this 
proposal is rejected.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Release notes for THT (need help)

2017-01-23 Thread Emilien Macchi
I've made progress on Ocata release notes for TripleO Heat Templates:
https://review.openstack.org/424365

I need some help to add some features that I wasn't sure about
wording, please help (in a patch on top of it or in review), asap
please.
I'm looking at containers, split-stack-software-configuration,
upgrades, TLS and any feature I might have missed to document in THT.

Next on my TODO: puppet-tripleo.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Performance][Shaker]

2017-01-23 Thread Sai Sindhur Malleni
Thanks Ilya!

On Mon, Jan 23, 2017 at 6:56 AM, Ilya Shakhat  wrote:

> Hi Sai,
>
> In UDP testing, PPS represents packets sent by the iperf client to the
> server. Loss is the percentage of packets that were not received by the
> server (more specifically, the server tracks packets and sums the gaps
> between them,
> https://github.com/esnet/iperf/blob/3.0.7/src/iperf_udp.c#L64).
>
> While the reported PPS depends on bandwidth and concurrency, it makes
> sense to increase them until loss starts going up, meaning that the
> communication channel is near the limit.
>
> Thanks,
> Ilya
>
> 2017-01-21 1:19 GMT+04:00 Sai Sindhur Malleni :
>
>> Hey,
>>
>> When using the "iperf3" class in shaker for looking at UDP small packet
>> performance, we see that as we scale up the concurrency the average PPS
>> goes up and also the loss % increases. Is the loss % a percentage of the
>> PPS or does the PPS only represent successful transmissions? Thanks!
>>
>> --
>> Sai Sindhur Malleni
>> Software Engineer
>> Red Hat Inc.
>> 100 East Davie Street
>> Raleigh, NC, USA
>> Work: (919) 754-4557 | Cell: (919) 985-1055
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sai Sindhur Malleni
Software Engineer
Red Hat Inc.
100 East Davie Street
Raleigh, NC, USA
Work: (919) 754-4557 | Cell: (919) 985-1055
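Putting Ilya's explanation into one line of arithmetic: since the reported PPS counts packets sent by the client and loss is the percentage the server never received, the successfully delivered rate is sent * (1 - loss). A tiny sketch (numbers are made up):

```python
def delivered_pps(reported_pps, loss_percent):
    """Per the explanation above: reported PPS counts packets *sent* by
    the iperf client; loss is the percentage the server never received,
    so the successfully delivered rate is sent * (1 - loss)."""
    return reported_pps * (1.0 - loss_percent / 100.0)

# e.g. 100k PPS reported with 2.5% loss means 97.5k PPS actually arrived.
arrived = delivered_pps(100000, 2.5)
```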
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] [ptl] PTL Candidacy for Pike

2017-01-23 Thread Andy McCrae
Hi All,

I'm once again running for the PTL position for OpenStack-Ansible during
the Pike cycle.

Here is my candidacy statement: https://review.openstack.org/#/c/424348/

Thanks for all your support during the Ocata cycle, and looking forward to
Pike!

Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] grenade failures in the gate

2017-01-23 Thread Jeremy Stanley
On 2017-01-23 13:38:58 -0800 (-0800), Armando M. wrote:
> We spotted [1] in the gate. Please wait for its resolution until pushing
> patches into the merge queue.

https://review.openstack.org/424323 seems to be the fix, and will
hopefully merge shortly along with its dependency (they're at the
top of the gate pipeline now as I write this).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Cinder-Nova API changes meeting

2017-01-23 Thread Ildiko Vancsa
Hi All,

Unfortunately our current meeting slot (every Monday 1700 UTC) is a conflict 
for several of the regular attendees.

In an attempt to find a new slot, I checked the available meeting channels for 
the same time slot over the week, and we currently have at least one available 
for each day. So for a first try, let's see whether we can find another day 
during the week with the SAME (1700 UTC) time slot that works better.

You can share your preference on this Doodle poll: 
http://doodle.com/poll/9per237agrdy7rqz 


Thanks,
Ildikó


[openstack-dev] [neutron] grenade failures in the gate

2017-01-23 Thread Armando M.
Hi neutrinos,

We spotted [1] in the gate. Please wait for its resolution until pushing
patches into the merge queue.

Thanks,
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1658806
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] PTL nomination is open until Jan 29

2017-01-23 Thread Hongbin Lu
Hi all,

I am sending this email to encourage you to run for Magnum PTL for Pike [1]. I 
think most of the intended audience is on this ML, so I am sending the message 
here.

First, I would like to thank you for your interest in the Magnum project. It 
is great to work with you to build the project and make it better and better. 
Second, I would like to relay a reminder that the Pike PTL nomination is open 
*now* and will close at Jan 29 23:45 UTC [1]. I hope more than one of you will 
step up to run for the Magnum PTL position; I think the community will be 
healthier with more than one candidate. If you are considering a run, I think 
the blog post below will help you understand more about the role.

  http://blog.flaper87.com/something-about-being-a-ptl

I strongly agree with the following key points of being a PTL:
* Make sure you will have enough time dedicated to the upstream.
* Prepare to step down in a cycle or two and create the next PTLs.
* Community decides: PTLs are not dictators.

If you have any questions while deciding, feel free to reach out to me; I am 
happy to share my past experience as Magnum PTL. Below is the history of 
Magnum PTLs. I sincerely thank them for their leadership, but I would 
encourage a change in the upcoming cycles, simply to follow the convention of 
other OpenStack projects of circulating the PTL position, ideally to a new 
person of a different affiliation. I think this will let everyone feel 
ownership of the project and help the community in the long run.

Juno and earlier: Adrian Otto
Kilo: Adrian Otto
Liberty: Adrian Otto
Mitaka: Adrian Otto
Newton: Hongbin Lu
Ocata: Adrian Otto

[1] https://governance.openstack.org/election/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] feature freeze exception request -- nova simple tenant usages api pagination

2017-01-23 Thread Richard Jones
[I'm on vacation, so can't look into this too deeply, sorry]

I'm not sure I follow Rob's point here. Does the patch
https://review.openstack.org/#/c/410337 just check the version to see
if it's >= 2.40 and take action appropriately? I don't see how it
changes anything to force requesting 2.40 with every request? Then
again, I've not been able to look into how the current clients'
microversion code is implemented/broken. Is it just that *declaring*
the 2.40 version in https://review.openstack.org/#/c/422642 results in
all requests being forced to use that version?


 Richard

On 23 January 2017 at 23:10, Radomir Dopieralski  wrote:
> Yes, to do it differently we need to add the microversion support patch that
> you are working on, and make use of it, or write a patch that has equivalent
> functionality.
>
> On Fri, Jan 20, 2017 at 6:57 PM, Rob Cresswell
>  wrote:
>>
>> Just a thought: With the way we currently do microversions, wouldnt this
>> request 2.40 for every request ? There's a pretty good chance that would
>> break things.
>>
>> Rob
>>
>> On 20 January 2017 at 00:02, Richard Jones  wrote:
>>>
>>> FFE granted for the three patches. We need to support that nova API
>>> change.
>>>
>>> On 20 January 2017 at 01:28, Radomir Dopieralski 
>>> wrote:
>>> > I would like to request a feature freeze exception for the following
>>> > patch:
>>> >
>>> > https://review.openstack.org/#/c/410337
>>> >
>>> > This patch adds support for retrieving the simple tenant usages from
>>> > Nova in chunks, and it is necessary for correct data given that related
>>> > patches have already been merged in Nova. Without it, the data received
>>> > will be truncated.
>>> >
>>> > In order to actually use that patch, however, it is necessary to set
>>> > the Nova API version to at least version 2.40. For this, it's necessary
>>> > to also add this patch:
>>> >
>>> > https://review.openstack.org/422642
>>> >
>>> > However, that patch will not work, because of a bug in the
>>> > VersionManager, which for some reason uses floating point numbers for
>>> > specifying versions, and thus understands 2.40 as 2.4. To fix that, it
>>> > is also necessary to merge this patch:
>>> >
>>> > https://review.openstack.org/#/c/410688
>>> >
>>> > I would like to request an exception for all those three patches.
>>> >
>>> > An alternative to this would be to finish and merge the microversion
>>> > support, and modify the first patch to make use of it. Then we would
>>> > need exceptions for those two patches.
>>> >
>>> >
>>> > __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
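The VersionManager bug mentioned in this thread (2.40 understood as 2.4) is the classic float-versus-tuple version trap, sketched below. This is an illustration of the failure mode, not the actual Horizon code.

```python
def parse_microversion(version):
    """Parse an API microversion string into a comparable tuple.
    Treating versions as floats is exactly the bug described above:
    float("2.40") equals 2.4, which then sorts *below* 2.7."""
    major, minor = version.split(".")
    return (int(major), int(minor))

# The float trap: "2.40" and "2.4" collapse to the same number.
float_confuses = float("2.40") == float("2.4")
# Tuple comparison gets it right: 2.40 is newer than 2.7 and
# distinct from 2.4.
ordered = parse_microversion("2.40") > parse_microversion("2.7")
distinct = parse_microversion("2.40") != parse_microversion("2.4")
```

Comparing (major, minor) integer pairs keeps microversion ordering correct for any two-digit minor.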

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-23 Thread Jason Rist
On 01/23/2017 12:03 PM, Emilien Macchi wrote:
> Greeting folks,
>
> I would like to propose some changes in our core members:
>
> - Remove Jay Dobies who has not been active in TripleO for a while
> (thanks Jay for your hard work!).
> - Add Flavio Percoco core on tripleo-common and tripleo-heat-templates
> docker bits.
> - Add Steve Backer on os-collect-config and also docker bits in
> tripleo-common and tripleo-heat-templates.
>
> Indeed, both Flavio and Steve have been involved in deploying TripleO
> in containers; their contributions are very valuable. I would like to
> encourage them to keep doing more reviews in and outside the container bits.
>
> As usual, core members are welcome to vote on the changes.
>
> Thanks,
>
+1 - Related - can we get a review of some of the other 'sub teams' within 
TripleO, for instance UI?  We've had 2 core reviewers for a long time, and it 
would help to have one or two more.

-J
-- 
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
Freenode: jrist
github/twitter: knowncitizen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] PTL Candidacy

2017-01-23 Thread Mike Perez
On 18:35 Jan 22, Kevin Benton wrote:
> I would like to propose my candidacy for the Neutron PTL.
> 
> I have been contributing to Neutron since the Havana development
> cycle working for a network vendor and then a distribution vendor.
> I have been a core reviewer since the Kilo development cycle and
> I am on the Neutron stable maintenance team as well as the drivers
> team.
> 
> I have a few priorities that I would focus on as PTL:

Do you have any thoughts/plans with plugin validation? [1][2][3]

[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110151.html
[2] - https://review.openstack.org/#/c/391594/
[3] - https://etherpad.openstack.org/p/driverlog-validation

-- 
Mike Perez


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-01-23 Thread Loo, Ruby
Hi,

We are jittery to present this week's priorities and subteam report for Ironic. 
As usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. nova patch for soft power/reboot: https://review.openstack.org/#/c/407977/
2. ironicclient queue: 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironicclient
3. ironic-inspector-client queue: 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironic-inspector-client
4. Continue reviewing driver composition things (see notes below, some of the 
WIP patches are ready other than docs/reno): 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1524745
5. Node tags: https://review.openstack.org/#/q/topic:bug/1526266


Bugs (dtantsur)
===
- Stats (diff between 16 Jan 2017 and 23 Jan 2017)
- Ironic: 227 bugs (-3) + 237 wishlist items (-1). 19 new, 190 in progress 
(-1), 0 critical, 28 high (-1) and 31 incomplete (+1)
- Inspector: 11 bugs (-1) + 24 wishlist items (+1). 0 new, 16 in progress (+2), 
0 critical, 3 high (+1) and 4 incomplete (-1)
- Nova bugs with Ironic tag: 10. 0 new, 0 critical, 0 high

Portgroups support (sambetts, vdrok)

* trello: https://trello.com/c/KvVjeK5j/29-portgroups-support
- status as of most recent weekly meeting:
    - everything is done, except for tempest tests and documentation (which 
still need to be written)

Interface attach/detach API (sambetts)
==
* trello: https://trello.com/c/nryU4w58/39-interface-attach-detach-api
- status as of most recent weekly meeting:
done

CI refactoring (dtantsur, lucasagomes)
==
* trello: https://trello.com/c/c96zb3dm/32-ci-refactoring
- status as of most recent weekly meeting:
- Two more patches to go to add support for deploying UEFI images with 
Ironic in devstack: 1) https://review.openstack.org/#/c/414604/ (DevStack) 2) 
https://review.openstack.org/#/c/374988/ BOTH MERGED
- focus (lucasagomes) is to get UEFI testing in gate. More patches needed 
for this.

Rolling upgrades and grenade-partial (rloo, jlvillal)
=
* trello: 
https://trello.com/c/GAlhSzLm/2-rolling-upgrades-and-grenade-with-multi-node
- status as of most recent weekly meeting:
- leaning towards moving this to Pike.
- patches need reviews: https://review.openstack.org/#/q/topic:bug/1526283.
- concerns about https://review.openstack.org/#/c/420728/ (Add 
compatibility with Newton when creating a node)
- had irc discussion about status: 
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2017-01-23.log.html#t2017-01-23T16:17:41
- Testing work:
- Great progress this last week! Able to fix issue that had blocked us 
for several weeks in the multi-tenant grenade job!
- Tempest smoke is now working for the multi-tenant grenade job during 
the initial pre-grenade run.
- The grenade portion passes for the multi-tenant grenade job
- 
http://logs.openstack.org/49/422149/5/experimental/gate-grenade-dsvm-ironic-multitenant-ubuntu-xenial-nv/74c9ed9/logs/grenade.sh.summary.txt.gz
- The final tempest "smoke" test is failing after the grenade run in 
the multi-tenant grenade job.
- 
http://logs.openstack.org/49/422149/5/experimental/gate-grenade-dsvm-ironic-multitenant-ubuntu-xenial-nv/74c9ed9/console.html
- Testing being done in: https://review.openstack.org/#/c/422149/
- This needs multi-node testing, and multi-node has a very low 
probability of working in Ocata

Generic boot-from-volume (TheJulia)
===
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- status as of most recent weekly meeting:
- API side changes for volume connector information have a procedural -2 
until we can begin making use of the data in the conductor, but should still be 
reviewed
- https://review.openstack.org/#/c/214586/
- This change has been rebased on top of the iPXE template update 
revision to support cinder/iscsi booting.
- Boot from volume/storage cinder interface is up for review
- Last patch set for cinder common client interface was reverted in a 
rebase.  TheJulia expects to address this Monday afternoon.
- 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691
- Original volume connection information client patches
- They need OSC support added into the revisions.
- These changes should be expected to land once Pike opens.
- 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironicclient+branch:master+topic:bug/1526231

Driver composition (dtantsur, jroll)
====================================
[openstack-dev] [tripleo] Update TripleO core members

2017-01-23 Thread Emilien Macchi
Greeting folks,

I would like to propose some changes in our core members:

- Remove Jay Dobies who has not been active in TripleO for a while
(thanks Jay for your hard work!).
- Add Flavio Percoco as core on the tripleo-common and tripleo-heat-templates
docker bits.
- Add Steve Baker as core on os-collect-config, and also on the docker bits in
tripleo-common and tripleo-heat-templates.

Both Flavio and Steve have been involved in deploying TripleO in
containers, and their contributions are very valuable. I would like to
encourage them to keep doing reviews both in and outside of the container
bits.

As usual, core members are welcome to vote on the changes.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][swg] Sessions for Atlanta PTG - writing a TC Vision/Other options

2017-01-23 Thread Colette Alexander
Hello Stackers,

As we move into the last four weeks of work before the PTG in Atlanta, I
wanted to check in to talk about what the Stewardship Working Group has
planned and what we're looking to accomplish during our one day (Monday) at
the gathering.

Currently, discussion among SWG members has focused on a few things:

1) Assisting the TC with writing a vision
2) Assessing our previous work list [0] and prioritizing future work for
the SWG based on the community's interest in that list
3) Creating space for drop-in/community feedback on what we should work on
next, and how it should be prioritized.

1) The TC vision is on hold, and won't happen at the PTG. Since so many
contributors would have been in other cross-project sessions throughout
Monday, we thought it would be wiser to hold off and do a facilitated
visioning session at another time, when the TC might be able to fully focus
on it for a day. We're still working on timing for that, but I will keep
everyone posted once I know what we've settled on.

2 and 3 are listed on our etherpad/vision for the PTG [1], with 2 currently
scheduled in the morning, and 3 planned for a general availability for
drop-in feedback in the afternoon (as we assume more people will have more
sessions and less flexible schedules at that point, so drop-ins will be the
easiest way to allow anyone who is interested in what we do to stop by and
participate).

I'd love feedback on that, scheduling wise, from any folks interested in
participating. I'd also love to hear any other ideas about what we could
cover during our day that might be useful.

I spent quite a bit of our cross-project session at the Ocata Summit doing
a quick recap of the concept of Servant Leadership, and it seemed like
plenty of attendees appreciated that. Would a series of quick recaps of
basic leadership concepts (Servant Leadership, Visioning, Principles &
Culture, and Change Management) be useful? If anyone is interested in
having a small discussion covering some of those topics, I'd love to hear
from you!

Thanks so much, everyone - can't wait to see you all in Atlanta!

-colette/gothicmindfood


[0] https://etherpad.openstack.org/p/swg-short-list-deliverables
[1] https://etherpad.openstack.org/p/AtlantaPTG-SWG


Re: [openstack-dev] [All projects that use Alembic] Absence of pk on alembic_version table

2017-01-23 Thread Ihar Hrachyshka
An alternative could also be, for Newton and earlier, to release a
note saying that operators should not run the code against ENFORCING
galera mode. What are the reasons to enable that mode in OpenStack
scope that would not allow operators to live without it for another
cycle?

Ihar

On Mon, Jan 23, 2017 at 10:12 AM, Anna Taraday
 wrote:
> Hello everyone!
>
> Guys in our team faced an issue when they tried to run alembic migrations on
> Galera with ENFORCING mode. [1]
>
> This was an issue with Alembic [2], which was quickly fixed by Mike Bayer
> (many thanks!), and a new version of Alembic was released [3].
> The global requirements have been updated [4].
>
> I think it is desirable to fix this for Newton at least. We cannot bump
> requirements for Newton, so a hotfix could be adding a primary key to this
> table in the first migration, as proposed in [5]. Any other ideas?
>
> [1] - https://bugs.launchpad.net/neutron/+bug/1655610
> [2] - https://bitbucket.org/zzzeek/alembic/issues/406
> [3] - http://alembic.zzzcomputing.com/en/latest/changelog.html#change-0.8.10
> [4] - https://review.openstack.org/#/c/423118/
> [5] - https://review.openstack.org/#/c/419320/
>
>
> --
> Regards,
> Ann Taraday
>


Re: [openstack-dev] [nova] Announcing my PTL candidacy for Pike

2017-01-23 Thread Davanum Srinivas
+1 to "mentoring people that are newer to Nova but are stepping
into leadership positions" Matt.

Thanks,
Dims

On Mon, Jan 23, 2017 at 1:54 PM, Matt Riedemann  wrote:
> Hi everyone,
>
> This is my self-nomination to continue running as Nova PTL for the Pike
> cycle.
>
> If elected, this would be a third term for me as Nova PTL. In Ocata I
> thought that I did a better job of keeping on top of a broader set of
> efforts than I was able to in Newton, including several non-priority
> vendor-specific blueprints.
>
> I have also tried to make regular communication a priority. The topics vary,
> but in general I try to keep people informed about the release schedule,
> upcoming deadlines, areas that need attention, and recaps of smaller group
> discussions back to the wider team. We have a widely distributed team and a
> lot of groups are impacted by decisions made within Nova so it's important
> to continue with that communication. Despite my best efforts I have also
> learned in Ocata that we need to get earlier feedback on changes which
> impact deployment tooling, and make documentation of such changes a high
> priority earlier in the development of new features so that people working
> on tooling are not left in the dark.
>
> Ocata has been a tough release, and I think we knew that was going to be the
> case going in. It was a shorter cycle but still had some very high-priority
> and high-visibility work items such as integrating the placement service
> with the scheduler and further integrating support for cells v2, along with
> making both of those required in a Nova deployment for Ocata. We also had to
> deal with losing some key people and filling those spots. But people have
> stepped up and we still made some incredible progress in Ocata despite the
> difficulties.
>
> For Pike I want to focus on the following:
>
> * Continue integration of the placement service into making scheduling
> decisions, including working with Neutron routed networks and work on
> defining traits for resource providers so we can model the qualitative
> aspects of resources in making placement decisions.
>
> * Continue working on cells v2 for multi-cell support including
> investigating the concept of auto-registration of compute nodes to simplify
> deployment automation, and also focus on multi-cell testing and Searchlight
> integration.
>
> * Work on volume multi-attach support with the new Cinder v3 APIs introduced
> in Ocata for creating and deleting volume attachments. I think we are
> finally at a place where we can make some solid progress on the Nova side
> with improved understanding between the Nova and Cinder teams.
>
> * There are going to be several efforts going on across several projects in
> the Pike release, including modeling capabilities in the REST API, and Nova
> is going to have to be a part of those efforts. We also need to get teams
> together to figure out what are the issues with hierarchical quotas and what
> progress can be made there since that is a high priority item that lots of
> operators have been requesting for a long time.
>
> In general, we are going to have to improve our review throughput,
> especially given the change in resources we experienced in Ocata. To me, a
> lot of this will have to do with mentoring people that are newer to Nova but
> are stepping into leadership positions, and having a shorter feedback loop
> on "leveling up".
>
> To summarize, I aim to be of service to those using and contributing to Nova
> and want to continue doing that in the PTL role for the project in the Pike
> release if you will have me for another round.
>
> Thank you for your consideration,
>
> Matt
>



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [nova] Announcing my PTL candidacy for Pike

2017-01-23 Thread Matt Riedemann

Hi everyone,

This is my self-nomination to continue running as Nova PTL for the Pike 
cycle.


If elected, this would be a third term for me as Nova PTL. In Ocata I 
thought that I did a better job of keeping on top of a broader set of 
efforts than I was able to in Newton, including several non-priority 
vendor-specific blueprints.


I have also tried to make regular communication a priority. The topics 
vary, but in general I try to keep people informed about the release 
schedule, upcoming deadlines, areas that need attention, and recaps of 
smaller group discussions back to the wider team. We have a widely 
distributed team and a lot of groups are impacted by decisions made 
within Nova so it's important to continue with that communication. 
Despite my best efforts I have also learned in Ocata that we need to get 
earlier feedback on changes which impact deployment tooling, and make 
documentation of such changes a high priority earlier in the development 
of new features so that people working on tooling are not left in the dark.


Ocata has been a tough release, and I think we knew that was going to be 
the case going in. It was a shorter cycle but still had some very 
high-priority and high-visibility work items such as integrating the 
placement service with the scheduler and further integrating support for 
cells v2, along with making both of those required in a Nova deployment 
for Ocata. We also had to deal with losing some key people and filling 
those spots. But people have stepped up and we still made some 
incredible progress in Ocata despite the difficulties.


For Pike I want to focus on the following:

* Continue integration of the placement service into making scheduling 
decisions, including working with Neutron routed networks and work on 
defining traits for resource providers so we can model the qualitative 
aspects of resources in making placement decisions.


* Continue working on cells v2 for multi-cell support including 
investigating the concept of auto-registration of compute nodes to 
simplify deployment automation, and also focus on multi-cell testing and 
Searchlight integration.


* Work on volume multi-attach support with the new Cinder v3 APIs 
introduced in Ocata for creating and deleting volume attachments. I 
think we are finally at a place where we can make some solid progress on 
the Nova side with improved understanding between the Nova and Cinder teams.


* There are going to be several efforts going on across several projects 
in the Pike release, including modeling capabilities in the REST API, 
and Nova is going to have to be a part of those efforts. We also need to 
get teams together to figure out what are the issues with hierarchical 
quotas and what progress can be made there since that is a high priority 
item that lots of operators have been requesting for a long time.


In general, we are going to have to improve our review throughput, 
especially given the change in resources we experienced in Ocata. To me, 
a lot of this will have to do with mentoring people that are newer to 
Nova but are stepping into leadership positions, and having a shorter 
feedback loop on "leveling up".


To summarize, I aim to be of service to those using and contributing to 
Nova and want to continue doing that in the PTL role for the project in 
the Pike release if you will have me for another round.


Thank you for your consideration,

Matt



[openstack-dev] [All projects that use Alembic] Absence of pk on alembic_version table

2017-01-23 Thread Anna Taraday
Hello everyone!

Guys in our team faced an issue when they tried to run alembic migrations on
Galera with ENFORCING mode. [1]

This was an issue with Alembic [2], which was quickly fixed by Mike Bayer
(many thanks!), and a new version of Alembic was released [3].
The global requirements have been updated [4].

I think it is desirable to fix this for Newton at least. We cannot bump
requirements for Newton, so a hotfix could be adding a primary key to this
table in the first migration, as proposed in [5]. Any other ideas?

[1] - https://bugs.launchpad.net/neutron/+bug/1655610
[2] - https://bitbucket.org/zzzeek/alembic/issues/406
[3] - http://alembic.zzzcomputing.com/en/latest/changelog.html#change-0.8.10
[4] - https://review.openstack.org/#/c/423118/
[5] - https://review.openstack.org/#/c/419320/
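
For readers not familiar with the failure mode: the proposed hotfix boils
down to creating the alembic_version bookkeeping table with a primary key
from the start, since Galera's ENFORCING mode rejects tables without one. A
minimal, self-contained sketch of what that table definition amounts to
(sqlite3 is used here purely for illustration; the table and column names
are Alembic's defaults, and the constraint name is illustrative):

```python
import sqlite3

# Galera's ENFORCING mode rejects tables without a primary key, so the
# fix is to declare version_num as the PK when the table is first created.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE alembic_version ("
    "    version_num VARCHAR(32) NOT NULL,"
    "    CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)"
    ")"
)

# Confirm that version_num participates in the primary key (pk column > 0
# in PRAGMA table_info output: cid, name, type, notnull, dflt_value, pk).
cols = conn.execute("PRAGMA table_info(alembic_version)").fetchall()
pk_cols = [name for _, name, _, _, _, pk in cols if pk > 0]
print(pk_cols)  # ['version_num']
```

In an actual Alembic migration this would be done with the `op` API rather
than raw SQL, and newer Alembic releases can create the version table with
a primary key themselves.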


-- 
Regards,
Ann Taraday


Re: [openstack-dev] [kolla-ansible] [kolla] Am I doing this wrong?

2017-01-23 Thread Kris G. Lindgren
Hi Paul,

Thanks for responding.

> The fact gathering on every server is a compromise taken by Kolla to
> work around limitations in Ansible. It works well for the majority of
> situations; for more detail and potential improvements on this please
> have a read of this post:
> http://lists.openstack.org/pipermail/openstack-dev/2016-November/107833.html

So my problem with this is the logging in to the compute nodes. While this may
be fine for a smaller deployment, logging into thousands, or even hundreds, of
nodes via ansible to gather facts, just to do a deployment against 2 or 3 of
them, is not tenable. Additionally, in our more heavily audited environments
(PKI/PCI) it will cause our auditors heartburn.
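
The post linked above discusses fact caching as one mitigation; in
ansible.cfg terms it is roughly the following sketch (these are standard
Ansible options, but the path and timeout values are illustrative and should
be checked against your Ansible version):

```ini
# ansible.cfg -- cache gathered facts so repeat runs against a subset of
# hosts do not have to log into every node again.
[defaults]
gathering = smart                               # skip hosts with cached facts
fact_caching = jsonfile
fact_caching_connection = /etc/ansible/facts_cache
fact_caching_timeout = 86400                    # seconds before cache expires
```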

> I'm not quite following you here, the config templates from
> kolla-ansible are one of its stronger pieces imo, they're reasonably
> well tested and maintained. What leads you to believe they shouldn't be
> used?
>
> > * Certain parts of it are 'reference only' (the config tasks),
>  > are not recommended
>
> This is untrue - kolla-ansible is designed to stand up a stable and
> usable OpenStack 'out of the box'. There are definitely gaps in the
> operator type tasks as you've highlighted, but I would not call it
> ‘reference only'.

http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2017-01-09.log.html#t2017-01-09T21:33:15

This is where we were told the config stuff was “reference only”?

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy


Re: [openstack-dev] [cinder] Can I use lvm thin provisioning in mitaka?

2017-01-23 Thread Chris Friesen

On 01/23/2017 11:29 AM, Marco Marino wrote:

At the moment I have:
volume_clear=zero
volume_clear_size=30 <-- MBR will be deleted here!
with thick provisioning
I think this can be a good solution in my case. Let me know what you think
about this.


If security is not a concern then that's fine.

Chris



Re: [openstack-dev] [cinder] Can I use lvm thin provisioning in mitaka?

2017-01-23 Thread Marco Marino
At the moment I have:
volume_clear=zero
volume_clear_size=30 <-- MBR will be deleted here!
with thick provisioning
I think this can be a good solution in my case. Let me know what you
think about this.
Thank you
Marco
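
For reference, the two approaches discussed in this thread map to
cinder.conf options roughly as follows. This is a sketch: the backend
section name is made up, and the option names should be verified against
your Cinder release:

```ini
# cinder.conf -- illustrative LVM backend section ("lvm-1" is a placeholder)
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# Thin provisioning: deletes are fast and no explicit wipe is needed.
lvm_type = thin
# Thick provisioning alternative: keep an explicit wipe policy instead.
# lvm_type = default
# volume_clear = zero
# volume_clear_size = 30    # wipe only the first 30 MiB (covers the MBR)
```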



2017-01-23 17:21 GMT+01:00 Chris Friesen :

> On 01/21/2017 03:00 AM, Marco Marino wrote:
>
>> Really thank you!! It's difficult for me find help on cinder and I think
>> this is
>> the right place!
>> @Duncan, if my goal is to speeding up bootable volume creation, I can
>> avoid to
>> use thin provisioning. I can use image cache and in this way the
>> "retrieve from
>> glance" and the "qemu-img convert to RAW" parts will be skipped. Is this
>> correct? And whit this method I don't have a performancy penalty
>> mentioned by Chris.
>> @Chris: Yes, I'm using volume_clear option and volume deletion is very
>> fast
>>
>
> Just to be clear, you should not use "volume_clear=none" unless you are
> using thin provisioning or you do not care about security.
>
> If you have "volume_clear=none" with thick LVM, then newly created cinder
> volumes may contain data written to the disk via other cinder volumes that
> were later deleted.
>
>
> Chris
>


Re: [openstack-dev] [octavia] Newton Octavia lbaas creation error

2017-01-23 Thread Santhosh Fernandes
Thanks Michael,

After adding the service_auth section in neutron.conf I was able to overcome
this error. Now I am getting a new exception: "Unable to retrieve ready
devices". Here is the stack trace:

http://paste.openstack.org/show/596019/

Any clues on how to resolve this issue?

Thanks,
Santhosh
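
For anyone hitting the same first error: the [service_auth] section referred
to above typically looks something like the sketch below. The exact option
names vary by release and all values here are placeholders, so verify them
against the neutron-lbaas Octavia driver documentation for your deployment:

```ini
# neutron.conf -- illustrative [service_auth] section for the Octavia driver;
# credentials and endpoint are placeholders.
[service_auth]
auth_url = http://127.0.0.1:5000/v2.0
admin_user = admin
admin_password = password
admin_tenant_name = admin
auth_version = 2
```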


On Mon, Jan 23, 2017 at 9:48 PM, Michael Johnson 
wrote:

> Santhosh,
>
>
>
> From the traceback below it looks like the neutron process is unable to
> access keystone.
>
>
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource DriverError:
> Driver error: Unable to establish connection to
> http://127.0.0.1:5000/v2.0/tokens: HTTPConnectionPool(host='127.0.0.1',
> port=5000): Max retries exceeded with url: /v2.0/tokens (Caused by
> NewConnectionError(' object at 0x7f9f36b91790>: Failed to establish a new connection: [Errno
> 111] ECONNREFUSED',))
>
>
>
> So, I would check the neutron.conf settings for keystone like the
> user/password and that the neutron process can reach keystone on
> http://127.0.0.1:5000. Maybe there is a bad security group or keystone
> isn’t running?
>
>
>
> Michael
>
>
>
> *From:* Santhosh Fernandes [mailto:santhosh.fernan...@gmail.com]
> *Sent:* Sunday, January 22, 2017 10:48 AM
> *To:* openstack-dev@lists.openstack.org; Michael Johnson <
> johnso...@gmail.com>
> *Subject:* [openstack-dev][octavia] Newton Octavia lbaas creation error
>
>
>
> Hi all,
>
>
>
> I am getting driver connection error while creation the LB from octavia.
>
>
>
> Stack trace -
>
>
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> [req-c6f19e4c-dfbd-4b1c-8198-925d05f9fcdf cf13e167c1884e7a8d63293a454ca774
> 48ab507e206741c4ba304efaf5209963 - - -] create failed: No details.
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource Traceback
> (most recent call last):
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/neutron/api/v2/resource.py", line 79, in resource
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource result =
> method(request=request, **args)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/neutron/api/v2/base.py", line 430, in create
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return
> self._create(request, body, **kwargs)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
> line 88, in wrapped
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource setattr(e,
> '_RETRY_EXCEEDED', True)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/oslo_utils/excutils.py", line 220, in __exit__
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> self.force_reraise()
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/oslo_utils/excutils.py", line 196, in force_reraise
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> six.reraise(self.type_, self.value, self.tb)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
> line 84, in wrapped
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return
> f(*args, **kwargs)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_db/api.py",
> line 151, in wrapper
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> ectxt.value = e.inner_exc
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/oslo_utils/excutils.py", line 220, in __exit__
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> self.force_reraise()
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-
> packages/oslo_utils/excutils.py", line 196, in force_reraise
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> six.reraise(self.type_, self.value, self.tb)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_db/api.py",
> line 139, in wrapper
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return
> f(*args, **kwargs)
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
> line 124, in wrapped
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource
> traceback.format_exc())
>
> 2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File
> "/openstack/venvs/neutron-14.0.3/

Re: [openstack-dev] [octavia] Nominating German Eichberger for Octavia core reviewer

2017-01-23 Thread Michael Johnson
With that vote we have quorum.  Welcome back German!

 

Michael

 

 

From: Kosnik, Lubosz [mailto:lubosz.kos...@intel.com] 
Sent: Sunday, January 22, 2017 12:24 PM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [octavia] Nominating German Eichberger for
Octavia core reviewer

 

+1, welcome back. 

 

Lubosz

 

On Jan 20, 2017, at 2:11 PM, Miguel Lavalle <mig...@mlavalle.com> wrote:

 

Well, I don't vote here but it's nice to see German back in the community.
Welcome!

 

On Fri, Jan 20, 2017 at 1:26 PM, Brandon Logan <brandon.lo...@rackspace.com> wrote:

+1, yes welcome back German.

On Fri, 2017-01-20 at 09:41 -0800, Michael Johnson wrote:
> Hello Octavia Cores,
>
> I would like to nominate German Eichberger (xgerman) for
> reinstatement as an
> Octavia core reviewer.
>
> German was previously a core reviewer for Octavia and neutron-lbaas
> as well
> as a former co-PTL for Octavia.  Work dynamics required him to step
> away
> from the project for a period of time, but now he has moved back into
> a
> position that allows him to contribute to Octavia.  His review
> numbers are
> back in line with other core reviewers [1] and I feel he would be a
> solid
> asset to the core reviewing team.
>
> Current Octavia cores, please respond with your +1 vote or an
> objections.
>
> Michael
>
> [1] http://stackalytics.com/report/contribution/octavia-group/90
>
>

 


 



Re: [openstack-dev] [keystone]Error while setting up a keystone development environment

2017-01-23 Thread Steve Martinelli
Why not use devstack [1] with a minimal local.conf (used to specify which
components to install)?

[1] http://docs.openstack.org/developer/devstack/

minimal local.conf:

[[local|localrc]]
RECLONE=yes

# Credentials
DATABASE_PASSWORD=openstack
ADMIN_PASSWORD=openstack
SERVICE_PASSWORD=openstack
RABBIT_PASSWORD=openstack

# Services
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,horizon

# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs

On Mon, Jan 23, 2017 at 8:39 AM, Daniel Gitu  wrote:

> Hello,
>
> I'm new to all this and I am in need of help to find out where I went
> wrong.
> This is a bit lengthy; I have left a blank space between the text and the
> error
> messages I received.
> I first set up and activated a virtual environment then cloned the keystone
> project into that environment.
> I then proceeded to cd into keystone and executed pip install -r
> requirements.txt and got the following errors:
>
> Failed building wheel for cryptography
> Failed cleaning build dir for cryptography
> Failed building wheel for netifaces
> Failed building wheel for pycrypto
> Command "/home/grenouille/openstack/bin/python -u -c "import
> setuptools, 
> tokenize;__file__='/tmp/pip-build-XqTJv_/cryptography/setup.py';f=getattr(tokenize,
> 'open', open)(__file__);code=f.read().replace('\r\n',
> '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record
> /tmp/pip-nFp6dT-record/install-record.txt --single-version-externally-managed
> --compile --install-headers /home/grenouille/openstack/inc
> lude/site/python2.7/cryptography" failed with error code 1 in
> /tmp/pip-build-XqTJv_/cryptography/
>
> The above errors were resolved by executing; sudo apt-get install
> build-essential libssl-dev libffi-dev python-dev and re-running
> pip install -r requirements.txt
> I ran sudo apt install tox and executed tox in the keystone directory
> As tox was installing dependencies the first line read:
>
> ERROR: invocation failed (exit code 1), logfile:
> /home/grenouille/openstack/keystone/.tox/docs/log/docs-1.log
> ERROR: actionid: docs
>
> The final error message read:
>
> ERROR: could not install deps
> [-r/home/grenouille/openstack/keystone/test-requirements.txt,
> .[ldap,memcache,mongodb]]; v =
> InvocationError('/home/grenouille/openstack/keystone/.tox/docs/bin/pip
> install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
> -r/home/grenouille/openstack/keystone/test-requirements.txt
> .[ldap,memcache,mongodb]
> (see /home/grenouille/openstack/keystone/.tox/docs/log/docs-1.log)', 1)
>
>
> Regards,
> Daniel.
>
>
>
>


Re: [openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-23 Thread Mark Baker
Hi Adrian,

I'm unlikely to attend the PTG myself, but James Page and other members of
our team will be there and can help cover. Certainly we'd like to better
understand what the cluster drivers need from the underlying operating
system and what we need to do to make sure Ubuntu does all those things
really well.



Best Regards


Mark Baker

On 18 January 2017 at 19:08, Adrian Otto  wrote:

>
> On Jan 18, 2017, at 10:48 AM, Mark Baker  wrote:
>
> Hi Adrian,
>
> Let me know if you have similar questions or concerns about Ubuntu Core
> with Magnum.
>
> Mark
>
>
> Thanks Mark! Is there any chance you, or an Ubuntu Core representative
> could join us for a discussion at the PTG, and/or an upcoming IRC team
> meeting? The topic of supported operating system images for our cluster
> drivers is a current topic of team conversation, and it would be helpful to
> have clarity on what (support/dev/test) resources upstream Linux packagers
> may be able to offer to help guide our conversation.
>
> To give you a sense, we do have a Suse specific k8s driver that has been
> maturing during the Ocata release cycle, our Mesos driver uses Ubuntu
> Server, our Swarm and k8s drivers use Fedora Atomic, and another newer k8s
> driver uses Fedora. The topic of Operating System (OS) support for cluster
> nodes (versus what OS containers are based on) is confusing for many cloud
> operators, so it would be helpful we worked on clarifying the options, and
> involve stakeholders from various OS distributions so that suitable options
> are available for those who prefer to form Magnum clusters from OS images
> composed from one particular OS or another.
>
> Ideally we could have this discussion at the PTG in Atlanta with
> participants like our core reviewers, Josh Berkus, you, our Suse
> contributors, and any other representatives from OS distribution
> organizations who may have an interest in cluster drivers for their
> respective OS types. If that discussion proves productive, we could also
> engage our wider contributor base in a followup IRC team meeting with a
> dedicated agenda item to cover what’s possible, and summarize what various
> stakeholders provided to us as input at the PTG. This might give us a
> chance to source further input from a wider audience than our PTG attendees.
>
> Thoughts?
>
> Thanks,
>
> Adrian
>
>
> On 18 Jan 2017 8:36 p.m., "Adrian Otto"  wrote:
>
>> Josh,
>>
>> > On Jan 18, 2017, at 10:18 AM, Josh Berkus  wrote:
>> >
>> > Magnum Devs:
>> >
>> > Is there going to be a magnum team meeting around OpenStack Summit in
>> > Boston?
>> >
>> > I'm the community manager for Atomic Host, so if you're going to have
>> > Magnum meetings, I'd like to send you some Atomic engineers to field any
>> > questions/issues at the Summit.
>>
>> Thanks for your question. We are planning to have our team design
>> meetings at the upcoming PTG event in Atlanta. We are not currently
>> planning to have any such meetings in Boston. With that said, we would very
>> much like to involve you in an important Atomic related design decision
>> that has recently surfaced, and would like to welcome you to an upcoming
>> Magnum IRC team meeting to meet you and explain our interests and concerns.
>> I do expect to attend the Boston summit myself, so I’m willing to meet you
>> and your engineers on behalf of our team if you are unable to attend the
>> PTG. I’ll reach out to you individually by email to explore our options for
>> an Atomic Host meeting agenda item in the mean time.
>>
>> Regards,
>>
>> Adrian
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia] PTL candidacy for Pike series

2017-01-23 Thread Michael Johnson

Hello Octavia folks,

I wanted to let you know that I am running for the PTL position again for
Pike.

My candidacy statement is available here:
https://git.openstack.org/cgit/openstack/election/plain/candidates/pike/Octavia/johnsom.txt

Thank you for your consideration,

Michael



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] Newton Octavia LBaaS creation error

2017-01-23 Thread Michael Johnson
Santhosh,

 

From the traceback below, it looks like the neutron process is unable to access
keystone.

 

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource DriverError: Driver error: Unable to establish connection to http://127.0.0.1:5000/v2.0/tokens: HTTPConnectionPool(host='127.0.0.1', port=5000): Max retries exceeded with url: /v2.0/tokens (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] ECONNREFUSED',))

 

So I would check the neutron.conf keystone settings, like the user/password,
and that the neutron process can reach keystone on http://127.0.0.1:5000. Maybe
there is a bad security group, or keystone isn’t running?
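A quick way to reproduce that ECONNREFUSED symptom outside of neutron is a plain TCP connect. This is just a stdlib Python sketch; the host and port mirror the endpoint in the traceback:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if something is listening at host:port.

    [Errno 111] ECONNREFUSED, as in the traceback above, means the TCP
    connect fails because nothing is listening where keystone is expected.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_connect("127.0.0.1", 5000) should be True on a healthy node
```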

 

Michael



 

From: Santhosh Fernandes [mailto:santhosh.fernan...@gmail.com] 
Sent: Sunday, January 22, 2017 10:48 AM
To: openstack-dev@lists.openstack.org; Michael Johnson 
Subject: [openstack-dev][octavia] Newton Octavia LBaaS creation error

 

Hi all,

 

I am getting driver connection error while creation the LB from octavia. 

 

Stack trace - 

 

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource [req-c6f19e4c-dfbd-4b1c-8198-925d05f9fcdf cf13e167c1884e7a8d63293a454ca774 48ab507e206741c4ba304efaf5209963 - - -] create failed: No details.
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource Traceback (most recent call last):
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 79, in resource
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     result = method(request=request, **args)
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/api/v2/base.py", line 430, in create
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     return self._create(request, body, **kwargs)
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py", line 88, in wrapped
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     setattr(e, '_RETRY_EXCEEDED', True)
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     self.force_reraise()
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py", line 84, in wrapped
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     ectxt.value = e.inner_exc
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     self.force_reraise()
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py", line 124, in wrapped
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     traceback.format_exc())
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     self.force_reraise()
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File "/openstac

Re: [openstack-dev] [cinder] Can I use lvm thin provisioning in mitaka?

2017-01-23 Thread Chris Friesen

On 01/21/2017 03:00 AM, Marco Marino wrote:

Really thank you!! It's difficult for me to find help on cinder and I think this is
the right place!
@Duncan, if my goal is to speed up bootable volume creation, I can avoid
using thin provisioning. I can use the image cache, and in this way the "retrieve from
glance" and "qemu-img convert to RAW" parts will be skipped. Is this
correct? And with this method I don't have the performance penalty mentioned by
Chris.
@Chris: Yes, I'm using the volume_clear option and volume deletion is very fast


Just to be clear, you should not use "volume_clear=none" unless you are using 
thin provisioning or you do not care about security.


If you have "volume_clear=none" with thick LVM, then newly created cinder 
volumes may contain data written to the disk via other cinder volumes that were 
later deleted.
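For reference, this trade-off is configured in the backend section of cinder.conf; the section name below is an example, and exact defaults may vary by release:

```ini
[lvmdriver-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
lvm_type = default        ; thick LVM: keep zeroing on delete
volume_clear = zero       ; wipe deleted volumes so data cannot leak
; With lvm_type = thin, volume_clear is unnecessary: thin pools never
; hand previously written blocks back to a newly created volume.
```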


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Feature freeze coming today

2017-01-23 Thread Adrian Otto
Team,

I will be starting our feature freeze today. We have a few more patches to 
consider for merge before we enter the freeze. I’ll let you all know when each 
has been considered, and we are ready to begin the freeze.

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] ocata client causes feature regression with pre-ocata server

2017-01-23 Thread Tim Hinrichs
At some point the client did make multiple API calls for some operations anyway.
I think (c) seems right too.

Tim

On Sun, Jan 22, 2017 at 1:15 AM Monty Taylor  wrote:

> On 01/21/2017 04:07 AM, Eric K wrote:
> > Hi all,
> >
> > I was getting ready to request release of congress client, but I
> > remembered that the new client causes feature regression if used with
> > older versions of congress. Specifically, new client with pre-Ocata
> > congress cannot refer to datasource by name, something that could be done
> > with pre-Ocata client.
> >
> > Here's the patch of interest: https://review.openstack.org/#/c/407329/
> >
> > A few questions:
> >
> > Are we okay with the regression? Seems like it could cause a fair bit of
> > annoyance for users.
>
> This is right. New client lib should always continue to work with old
> server. (A user should be able to just pip install python-congressclient
> and have it work regardless of when their operator decides to upgrade or
> not upgrade their cloud)
>
> >1. If we're okay with that, what's the best way to document that
> > pre-Ocata congress should be used with the pre-Ocata client?
> >2. If not, how do we avoid the regression? Here are some candidates I can
> > think of.
> >   a. Client detects the congress version and acts accordingly. I don't
> > think this is possible, nor desirable for the client to be concerned with
> > the congress version, not just the API version.
> >   b. Release backward-compatible API version 1.1 that supports
> > getting a datasource by name_or_id. Then the client will take different paths
> > depending on API version.
> >   c. If the datasource is not found, the client falls back on the old method
> > of retrieving the list of datasources to resolve the name into a UUID. This
> > would work, but causes an extra API & DB call in many cases.
> >   d. Patch old versions of Congress to support getting a datasource
> > by name_or_id. Essentially, it was always a bug that the API didn't
> > support name_or_id.
>
> I'm a fan of d - but I don't believe it will help - since the problem
> will still manifest for users who do not have control over the server
> installation.
>
> I'd suggest c is the most robust. It _is_ potentially more expensive -
> but that's a good motivation for the deployer to upgrade their
> installation of congress without negatively impacting the consumer in
> the meantime.
>
> Monty
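Option (c) amounts to a small client-side fallback. The sketch below uses illustrative method and exception names, not the actual python-congressclient API:

```python
class NotFound(Exception):
    """Stand-in for the client's 404/not-found error."""


def resolve_datasource(client, name_or_id):
    """Option (c): try the direct lookup first; on older servers that
    only accept UUIDs, fall back to listing datasources and matching
    the name client-side (at the cost of one extra API/DB call)."""
    try:
        return client.get_datasource(name_or_id)
    except NotFound:
        for ds in client.list_datasources():
            if name_or_id in (ds["name"], ds["id"]):
                return ds
        raise
```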
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage Hands-On Webinar

2017-01-23 Thread Afek, Ifat (Nokia - IL)
Hi,

Nokia is hosting a Webinar about Vitrage, tomorrow January 24th at 10:00 a.m. 
EDT/ 5:00 pm, Northern Europe Time.

This will be a hands-on lab, where Dan Offek (a Vitrage core developer) will 
present an overview of what Vitrage is all about, and guide you through the 
process of installing, configuring and experimenting with Vitrage.

You are welcome to register: http://go.nokia.com/UotW0pK01Y0JQR02d8000a4 

Best Regards,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-23 Thread lương hữu tuấn
Hi guys,

I am providing some information about the results of testing YAQL performance
on my devstack stable/newton with 6 GB of RAM. The workflow I created is
below:

#
input:
  - size
  - number_of_handovers

tasks:
  generate_input:
    action: std.javascript
    input:
      context:
        size: <% $.size %>
      script: |
        result = {}
        for(i=0; i < $.size; i++) {
          result["key_" + i] = {
            "alma": "korte"
          }
        }
        return result
    publish:
      data: <% task(generate_input).result %>
    on-success:
      - process

  process:
    action: std.echo
    input:
      output: <% $.data %>
    publish:
      data: <% task(process).result %>
      number_of_handovers: <% $.number_of_handovers - 1 %>
    on-success:
      - process: <% $.number_of_handovers > 0 %>

##

I tested with a size of 1 and number_of_handovers of 50. The result
shows that the time for validating <% $.data %> is quite long. I do not
know whether this time is acceptable, but imagine that in our use case
the value of $.data could be very large. A couple of log lines are below:

INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function
evaluate finished in 11262.710 ms

INFO mistral.expressions.yaql_expression.InlineYAQLEvaluator [-]  Function
evaluate finished in 8146.324 ms

..

The average is around 10 s per evaluation.
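For reference, the generated payload and the "Function evaluate finished in N ms" log format can be approximated in plain Python. This is only a sketch for reproducing the data shape and timing, not Mistral's actual evaluator, and the size used here is arbitrary:

```python
import time

def generate_input(size):
    # Mirrors the std.javascript task: {"key_0": {"alma": "korte"}, ...}
    return {"key_%d" % i: {"alma": "korte"} for i in range(size)}

def timed(label, fn, *args):
    # Rough equivalent of the "Function evaluate finished in N ms" log line
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print("%s finished in %.3f ms" % (label, elapsed_ms))
    return result

data = timed("generate_input", generate_input, 10000)
```

Wrapping the YAQL (or Jinja2) evaluate call in the same `timed()` helper gives directly comparable numbers for the two engines.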

Br,

Tuan


On Mon, Jan 23, 2017 at 11:48 AM, lương hữu tuấn 
wrote:

> Hi Renat,
>
> For more details, I will go check on the CBAM machine and hope it is
> not deleted yet, since we ran it around a week ago.
> Another thing: Jinja2 ran 2-3 times faster than YAQL on the
> same test. I will provide more information later.
>
> Br,
>
> Tuan
>
> On Mon, Jan 23, 2017 at 8:32 AM, Renat Akhmerov 
> wrote:
>
>> Tuan,
>>
>> I don’t think that Jinja is something that Kirill is responsible for.
>> It’s just a coincidence that we in Mistral support both YAQL and Jinja. The
>> latter has been requested by many people so we finally did it.
>>
>> As for performance, could you please provide some numbers? When you
>> say “takes a lot of time”, how much time is it? For what kind of input? Why
>> do you think it is slow? What are your expectations? Provide as much info as
>> possible. After that we can ask the YAQL authors to comment and help if we
>> realize that the problem really exists.
>>
>> I’m interested in this too since I’m always looking for ways to speed
>> Mistral up.
>>
>> Thanks
>>
>> Renat Akhmerov
>> @Nokia
>>
>> On 18 Jan 2017, at 16:25, lương hữu tuấn  wrote:
>>
>> Hi Kirill,
>>
>> Do you have any information related to the performance of Jinja and Yaql
>> validation? With large inputs, yaql runs quite slowly in our case,
>> so we plan to switch to Jinja.
>>
>> Br,
>>
>> @Nokia/Tuan
>>
>> On Tue, Jan 17, 2017 at 3:02 PM, lương hữu tuấn 
>> wrote:
>>
>>> Hi Kirill,
>>>
>>> Thank you for your information. I hope we will have more information
>>> about it. Just keep in touch when you guys in Mirantis have some
>>> performance results about Yaql.
>>>
>>> Br,
>>>
>>> @Nokia/Tuan
>>>
>>> On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev 
>>> wrote:
>>>
 I think the fuel team encountered similar problems; I’d advise asking them.
 Also Stan (author of yaql) might shed some light on the problem =)

 --
 Kirill Zaitsev
 Murano Project Tech Lead
 Software Engineer at
 Mirantis, Inc

 On 17 January 2017 at 15:11:52, lương hữu tuấn (tuantulu...@gmail.com)
 wrote:

 Hi,

 We are now using yaql in Mistral, and what we see is that the process of
 validating the yaql expression of an input takes a lot of time, especially
 with big inputs. Do you guys have any information about the performance of
 yaql?

 Br,

 @Nokia/Tuan
 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron] PTL Candidacy

2017-01-23 Thread Jay Pipes

On 01/22/2017 09:35 PM, Kevin Benton wrote:

I would like to propose my candidacy for the Neutron PTL.

I have been contributing to Neutron since the Havana development
cycle working for a network vendor and then a distribution vendor.
I have been a core reviewer since the Kilo development cycle and
I am on the Neutron stable maintenance team as well as the drivers
team.

I have a few priorities that I would focus on as PTL:

* Cleanup and simplification of the existing code: In addition to
supporting the ongoing work of converting all data access into OVO
models, I would like the community to continue breaking down code using
the callback event system. We should eliminate as many
extension-specific mixins and special-cases from the core as possible so
it becomes very easy to reason about and stable from a code-churn
perspective. This approach forces us to add appropriate event
notifications to the core to build service plugins and drivers out of
tree without requiring modifications to the core.


++ Great initiative, Kevin.

-jay
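The callback event system Kevin describes can be illustrated with a tiny publish/subscribe registry. This is a stand-in sketch, not Neutron's actual `neutron_lib.callbacks` API:

```python
# Minimal registry in the spirit of the callback event system described
# above: core code emits events, service plugins subscribe to them,
# so no extension-specific mixins are needed in the core.
_callbacks = {}

def subscribe(callback, resource, event):
    _callbacks.setdefault((resource, event), []).append(callback)

def notify(resource, event, **kwargs):
    for cb in _callbacks.get((resource, event), []):
        cb(resource, event, **kwargs)

# An out-of-tree plugin reacting to port creation, with no core changes:
seen = []
subscribe(lambda r, e, **kw: seen.append((r, e, kw.get("port_id"))),
          "port", "after_create")
notify("port", "after_create", port_id="p1")
```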

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [operators] Optional resource asking or not?

2017-01-23 Thread Sylvain Bauza


Le 23/01/2017 15:11, Jay Pipes a écrit :
> On 01/22/2017 04:40 PM, Sylvain Bauza wrote:
>> Hey folks,
>>
>> tl;dr: should we GET /resource_providers for only the related resources
>> that correspond to enabled filters ?
> 
> No. Have administrators set the allocation ratios for the resources they
> do not care about exceeding capacity to a very high number.
> 
> If someone previously removed a filter, that doesn't mean that the
> resources were not consumed on a host. It merely means the admin was
> willing to accept a high amount of oversubscription. That's what the
> allocation_ratio is for.
> 
> The flavor should continue to have a consumed disk/vcpu/ram amount,
> because the VM *does actually consume those resources*. If the operator
> doesn't care about oversubscribing one or more of those resources, they
> should set the allocation ratios of those inventories to a high value.
> 
> No more adding configuration options for this kind of thing (or in this
> case, looking at an old configuration option and parsing it to see if a
> certain filter is listed in the list of enabled filters).
> 
> We have a proper system of modeling these data-driven decisions now, so
> my opinion is we should use it and ask operators to use the placement
> REST API for what it was intended.
> 

I know your point, but please consider mine.
What if an operator disabled CoreFilter in Newton and wants to upgrade
to Ocata?
All of that implementation landing so close to the deadline makes me
nervous, and I really want a seamless path for operators now using the
placement service.

Also, like I said in my longer explanation, we would need to modify a
ton of assertions in our tests to say "meh, don't use all the
filters, just these ones". Pretty risky so close to feature freeze.

-Sylvain


> Best,
> -jay
> 
>> Explanation below why even if I
>> know we have a current consensus, maybe we should discuss again about it.
>>
>>
>> I'm still trying to implement https://review.openstack.org/#/c/417961/
>> but when trying to get the functional job being +1, I discovered that we
>> have at least one functional test [1] asking for just the RAMFilter (and
>> not for VCPUs or disks).
>>
>> Given the current PS is asking for *all* both CPU, RAM and disk, it's
>> trampling the current test by getting a NoValidHost.
>>
>> Okay, I could just modify the test and make sure we have enough
>> resources for the flavors but I actually now wonder if that's all good
>> for our operators.
>>
>> I know we have a consensus saying that we should still ask for both CPU,
>> RAM and disk at the same time, but I imagine our users coming back to us
>> saying "eh, look, I'm no longer able to create instances even if I'm not
>> using the CoreFilter" for example. It could be a bad day for them and
>> honestly, I'm not sure just adding documentation or release notes would
>> help them.
>>
>> What are you thinking if we say that for only this cycle, we still try
>> to only ask for resources that are related to the enabled filters ?
>> For example, say someone is disabling CoreFilter in the conf opt, then
>> the scheduler shouldn't ask for VCPUs to the Placement API.
>>
>> FWIW, we have another consensus about not removing
>> CoreFilter/RAMFilter/MemoryFilter because the CachingScheduler is still
>> using them (and not calling the Placement API).
>>
>> Thanks,
>> -Sylvain
>>
>> [1]
>> https://github.com/openstack/nova/blob/de0eff47f2cfa271735bb754637f979659a2d91a/nova/tests/functional/test_server_group.py#L48
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [operators] Optional resource asking or not?

2017-01-23 Thread Sylvain Bauza


Le 22/01/2017 22:40, Sylvain Bauza a écrit :
> Hey folks,
> 
> tl;dr: should we GET /resource_providers for only the related resources
> that correspond to enabled filters ? Explanation below why even if I
> know we have a current consensus, maybe we should discuss again about it.
> 
> 
> I'm still trying to implement https://review.openstack.org/#/c/417961/
> but when trying to get the functional job being +1, I discovered that we
> have at least one functional test [1] asking for just the RAMFilter (and
> not for VCPUs or disks).
> 
> Given the current PS is asking for *all* both CPU, RAM and disk, it's
> trampling the current test by getting a NoValidHost.
> 
> Okay, I could just modify the test and make sure we have enough
> resources for the flavors but I actually now wonder if that's all good
> for our operators.
> 
> I know we have a consensus saying that we should still ask for both CPU,
> RAM and disk at the same time, but I imagine our users coming back to us
> saying "eh, look, I'm no longer able to create instances even if I'm not
> using the CoreFilter" for example. It could be a bad day for them and
> honestly, I'm not sure just adding documentation or release notes would
> help them.
> 
> What are you thinking if we say that for only this cycle, we still try
> to only ask for resources that are related to the enabled filters ?
> For example, say someone is disabling CoreFilter in the conf opt, then
> the scheduler shouldn't ask for VCPUs to the Placement API.
> 
> FWIW, we have another consensus about not removing
> CoreFilter/RAMFilter/MemoryFilter because the CachingScheduler is still
> using them (and not calling the Placement API).
> 

A quick follow-up:
I first thought of operators who already disable the DiskFilter
because they don't trust its calculations for shared disk.
We also have people who don't run the CoreFilter because they prefer
having only the compute claims do the math, and they don't care about
allocation ratios at all.


All those people would be trampled if we now begin to count resources
based on things they explicitly disabled.
That's why I updated my patch series and wrote a quick verification of
which filter is running:

https://review.openstack.org/#/c/417961/16/nova/scheduler/host_manager.py@640

Ideally, I would refine that by modifying the BaseFilter structure to
add a method that returns the resource amounts needed by the
RequestSpec, and that also disables the filter so it always returns
true (no need to double-check the filter if the placement service
already said this compute is sane). That way, we could slowly but
surely keep the existing interface for optionally verifying resources
(i.e. people would still use filters) while the new logic is done by
the Placement engine.
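That idea could look roughly like this; class and method names are illustrative, not the real Nova scheduler code:

```python
class BaseFilter(object):
    """Sketch: each filter advertises the resource class it guards, so
    the scheduler can build the Placement query from enabled filters
    only (illustrative names, not Nova's actual BaseFilter)."""

    def resources_from_request_spec(self, spec):
        return {}  # e.g. {"VCPU": ...}

    def host_passes(self, host, spec):
        # Placement already filtered on this resource class, so the
        # legacy check can short-circuit to True.
        return True

class CoreFilter(BaseFilter):
    def resources_from_request_spec(self, spec):
        return {"VCPU": spec["flavor"]["vcpus"]}

class RamFilter(BaseFilter):
    def resources_from_request_spec(self, spec):
        return {"MEMORY_MB": spec["flavor"]["ram"]}

def placement_resources(enabled_filters, spec):
    """Merge resource amounts from every *enabled* filter, so a
    disabled CoreFilter means VCPU is simply never requested."""
    resources = {}
    for f in enabled_filters:
        resources.update(f.resources_from_request_spec(spec))
    return resources
```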

Given the very short window, that can be done in Pike, but at least
operators wouldn't be impacted in the upgrade path.

-Sylvain

> Thanks,
> -Sylvain
> 
> [1]
> https://github.com/openstack/nova/blob/de0eff47f2cfa271735bb754637f979659a2d91a/nova/tests/functional/test_server_group.py#L48
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [operators] Optional resource asking or not?

2017-01-23 Thread Jay Pipes

On 01/22/2017 04:40 PM, Sylvain Bauza wrote:

Hey folks,

tl;dr: should we GET /resource_providers for only the related resources
that correspond to enabled filters ?


No. Have administrators set the allocation ratios for the resources they 
do not care about exceeding capacity to a very high number.


If someone previously removed a filter, that doesn't mean that the 
resources were not consumed on a host. It merely means the admin was 
willing to accept a high amount of oversubscription. That's what the 
allocation_ratio is for.


The flavor should continue to have a consumed disk/vcpu/ram amount, 
because the VM *does actually consume those resources*. If the operator 
doesn't care about oversubscribing one or more of those resources, they 
should set the allocation ratios of those inventories to a high value.
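Concretely, the allocation ratio lives on each resource provider's inventory record. The sketch below builds a PUT body for the placement inventories endpoint; the provider UUID, generation, and totals are placeholders to substitute with real values from a prior GET:

```python
import json

# Placeholder values -- substitute the real provider UUID and the
# current resource_provider_generation returned by a prior GET.
provider_uuid = "4f7c6844-0000-0000-0000-000000000000"
url = "/resource_providers/%s/inventories/VCPU" % provider_uuid

# PUT body: keep reporting the real total, but effectively stop VCPU
# from ever limiting scheduling by using a very high allocation_ratio.
body = {
    "resource_provider_generation": 7,
    "total": 24,
    "allocation_ratio": 9999.0,
    "min_unit": 1,
    "max_unit": 24,
    "reserved": 0,
    "step_size": 1,
}
print("PUT %s %s" % (url, json.dumps(body)))
```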


No more adding configuration options for this kind of thing (or in this 
case, looking at an old configuration option and parsing it to see if a 
certain filter is listed in the list of enabled filters).


We have a proper system of modeling these data-driven decisions now, so 
my opinion is we should use it and ask operators to use the placement 
REST API for what it was intended.


Best,
-jay

> Explanation below why even if I

know we have a current consensus, maybe we should discuss again about it.


I'm still trying to implement https://review.openstack.org/#/c/417961/
but when trying to get the functional job being +1, I discovered that we
have at least one functional test [1] asking for just the RAMFilter (and
not for VCPUs or disks).

Given the current PS is asking for *all* both CPU, RAM and disk, it's
trampling the current test by getting a NoValidHost.

Okay, I could just modify the test and make sure we have enough
resources for the flavors but I actually now wonder if that's all good
for our operators.

I know we have a consensus saying that we should still ask for both CPU,
RAM and disk at the same time, but I imagine our users coming back to us
saying "eh, look, I'm no longer able to create instances even if I'm not
using the CoreFilter" for example. It could be a bad day for them and
honestly, I'm not sure just adding documentation or release notes would
help them.

What are you thinking if we say that for only this cycle, we still try
to only ask for resources that are related to the enabled filters ?
For example, say someone is disabling CoreFilter in the conf opt, then
the scheduler shouldn't ask for VCPUs to the Placement API.

FWIW, we have another consensus about not removing
CoreFilter/RAMFilter/MemoryFilter because the CachingScheduler is still
using them (and not calling the Placement API).

Thanks,
-Sylvain

[1]
https://github.com/openstack/nova/blob/de0eff47f2cfa271735bb754637f979659a2d91a/nova/tests/functional/test_server_group.py#L48

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Nominating zhongshengping for core of the Puppet OpenStack modules

2017-01-23 Thread Iury Gregory
+1

2017-01-23 5:49 GMT-03:00 Ivan Berezovskiy :

> +1
>
> 2017-01-21 3:07 GMT+04:00 Emilien Macchi :
>
>> plus one
>>
>> On Fri, Jan 20, 2017 at 12:19 PM, Alex Schultz 
>> wrote:
>> > Hey Puppet Cores,
>> >
>> > I would like to nominate Zhong Shengping as a Core reviewer for the
>> > Puppet OpenStack modules.  He is an excellent contributor to our
>> > modules over the last several cycles. His stats for the last 90 days
>> > can be viewed here[0].
>> >
>> > Please response with your +1 or any objections. If there are no
>> > objections by Jan 27 I will add him to the core list.
>> >
>> > Thanks,
>> > -Alex
>> >
>> > [0] http://stackalytics.com/report/contribution/puppet%20opensta
>> ck-group/90
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Emilien Macchi
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Thanks, Ivan Berezovskiy
> Senior Deployment Engineer
> at Mirantis 
>
> slack: iberezovskiy
> skype: bouhforever
> phone: + 7-960-343-42-46 <+7%20960%20343-42-46>
>
>
>
>


-- 

~


*Att[]'s*
*Iury Gregory Melo Ferreira *
*Master student in Computer Science at UFCG*

*Part of the puppet-manager-core team in OpenStack*
*E-mail:  iurygreg...@gmail.com *
~


Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-23 Thread Sean Dague
On 01/23/2017 08:11 AM, Chris Dent wrote:
> On Wed, 18 Jan 2017, Chris Dent wrote:
> 
>> The review starts with the original text. The hope is that
>> commentary here in this thread and on the review will eventually
>> lead to the best document.
> 
> https://review.openstack.org/#/c/421846
> 
> There's been a bit of commentary on the review which I'll try to
> summarize below. I hope people will join in. There have been plenty
> of people talking about this but unless you provide your input
> either here or on the review it will be lost.
> 
> Most of the people who have commented on the review are generally in
> favor of what's there with a few nits on details:
> 
> * Header changes should be noted as breaking compatibility/stability
> * Changing an error code should be signalled as a breaking change
> * The concept of extensions should be removed in favor of "version
>   boundaries"
> * The examples section needs to be modernized (notably getting rid
>   of XML)
> 
> There's some concern that "security fixes" (as a justification for a
> breaking change) is too broad and could be used too easily.
> 
> These all seem to be good practical comments that can be integrated
> into a future version but they are, as a whole, based upon a model
> of stability based around versioning and "signalling" largely in the
> form of microversions. This is not necessarily bad, but it doesn't
> address the need to come to mutual terms about what stability,
> compatibility and interoperability really mean for both users and
> developers. I hope we can figure that out.
> 
> If my read of what people have said in the past is correct at least
> one definition of HTTP API stability/compatibility is:
> 
>Any extant client code that works should continue working.
> 
> If that's correct then a stability guideline needs to serve two
> purposes:
> 
> * Enumerate the rare circumstances in which that rule may be broken
>   (catastrophic security/data integrity problems?).
> * Describe how to manage inevitable change (e.g., microversion,
>   macroversions, versioned media types) and what "version
>   boundaries" are.
> 
> And if that's correct then what we are really talking about is
> reaching consensus on how (or if) to manage versions. And that's
> where the real contention lies. Do we want to commit to
> microversions across the board? If we assert that versioning is
> something we need across the board then certainly we don't want to
> be using different techniques from service to service do we?
> 
> If you don't think those things above are correct or miss some
> nuance, I hope you will speak up.
> 
> Here's some internally-conflicting, hippy-dippy, personal opinion
> from me, just for the sake of grist for the mill because nobody else
> is yet coughing up:
> 
> I'm not sure I fully accept the original assertion. If extant client
> code is poor, perhaps because it allows the client to make an
> unhealthy demand upon a service, maybe it shouldn't be allowed? If
> way A to do something exists, but way B comes along that is better
> are we doing a disservice to people's self-improvement by letting A
> continue? Breaking stuff can sometimes increase community
> engagement, whether that community is OpenStack at large or the
> community of users in any given deployment.

This counter assertion seems a lot like blaming the consumer for trying
to use the software, and getting something working. Then pulling that
working thing out from under them with no warning.

We all inherited a bunch of odd and poorly defined behaviors in the
system we're using. They were made because at the time they seemed like
reasonable tradeoffs, and a couple of years later we learned more, or
needed to address a different use case that people didn't consider before.

If you don't guarantee that existing applications will work in the
future (for some reasonable window of time), it's a massive turn off to
anyone deciding to use this interface at all. You suppress your user base.

If, when operators upgrade their OpenStack environments, their consumers
start complaining to them about things breaking, operators are going to
be much more reticent about upgrading anything, ever.

If upgrades get harder for any reason, then getting security fixes
or features out to operators/users is not possible. They stop taking
them. And once they are far enough back from master, it's going to be
easier for them to move to something else entirely than to upgrade
OpenStack, which at that point is effectively something else entirely
for their entire user base.

This is the spiral we are trying to avoid. It's the spiral we were in.
The one where people would show up to design summit sessions for years
saying "for the love of god can you people stop breaking everything
every release". The one where the only effective way to talk to 2
"OpenStack Clouds" and get them to do the same thing for even medium
complexity applications was to write your own intermediary layer.

This is a real issue.

Re: [openstack-dev] [tripleo] Atlanta PTG

2017-01-23 Thread John Trowbridge


On 01/21/2017 05:37 AM, Michele Baldessari wrote:
> Hi Emilien,
> 
> while not a design session per se, I would love to propose a short slot
> for TripleO CI Q&A, if we have some time left. In short, I'd like to be
> more useful around CI failures, but I lack the understanding of a few
> aspects of our current CI (promotion, when do images get built, etc.),
> that would benefit quite a bit from a short session where we have a few
> CI folks in the room that could answer questions or give some tips.
> I know of quite few other people that are in the same boat and maybe
> this will help a bit our current issue where only a few folks always
> chase CI issues.
> 
> If there is consensus (and some CI folks willing to attend ;) and time
> for this, I'll be happy to organize this and prepare a bunch of
> questions ideas beforehand.
> 

Great idea. We have a room for three days, so it is not like summit
where there is really limited time.

> Thoughts?
> Michele
> 
> On Wed, Jan 04, 2017 at 07:26:52AM -0500, Emilien Macchi wrote:
>> I would like to bring this topic up on your inbox, so we can continue
>> to make progress on the agenda. Feel free to follow existing examples
>> in the etherpad and propose a design session.
>>
>> Thanks,
>>
>> On Wed, Dec 21, 2016 at 9:06 AM, Emilien Macchi  wrote:
>>> General infos about PTG: https://www.openstack.org/ptg/
>>>
>>> Some useful informations about PTG/TripleO:
>>>
>>> * When? We have a room between Wednesday and Friday included.
>>> Important sessions will happen on Wednesday and Thursday. We'll
>>> probably have sessions on Friday, but it might be more hands-on and
>>> hackfest, where people can enjoy the day to work together.
>>>
>>> * Let's start to brainstorm our topics:
>>> https://etherpad.openstack.org/p/tripleo-ptg-pike
>>>   Feel free to add any topic, as soon as you can. We need to know asap
>>> which sessions will be shared with other projects (eg: tripleo/mistral,
>>> tripleo/ironic, tripleo/heat, etc).
>>>
>>>
>>> Please let us know any question or feedback,
>>> Looking forward to seeing you there!
>>> --
>>> Emilien Macchi
>>
>>
>>
>> -- 
>> Emilien Macchi
>>
> 



[openstack-dev] [keystone]Error while setting up a keystone development environment

2017-01-23 Thread Daniel Gitu

Hello,

I'm new to all this and need help finding out where I went wrong.
This is a bit lengthy; I have left a blank space between the text and
the error messages I received.
I first set up and activated a virtual environment then cloned the keystone
project into that environment.
I then proceeded to cd into keystone and executed pip install -r
requirements.txt and got the following errors:

Failed building wheel for cryptography
Failed cleaning build dir for cryptography
Failed building wheel for netifaces
Failed building wheel for pycrypto
Command "/home/grenouille/openstack/bin/python -u -c "import 
setuptools, 
tokenize;__file__='/tmp/pip-build-XqTJv_/cryptography/setup.py';f=getattr(tokenize, 
'open', open)(__file__);code=f.read().replace('\r\n', 
'\n');f.close();exec(compile(code, __file__, 'exec'))" install --record 
/tmp/pip-nFp6dT-record/install-record.txt 
--single-version-externally-managed --compile --install-headers 
/home/grenouille/openstack/include/site/python2.7/cryptography" failed 
with error code 1 in /tmp/pip-build-XqTJv_/cryptography/


The above errors were resolved by executing: sudo apt-get install
build-essential libssl-dev libffi-dev python-dev, and re-running
pip install -r requirements.txt
I ran sudo apt install tox and executed tox in the keystone directory
As tox was installing dependencies the first line read:

ERROR: invocation failed (exit code 1), logfile: 
/home/grenouille/openstack/keystone/.tox/docs/log/docs-1.log

ERROR: actionid: docs

The final error message read:

ERROR: could not install deps 
[-r/home/grenouille/openstack/keystone/test-requirements.txt, 
.[ldap,memcache,mongodb]]; v = 
InvocationError('/home/grenouille/openstack/keystone/.tox/docs/bin/pip 
install 
-chttps://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt 
-r/home/grenouille/openstack/keystone/test-requirements.txt 
.[ldap,memcache,mongodb] (see 
/home/grenouille/openstack/keystone/.tox/docs/log/docs-1.log)', 1)



Regards,
Daniel.






Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-23 Thread Chris Dent

On Wed, 18 Jan 2017, Chris Dent wrote:


The review starts with the original text. The hope is that
commentary here in this thread and on the review will eventually
lead to the best document.


https://review.openstack.org/#/c/421846

There's been a bit of commentary on the review which I'll try to
summarize below. I hope people will join in. There have been plenty
of people talking about this but unless you provide your input
either here or on the review it will be lost.

Most of the people who have commented on the review are generally in
favor of what's there with a few nits on details:

* Header changes should be noted as breaking compatibility/stability
* Changing an error code should be signalled as a breaking change
* The concept of extensions should be removed in favor of "version
  boundaries"
* The examples section needs to be modernized (notably getting rid
  of XML)

There's some concern that "security fixes" (as a justification for a
breaking change) is too broad and could be used too easily.

These all seem to be good practical comments that can be integrated
into a future version but they are, as a whole, based upon a model
of stability based around versioning and "signalling" largely in the
form of microversions. This is not necessarily bad, but it doesn't
address the need to come to mutual terms about what stability,
compatibility and interoperability really mean for both users and
developers. I hope we can figure that out.

If my read of what people have said in the past is correct at least
one definition of HTTP API stability/compatibility is:

   Any extant client code that works should continue working.

If that's correct then a stability guideline needs to serve two
purposes:

* Enumerate the rare circumstances in which that rule may be broken
  (catastrophic security/data integrity problems?).
* Describe how to manage inevitable change (e.g., microversion,
  macroversions, versioned media types) and what "version
  boundaries" are.

And if that's correct then what we are really talking about is
reaching consensus on how (or if) to manage versions. And that's
where the real contention lies. Do we want to commit to
microversions across the board? If we assert that versioning is
something we need across the board then certainly we don't want to
be using different techniques from service to service do we?

If you don't think those things above are correct or miss some
nuance, I hope you will speak up.

Here's some internally-conflicting, hippy-dippy, personal opinion
from me, just for the sake of grist for the mill because nobody else
is yet coughing up:

I'm not sure I fully accept the original assertion. If extant client
code is poor, perhaps because it allows the client to make an
unhealthy demand upon a service, maybe it shouldn't be allowed? If
way A to do something exists, but way B comes along that is better
are we doing a disservice to people's self-improvement by letting A
continue? Breaking stuff can sometimes increase community
engagement, whether that community is OpenStack at large or the
community of users in any given deployment.

Many projects that do not currently have microversions (or other
system) need to manage change in some fashion. It seems backwards to
me that they must subscribe to eternal backwards compatibility when
they don't yet have a mechanism for managing forward motion. I
suppose the benefit of the tag being proposed is that it allows a
project to say "actually, for now, we're not worrying about that;
we'll let you know when we do". In which case they would then have
license to do what they like (and presumably adapt tempest as they
like).

Microversions are an interesting system. They allow for eternal
backwards compatibility by defaulting to being in the past unless
you actively choose a particular point in time or choose to be
always in the present with "latest". When I first started thinking
about this stability concept in the context of OpenStack I felt that
microversions were anti-stability because not only do they help
developers manage change, they give them license to change whenever
they are willing to create a new microversion. That seems contrary
to what I originally perceived as a desire to minimize change.

Further, microversions are a feature that is (as far as I know?)
implemented in a way unique to OpenStack. In other universes some
strategies for versioning are:

* don't ever change
* change aligned with semver of the "product"
* use macroversions in the URL or service definitions
* use versioned media-types (e.g.,
  'application/vnd.os.compute.servers+json; version=1.2') and
  content-negotiation (and keep urls always the same)
* hypermedia

I would guess we have enough commitment to microversions in
production that using something else would be nutbar, but it is
probably worth comparing with some of those systems so that we can
at least clearly state the benefits when making everyone settle in
the same place.
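[Editor's sketch] The negotiation model that microversions imply (a client pins a version, asks for "latest", or omits the header and silently gets the oldest behavior) can be illustrated in a few lines. This is a hypothetical illustration of the pattern, not any OpenStack project's actual implementation:

```python
# Hypothetical sketch of microversion negotiation. Versions are compared as
# integer tuples, never as floats, so 2.40 stays greater than 2.5.

def negotiate(requested, server_min, server_max):
    """Return the microversion the server will speak, or None if unsupported."""
    def parse(v):
        major, minor = v.split(".")
        return (int(major), int(minor))

    if requested is None:
        return server_min          # "defaulting to being in the past"
    if requested == "latest":
        return server_max          # "always in the present"
    v = parse(requested)
    if parse(server_min) <= v <= parse(server_max):
        return requested
    return None                    # a real service would answer 406 here

print(negotiate(None, "2.1", "2.40"))      # -> 2.1
print(negotiate("latest", "2.1", "2.40"))  # -> 2.40
print(negotiate("3.0", "2.1", "2.40"))     # -> None
```

The range bounds and return conventions here are assumptions chosen for the sketch; each service defines its own minimum, maximum and error behavior.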

--
Chris Dent

Re: [openstack-dev] [tripleo] tripleoclient release : January 26th

2017-01-23 Thread Emilien Macchi
Reminder: we'll release tripleoclient this week.

Please let us know any blocker!
Thanks,

On Mon, Jan 16, 2017 at 9:32 AM, Emilien Macchi  wrote:
> One day I'll read calendars correctly :-)
> Client releases are next week, so we'll release tripleoclient by January 26th.
>
> Sorry for confusion.
>
> On Sun, Jan 15, 2017 at 6:41 PM, Emilien Macchi  wrote:
>> https://releases.openstack.org/ocata/schedule.html
>>
>> It's time to release python-tripleoclient this week.
>> We still have 15 bugs in progress targeted for ocata-3.
>> https://goo.gl/R2hO4Z
>>
>> Please triage them to pike-1 unless they are critical or high, so we
>> need to fix them afterward and backport it to stable/ocata.
>>
>> We'll release the client by Thursday 19th end of day.
>> Please let us know any blocker,
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Giulio Fidente
On 01/23/2017 11:07 AM, Saravanan KR wrote:
> Thanks John for the info.
> 
> I am going through the spec in detail. And before that, I had few
> thoughts about how I wanted to approach this, which I have drafted in
> https://etherpad.openstack.org/p/tripleo-derive-params. And it is not
> 100% ready yet, I was still working on it.

I've linked this etherpad for the session we'll have at the PTG

> As of now, there are few differences on top of my mind, which I want
> to highlight, I am still going through the specs in detail:
> * Profiles vs Features - Considering a overcloud node as a profiles
> rather than a node which can host these features, would have
> limitations to it. For example, if i need a Compute node to host both
> Ceph (OSD) and DPDK, then the node will have multiple profiles or we
> have to create a profile like -
> hci_enterprise_many_small_vms_with_dpdk? The first one is not
> appropriate and the later is not scaleable, may be something else in
> your mind?
> * Independent - The initial plan of this was to be independent
> execution, also can be added to deploy if needed.
> * Not to expose/duplicate parameters which are straight forward, for
> example tuned-profile name should be associated with feature
> internally, Workflows will decide it.

for all of the above, I think we need to decide if we want the
optimizations to be profile-based and gathered *before* the overcloud
deployment is started or if we want to set these values during the
overcloud deployment basing on the data we have at runtime

seems like both approaches have pros and cons and this would be a good
conversation to have with more people at the PTG

> * And another thing, which I couldn't get is, where will the workflow
> actions be defined, in THT or tripleo_common?

to me it sounds like executing the workflows before stack creation is
started would be fine, at least for the initial phase

running workflows from Heat depends on the other blueprint/session we'll
have about the WorkflowExecution resource and once that will be
available, we could trigger the workflow execution from tht if beneficial

> The requirements which I thought of, for deriving workflow are:
> Parameter Deriving workflow should be
> * independent to run the workflow
> * take basic parameters inputs, for easy deployment, keep very minimal
> set of mandatory parameters, and rest as optional parameters
> * read introspection data from Ironic DB and Swift-stored blob
> 
> I will add these comments as starting point on the spec. We will work
> towards bringing down the differences, so that operators headache is
> reduced to a greater extent.

thanks

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [horizon] feature freeze exception request -- nova simple tenant usages api pagination

2017-01-23 Thread Radomir Dopieralski
Yes, to do it differently we need to add the microversion support patch
that you are working on, and make use of it, or write a patch that has
equivalent functionality.
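[Editor's sketch] The VersionManager bug discussed in this thread is a generic pitfall: representing "2.40" as a floating point number collapses it to 2.4. A minimal illustration of the failure and an integer-tuple fix (hypothetical code, not Horizon's actual VersionManager):

```python
# Why float-based API version handling breaks: "2.40" as a float is 2.4,
# so microversion 2.40 compares equal to 2.4 and lower than 2.5.
assert float("2.40") == 2.4

def parse_version(v):
    """Parse 'X.Y' into an integer tuple so 2.40 != 2.4 (illustrative fix)."""
    major, minor = v.split(".")
    return (int(major), int(minor))

assert parse_version("2.40") != parse_version("2.4")
assert parse_version("2.40") > parse_version("2.5")   # 40 > 5 as integers
```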

On Fri, Jan 20, 2017 at 6:57 PM, Rob Cresswell  wrote:

> Just a thought: With the way we currently do microversions, wouldn't this
> request 2.40 for every request ? There's a pretty good chance that would
> break things.
>
> Rob
>
> On 20 January 2017 at 00:02, Richard Jones  wrote:
>
>> FFE granted for the three patches. We need to support that nova API
>> change.
>>
>> On 20 January 2017 at 01:28, Radomir Dopieralski 
>> wrote:
>> > I would like to request a feature freeze exception for the following
>> patch:
>> >
>> > https://review.openstack.org/#/c/410337
>> >
>> > This patch adds support for retrieving the simple tenant usages from
>> Nova in
>> > chunks, and it is necessary for correct data given that related patches
>> have
>> > been already merged in Nova. Without
>> > it, the data received will be truncated.
>> >
>> > In order to actually use that patch, however, it is necessary to set the
>> > Nova API version to at least
>> > version 3.40. For this, it's necessary to also add this patch:
>> >
>> > https://review.openstack.org/422642
>> >
>> > However, that patch will not work, because of a bug in the
>> VersionManager,
>> > which for some reason
>> > uses floating point numbers for specifying versions, and thus
>> understands
>> > 2.40 as 2.4. To fix that, it
>> > is also necessary to merge this patch:
>> >
>> > https://review.openstack.org/#/c/410688
>> >
>> > I would like to request an exception for all those three patches.
>> >
>> > An alternative to this would be to finish and merge the microversion
>> > support, and modify the first patch to make use of it. Then we would
>> need
>> > exceptions for those two patches.
>> >
>> >
>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [cloudkitty] [ptl] Candidacy for cloudkitty PTL

2017-01-23 Thread Christophe Sauthier

Hello everyone,

I would like to announce my candidacy for PTL of Cloudkitty.

During the Ocata cycle we have been able to open up our community with 
the integration of some new contributors and new cores from different 
companies (which was key from my point of view).
We have also been able to add many improvements, mainly to ease the 
usage and configuration of cloudkitty.


During the Pike cycle the focus I am looking for is to extend the 
spectrum of cloudkitty integration (being able to fetch more metrics for 
more services), to continue to help developers who want to 
participate in cloudkitty (which is already ongoing work) and to 
continue to extend the collaboration with other OpenStack projects.
Finally, I have also decided to continue working to support wider 
ecosystem adoption of Cloudkitty as the best solution for chargeback and 
rating.


I would also like to take this opportunity to thank all members of the 
OpenStack community who helped our team during the last cycles.


Thank you,

Christophe Sauthier



Christophe Sauthier   Mail : 
christophe.sauth...@objectif-libre.com

CEO   Mob : +33 (0) 6 16 98 63 96
Objectif LibreURL : www.objectif-libre.com
Au service de votre Cloud Twitter : @objectiflibre

Follow OpenStack news in French by subscribing to La Pause 
OpenStack

http://olib.re/pause-openstack



Re: [openstack-dev] [Performance][Shaker]

2017-01-23 Thread Ilya Shakhat
Hi Sai,

In UDP testing PPS represents packets sent by iperf client to server. Loss
is the percentage of packets that were not received by server (more
specifically, the server tracks packets and sums the gaps between them,
https://github.com/esnet/iperf/blob/3.0.7/src/iperf_udp.c#L64).

While reported PPS depends on bandwidth and concurrency it makes sense to
increase them until loss starts going up, meaning that the communication
channel is near the limit.

Thanks,
Ilya
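[Editor's sketch] The gap-summing that iperf3 does server-side (per the iperf_udp.c link above) can be approximated as follows. This is a rough sketch of the idea, not iperf's actual code; real iperf also has to cope with reordering and duplicates:

```python
# The server watches UDP sequence numbers and counts a gap whenever a packet
# arrives with a number higher than expected; loss % is gaps / packets sent.
# PPS therefore reflects client transmissions, and loss is relative to them.

def udp_stats(received_seq_numbers):
    expected = 1
    lost = 0
    for seq in received_seq_numbers:
        if seq > expected:
            lost += seq - expected   # the gap: packets that never arrived
        expected = seq + 1
    total_sent = expected - 1
    loss_pct = 100.0 * lost / total_sent
    return total_sent, lost, loss_pct

# client sent 10 packets; numbers 4, 7 and 8 were dropped in transit
sent, lost, pct = udp_stats([1, 2, 3, 5, 6, 9, 10])
print(sent, lost, pct)  # -> 10 3 30.0
```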

2017-01-21 1:19 GMT+04:00 Sai Sindhur Malleni :

> Hey,
>
> When using the "iperf3" class in shaker for looking at UDP small packet
> performance, we see that as we scale up the concurrency the average PPS
> goes up and also the loss % increases. Is the loss % a percentage of the
> PPS or does the PPS only represent successful transmissions? Thanks!
>
> --
> Sai Sindhur Malleni
> Software Engineer
> Red Hat Inc.
> 100 East Davie Street
> Raleigh, NC, USA
> Work: (919) 754-4557 | Cell: (919) 985-1055
>
>
>


Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-23 Thread lương hữu tuấn
Hi Renat,

For more details, I will go check on the CBAM machine and hope the data
has not been deleted yet, since we ran the test around a week ago.
Another thing: Jinja2 ran 2-3 times faster than YAQL on the same test.
I will also provide more information later.

Br,

Tuan

On Mon, Jan 23, 2017 at 8:32 AM, Renat Akhmerov 
wrote:

> Tuan,
>
> I don’t think that Jinja is something that Kirill is responsible for. It’s
> just a coincidence that we in Mistral support both YAQL and Jinja. The
> latter has been requested by many people so we finally did it.
>
> As far as performance, could you please provide some numbers? When you say
> “takes a lot of time” how much time is it? For what kind of input? Why do
> you think it is slow? What are your expectations?Provide as much info as
> possible. After that we can ask YAQL authors to comment and help if we
> realize that the problem really exists.
>
> I’m interested in this too since I’m always looking for ways to speed
> Mistral up.
>
> Thanks
>
> Renat Akhmerov
> @Nokia
>
> On 18 Jan 2017, at 16:25, lương hữu tuấn  wrote:
>
> Hi Kirill,
>
> Do you have any information related to the performance of Jinja and Yaql
> validation? With big inputs, yaql runs quite slowly in our case,
> therefore we plan to switch to jinja.
>
> Br,
>
> @Nokia/Tuan
>
> On Tue, Jan 17, 2017 at 3:02 PM, lương hữu tuấn 
> wrote:
>
>> Hi Kirill,
>>
>> Thank you for you information. I hope we will have more information about
>> it. Just keep in touch when you guys in Mirantis have some performance
>> results about Yaql.
>>
>> Br,
>>
>> @Nokia/Tuan
>>
>> On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev 
>> wrote:
>>
>>> I think fuel team encountered similar problems, I’d advice asking them
>>> around. Also Stan (author of yaql) might shed some light on the problem =)
>>>
>>> --
>>> Kirill Zaitsev
>>> Murano Project Tech Lead
>>> Software Engineer at
>>> Mirantis, Inc
>>>
>>> On 17 January 2017 at 15:11:52, lương hữu tuấn (tuantulu...@gmail.com)
>>> wrote:
>>>
>>> Hi,
>>>
>>> We are now using yaql in mistral and what we see is that the process of
>>> validating the yaql expression of the input takes a lot of time,
>>> especially with big inputs. Do you guys have any information about the
>>> performance of yaql?
>>>
>>> Br,
>>>
>>> @Nokia/Tuan
>>>
>>>
>>
>
>
>
>
>


Re: [openstack-dev] [kolla-ansible] [kolla] Am I doing this wrong?

2017-01-23 Thread Paul Bourke

Hi Kris,

Thanks for the feedback, I think everyone involved in kolla-ansible 
should take the time to read through it, as it definitely highlights 
some areas that we need to improve.


There are a lot of questions here, so I haven't gone into too much detail 
on any specific one; my hope is that I can clear up the majority of it 
and then we can follow up on some of the topics that require more 
discussion.


Hope it helps,
-Paul

>  * I need to define a number of servers in my inventory outside of
> the specific servers that I want to perform actions against.  I need
> to define groups baremetal, rabbitmq, memcached, and control (IN
> addition to the glance specific groups) most of these seem to be
> gathering information for config? (Baremetal was needed solely to try
> to run the bootstrap play)

You only need to define the top level groups, i.e. control, network, 
storage, monitoring, etc. If you don't want or have dedicated nodes for 
each of these groups it's fine to put the same node into multiple 
groups. So for example, if you're not interested in monitoring right 
now, you can just put your control node(s) under this and forget about 
it. The groups marked with [*:children] (e.g. bootstrap) are "groups of 
groups" and you shouldn't need to modify these at all.
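[Editor's sketch] As a concrete illustration of "same node in multiple top-level groups" with the [*:children] groups left untouched, a minimal inventory might look like the fragment below. The group names follow kolla-ansible's sample inventories, but the hostnames and the exact set of children groups shown are illustrative assumptions:

```ini
# Minimal kolla-ansible-style inventory sketch. control01 doubles as the
# control, network and monitoring node; only top-level groups are edited.
[control]
control01

[network]
control01

[monitoring]
control01

[storage]
storage01

[compute]
compute01

# shipped "group of groups" -- left exactly as distributed
[haproxy:children]
network
```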


> Running a change specifically against
> "glance" causes fact gathering on a number of other servers not
> specifically where glance is running?  My concern here is that I
> want to be able to run kolla-ansible against a specific service and
> know that only those servers are being logged into.

The fact gathering on every server is a compromise taken by Kolla to 
work around limitations in Ansible. It works well for the majority of 
situations; for more detail and potential improvements on this please 
have a read of this post: 
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107833.html
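[Editor's sketch] One generic mitigation discussed in that linked thread space is Ansible's own fact caching, which makes the all-hosts gather cheap on repeat runs. A generic ansible.cfg fragment (illustrative values, not kolla-ansible's shipped configuration):

```ini
# Cached facts let repeated runs skip logging into every host to gather.
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts
fact_caching_timeout = 86400
```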


> * I want to run a dry-run only, being able to see what will happen
> before it happens, not during; during makes it really hard to see
> what will happen until it happens. Also supporting  `ansible --diff`
> would really help in understanding what will be changed (before it
> happens).

Agree a dry run would be useful, I believe it came up during the 
Barcelona design summit but has not yet been looked at. The ansible 
--diff sounds like something we could easily do, if you could log a 
blueprint at blueprints.launchpad.net/kolla-ansible I think that would help.


> * Database tasks are run on every deploy, and the "change DB
> permissions" step always reports as changed? Even when nothing happens,
> which makes you wonder "what changed"?

This shouldn't be the case; I just double-checked, taking Glance as an 
example, and it reports "ok" (no change) for all runs after the initial 
deploy. Perhaps you've come across a bug; if you think this is the case, 
please log one.


> Also, can someone tell me why the DB stuff is done on a
> deployment task?  Seems like the DB checks/migration work should
> only be done on an upgrade or a bootstrap?

Deploy includes bootstrap, but bootstrap is only done if the database is 
not found (or on upgrade). Again, it sounds like you're coming across 
some unusual behavior here; I suggest checking in with us on 
#openstack-kolla or filing a bug.


> * Database services (that at least we have) are not managed by our
> team, so we don't want kolla-ansible touching those (since it won't be
> able to). No way to mark the DB as "externally managed"? I.e. we don't
> have permissions to create databases or add users, but we do have all
> other permissions on the databases that are created, so normal
> db-manage tooling works.

This is definitely something we need - I'm pretty sure I saw something 
around this in the review queue very recently. I can't find it off hand 
so hopefully someone can chip in here on the status of this work.


> * Maintenance-level operations; there doesn't seem to be anything built
> in to say 'take a server out of a production state, deploy to it, test
> it, put it back into production'.  Seems like if kolla-ansible is
> doing haproxy for the APIs, it should be managing this?  Or an
> extension point to allow us to run our own maintenance/testing
> scripts?


Again: discussed, and it needs to happen, but it's not there yet.

> * Config must come from kolla-ansible and generated templates.  I
> know we have a patch up for externally managed service
> configuration.  But if we aren't supposed to use kolla-ansible for
> generating configs (see below), why can't we override this piece?

I'm not quite following you here; the config templates from 
kolla-ansible are one of its stronger pieces, IMO - they're reasonably 
well tested and maintained. What leads you to believe they shouldn't be 
used?


> * Certain parts of it are 'reference only' (the config tasks),
> and are not recommended

Thi

[openstack-dev] [all] PTG deadline reminders

2017-01-23 Thread Thierry Carrez
Hi everyone,

The PTG is less than a month away ! We have two deadlines coming up this
week.

First, if you haven't registered yet but intend to come, you should
probably book now. There are fewer than 65 tickets left as of this
morning, so it is very likely to sell out soon. Prices will also
increase in two days, at the end of the day on Wednesday, January 25th
(from $100 to $150). So book now if you want to secure your attendance!

https://pikeptg.eventbrite.com/

Second, our room block at the hotel where the event happens closes
this Friday, January 27th. Booking there ensures you can maximize your
time with other attendees and make the most of the event. It also
helps support the financial model behind the event. Book before
Friday using this link:

https://www.starwoodmeeting.com/events/start.action?id=1609140999&key=381BF4AA

Thanks!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [charms][ptl] PTL candidacy

2017-01-23 Thread James Page
Hi All

I would like to announce my candidacy for PTL of the OpenStack Charms
project.

Over the Ocata cycle, we've been incubating the community of developers
around the Charms, with new charms for Murano, Trove, Mistral and
CloudKitty all due to be included in the release in February.

We've also started to engage successfully with the vendor ecosystem around
OpenStack, with PLUMgrid, Calico and 6wind all working towards alignment and
inclusion in the OpenStack Charms release.

This is all helping to diversify the development community around the
Charms.

I'll continue to work to support the wider ecosystem adoption of the Charms
as a great way to deploy and manage OpenStack.

We've made some inroads into improving the developer experience for charm
authors, but I feel there is still progress to be made so I will continue to
focus on this aspect of the project during the Pike cycle.

I look forward to working with the team and steering the project for another
cycle!

Cheers

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Saravanan KR
Thanks John for the info.

I am going through the spec in detail. Before that, I had a few
thoughts about how I wanted to approach this, which I have drafted in
https://etherpad.openstack.org/p/tripleo-derive-params. It is not
100% ready yet; I am still working on it.

As of now, there are a few differences at the top of my mind which I
want to highlight (I am still going through the spec in detail):
* Profiles vs features - Considering an overcloud node as a profile,
rather than as a node which can host these features, would have
limitations. For example, if I need a Compute node to host both
Ceph (OSD) and DPDK, then the node will have multiple profiles, or we
have to create a profile like
hci_enterprise_many_small_vms_with_dpdk? The first is not
appropriate and the latter is not scalable; maybe you have something
else in mind?
* Independent - The initial plan was for this to be an independent
execution, which can also be added to deploy if needed.
* Not exposing/duplicating parameters which are straightforward; for
example, the tuned profile name should be associated with the feature
internally - the workflow will decide it.
* And another thing which I couldn't work out: where will the workflow
actions be defined, in THT or tripleo-common?


The requirements which I thought of are that the parameter-deriving
workflow should:
* be runnable independently
* take basic parameter inputs - for easy deployment, keep a very minimal
set of mandatory parameters, with the rest optional
* read introspection data from the Ironic DB and the Swift-stored blob
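As a rough sketch of what such a standalone workflow could look like (every workflow and action name below is hypothetical, invented for illustration - they are not the actual tripleo-common actions):

```yaml
---
version: '2.0'

name: tripleo.derive_params

workflows:
  derive_dpdk_params:
    description: Standalone derivation; could also be chained into deploy.
    input:
      - role_name              # mandatory inputs kept to a minimum
      - num_pmd_cores: 2       # everything else optional, with defaults
    tasks:
      read_introspection_data:
        # hypothetical action; the real one would read the Ironic DB
        # and the Swift-stored introspection blob
        action: tripleo.derive_params.get_introspection_data
        input:
          role: <% $.role_name %>
        on-success: derive_parameters
      derive_parameters:
        # hypothetical action deriving e.g. NeutronDpdkCoreList from
        # the NUMA topology, without exposing straightforward knobs
        action: tripleo.derive_params.compute
        input:
          pmd_cores: <% $.num_pmd_cores %>
```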

I will add these comments as a starting point on the spec. We will work
towards narrowing down the differences, so that operators' headaches are
reduced to a great extent.

Regards,
Saravanan KR

On Fri, Jan 20, 2017 at 9:56 PM, John Fulton  wrote:
> On 01/11/2017 11:34 PM, Saravanan KR wrote:
>>
>> Thanks John, I would really appreciate if you could tag me on the
>> reviews. I will do the same for mine too.
>
>
> Hi Saravanan,
>
> Following up on this, have a look at the OS::Mistral::WorkflowExecution
> Heat spec [1] to trigger Mistral workflows. I'm hoping to use it for
> deriving THT parameters for optimal resource isolation in HCI
> deployments as I mentioned below. I have a spec [2] which describes
> the derivation of the values, but this is provided as an example for
> the more general problem of capturing the rules used to derive the
> values so that deployers may easily apply them.
>
> Thanks,
>   John
>
> [1] OS::Mistral::WorkflowExecution https://review.openstack.org/#/c/267770/
> [2] TripleO Performance Profiles https://review.openstack.org/#/c/423304/
>
>> On Wed, Jan 11, 2017 at 8:03 PM, John Fulton  wrote:
>>>
>>> On 01/11/2017 12:56 AM, Saravanan KR wrote:


 Thanks Emilien and Giulio for your valuable feedback. I will start
 working towards finalizing the workbook and the actions required.
>>>
>>>
>>>
>>> Saravanan,
>>>
>>> If you can add me to the review for your workbook, I'd appreciate it. I'm
>>> trying to solve a similar problem, of computing THT params for HCI
>>> deployments in order to isolate resources between CephOSDs and
>>> NovaComputes,
>>> and I was also looking to use a Mistral workflow. I'll add you to the
>>> review
>>> of any related work, if you don't mind. Your proposal to get NUMA info
>>> into
>>> Ironic [1] helps me there too. Hope to see you at the PTG.
>>>
>>> Thanks,
>>>   John
>>>
>>> [1] https://review.openstack.org/396147
>>>
>>>
> would you be able to join the PTG to help us with the session on the
> overcloud settings optimization?


 I will come back on this, as I have not planned for it yet. If it
 works out, I will update the etherpad.

 Regards,
 Saravanan KR


 On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente 
 wrote:
>
>
> On 01/04/2017 09:13 AM, Saravanan KR wrote:
>>
>>
>>
>> Hello,
>>
>> The aim of this mail is to ease DPDK deployment with TripleO. I
>> would like to see if the approach of deriving THT parameters based on
>> introspection data, with high-level input, would be feasible.
>>
>> Let me give a brief on the complexity of certain parameters which are
>> related to DPDK. The following parameters should be configured for a
>> well-performing DPDK cluster:
>> * NeutronDpdkCoreList (puppet-vswitch)
>> * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
>> review)
>> * NovaVcpuPinset (puppet-nova)
>>
>> * NeutronDpdkSocketMemory (puppet-vswitch)
>> * NeutronDpdkMemoryChannels (puppet-vswitch)
>> * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
>> * Interface to bind DPDK driver (network config templates)
>>
>> The complexity of deciding some of these parameters is explained in
>> the blog [1], where the CPUs have to be chosen in accordance with the
>> NUMA node associated with the interface. We are working a sp
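To make the NUMA constraint concrete, here is a minimal sketch of picking PMD cores local to the DPDK NIC's NUMA node. The introspection data layout is an assumption for illustration, not the real Ironic inspector schema:

```python
# Sketch: derive a DPDK PMD core list from (assumed) NUMA topology data.
# The dict layout below is illustrative, not the Ironic inspector format.

def derive_dpdk_core_list(numa_topology, nic_numa_node, num_cores=2):
    """Pick PMD cores on the same NUMA node as the DPDK interface.

    numa_topology: mapping of NUMA node id -> list of CPU ids on that node.
    """
    cpus = numa_topology[nic_numa_node]
    # Leave CPU 0 for the host OS, then take cores local to the NIC's node.
    candidates = [c for c in cpus if c != 0]
    return candidates[:num_cores]

topology = {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}
print(derive_dpdk_core_list(topology, 1))  # cores local to NUMA node 1
```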

Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-23 Thread Flavio Percoco

On 12/01/17 08:11 -0500, Zane Bitter wrote:

On 11/01/17 10:01, Thomas Herve wrote:

On Wed, Jan 11, 2017 at 3:34 PM, Emilien Macchi  wrote:

On Wed, Jan 11, 2017 at 2:50 AM, Thomas Herve  wrote:

I think this is going where I thought it would: let's not do anything.
The image resource is there for v1 compatibility, but there is no
point in having a v2 resource, at least right now.


If we do nothing, we force our heat-template users to keep the Glance v1
API enabled in their cloud (+ running the Glance Registry service), which
at some point doesn't make sense, since the Glance team asked us to move
forward with the Glance v2 API.

I would really recommend moving forward and no longer ignoring the new API version.


Emilien was right: by defaulting to Glance v1, we still required it
for the image constraint, which is used everywhere, such as for servers
and volumes. We can easily switch to v2 for this; I'll do it right away.


For those following along at home, this merged: 
https://review.openstack.org/#/c/418987/


Patch to deprecate the resource type: 
https://review.openstack.org/#/c/419043/


Thanks for the work here, folks!
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-23 Thread Flavio Percoco

On 19/01/17 12:48 +0300, Mikhail Fedosin wrote:

Hi Matt!

This should be discussed, for sure, but there is a lot of potential. In
general, it depends on how far we are willing to go. In the minimum
approximation we can seamlessly replace Glance with Glare and operators
simply get additional features for versioning, validation (and conversion,
if necessary) of their uploaded images on the fly, as well as support for
storing files in different stores.

If we dig a little deeper, then Glare allows you to store multiple files in
a single artifact, so we can create a new type (ec2_image) and define three
blobs inside: ami, ari, aki, and upload all three as a single object. This
will get rid of a large amount of legacy code and simplify the architecture
of Nova. Plus, Glare will control the integrity of such an artifact.


Hey Mike,

Thanks for bringing this up. While I think there's potential in Glare, given it's
being created in a more mature age of OpenStack and based on mature
principles and standards, I believe you may be promoting Glare with the wrong
arguments.

As you mentioned in your first email on this thread, Glare has a set of
functionalities that are already useful to a set of existing projects. This is
great and I'd probably start from there.

* How much have these projects adopted Glare?
* Is Glare being deployed already?
* What projects are the main consumers of Glare?

Unfortunately, replacing Glance is not as simple as dropping Glare in because
it's not only being used by Nova. Glance is also a public API (at least v2) and
there are integrations that have been built by either cloud providers or cloud
consumers on top of the existing Glance API.

If Glare ships a Glance compatible API (as a way to make a drop-in replacement
possible), it'll have to support it and live with it for a long time. In
addition to this, Glare will have to keep up with the changes that may happen in
Glance's API during that time.


The next step could be full support for OVF and other formats that require
a large number of files. Here we can use artifact folders and put all the
files there.
"OpenStack Compute does not currently have support for OVF packages, so you
will need to extract the image file(s) from an OVF package if you wish to
use it with OpenStack."
http://docs.openstack.org/image-guide/introduction.html

Finally, I notice that there are a few nasty bugs in Glance (you know what
I mean), which make it extremely inconvenient for a number of deployments.


Not everyone is familiar with the issues of Glance's API. I believe I know what
you're referring to, but I'd recommend expanding here for the sake of discussion.

That being said, I'd also like to point out that the flaws of Glance's API could
be fixed so I'd rather focus the discussion on the benefits Glare would bring
rather than how Glance's API may or may not be terrible. That's the kind of
competition I'd prefer to see in this discussion.

Cheers,
Flavio


On Wed, Jan 18, 2017 at 8:26 PM, Matt Riedemann 
wrote:


On 1/18/2017 10:54 AM, Mikhail Fedosin wrote:


Hello!

In this letter I want to tell you the current status of the Glare project
and discuss its future development within the entire OpenStack community.

In the beginning I have to say a few words about myself - my name is
Mike and I am the PTL of Glare. Currently I work as a consultant at
Nokia, where we're developing the service as a universal catalog of
binary data. As I understand it right, Nokia has big plans for this
service, Moshe Elisha can tell you more about them.

And here I want to ask the community - how exactly might Glare be useful
in OpenStack? Glare was developed as a repository for all possible data
types, and it has many possible applications. For example, it's a
store of VM images for Nova. Currently Glance is used for this, but
Glare has many more features, and this transition is easy to implement.
Then, it's a store for TOSCA templates. We were discussing integration
with Heat and storing templates and environments in Glare; it may also
be interesting for the TripleO project. Mistral will store its workflows
in Glare; that has already been decided. I'm not sure if the Murano
project is still alive, but they already use Glare 0.1 from the Glance
repo, and it will be removed soon (in Pike, AFAIK), so they have no
option other than to start using Glare v1. Finally, there were rumors
about storing torrent files from Ironic.

Now let me briefly describe Glare's features:

 * Versioning of artifacts - each artifact has a version in SemVer
format, and you can sort and filter by this field.
 * Multiblob support - there can be several files and folders per
artifact.
 * Ease of creating new artifact types with the oslo_versionedobjects
framework.
 * Fair immutability - no one can change an artifact once it's active.
 * Multistore support - each artifact type's data may be stored in
different stores: images may go to Swift; Heat templates may be stored
directly in an SQL database; for Docker Containers 
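The SemVer sorting mentioned in the feature list above can be illustrated with a small sketch (plain Python; no claim is made about Glare's actual implementation):

```python
# Sort version strings numerically per component, so "1.10.0" > "1.2.0"
# (a plain string sort would get this wrong).
def semver_key(version):
    major, minor, patch = version.split(".")[:3]
    return (int(major), int(minor), int(patch))

versions = ["1.10.0", "1.2.0", "0.9.1", "1.2.10"]
print(sorted(versions, key=semver_key))  # ['0.9.1', '1.2.0', '1.2.10', '1.10.0']
```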

Re: [openstack-dev] [ironic-pyhon-agent][DIB] IPA failed to start on Ubuntu because of modprobe path

2017-01-23 Thread Bruno Cornec

Ian Wienand said on Mon, Jan 23, 2017 at 02:26:36PM +1100:

This should be just "/sbin"; I've proposed [1].  There may be even
better ways to do this which anyone is welcome to propose :)


+1

This kind of thing is the result of distributions no longer following the FHS closely (and not working jointly on it to keep a strong standard that avoids dispersion) :-( Some are using /usr/sbin (saying publicly that /sbin and /bin should be dead) while others have a different opinion. 


I'm personally (because I'm an old-timer Unix/Linux guy) in favour of keeping 
/sbin and /bin. But that should be an agreement for every distro, and they 
should commit to maintaining compatibility with this even if they use /usr/sbin 
and /usr/bin instead.

Bruno.
--
HPE EMEA EG FLOSS Technology Strategist http://www.hpe.com/engage/opensource
Open Source Profession, WW Linux Community Lead http://github.com/bcornec
FLOSS projects: http://mondorescue.org http://project-builder.org

Musique ancienne? http://www.musique-ancienne.org http://www.medieval.org



Re: [openstack-dev] [puppet] Nominating zhongshengping for core of the Puppet OpenStack modules

2017-01-23 Thread Ivan Berezovskiy
+1

2017-01-21 3:07 GMT+04:00 Emilien Macchi :

> plus one
>
> On Fri, Jan 20, 2017 at 12:19 PM, Alex Schultz 
> wrote:
> > Hey Puppet Cores,
> >
> > I would like to nominate Zhong Shengping as a Core reviewer for the
> > Puppet OpenStack modules.  He has been an excellent contributor to our
> > modules over the last several cycles. His stats for the last 90 days
> > can be viewed here[0].
> >
> > Please respond with your +1 or any objections. If there are no
> > objections by Jan 27 I will add him to the core list.
> >
> > Thanks,
> > -Alex
> >
> > [0] http://stackalytics.com/report/contribution/puppet%
> 20openstack-group/90
> >
>
>
>
> --
> Emilien Macchi
>
>



-- 
Thanks, Ivan Berezovskiy
Senior Deployment Engineer
at Mirantis 

slack: iberezovskiy
skype: bouhforever
phone: + 7-960-343-42-46