Re: [openstack-dev] [Congress] Guide for all reactive policy options? (execute[...])

2016-04-11 Thread Masahito MUROI

Hi Bryan,

You can see the neutron driver's actions with the 'openstack congress datasource 
actions show' command.  It shows all execution methods supported by 
neutronclient.


btw, the prefix of a reactive policy rule is the datasource *name*. If you 
initialize OpenStack with devstack, the datasource name for neutron is 
not neutron but neutronv2.
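
A rough sketch of what that looks like (the policy name "classification" and 
the exact action name "delete_subnet" are assumptions on my side; check the 
action name against the command output before using it):

  # list the execution methods exposed by the neutron datasource driver
  $ openstack congress datasource actions show neutronv2

  # a reactive rule then uses that datasource name as the prefix
  $ openstack congress policy rule create classification \
      "execute[neutronv2:delete_subnet(x)] :- reserved_subnet_error(x)"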


best regards,
Masahito

On 2016/04/12 14:27, Bryan Sullivan wrote:

Hi Congress team,

I'm trying to develop tests for the reactive policy features of
Congress. I have one such test working, shown at
https://git.opnfv.org/cgit/copper/tree/tests/adhoc/dmz01.sh, which
applies the following rule for pausing a server when there has been an
error in server placement (in a hypothetical "dmz" network environment):
"execute[nova:servers.pause(id)] :-
  dmz_placement_error(id),
  nova:servers(id,status='ACTIVE')"

I'm also trying to develop a similar test for deletion of a subnet that
has been defined in a reserved subnet space. But I can't figure out how
to specify the action. I'm currently trying things like:

"execute[neutron:delete_subnet(x)] :- reserved_subnet_error(x)"
or
|||"execute[neutron:subnet.delete(x)] :- reserved_subnet_error(x)"

|Where "reserved_subnet_error" is a table created by matching an
allocated subnet against a list of reserved subnets (e.g. for admin
purposes, and not intended to be made available to VMs).

To help me develop such tests, it would be good to know a complete list
of the "execute" actions supported in Liberty (for all services). But I
only see a reference to the nova example above in the docs. I've looked
thru the code for neutron actions but can't find anywhere that a
complete set of supported actions is described, or the syntax for
invoking them in an execute rule.

Any pointers to where I should look (even in the code) are much appreciated.

Thanks,
Bryan Sullivan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [nova] VM HA support in trunk

2016-04-11 Thread Affan Syed
Hi Kazu,

thanks for this update. Sorry I am a bit late in replying to this thread,
but one of my students just ran into an issue running pacemaker-based
evacuation of hosts. It seems that pacemaker 1.1.10 is not supposed to work
with remote, and the 14.04 distro comes with that version.

Did you get remote to work, and if so, how? The pull request [1] indicates that
remote support was added, but it's unclear how the above version difference
was handled. Did you resort to compiling the latest pacemaker from source
or something else?


Affan



[1] https://github.com/ntt-sic/masakari/pull/11

On Fri, 19 Feb 2016 at 09:19 Toshikazu Ichikawa <
ichikawa.toshik...@lab.ntt.co.jp> wrote:

> Hi Affan,
>
>
>
> Pacemaker works fine on either a canonical distribution or RDO.
>
> I use our tool [1] using Pacemaker on Ubuntu without any specific issue.
>
>
>
> [1] https://github.com/ntt-sic/masakari
>
>
>
> Thanks,
>
> Kazu
>
>
>
> *From:* Affan Syed [mailto:affan.syed@gmail.com]
> *Sent:* Tuesday, February 16, 2016 2:02 PM
> *To:* Matt Fischer ; Toshikazu Ichikawa <
> ichikawa.toshik...@lab.ntt.co.jp>
> *Cc:* openstack-operators@lists.openstack.org
> *Subject:* Re: [Openstack-operators] [nova] VM HA support in trunk
>
>
>
> Hi Kazu and Matt,
>
> Thanks for the pointers. I think the discussion around pacemaker and
> pacemaker remote seems most promising, esp with Russel's blog post I found
> after I emailed earlier [1].
>
>
>
> Not sure how tooling would be different, but pacemaker, given its use in
> the controller cluster anyways, seems a more logical choice. Any issues you
> people think with a canonical distribution instead of RDO?
>
>
>
> Affan
>
>
>
>
>
> [1]
> http://blog.russellbryant.net/2015/03/10/the-different-facets-of-openstack-ha/
>
>
>
> On Mon, 15 Feb 2016 at 20:59 Matt Fischer  wrote:
>
> I believe that you either have your customers design their apps to handle
> failures or have tools that are reactive to failures.
>
>
>
> Unfortunately like many other private cloud operators we deal a lot with
> legacy applications that aren't scaled horizontally or fault tolerant and
> so we've built tooling to handle customer notifications (reactive). When we
> lose a compute host we generate a notice to customers and then work on
> evacuating their instances. For the evac portion nova host-evacuate or
> host-evacuate-live work fairly well, although we rarely get a functioning
> floating-IP after host-evacuate without other work.
>
>
>
> Getting adoption of heat or other automation tooling to educate customers
> is a long process, especially when they're used to VMware where I think
> they get the VM HA stuff for "free".
>
>
>
>
>
> On Mon, Feb 15, 2016 at 8:25 AM, Toshikazu Ichikawa <
> ichikawa.toshik...@lab.ntt.co.jp> wrote:
>
> Hi Affan,
>
>
>
>
>
> I don’t think any components in Liberty provide HA VM support directly.
>
>
>
> However, many works are published and open-sourced, here.
>
> https://etherpad.openstack.org/p/automatic-evacuation
>
> You may find ideas and solutions.
>
>
>
> And, the discussion on this topic is on-going at HA meeting.
>
> https://wiki.openstack.org/wiki/Meetings/HATeamMeeting
>
>
>
> thanks,
>
> Kazu
>
>
>
> *From:* Affan Syed [mailto:affan.syed@gmail.com]
> *Sent:* Monday, February 15, 2016 12:51 PM
> *To:* openstack-operators@lists.openstack.org
> *Subject:* [Openstack-operators] [nova] VM HA support in trunk
>
>
>
> reposting with the correct tag, hopefully. Would really appreciate some
> pointers.
>
> -- Forwarded message -
> From: Affan Syed 
> Date: Sat, 13 Feb 2016 at 15:13
> Subject: [nova] VM HA support in trunk
> To: 
>
>
>
> Hi all,
>
> I have been trying to understand if we currently have some VM HA support
> as part of Liberty?
>
>
>
> To be precise, how are hosts being down due to power failure handled,
> specifically in terms of migrating the VMs but possibly even their
> networking configs (tunnels etc.)?
>
>
>
> VM migration solutions like XEN-HA or KVM clustering seem to require 1+1 HA. I have
> read in a few places about ceilometer+heat templates to launch VMs for an N+1
> backup scenario, but these all seem like one-off setups.
>
>
>
>
>
> This issue seems to be very much important for legacy enterprises to move
> their "pets" --- not sure if we can simply wish away that mindset!
>
>
>
> Affan
>
>
>
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Congress] Issues with Tox testing

2016-04-11 Thread Anusha Ramineni
Hi Bryan,

Yes, tempest can be run outside devstack deployments. Please check the
README in https://github.com/openstack/tempest on configuring tempest.

As in Liberty, you need to copy the tests to tempest. I guess installing
tempest on a different server should also work as long as the congress service is
discoverable (never tried it, though). But just to let you know, the congress
Liberty version has minimal tempest coverage; in Mitaka we have enabled all
the tempest tests.
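
In case it helps, a rough sketch of that copy-and-run flow on a non-devstack
host (the paths and clone locations below are assumptions; the authoritative
steps are in the contrib/tempest README):

  $ git clone https://github.com/openstack/tempest /opt/stack/tempest
  $ cp -r congress/contrib/tempest/tempest/* /opt/stack/tempest/tempest/
  $ cd /opt/stack/tempest
  # configure etc/tempest.conf per the tempest README, then:
  $ testr init                           # only needed the first time
  $ testr list-tests | grep -i congress  # list the Congress test cases
  $ testr run congress                   # run only tests matching "congress"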

Best Regards,
Anusha

On 12 April 2016 at 10:43, Bryan Sullivan  wrote:

> Hi Anusha,
>
> That helps. Just one more question: in Liberty (which I'm currently based
> upon) have the tempest tests been run outside of devstack deployments, i.e.
> in an actual OpenStack deployment? The guide you reference mentions
> devstack but it's not clear that the same process applies outside devstack:
>
> e.g. "To list all Congress test cases, run command in /opt/stack/tempest:"
> references the "/opt/stack" folder which is not created outside of devstack
> environments. Thus to run them in a full OpenStack deployment, do I need to
> install  tempest and create an "opt/stack/tempest" folder to which the
> tests are copied, on the same server where Congress is installed?
>
> I'll try Mitaka soon but I expect to have the same question there:
> basically, are the tempest tests expected to be usable outside a devstack
> deploy?
>
> I guess I could just try it, but I don't want to waste time if this is not
> designed to be used outside devstack environments.
>
> Thanks,
> Bryan Sullivan
>
> --
> Date: Fri, 8 Apr 2016 09:01:29 +0530
> From: anusha.ii...@gmail.com
>
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Congress] Issues with Tox testing
>
> Hi Bryan,
>
> tox -epy27 doesn't run tempest tests , that is tests mentioned in
> https://github.com/openstack/congress/tree/stable/liberty/contrib/tempest
>  ,
> it runs only unit tests , tests present in
> https://github.com/openstack/congress/tree/stable/liberty/congress/tests .
>
> To run tempest tests, you need to manually copy the files to tempest and
> run the tests as mentioned in following readme
> https://github.com/openstack/congress/blob/stable/liberty/contrib/tempest/README.rst
>
> Mitaka supports tempest plugin, so manually copying tests to tempest can
> be avoided if you are using mitaka.
>
> Hope I clarified your question.
>
>
> Best Regards,
> Anusha
>
> On 8 April 2016 at 08:51, Bryan Sullivan  wrote:
>
> OK, somehow I did not pick up on that, or dropped it along the way of
> developing the script. Thanks for the clarification, also that Tempest is
> not required. I should have clarified that I'm using stable/liberty as the
> base. I will be moving to stable/mitaka soon, as part of the OPNFV Colorado
> release development.
>
> One additional question then - are the tests run by "tox -epy27" the same
> as the tests in the folder
> https://github.com/openstack/congress/tree/stable/liberty/contrib/tempest?
> If not, how are those tests supposed to be run for a non-devstack deploy (I
> see reference to devstack in the readme)?
>
> I see that the folders have been reorganized for mitaka. My question is
> per the goal to include as much of the Congress tests as possible in the
> OPNFV CI/CD process. Not that I expect any to fail, I just want OPNFV to
> leverage the full test suite. If for liberty that's best left as the tests
> run by the tox command, then that's OK.
>
> Thanks,
> Bryan Sullivan
>
> --
> Date: Thu, 7 Apr 2016 17:11:36 -0700
> From: ekcs.openst...@gmail.com
> To: openstack-dev@lists.openstack.org
>
> Subject: Re: [openstack-dev] [Congress] Issues with Tox testing
>
> Thanks for the feedback, Bryan. Glad you got things working!
>
> 1. The instructions asking to install those packages are missing from kilo
> (we’ll fix that), but they have been there since liberty. Was it perhaps
> unclear because the line is too long?
>
>- Additionally:
>
>$ sudo apt-get install git gcc python-dev libxml2 libxslt1-dev libzip-dev 
> mysql-server python-mysqldb build-essential libssl-dev libffi-dev
>
>
> 2. Tempest should not be required by the tox tests.
>
> Thanks!
>
> From: Bryan Sullivan 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, April 7, 2016 at 4:29 PM
> To: "openstack-dev@lists.openstack.org"  >
> Subject: Re: [openstack-dev] [Congress] Issues with Tox testing
>
> An update: I found that there were two dependencies needed that were not
> clear in the guide at https://github.com/openstack/congress. I also
> installed Tempest which was not referenced before. If these additions are
> correct (they worked for me), they should be added to
> 

[openstack-dev] [Congress] Guide for all reactive policy options? (execute[...])

2016-04-11 Thread Bryan Sullivan
Hi Congress team,

I'm trying to develop tests for the reactive policy features of Congress. I 
have one such test working, shown at 
https://git.opnfv.org/cgit/copper/tree/tests/adhoc/dmz01.sh, which applies the 
following rule for pausing a server when there has been an error in server 
placement (in a hypothetical "dmz" network environment): 

"execute[nova:servers.pause(id)] :- 
  dmz_placement_error(id),
  nova:servers(id,status='ACTIVE')" 

I'm also trying to develop a similar test for deletion of a subnet that has 
been defined in a reserved subnet space. But I can't figure out how to specify 
the action. I'm currently trying things like:

"execute[neutron:delete_subnet(x)] :- reserved_subnet_error(x)"
or 
"execute[neutron:subnet.delete(x)] :- reserved_subnet_error(x)"

Where "reserved_subnet_error" is a table created by matching an allocated 
subnet against a list of reserved subnets (e.g. for admin purposes, and not 
intended to be made available to VMs).

To help me develop such tests, it would be good to know a complete list of the 
"execute" actions supported in Liberty (for all services). But I only see a 
reference to the nova example above in the docs. I've looked thru the code for 
neutron actions but can't find anywhere that a complete set of supported 
actions is described, or the syntax for invoking them in an execute rule.

Any pointers to where I should look (even in the code) are much appreciated.

Thanks,
Bryan Sullivan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] security group rules

2016-04-11 Thread Sławek Kapłoński
Hello,

To be a little bit more precise, AFAIK it allows ingress from all instances 
(ports) which have the same security group.

-- 
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia poniedziałek, 11 kwietnia 2016 21:32:55 CEST Remo Mattei pisze:
> it says default not 0/0 which is not from anywhere.
> 
> So that applies only for the local network (default)
> 
> > On Apr 11, 2016, at 21:15, Jagga Soorma  wrote:
> > 
> > Hi Guys,
> > 
> > There is a default security group rule that has the following entry:
> > 
> > --
> > Direction: Ingress
> > Ether Type: IPv4
> > IP Protocol: Any
> > Port Range: Any
> > Remote Prefix: -
> > Remote Security Group: default
> > --
> > 
> > Now this makes me think that it should basically allow all ingress ipv4
> > traffic (udp & tcp) on any port.  However we have to manually open up ssh
> > for example by adding another rule for port 22 and remote prefix of
> > 0.0.0.0/0 .  Not sure what a - in the remote prefix
> > means and why is this rule even there if it does nothing.  Any help
> > understanding this would be appreciated.
> > 
> > Thanks.
> > 
> > ___ Mailing list:
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to
> > : openstack@lists.openstack.org
> > Unsubscribe :
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > 
> > 

signature.asc
Description: This is a digitally signed message part.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] security group rules

2016-04-11 Thread rezroo
In neutron a security group rule can have different types of "remote" - 
either a CIDR or another security group.


The rule means that your "remote" is another security group - so any VM 
in security group "default" can reach any port in this security group - 
so "default" has opened all its ports to members of "default".


Reza

On 4/11/2016 6:15 PM, Jagga Soorma wrote:

Hi Guys,

There is a default security group rule that has the following entry:

--
Direction: Ingress
Ether Type: IPv4
IP Protocol: Any
Port Range: Any
Remote Prefix: -
Remote Security Group: default
--

Now this makes me think that it should basically allow all ingress 
ipv4 traffic (udp & tcp) on any port.  However we have to manually 
open up ssh for example by adding another rule for port 22 and remote 
prefix of 0.0.0.0/0 . Not sure what a - in the 
remote prefix means and why is this rule even there if it does 
nothing.  Any help understanding this would be appreciated.


Thanks.



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] Does Neutron itself add VM ports to ovs?

2016-04-11 Thread Sławek Kapłoński
Hello,

I don't know ODL or how it works, but in the ovs-agent case nova-compute is the part 
which adds the port to the ovs bridge (see for example nova/virt/libvirt/vif.py).
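
Roughly speaking, the libvirt vif driver ends up doing something equivalent to
the following (a simplified sketch; device names, UUIDs and the MAC are
illustrative). The ovs-agent or the SDN controller then wires the port based on
the external-ids it finds on the interface:

  $ sudo ovs-vsctl -- --may-exist add-port br-int tap1234abcd-56 \
      -- set Interface tap1234abcd-56 \
           external-ids:iface-id=<neutron-port-uuid> \
           external-ids:iface-status=active \
           external-ids:attached-mac=fa:16:3e:00:00:01 \
           external-ids:vm-uuid=<instance-uuid>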

-- 
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia wtorek, 12 kwietnia 2016 12:31:01 CEST 张晨 pisze:
> Hello everyone,
> 
> 
> I have a question about Neutron. I learn that the ovs-agent receives the
> update-port rpc notification,and updates ovsdb data for VM port.
> 
> 
> But what is the situation when i use SDN controllers instead of OVS
> mechanism driver? I found no where in ODL to add the VM port to ovs.
> 
> 
> I asked the author of the related ODL plugin, but he told me that OpenStack
> adds the VM port to ovs.
> 
> 
> Then, where is the implementation in OpenStack to  add the VM port to ovs,
> when i'm using ODL replacing the OVSmechanism driver?
> 
> 
> Thanks

signature.asc
Description: This is a digitally signed message part.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Issues with Tox testing

2016-04-11 Thread Bryan Sullivan
Hi Anusha,

That helps. Just one more question: in Liberty (which I'm currently based upon) 
have the tempest tests been run outside of devstack deployments, i.e. in an 
actual OpenStack deployment? The guide you reference mentions devstack but it's 
not clear that the same process applies outside devstack:

e.g. "To list all Congress test cases, run command in /opt/stack/tempest:" 
references the "/opt/stack" folder which is not created outside of devstack 
environments. Thus to run them in a full OpenStack deployment, do I need to 
install  tempest and create an "opt/stack/tempest" folder to which the tests 
are copied, on the same server where Congress is installed?

I'll try Mitaka soon but I expect to have the same question there: basically, 
are the tempest tests expected to be usable outside a devstack deploy? 

I guess I could just try it, but I don't want to waste time if this is not 
designed to be used outside devstack environments.

Thanks,
Bryan Sullivan

Date: Fri, 8 Apr 2016 09:01:29 +0530
From: anusha.ii...@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Congress] Issues with Tox testing

Hi Bryan,
tox -epy27 doesn't run tempest tests, that is, the tests mentioned in 
https://github.com/openstack/congress/tree/stable/liberty/contrib/tempest , it 
runs only unit tests, the tests present in 
https://github.com/openstack/congress/tree/stable/liberty/congress/tests .

To run tempest tests, you need to manually copy the files to tempest and run 
the tests as mentioned in the following readme: 
https://github.com/openstack/congress/blob/stable/liberty/contrib/tempest/README.rst

Mitaka supports a tempest plugin, so manually copying tests to tempest can be 
avoided if you are using mitaka.

Hope I clarified your question.

Best Regards,
Anusha

On 8 April 2016 at 08:51, Bryan Sullivan  wrote:



OK, somehow I did not pick up on that, or dropped it along the way of 
developing the script. Thanks for the clarification, also that Tempest is not 
required. I should have clarified that I'm using stable/liberty as the base. I 
will be moving to stable/mitaka soon, as part of the OPNFV Colorado release 
development.

One additional question then - are the tests run by "tox -epy27" the same as 
the tests in the folder 
https://github.com/openstack/congress/tree/stable/liberty/contrib/tempest? If 
not, how are those tests supposed to be run for a non-devstack deploy (I see 
reference to devstack in the readme)?

I see that the folders have been reorganized for mitaka. My question is per the 
goal to include as much of the Congress tests as possible in the OPNFV CI/CD 
process. Not that I expect any to fail, I just want OPNFV to leverage the full 
test suite. If for liberty that's best left as the tests run by the tox 
command, then that's OK.

Thanks,
Bryan Sullivan

Date: Thu, 7 Apr 2016 17:11:36 -0700
From: ekcs.openst...@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Congress] Issues with Tox testing

Thanks for the feedback, Bryan. Glad you got things working!
1. The instructions asking to install those packages are missing from kilo 
(we’ll fix that), but they have been there since liberty. Was it perhaps 
unclear because the line is too long?

   Additionally:

   $ sudo apt-get install git gcc python-dev libxml2 libxslt1-dev libzip-dev 
     mysql-server python-mysqldb build-essential libssl-dev libffi-dev

2. Tempest should not be required by the tox tests.

Thanks!
From:  Bryan Sullivan 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)" 

Date:  Thursday, April 7, 2016 at 4:29 PM
To:  "openstack-dev@lists.openstack.org" 
Subject:  Re: [openstack-dev] [Congress] Issues with Tox testing

An update: I found that there were two dependencies needed that were not clear 
in the guide at https://github.com/openstack/congress. I also installed Tempest 
which was not referenced before. If these additions are correct (they worked 
for me), they should be added to 
https://github.com/openstack/congress/blob/master/README.rst.

$ sudo apt-get install libffi-dev libssl-dev
$ cd ~/git
$ git clone https://github.com/openstack/tempest/
$ cd tempest
$ ~/git/congress/bin/pip install -r requirements.txt
$ ~/git/congress/bin/pip install .

(not sure if both pip commands are needed - I'm not an expert on pip install)

After that, "tox -epy27" ran thru fine:

---
congress.tests.policy_engines.test_vmplacement.TestComputeVmAssignment.test_set_policy_with_dashes  27.623
congress.tests.policy_engines.test_vmplacement.TestComputeVmAssignment.test_set_policy  27.212
congress.tests.policy_engines.test_agnostic_performance.TestRuntimePerformance.test_simulate_latency  1.325
congress.tests.dse.test_dse.TestDSE.test_policy_tables

[openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-11 Thread Armando M.
Hi folks,

A provisional schedule for the Neutron project is available [1]. I am still
working with the session chairs and going through/ironing out some details
as well as gathering input from [2].

I hope I can get something more final by the end of this week. In the
meantime, please free to ask questions/provide comments.

Many thanks,
Armando

[1]
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
[2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Does Neutron itself add VM ports to ovs?

2016-04-11 Thread 张晨
Hello everyone,


I have a question about Neutron. I learned that the ovs-agent receives the 
update-port rpc notification, and updates ovsdb data for the VM port.


But what is the situation when I use SDN controllers instead of the OVS mechanism 
driver? I found nowhere in ODL that adds the VM port to ovs.


I asked the author of the related ODL plugin, but he told me that OpenStack 
adds the VM port to ovs.


Then, where is the implementation in OpenStack that adds the VM port to ovs, when 
I'm using ODL replacing the OVS mechanism driver?


Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] start time of weekly TaaS IRC meeting

2016-04-11 Thread Takashi Yamamoto
it's my understanding too.
i submitted the change: https://review.openstack.org/#/c/304383/
please +1 if you are ok.

On Tue, Apr 12, 2016 at 9:06 AM, Soichi Shigeta
 wrote:
>
>  Hi Anil, Vinay, and folks,
>
> I'd like to confirm the start time of weekly TaaS IRC meeting
>   will be changed from 06:30 UTC to 05:30 UTC (adjustment for
>   summer/daylight time) from next IRC on 13th Apr.
>
>   Is this right?
>
>   Regards,
>   Soichi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Lessons learned from OpenStack Liberty Deployment

2016-04-11 Thread Mathew Mulamootil Varghese (mathew)
Hi All,

I would appreciate it if any of you could share lessons learned on Liberty deployment. You 
can unicast me to avoid spamming the list.

I need to share the same with a customer who is planning to start with a PoC on 
Liberty.

Many Thanks,

Matt
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Neutron can't ping external network

2016-04-11 Thread liyulei
Hello,

 

I have installed OpenStack Liberty using vxlan. Though there are no errors in the
logs and I can create a VM on the compute node, I can't ping the external network
either from the VM or from the router namespace. On my controller node, there is only
one network interface, and my tenant_network_type is vxlan. My question is:
how many network interfaces are required at least, and if one network interface
can achieve the goal, how should I edit the conf?

 

 

Thanks

 

Li yulei

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Horizon][stable] proposing Rob Cresswell for Horizon stable core

2016-04-11 Thread Tony Breeds
On Thu, Apr 07, 2016 at 12:01:31PM +0200, Matthias Runge wrote:
> Hello,
> 
> I'm proposing Rob Cresswell to become stable core for Horizon. I
> thought, in the past all PTL were in stable team, but this doesn't seem
> to be true any more.

This *may* have been true when the project-specific teams were created, which
was before my time, but it isn't true now.

> Please chime in with +1/-1

-1

As with core status in other parts of OpenStack, it's merit / evidence based.
That is to say, if you're doing good work and showing an understanding of the
stable policy, then great, let's do this thing.

A quick check of reviewstats:

stable-liberty-horizon-120.txt : http://paste.openstack.org/show/493706
stable-kilo-horizon-120.txt: http://paste.openstack.org/show/493707

Shows that Rob has done 2 stable reviews in the last 120 days.

This is absolutely no reflection on Rob's contribution to Horizon as a whole.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] Eliminating the DevStack layer

2016-04-11 Thread Hirofumi Ichihara



On 2016/04/12 8:02, Kevin Benton wrote:
Oh right, I'm definitely for eliminating these values from Devstack 
and just telling people to use post-config. I was just hesitant about 
advocating for their removal from neutron.
Yeah, my point is eliminating useless options from Devstack since we can 
change them in the post-config section of local.conf if needed.
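
For example, someone who really needs a non-default value can set it directly
in local.conf instead of going through a dedicated Devstack variable. A minimal
sketch (the option and value here are only an illustration, not a
recommendation):

  [[post-config|$NOVA_CONF]]
  [DEFAULT]
  vif_plugging_timeout = 600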





On Mon, Apr 11, 2016 at 3:55 PM, Brandon Logan 
> wrote:


On Mon, 2016-04-11 at 15:30 -0700, Kevin Benton wrote:
> >[1]:

https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L178
> >[2]:
https://github.com/openstack/nova/blob/master/nova/conf/virt.py#L164-L166
>
>
> This is a Nova option to decide how long to wait for Neutron to
> callback before considering a port failed to be wired. The time this
> will take will depend quite a bit on how heavily loaded the
system is.
> We can certainly try to get rid of it, but it means that we have to
> force assumptions about how quickly a system should give up waiting
> for wiring. It would be similar to getting rid of the option to
choose
> a timeout value for the API clients.
>
>
>
> >[3]:

https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L162
> >[4]:

https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L53
>
>
> Neutron does not need to be deployed with keystone. This is how you
> disable it. Some operators do not have Neutron exposed to tenants so
> keystone is stripped away for performance since the only things
> communicating with Neutron are internal trusted services.

This is correct. In a large deployment the number of requests going to
keystone dramatically affects performance.  Do you think this needs to
be a devstack config option though?  I kind of don't think it does for
no better reason than it's easy to just change the option in the
neutron.conf and restart.

>
> On Mon, Apr 11, 2016 at 12:42 PM, Hirofumi Ichihara
> > wrote:
> I agree. Throughout I was reviewing Devstack over 3 cycles,
> I thought the same thing. Devstack often accepted
patches just
> adding option although we're not sure who really needs the
> options.
> There are many useless stuff in the options.
> For example, default value of devstack option is the same
> value as
> default in Projects. Please look at [1] and [2], [3] and
[4].
> Who uses these options?
>
> We can see such options in devstack throughout. I agree we
> will adjust default configurations and
> that documents in Neutron side. However, let's eliminate
such
> options are clearly useless first.
> And then we should do after we made necessary options clear.
>
> [1]:
>

https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L178
> [2]:
>
https://github.com/openstack/nova/blob/master/nova/conf/virt.py#L164-L166
> [3]:
>

https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L162
> [4]:
>

https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L53
>
> Thanks,
> Hirofumi
>
>
> On 2016/04/09 0:07, Sean M. Collins wrote:
> Prior to the introduction of local.conf, the
only way
> to configure
> OpenStack components was to introduce code directly
> into DevStack, so
> that DevStack would pick it up then inject it
into the
> configuration
> file.
>
> This was because DevStack writes out new
configuration
> files on each
> run, so it wasn't possible for you to make
changes to
> any configuration
> file (nova.conf, neutron.conf, ml2_plugin.ini,
etc..).
>
> So, someone who wanted to set the Linux Bridge
Agent's
> physical_interface_mappings setting for Neutron
would
> have to use
> $LB_INTERFACE_MAPPINGS in DevStack, which would then
> be invoked by
> DevStack[1].
>
> The local.conf functionality was introduced quite a
> while back, and
> I think it's time to have a conversation about
why we
> should start
> moving away from the previous practice of declaring
>

[openstack-dev] [stackalytics] Proposal for some code/feature changes

2016-04-11 Thread Nikhil Komawar
Hello,

I was hoping to make some changes to the stackalytics dashboard,
specifically of this type [1], following the suggestions I requested here
[2]; possibly add a few extra columns for +0s and just Bot +1s. I think
having this info gives a much clearer picture of the kind of reviews
someone is/wants to be involved in. I couldn't find documentation in the
README or anywhere else, and the minimal amount of docstrings is making
it difficult for me to figure out the changes.

What's the best possible route to accomplish this?

[1] http://stackalytics.com/report/contribution/astara-group/30
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/091836.html

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-11 Thread Jay Pipes
I'll actually be out in Hillsboro on Thursday night so I can ask folks 
when I'm out there...


Best,
-jay

On 04/11/2016 07:19 PM, Augustina Ragwitz wrote:

On Mon, Apr 11, 2016 at 3:54 PM, Michael Still > wrote:

Intel at Hillsboro had expressed an interest in hosting the N
mid-cycle last release, so they might still be an option? I don't
recall any other possible hosts in the queue, but its possible I've
missed someone.


I was also thinking about following up with Intel since they just hosted
the Horizon Midcycle. I'm in PDX so I can follow up on that.

---
Augustina Ragwitz
Sr Systems Software Engineer, HPE Cloud
Hewlett Packard Enterprise
---
irc: auggy



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [releases] Behavior change in the bot for jenkins merge?

2016-04-11 Thread Clark Boylan
On Mon, Apr 11, 2016, at 06:18 PM, Nikhil Komawar wrote:
> Hi,
> 
> I noticed on a recent merge to glance [1] that the bot updated the bug
> [2] with comment from "in progress" to "fix released" vs. earlier
> behavior "fix committed". Is that behavior on purpose or issue with the
> bot?
> 
> [1] https://review.openstack.org/#/c/304184/
> [2] https://bugs.launchpad.net/glance/+bug/1568894

This was an intentional behavior change. See
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081612.html

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-11 Thread Tony Breeds
On Mon, Apr 11, 2016 at 03:49:16PM -0500, Matt Riedemann wrote:
> A few people have been asking about planning for the nova midcycle for
> newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work the
> best. R-14 is close to the US July 4th holiday, R-13 is during the week of
> the US July 4th holiday, and R-12 is the week of the n-2 milestone.

Thanks for starting this now.  It really helps  to know these things early.

This cycle *may* be harder than typical with:
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9478

Having said that, either of those options work for me.

> As far as a venue is concerned, I haven't heard any offers from companies to
> host yet. If no one brings it up by the summit, I'll see if hosting in
> Rochester, MN at the IBM site is a possibility.

+1 would Rochester again.  The drive from MSP was trivial ;P

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] security group rules

2016-04-11 Thread Remo Mattei
It says default, not 0/0, so it is not from anywhere.

So that applies only to the local network (default).
> On Apr 11, 2016, at 21:15, Jagga Soorma  wrote:
> 
> Hi Guys,
> 
> There is a default security group rule that has the following entry:
> 
> --
> Direction: Ingress
> Ether Type: IPv4
> IP Protocol: Any
> Port Range: Any
> Remote Prefix: -
> Remote Security Group: default
> --
> 
> Now this makes me think that it should basically allow all ingress ipv4 
> traffic (udp & tcp) on any port.  However we have to manually open up ssh for 
> example by adding another rule for port 22 and remote prefix of 0.0.0.0/0 
> .  Not sure what a - in the remote prefix means and why is 
> this rule even there if it does nothing.  Any help understanding this would 
> be appreciated.
> 
> Thanks.
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 
> 

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [glance][reno] Broken releasenotes in glance_store

2016-04-11 Thread Nikhil Komawar
To close this:

This has been fixed as a part of the earlier opened bug
https://bugs.launchpad.net/glance-store/+bug/1568767 and other is
duplicated.

Thanks!

On 4/11/16 6:12 PM, Nikhil Komawar wrote:
> NVM, I verified it locally and created a report.
> https://bugs.launchpad.net/glance-store/+bug/1569062
>
> Thanks for bringing this up.
>
> On 4/11/16 4:07 PM, Nikhil Komawar wrote:
>> I just referred to it using my email inbox, gerrit seems to be down for
>> me to be fully confirmed on this fixing things.
>>
>> On 4/11/16 4:06 PM, Nikhil Komawar wrote:
>>> Thanks for your proposal Andreas. I guess [1] fixes things and good to
>>> have here for awareness?
>>>
>>> [1] https://review.openstack.org/303962
>>>
>>> On 4/11/16 2:56 AM, Andreas Jaeger wrote:
 I've noticed that the translation bot fails extracting strings from the
 release notes:
 https://jenkins.openstack.org/job/glance_store-propose-translation-update/203/console

 I could reproduce this with running "tox -e releasenotes" locally on
 glance_store. Could you check what's broken - and figure out how this
 could have sneaked in through our gates, please?

 thanks,
 Andreas

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-11 Thread Kai Qiang Wu
Running both Chronos and Marathon on Mesos
and running Chronos on top of Marathon seem to be two different cases.


I think what #1 (adding Chronos to the mesos bay) provides, #2 can also achieve.
Only if we find frameworks (heat templates) not able to handle that
should we use option #1.
But still, flexible is better, I think.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Guz Egor 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   12/04/2016 04:36 am
Subject:Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay



+1 for "#1: Mesos and Marathon". Most deployments that I am aware of has
this setup. Also we can provide several line instructions how to run
Chronos on top of Marathon.

honestly I don't see how #2 will work, because Marathon installation is
different from Aurora installation.

---
Egor

From: Kai Qiang Wu 
To: OpenStack Development Mailing List (not for usage questions)

Sent: Sunday, April 10, 2016 6:59 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

#2 seems more flexible, and if it can be proven that it can "make the SAME mesos bay
work with multiple frameworks", it would be great. Which means one
mesos bay should support multiple frameworks.




Thanks


Best Wishes,


Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193


Follow your heart. You are miracle!

Hongbin Lu ---11/04/2016 12:06:07 am---My preference is #1, but I don’t
feel strong to exclude #2. I would agree to go with #2 for now and

From: Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage questions)"

Date: 11/04/2016 12:06 am
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay



My preference is #1, but I don’t feel strong to exclude #2. I would agree
to go with #2 for now and switch back to #1 if there is a demand from
users. For Ton’s suggestion to push Marathon into the introduced
configuration hook, I think it is a good idea.

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: April-10-16 11:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
I would agree that #2 is the most flexible option, providing a well defined
path for additional frameworks such as Kubernetes and Swarm.
I would suggest that the current Marathon framework be refactored to use
this new hook, to serve as an example and to be the supported
framework in Magnum. This will also be useful to users who want other
frameworks but not Marathon.
Ton,

Adrian Otto ---04/08/2016 08:49:52 PM---On Apr 8, 2016, at 3:15 PM, Hongbin
Lu > wrote:

From: Adrian Otto 
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date: 04/08/2016 08:49 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay


On Apr 8, 2016, at 3:15 PM, Hongbin Lu  wrote:

Hi team,
I would like to give an update for this thread. In the last team, we
discussed several options to introduce Chronos to our mesos bay:
1. Add Chronos to the mesos bay. With this option, the mesos bay will have
two mesos frameworks by default (Marathon and Chronos).
2. Add a configuration hook for users to configure additional mesos
frameworks, such as Chronos. With this option, Magnum team doesn’t need to
maintain extra framework configuration. However, users need to do it
themselves.
This is my preference.

Adrian
3. Create a dedicated bay type for Chronos. With this option, we separate
Marathon and Chronos into two different bay types. As a result, each bay
type becomes easier to maintain, but those two mesos frameworks cannot share
resources (a key feature of mesos is to have different frameworks running
on the same cluster to increase resource utilization). Which option do you
prefer? Or do you have other suggestions? Advice is welcome.

Best regards,
Hongbin

From: Guz Egor 

[openstack-dev] [infra] [releases] Behavior change in the bot for jenkins merge?

2016-04-11 Thread Nikhil Komawar
Hi,

I noticed on a recent merge to glance [1] that the bot updated the bug
[2] with a comment from "in progress" to "fix released" vs. the earlier
behavior "fix committed". Is that behavior on purpose or an issue with the bot?

[1] https://review.openstack.org/#/c/304184/
[2] https://bugs.launchpad.net/glance/+bug/1568894

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] security group rules

2016-04-11 Thread Jagga Soorma
Hi Guys,

There is a default security group rule that has the following entry:

--
Direction: Ingress
Ether Type: IPv4
IP Protocol: Any
Port Range: Any
Remote Prefix: -
Remote Security Group: default
--

Now this makes me think that it should basically allow all ingress ipv4
traffic (udp & tcp) on any port.  However, we have to manually open up ssh,
for example, by adding another rule for port 22 and a remote prefix of
0.0.0.0/0.  Not sure what a - in the remote prefix means and why this
rule is even there if it does nothing.  Any help understanding this would be
appreciated.

Thanks.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [neutron][taas] start time of weekly TaaS IRC meeting

2016-04-11 Thread Soichi Shigeta


 Hi Anil, Vinay, and folks,

I'd like to confirm the start time of weekly TaaS IRC meeting
  will be changed from 06:30 UTC to 05:30 UTC (adjustment for
  summer/daylight time) from next IRC on 13th Apr.

  Is this right?

  Regards,
  Soichi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-11 Thread Steve Baker

On 12/04/16 11:48, Jeremy Stanley wrote:

On 2016-04-12 11:43:06 +1200 (+1200), Steve Baker wrote:

Can I suggest a sub-team for
os-collect-config/os-refresh-config/os-apply-config? I ask since
these tools also make up the default heat agent, and there is
nothing in them which is TripleO specific.

Could make sense similarly for diskimage-builder, as there is a lot
of TripleO/Infra cross-over use and contribution happening there.

+1, this tool is general purpose and has diverse contributors and consumers
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Meeting SKIPPED, Tue April 12th, 21:00 UTC

2016-04-11 Thread Mike Perez
Hi all!

We will be skipping the cross-project meeting since there are no agenda items
to discuss, but someone can add one [1] to call a meeting next time. Thanks!

[1] - 
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-11 Thread Jeremy Stanley
On 2016-04-12 11:43:06 +1200 (+1200), Steve Baker wrote:
> Can I suggest a sub-team for
> os-collect-config/os-refresh-config/os-apply-config? I ask since
> these tools also make up the default heat agent, and there is
> nothing in them which is TripleO specific.

Could make sense similarly for diskimage-builder, as there is a lot
of TripleO/Infra cross-over use and contribution happening there.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-11 Thread Steve Baker

On 11/04/16 22:19, Steven Hardy wrote:

On Mon, Apr 11, 2016 at 05:54:11AM -0400, John Trowbridge wrote:

Hola OOOers,

It came up in the meeting last week that we could benefit from a CI
subteam with its own meeting, since CI is taking up a lot of the main
meeting time.

I like this idea, and think we should do something similar for the other
informal subteams (tripleoclient, UI), and also add a new subteam for
tripleo-quickstart (and maybe one for releases?).

+1, from the meeting and other recent discussions it sounds like defining
some sub-teams would be helpful, let's try to enumerate those discussed:

- tripleo-ci
- API (Mistral based API which is landing in tripleo-common atm)
- Tripleo-UI
- os-net-config
- python-tripleoclient
- tripleo-quickstart
Can I suggest a sub-team for 
os-collect-config/os-refresh-config/os-apply-config? I ask since these 
tools also make up the default heat agent, and there is nothing in them 
which is TripleO specific.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[OpenStack-Infra] [Infra] Meeting Tuesday April 12th at 19:00 UTC

2016-04-11 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday April 12th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-04-05-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-04-05-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-04-05-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[openstack-dev] [Infra] Meeting Tuesday April 12th at 19:00 UTC

2016-04-11 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday April 12th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-04-05-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-04-05-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-04-05-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-11 Thread Augustina Ragwitz
On Mon, Apr 11, 2016 at 3:54 PM, Michael Still  wrote:
>
> Intel at Hillsboro had expressed an interest in hosting the N mid-cycle
> last release, so they might still be an option? I don't recall any other
> possible hosts in the queue, but its possible I've missed someone.
>
>
I was also thinking about following up with Intel since they just hosted
the Horizon Midcycle. I'm in PDX so I can follow up on that.

---
Augustina Ragwitz
Sr Systems Software Engineer, HPE Cloud
Hewlett Packard Enterprise
---
irc: auggy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] Eliminating the DevStack layer

2016-04-11 Thread Kevin Benton
Oh right, I'm definitely for eliminating these values from Devstack and
just telling people to use post-config. I was just hesitant about
advocating for their removal from neutron.

On Mon, Apr 11, 2016 at 3:55 PM, Brandon Logan 
wrote:

> On Mon, 2016-04-11 at 15:30 -0700, Kevin Benton wrote:
> > >[1]:
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L178
> > >[2]:
> https://github.com/openstack/nova/blob/master/nova/conf/virt.py#L164-L166
> >
> >
> > This is a Nova option to decide how long to wait for Neutron to
> > callback before considering a port failed to be wired. The time this
> > will take will depend quite a bit on how heavily loaded the system is.
> > We can certainly try to get rid of it, but it means that we have to
> > force assumptions about how quickly a system should give up waiting
> > for wiring. It would be similar to getting rid of the option to choose
> > a timeout value for the API clients.
> >
> >
> >
> > >[3]:
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L162
> > >[4]:
> https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L53
> >
> >
> > Neutron does not need to be deployed with keystone. This is how you
> > disable it. Some operators do not have Neutron exposed to tenants so
> > keystone is stripped away for performance since the only things
> > communicating with Neutron are internal trusted services.
>
> This is correct. In a large deployment the number of requests going to
> keystone dramatically affects performance.  Do you think this needs to
> be a devstack config option though?  I kind of don't think it does for
> no better reason than it's easy to just change the option in the
> neutron.conf and restart.
>
> >
> > On Mon, Apr 11, 2016 at 12:42 PM, Hirofumi Ichihara
> >  wrote:
> > I agree. Throughout I was reviewing Devstack over 3 cycles,
> > I thought the same thing. Devstack often accepted patches just
> > adding option although we're not sure who really needs the
> > options.
> > There are many useless stuff in the options.
> > For example, default value of devstack option is the same
> > value as
> > default in Projects. Please look at [1] and [2], [3] and [4].
> > Who uses these options?
> >
> > We can see such options in devstack throughout. I agree we
> > will adjust default configurations and
> > that documents in Neutron side. However, let's eliminate such
> > options are clearly useless first.
> > And then we should do after we made necessary options clear.
> >
> > [1]:
> >
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L178
> > [2]:
> >
> https://github.com/openstack/nova/blob/master/nova/conf/virt.py#L164-L166
> > [3]:
> >
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L162
> > [4]:
> >
> https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L53
> >
> > Thanks,
> > Hirofumi
> >
> >
> > On 2016/04/09 0:07, Sean M. Collins wrote:
> > Prior to the introduction of local.conf, the only way
> > to configure
> > OpenStack components was to introduce code directly
> > into DevStack, so
> > that DevStack would pick it up then inject it into the
> > configuration
> > file.
> >
> > This was because DevStack writes out new configuration
> > files on each
> > run, so it wasn't possible for you to make changes to
> > any configuration
> > file (nova.conf, neutron.conf, ml2_plugin.ini, etc..).
> >
> > So, someone who wanted to set the Linux Bridge Agent's
> > physical_interface_mappings setting for Neutron would
> > have to use
> > $LB_INTERFACE_MAPPINGS in DevStack, which would then
> > be invoked by
> > DevStack[1].
> >
> > The local.conf functionality was introduced quite a
> > while back, and
> > I think it's time to have a conversation about why we
> > should start
> > moving away from the previous practice of declaring
> > variables in
> > DevStack, and then having them injected into the
> > configuration files.
> >
> > The biggest issue is: There is a disconnect between
> > the developers
> > using DevStack and someone who is an operator or who
> > has been editing
> > OpenStack conf files directly. So, for example I can
> > tell you all about
> > how DevStack has a 

Re: [openstack-dev] [devstack][neutron] Eliminating the DevStack layer

2016-04-11 Thread Brandon Logan
On Mon, 2016-04-11 at 15:30 -0700, Kevin Benton wrote:
> >[1]: 
> >https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L178
> >[2]: 
> >https://github.com/openstack/nova/blob/master/nova/conf/virt.py#L164-L166
> 
> 
> This is a Nova option to decide how long to wait for Neutron to
> callback before considering a port failed to be wired. The time this
> will take will depend quite a bit on how heavily loaded the system is.
> We can certainly try to get rid of it, but it means that we have to
> force assumptions about how quickly a system should give up waiting
> for wiring. It would be similar to getting rid of the option to choose
> a timeout value for the API clients.
> 
> 
> 
> >[3]: 
> >https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L162
> >[4]: 
> >https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L53
> 
> 
> Neutron does not need to be deployed with keystone. This is how you
> disable it. Some operators do not have Neutron exposed to tenants so
> keystone is stripped away for performance since the only things
> communicating with Neutron are internal trusted services.

This is correct. In a large deployment the number of requests going to
keystone dramatically affects performance.  Do you think this needs to
be a devstack config option though?  I kind of don't think it does for
no better reason than it's easy to just change the option in the
neutron.conf and restart.

> 
> On Mon, Apr 11, 2016 at 12:42 PM, Hirofumi Ichihara
>  wrote:
> I agree. Throughout the three cycles I spent reviewing DevStack,
> I have thought the same thing. DevStack often accepted patches that
> just add an option, although we're not sure who really needs those
> options.
> There is a lot of useless stuff among the options.
> For example, the default value of a DevStack option is often the
> same as the
> default in the project itself. Please look at [1] and [2], [3] and [4].
> Who uses these options?
> 
> We can see such options throughout DevStack. I agree we
> should adjust the default configurations and
> document them on the Neutron side. However, let's first eliminate
> the options that are clearly useless,
> and then continue once the necessary options are clear.
> 
> [1]:
> 
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L178
> [2]:
> 
> https://github.com/openstack/nova/blob/master/nova/conf/virt.py#L164-L166
> [3]:
> 
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L162
> [4]:
> 
> https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L53
> 
> Thanks,
> Hirofumi
> 
> 
> On 2016/04/09 0:07, Sean M. Collins wrote:
> Prior to the introduction of local.conf, the only way
> to configure
> OpenStack components was to introduce code directly
> into DevStack, so
> that DevStack would pick it up then inject it into the
> configuration
> file.
> 
> This was because DevStack writes out new configuration
> files on each
> run, so it wasn't possible for you to make changes to
> any configuration
> file (nova.conf, neutron.conf, ml2_plugin.ini, etc..).
> 
> So, someone who wanted to set the Linux Bridge Agent's
> physical_interface_mappings setting for Neutron would
> have to use
> $LB_INTERFACE_MAPPINGS in DevStack, which would then
> be invoked by
> DevStack[1].
> 
> The local.conf functionality was introduced quite a
> while back, and
> I think it's time to have a conversation about why we
> should start
> moving away from the previous practice of declaring
> variables in
> DevStack, and then having them injected into the
> configuration files.
> 
> The biggest issue is: There is a disconnect between
> the developers
> using DevStack and someone who is an operator or who
> has been editing
> OpenStack conf files directly. So, for example I can
> tell you all about
> how DevStack has a bunch of variables for configuring
> Neutron (which is
> Not a Good Thing™), and how those go into DevStack and
> then end up coming
> out the other side in a Neutron configuration file.
> 
> Really, I would 

Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-11 Thread Michael Still
On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann 
wrote:

> A few people have been asking about planning for the nova midcycle for
> newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work
> the best. R-14 is close to the US July 4th holiday, R-13 is during the week
> of the US July 4th holiday, and R-12 is the week of the n-2 milestone.
>
> R-16 is too close to the summit IMO, and R-10 is pushing it out too far in
> the release. I'd be open to R-14 though but don't know what other people's
> plans are.
>
> As far as a venue is concerned, I haven't heard any offers from companies
> to host yet. If no one brings it up by the summit, I'll see if hosting in
> Rochester, MN at the IBM site is a possibility.
>

Intel at Hillsboro had expressed an interest in hosting the N mid-cycle
last release, so they might still be an option? I don't recall any other
possible hosts in the queue, but it's possible I've missed someone.

Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] Eliminating the DevStack layer

2016-04-11 Thread Kevin Benton
>[1]:
https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L178
>[2]:
https://github.com/openstack/nova/blob/master/nova/conf/virt.py#L164-L166

This is a Nova option to decide how long to wait for Neutron to callback
before considering a port failed to be wired. The time this will take will
depend quite a bit on how heavily loaded the system is. We can certainly
try to get rid of it, but it means that we have to force assumptions about
how quickly a system should give up waiting for wiring. It would be similar
to getting rid of the option to choose a timeout value for the API clients.

>[3]:
https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L162
>[4]:
https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L53

Neutron does not need to be deployed with keystone. This is how you disable
it. Some operators do not have Neutron exposed to tenants so keystone is
stripped away for performance since the only things communicating with
Neutron are internal trusted services.
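
To make the point about DevStack defaults mirroring project defaults concrete,
here is a rough, from-memory sketch in the style of the oslo.config
definitions behind [2] and [4]; the option names, defaults and help strings
below are approximations, not copies of the linked lines.

    # Approximate sketch only; the real definitions live in nova and neutron.
    from oslo_config import cfg

    nova_opts = [
        cfg.IntOpt('vif_plugging_timeout',
                   default=300,
                   help='Seconds to wait for the Neutron VIF plugging '
                        'callback before treating the port as failed.'),
    ]

    neutron_opts = [
        cfg.StrOpt('auth_strategy',
                   default='keystone',
                   help="Set to 'noauth' to run Neutron without Keystone, "
                        "e.g. when only trusted internal services use it."),
    ]

    # Each service registers its own options; they are shown side by side
    # here only to illustrate that the defaults already live in the projects.
    nova_conf = cfg.ConfigOpts()
    nova_conf.register_opts(nova_opts)
    nova_conf([])
    neutron_conf = cfg.ConfigOpts()
    neutron_conf.register_opts(neutron_opts)
    neutron_conf([])
    print(nova_conf.vif_plugging_timeout)   # 300
    print(neutron_conf.auth_strategy)       # keystone

An operator who wants the no-keystone behaviour can then simply set
auth_strategy = noauth in neutron.conf (or via a local.conf post-config
section) rather than relying on a dedicated DevStack variable.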

On Mon, Apr 11, 2016 at 12:42 PM, Hirofumi Ichihara <
ichihara.hirof...@lab.ntt.co.jp> wrote:

> I agree. Throughout the three cycles I spent reviewing DevStack, I have
> thought the same thing. DevStack often accepted patches that just add an
> option, although we're not sure who really needs those options.
> There is a lot of useless stuff among the options.
> For example, the default value of a DevStack option is often the same as
> the default in the project itself. Please look at [1] and [2], [3] and
> [4]. Who uses these options?
>
> We can see such options throughout DevStack. I agree we should adjust
> the default configurations and
> document them on the Neutron side. However, let's first eliminate the
> options that are clearly useless,
> and then continue once the necessary options are clear.
>
> [1]:
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L178
> [2]:
> https://github.com/openstack/nova/blob/master/nova/conf/virt.py#L164-L166
> [3]:
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L162
> [4]:
> https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L53
>
> Thanks,
> Hirofumi
>
>
> On 2016/04/09 0:07, Sean M. Collins wrote:
>
>> Prior to the introduction of local.conf, the only way to configure
>> OpenStack components was to introduce code directly into DevStack, so
>> that DevStack would pick it up then inject it into the configuration
>> file.
>>
>> This was because DevStack writes out new configuration files on each
>> run, so it wasn't possible for you to make changes to any configuration
>> file (nova.conf, neutron.conf, ml2_plugin.ini, etc..).
>>
>> So, someone who wanted to set the Linux Bridge Agent's
>> physical_interface_mappings setting for Neutron would have to use
>> $LB_INTERFACE_MAPPINGS in DevStack, which would then be invoked by
>> DevStack[1].
>>
>> The local.conf functionality was introduced quite a while back, and
>> I think it's time to have a conversation about why we should start
>> moving away from the previous practice of declaring variables in
>> DevStack, and then having them injected into the configuration files.
>>
>> The biggest issue is: There is a disconnect between the developers
>> using DevStack and someone who is an operator or who has been editing
>> OpenStack conf files directly. So, for example I can tell you all about
>> how DevStack has a bunch of variables for configuring Neutron (which is
>> Not a Good Thing™), and how those go into DevStack and then end up coming
>> out the other side in a Neutron configuration file.
>>
>> Really, I would like to get rid of the intermediate layer (DevStack)
>> and get both Devs and Deployers to be able to just say: Here's my
>> neutron.conf - let's diff mine and yours and see what we need to sync.
>>
>> Matt Kassawara and I have had this issue, since he's coming from the
>> OSAD side, and I'm coming from the DevStack side. We both know what the
>> Neutron configuration should end up as, but DevStack having its own set
>> of variables and how those variables are handled and eventually rendered
>> as a Neutron config file makes things more difficult than it needs to
>> be, since Matt has to now go and learn about how DevStack handles all
>> these Neutron specific variables.
>>
>> The Neutron refactor[2] that I am working on, I am trying to configure
>> as little as possible in DevStack. Neutron should be able to, out of the
>> box, Just Work™. If it can't, then that needs to be fixed in Neutron.
>>
>> Secondly, the Neutron refactor will be getting rid of all the things
>> like $LB_INTERFACE_MAPPINGS - I would *much* prefer that someone using
>> DevStack actually set the apropriate line in their local.conf
>>
>> Such as:
>>
>>  [[post-config|/$Q_PLUGIN_CONF_FILE]]
>>  [linux_bridge]
>>  physical_interface_mappings = foo:bar
>>
>>
>> The advantage of this is, when someone is working with DevStack, the
>> things they are configuring are the same as all the 

Re: [OpenStack-Infra] Gerrit server replacement scheduled for April 11th

2016-04-11 Thread Jeremy Stanley
Our maintenance has concluded, and services are back in operation as
of 21:00 UTC. Please note that (as previously announced) this
maintenance included a move to new IP addresses for
review.openstack.org:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

These have replaced the following prior addresses:

IPv4 -> 104.130.159.134
IPv6 -> 2001:4800:7818:102:be76:4eff:fe05:9b12

DNS has been updated accordingly, but we understand that some users
are running from egress-filtered networks with port 29418/tcp
explicitly allowed to the former review.openstack.org IP addresses
and so may need to update their firewall rules accordingly. Users
dealing with egress filtering may find it easier to switch their
local configuration to use Gerrit's REST API via HTTPS instead, and
the current release of git-review has support for that workflow as
well.

http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html
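
As one unofficial illustration of the REST-over-HTTPS alternative, anonymous
queries work on port 443 with any HTTP client; the search query below is just
an arbitrary example, not a recommendation.

    # Query Gerrit's REST API over HTTPS instead of the SSH API on 29418.
    import json
    import requests

    resp = requests.get(
        'https://review.openstack.org/changes/',
        params={'q': 'status:open project:openstack-dev/devstack', 'n': 5})
    resp.raise_for_status()
    # Gerrit prefixes JSON responses with ")]}'" to defend against XSSI,
    # so strip the first line before decoding.
    changes = json.loads(resp.text.split('\n', 1)[1])
    for change in changes:
        print(change['_number'], change['subject'])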

At the start of the maintenance we needed to stop Zuul, and due to
issues beyond our control were unable to save the active check/gate
pipeline contents. Any changes which were being tested or had been
approved and enqueued into the gate as of 20:00 UTC will need a
"recheck" comment to reinitiate testing on them. If they were
previously approved and enqueued in the gate, they should
automatically reenqueue into the gate again once check jobs pass for
them following your recheck.

We have double checked that this new Gerrit server is working as
intended and expected features are available, but if you experience
any problems please let us know in the #openstack-infra channel on
the Freenode IRC network, or via the
openstack-infra@lists.openstack.org mailing list.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] Gerrit server replacement scheduled for April 11th

2016-04-11 Thread Jeremy Stanley
Our maintenance has concluded, and services are back in operation as
of 21:00 UTC. Please note that (as previously announced) this
maintenance included a move to new IP addresses for
review.openstack.org:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

These have replaced the following prior addresses:

IPv4 -> 104.130.159.134
IPv6 -> 2001:4800:7818:102:be76:4eff:fe05:9b12

DNS has been updated accordingly, but we understand that some users
are running from egress-filtered networks with port 29418/tcp
explicitly allowed to the former review.openstack.org IP addresses
and so may need to update their firewall rules accordingly. Users
dealing with egress filtering may find it easier to switch their
local configuration to use Gerrit's REST API via HTTPS instead, and
the current release of git-review has support for that workflow as
well.

http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

At the start of the maintenance we needed to stop Zuul, and due to
issues beyond our control were unable to save the active check/gate
pipeline contents. Any changes which were being tested or had been
approved and enqueued into the gate as of 20:00 UTC will need a
"recheck" comment to reinitiate testing on them. If they were
previously approved and enqueued in the gate, they should
automatically reenqueue into the gate again once check jobs pass for
them following your recheck.

We have double checked that this new Gerrit server is working as
intended and expected features are available, but if you experience
any problems please let us know in the #openstack-infra channel on
the Freenode IRC network, or via the
openstack-in...@lists.openstack.org mailing list.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][reno] Broken releasenotes in glance_store

2016-04-11 Thread Nikhil Komawar
NVM, I verified it locally and created a report.
https://bugs.launchpad.net/glance-store/+bug/1569062

Thanks for bringing this up.

On 4/11/16 4:07 PM, Nikhil Komawar wrote:
> I just referred to it using my email inbox, gerrit seems to be down for
> me to be fully confirmed on this fixing things.
>
> On 4/11/16 4:06 PM, Nikhil Komawar wrote:
>> Thanks for your proposal Andreas. I guess [1] fixes things and good to
>> have here for awareness?
>>
>> [1] https://review.openstack.org/303962
>>
>> On 4/11/16 2:56 AM, Andreas Jaeger wrote:
>>> I've noticed that the translation bot fails extracting strings from the
>>> release notes:
>>> https://jenkins.openstack.org/job/glance_store-propose-translation-update/203/console
>>>
>>> I could reproduce this with running "tox -e releasenotes" locally on
>>> glance_store. Could you check what's broken - and figure out how this
>>> could have sneaked in through our gates, please?
>>>
>>> thanks,
>>> Andreas

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Hongbin Lu


From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: April-11-16 2:52 PM
To: OpenStack Development Mailing List (not for usage questions); Adrian Otto
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Yeah, I think there are two places where it may make sense.

1. Ironic's nova plugin is the lowest common denominator for treating a physical 
host like a vm. Ironic's api is much richer, but sometimes all you need is 
the lowest common denominator and you don't want to rewrite a bunch of code. In 
this case, it may make sense to have a nova plugin that talks to magnum to 
launch a heavyweight container to make the use case easy.
If I understand correctly, you were proposing a Magnum virt-driver for Nova, 
which is used to provision containers in Magnum bays? Magnum has different bay 
types (i.e. kubernetes, swarm, mesos) so the proposed driver needs to 
understand the APIs of different container orchestration engines (COEs). I 
think it will work only if Magnum provides a unified Container API so that 
the introduced Nova virt-driver can call Magnum's unified API to launch 
containers.
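
Purely as a hypothetical illustration of what such a driver could look like
(nothing below exists today: MagnumBayClient is invented for the example, and
the ComputeDriver method signatures are abbreviated from memory):

    # Hypothetical sketch only; not a real or proposed implementation.
    from nova.virt import driver


    class MagnumBayClient(object):
        """Invented stand-in for a (not yet existing) unified container API."""
        def create_container(self, name, image, memory_mb):
            raise NotImplementedError

        def delete_container(self, name):
            raise NotImplementedError


    class MagnumBayDriver(driver.ComputeDriver):
        """Would launch Nova 'instances' as containers in a Magnum bay."""

        def __init__(self, virtapi):
            super(MagnumBayDriver, self).__init__(virtapi)
            self.bay = MagnumBayClient()

        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None, block_device_info=None):
            # Map the Nova instance onto whatever unit of work the bay's
            # COE uses (pod, app, task, ...).
            self.bay.create_container(name=instance.uuid,
                                      image=image_meta.name,
                                      memory_mb=instance.flavor.memory_mb)

        def destroy(self, context, instance, network_info,
                    block_device_info=None, destroy_disks=True,
                    migrate_data=None):
            self.bay.delete_container(instance.uuid)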


2. Basic abstraction of Orchestration systems. Most (all?) docker orchestration 
systems work with a yaml file. What's in it differs, but shipping it from point 
A to point B using an authenticated channel can probably be nicely abstracted. 
I think this would be a big usability gain as well. Things like the 
applications catalog could much more easily hook into it then. The catalog 
would provide the yaml, and a tag to know which orchestrator type it is, and 
just pass that info along to magnum.
I am open to discussing that, but inventing a standard DSL for all COEs is a 
significant amount of work. We need to evaluate the benefits and costs before 
proceeding to this direction. In comparison, the proposal of unifying Container 
APIs [1] looks easier to implement and maintain.
[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers


Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Monday, April 11, 2016 11:10 AM
To: Adrian Otto; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
Sorry, I disagree.

Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technology. In contrast, the idea of unified Container APIs 
has been constantly proposed by different people in the past. I will try to 
allocate a session in design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented, that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have 

Re: [openstack-dev] [magnum][requirements][release] The introduction of package py2-ipaddress

2016-04-11 Thread Davanum Srinivas
Thierry, Hongbin,

Looked at current docker-py and came up with this:
https://github.com/docker/docker-py/pull/1033/

Thanks,
Dims

On Mon, Apr 11, 2016 at 3:43 PM, Hongbin Lu  wrote:
> Hi Thierry,
>
> Thanks for your advice. I submitted a patch [1] to downgrade docker-py to 
> 1.7.2. In long term, we will negotiate with upstream maintainers to resolve 
> the module conflicting issue.
>
> [1] https://review.openstack.org/#/c/304296/
>
> Best regards,
> Hongbin
>
>> -Original Message-
>> From: Thierry Carrez [mailto:thie...@openstack.org]
>> Sent: April-11-16 5:28 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [magnum][requirements][release] The
>> introduction of package py2-ipaddress
>>
>> Hongbin Lu wrote:
>> > Hi requirements team,
>> >
>> > In short, the recently introduced package py2-ipaddress [1] seems to
>> > break Magnum. In details, Magnum gate recently broke by an error:
>> > "'\xac\x18\x05\x07' does not appear to be an IPv4 or IPv6 address" [2]
>> > (the gate breakage has been temporarily fixed but we are looking for
>> a
>> > permanent fix [3]). After investigation, I opened a ticket in
>> > Cryptography for help [4]. According to the feedback from
>> Cryptography
>> > community, the problem is from py2-ipaddress, which was introduced to
>> > OpenStack recently [1].
>> >
>> > I wonder if we can get any advice from requirements team in this
>> > regards. In particular, what is the proper way to handle the
>> > problematic package?
>> >
>> > [1] https://review.openstack.org/#/c/302539/
>> > [2] https://bugs.launchpad.net/magnum/+bug/1568212
>> > [3] https://bugs.launchpad.net/magnum/+bug/1568427
>> > [4] https://github.com/pyca/cryptography/issues/2870
>>
>> py2-ipaddress was introduced as a dependency by docker-py 1.8.0.
>> Short-term solution would be to cap <1.8.0 in global-requirements
>> (which will make us fallback to 1.7.2 and remove py2-ipaddress).
>>
>> If the two modules are conflicting we should determine which one is the
>> best and converge to it. ipaddress seems a lot more used and pulled by
>> a lot of packages. So long-term solution would be to make docker-py
>> upstream depend on ipaddress instead...
>>
>> --
>> Thierry Carrez (ttx)
>>
>> ___
>> ___
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet][ceph] Puppet-ceph is now a formal member of puppet-openstack

2016-04-11 Thread Andrew Woodward
It's been a while since we started the puppet-ceph module on stackforge as
a friend of OpenStack. Since then Ceph's usage in OpenStack has increased
greatly and we have both the puppet-openstack deployment scenarios as well
as check-tripleo running against the module.

We've been receiving leadership from the puppet-openstack team for a while
now and our small core team has struggled to keep up. As such we have added
the puppet-openstack cores to the review ACLs in gerrit and have been
formally added to the puppet-openstack project in governance. [1]

I thank the puppet-openstack team for their support, and I am glad to see
the module move under their leadership.

[1] https://review.openstack.org/300191
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][stable] proposing Rob Cresswell for Horizon stable core

2016-04-11 Thread David Lyle
+1

On Thu, Apr 7, 2016 at 8:05 PM, Zhenguo Niu  wrote:
> definitely +1
>
> On Fri, Apr 8, 2016 at 12:42 AM, Brad Pokorny 
> wrote:
>>
>> +1. I think Rob will provide good input for stable.
>>
>> Thanks,
>> Brad
>>
>> From: Timur Sufiev 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Thursday, April 7, 2016 at 4:31 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [Horizon][stable] proposing Rob Cresswell for
>> Horizon stable core
>>
>> +1
>> Чт, 7 апр. 2016 г. в 14:04, Itxaka Serrano Garcia :
>>>
>>> I'm still not sure if non-cores (i.e. peasants like me) can vote, but I
>>> will do it anyway :D
>>>
>>> A big +1 from me.
>>>
>>> Itxaka
>>>
>>> On 04/07/2016 12:01 PM, Matthias Runge wrote:
>>> > Hello,
>>> >
>>> > I'm proposing Rob Cresswell to become stable core for Horizon. I
>>> > thought, in the past all PTL were in stable team, but this doesn't seem
>>> > to be true any more.
>>> >
>>> > Please chime in with +1/-1
>>> >
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Best Regards,
> Zhenguo Niu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] upgrade options for custom heat resource plug-ins

2016-04-11 Thread Randall Burt
There is a mechanism to mark them as support status "hidden" so that they don't 
show up in resource-type-show and aren't allowed in new templates, but older 
templates should still work. Eventually they may go away altogether but that 
should be far in the future. For your custom resources, you can decide when or 
if ever to remove them.
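
As a rough sketch of what that deprecate-then-hide pattern can look like in a
custom resource's properties_schema (illustrative only; module paths and
SupportStatus arguments may differ between Heat releases):

    # Illustrative sketch, not taken from a real plug-in.
    from heat.engine import properties
    from heat.engine import resource
    from heat.engine import support


    class MyCustomResource(resource.Resource):

        PROPERTIES = (OLD_NAME, NEW_NAME) = ('old_name', 'new_name')

        properties_schema = {
            # Kept only so stacks created from older templates keep working.
            OLD_NAME: properties.Schema(
                properties.Schema.STRING,
                'Deprecated, use new_name instead.',
                support_status=support.SupportStatus(
                    status=support.DEPRECATED,
                    version='2.0',
                    message='Use new_name.')),
            NEW_NAME: properties.Schema(
                properties.Schema.STRING,
                'Replacement for old_name.'),
        }

        def handle_create(self):
            # Accept either property while the deprecation window is open.
            value = (self.properties[self.NEW_NAME] or
                     self.properties[self.OLD_NAME])
            # ... create the backend object using `value` ...

A later release of the plug-in could switch the status to support.HIDDEN so
the property stops showing up in resource-type-show while stacks that already
use it still load.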

On Apr 11, 2016, at 3:58 PM, "Praveen Yalagandula" 
 wrote:

> Randall,
> 
> Thanks for your reply.
> I was wondering especially about those "deprecated" properties. What happens 
> after several releases? Do you just remove them at that point? If the 
> expected maximum lifespan of a stack is shorter than the span for which those 
> "deprecated" properties are maintained, then removing them works. But what 
> happens if it is longer?
> 
> Cheers,
> Praveen
> 
> On Mon, Apr 11, 2016 at 12:02 PM Randall Burt  
> wrote:
> Not really. Ideally, you need to write your resource such that these changes 
> are backwards compatible. We do this for the resources we ship with Heat (add 
> new properties while supporting deprecated properties for several releases).
> 
> On Apr 11, 2016, at 1:06 PM, "Praveen Yalagandula" 
>  wrote:
> 
> > Hi,
> >
> > We are developing a custom heat resource plug-in and wondering about how to 
> > handle plug-in upgrades. As our product's object model changes with new 
> > releases, we will need to release updated resource plug-in code too. 
> > However, the "properties" stored in the heat DB for the existing resources, 
> > whose definitions have been upgraded, need to be updated too. Was there any 
> > discussion on this?
> >
> > Thanks,
> > Praveen Yalagandula
> > Avi Networks
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] Term "workload" has two clashing meanings

2016-04-11 Thread Aleksandr Maretskiy
"dataplane" looks good to me

On Mon, Apr 11, 2016 at 10:56 PM, Boris Pavlovic  wrote:

> Alex,
>
> I would suggest calling it "dataplane" because it obviously points to
> dataplane testing
>
> Best regards,
> Boris Pavlovic
>
> On Mon, Apr 11, 2016 at 11:10 AM, Roman Vasilets 
> wrote:
>
>> Hi all, personally I want to suggest *crossload*. The concept is similar
>> to cross-training (training in two or more sports in order to improve
>> fitness and performance, especially in a main sport). By that
>> template, crossload is load in two or more areas in order to improve
>> durability and performance, especially in a main area.
>> Thanks, Roman.
>>
>> On Mon, Apr 11, 2016 at 6:38 PM, Aleksandr Maretskiy <
>> amarets...@mirantis.com> wrote:
>>
>>> Hi all,
>>>
>>> this is about terminology, we have term "workload" in Rally that appears
>>> in two clashing meanings:
>>>
>>>  1. module rally.plugins.workload
>>> 
>>> which collects plugins for cross-VM testing
>>>  2. workload replaces term "scenario" in our new input task format
>>> 
>>> (task->scenarios is replaced with task->subtasks->workloads)
>>>
>>> Let's introduce new term as replacement of "1." (or maybe "2." but I
>>> suppose this is not the best option).
>>>
>>> Maybe rename rally.plugins.workload to:
>>>rally.plugins.*vmload*
>>>rally.plugins.*vmperf*
>>>rally.plugins.*shaker*
>>>rally.plugins.*vmworkload*
>>>...more ideas?
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] upgrade options for custom heat resource plug-ins

2016-04-11 Thread Praveen Yalagandula
Randall,

Thanks for your reply.
I was wondering especially about those "deprecated" properties. What
happens after several releases? Do you just remove them at that point? If
the expected maximum lifespan of a stack is shorter than the span for which
those "deprecated" properties are maintained, then removing them works. But
what happens if it is longer?

Cheers,
Praveen

On Mon, Apr 11, 2016 at 12:02 PM Randall Burt 
wrote:

> Not really. Ideally, you need to write your resource such that these
> changes are backwards compatible. We do this for the resources we ship with
> Heat (add new properties while supporting deprecated properties for several
> releases).
>
> On Apr 11, 2016, at 1:06 PM, "Praveen Yalagandula" <
> yprav...@avinetworks.com>
>  wrote:
>
> > Hi,
> >
> > We are developing a custom heat resource plug-in and wondering about how
> to handle plug-in upgrades. As our product's object model changes with new
> releases, we will need to release updated resource plug-in code too.
> However, the "properties" stored in the heat DB for the existing resources,
> whose definitions have been upgraded, need to be updated too. Was there any
> discussion on this?
> >
> > Thanks,
> > Praveen Yalagandula
> > Avi Networks
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-11 Thread Anita Kuno
On 04/11/2016 04:49 PM, Matt Riedemann wrote:
> A few people have been asking about planning for the nova midcycle for
> newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work
> the best. R-14 is close to the US July 4th holiday, R-13 is during the
> week of the US July 4th holiday, and R-12 is the week of the n-2 milestone.
> 
> R-16 is too close to the summit IMO, and R-10 is pushing it out too far
> in the release. I'd be open to R-14 though but don't know what other
> people's plans are.
> 
> As far as a venue is concerned, I haven't heard any offers from
> companies to host yet. If no one brings it up by the summit, I'll see if
> hosting in Rochester, MN at the IBM site is a possibility.
> 
> [1] http://releases.openstack.org/newton/schedule.html
> 
Thanks for bringing up the topic so early, Matt. It really helps with
scheduling.

Your assessment of timing sounds reasonable to me.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-11 Thread Sean Dague
On 04/11/2016 04:49 PM, Matt Riedemann wrote:
> A few people have been asking about planning for the nova midcycle for
> newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work
> the best. R-14 is close to the US July 4th holiday, R-13 is during the
> week of the US July 4th holiday, and R-12 is the week of the n-2 milestone.
> 
> R-16 is too close to the summit IMO, and R-10 is pushing it out too far
> in the release. I'd be open to R-14 though but don't know what other
> people's plans are.
> 
> As far as a venue is concerned, I haven't heard any offers from
> companies to host yet. If no one brings it up by the summit, I'll see if
> hosting in Rochester, MN at the IBM site is a possibility.
> 
> [1] http://releases.openstack.org/newton/schedule.html

My personal preference is R-11, which is about when it was last time around.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Adrian Otto
That’s not what I was talking about here. I’m addressing the interest in a 
common compute API for the various types of compute (VM, BM, Container). Having 
a “containers” API for multiple COE’s is a different subject.

Adrian

On Apr 11, 2016, at 11:10 AM, Hongbin Lu 
> wrote:

Sorry, I disagree.

Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technology. In contrast, the idea of unified Container APIs 
has been constantly proposed by different people in the past. I will try to 
allocate a session in design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented, that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.
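
As a rough illustration of that flavor/host-aggregate pattern (all names,
credentials and the extra-spec key below are placeholders, the novaclient
calls are from memory, and the scheduler must have the
AggregateInstanceExtraSpecsFilter enabled):

    # Sketch only: label Ironic-backed hosts and pin a flavor to them.
    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://192.0.2.10:5000/v2.0')   # placeholder auth

    agg = nova.aggregates.create('baremetal-hosts', None)
    nova.aggregates.set_metadata(agg, {'compute_type': 'ironic'})
    for host in ('bm-compute-1', 'bm-compute-2'):          # placeholder hosts
        nova.aggregates.add_host(agg, host)

    # Booting with this flavor then gives a bare metal instance through the
    # ordinary Nova API; other flavors map to VM or container aggregates.
    flavor = nova.flavors.create('bm.small', ram=8192, vcpus=4, disk=80)
    flavor.set_keys({'aggregate_instance_extra_specs:compute_type': 'ironic'})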

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native API’s for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).

Regards,

Adrian



On Apr 11, 2016, at 8:47 AM, Amrith Kumar 
> wrote:

Monty, Dims,

I read the notes and was similarly intrigued about the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova and get the benefit of bare 
metal through Ironic, VM's through Nova VM's, and containers through 
nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures as any end-user would expect. Databases 
don't always make sense in VM's, and while containers are great for quick and 
dirty prototyping, and VM's are great for much more, there are databases that 
will in production only be 

[openstack-dev] [nova] Newton midcycle planning

2016-04-11 Thread Matt Riedemann
A few people have been asking about planning for the nova midcycle for 
newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work 
the best. R-14 is close to the US July 4th holiday, R-13 is during the 
week of the US July 4th holiday, and R-12 is the week of the n-2 milestone.


R-16 is too close to the summit IMO, and R-10 is pushing it out too far 
in the release. I'd be open to R-14 though but don't know what other 
people's plans are.


As far as a venue is concerned, I haven't heard any offers from 
companies to host yet. If no one brings it up by the summit, I'll see if 
hosting in Rochester, MN at the IBM site is a possibility.


[1] http://releases.openstack.org/newton/schedule.html

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-11 Thread Guz Egor
+1 for "#1: Mesos and Marathon". Most deployments that I am aware of have this 
setup. Also, we can provide a few lines of instructions on how to run Chronos on top 
of Marathon.
Honestly, I don't see how #2 will work, because the Marathon installation is 
different from the Aurora installation. 
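
For the "run Chronos on top of Marathon" idea above, a rough illustration of
what those few lines could look like follows; the Marathon endpoint, docker
image name, ZooKeeper address and Chronos flags are assumptions that would
need to match the actual bay.

    # Illustration only: submit Chronos as an ordinary Marathon app.
    import requests

    MARATHON = 'http://192.0.2.20:8080'   # placeholder Marathon API endpoint
    ZK = '192.0.2.30:2181'                # placeholder ZooKeeper in the bay

    chronos_app = {
        'id': '/chronos',
        'cpus': 0.5,
        'mem': 512,
        'instances': 1,
        'container': {
            'type': 'DOCKER',
            'docker': {'image': 'mesosphere/chronos', 'network': 'HOST'},
        },
        # Chronos needs to find the Mesos master and ZooKeeper.
        'args': ['--master', 'zk://%s/mesos' % ZK, '--zk_hosts', ZK],
    }

    resp = requests.post(MARATHON + '/v2/apps', json=chronos_app)
    resp.raise_for_status()
    print('Submitted:', resp.json().get('id'))
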
--- Egor
  From: Kai Qiang Wu 
 To: OpenStack Development Mailing List (not for usage questions) 
 
 Sent: Sunday, April 10, 2016 6:59 PM
 Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
   
#2 seems more flexible, and it would be great if it can be proved to "make the SAME 
mesos bay work with multiple frameworks." That means one mesos 
bay should support multiple frameworks.




Thanks


Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park, 
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle! 

Hongbin Lu ---11/04/2016 12:06:07 am---My preference is #1, but I don’t feel 
strong to exclude #2. I would agree to go with #2 for now and

From: Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date: 11/04/2016 12:06 am
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay



My preference is #1, but I don’t feel strongly about excluding #2. I would agree to go 
with #2 for now and switch back to #1 if there is a demand from users. For 
Ton’s suggestion to push Marathon into the introduced configuration hook, I 
think it is a good idea.
 
Best regards,
Hongbin
 
From: Ton Ngo [mailto:t...@us.ibm.com] 
Sent: April-10-16 11:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
 I would agree that #2 is the most flexible option, providing a well defined 
path for additional frameworks such as Kubernetes and Swarm. 
I would suggest that the current Marathon framework be refactored to use this 
new hook, to serve as an example and to be the supported
framework in Magnum. This will also be useful to users who want other 
frameworks but not Marathon.
Ton,

Adrian Otto ---04/08/2016 08:49:52 PM---On Apr 8, 2016, at 3:15 PM, Hongbin Lu 
> wrote:

From: Adrian Otto 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date: 04/08/2016 08:49 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay


   
   
   
   
On Apr 8, 2016, at 3:15 PM, Hongbin Lu  wrote:

Hi team,
I would like to give an update for this thread. In the last team meeting, we discussed 
several options to introduce Chronos to our mesos bay:   
   
   
   
1. Add Chronos to the mesos bay. With this option, the mesos bay will have two 
mesos frameworks by default (Marathon and Chronos).
2. Add a configuration hook for users to configure additional mesos frameworks, 
such as Chronos. With this option, Magnum team doesn’t need to maintain extra 
framework configuration. However, users need to do it themselves.
This is my preference.

Adrian   
   
   
   
   
   
   
   
3. Create a dedicated bay type for Chronos. With this option, we separate 
Marathon and Chronos into two different bay types. As a result, each bay type 
becomes easier to maintain, but those two mesos frameworks cannot share 
resources (a key feature of mesos is having different frameworks running on 
the same cluster to increase resource utilization). Which option do you prefer? Or 
do you have other suggestions? Advice is welcome.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com] 
Sent: March-28-16 12:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Jay,

just keep in mind that Chronos can be run by Marathon. 

--- 
Egor

From: Jay Lau 
To: OpenStack Development Mailing List (not for usage questions) 
 
Sent: Friday, March 25, 2016 7:01 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Yes, that's exactly what I want to do: add the DCOS CLI and also add Chronos to 
the Mesos bay so that it can handle both long-running services and batch jobs.

Thanks,

On Fri, Mar 25, 2016 at 5:25 PM, Michal Rostecki  
wrote:

On 03/25/2016 07:57 AM, Jay Lau wrote:

Hi Magnum,

The current mesos bay only includes mesos and marathon; it would be better to
enhance the mesos bay to have more components and finally turn it into a
DCOS which focuses on container 

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Allison Randal
On 04/11/2016 02:51 PM, Fox, Kevin M wrote:
> Yeah, I think there are two places where it may make sense.
> 
> 1. Ironic's nova plugin is the lowest common denominator for treating a
> physical host like a vm. Ironic's api is much richer, but sometimes
> all you need is the lowest common denominator and you don't want to rewrite
> a bunch of code. In this case, it may make sense to have a nova plugin
> that talks to magnum to launch a heavyweight container to make the use
> case easy.
> 
> 2. Basic abstraction of Orchestration systems. Most (all?) docker
> orchestration systems work with a yaml file. What's in it differs, but
> shipping it from point A to point B using an authenticated channel can
> probably be nicely abstracted. I think this would be a big usability
> gain as well. Things like the applications catalog could much more
> easily hook into it then. The catalog would provide the yaml, and a tag
> to know which orchestrator type it is, and just pass that info along to
> magnum.

The typical conundrum here is making "the easy things easy, and the hard
things possible". It doesn't have to be a choice between a) providing a
rich API with access to all the features of each individual compute
paradigm, and b) providing a simple API that allows users to request a
compute resource of any type that's available in the public/private
cloud they're interacting with. OpenStack can have both.

The simple lowest common denominator interface would be very limited
(both by necessity and by design), but easy to understand and get
started on, making some smart assumptions on common usage patterns. The
richer APIs are there for users who need more power and flexibility, and
are ready to go beyond the easy on-ramp.

Again, nothing new here, it seems to be the direction we're already
heading. I'm just articulating why.

Allison

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][reno] Broken releasenotes in glance_store

2016-04-11 Thread Nikhil Komawar
I just referred to it using my email inbox, gerrit seems to be down for
me to be fully confirmed on this fixing things.

On 4/11/16 4:06 PM, Nikhil Komawar wrote:
> Thanks for your proposal Andreas. I guess [1] fixes things and good to
> have here for awareness?
>
> [1] https://review.openstack.org/303962
>
> On 4/11/16 2:56 AM, Andreas Jaeger wrote:
>> I've noticed that the translation bot fails extracting strings from the
>> release notes:
>> https://jenkins.openstack.org/job/glance_store-propose-translation-update/203/console
>>
>> I could reproduce this with running "tox -e releasenotes" locally on
>> glance_store. Could you check what's broken - and figure out how this
>> could have sneaked in through our gates, please?
>>
>> thanks,
>> Andreas

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][reno] Broken releasenotes in glance_store

2016-04-11 Thread Nikhil Komawar
Thanks for your proposal, Andreas. I guess [1] fixes things and it's good to
have here for awareness?

[1] https://review.openstack.org/303962

On 4/11/16 2:56 AM, Andreas Jaeger wrote:
> I've noticed that the translation bot fails extracting strings from the
> release notes:
> https://jenkins.openstack.org/job/glance_store-propose-translation-update/203/console
>
> I could reproduce this with running "tox -e releasenotes" locally on
> glance_store. Could you check what's broken - and figure out how this
> could have sneaked in through our gates, please?
>
> thanks,
> Andreas

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Announcing Newton mid-cycle meetup

2016-04-11 Thread Nikhil Komawar
Hello everyone,

We are pleased to announce that the Glance mid-cycle meetup for Newton
will be held Wednesday-Friday, June 15-17 2016 in Cambridge, MA, USA at
the VMware office. We are sending this notice in advance to help you plan
your travel early. Whether you work in/with Glance community on a
regular basis, attempting to get a spec approved for Newton or just want
to get involved, you are welcome to attend this event. However, we will
need to count the participation early and RSVP will be done using the
etherpad [1]. This etherpad should be considered as a live wiki
throughout the planning, execution and conclusion of the event where we
will be recording important information including venue address, hotel
tips, RSVP, schedule, sessions & their description, etc.

This will be an important event for Glance community as we will most
likely be discussing updates on some of the ongoing work of Import
refactor, Nova related changes for Glance, Glare, etc. Please try to
work with your respective organizations to make it to the event. We will
also try our best to get some video conferencing capability setup for
those who won't be able to attend in person.

Looking forward to seeing you all at the summit and thereafter in
Cambridge, MA.

Note to wiki moderators: This has been updated here [2]

[1] https://etherpad.openstack.org/p/newton-glance-midcycle-meetup
[2] https://wiki.openstack.org/wiki/Sprints#Newton_sprints

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] Term "workload" has two clashing meanings

2016-04-11 Thread Boris Pavlovic
Alex,

I would suggest calling it "dataplane" because it obviously points to
dataplane testing

Best regards,
Boris Pavlovic

On Mon, Apr 11, 2016 at 11:10 AM, Roman Vasilets 
wrote:

> Hi all, personally I want to suggest *crossload*. The concept is similar to
> cross-training (training in two or more sports in order to improve fitness
> and performance, especially in a main sport). By that template, crossload
> is load in two or more areas in order to improve durability and
> performance, especially in a main area.
> Thanks, Roman.
>
> On Mon, Apr 11, 2016 at 6:38 PM, Aleksandr Maretskiy <
> amarets...@mirantis.com> wrote:
>
>> Hi all,
>>
>> this is about terminology, we have term "workload" in Rally that appears
>> in two clashing meanings:
>>
>>  1. module rally.plugins.workload
>> 
>> which collects plugins for cross-VM testing
>>  2. workload replaces term "scenario" in our new input task format
>> 
>> (task->scenarios is replaced with task->subtasks->workloads)
>>
>> Let's introduce new term as replacement of "1." (or maybe "2." but I
>> suppose this is not the best option).
>>
>> Maybe rename rally.plugins.workload to:
>>rally.plugins.*vmload*
>>rally.plugins.*vmperf*
>>rally.plugins.*shaker*
>>rally.plugins.*vmworkload*
>>...more ideas?
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][performance][profiling] Profiling Mitaka Keystone: some results and asking for a help

2016-04-11 Thread Matt Fischer
On Mon, Apr 11, 2016 at 8:11 AM, Dina Belova  wrote:

> Hey, openstackers!
>
> Recently I was trying to profile Keystone (OpenStack Liberty vs Mitaka)
> using this set of changes
> 
>  (that's
> currently on review - some final steps are required there to finish the
> work) and OSprofiler.
>
> Some preliminary results (all in one OpenStack node) can be found here
> 
>  (raw
> OSprofiler reports are not yet merged to some place and can be found here
> ). The full plan
> 
>  of
> what's going to be tested  can be found in the docs as well. In short I
> wanted to take a look how does Keystone changed its DB/Cache usage from
> Liberty to Mitaka, keeping in mind that there were several changes
> introduced:
>
>- federation support was added (and made DB scheme a bit more complex)
>- Keystone moved to oslo.cache usage
>- local context cache was introduced during Mitaka
>
> First of all - *good job on making Keystone less DB-intensive when the
> cache is turned on*! If Keystone caching is turned on, the number of DB queries
> done to Keystone DB in Mitaka is averagely twice less than in Liberty,
> comparing the same requests and topologies. Thanks Keystone community to
> make it happen :)
>
> Although, I faced *two strange issues* during my experiments, and I'm
> kindly asking you, folks, to help me here:
>
>- I've created #1567403
> bug to share
>information - when I turned caching on, local context cache should cache
>identical function calls within one API request, so as not to ping Memcache too often.
>Although I faced such calls, Keystone still used Memcache to gather this
>information. Could someone take a look at this and help me figure out what I
>am observing? At first sight the local context cache should work ok, but for
>some reason I do not see it's being used.
>- One more filed bug - #1567413
> - is about a bit
>opposite situation :) When I turned cache off explicitly in the
>keystone.conf file, I still observed some of the values being fetched from
>Memcache... Your help is very appreciated!
>
> Thanks in advance and sorry for a long email :)
>
> Cheers,
> Dina
>
>
Dina,

Thanks for starting this conversation. I had some weird perf results
comparing L to an RC release of Mitaka, but I was holding them until
someone else confirmed what I saw. I'm testing token creation and
validation. From what I saw, token validation slowed down in Mitaka. After
doing my benchmark runs, the traffic to memcache was 8x in Mitaka from what
it was in Liberty. That implies more caching but 8x is a lot and even
memcache references are not free.

I know some of the Keystone folks are looking into this so it will be good
to follow-up on it. Maybe we could talk about this at the summit?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] Eliminating the DevStack layer

2016-04-11 Thread Hirofumi Ichihara

I agree. While reviewing Devstack over the last three cycles,
I thought the same thing. Devstack often accepted patches that just
add an option, although we're not sure who really needs those options.
Many of these options are useless.
For example, the default value of a devstack option is often the same as
the default in the project itself. Please look at [1] and [2], and [3] and [4].
Who uses these options?


We can see such options throughout devstack. I agree we should adjust
the default configurations and the documentation on the Neutron side.
However, let's first eliminate the options that are clearly useless,
and then make those adjustments once the necessary options are clear.

[1]: 
https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L178
[2]: 
https://github.com/openstack/nova/blob/master/nova/conf/virt.py#L164-L166
[3]: 
https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L162
[4]: 
https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L53


Thanks,
Hirofumi

On 2016/04/09 0:07, Sean M. Collins wrote:

Prior to the introduction of local.conf, the only way to configure
OpenStack components was to introduce code directly into DevStack, so
that DevStack would pick it up then inject it into the configuration
file.

This was because DevStack writes out new configuration files on each
run, so it wasn't possible for you to make changes to any configuration
file (nova.conf, neutron.conf, ml2_plugin.ini, etc..).

So, someone who wanted to set the Linux Bridge Agent's
physical_interface_mappings setting for Neutron would have to use
$LB_INTERFACE_MAPPINGS in DevStack, which would then be invoked by
DevStack[1].

The local.conf functionality was introduced quite a while back, and
I think it's time to have a conversation about why we should start
moving away from the previous practice of declaring variables in
DevStack, and then having them injected into the configuration files.

The biggest issue is: There is a disconnect between the developers
using DevStack and someone who is an operator or who has been editing
OpenStack conf files directly. So, for example I can tell you all about
how DevStack has a bunch of variables for configuring Neutron (which is
Not a Good Thing™), and how those go into DevStack and then end up coming
out the other side in a Neutron configuration file.

Really, I would like to get rid of the intermediate layer (DevStack)
and get both Devs and Deployers to be able to just say: Here's my
neutron.conf - let's diff mine and yours and see what we need to sync.

Matt Kassawara and I have had this issue, since he's coming from the
OSAD side, and I'm coming from the DevStack side. We both know what the
Neutron configuration should end up as, but DevStack having its own set
of variables and how those variables are handled and eventually rendered
as a Neutron config file makes things more difficult than it needs to
be, since Matt has to now go and learn about how DevStack handles all
these Neutron specific variables.

The Neutron refactor[2] that I am working on, I am trying to configure
as little as possible in DevStack. Neutron should be able to, out of the
box, Just Work™. If it can't, then that needs to be fixed in Neutron.

Secondly, the Neutron refactor will be getting rid of all the things
like $LB_INTERFACE_MAPPINGS - I would *much* prefer that someone using
DevStack actually set the appropriate line in their local.conf

Such as:

 [[post-config|/$Q_PLUGIN_CONF_FILE]]
 [linux_bridge]
 physical_interface_mappings = foo:bar


The advantage of this is, when someone is working with DevStack, the
things they are configuring are the same as all the other OpenStack 
documentation.

For example, someone could read the Networking Guide, read the example
configuration[3] and the only thing they'd need to learn is our syntax
for specifying what file the contents go in (the 
"[[post-config|/$Q_PLUGIN_CONF_FILE]]" piece).

Thoughts?

[1]: 
https://github.com/openstack-dev/devstack/blob/1195a5b7394fc5b7a1cb1415978e9997701f5af1/lib/neutron_plugins/linuxbridge_agent#L63

[2]: https://review.openstack.org/168438

[3]: 
http://docs.openstack.org/liberty/networking-guide/scenario-classic-lb.html#example-configuration






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][requirements][release] The introduction of package py2-ipaddress

2016-04-11 Thread Hongbin Lu
Hi Thierry,

Thanks for your advice. I submitted a patch [1] to downgrade docker-py to 
1.7.2. In the long term, we will work with the upstream maintainers to resolve 
the module conflict.

[1] https://review.openstack.org/#/c/304296/

Best regards,
Hongbin

> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: April-11-16 5:28 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][requirements][release] The
> introduction of package py2-ipaddress
> 
> Hongbin Lu wrote:
> > Hi requirements team,
> >
> > In short, the recently introduced package py2-ipaddress [1] seems to
> > break Magnum. In details, Magnum gate recently broke by an error:
> > "'\xac\x18\x05\x07' does not appear to be an IPv4 or IPv6 address" [2]
> > (the gate breakage has been temporarily fixed but we are looking for
> a
> > permanent fix [3]). After investigation, I opened a ticket in
> > Cryptography for help [4]. According to the feedback from
> Cryptography
> > community, the problem is from py2-ipaddress, which was introduced to
> > OpenStack recently [1].
> >
> > I wonder if we can get any advice from requirements team in this
> > regards. In particular, what is the proper way to handle the
> > problematic package?
> >
> > [1] https://review.openstack.org/#/c/302539/
> > [2] https://bugs.launchpad.net/magnum/+bug/1568212
> > [3] https://bugs.launchpad.net/magnum/+bug/1568427
> > [4] https://github.com/pyca/cryptography/issues/2870
> 
> py2-ipaddress was introduced as a dependency by docker-py 1.8.0.
> Short-term solution would be to cap <1.8.0 in global-requirements
> (which will make us fallback to 1.7.2 and remove py2-ipaddress).
> 
> If the two modules are conflicting we should determine which one is the
> best and converge to it. ipaddress seems a lot more used and pulled by
> a lot of packages. So long-term solution would be to make docker-py
> upstream depend on ipaddress instead...
> 
> --
> Thierry Carrez (ttx)
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] About snapshot Rollback?

2016-04-11 Thread Duncan Thomas
Ok, you're right about device naming by UUID.

So we have two advantages compared to the existing system:

- Keeping the same volume id (and therefore disk UUID) makes reverting a VM
much easier since device names inside the instance stay the same
- Can significantly reduce the amount of copying required on some backends

These do seem like solid reasons to consider the feature.

If you can solve the backwards compatibility problem mentioned further up
this thread, then I think there's a strong case for considering adding this
API.

The next step is a spec and a PoC implementation.
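As a rough illustration of what the proposed flow could look like from the user
side with python-cinderclient (a sketch only: the revert call at the end is the
API being proposed here, not something that exists today, and the credentials
and names are placeholders):

    from cinderclient import client

    # Placeholder credentials/endpoint.
    cinder = client.Client('2', 'user', 'password', 'project',
                           'http://keystone:5000/v2.0')

    # Create a volume and snapshot it once it is in a known-good state.
    volume = cinder.volumes.create(size=10, name='db-data')
    snapshot = cinder.volume_snapshots.create(volume.id, name='db-data-golden')

    # ... the volume is attached and used; something goes wrong ...

    # Proposed single-step revert: the volume keeps its ID (and disk UUID),
    # so the instance sees the same device after unmount/detach and reattach.
    # Hypothetical call - this is what the spec would have to define.
    cinder.volumes.revert_to_snapshot(volume, snapshot)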



On 11 April 2016 at 20:57, Erlon Cruz  wrote:

> You are right, the instance should be shutdown or the device be unmounted,
> before 'revert' or removing the old device. That should be enough to avoid
> corruption. I think the device naming is not a problem if you use the same
> volume (at least the disk UUID will be the same).
>
> On Mon, Apr 11, 2016 at 2:39 PM, Duncan Thomas 
> wrote:
>
>> You can't just change the contents of a volume under the instance though
>> - at the very least you need to do an unmount in the instance, and a detach
>> is preferable, otherwise you've got data corruption issues.
>>
>> At that point, the device naming problems are identical.
>>
>> On 11 April 2016 at 20:22, Erlon Cruz  wrote:
>>
>>> The actual user workflow is:
>>>
>>>  1 - User creates a volume(s)
>>>  2 - User attach volume to instance
>>>  3 - User creates a snapshot
>>>  4 - Something happens causing the need of a revert
>>>  5 - User creates a volume(s) from the snapshot(s)
>>>  6 - User detach old volumes
>>>  7 - User attach new volumes (and pray so they get the same id) - Nova,
>>> should have the ability to honor supplied device names (vdc, vdd, etc),
>>> which not always happen[1]. But, does the volume keep the same UUID in the
>>> system? Several application use that to boot.
>>>
>>> The suggested workflow would be simpler for a user POV:
>>>
>>>  1 - User creates a volume(s)
>>>  2 - User attach volume to instance
>>>  3 - User creates a snapshot
>>>  4 - Something happens causing the need of a revert
>>>  5 - User revert snapshot(s)
>>>
>>>
>>>  [1] https://goo.gl/Kusfne
>>>
>>> On Fri, Apr 8, 2016 at 5:07 AM, Ivan Kolodyazhny  wrote:
>>>
 Hi Chenzongliang,

 I still don't understand what is difference between proposed feature
 and 'restore volume from snapshot'? Could you please explain it?

 Regards,
 Ivan Kolodyazhny,
 http://blog.e0ne.info/

 On Thu, Apr 7, 2016 at 12:00 PM, Chenzongliang <
 chenzongli...@huawei.com> wrote:

> Dear Cruz:
>
>
>
>  Thanks for your kind support. I will review the previous spec
> according to the following links. Maybe there are more user scenarios we
> should consider, such as backup, create volume from snapshot, consistency
> group, etc. We will spend some time to gather the users' scenarios and
> determine what to do in the next step.
>
>
>
> Sincerely,
>
> zongliang chen
>
>
>
> *From:* Erlon Cruz [mailto:sombra...@gmail.com]
> *Sent:* April 5, 2016 2:50
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* Zhangli (ISSP); Shenhong (C)
> *Subject:* Re: [openstack-dev] [Cinder] About snapshot Rollback?
>
>
>
> Hi Chen,
>
>
>
> Not sure if I got you right but I brought this topic in
> #openstack-cinder some days ago. The idea is to be able to rollback a
> snapshot in Cinder. Today what is possible to do is to create a volume 
> from
> a snapshot. From the user point of view, this is not ideal, as there are
> several cases, if not the majority of, that the purpose of the snapshot is
> to revert to a desired state, and not keep the original volume. For some
> backends, keeping the original volume means space consumption. This space
> problem becomes bold when we think about consistency groups. For
> consistency groups, some backends might have to copy an entire filesystem
> for each snapshot, consuming space and time. So, I think it would be
> desired to have the ability to revert snapshots.
>
>
>
> I know there have been efforts in the past[1] to implement that, but
> for some reason the work was stopped. If you want to retake the effort
> please create a spec[2]  sol everybody can provide feedback.
>
>
>
> Erlon
>
>
>
>
>
> [1]
> https://blueprints.launchpad.net/cinder/+spec/cinder-volume-rollback-snapshot
>
> [2] https://github.com/openstack/cinder-specs
>
>
>
> On Thu, Mar 24, 2016 at 6:09 AM, Chenzongliang <
> chenzongli...@huawei.com> wrote:
>
> Hi all:
>
>  We are considering adding a function rollback_snapshot when we use
>  backup. In the end user's 

Re: [openstack-dev] [heat] upgrade options for custom heat resource plug-ins

2016-04-11 Thread Randall Burt
Not really. Ideally, you need to write your resource such that these changes 
are backwards compatible. We do this for the resources we ship with Heat (add 
new properties while supporting deprecated properties for several releases).
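As a rough sketch of that pattern (exact Heat module paths and keyword
arguments vary between releases, and _create_in_backend() stands in for
whatever the product API exposes, so treat the details below as assumptions
rather than a definitive implementation):

    from heat.engine import properties
    from heat.engine import resource
    from heat.engine import support


    class CustomThing(resource.Resource):
        """Custom resource whose 'old_name' property was renamed to 'new_name'."""

        PROPERTIES = (OLD_NAME, NEW_NAME) = ('old_name', 'new_name')

        properties_schema = {
            # Keep the deprecated property so existing stacks and templates
            # written against the old plug-in still validate.
            OLD_NAME: properties.Schema(
                properties.Schema.STRING,
                'Deprecated; use new_name instead.',
                support_status=support.SupportStatus(support.DEPRECATED)),
            # Replacement property introduced by the new plug-in release.
            NEW_NAME: properties.Schema(
                properties.Schema.STRING,
                'Name of the object in the backend.'),
        }

        def handle_create(self):
            # Prefer the new property, but fall back to the deprecated one so
            # resources created with the old plug-in keep working after upgrade.
            name = self.properties[self.NEW_NAME] or self.properties[self.OLD_NAME]
            self.resource_id_set(self._create_in_backend(name))


    def resource_mapping():
        return {'Custom::Thing': CustomThing}

Once the deprecation window has passed, the old property can be dropped (or
translated to the new one) in a later plug-in release.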

On Apr 11, 2016, at 1:06 PM, "Praveen Yalagandula" 
 wrote:

> Hi,
> 
> We are developing a custom heat resource plug-in and wondering about how to 
> handle plug-in upgrades. As our product's object model changes with new 
> releases, we will need to release updated resource plug-in code too. However, 
> the "properties" stored in the heat DB for the existing resources, whose 
> definitions have been upgraded, need to be updated too. Was there any 
> discussion on this?
> 
> Thanks,
> Praveen Yalagandula
> Avi Networks
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Fox, Kevin M
Yeah, I think there are two places where it may make sense.

1. Ironic's nova plugin is a lowest common denominator for treating a physical 
host like a VM. Ironic's API is much richer, but sometimes all you need is 
the lowest common denominator and don't want to rewrite a bunch of code. In 
this case, it may make sense to have a nova plugin that talks to magnum to 
launch a heavy weight container to make the use case easy.

2. Basic abstraction of Orchestration systems. Most (all?) docker orchestration 
systems work with a yaml file. What's in it differs, but shipping it from point 
A to point B using an authenticated channel can probably be nicely abstracted. 
I think this would be a big usability gain as well. Things like the 
applications catalog could much more easily hook into it then. The catalog 
would provide the yaml, and a tag to know which orchestrator type it is, and 
just pass that info along to magnum.

Thanks,
Kevin



From: Hongbin Lu [hongbin...@huawei.com]
Sent: Monday, April 11, 2016 11:10 AM
To: Adrian Otto; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Sorry, I disagree.

Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technology. In contrast, the idea of unified Container APIs 
has been constantly proposed by different people in the past. I will try to 
allocate a session in design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented is that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.
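
(As a concrete sketch of the flavor/host-aggregate arrangement with
python-novaclient; the aggregate name, metadata key, and flavor values below
are assumptions about how such a cloud might be configured, not a
prescription:)

    from novaclient import client

    nova = client.Client('2', 'user', 'password', 'project',
                         'http://keystone:5000/v2.0')

    # One host aggregate per compute driver; the operator adds the
    # Ironic-backed compute hosts to this aggregate.
    bm_agg = nova.aggregates.create('baremetal-hosts', None)
    nova.aggregates.set_metadata(bm_agg.id, {'compute_type': 'baremetal'})

    # A flavor whose extra specs steer the scheduler
    # (AggregateInstanceExtraSpecsFilter) to that aggregate, so requesting
    # this flavor yields a bare metal instance through the same Nova API.
    flavor = nova.flavors.create('bm.large', ram=65536, vcpus=16, disk=500)
    flavor.set_keys({'aggregate_instance_extra_specs:compute_type': 'baremetal'})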

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native API’s for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your 

[Openstack-operators] User Committee IRC Meeting

2016-04-11 Thread Edgar Magana
Dear Users and Operators,

This is a kind reminder for the User Committee IRC meeting that will be hosted 
today Monday 04/11, 2016 at 1900 UTC in (freenode) #openstack-meeting

Agenda:
https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee

Thank you all!

Edgar
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Neutron][Infra] Post processing of gate hooks on job timeouts

2016-04-11 Thread Assaf Muller
On Mon, Apr 11, 2016 at 1:56 PM, Clark Boylan  wrote:
> On Mon, Apr 11, 2016, at 10:52 AM, Jakub Libosvar wrote:
>> On 04/11/2016 06:41 PM, Clark Boylan wrote:
>> > On Mon, Apr 11, 2016, at 03:07 AM, Jakub Libosvar wrote:
>> >> Hi,
>> >>
>> >> recently we hit an issue in Neutron with tests getting stuck [1]. As a
>> >> side effect we discovered logs are not collected properly which makes it
>> >> hard to find the root cause. The reason of missing logs is that we send
>> >> SIGKILL to whatever gate hook is running when we hit the global timeout
>> >> per gate job [2]. This gives no time to running process to perform any
>> >> post-processing. In post_gate_hook function in Neutron, we collect logs
>> >> from /tmp directory, compress them and move them to /opt/stack/logs to
>> >> make them exposed.
>> >>
>> >> I have in mind two solutions to which I'd like to get feedback before
>> >> sending patches.
>> >>
>> >> 1) In Neutron, we execute tests in post_gate_hook (dunno why). But even
>> >> if we would have moved test execution into gate_hook and tests get stuck
>> >> then the post_gate_hook won't be triggered [3]. So the solution I
>> >> propose here is to terminate gate_hook N minutes before global timeout
>> >> and still execute post_gate_hook (with timeout) as post-processing
>> >> routine.
>> >>
>> >> 2) Second proposal is to let timeout wrapped commands know they are
>> >> about to be killed. We can send let's say SIGTERM instead of SIGKILL and
>> >> after certain amount of time, send SIGKILL. Example: We send SIGTERM 3
>> >> minutes before global timeout, letting these 3 minutes to 'command' to
>> >> handle the SIGTERM signal.
>> >>
>> >>  timeout -s 15 -k 3 $((REMAINING_TIME-3))m bash -c "command"
>> >>
>> >> With the 2nd approach we can trap the signal that kills running test
>> >> suite and collects logs with same functions we currently have.
>> >>
>> >>
>> >> I would personally go with second option but I want to hear if anybody
>> >> has a better idea about post processing in gate jobs or if there is
>> >> already a tool we can use to collect logs.
>> >>
>> >> Thanks,
>> >> Kuba
>> >
>> > Devstack gate already does a "soft" timeout [0] then proceeds to cleanup
>> > (part of which is collecting logs) [1], then Jenkins does the "hard"
>> > timeout [2]. Why aren't we collecting the required log files as part of
>> > the existing cleanup?
>> This existing cleanup doesn't support hooks. Neutron tests produce a lot
>> of logs by default stored in /tmp/dsvm- so we need to compress
>> and move them to /opt/stack/logs in order to get them collected by [1].
>
> My suggestion would be to stop writing these log files to /tmp and
> instead write them to the log dir where they will be automagically
> compressed and collected.

Yeah that's what I'm doing here https://review.openstack.org/#/c/303594/.
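
For reference, the signal-trap idea from option 2 above could look roughly like
this in a Python wrapper around the test run (an illustration only, not what
the patch above does; the paths and archive name are assumptions):

    import signal
    import subprocess
    import sys

    LOG_SRC = '/tmp/dsvm-functional-logs'                    # where tests write logs
    LOG_DST = '/opt/stack/logs/dsvm-functional-logs.tar.gz'  # picked up by the log publisher

    def _collect_logs_and_exit(signum, frame):
        # devstack-gate would send SIGTERM a few minutes before the hard
        # timeout; compress and move the logs so they survive, then exit
        # non-zero so the job is still reported as failed.
        subprocess.call(['tar', 'czf', LOG_DST, LOG_SRC])
        sys.exit(1)

    signal.signal(signal.SIGTERM, _collect_logs_and_exit)

    # ... run the test suite here; if it hangs, the handler above still gets
    # a chance to save the logs before SIGKILL arrives.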

>
>>
>> >
>> > [0]
>> > https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n569
>> > [1]
>> > https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n594
>> > [2]
>> > https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n325
>> >
>> > Clark
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Tim Bell

As we’ve deployed more OpenStack components in production, one of the points we 
have really appreciated is the common areas

- Single pane of glass for Horizon
- Single accounting infrastructure
- Single resource management, quota and admin roles
- Single storage pools with Cinder
- (not quite yet but) common CLI

Building on this, our workflows have simplified

- Lifecycle management (cleaning up when users leave)
- Onboarding (registering for access to the resoures and mapping to the 
appropriate projects)
- Capacity planning (shifting resources, e.g. containers becoming popular 
needing more capacity)

Getting consistent APIs and CLIs is really needed though since the “one 
platform” message is not so easy to explain given the historical decisions, 
such as project vs tenant.

As Subbu has said, the cloud software is one part but there are so many others…

Tim



On 11/04/16 18:08, "Fox, Kevin M"  wrote:

>The more I've used Containers in production the more I've come to the 
>conclusion they are much different beasts than Nova Instances. Nova's 
>abstraction lets Physical hardware and VM's share one common API, and it makes 
>a lot of sense to unify them.
>
>Oh. To be explicit, I'm talking about docker style lightweight containers, not 
>heavy weight containers like LXC ones. The heavy weight ones do work well with 
>Nova. For the rest of the conversation container = lightweight container.
>
>Trove can make use of containers provided there is a standard api in OpenStack 
>for provisioning them. Right now, Magnum provides a way to get Kubernetes 
>orchestrated clusters, for example, but doesn't have good integration with it 
>to hook it into keystone so that Trusts can be used with it on the users' 
>behalf for advanced services like Trove. So some pieces are missing. Heat 
>should have a way to have Kubernetes Yaml resources too.
>
>I think the recent request to rescope Kuryr to include non network features is 
>a good step in solving some of the issues.
>
>Unfortunately, it will probably take some time to get Magnum to the point 
>where it can be used by other OpenStack advanced services. Maybe these sorts 
>of issues should be written down and discussed at the upcoming summit between 
>the Magnum and Kuryr teams?
>
>Thanks,
>Kevin
>
>
>
>From: Amrith Kumar [amr...@tesora.com]
>Sent: Monday, April 11, 2016 8:47 AM
>To: OpenStack Development Mailing List (not for usage questions); Allison 
>Randal; Davanum Srinivas; foundat...@lists.openstack.org
>Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
>Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
>Monty, Dims,
>
>I read the notes and was similarly intrigued about the idea. In particular, 
>from the perspective of projects like Trove, having a common Compute API is 
>very valuable. It would allow the projects to have a single view of 
>provisioning compute, as we can today with Nova and get the benefit of bare 
>metal through Ironic, VM's through Nova VM's, and containers through 
>nova-docker.
>
>With this in place, a project like Trove can offer database-as-a-service on a 
>spectrum of compute infrastructures as any end-user would expect. Databases 
>don't always make sense in VM's, and while containers are great for quick and 
>dirty prototyping, and VM's are great for much more, there are databases that 
>will in production only be meaningful on bare-metal.
>
>Therefore, if there is a move towards offering a common API for VM's, 
>bare-metal and containers, that would be huge.
>
>Without such a mechanism, consuming containers in Trove adds considerable 
>complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a 
>working prototype of Trove leveraging Ironic, VM's, and nova-docker to 
>provision databases is something I worked on a while ago, and have not 
>revisited it since then (once the direction appeared to be Magnum for 
>containers).
>
>With all that said, I don't want to downplay the value in a container specific 
>API. I'm merely observing that from the perspective of a consumer of computing 
>services, a common abstraction is incredibly valuable.
>
>Thanks,
>
>-amrith
>
>> -Original Message-
>> From: Monty Taylor [mailto:mord...@inaugust.com]
>> Sent: Monday, April 11, 2016 11:31 AM
>> To: Allison Randal ; Davanum Srinivas
>> ; foundat...@lists.openstack.org
>> Cc: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
>> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>>
>> On 04/11/2016 09:43 AM, Allison Randal wrote:
>> >> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas 
>> wrote:
>> >>> Reading unofficial notes [1], i found one topic very interesting:
>> >>> One Platform – How do we truly support containers and 

Re: [Openstack] [mentoring] Mentee project preferrence

2016-04-11 Thread Emily K Hugenbruch

Hi Victoria,
A few things to keep in mind. Many of the technical mentoring matches we
made were in anticipation of the Upstream University training. So if your
mentee is signed up for that, your early mentoring should focus on getting
them prepared for upstream and just basically introduced to the community.
http://docs.openstack.org/upstream-training/
Beyond that, Carol's suggestions are absolutely spot on. Consider whether
your expertise could help them in their area. Could you still help to teach
them how to review patches, or run unit tests, even if it's not the project
you're focused on? Encourage your mentee to be open to contributing in
other areas, too. Projects are interconnected and skills are often
transferable.

If it really seems like it won't work out, please let us know and we can
look to re-match you. Thanks.
Sincerely,
Emily Kate Hugenbruch
OpenStack Cloud Enablement Engineer - z/VM and Software Engineer - z/VM
IBM Corporation Endicott, NY
Twitter: @ekhugen
IRC: ekhugen@freenode



From: "Barrett, Carol L" 
To: Victoria Martínez de la Cruz 
Cc: Emily K Hugenbruch/Endicott/IBM@IBMUS
Date: 04/11/2016 02:08 PM
Subject: RE: [Openstack] [mentoring] Mentee project preferrence



Hi Victoria – I think you have a couple of options:
1) Provide coaching without being the project expert
2) Utilize your network to help the Mentee find a Mentor who is a project expert
3) End the mentoring relationship and we can put both you and the Mentee back on the lists for an upcoming match.



Emily: Any other thoughts?
Thanks
Carol

From: Victoria Martínez de la Cruz [mailto:victo...@vmartinezdelacruz.com]
Sent: Monday, April 11, 2016 10:48 AM
To: openstack@lists.openstack.org
Subject: [Openstack] [mentoring] Mentee project preferrence

Hi there,

What should the mentor do with regards to technical mentorship if the
mentee is more interested in a project that the mentor is not familiar
with?

Thanks,

Victoria
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

2016-04-11 Thread Steven Dake (stdake)
My proposal was for docs-only patches not code contributions with docs.
Obviously we want a high bar for code contributions.  This is part of the
reason we have the DocImpact flag (for folks that don't feel comfortable
writing documentation, perhaps because of ESL or other reasons).

We already have a way to decouple code from docs with DocImpact.

Regards
-steve

On 4/11/16, 6:17 AM, "Michał Jastrzębski"  wrote:

>So one way to approach it is to decouple docs from code and make it two
>reviews. We can -1 code without docs and ask for a separate docs patch set
>that depends on the one in question. Then we can nitpick all we want :)
>The new contributor will get his/her code merged in at least one patch set,
>so it will be better for morale, and we'll be able to keep a high bar for
>the QSG and other docs. There is a possibility that the author will abandon
>the docs patch after the code merges, but if so, we can take over the docs
>review.
>
>What do you think guys? I'd really like to keep high quality standard
>all the way and don't scare off new commiters at the same time.
>
>Cheers,
>Michal
>
>On 11 April 2016 at 03:50, Steven Dake (stdake)  wrote:
>>
>>
>> On 4/11/16, 1:38 AM, "Gerard Braad"  wrote:
>>
>>>Hi,
>>>
>>>On Mon, Apr 11, 2016 at 4:20 PM, Steven Dake (stdake) 
>>>wrote:
 On 4/11/16, 12:54 AM, "Gerard Braad"  wrote:
 as
>at the moment getting an environment up-and-running according to the
>quickstart guide is a hit and miss
 I don't think deployment is hit or miss as long as the QSG is
 followed to a T :)
>>>
>>>Maybe saying "at the moment" was incorrect. As the deployment
>>>according to the QSG has been a few weeks ago. Sorry about this... as
>>>you guys have put a lot of effort into it recently.
>>>
>>>
 I agree we need more clarity in what belongs in the QSG.
>>>This can be a separate discussion (Not intending to hijack this thread).
>>>
>>>
>>>I am not a core reviewer, but I would keep it as-is. I do not see a need for
>>
>> Even though you're not a core reviewer, your comments are valued.  The
>> reason I addressed core reviewers specifically as they have +2
>>permissions
>> and I would like more leniency on new documentation in other files
>>outside
>> those listed above (philosophy document, QSG) with a pubic statement of
>> such.
>>
>>>a lower-bar. Although, documentation is the entry-point into a
>>>community (as user and potential contributor) and therefore it should
>>>be of a high quality. Maybe I could provide more suggestions
>>>instead of just an indication of 'change this for that'.
>>
>> The issue I see with our QSG is it has the highest bar for review
>>passage
>> of any file in the repository.  Any QSG change typically requires 10 or
>> more patch sets to make it through the core reviewer gauntlet.  This
>> discourages people from writing new documentation.  I don't want this to
>> carry over into other parts of the documentation that are as of yet
>> unwritten.  I'd like new documentation to be ok with misspellings,
>>grammar
>> errors, formatting problems, ESL authors, and that sort of thing.
>>
>> The QSG should tolerate none of these types of errors at this point - it
>> must be absolutely perfect (at least in English:) as to not cause
>> confusion to new operators.
>>
>> Regards
>> -steve
>>
>>>
>>>regards,
>>>
>>>
>>>Gerard
>>>
>>>
>>>__
>>>OpenStack Development Mailing List (not for usage questions)
>>>Unsubscribe: 
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [mentoring] Mentee project preferrence

2016-04-11 Thread Marton Kiss
Hi Victoria,

I suggest try to switch somehow, please write me a private email with the
details and we will find a solution.

Brgds,
  Marton

On Mon, Apr 11, 2016 at 7:53 PM Victoria Martínez de la Cruz <
victo...@vmartinezdelacruz.com> wrote:

> Hi there,
>
> What should the mentor do with regards to technical mentorship if the
> mentee is more interested in a project that the mentor is not familiar with?
>
> Thanks,
>
> Victoria
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Rally] Term "workload" has two clashing meanings

2016-04-11 Thread Roman Vasilets
Hi all, personally I want to suggest *crossload*. The concept is similar to
cross training (training in two or more sports in order to improve fitness
and performance, especially in a main sport). By that template -
crossload is load in two or more areas in order to improve durability and
performance, especially in a main area.
Thanks, Roman.

On Mon, Apr 11, 2016 at 6:38 PM, Aleksandr Maretskiy <
amarets...@mirantis.com> wrote:

> Hi all,
>
> this is about terminology, we have term "workload" in Rally that appears
> in two clashing meanings:
>
>  1. module rally.plugins.workload
> 
> which collects plugins for cross-VM testing
>  2. workload replaces term "scenario" in our new input task format
> 
> (task->scenarios is replaced with task->subtasks->workloads)
>
> Let's introduce new term as replacement of "1." (or maybe "2." but I
> suppose this is not the best option).
>
> Maybe rename rally.plugins.workload to:
>    rally.plugins.vmload
>    rally.plugins.vmperf
>    rally.plugins.shaker
>    rally.plugins.vmworkload
>    ...more ideas?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Hongbin Lu
Sorry, I disagree.

Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technology. In contrast, the idea of unified Container APIs 
has been constantly proposed by different people in the past. I will try to 
allocate a session in design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented, that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native API’s for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).

Regards,

Adrian



On Apr 11, 2016, at 8:47 AM, Amrith Kumar 
> wrote:

Monty, Dims,

I read the notes and was similarly intrigued about the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova and get the benefit of bare 
metal through Ironic, VM's through Nova VM's, and containers through 
nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures as any end-user would expect. Databases 
don't always make sense in VM's, and while containers are great for quick and 
dirty prototyping, and VM's are great for much more, there are databases that 
will in production only be meaningful on bare-metal.

Therefore, if there is a move towards offering a common API for VM's, 
bare-metal and containers, that would be huge.

Without such a mechanism, consuming containers in Trove adds considerable 
complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a working 
prototype of Trove leveraging Ironic, VM's, and nova-docker to provision 

Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-11 Thread Ben Nemec
On 04/11/2016 12:12 PM, Steven Hardy wrote:
> On Mon, Apr 11, 2016 at 10:33:53AM -0500, Ben Nemec wrote:
>> On 04/11/2016 04:54 AM, John Trowbridge wrote:
>>> Hola OOOers,
>>>
>>> It came up in the meeting last week that we could benefit from a CI
>>> subteam with its own meeting, since CI is taking up a lot of the main
>>> meeting time.
>>>
>>> I like this idea, and think we should do something similar for the other
>>> informal subteams (tripleoclient, UI), and also add a new subteam for
>>> tripleo-quickstart (and maybe one for releases?).
>>>
>>> We should make seperate ACL's for these subteams as well. The informal
>>> approach of adding cores who can +2 anything but are told to only +2
>>> what they know doesn't scale very well.
>>
>> How so?  Are we planning to give people +2 even though we don't trust
>> them to not +2 things they shouldn't?  I remain of the opinion that if
>> we need ACL controls to keep someone from doing something then they
>> shouldn't have +2 in the first place.
> 
> IMO it's not about a lack of trust at all, there are several other projects
> using this model and there are a number of advantages:
> 
> - Clear responsibilities enable better communication, e.g having a clearly
>   defined core team for a specific subteam enables folks to more easily
>   know the folks they should approach re reviews, to discuss features etc.

Fair enough, although I'm not sure a wiki page wouldn't be a better way
to capture this information.  We're never going to have granular enough
gerrit groups to capture things like who the experts on
upgrades/networking/ssl/etc. are.

> 
> - Beyond a certain point, large teams make disscussion e.g in a timeboxed
>   weekly meeting hard.  We're already at this point, e.g folks show up
>   wanting to add an item to the weekly agenda on some topic, but we spend
>   59 of the available 60 minutes discussing bugs, specs and CI.  Having
>   sub-teams that feel empowered to self-organize e.g extra meetings and
>   their own core members may help this process scale a little better?

I probably should have been more explicit that I'm only referring to
separate Gerrit groups.  Totally +1 on the concept of sub-teams in general.

> 
> - Potentially easier on-ramp (encourage domain experts as sub-team cores),
>   this isn't about lack of trust, it's acknowledging that spending a year
>   or more learning all the different pieces of TripleO is really hard and
>   not everyone wants or needs to do it.  Would folks feel a little more
>   motivated to contribute if they could aim towards deep expertise
>   reviewing a smaller subsystem?
> 
>> Quickstart is a bit of a weird case because the regular contributors to
>> it have not previously been very involved in TripleO upstream so I don't
>> think most of us have enough context to know whether they should have
>> +2.  I guess the UI would fall under the same category, so I'd be in
>> favor of keeping those two separate, but otherwise I think we're
>> creating bureaucracy for its own sake.
> 
> I think the overhead of creating a few additional gerrit groups is pretty
> small, there's zero "bureaucracy" for pretty much everyone involved,
> tripleo-core still works the same but we might just be a little quicker to
> nominate folks and/or attract reviews on some smaller projects given this
> change IMO (again, not through any lack of trust but because the teams
> would better represent the way folks are actually working).

I still have reservations, but once again I seem to be in the minority
here so I won't spend a lot of time arguing the point.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Amrith Kumar
Adrian, thx for your detailed mail.

Yes, I was hopeful of a silver bullet and as we’ve discussed before (I think it 
was Vancouver), there’s likely no silver bullet in this area. After that 
conversation, and some further experimentation, I found that even if Trove had 
access to a single Compute API, there were other significant complications 
further down the road, and I didn’t pursue the project further at the time.

We will be discussing Trove and Containers in Austin [1] and I’ll try and close 
the loop with you on this while we’re in Town. I still would like to come up 
with some way in which we can offer users the option of provisioning database 
as containers.

Thanks,

-amrith

[1] https://etherpad.openstack.org/p/trove-newton-summit-container

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: Monday, April 11, 2016 12:54 PM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented, that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native API’s for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).

Regards,

Adrian



On Apr 11, 2016, at 8:47 AM, Amrith Kumar 
> wrote:

Monty, Dims,

I read the notes and was similarly intrigued about the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova and get the benefit of bare 
metal through Ironic, VM's through Nova VM's, and containers through 
nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures as any end-user would expect. Databases 
don't always make sense in VM's, and while containers are great for quick and 
dirty 

[openstack-dev] [heat] upgrade options for custom heat resource plug-ins

2016-04-11 Thread Praveen Yalagandula
Hi,

We are developing a custom heat resource plug-in and wondering about how to
handle plug-in upgrades. As our product's object model changes with new
releases, we will need to release updated resource plug-in code too.
However, the "properties" stored in the heat DB for the existing resources,
whose definitions have been upgraded, need to be updated too. Was there any
discussion on this?

Thanks,
Praveen Yalagandula
Avi Networks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] About snapshot Rollback?

2016-04-11 Thread Erlon Cruz
You are right, the instance should be shut down or the device unmounted
before the 'revert' or before removing the old device. That should be enough
to avoid corruption. I think device naming is not a problem if you use the
same volume (at least the disk UUID will be the same).

On Mon, Apr 11, 2016 at 2:39 PM, Duncan Thomas 
wrote:

> You can't just change the contents of a volume under the instance though -
> at the very least you need to do an unmount in the instance, and a detach
> is preferable, otherwise you've got data corruption issues.
>
> At that point, the device naming problems are identical.
>
> On 11 April 2016 at 20:22, Erlon Cruz  wrote:
>
>> The actual user workflow is:
>>
>>  1 - User creates a volume(s)
>>  2 - User attach volume to instance
>>  3 - User creates a snapshot
>>  4 - Something happens causing the need of a revert
>>  5 - User creates a volume(s) from the snapshot(s)
>>  6 - User detach old volumes
>>  7 - User attach new volumes (and pray so they get the same id) - Nova,
>> should have the ability to honor supplied device names (vdc, vdd, etc),
>> which not always happen[1]. But, does the volume keep the same UUID in the
>> system? Several application use that to boot.
>>
>> The suggested workflow would be simpler for a user POV:
>>
>>  1 - User creates a volume(s)
>>  2 - User attach volume to instance
>>  3 - User creates a snapshot
>>  4 - Something happens causing the need of a revert
>>  5 - User revert snapshot(s)
>>
>>
>>  [1] https://goo.gl/Kusfne
>>
>> On Fri, Apr 8, 2016 at 5:07 AM, Ivan Kolodyazhny  wrote:
>>
>>> Hi Chenzongliang,
>>>
>>> I still don't understand what is difference between proposed feature and
>>> 'restore volume from snapshot'? Could you please explain it?
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>
>>> On Thu, Apr 7, 2016 at 12:00 PM, Chenzongliang >> > wrote:
>>>
 Dear Cruz:



  Thanks for your kind support. I will review the previous spec
 according to the following links. Maybe there are more user scenarios we
 should consider, such as backup, create volume from snapshot, consistency
 group, etc. We will spend some time to gather the users' scenarios and
 determine what to do in the next step.



 Sincerely,

 zongliang chen



 *From:* Erlon Cruz [mailto:sombra...@gmail.com]
 *Sent:* April 5, 2016 2:50
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Zhangli (ISSP); Shenhong (C)
 *Subject:* Re: [openstack-dev] [Cinder] About snapshot Rollback?



 Hi Chen,



 Not sure if I got you right but I brought this topic in
 #openstack-cinder some days ago. The idea is to be able to rollback a
 snapshot in Cinder. Today what is possible to do is to create a volume from
 a snapshot. From the user point of view, this is not ideal, as there are
 several cases, if not the majority of, that the purpose of the snapshot is
 to revert to a desired state, and not keep the original volume. For some
 backends, keeping the original volume means space consumption. This space
 problem becomes bold when we think about consistency groups. For
 consistency groups, some backends might have to copy an entire filesystem
 for each snapshot, consuming space and time. So, I think it would be
 desired to have the ability to revert snapshots.



 I know there have been efforts in the past[1] to implement that, but
 for some reason the work was stopped. If you want to retake the effort
 please create a spec[2]  sol everybody can provide feedback.



 Erlon





 [1]
 https://blueprints.launchpad.net/cinder/+spec/cinder-volume-rollback-snapshot

 [2] https://github.com/openstack/cinder-specs



 On Thu, Mar 24, 2016 at 6:09 AM, Chenzongliang <
 chenzongli...@huawei.com> wrote:

 Hi all:

 We are considering adding a function rollback_snapshot for when we use
 backup. In the end user's scenario, if a VM fails, we hope that we can use
 a snapshot to recover the volume's data.

 Because it can quickly recover our VM. But if we use the remote
 data to recover, we will spend more time.

 But I'm not sure, if the data is recovered on the backend,
 whether the host needs to rescan the volumes? At the same time, if a volume
 has been extended, can it be rolled back?



 I want to know whether this topic has been discussed, or whether you have
 other recommendations for us?



Thanks




Re: [openstack-dev] [Openstack-security] [Security]abandoned OSSNs?

2016-04-11 Thread Matt Fischer
Thanks Michael,

I'm following the thread and I've asked Thierry for this tag to be
subscribable here if we're not using openstack-security anymore so that I
can receive the follow-ups.



On Mon, Apr 11, 2016 at 8:28 AM, Michael Xin 
wrote:

> Matt:
> Thanks for asking this. I forwarded this email to the new email list so
> that folks with better knowledge can answer this.
>
>
> Thanks and have a great day.
>
> Yours,
> Michael
>
>
>
> -
> Michael Xin | Manager, Security Engineering - US
> Product Security  |Rackspace Hosting
> Office #: 501-7341   or  210-312-7341
> Mobile #: 210-284-8674
> 5000 Walzem Road, San Antonio, Tx 78218
>
> 
> Experience fanatical support
>
> From: Matt Fischer 
> Date: Monday, April 11, 2016 at 9:19 AM
> To: "openstack-secur...@lists.openstack.org" <
> openstack-secur...@lists.openstack.org>
> Subject: [Openstack-security] abandoned OSSNs?
>
> Some folks from our security team here asked me to ensure them that our
> services were patched for all the OSSNs that are listed here:
> https://wiki.openstack.org/wiki/Security_Notes
>
> Most of these are straight-forward, but there are some OSSNs that have
> been allocated an ID but then abandoned. There is no detailed wiki page and
> my best google efforts lead me to a possible IRC mention and maybe an
> abandoned review. The two specifically are OSSN-50/51.
>
> So what am I to do with an "abandoned" OSSN? Has it been decided that
> there is no issue anymore? These are pretty old if I look at the dates
> framing the other OSSNs (49/52), so I assume they aren't urgent. Can we
> ignore these? They sound somewhat scary, for example, "keystonemiddleware
> can allow access after token revocation" but I have no means to say whether
> it affects us or how we can mitigate without more info.
>
> Thoughts?
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Infra] Post processing of gate hooks on job timeouts

2016-04-11 Thread Jakub Libosvar
On 04/11/2016 06:41 PM, Clark Boylan wrote:
> On Mon, Apr 11, 2016, at 03:07 AM, Jakub Libosvar wrote:
>> Hi,
>>
>> recently we hit an issue in Neutron with tests getting stuck [1]. As a
>> side effect we discovered logs are not collected properly which makes it
>> hard to find the root cause. The reason of missing logs is that we send
>> SIGKILL to whatever gate hook is running when we hit the global timeout
>> per gate job [2]. This gives no time to running process to perform any
>> post-processing. In post_gate_hook function in Neutron, we collect logs
>> from /tmp directory, compress them and move them to /opt/stack/logs to
>> make them exposed.
>>
>> I have in mind two solutions to which I'd like to get feedback before
>> sending patches.
>>
>> 1) In Neutron, we execute tests in post_gate_hook (dunno why). But even
>> if we would have moved test execution into gate_hook and tests get stuck
>> then the post_gate_hook won't be triggered [3]. So the solution I
>> propose here is to terminate gate_hook N minutes before global timeout
>> and still execute post_gate_hook (with timeout) as post-processing
>> routine.
>>
>> 2) Second proposal is to let timeout wrapped commands know they are
>> about to be killed. We can send let's say SIGTERM instead of SIGKILL and
>> after certain amount of time, send SIGKILL. Example: We send SIGTERM 3
>> minutes before global timeout, letting these 3 minutes to 'command' to
>> handle the SIGTERM signal.
>>
>>  timeout -s 15 -k 3 $((REMAINING_TIME-3))m bash -c "command"
>>
>> With the 2nd approach we can trap the signal that kills running test
>> suite and collects logs with same functions we currently have.
>>
>>
>> I would personally go with second option but I want to hear if anybody
>> has a better idea about post processing in gate jobs or if there is
>> already a tool we can use to collect logs.
>>
>> Thanks,
>> Kuba
> 
> Devstack gate already does a "soft" timeout [0] then proceeds to cleanup
> (part of which is collecting logs) [1], then Jenkins does the "hard"
> timeout [2]. Why aren't we collecting the required log files as part of
> the existing cleanup?
This existing cleanup doesn't support hooks. Neutron tests produce a lot
of logs by default stored in /tmp/dsvm- so we need to compress
and move them to /opt/stack/logs in order to get them collected by [1].
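
For illustration, a minimal sketch of what the option-2 handler could do on the
hook side is below; the /tmp/dsvm-* glob, the tox target and the archive step
are assumptions based on the description above, not the actual neutron hook code:

    # Hypothetical wrapper: salvage logs when the soft SIGTERM arrives,
    # before the later hard SIGKILL. Paths and the tox target are placeholders.
    import glob
    import os
    import shutil
    import signal
    import subprocess
    import sys

    LOG_GLOB = '/tmp/dsvm-*'
    TARGET_DIR = '/opt/stack/logs'

    def collect_logs(signum=None, frame=None):
        if not os.path.isdir(TARGET_DIR):
            os.makedirs(TARGET_DIR)
        for path in glob.glob(LOG_GLOB):
            if os.path.isdir(path):
                # pack each log directory into a .tar.gz in the exposed dir
                shutil.make_archive(
                    os.path.join(TARGET_DIR, os.path.basename(path)),
                    'gztar', path)
            else:
                shutil.copy(path, TARGET_DIR)
        if signum is not None:
            sys.exit(1)

    # SIGTERM is sent a few minutes before the global timeout (option 2)
    signal.signal(signal.SIGTERM, collect_logs)

    try:
        subprocess.call(['tox', '-e', 'dsvm-functional'])
    finally:
        collect_logs()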

> 
> [0]
> https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n569
> [1]
> https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n594
> [2]
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n325
> 
> Clark
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Infra] Post processing of gate hooks on job timeouts

2016-04-11 Thread Clark Boylan
On Mon, Apr 11, 2016, at 10:52 AM, Jakub Libosvar wrote:
> On 04/11/2016 06:41 PM, Clark Boylan wrote:
> > On Mon, Apr 11, 2016, at 03:07 AM, Jakub Libosvar wrote:
> >> Hi,
> >>
> >> recently we hit an issue in Neutron with tests getting stuck [1]. As a
> >> side effect we discovered logs are not collected properly which makes it
> >> hard to find the root cause. The reason of missing logs is that we send
> >> SIGKILL to whatever gate hook is running when we hit the global timeout
> >> per gate job [2]. This gives no time to running process to perform any
> >> post-processing. In post_gate_hook function in Neutron, we collect logs
> >> from /tmp directory, compress them and move them to /opt/stack/logs to
> >> make them exposed.
> >>
> >> I have in mind two solutions to which I'd like to get feedback before
> >> sending patches.
> >>
> >> 1) In Neutron, we execute tests in post_gate_hook (dunno why). But even
> >> if we would have moved test execution into gate_hook and tests get stuck
> >> then the post_gate_hook won't be triggered [3]. So the solution I
> >> propose here is to terminate gate_hook N minutes before global timeout
> >> and still execute post_gate_hook (with timeout) as post-processing
> >> routine.
> >>
> >> 2) Second proposal is to let timeout wrapped commands know they are
> >> about to be killed. We can send let's say SIGTERM instead of SIGKILL and
> >> after certain amount of time, send SIGKILL. Example: We send SIGTERM 3
> >> minutes before global timeout, letting these 3 minutes to 'command' to
> >> handle the SIGTERM signal.
> >>
> >>  timeout -s 15 -k 3 $((REMAINING_TIME-3))m bash -c "command"
> >>
> >> With the 2nd approach we can trap the signal that kills running test
> >> suite and collects logs with same functions we currently have.
> >>
> >>
> >> I would personally go with second option but I want to hear if anybody
> >> has a better idea about post processing in gate jobs or if there is
> >> already a tool we can use to collect logs.
> >>
> >> Thanks,
> >> Kuba
> > 
> > Devstack gate already does a "soft" timeout [0] then proceeds to cleanup
> > (part of which is collecting logs) [1], then Jenkins does the "hard"
> > timeout [2]. Why aren't we collecting the required log files as part of
> > the existing cleanup?
> This existing cleanup doesn't support hooks. Neutron tests produce a lot
> of logs by default stored in /tmp/dsvm- so we need to compress
> and move them to /opt/stack/logs in order to get them collected by [1].

My suggestion would be to stop writing these log files to /tmp and
instead write them to the log dir where they will be automagically
compressed and collected.

> 
> > 
> > [0]
> > https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n569
> > [1]
> > https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n594
> > [2]
> > https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n325
> > 
> > Clark
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [mentoring] Mentee project preferrence

2016-04-11 Thread Victoria Martínez de la Cruz
Hi there,

What should the mentor do with regards to technical mentorship if the
mentee is more interested in a project that the mentor is not familiar with?

Thanks,

Victoria
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-11 Thread Dmitry Tantsur

On 04/11/2016 05:33 PM, Ben Nemec wrote:

On 04/11/2016 04:54 AM, John Trowbridge wrote:

Hola OOOers,

It came up in the meeting last week that we could benefit from a CI
subteam with its own meeting, since CI is taking up a lot of the main
meeting time.

I like this idea, and think we should do something similar for the other
informal subteams (tripleoclient, UI), and also add a new subteam for
tripleo-quickstart (and maybe one for releases?).

We should make separate ACLs for these subteams as well. The informal
approach of adding cores who can +2 anything but are told to only +2
what they know doesn't scale very well.


How so?  Are we planning to give people +2 even though we don't trust
them to not +2 things they shouldn't?  I remain of the opinion that if
we need ACL controls to keep someone from doing something then they
shouldn't have +2 in the first place.

Quickstart is a bit of a weird case because the regular contributors to
it have not previously been very involved in TripleO upstream so I don't
think most of us have enough context to know whether they should have
+2.  I guess the UI would fall under the same category, so I'd be in
favor of keeping those two separate, but otherwise I think we're
creating bureaucracy for its own sake.


FWIW it works pretty well for the ironic-inspector-core subteam of the 
big ironic-core.




-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Cinder-Nova API meeting

2016-04-11 Thread Ildikó Váncsa
Hi All,

It's a friendly reminder we're having the next Cinder-Nova API interactions 
meeting this Wednesday __13th April 2100UTC__, on the #openstack-meeting-cp 
channel.

You can follow up the recent activities here: 
https://etherpad.openstack.org/p/cinder-nova-api-changes Our current focus is 
on attach/detach scenarios focusing on better tracking of attachment info in 
Cinder to avoid detach problems listed on the etherpad.

Best Regards,
/Ildikó

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] About snapshot Rollback?

2016-04-11 Thread Duncan Thomas
You can't just change the contents of a volume under the instance though -
at the very least you need to do an unmount in the instance, and a detach
is preferable, otherwise you've got data corruption issues.

At that point, the device naming problems are identical.

On 11 April 2016 at 20:22, Erlon Cruz  wrote:

> The actual user workflow is:
>
>  1 - User creates a volume(s)
>  2 - User attach volume to instance
>  3 - User creates a snapshot
>  4 - Something happens causing the need of a revert
>  5 - User creates a volume(s) from the snapshot(s)
>  6 - User detach old volumes
>  7 - User attach new volumes (and pray they get the same id) - Nova
> should have the ability to honor supplied device names (vdc, vdd, etc.),
> which does not always happen[1]. But does the volume keep the same UUID in the
> system? Several applications use that to boot.
>
> The suggested workflow would be simpler from a user POV:
>
>  1 - User creates a volume(s)
>  2 - User attach volume to instance
>  3 - User creates a snapshot
>  4 - Something happens causing the need of a revert
>  5 - User revert snapshot(s)
>
>
>  [1] https://goo.gl/Kusfne
>
> On Fri, Apr 8, 2016 at 5:07 AM, Ivan Kolodyazhny  wrote:
>
>> Hi Chenzongliang,
>>
>> I still don't understand what the difference is between the proposed feature
>> and 'restore volume from snapshot'? Could you please explain it?
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>> On Thu, Apr 7, 2016 at 12:00 PM, Chenzongliang 
>> wrote:
>>
>>> Dear Cruz:
>>>
>>>
>>>
>>>  Thanks for your kind support, I will review the previous spec
>>> according to the following links. There may be more user scenarios we should
>>> consider, such as backup, create volume from snapshot, consistency groups,
>>> etc. We will spend some time to gather the user's scenarios and determine
>>> what to do next.
>>>
>>>
>>>
>>> Sincerely,
>>>
>>> zongliang chen
>>>
>>>
>>>
>>> *发件人:* Erlon Cruz [mailto:sombra...@gmail.com]
>>> *发送时间:* 2016年4月5日 2:50
>>> *收件人:* OpenStack Development Mailing List (not for usage questions)
>>> *抄送:* Zhangli (ISSP); Shenhong (C)
>>> *主题:* Re: [openstack-dev] [Cinder] About snapshot Rollback?
>>>
>>>
>>>
>>> Hi Chen,
>>>
>>>
>>>
>>> Not sure if I got you right, but I brought this topic up in
>>> #openstack-cinder some days ago. The idea is to be able to roll back a
>>> snapshot in Cinder. Today, what is possible is to create a volume from
>>> a snapshot. From the user's point of view, this is not ideal, as there are
>>> several cases, if not the majority, where the purpose of the snapshot is
>>> to revert to a desired state, and not to keep the original volume. For some
>>> backends, keeping the original volume means space consumption. This space
>>> problem becomes more pronounced when we think about consistency groups. For
>>> consistency groups, some backends might have to copy an entire filesystem
>>> for each snapshot, consuming space and time. So, I think it would be
>>> desirable to have the ability to revert snapshots.
>>>
>>>
>>>
>>> I know there have been efforts in the past[1] to implement that, but for
>>> some reason the work was stopped. If you want to take up the effort again,
>>> please create a spec[2] so everybody can provide feedback.
>>>
>>>
>>>
>>> Erlon
>>>
>>>
>>>
>>>
>>>
>>> [1]
>>> https://blueprints.launchpad.net/cinder/+spec/cinder-volume-rollback-snapshot
>>>
>>> [2] https://github.com/openstack/cinder-specs
>>>
>>>
>>>
>>> On Thu, Mar 24, 2016 at 6:09 AM, Chenzongliang 
>>> wrote:
>>>
>>> Hi all:
>>>
>>> We are considering adding a function rollback_snapshot for when we use
>>> backup. In the end user's scenario, if a VM fails, we hope that we can use
>>> a snapshot to recover the volume's data.
>>>
>>> Because it can quickly recover our VM. But if we use the remote
>>> data to recover, we will spend more time.
>>>
>>> But I'm not sure, if the data is recovered on the backend,
>>> whether the host needs to rescan the volumes? At the same time, if a volume
>>> has been extended, can it be rolled back?
>>>
>>>
>>>
>>> I want to know whether this topic has been discussed, or whether you have
>>> other recommendations for us?
>>>
>>>
>>>
>>>Thanks
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>

Re: [openstack-dev] [Cinder] About snapshot Rollback?

2016-04-11 Thread Erlon Cruz
The actual user workflow is:

 1 - User creates a volume(s)
 2 - User attach volume to instance
 3 - User creates a snapshot
 4 - Something happens causing the need of a revert
 5 - User creates a volume(s) from the snapshot(s)
 6 - User detach old volumes
 7 - User attach new volumes (and pray they get the same id) - Nova
should have the ability to honor supplied device names (vdc, vdd, etc.),
which does not always happen[1]. But does the volume keep the same UUID in the
system? Several applications use that to boot.

The suggested workflow would be simpler from a user POV:

 1 - User creates a volume(s)
 2 - User attach volume to instance
 3 - User creates a snapshot
 4 - Something happens causing the need of a revert
 5 - User revert snapshot(s)


 [1] https://goo.gl/Kusfne
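
For illustration, steps 5-7 of the current workflow translate roughly to the
calls below; the credentials, IDs, size and device name are placeholders, and
the exact python-cinderclient/python-novaclient signatures may vary between
releases:

    # Hypothetical sketch of today's "re-create from snapshot" path (steps 5-7).
    # All names, IDs and sizes below are made-up placeholders.
    from cinderclient.v2 import client as cinder_client
    from novaclient import client as nova_client

    AUTH = ('demo', 'secret', 'demo', 'http://controller:5000/v2.0')
    cinder = cinder_client.Client(*AUTH)
    nova = nova_client.Client('2', *AUTH)

    SERVER_ID = '11111111-...'      # instance using the volume
    OLD_VOLUME_ID = '22222222-...'  # volume to be "reverted"
    SNAPSHOT_ID = '33333333-...'    # snapshot taken in step 3

    # 5 - create a brand new volume from the snapshot
    new_vol = cinder.volumes.create(size=10, snapshot_id=SNAPSHOT_ID,
                                    name='restored-volume')

    # 6 - detach the old volume (after unmounting inside the guest)
    nova.volumes.delete_server_volume(SERVER_ID, OLD_VOLUME_ID)

    # 7 - attach the new one and hope the guest sees the same device name
    nova.volumes.create_server_volume(SERVER_ID, new_vol.id, '/dev/vdb')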

On Fri, Apr 8, 2016 at 5:07 AM, Ivan Kolodyazhny  wrote:

> Hi Chenzongliang,
>
> I still don't understand what the difference is between the proposed feature
> and 'restore volume from snapshot'? Could you please explain it?
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Thu, Apr 7, 2016 at 12:00 PM, Chenzongliang 
> wrote:
>
>> Dear Cruz:
>>
>>
>>
>>  Thanks for your kind support, I will review the previous spec
>> according to the following links. There may be more user scenarios we should
>> consider, such as backup, create volume from snapshot, consistency groups,
>> etc. We will spend some time to gather the user's scenarios and determine
>> what to do next.
>>
>>
>>
>> Sincerely,
>>
>> zongliang chen
>>
>>
>>
>> *发件人:* Erlon Cruz [mailto:sombra...@gmail.com]
>> *发送时间:* 2016年4月5日 2:50
>> *收件人:* OpenStack Development Mailing List (not for usage questions)
>> *抄送:* Zhangli (ISSP); Shenhong (C)
>> *主题:* Re: [openstack-dev] [Cinder] About snapshot Rollback?
>>
>>
>>
>> Hi Chen,
>>
>>
>>
>> Not sure if I got you right, but I brought this topic up in #openstack-cinder
>> some days ago. The idea is to be able to roll back a snapshot in Cinder.
>> Today, what is possible is to create a volume from a snapshot. From
>> the user's point of view, this is not ideal, as there are several cases, if
>> not the majority, where the purpose of the snapshot is to revert to a
>> desired state, and not to keep the original volume. For some backends, keeping
>> the original volume means space consumption. This space problem becomes
>> more pronounced when we think about consistency groups. For consistency
>> groups, some backends might have to copy an entire filesystem for each
>> snapshot, consuming space and time. So, I think it would be desirable to
>> have the ability to revert snapshots.
>>
>>
>>
>> I know there have been efforts in the past[1] to implement that, but for
>> some reason the work was stopped. If you want to take up the effort again,
>> please create a spec[2] so everybody can provide feedback.
>>
>>
>>
>> Erlon
>>
>>
>>
>>
>>
>> [1]
>> https://blueprints.launchpad.net/cinder/+spec/cinder-volume-rollback-snapshot
>>
>> [2] https://github.com/openstack/cinder-specs
>>
>>
>>
>> On Thu, Mar 24, 2016 at 6:09 AM, Chenzongliang 
>> wrote:
>>
>> Hi all:
>>
>> We are considering adding a function rollback_snapshot for when we use backup.
>> In the end user's scenario, if a VM fails, we hope that we can use a snapshot
>> to recover the volume's data.
>>
>> Because it can quickly recover our VM. But if we use the remote data
>> to recover, we will spend more time.
>>
>> But I'm not sure, if the data is recovered on the backend, whether
>> the host needs to rescan the volumes? At the same time, if a volume has
>> been extended, can it be rolled back?
>>
>>
>> I want to know whether this topic has been discussed, or whether you have
>> other recommendations for us?
>> recommendations to us?
>>
>>
>>
>>Thanks
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-11 Thread Steven Hardy
On Mon, Apr 11, 2016 at 10:33:53AM -0500, Ben Nemec wrote:
> On 04/11/2016 04:54 AM, John Trowbridge wrote:
> > Hola OOOers,
> > 
> > It came up in the meeting last week that we could benefit from a CI
> > subteam with its own meeting, since CI is taking up a lot of the main
> > meeting time.
> > 
> > I like this idea, and think we should do something similar for the other
> > informal subteams (tripleoclient, UI), and also add a new subteam for
> > tripleo-quickstart (and maybe one for releases?).
> > 
> > We should make separate ACLs for these subteams as well. The informal
> > approach of adding cores who can +2 anything but are told to only +2
> > what they know doesn't scale very well.
> 
> How so?  Are we planning to give people +2 even though we don't trust
> them to not +2 things they shouldn't?  I remain of the opinion that if
> we need ACL controls to keep someone from doing something then they
> shouldn't have +2 in the first place.

IMO it's not about a lack of trust at all, there are several other projects
using this model and there are a number of advantages:

- Clear responsibilities enable better communication, e.g having a clearly
  defined core team for a specific subteam enables folks to more easily
  know the folks they should approach re reviews, to discuss features etc.

- Beyond a certain point, large teams make discussion, e.g. in a timeboxed
  weekly meeting hard.  We're already at this point, e.g folks show up
  wanting to add an item to the weekly agenda on some topic, but we spend
  59 of the available 60 minutes discussing bugs, specs and CI.  Having
  sub-teams that feel empowered to self-organize e.g extra meetings and
  their own core members may help this process scale a little better?

- Potentially easier on-ramp (encourage domain experts as sub-team cores),
  this isn't about lack of trust, it's acknowledging that spending a year
  or more learning all the different pieces of TripleO is really hard and
  not everyone wants or needs to do it.  Would folks feel a little more
  motivated to contribute if they could aim towards deep expertise
  reviewing a smaller subsystem?

> Quickstart is a bit of a weird case because the regular contributors to
> it have not previously been very involved in TripleO upstream so I don't
> think most of us have enough context to know whether they should have
> +2.  I guess the UI would fall under the same category, so I'd be in
> favor of keeping those two separate, but otherwise I think we're
> creating bureaucracy for its own sake.

I think the overhead of creating a few additional gerrit groups is pretty
small, there's zero "bureaucracy" for pretty much everyone involved,
tripleo-core still works the same but we might just be a little quicker to
nominate folks and/or attract reviews on some smaller projects given this
change IMO (again, not through any lack of trust but because the teams
would better represent the way folks are actually working).

Cheers,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Encrypted Ephemeral Storage

2016-04-11 Thread Chris Buccella
I've been looking into using encrypted ephemeral storage with LVM. With the
[ephemeral_storage_encryption] and [keymgr] sections added to nova.conf, I get
an LVM volume with "-dmcrypt" appended to the volume name, but otherwise
see no difference; I can still grep for text inside the volume.
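
For reference, a rough way to check whether a real crypt mapping sits behind
the "-dmcrypt" device (the mapper name below is a made-up example):

    # Hypothetical check: is there an actual dm-crypt target behind the volume?
    # The mapper name is a placeholder for illustration.
    import subprocess

    MAPPER = 'instance-0000abcd_disk.local-dmcrypt'

    # a 'crypt' target should show up in the device-mapper table if encryption
    # is really in use
    table = subprocess.check_output(['sudo', 'dmsetup', 'table', MAPPER])
    print('dm-crypt target present:', b' crypt ' in table)

    # cryptsetup reports cipher and key size for the mapping (plain or LUKS)
    print(subprocess.check_output(
        ['sudo', 'cryptsetup', 'status', MAPPER]).decode())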

Upon reading the source, I don't see "cryptsetup luksFormat" being called
anywhere (nova/libvirt/storage/*).

I was expecting a new encrypted LVM volume when a new instance was created.
Are my expectations misplaced? How is this feature envisioned to work?


Thanks,

-Chris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA

2016-04-11 Thread Edgar Magana
Sean,

This is a very good concern. I can't talk for all projects but during the Ops 
Meet-ups we normally collect the feedback and send it to the PTLs or anyone 
from the project team who can help us.
The best answer should be provided by the Product Working Group from the User 
Committee: https://wiki.openstack.org/wiki/ProductTeam

Adding Shamail and Carol to provide more details. They are leading the Product 
WG.

Thanks,

Edgar



On 4/11/16, 8:58 AM, "Sean M. Collins"  wrote:

>Kris G. Lindgren wrote:
>> You mean outside of the LDT filing an RFE bug with neutron to get
>
>Sorry, I don't know what LDT is. Can you explain?
>
>As for the RFE bug and the contributions that GoDaddy has been involved
>with, my statement is not about "if" operators are contributing, because
>obviously they are. But an RFE bug and coming to the midcycle is part of 
>Neutron's development process. Not a working group.
>
>
>-- 
>Sean M. Collins
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Adrian Otto
Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented is that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.
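
For the curious, the wiring looks roughly like the sketch below, assuming the
AggregateInstanceExtraSpecsFilter is enabled; the credentials, names and the
metadata key are illustrative placeholders, not a recommendation:

    # Hypothetical sketch: one Nova API, compute type selected by flavor.
    # Credentials, names and the metadata key are placeholders.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'demo', 'secret', 'demo',
                              'http://controller:5000/v2.0')

    # group the hosts running the ironic virt driver into their own aggregate
    bm_agg = nova.aggregates.create('baremetal-hosts', None)
    nova.aggregates.set_metadata(bm_agg, {'compute_type': 'baremetal'})

    # a flavor that only lands on that aggregate
    bm_flavor = nova.flavors.create('bm.large', ram=65536, vcpus=16, disk=500)
    bm_flavor.set_keys(
        {'aggregate_instance_extra_specs:compute_type': 'baremetal'})

    # booting with 'bm.large' now gives bare metal; other flavors give VMs,
    # all through the same Nova API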

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native API’s for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).
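
As a very rough sketch of that flow (the magnumclient constructor arguments and
create() kwargs here are assumptions from memory and may not match your release):

    # Hypothetical sketch: provision a bay via Magnum, then use the COE's own
    # API for the actual containers. IDs and credentials are placeholders.
    from magnumclient.v1 import client as magnum_client

    magnum = magnum_client.Client(username='demo', api_key='secret',
                                  project_name='demo',
                                  auth_url='http://controller:5000/v2.0')

    # assumes the operator registered a baymodel with coe=kubernetes and a
    # flavor that maps to the desired compute type (VM or bare metal)
    bay = magnum.bays.create(name='trove-k8s-bay',
                             baymodel_id='<baymodel-uuid>',
                             node_count=3)

    # from here, Trove would talk to the bay's native Kubernetes API endpoint
    # directly (kubectl or a Kubernetes client), not to Magnum.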

Regards,

Adrian



On Apr 11, 2016, at 8:47 AM, Amrith Kumar wrote:

Monty, Dims,

I read the notes and was similarly intrigued about the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova and get the benefit of bare 
metal through Ironic, VM's through Nova VM's, and containers through 
nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures as any end-user would expect. Databases 
don't always make sense in VM's, and while containers are great for quick and 
dirty prototyping, and VM's are great for much more, there are databases that 
will in production only be meaningful on bare-metal.

Therefore, if there is a move towards offering a common API for VM's, 
bare-metal and containers, that would be huge.

Without such a mechanism, consuming containers in Trove adds considerable 
complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a working 
prototype of Trove leveraging Ironic, VM's, and nova-docker to provision 
databases is something I worked on a while ago, and have not revisited it since 
then (once the direction appeared to be Magnum for containers).

With all that said, I don't want to downplay the value in a container specific 
API. I'm merely observing that from the perspective of a consumer of computing 
services, a common abstraction is incredibly valuable.

Thanks,

-amrith

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com]
Sent: Monday, April 11, 2016 11:31 AM
To: Allison Randal; Davanum Srinivas; foundat...@lists.openstack.org

Re: [openstack-dev] [Neutron][Infra] Post processing of gate hooks on job timeouts

2016-04-11 Thread Clark Boylan
On Mon, Apr 11, 2016, at 03:07 AM, Jakub Libosvar wrote:
> Hi,
> 
> recently we hit an issue in Neutron with tests getting stuck [1]. As a
> side effect we discovered logs are not collected properly which makes it
> hard to find the root cause. The reason for the missing logs is that we send
> SIGKILL to whatever gate hook is running when we hit the global timeout
> per gate job [2]. This gives the running process no time to perform any
> post-processing. In post_gate_hook function in Neutron, we collect logs
> from /tmp directory, compress them and move them to /opt/stack/logs to
> make them exposed.
> 
> I have in mind two solutions to which I'd like to get feedback before
> sending patches.
> 
> 1) In Neutron, we execute tests in post_gate_hook (dunno why). But even
> if we would have moved test execution into gate_hook and tests get stuck
> then the post_gate_hook won't be triggered [3]. So the solution I
> propose here is to terminate gate_hook N minutes before global timeout
> and still execute post_gate_hook (with timeout) as post-processing
> routine.
> 
> 2) Second proposal is to let timeout wrapped commands know they are
> about to be killed. We can send let's say SIGTERM instead of SIGKILL and
> after a certain amount of time, send SIGKILL. Example: we send SIGTERM 3
> minutes before the global timeout, giving 'command' these 3 minutes to
> handle the SIGTERM signal.
> 
>  timeout -s 15 -k 3 $((REMAINING_TIME-3))m bash -c "command"
> 
> With the 2nd approach we can trap the signal that kills the running test
> suite and collect logs with the same functions we currently have.
> 
> 
> I would personally go with second option but I want to hear if anybody
> has a better idea about post processing in gate jobs or if there is
> already a tool we can use to collect logs.
> 
> Thanks,
> Kuba

Devstack gate already does a "soft" timeout [0] then proceeds to cleanup
(part of which is collecting logs) [1], then Jenkins does the "hard"
timeout [2]. Why aren't we collecting the required log files as part of
the existing cleanup?

[0]
https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n569
[1]
https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n594
[2]
https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n325

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Russell Bryant
On Mon, Apr 11, 2016 at 11:30 AM, Monty Taylor  wrote:

> On 04/11/2016 09:43 AM, Allison Randal wrote:
>
>> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas 
>>> wrote:
>>>
 Reading unofficial notes [1], i found one topic very interesting:
 One Platform – How do we truly support containers and bare metal under
 a common API with VMs? (Ironic, Nova, adjacent communities e.g.
 Kubernetes, Apache Mesos etc)

 Anyone present at the meeting, please expand on those few notes on
 etherpad? And how if any this feedback is getting back to the
 projects?

>>>
>> It was really two separate conversations that got conflated in the
>> summary. One conversation was just being supportive of bare metal, VMs,
>> and containers within the OpenStack umbrella. The other conversation
>> started with Monty talking about his work on shade, and how it wouldn't
>> exist if more APIs were focused on the way users consume the APIs, and
>> less an expression of the implementation details of each project.
>> OpenStackClient was mentioned as a unified CLI for OpenStack focused
>> more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
>> but falls in the same general category of work.)
>>
>> i.e. There wasn't anything new in the conversation, it was more a matter
>> of the developers/TC members on the board sharing information about work
>> that's already happening.
>>
>
> I agree with that - but would like to clarify the 'bare metal, VMs and
> containers' part a bit. (an in fact, I was concerned in the meeting that
> the messaging around this would be confusing because we 'supporting bare
> metal' and 'supporting containers' mean two different things but we use one
> phrase to talk about it.
>
> It's abundantly clear at the strategic level that having OpenStack be able
> to provide both VMs and Bare Metal as two different sorts of resources
> (ostensibly but not prescriptively via nova) is one of our advantages. We
> wanted to underscore how important it is to be able to do that, and wanted
> to underscore that so that it's really clear how important it is any time
> the "but cloud should just be VMs" sentiment arises.
>
> The way we discussed "supporting containers" was quite different and was
> not about nova providing containers. Rather, it was about reaching out to
> our friends in other communities and working with them on making OpenStack
> the best place to run things like kubernetes or docker swarm. Those are
> systems that ultimately need to run, and it seems that good integration
> (like kuryr with libnetwork) can provide a really strong story. I think
> pretty much everyone agrees that there is not much value to us or the world
> for us to compete with kubernetes or docker.
>
> So, we do want to be supportive of bare metal and containers - but the
> specific _WAY_ we want to be supportive of those things is different for
> each one.
>

I was there and agree with the summary provided by Allison and Monty.

It's important to have some high level alignment on where we see our core
strengths and where we see ourselves as complementary and not competitive.
I don't think any of it was new information, but valuable to revisit
nonetheless.

-- 
Russell Bryant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA

2016-04-11 Thread Kris G. Lindgren
LDT is the Large Deployment Team; it's a working group for large deployments.  Like 
Rackspace, Cern, NeCTAR, Yahoo, GoDaddy, Bluebox.  Talk about issues scaling 
openstack, Nova cells, monitoring, all the stuff that becomes hard when you 
have thousands of servers or hundreds of clouds.  Also, the public-cloud 
working group is part of the LDT working group as well.  Since a large portion 
of us also happen to run public clouds.

Sorry - but your post came off (to me) as: Working groups don’t do anything 
actionable, at least I have never seen it in neutron.  I was just giving 
actionable work that has come from LDT, alone, in neutron.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy







On 4/11/16, 9:58 AM, "Sean M. Collins"  wrote:

>Kris G. Lindgren wrote:
>> You mean outside of the LDT filing an RFE bug with neutron to get
>
>Sorry, I don't know what LDT is. Can you explain?
>
>As for the RFE bug and the contributions that GoDaddy has been involved
>with, my statement is not about "if" operators are contributing, because
>obviously they are. But an RFE bug and coming to the midcycle is part of 
>Neutron's development process. Not a working group.
>
>
>-- 
>Sean M. Collins
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Fox, Kevin M
The more I've used Containers in production the more I've come to the 
conclusion they are much different beasts than Nova instances. Nova's 
abstraction lets Physical hardware and VM's share one common API, and it makes 
a lot of sense to unify them.

Oh. To be explicit, I'm talking about docker style lightweight containers, not 
heavy weight containers like LXC ones. The heavy weight ones do work well with 
Nova. For the rest of the conversation container = lightweight container.

Trove can make use of containers provided there is a standard api in OpenStack 
for provisioning them. Right now, Magnum provides a way to get Kubernetes 
orchestrated clusters, for example, but doensn't have good integration with it 
to hook it into keystone so that Trusts can be used with it on the users behalf 
for advanced services like Trove. So some pieces are missing. Heat should have 
a way to have Kubernetes Yaml resources too.

I think the recent request to rescope Kuryr to include non network features is 
a good step in solving some of the issues.

Unfortunately, it will probably take some time to get Magnum to the point where 
it can be used by other OpenStack advanced services. Maybe these sorts of 
issues should be written down and discussed at the upcoming summit between the 
Magnum and Kuryr teams?

Thanks,
Kevin



From: Amrith Kumar [amr...@tesora.com]
Sent: Monday, April 11, 2016 8:47 AM
To: OpenStack Development Mailing List (not for usage questions); Allison 
Randal; Davanum Srinivas; foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Monty, Dims,

I read the notes and was similarly intrigued about the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova and get the benefit of bare 
metal through Ironic, VM's through Nova VM's, and containers through 
nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures as any end-user would expect. Databases 
don't always make sense in VM's, and while containers are great for quick and 
dirty prototyping, and VM's are great for much more, there are databases that 
will in production only be meaningful on bare-metal.

Therefore, if there is a move towards offering a common API for VM's, 
bare-metal and containers, that would be huge.

Without such a mechanism, consuming containers in Trove adds considerable 
complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a working 
prototype of Trove leveraging Ironic, VM's, and nova-docker to provision 
databases is something I worked on a while ago, and have not revisited it since 
then (once the direction appeared to be Magnum for containers).

With all that said, I don't want to downplay the value in a container specific 
API. I'm merely observing that from the perspective of a consumer of computing 
services, a common abstraction is incredibly valuable.

Thanks,

-amrith

> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: Monday, April 11, 2016 11:31 AM
> To: Allison Randal ; Davanum Srinivas
> ; foundat...@lists.openstack.org
> Cc: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
> On 04/11/2016 09:43 AM, Allison Randal wrote:
> >> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas 
> wrote:
> >>> Reading unofficial notes [1], i found one topic very interesting:
> >>> One Platform – How do we truly support containers and bare metal
> >>> under a common API with VMs? (Ironic, Nova, adjacent communities e.g.
> >>> Kubernetes, Apache Mesos etc)
> >>>
> >>> Anyone present at the meeting, please expand on those few notes on
> >>> etherpad? And how if any this feedback is getting back to the
> >>> projects?
> >
> > It was really two separate conversations that got conflated in the
> > summary. One conversation was just being supportive of bare metal,
> > VMs, and containers within the OpenStack umbrella. The other
> > conversation started with Monty talking about his work on shade, and
> > how it wouldn't exist if more APIs were focused on the way users
> > consume the APIs, and less an expression of the implementation details
> of each project.
> > OpenStackClient was mentioned as a unified CLI for OpenStack focused
> > more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
> > but falls in the same general category of work.)
> >
> > i.e. There wasn't anything new in the conversation, it was more a
> > matter of the 

Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA

2016-04-11 Thread Sean M. Collins
Kris G. Lindgren wrote:
> You mean outside of the LDT filing an RFE bug with neutron to get

Sorry, I don't know what LDT is. Can you explain?

As for the RFE bug and the contributions that GoDaddy has been involved
with, my statement is not about "if" operators are contributing, because
obviously they are. But an RFE bug and coming to the midcycle is part of 
Neutron's development process. Not a working group.


-- 
Sean M. Collins

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Openstack-security] [Security]abandoned OSSNs?

2016-04-11 Thread Dave Walker
Hi,

I believe 50 and 51 were both assigned to me.  They were closely linked,
but separate issues.

I wrote 50 up here:
https://review.openstack.org/#/c/200303/2

After discussion in a security meeting, my memory is that it was agreed
that they probably weren't required.

I'd have to pull out the meeting log to be certain, but I'd also continue
them if the mood has now changed.

--
Kind Regards,
Dave Walker

On 11 Apr 2016 16:06, "Clark, Robert Graham"  wrote:
>
> Thanks Matt, Michael,
>
>
>
> To start with, lets look quickly at the more recent OSSNs that are marked
as work in progress, namely 63,64,65 and 66 – these should all be published
within a week or so.
>
>
>
> Looking further back we have the more difficult OSSNs 50 and 51, I’m not
100% sure what the blockers are on these.  I believe
https://wiki.openstack.org/wiki/OSSN/OSSN-0056 may supersede OSSN-0051 and
is rooted in bug https://bugs.launchpad.net/ossn/+bug/1435530 - it looks to
me like OSSN-0056 was written during a mid-cycle and could be the right one.
>
>
>
> I’m struggling to work out the story behind OSSN-0050 – I’m adding Nathan
Kinder who might be able to shed more light on this.
>
>
>
> -Rob
>
>
>
>
>
>
>
> From: Michael Xin [mailto:michael@rackspace.com]
> Sent: 11 April 2016 15:28
> To: Matt Fischer; OpenStack Development Mailing List (not for usage
questions)
> Subject: Re: [openstack-dev] [Openstack-security] [Security]abandoned
OSSNs?
>
>
>
> Matt:
>
> Thanks for asking this. I forwarded this email to the new email list so
that folks with better knowledge can answer this.
>
>
>
>
>
> Thanks and have a great day.
>
>
>
> Yours,
>
> Michael
>
>
>
>
>
>
-
>
> Michael Xin | Manager, Security Engineering - US
>
> Product Security  |Rackspace Hosting
>
> Office #: 501-7341   or  210-312-7341
>
> Mobile #: 210-284-8674
>
> 5000 Walzem Road, San Antonio, Tx 78218
>
>

>
> Experience fanatical support
>
>
>
> From: Matt Fischer 
> Date: Monday, April 11, 2016 at 9:19 AM
> To: "openstack-secur...@lists.openstack.org" <
openstack-secur...@lists.openstack.org>
> Subject: [Openstack-security] abandoned OSSNs?
>
>
>
> Some folks from our security team here asked me to ensure them that our
services were patched for all the OSSNs that are listed here:
https://wiki.openstack.org/wiki/Security_Notes
>
>
>
> Most of these are straight-forward, but there are some OSSNs that have
been allocated an ID but then abandoned. There is no detailed wiki page and
my best google efforts lead me to a possible IRC mention and maybe an
abandoned review. The two specifically are OSSN-50/51.
>
>
>
> So what am I to do with an "abandoned" OSSN? Has it been decided that
there is no issue anymore? These are pretty old if I look at the dates
framing the other OSSNs (49/52), so I assume they aren't urgent. Can we
ignore these? They sound somewhat scary, for example, "keystonemiddleware
can allow access after token revocation" but I have no means to say whether
it affects us or how we can mitigate without more info.
>
>
>
> Thoughts?
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [API]Make API errors conform to the common error message without microversion

2016-04-11 Thread Sean Dague
On 04/11/2016 11:11 AM, michael mccune wrote:
> please forgive my lack of direct knowledge about the neutron process and
> how this fits in. i'm just commenting from the perspective of someone
> looking at this from the api-wg.
> 
> On 04/11/2016 09:52 AM, Duncan Thomas wrote:
>> So by adding the handling of a header to change the behaviour of the
>> API, you're basically implementing a subset of microversions, with a
>> non-standard header (See the API WG spec on non-proliferation of
>> headers). You'll find it takes much of the work that implementing
>> microversions does, and explodes your API test matrix some more.
>>
>> Sounds like something that should go on hold until microversions is
>> done, assuming that microversions are desired anyway. Standard error
>> messages are not such a big win that they're worth non-standard headers
>> and yet more API weirdness that needs to sit around potentially for a
>> very long time (see the API WG rules on removing APIs, which is
>> basically never)
>>
> 
> i think this advice sounds reasonable. adding a side-channel around
> microversions sounds like work that would itself need a microversion
> bump when it is finally removed ;)
> 
> i also agree with the reasoning about the benefit from the standardized
> error messages. it is nice to get a standard error message produced, but
> i think adding microversions is probably a bigger win in the near term
> because it will make these other transitions smoother.

This really was the motivation in creating microversions in the first
place. There were (and still are) many issues in the API, but we kept
tripping over how the client might be able to discover / ask for newer
features. So we solved the discovery / ask for up front, and now there
is a standard mechanism (through a single monotonic value) of providing
for and asking for the new and better API. And it works mostly the same
across the 4 services that have now implemented it.

That's a much better path than inventing a new mechanism entirely.
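
For anyone who has not poked at it directly, opting into a compute microversion
looks roughly like this on the wire; the endpoint and token are placeholders:

    # Minimal sketch: ask Nova for a specific microversion on a request.
    # The endpoint and token below are placeholders.
    import requests

    NOVA = 'http://controller:8774/v2.1'
    TOKEN = '<keystone token>'

    resp = requests.get(
        NOVA + '/servers/detail',
        headers={
            'X-Auth-Token': TOKEN,
            # single monotonic value; omit the header to get the minimum version
            'X-OpenStack-Nova-API-Version': '2.25',
        })

    # the response echoes back the microversion that was actually honoured
    print(resp.headers.get('X-OpenStack-Nova-API-Version'))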

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Amrith Kumar
Monty, Dims, 

I read the notes and was similarly intrigued about the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova and get the benefit of bare 
metal through Ironic, VM's through Nova VM's, and containers through 
nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures as any end-user would expect. Databases 
don't always make sense in VM's, and while containers are great for quick and 
dirty prototyping, and VM's are great for much more, there are databases that 
will in production only be meaningful on bare-metal.

Therefore, if there is a move towards offering a common API for VM's, 
bare-metal and containers, that would be huge.

Without such a mechanism, consuming containers in Trove adds considerable 
complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a working 
prototype of Trove leveraging Ironic, VM's, and nova-docker to provision 
databases is something I worked on a while ago, and have not revisited it since 
then (once the direction appeared to be Magnum for containers).

With all that said, I don't want to downplay the value in a container specific 
API. I'm merely observing that from the perspective of a consumer of computing 
services, a common abstraction is incredibly valuable. 

Thanks,

-amrith 

> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: Monday, April 11, 2016 11:31 AM
> To: Allison Randal ; Davanum Srinivas
> ; foundat...@lists.openstack.org
> Cc: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
> 
> On 04/11/2016 09:43 AM, Allison Randal wrote:
> >> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas 
> wrote:
> >>> Reading unofficial notes [1], i found one topic very interesting:
> >>> One Platform – How do we truly support containers and bare metal
> >>> under a common API with VMs? (Ironic, Nova, adjacent communities e.g.
> >>> Kubernetes, Apache Mesos etc)
> >>>
> >>> Anyone present at the meeting, please expand on those few notes on
> >>> etherpad? And how if any this feedback is getting back to the
> >>> projects?
> >
> > It was really two separate conversations that got conflated in the
> > summary. One conversation was just being supportive of bare metal,
> > VMs, and containers within the OpenStack umbrella. The other
> > conversation started with Monty talking about his work on shade, and
> > how it wouldn't exist if more APIs were focused on the way users
> > consume the APIs, and less an expression of the implementation details
> of each project.
> > OpenStackClient was mentioned as a unified CLI for OpenStack focused
> > more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
> > but falls in the same general category of work.)
> >
> > i.e. There wasn't anything new in the conversation, it was more a
> > matter of the developers/TC members on the board sharing information
> > about work that's already happening.
> 
> I agree with that - but would like to clarify the 'bare metal, VMs and
> containers' part a bit. (And in fact, I was concerned in the meeting that
> the messaging around this would be confusing because 'supporting bare
> metal' and 'supporting containers' mean two different things but we use
> one phrase to talk about it.)
> 
> It's abundantly clear at the strategic level that having OpenStack be able
> to provide both VMs and Bare Metal as two different sorts of resources
> (ostensibly but not prescriptively via nova) is one of our advantages. We
> wanted to underscore how important it is to be able to do that, and wanted
> to underscore that so that it's really clear how important it is any time
> the "but cloud should just be VMs" sentiment arises.
> 
> The way we discussed "supporting containers" was quite different and was
> not about nova providing containers. Rather, it was about reaching out to
> our friends in other communities and working with them on making OpenStack
> the best place to run things like kubernetes or docker swarm.
> Those are systems that ultimately need to run, and it seems that good
> integration (like kuryr with libnetwork) can provide a really strong
> story. I think pretty much everyone agrees that there is not much value to
> us or the world for us to compete with kubernetes or docker.
> 
> So, we do want to be supportive of bare metal and containers - but the
> specific _WAY_ we want to be supportive of those things is different for
> each one.
> 
> Monty
> 
> 
> __
> OpenStack Development 

Re: [openstack-dev] [Openstack-security] [Security]abandoned OSSNs?

2016-04-11 Thread Nathan Kinder


On 04/11/2016 08:04 AM, Clark, Robert Graham wrote:
> Thanks Matt, Michael,
> 
>  
> 
> To start with, lets look quickly at the more recent OSSNs that are
> marked as work in progress, namely 63,64,65 and 66 – these should all be
> published within a week or so.
> 
>  
> 
> Looking further back we have the more difficult OSSNs 50 and 51, I’m not
> 100% sure what the blockers are on these.  I believe
> https://wiki.openstack.org/wiki/OSSN/OSSN-0056 may supersede OSSN-0051
> and is rooted in bug https://bugs.launchpad.net/ossn/+bug/1435530 - it
> looks to me like OSSN-0056 was written during a mid-cycle and could be
> the right one.
> 
>  
> 
> I’m struggling to work out the story behind OSSN-0050 – I’m adding
> Nathan Kinder who might be able to shed more light on this.

It looks like that one was added to the wiki by 'Davewalker' in this
revision:


https://wiki.openstack.org/w/index.php?title=Security_Notes&diff=next&oldid=85312

I searched all open and closed OSSN bugs, and did not see one that
matches this issue.

-NGK

> 
>  
> 
> -Rob
> 
>  
> 
>  
> 
>  
> 
> *From:*Michael Xin [mailto:michael@rackspace.com]
> *Sent:* 11 April 2016 15:28
> *To:* Matt Fischer; OpenStack Development Mailing List (not for usage
> questions)
> *Subject:* Re: [openstack-dev] [Openstack-security] [Security]abandoned
> OSSNs?
> 
>  
> 
> Matt:
> 
> Thanks for asking this. I forwarded this email to the new email list so
> that folks with better knowledge can answer this. 
> 
>  
> 
>  
> 
> Thanks and have a great day. 
> 
>  
> 
> Yours,
> 
> Michael 
> 
>  
> 
>  
> 
> -
> 
> Michael Xin | Manager, Security Engineering - US 
> 
> Product Security  |Rackspace Hosting
> 
> Office #: 501-7341   or  210-312-7341
> 
> Mobile #: 210-284-8674 
> 
> 5000 Walzem Road, San Antonio, Tx 78218
> 
> 
> 
> Experience fanatical support
> 
>  
> 
> *From: *Matt Fischer
> *Date: *Monday, April 11, 2016 at 9:19 AM
> *To: *"openstack-secur...@lists.openstack.org"
> *Subject: *[Openstack-security] abandoned OSSNs?
> 
>  
> 
> Some folks from our security team here asked me to assure them that our
> services were patched for all the OSSNs that are listed
> here: https://wiki.openstack.org/wiki/Security_Notes
> 
>  
> 
> Most of these are straight-forward, but there are some OSSNs that have
> been allocated an ID but then abandoned. There is no detailed wiki page
> and my best google efforts lead me to a possible IRC mention and maybe
> an abandoned review. The two specifically are OSSN-50/51.
> 
>  
> 
> So what am I to do with an "abandoned" OSSN? Has it been decided that
> there is no issue anymore? These are pretty old if I look at the dates
> framing the other OSSNs (49/52), so I assume they aren't urgent. Can we
> ignore these? They sound somewhat scary, for example,
> "keystonemiddleware can allow access after token revocation" but I have
> no means to say whether it affects us or how we can mitigate without
> more info.
> 
>  
> 
> Thoughts?
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Rally] Term "workload" has two clashing meanings

2016-04-11 Thread Aleksandr Maretskiy
Hi all,

this is about terminology: we have the term "workload" in Rally, and it is
used with two clashing meanings:

 1. the module rally.plugins.workload, which collects plugins for cross-VM
    testing
 2. "workload" replaces the term "scenario" in our new input task format
    (task->scenarios is replaced with task->subtasks->workloads)

Let's introduce a new term as a replacement for "1." (or maybe for "2.", but I
suppose that is not the best option).

Maybe rename rally.plugins.workload to:
   rally.plugins.vmload
   rally.plugins.vmperf
   rally.plugins.shaker
   rally.plugins.vmworkload
   ...more ideas?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Infra] Post processing of gate hooks on job timeouts

2016-04-11 Thread Assaf Muller
On Mon, Apr 11, 2016 at 9:39 AM, Morales, Victor
 wrote:
>
>
>
>
>
> On 4/11/16, 5:07 AM, "Jakub Libosvar"  wrote:
>
>>Hi,
>>
>>recently we hit an issue in Neutron with tests getting stuck [1]. As a
>>side effect we discovered that logs are not collected properly, which makes
>>it hard to find the root cause. The reason for the missing logs is that we
>>send SIGKILL to whatever gate hook is running when we hit the global timeout
>>per gate job [2]. This gives the running process no time to perform any
>>post-processing. In the post_gate_hook function in Neutron, we collect logs
>>from the /tmp directory, compress them and move them to /opt/stack/logs to
>>expose them.
>>
>>I have in mind two solutions to which I'd like to get feedback before
>>sending patches.
>>
>>1) In Neutron, we execute tests in post_gate_hook (dunno why). But even
>>if we moved test execution into gate_hook and tests got stuck, the
>>post_gate_hook wouldn't be triggered [3]. So the solution I propose here
>>is to terminate gate_hook N minutes before the global timeout and still
>>execute post_gate_hook (with a timeout) as a post-processing routine.
>>
>>2) The second proposal is to let timeout-wrapped commands know they are
>>about to be killed. We can send, let's say, SIGTERM instead of SIGKILL and,
>>after a certain amount of time, send SIGKILL. Example: we send SIGTERM 3
>>minutes before the global timeout, giving 'command' those 3 minutes to
>>handle the SIGTERM signal.
>>
>> timeout -s 15 -k 3 $((REMAINING_TIME-3))m bash -c "command"
>>
>>With the 2nd approach we can trap the signal that kills the running test
>>suite and collect logs with the same functions we currently have.
>>
>>
>>I would personally go with the second option, but I want to hear if anybody
>>has a better idea about post-processing in gate jobs or if there is
>>already a tool we can use to collect logs.
>
> I also like the second option; it seems less aggressive and gives an
> opportunity to catch more information before killing processes.  Ideally,
> timeouts are ultimatums for worst-case scenarios and should never be reached.

Kuba and I discussed this issue at length - I also think the 2nd
approach is reasonable but I'd like to see what more Devstack oriented
folks think.
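
For what it's worth, a minimal sketch of how a gate hook could react to the
SIGTERM from option 2 before the follow-up SIGKILL arrives might look like
the following (collect_logs, run_test_suite and the paths are hypothetical
placeholders for illustration, not the real devstack-gate or Neutron
functions; assumes bash and GNU coreutils timeout as in Kuba's example):

    #!/usr/bin/env bash
    # Hypothetical sketch of trap-based post-processing on job timeout.

    collect_logs() {
        # Compress whatever is under /tmp/logs and expose it in
        # /opt/stack/logs, mirroring what post_gate_hook does today.
        mkdir -p /opt/stack/logs
        tar -czf /opt/stack/logs/test-logs.tar.gz -C /tmp logs 2>/dev/null || true
    }

    on_term() {
        echo "SIGTERM received, collecting logs before the job is killed" >&2
        collect_logs
        exit 1
    }

    trap on_term TERM

    # bash only runs traps between foreground commands, so run the test
    # suite in the background and wait on it; wait is interruptible.
    run_test_suite &      # placeholder for the real test command
    wait $!
    collect_logs          # normal path: tests finished before any timeout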

>
>>
>>Thanks,
>>Kuba
>>
>>
>>[1] https://bugs.launchpad.net/bugs/1567668
>>[2]
>>https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L1151
>>[3]
>>https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate-wrap.sh#L581
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-11 Thread Ben Nemec
On 04/11/2016 04:54 AM, John Trowbridge wrote:
> Hola OOOers,
> 
> It came up in the meeting last week that we could benefit from a CI
> subteam with its own meeting, since CI is taking up a lot of the main
> meeting time.
> 
> I like this idea, and think we should do something similar for the other
> informal subteams (tripleoclient, UI), and also add a new subteam for
> tripleo-quickstart (and maybe one for releases?).
> 
> We should make separate ACLs for these subteams as well. The informal
> approach of adding cores who can +2 anything but are told to only +2
> what they know doesn't scale very well.

How so?  Are we planning to give people +2 even though we don't trust
them to not +2 things they shouldn't?  I remain of the opinion that if
we need ACL controls to keep someone from doing something then they
shouldn't have +2 in the first place.

Quickstart is a bit of a weird case because the regular contributors to
it have not previously been very involved in TripleO upstream so I don't
think most of us have enough context to know whether they should have
+2.  I guess the UI would fall under the same category, so I'd be in
favor of keeping those two separate, but otherwise I think we're
creating bureaucracy for its own sake.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA

2016-04-11 Thread Kris G. Lindgren
You mean outside of the LDT filing an RFE bug with neutron to get 
segmented/routed network support added, complete with an etherpad of all the 
ways we are using that at our companies and our use cases [1]?  Or where we 
(GoDaddy) came to the neutron mid-cycle in Fort Collins to further talk about 
said use case as well as to put feelers out for ip-usages-extension, which 
was committed to Neutron in the Mitaka release [2]?

These are just the things that I am aware of and have been involved in for 
neutron alone in the past 6 months; I am sure there are many more.

[1] - https://etherpad.openstack.org/p/Network_Segmentation_Usecases & 
https://bugs.launchpad.net/neutron/+bug/1458890
[2] - 
https://github.com/openstack/neutron/commit/2f741ca5f9545c388270ddab774e9e030b006d8a

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy







On 4/11/16, 9:11 AM, "Sean M. Collins"  wrote:

>To be blunt: Are we ensuring that all this work that people are
>capturing in these working groups is actually getting updated and
>communicated to the developers?
>
>As I become more involved with rolling upgrades, I will try and attend
>meetings and be available from the WG side, but I don't believe I've
>ever seen someone from the WG side come over to Neutron and say "We need
>XYZ and here's a link to what we've captured in our repo to explain what
>we mean"
>
>But then again I'm not on the neutron-drivers team or a core.
>
>Anyway, I updated what I've been involved with in the Mitaka cycle, when
>it comes to Neutron and upgrades (https://review.openstack.org/304181)
>
>-- 
>Sean M. Collins
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Monty Taylor

On 04/11/2016 09:43 AM, Allison Randal wrote:

On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas  wrote:

Reading unofficial notes [1], i found one topic very interesting:
One Platform – How do we truly support containers and bare metal under
a common API with VMs? (Ironic, Nova, adjacent communities e.g.
Kubernetes, Apache Mesos etc)

Anyone present at the meeting, please expand on those few notes on
etherpad? And how if any this feedback is getting back to the
projects?


It was really two separate conversations that got conflated in the
summary. One conversation was just being supportive of bare metal, VMs,
and containers within the OpenStack umbrella. The other conversation
started with Monty talking about his work on shade, and how it wouldn't
exist if more APIs were focused on the way users consume the APIs, and
less an expression of the implementation details of each project.
OpenStackClient was mentioned as a unified CLI for OpenStack focused
more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
but falls in the same general category of work.)

i.e. There wasn't anything new in the conversation, it was more a matter
of the developers/TC members on the board sharing information about work
that's already happening.


I agree with that - but would like to clarify the 'bare metal, VMs and 
containers' part a bit. (And in fact, I was concerned in the meeting that 
the messaging around this would be confusing because 'supporting bare 
metal' and 'supporting containers' mean two different things but we use 
one phrase to talk about it.)


It's abundantly clear at the strategic level that having OpenStack be 
able to provide both VMs and Bare Metal as two different sorts of 
resources (ostensibly but not prescriptively via nova) is one of our 
advantages. We wanted to underscore how important it is to be able to do 
that, and wanted to underscore that so that it's really clear how 
important it is any time the "but cloud should just be VMs" sentiment 
arises.


The way we discussed "supporting containers" was quite different and was 
not about nova providing containers. Rather, it was about reaching out 
to our friends in other communities and working with them on making 
OpenStack the best place to run things like kubernetes or docker swarm. 
Those are systems that ultimately need to run, and it seems that good 
integration (like kuryr with libnetwork) can provide a really strong 
story. I think pretty much everyone agrees that there is not much value 
to us or the world for us to compete with kubernetes or docker.


So, we do want to be supportive of bare metal and containers - but the 
specific _WAY_ we want to be supportive of those things is different for 
each one.


Monty


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

