Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Adrian Otto
I’d be comfortable with server_type.

Adrian

On Jul 15, 2015, at 11:51 PM, Jay Lau <jay.lau@gmail.com> wrote:

After more thinking, I agree with Hongbin that instance_type might confuse
customers with flavor. What about using server_type?

Actually, Nova has the concept of a server group; the "servers" in this group
can be a VM, PM, or container.

Thanks!

2015-07-16 11:58 GMT+08:00 Kai Qiang Wu <wk...@cn.ibm.com>:

Hi Hong Bin,

Thanks for your reply.


I think it is better to discuss the 'platform' vs instance_type vs others case
first.
Attached: the initial patch (about the discussion):
https://review.openstack.org/#/c/200401/

My other patches all depend on the above patch; if it cannot reach a
meaningful agreement, my following patches will be blocked.



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Hongbin Lu ---07/16/2015 11:47:30 AM---Kai, Sorry for the
confusion. To clarify, I was thinking how to name the field you proposed in
baymodel

From: Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 07/16/2015 11:47 AM

Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?




Kai,

Sorry for the confusion. To clarify, I was thinking how to name the field you 
proposed in baymodel [1]. I prefer to drop it and use the existing field 
‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?


Hi HongBin,

I think flavor introduces more confusion than nova_instance_type or
instance_type.


As flavors have no binding to 'vm' or 'baremetal', let me summarize the
initial question:

We have two kinds of templates for Kubernetes now (Heat templates are not as
flexible as a programming language, with if/else etc., so separate templates
are easier to maintain). One kind boots VMs, the other boots baremetal; 'VM'
or 'baremetal' here is used only for Heat template selection.


1> If we used flavor, it is a Nova-specific concept. Take two flavors as an
example, m1.small and m1.medium: both map to the 'VM' case, and both can be
used in a 'VM' environment. So we should not use m1.small as a template
identifier. That's why I think flavor is not a good fit.


2> @Adrian, we already have a --flavor-id field for baymodel; it is picked up
by the Heat templates, which boot instances with that flavor.


3> Finally, I think instance_type is better. instance_type can be used as the
Heat template identification parameter.

instance_type = 'vm' means the templates fit a normal 'VM' Heat stack
deployment.

instance_type = 'baremetal' means the templates fit an Ironic baremetal Heat
stack deployment.





Thanks!


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
   No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Hongbin Lu ---07/16/2015 04:44:14 AM---+1 for the idea of using
Nova flavor directly. Why we introduced the “platform” field to indicate “vm” or “baremetal”

From: Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 07/16/2015 04:44 AM
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?





+1 for the idea of using Nova flavor directly.

The reason we introduced the “platform” field to indicate “vm” or “baremetal” is that
Magnum needs to map a bay to a Heat template (which will be used to provision
the bay). Currently, Magnum has three layers of mapping:

• platform: vm or baremetal
• os: atomic, coreos, …
• coe: kubernetes, swarm or mesos


I think we could just replace “platform” with “flavor”, if we can populate a
list of flavors for VM and another list of flavors for baremetal.
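(Editor's illustration: a minimal Python sketch of the three-layer lookup
described above, with hypothetical template file names. Whatever the first key
ends up being called (platform, instance_type, or server_type), it selects the
Heat template together with os and coe, while the Nova flavor remains an
ordinary parameter passed into the selected template.)

    # Illustrative sketch only; the template file names are hypothetical.
    TEMPLATE_MAP = {
        # (server_type, os, coe) -> Heat template
        ('vm', 'fedora-atomic', 'kubernetes'): 'kubecluster.yaml',
        ('baremetal', 'fedora-atomic', 'kubernetes'): 'kubecluster-ironic.yaml',
    }

    def select_template(server_type, os_distro, coe):
        key = (server_type, os_distro, coe)
        if key not in TEMPLATE_MAP:
            raise LookupError('no Heat template for %s/%s/%s' % key)
        return TEMPLATE_MAP[key]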

[openstack-dev] [manila][puppet-manila] Support to configure GlusterFS drivers with Manila shares.

2015-07-16 Thread sac
Hi,

This patch [1] adds support for configuring GlusterFS drivers (both NFS and
GlusterFS-Native/FUSE) with Manila shares. I've tested the patch using the
packstack patch [2].

No obvious errors, looks good.

Requesting review of the patch.

[1] https://review.openstack.org/#/c/200811/

[2] https://review.openstack.org/#/c/184447/


-sac
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Kai Qiang Wu
+1 for server_type.

I also think it is OK.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!




Re: [openstack-dev] [Neutron]Request for help to review a patch

2015-07-16 Thread Neil.Jerram
As it is a bug fix, perhaps you could add this to the agenda for the next Neutron IRC meeting, in the Bugs section?

Regards,
Neil

From: Damon Wang
Sent: Thursday, 16 July 2015 07:18
To: OpenStack Development Mailing List (not for usage questions)
Reply To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron]Request for help to review a patch

Hi,

I know that requesting reviews on the mailing list is frowned upon, but the review process for this patch seems frozen despite it having gained two +1s :-)

The review url is: https://review.openstack.org/#/c/172875/

Thanks a lot,
Wei Wang


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-16 Thread Jay Lau
Hi Peng,


Just want to learn more about Hyper. If we create a hyper bay, can I set
up multiple hosts in a hyper bay? If so, who will do the scheduling? Does
mesos or something else integrate with hyper?

I did not find much info about Hyper cluster management.

Thanks.

2015-07-16 9:54 GMT+08:00 Peng Zhao :

>
>
>
>
>>
>>
>> -- Original --
>> From: "Adrian Otto"
>> Date: Wed, Jul 15, 2015 02:31 AM
>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>>
>> Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal
>> with Hyper
>>
>> Peng,
>>
>>  On Jul 13, 2015, at 8:37 PM, Peng Zhao  wrote:
>>
>>  Thanks Adrian!
>>
>>  Hi, all,
>>
>>  Let me recap what is hyper and the idea of hyperstack.
>>
>>  Hyper is a single-host runtime engine. Technically,
>> Docker = LXC + AUFS
>> Hyper = Hypervisor + AUFS
>> where AUFS is the Docker image.
>>
>>
>>  I do not understand the last line above. My understanding is that AUFS
>> == UnionFS, which is used to implement a storage driver for Docker. Others
>> exist for btrfs, and devicemapper. You select which one you want by setting
>> an option like this:
>>
>>  DOCKEROPTS="-s devicemapper"
>>
>>  Are you trying to say that with Hyper, AUFS is used to provide layered
>> Docker image capabilities that are shared by multiple hypervisor guests?
>>
>> Peng >>> Yes, AUFS implies the Docker images here.
>
> My guess is that you are trying to articulate that a host running Hyper is
>> a 1:1 substitute for a host running Docker, and will respond using the
>> Docker remote API. This would result in containers running on the same host
>> that have a superior security isolation than they would if LXC was used as
>> the backend to Docker. Is this correct?
>>
>> Peng>>> Exactly
>
>>
>>  Due to the shared-kernel nature of LXC, Docker lacks the necessary
>> isolation in a multi-tenant CaaS platform, and this is what
>> Hyper/hypervisor is good at.
>>
>>  And because of this, most CaaS today run on top of IaaS:
>> https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/388x275/e286dea1266b46c1999d566b0f9e326b/iaas.png
>> Hyper enables the native, secure, bare-metal CaaS
>> https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/395x244/828ad577dafb3f357e95899e962651b2/caas.png
>>
>>  From the tech stack perspective, Hyperstack makes Magnum run in
>> parallel with Nova, not atop it.
>>
>>
>>  For this to work, we’d expect to get a compute host from Heat, so if the
>> bay type were set to “hyper”, we’d need to use a template that can produce
>> a compute host running Hyper. How would that host be produced, if we do not
>> get it from nova? Might it make more sense to make a virt driver for nova
>> that could produce a Hyper guest on a host already running the nova-compute
>> agent? That way Magnum would not need to re-create any of Nova’s
>> functionality in order to produce nova instances of type “hyper”.
>>
>
> Peng >>> We don’t have to get the physical host from nova. Let’s say
>OpenStack = Nova+Cinder+Neutron+Bare-metal+KVM, so “AWS-like IaaS for
> everyone else”
>HyperStack= Magnum+Cinder+Neutron+Bare-metal+Hyper, then “Google-like
> CaaS for everyone else”
>
> Ideally, customers should deploy a single OpenStack cluster, with both
> nova/kvm and magnum/hyper. I’m looking for a solution to make nova/magnum
> co-exist.
>
> Is Hyper compatible with libvirt?
>>
>
> Peng>>> We are working on the libvirt integration, expect in v0.5
>
>
>>  Can Hyper support nested Docker containers within the Hyper guest?
>>
>
> Peng>>> Docker in Docker? In a HyperVM instance, there is no docker
> daemon, cgroup and namespace (except MNT for pod). VM serves the purpose
> of isolation. We plan to support cgroup and namespace, so you can control
> whether multiple containers in a pod share the same namespace, or are
> completely isolated. But in either case, no docker daemon is present.
>
>
>>  Thanks,
>>
>>  Adrian Otto
>>
>>
>>  Best,
>> Peng
>>
>>  -- Original --
>> From: "Adrian Otto"
>> Date: Tue, Jul 14, 2015 07:18 AM
>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>>
>> Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on
>> metal with Hyper
>>
>> Team,
>>
>>  I would like to ask for your input about adding support for Hyper in
>> Magnum:
>>
>>  https://blueprints.launchpad.net/magnum/+spec/hyperstack
>>
>>  We touched on this in our last team meeting, and it was apparent that we
>> need a higher level of understanding of the technology before weighing
>> in on the directional approval of this blueprint. Peng Zhao and Xu Wang
>> have graciously agreed to respond to this thread to address questions about
>> how the technology works, and how it could be integrated with Magnum.
>>
>>  Please take a moment to review the blueprint, and ask questions.

[openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Vladimir Kozhukalov
Dear colleagues,

I'd like to suggest getting rid of the Fuel upgrade tarball and converting this
thing into a fuel-upgrade rpm package. Since we've switched to online rpm/deb
based upgrades, it seems we can stop packaging rpm/deb repositories and
docker containers into a tarball and instead package the upgrade python script
into an rpm. That would decrease the complexity of the build process as well as
make it a little bit faster.

What do you think of this?


Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][port-security] Could not create vm in network with port-security-enabled=False

2015-07-16 Thread ????
Hi all,
I am trying to use the port-security-enabled feature on a port.
When I create the net and subnet:
 neutron net-create net2 --port-security-enabled=False
 neutron subnet-create net2 6.6.6.0/24 --enable-dhcp=False --name subnet2
it works well.
Then I create a VM in the dashboard choosing net2; it returns "No valid host
was found. There are not enough hosts available." The log in
nova-conductor.log says:
ERROR nova.scheduler.utils [req-a0cf72f9-2887-4d60-80f5-e515b72d64be 
6acf7be037184d2eaa6db168056a154a 6e95e4dfcb624c1fb4c14ed0ab1464a2 - - -] 
[instance: 29b7e973-eda1-43e7-a1d8-fd7d171a9c28] Error from last host: 
dvr-compute1.novalocal (node dvr-compute1.novalocal): [u'Traceback (most recent 
call last):\n', u'  File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2219, in 
_do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2362, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
29b7e973-eda1-43e7-a1d8-fd7d171a9c28 was re-scheduled: Network requires 
port_security_enabled and subnet associated in order to apply security 
groups.\n']
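(Editor's note: the error above is Nova refusing to apply security groups,
including the default one, to a network whose port security is disabled. A
possible workaround, sketched below with python-neutronclient and
python-novaclient and entirely hypothetical credentials and names, is to
pre-create the port yourself and boot from it, so that no security group is
requested.)

    from neutronclient.v2_0 import client as neutron_client
    from novaclient import client as nova_client

    # All endpoint/credential wiring below is hypothetical.
    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://controller:5000/v2.0')
    nova = nova_client.Client('2', 'admin', 'secret', 'demo',
                              'http://controller:5000/v2.0')

    # The port inherits port_security_enabled=False from net2, so no
    # security group needs to be (or can be) attached to it.
    net_id = neutron.list_networks(name='net2')['networks'][0]['id']
    port = neutron.create_port({'port': {'network_id': net_id}})['port']

    nova.servers.create('vm2',
                        image=nova.images.find(name='cirros'),
                        flavor=nova.flavors.find(name='m1.tiny'),
                        nics=[{'port-id': port['id']}])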


But when I create the VM in the dashboard, I don't choose any security group.


BTW, does Icehouse support port-security? I configured extension_drivers in
devstack, but neutron ext-list does not show port-security.
Could anyone help?
Thank you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why is osapi_v3.enabled = False by default?

2015-07-16 Thread Alex Xu
2015-07-16 7:54 GMT+08:00 Ken'ichi Ohmichi :

> 2015-07-16 3:03 GMT+09:00 Sean Dague :
> > On 07/15/2015 01:44 PM, Matt Riedemann wrote:
> >> The osapi_v3.enabled option is False by default [1] even though it's
> >> marked as the CURRENT API and the v2 API is marked as SUPPORTED (and
> >> we've frozen it for new feature development).
> >>
> >> I got looking at this because osapi_v3.enabled is True in nova.conf in
> >> both the check-tempest-dsvm-nova-v21-full job and non-v21
> >> check-tempest-dsvm-full job, but only in the v21 job is
> >> "x-openstack-nova-api-version: '2.1'" used.
> >>
> >> Shouldn't the v2.1 API be enabled by default now?
> >>
> >> [1]
> >>
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n44
>
> Oops, nice catch.
> Yeah, we need to make the default enabled.
>
> > Honestly, we should probably deprecate osapi_v3.enabled and make it
> > osapi_v21 (or osapi_v2_microversions) so as to not confuse people
> further.
>
> +1 for renaming it to osapi_v21 (or osapi_v2_microversions).
>

Why do we still need this option?
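(For reference: the option in question, paraphrased from the
nova/api/openstack/__init__.py link above, is a plain oslo.config boolean in
the [osapi_v3] group, so the change under discussion amounts to flipping one
default. Sketch with abridged help text, not the exact Nova source.)

    from oslo_config import cfg

    api_opts = [
        cfg.BoolOpt('enabled',
                    default=True,   # currently defaults to False
                    help='Whether the V2.1 API is enabled or not'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(api_opts, group='osapi_v3')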


>
> Thanks
> Ken Ohmichi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][nova] proxy quota/limits info from neutron

2015-07-16 Thread Alex Xu
2015-07-15 22:57 GMT+08:00 Matt Riedemann :

>
>
> On 7/15/2015 3:24 AM, Alex Xu wrote:
>
>>
>>
>> 2015-07-15 5:14 GMT+08:00 Matt Riedemann:
>>
>>
>>
>>
>> On 7/14/2015 3:43 PM, Cale Rath wrote:
>>
>> Hi,
>>
>> I created a patch to fail on the proxy call to Neutron for used
>> limits,
>> found here: https://review.openstack.org/#/c/199604/
>>
>> This patch was done because of this:
>>
>> http://docs.openstack.org/developer/nova/project_scope.html?highlight=proxy#no-more-api-proxies
>> ,
>> where it’s stated that Nova shouldn’t be proxying API calls.
>>
>> That said, Matt Riedemann brings up the point that this breaks
>> the case
>> where Neutron is installed and we want to be more graceful,
>> rather than
>> just raising an exception.  Here are some options:
>>
>> 1. fail - (the code in the patch above)
>> 2. proxy to neutron for floating ips and security groups -
>> that's what
>> the original change was doing back in havana
>> 3. return -1 or something for floatingips/security groups to
>> indicate
>> that we don't know, you have to get those from neutron
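(Editor's illustration of option 2, assuming python-neutronclient; the
function name, client wiring, and returned keys are a sketch, not the actual
nova-api code.)

    from neutronclient.v2_0 import client as neutron_client

    def neutron_used_limits(token, neutron_url, project_id):
        # Option 2 sketch: read security-group and floating-IP quotas from
        # Neutron instead of reporting Nova's meaningless values.
        neutron = neutron_client.Client(token=token,
                                        endpoint_url=neutron_url)
        quota = neutron.show_quota(project_id)['quota']
        return {
            'maxSecurityGroups': quota.get('security_group'),
            'maxTotalFloatingIps': quota.get('floatingip'),
        }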
>>
>> Does anybody have an opinion on which option we should do
>> regarding API
>> proxies in this case?
>>
>> Thanks,
>>
>> Cale Rath
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> I prefer the proxy option; even though we don't want to do more
>> proxies to other services, it's the least of all evils here in my
>> opinion.
>>
>> I don't think we can do #1, that breaks anyone using those APIs and
>> is using Neutron, so it's a non-starter.
>>
>>
>> agree
>>
>>
>> #3 is an API change in semantics which would at least be a
>> microversion and is kind of clunky.
>>
>>
>> agree too~
>>
>>
>> For #2 we at least have the nova.network.base_api which we didn't
>> have in Havana when I was originally working on this, that would
>> abstract the neutron-specific cruft out of the nova-api code.  The
>> calls to neutron were pretty simple from what I remember - we could
>> just resurrect the old patch:
>>
>> https://review.openstack.org/#/c/43822/
>>
>>
>> +1, but it looks like this needs a new microversion also. It means that
>> after the 2.x version, this api value is valid for neutron; before the 2.x
>> version, don't trust this api...
>>
>
> I'm not exactly clear on why we couldn't implement this as a bug fix for
> v2.0?  I guess because of the standard reason we give for all microversions
> which is discoverability.
>

Yes... it is the standard reason.


>
> I guess in the v2.0 case we could just log the warning (option 4). It's
> not great, but at least it's a thing that an operator could find if they
> are using v2.0 and expecting proper quotas/limits values for security
> groups and floating IPs when using neutron but talking to the nova-api.
>

This info is more important for API users, so is the API doc enough?


>
>
>>
>>
>> Another option is #4: we mark the bug as won't-fix and we log a
>> warning if neutron is configured saying some of the resources aren't
>> going to be correct; use the neutron API to get information for
>> quotas on security groups, floating IPs, etc.  That's also kind of
>> gross IMO, but it's an option.
>>
>>
>> if we plan to deprecate the network proxy api in the not-too-distant
>> future, this is the easy option.
>>
>>
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__

Re: [openstack-dev] [nova] Why is osapi_v3.enabled = False by default?

2015-07-16 Thread Alex Xu
FYI, this should be part of the work in
https://github.com/openstack/nova-specs/blob/master/specs/liberty/approved/nova-api-remove-v3.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Vladimir Kozhukalov
By the way, the first step for this to happen is to move
stackforge/fuel-web/fuel_upgrade_system into a separate repository.
Fortunately, this directory is not a place where the code is continuously
changing (changes are rather seldom), and moving this project will
barely affect the whole development flow. So, the action flow is as follows:

0) patch to openstack-infra for creating new repository (workflow -1)
1) patch to Fuel CI to create verify jobs
2) freeze stackforge/fuel-web/fuel_upgrade_system directory
3) create upstream repository which is to be sucked in by openstack infra
4) patch to openstack-infra for creating new repository (workflow +1)
5) patch with rpm spec for fuel-upgrade package and other infrastructure
files like run_tests.sh
6) patch to perestroika to build fuel-upgrade package from new repo
7) patch to fuel-main to remove upgrade tarball
8) patch to Fuel CI to remove upgrade tarball
9) patch to fuel-web to remove fuel_upgrade_system directory



Vladimir Kozhukalov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-16 Thread Oleg Bondarev
+1

On Thu, Jul 16, 2015 at 2:04 AM, Brian Haley  wrote:

> +1
>
>
> On 07/15/2015 02:47 PM, Carl Baldwin wrote:
>
>> As the Neutron L3 Lieutenant along with Kevin Benton for control
>> plane, and Assaf Muller for testing, I would like to propose Cedric
>> Brandily as a member of the Neutron core reviewer team under these
>> areas of focus.
>>
>> Cedric has been a long time contributor to Neutron showing expertise
>> particularly in these areas.  His knowledge and involvement will be
>> very important to the project.  He is a trusted member of our
>> community.  He has been reviewing consistently [1][2] and community
>> feedback that I've received indicates that he is a solid reviewer.
>>
>> Existing Neutron core reviewers from these areas of focus, please vote
>> +1/-1 for the addition of Cedric to the team.
>>
>> Thanks!
>> Carl Baldwin
>>
>> [1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
>> [2] http://stackalytics.com/report/contribution/neutron-group/90
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Should we document the using of "device:owner" of the PORT ?

2015-07-16 Thread Salvatore Orlando
It is not possible to constrain this attribute to an enum, because there is
no fixed list of device owners. Nevertheless it's good to document known
device owners.

Likewise, the API layer should have checks in place to ensure accidental
updates to this attribute do not impact control plane functionality, or at
least do not leave the system in an inconsistent state.

Salvatore
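(Editor's illustration of the kind of API-layer check being suggested; the
prefix tuple and helper are hypothetical, not existing Neutron code.)

    # Hypothetical guard: reject non-admin attempts to set device_owner to
    # a value reserved for internal plumbing.
    RESERVED_DEVICE_OWNER_PREFIXES = ('network:', 'neutron:')

    def check_device_owner_update(is_admin, new_device_owner):
        if (new_device_owner.startswith(RESERVED_DEVICE_OWNER_PREFIXES)
                and not is_admin):
            raise ValueError('device_owner values starting with %s are '
                             'reserved' % (RESERVED_DEVICE_OWNER_PREFIXES,))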


On 16 July 2015 at 07:51, Kevin Benton  wrote:

> I'm guessing Salvatore might just be suggesting that we restrict users
> from populating values that have special meaning (e.g. l3 agent router
> interface ports). I don't think we could constrain the owner
> field to essentially an enum at this point.
>
> On Wed, Jul 15, 2015 at 10:22 PM, Mike Kolesnik 
> wrote:
>
>>
>> --
>>
>> Yes please.
>>
>> This would be a good starting point.
>> I also think that the ability to edit it, as well as the values it
>> can be set to, should be constrained.
>>
>> FYI the oVirt project uses this field to identify ports it creates and
>> manages.
>> So if you're going to constrain it to something, it should probably be
>> configurable so that managers other than Nova can continue to use Neutron.
>>
>>
>> As you have surely noticed, there are several code paths which rely on an
>> appropriate value being set in this attribute.
>> This means a user can potentially trigger malfunctions by sending PUT
>> requests to edit this attribute.
>>
>> Summarizing, I think that documenting its usage is a good starting point,
>> but I believe we should address the way this attribute is exposed at the
>> API layer as well.
>>
>> Salvatore
>>
>>
>>
>> On 13 July 2015 at 11:52, Wang, Yalei  wrote:
>>
>>> Hi all,
>>> The device:owner of the port is defined as a 255-byte string, and is widely
>>> used now, indicating the use of the port.
>>> It seems we can fill it freely, and a user can also update/set it from the
>>> cmd line (port-update $PORT_ID --device_owner), and I can't find a guideline
>>> for its use.
>>>
>>> What is its function? It indicates the use of the port, and it seems
>>> horizon also uses it to show the topology.
>>> And nova really needs it editable. Should we at least document all of the
>>> possible values in some guide to make them clear? If yes, I can do it.
>>>
>>> I got these usages from the code (maybe not complete, please point out any I missed):
>>>
>>> From constants.py,
>>> DEVICE_OWNER_ROUTER_HA_INTF = "network:router_ha_interface"
>>> DEVICE_OWNER_ROUTER_INTF = "network:router_interface"
>>> DEVICE_OWNER_ROUTER_GW = "network:router_gateway"
>>> DEVICE_OWNER_FLOATINGIP = "network:floatingip"
>>> DEVICE_OWNER_DHCP = "network:dhcp"
>>> DEVICE_OWNER_DVR_INTERFACE = "network:router_interface_distributed"
>>> DEVICE_OWNER_AGENT_GW = "network:floatingip_agent_gateway"
>>> DEVICE_OWNER_ROUTER_SNAT = "network:router_centralized_snat"
>>> DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
>>>
>>> And from debug_agent.py
>>> DEVICE_OWNER_NETWORK_PROBE = 'network:probe'
>>> DEVICE_OWNER_COMPUTE_PROBE = 'compute:probe'
>>>
>>> And setting from nova/network/neutronv2/api.py,
>>> 'compute:%s' % instance.availability_zone
>>>
>>>
>>> Thanks all!
>>> /Yalei
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please add 'Fuel' to list topic categories

2015-07-16 Thread Thierry Carrez
Qiming Teng wrote:
> I believe we are all receiving a large number of Fuel-related messages
> every day, but not all of us have the bandwidth to read them.
> Maybe we can consider adding 'Fuel' to the topic categories we can check
> on/off when customising the subscription.
> 
> Currently, the only option is to filter out "all messages that do not match
> any topic filter", which is obvious overkill.
> 
> Thanks for considering this.

Added.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Aleksandra Fedorova
Hi, Vladimir,

I like the initiative, just to add some steps:

10) patch to fuel-qa/ and jenkins jobs to change the workflow of upgrade tests,
11) clarification on how the upgrade should be tested (against which
repositories and ISO images) and how an update of the upgrade rpm should be
tested,
12) documentation fixes on how the upgrade should be performed,
13) patch to HCF and release checklists to change the publishing process.

I see 11) as the most unclear here, so please include it in your
considerations with high priority. Let's involve the QA team from the very
beginning.






-- 
Aleksandra Fedorova
Fuel CI Engineer
bookwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Tags, explain like I am five?

2015-07-16 Thread Thierry Carrez
Joshua Harlow wrote:
>> Tags can be proposed by anyone, not only by the TC and they get
>> discussed and voted on gerrit. The proposed tags need to be as objective
>> as possible. And there is a working group
>> (https://etherpad.openstack.org/p/ops-tags-June-2015) among operators
>> trying to define tags that may help operators to judge if a project is
>> good for them to use or not.
> 
> So my only thought about this is that ^ sounds like a lot of red tape,
> and I really wonder if there is any way to make this more 'relaxed' (and
> also 'fun') and/or less strict but still achieve the same result
> ("objectiveness"...).

Elevator pitch version:

Tags are a specific type of project metadata that we publish to
facilitate navigation in the "big tent" of OpenStack projects. Tags are
binary, opinionated definitions that objectively apply (or not apply) to
projects.

I don't really like the idea of a popularity contest to define "HA" or
"scales" -- anyone with a stake in the game and their cat will upvote or
downvote for no reason. I prefer to define HA in clear terms and have
some group maintain the tag across the set of projects.

I could imagine *some* project metadata to be based on popular votes,
where there is no real alternative -- for example the ops-defined data
on deployment is based on the user survey, which is certainly not exact
science, but our best guess. I just fail to see how *generally* relying
on popularity contests to define anything would result in better
information for our users...

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Should we document the using of "device:owner" of the PORT ?

2015-07-16 Thread Kevin Benton
What do you think of just blocking all PUTs to that field? Is that a
feasible change without inducing widespread riots about breaking changes?

On Thu, Jul 16, 2015 at 2:53 AM, Salvatore Orlando wrote:

> It is not possible to constrain this attribute to an enum, because there
> is no fixed list of device owners. Nevertheless it's good to document known
> device owners.
>
> Likewise, the API layer should have checks in place to ensure accidental
> updates to this attribute do not impact control plane functionality, or at
> least do not leave the system in an inconsistent state.
>
> Salvatore
>
>

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-16 Thread Peng Zhao
Hi Jay,
Yes, we are working with the community to integrate Hyper with Mesos and K8S.
Since Hyper uses the Pod as the default job unit, it is quite easy to integrate
with K8S. Mesos takes a bit more effort, but is still straightforward.
We expect to finish both integrations in v0.4, in early August.
Best, Peng
- Hyper - Make VM run like Container


On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau <jay.lau@gmail.com> wrote:
Hi Peng,


Just want to learn more about Hyper. If we create a hyper bay, can I set up
multiple hosts in a hyper bay? If so, who will do the scheduling? Does mesos or
something else integrate with hyper?

I did not find much info about Hyper cluster management.

Thanks.


Re: [openstack-dev] [neutron] What does flavor mean for a network?

2015-07-16 Thread Neil Jerram

Thanks everyone for your responses...

On 15/07/15 21:01, Doug Wiegley wrote:
That begins to looks like nova’s metadata tags and scheduler, which is 
a valid use case. The underpinnings of flavors could do this, but it’s 
not in the initial implementation.


doug

On Jul 15, 2015, at 12:38 PM, Kevin Benton wrote:


Wouldn't it be valid to assign flavors to groups of provider 
networks? e.g. a tenant wants to attach to a network that is wired up 
to a 40g router so he/she chooses a network of the "fat pipe" flavor.


Indeed.

Otherwise, why does 'flavor:network' exist at all in the current codebase?

As the code currently stands, 'flavor:network' appears to be consumed 
only by agent/linux/interface.py, with the logic that if the 
interface_driver setting is set to MetaInterfaceDriver, the interface 
driver class that is actually used for a particular network will be 
derived by using the network's 'flavor:network' value as a lookup key in 
the dict specified by the meta_flavor_driver_mappings setting.
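(Editor's paraphrase of that lookup as a sketch; the mapping entries are
example values for the meta_flavor_driver_mappings setting, not a
recommendation.)

    # Sketch of the MetaInterfaceDriver selection described above: the
    # network's 'flavor:network' value keys the interface driver choice.
    meta_flavor_driver_mappings = {
        'ovs': 'neutron.agent.linux.interface.OVSInterfaceDriver',
        'linuxbridge': 'neutron.agent.linux.interface.BridgeInterfaceDriver',
    }

    def interface_driver_for(network):
        return meta_flavor_driver_mappings[network.get('flavor:network')]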


Is that an intended part of the flavors design?

I hope it doesn't sound like I'm just complaining!  My reason for asking 
these questions is that I'm working at 
https://review.openstack.org/#/c/198439/ on a type of network that works 
through routing on each compute host instead of bridging, and two of the 
consequences of that are that


(1) there will not be L2 broadcast connectivity between the instances 
attached to such a network, whereas there would be with all existing 
Neutron network types


(2) the DHCP agent needs some changes to provide DHCP service on 
unbridged TAP interfaces.


Probably best here not to worry too much about the details.  But, at a 
high level:


- there is an aspect of the network's behavior that needs to be 
portrayed in the UI, so that tenants/projects can know when it is 
appropriate to attach instances to that network


- there is an aspect of the network's implementation that the DHCP agent 
needs to be aware of, so that it can adjust accordingly.


I believe the flavor:network 'works', for these purposes, in the senses 
that it is portrayed in the UI, and that it is available to software 
components such as the DHCP agent.  So I was wondering whether 
'flavor:network' would be the correct location in principle for a value 
identifying this kind of network, according to the intention of the 
flavors enhancement.





On Wed, Jul 15, 2015 at 10:40 AM, Madhusudhan Kandadai wrote:




On Wed, Jul 15, 2015 at 9:25 AM, Kyle Mestery <mest...@mestery.com> wrote:

On Wed, Jul 15, 2015 at 10:54 AM, Neil Jerram <neil.jer...@metaswitch.com> wrote:

I've been reading available docs about the forthcoming
Neutron flavors framework, and am not yet sure I
understand what it means for a network.


In reality, this is envisioned more for service plugins (e.g.
flavors of LBaaS, VPNaaS, and FWaaS) than core neutron resources.

Yes, rightly put. This is for service plugins, and it is part of
extensions rather than core network resources.


Is it a way for an admin to provide a particular kind of
network, and then for a tenant to know what they're
attaching their VMs to?


I'll defer to Madhu who is implementing this, but I don't
believe that's the intention at all.

Currently, an admin will be able to assign particular flavors;
unfortunately, these are not going to be tenant-specific flavors.



To be clear - I wasn't suggesting or asking for tenant-specific 
flavors.  I only meant that a tenant might choose which network to 
attach a particular set of VMs to, depending on the flavors of the 
available networks.  (E.g. as in Kevin's example above.)



As you might have seen in the review, we are just using tenant_id
to bypass the keystone checks implemented in base.py, and it is
not stored in the db either. It is something to do in the future,
and the same is documented in the blueprint.


How does it differ from provider:network-type?  (I guess,
because the latter is supposed to be for implementation
consumption only - but is that correct?)


Flavors are created and curated by operators, and consumed by
API users.

+1



Many thanks,
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why is osapi_v3.enabled = False by default?

2015-07-16 Thread Sean Dague
On 07/15/2015 08:12 PM, GHANSHYAM MANN wrote:
> On Thu, Jul 16, 2015 at 3:03 AM, Sean Dague  wrote:
>> On 07/15/2015 01:44 PM, Matt Riedemann wrote:
>>> The osapi_v3.enabled option is False by default [1] even though it's
>>> marked as the CURRENT API and the v2 API is marked as SUPPORTED (and
>>> we've frozen it for new feature development).
>>>
>>> I got looking at this because osapi_v3.enabled is True in nova.conf in
>>> both the check-tempest-dsvm-nova-v21-full job and non-v21
>>> check-tempest-dsvm-full job, but only in the v21 job is
>>> "x-openstack-nova-api-version: '2.1'" used.
>>>
>>> Shouldn't the v2.1 API be enabled by default now?
>>>
>>> [1]
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n44
>>
>> Honestly, we should probably deprecate osapi_v3.enabled and make it
>> osapi_v21 (or osapi_v2_microversions) so as to not confuse people further.
>>
> 
> Nice catch. We might have just forgotten to make it default to True.
> 
> How about just deprecating it, removing it in N, and making v21 enabled all
> the time (irrespective of osapi_v3.enabled) since it is CURRENT now.

Yeah, that's probably a fine approach as well. I don't think we need an
option any more here.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Oleg Gelbukh
Vladimir,

I fully support moving fuel-upgrade-system into a repository of its own.
However, I'm not 100% sure how docker containers are going to appear on the
upgraded master node. Do we have a public repository of Docker images
already? Or are we going to build them from scratch during the upgrade?

--
Best regards,
Oleg Gelbukh

>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-16 Thread Jay Lau
Thanks Peng; then I can see two integration points for Magnum and Hyper:

1) Once the Hyper and k8s integration is finished, we can deploy k8s in two
modes: docker mode and hyper mode; the end user can select which mode they
want to use. In that case, we do not need to create a new bay, but we may
need some enhancements to the current k8s bay.

2) After the mesos and hyper integration, we can treat mesos plus hyper as a
new bay type in magnum, just like what we are doing now for mesos+marathon.

Thanks!

2015-07-16 17:38 GMT+08:00 Peng Zhao :

>Hi Jay,
>
> Yes, we are working with the community to integrate Hyper with Mesos and
> K8S. Since Hyper uses Pod as the default job unit, it is quite easy to
> integrate with K8S. Mesos takes a bit more effort, but is still
> straightforward.
>
> We expect to finish both integrations in v0.4, early August.
>
> Best,
> Peng
>
> -
> Hyper - Make VM run like Container
>
>
>
> On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau  wrote:
>
>> Hi Peng,
>>
>>
>> Just want to learn more about Hyper. If we create a hyper bay, can I set
>> up multiple hosts in a hyper bay? If so, who will do the scheduling -- does
>> mesos or something else integrate with hyper?
>>
>> I did not find much info about hyper cluster management.
>>
>> Thanks.
>>
>> 2015-07-16 9:54 GMT+08:00 Peng Zhao :
>>
>>>
>>>
>>>
>>>


 -- Original Message --
 From: "Adrian Otto"
 Date: Wed, Jul 15, 2015 02:31 AM
 To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>

 Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

 Peng,

  On Jul 13, 2015, at 8:37 PM, Peng Zhao  wrote:

  Thanks Adrian!

  Hi, all,

  Let me recap what is hyper and the idea of hyperstack.

  Hyper is a single-host runtime engine. Technically,
 Docker = LXC + AUFS
 Hyper = Hypervisor + AUFS
 where AUFS is the Docker image.


  I do not understand the last line above. My understanding is that AUFS
 == UnionFS, which is used to implement a storage driver for Docker. Others
 exist for btrfs and devicemapper. You select which one you want by setting
 an option like this:

  DOCKEROPTS=”-s devicemapper”

  Are you trying to say that with Hyper, AUFS is used to provide
 layered Docker image capability that are shared by multiple hypervisor
 guests?

 Peng >>> Yes, AUFS implies the Docker images here.
>>>
>>> My guess is that you are trying to articulate that a host running Hyper
 is a 1:1 substitute for a host running Docker, and will respond using the
 Docker remote API. This would result in containers running on the same host
 that have a superior security isolation than they would if LXC was used as
 the backend to Docker. Is this correct?

 Peng>>> Exactly
>>>

  Due to the shared-kernel nature of LXC, Docker lacks the necessary
 isolation in a multi-tenant CaaS platform, and this is what
 Hyper/hypervisor is good at.

  And because of this, most CaaS today run on top of IaaS:
 https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/388x275/e286dea1266b46c1999d566b0f9e326b/iaas.png
 Hyper enables the native, secure, bare-metal CaaS
 https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/395x244/828ad577dafb3f357e95899e962651b2/caas.png

  From the tech stack perspective, Hyperstack turns Magnum to run in
 parallel with Nova, not running atop it.


  For this to work, we’d expect to get a compute host from Heat, so if
 the bay type were set to “hyper”, we’d need to use a template that can
 produce a compute host running Hyper. How would that host be produced, if
 we do not get it from nova? Might it make more sense to make a virt driver
 for nova that could produce a Hyper guest on a host already running the
 nova-compute agent? That way Magnum would not need to re-create any of
 Nova’s functionality in order to produce nova instances of type “hyper”.

>>>
>>> Peng >>> We don’t have to get the physical host from nova. Let’s say
>>> OpenStack = Nova+Cinder+Neutron+Bare-metal+KVM, so "AWS-like IaaS for
>>> everyone else";
>>> HyperStack = Magnum+Cinder+Neutron+Bare-metal+Hyper, then "Google-like
>>> CaaS for everyone else".
>>>
>>> Ideally, customers should deploy a single OpenStack cluster, with both
>>> nova/kvm and magnum/hyper. I’m looking for a solution to make nova/magnum
>>> co-exist.
>>>
>>> Is Hyper compatible with libvirt?

>>>
>>> Peng>>> We are working on the libvirt integration, expect in v0.5
>>>
>>>
  Can Hyper support nested Docker containers within the Hyper guest?

>>>
>>> Peng>>> Docker in Docker? In a HyperVM instance, there is no docker
>>> daemon, cgroup and namespace (except MNT for pod). VM s

Re: [openstack-dev] [Nova] Device names supplied to the boot request

2015-07-16 Thread Sean Dague
On 07/15/2015 01:41 PM, Andrew Laski wrote:
> On 07/15/15 at 12:19pm, Matt Riedemann wrote:

>> The other part of the discussion is around the API changes, not just
>> for libvirt, but having a microversion that removes the device from
>> the request so it's no longer optional and doesn't provide some false
>> sense that it works properly all of the time.  We talked about this in
>> the nova channel yesterday and I think the thinking was we wanted to
>> get agreement on dropping that with a microversion before moving
>> forward with the libvirt change you have to ignore the requested
>> device name.
>>
>> From what I recall, this was supposed to really only work reliably for
>> xen but now it actually might not, and would need to be tested again.
>> Seems we could start by checking the xen CI to see if it is running
>> the test_minimum_basic scenario test or anything in
>> test_attach_volume.py in Tempest.
> 
> This doesn't really work reliably for xen either, depending on what is
> being done.  For the xenapi driver Nova converts the device name
> provided into an integer based on the trailing letter, so 'vde' becomes
> 4, and asks xen to mount the device based on that int.  Xen does honor
> that integer request so you'll get an 'e' device, but you could be
> asking for hde and get an xvde or vice versa.

So this sounds like it's basically not working today. For Linux guests
it really can't work without custom in-guest code anyway, given how
device enumeration works.

That feels to me like we remove it from the API with a microversion, and
when we do that just comment that trying to use this before that
microversion is highly unreliable (possibly dangerous) and may just
cause tears.

...

On a slight tangent, probably a better way to provide mount stability to
the guest is with FS labels. libvirt is already labeling the filesystems
it creates, and xenserver probably could as well. The infra folks ran
into an issue yesterday
http://status.openstack.org//elastic-recheck/#1475012 where using that
info was their fix.

It's not the same thing as deterministic devices, but deterministic
devices really aren't a thing on first boot unless you have guest agent
code, or only boot with one disk and hot plug the rest carefully.
Neither are really fun answers.
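
(For illustration, in-guest code can already get a stable device lookup from
filesystem labels via the udev-managed /dev/disk/by-label tree; a minimal
sketch, with the "ephemeral0" label as an assumed example:)

    import os

    def device_for_label(label):
        # udev keeps /dev/disk/by-label/<label> pointing at the right node,
        # however the kernel happened to enumerate vda/vdb/xvdb this boot.
        return os.path.realpath(os.path.join('/dev/disk/by-label', label))

    print(device_for_label('ephemeral0'))  # e.g. /dev/vdb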

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Freezer] Proposing new core reviewers

2015-07-16 Thread Marzi, Fausto
All,
At the Freezer Team, we think it's time to revisit the core reviewers for the 
Project.

The idea behind this would be to add as core reviewers the engineers that
delivered a complete feature in the project and actively participated in the
other reviews, meetings and conversations on the IRC channel in the last 6 months.

The current core contributors that haven't been active on any of the listed
activities in the last 12 months will be removed.

Proposed engineers:

-  Jonas Pfannschmidt: Delivered the Freezer Web UI integrated in Horizon
-  Fabrizio Vanni: Delivered the Freezer Scheduler and the Freezer API
-  Guillermo "Memo" Garcia: Delivered the Windows support and contributed to the Web UI


Many Thanks,
Fausto Marzi - Freezer PTL
#openstack-freezer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Vladimir Kozhukalov
Oleg,

All docker containers are currently distributed as rpm packages. A little
bit surprising, isn't it? But it works, and we can easily deliver updates
using this plain old rpm based mechanism. The package in 6.1GA is called
fuel-docker-images-6.1.0-1.x86_64.rpm. So, the upgrade flow would be like this:
0) add the new (say 7.0) repository into /etc/yum.repos.d/some.repo
1) install fuel-upgrade package (yum install fuel-upgrade-7.0)
2) fuel-upgrade package has all other packages (docker, bootstrap image,
target images, puppet modules) as its dependencies
3) run the fuel-upgrade script (say /usr/bin/fuel-upgrade), which performs all
necessary actions like moving files, running new containers, and uploading
fixtures into nailgun via the REST API.

It is necessary to note that we are talking here about Fuel master node
upgrades, not about Openstack cluster upgrades (which is the feature you
are working on).

Vladimir Kozhukalov

On Thu, Jul 16, 2015 at 1:22 PM, Oleg Gelbukh  wrote:

> Vladimir,
>
> I am fully support moving fuel-upgrade-system into repository of its own.
> However, I'm not 100% sure how docker containers are going to appear on the
> upgraded master node. Do we have public repository of Docker images
> already? Or we are going to build them from scratch during the upgrade?
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Thu, Jul 16, 2015 at 11:46 AM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> By the way, first step for this to happen is to move
>>  stackforge/fuel-web/fuel_upgrade_system into a separate repository.
>> Fortunately, this directory is not the place where the code is continuously
>> changing (changes are rather seldom) and moving this project is going to
>> barely affect the whole development flow. So, action flow is as follows
>>
>> 0) patch to openstack-infra for creating new repository (workflow -1)
>> 1) patch to Fuel CI to create verify jobs
>> 2) freeze stackforge/fuel-web/fuel_upgrade_system directory
>> 3) create upstream repository which is to be sucked in by openstack infra
>> 4) patch to openstack-infra for creating new repository (workflow +1)
>> 5) patch with rpm spec for fuel-upgrade package and other infrastructure
>> files like run_tests.sh
>> 6) patch to perestroika to build fuel-upgrade package from new repo
>> 7) patch to fuel-main to remove upgrade tarball
>> 8) patch to Fuel CI to remove upgrade tarball
>> 9) patch to fuel-web to remove fuel_upgrade_system directory
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Thu, Jul 16, 2015 at 11:13 AM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> I'd like to suggest to get rid of Fuel upgrade tarball and convert this
>>> thing into fuel-upgrade rpm package. Since we've switched to online rpm/deb
>>> based upgrades, it seems we can stop packaging rpm/deb repositories and
>>> docker containers into tarball and instead package upgrade python script
>>> into rpm. It's gonna decrease the complexity of build process as well as
>>> make it a little bit faster.
>>>
>>> What do you think of this?
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Getting rid of launchpad group fuel-astute

2015-07-16 Thread Alexander Kislitsky
Dear colleagues,

I'd like to get rid of the fuel-astute group on Launchpad. The group has only
2 active members, and they are actually members of the fuel-python team. Bugs
for the fuel-astute project always concern the fuel-web project, and bugs
assigned to fuel-astute can stay without attention for a long time. Thus I
propose to use the fuel-python team instead of fuel-astute.

First of all, we should reassign the team for those bugs [1]. After that we
can remove or disable the fuel-astute Launchpad group.

What do you think about this?

[1] https://goo.gl/ap35t9
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-16 Thread John Garbutt
On 15 July 2015 at 19:25, Robert Collins  wrote:
> On 16 July 2015 at 02:18, Ed Leafe  wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA512
> ...
>> What I'd like to investigate is replacing the current design of having
>> the compute nodes communicating with the scheduler via message queues.
>> This design is overly complex and has several known scalability
>> issues. My thought is to replace this with a Cassandra [1] backend.
>> Compute nodes would update their state to Cassandra whenever they
>> change, and that data would be read by the scheduler to make its host
>> selection. When the scheduler chooses a host, it would post the claim
>> to Cassandra wrapped in a lightweight transaction, which would ensure
>> that no other scheduler has tried to claim those resources. When the
>> host has built the requested VM, it will delete the claim and update
>> Cassandra with its current state.
>
> +1 on doing an experiment.
>
> Some semi-random thoughts here. Well, not random at all, I've been
> mulling on this for a while.
>
> I think Kafka may fit our model for updating state significantly more
> closely than Cassandra does. It would be neat if we could do a
> few different sketchy implementations and head-to-head test them. I
> love Cassandra in a lot of ways, but lightweight-transaction are two
> words that I'd really not expect to see in Cassandra (Yes, I know it
> has them in the official docs and design :)) - its a full paxos
> interaction to do SERIAL consistency, which is more work than ether
> QUORUM or LOCAL_QUORUM. A sharded approach - there is only one compute
> node in question for the update needed - can be less work than either
> and still race free.
>
> I too also very much want to see us move to brokerless RPC,
> systematically, for all the reasons :). You might need a little of
> that mixed in to the experiments, depending on the scale reached.
>
> In terms of quantification; are you looking to test scalability (e.g.
> scheduling some N events per second without races), [there are huge
> improvements possible by rewriting the current schedulers innards to
> be less wasteful, but that doesn't address active-active setups],
> latency (e.g. 99th percentile time-to-schedule) or <...> ?

+1 for trying Kafka

I have tried to write up my thoughts on the Kafka approach (and a few
related things) in here:
https://review.openstack.org/#/c/191914/5/specs/backlog/approved/parallel-scheduler.rst,cm

It's trying to describe what I want to prototype for the next
scheduler; it's also possibly one of the worst specs I have ever seen.
There may be some ideas worth nicking in there (there may not be!)

John

PS
I also cover my desire for multiple schedulers living in Nova, long term.
(We already have 2.5 schedulers, depending on how you count them.)
I can see some of these schedulers being the "best" for a subset of
deployments.
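
(For reference, the claim-with-lightweight-transaction idea discussed above
maps directly onto Cassandra's "INSERT ... IF NOT EXISTS"; a minimal sketch
with the DataStax Python driver, using a hypothetical keyspace and table,
and assuming a driver version that exposes ResultSet.was_applied:)

    from cassandra.cluster import Cluster

    session = Cluster(['cassandra-host']).connect('scheduler')

    # IF NOT EXISTS makes this a lightweight transaction: the cluster runs a
    # Paxos round at SERIAL consistency and only applies the write if no
    # other scheduler has already claimed these resources.
    result = session.execute(
        "INSERT INTO claims (host, instance_uuid, vcpus, ram_mb) "
        "VALUES (%s, %s, %s, %s) IF NOT EXISTS",
        ('compute-01', 'aaaa-bbbb-cccc', 4, 8192))

    if result.was_applied:
        print('claim won: proceed with the build on this host')
    else:
        print('lost the race: another scheduler claimed it, pick again')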

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ENROLL node state is introduced - next steps [ACTION RECOMMENDED]

2015-07-16 Thread Dmitry Tantsur

Hi all!

Today we landed a patch [1] that switches the node creation API (starting
with API version 1.11) to default to the ENROLL node state instead of
AVAILABLE. Nothing to worry about right now: we don't default to this
API version (yet?) in our clients. But read on to figure out how not to
get broken in the future.


Nodes in the ENROLL state are basically just records in the database. They
are not used for scheduling; the only way to bring them into play is via
the "manage" provision action. This means that when you switch to API 1.11
for node creation, your tooling will probably break. There are 2 steps
to get your tooling prepared for it:


1. Switch to the new version right now, fixing whatever breaks.
If you're targeting Liberty, I recommend you start explicitly using the 1.11
API, e.g. for the CLI:


 $ ironic --ironic-api-version 1.11 node-create 

2. Even if you're not doing step 1, you can make your code compatible
with both the pre-1.11 and 1.11 APIs. Just insert 2 more transitions after
creating a node - "manage" and "provide". E.g. for the CLI:


 $ ironic node-set-provision-state UUID manage
 # wait
 $ ironic node-set-provision-state UUID provide
 # wait

For Kilo this would simply move the node to MANAGEABLE and back to AVAILABLE.

Important side note: some people don't realize that ALL provision state
transitions are asynchronous. And all of them can fail! Even if the "manage"
action was seemingly instant and synchronous before 1.11, it was not.
Now with the 1.11 API in place, the "manage" action may take substantial time
and may fail. Make sure your tooling accounts for it.
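
(To make the "wait" steps concrete, here is a minimal polling sketch using
python-ironicclient; the client construction, the os_ironic_api_version
argument, and the exact error state names are assumptions for illustration,
not a verified recipe:)

    import time
    from ironicclient import client as ironic_client

    # Hypothetical credentials/endpoint; adjust for your cloud.
    ironic = ironic_client.get_client('1',
                                      os_auth_url='http://keystone:5000/v2.0',
                                      os_username='admin',
                                      os_password='secret',
                                      os_tenant_name='admin',
                                      os_ironic_api_version='1.11')

    def wait_for(node_uuid, expected, timeout=600):
        # All provision state transitions are asynchronous, so poll the node
        # until it reaches the expected state, and fail hard on error states.
        deadline = time.time() + timeout
        while time.time() < deadline:
            state = ironic.node.get(node_uuid).provision_state
            if state == expected:
                return
            if state == 'error' or state.endswith('failed'):
                raise RuntimeError('node %s entered %s' % (node_uuid, state))
            time.sleep(5)
        raise RuntimeError('timed out waiting for %s' % expected)

    node_uuid = 'YOUR-NODE-UUID'
    ironic.node.set_provision_state(node_uuid, 'manage')
    wait_for(node_uuid, 'manageable')
    ironic.node.set_provision_state(node_uuid, 'provide')
    wait_for(node_uuid, 'available')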


Now it's up to the ironic team to decide [2] whether and when we're 
bumping ironicclient default API version to something above 1.11. Opinions?


[1] https://review.openstack.org/#/c/194722/
[2] https://review.openstack.org/#/c/196320/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [nova] schedule instance based on CPU frequency ?

2015-07-16 Thread Andrzej Kuriata
From: Chris Friesen [mailto:chris.frie...@windriver.com]
Sent: Thursday, July 16, 2015 8:07 AM
>
> On 07/15/2015 04:57 PM, Dugger, Donald D wrote:
>> In re: Static CPU frequency.  For modern Intel CPUs this really isn't true.
>> Turbo Boost is a feature that allows certain CPUs in certain
>> conditions to actually run at a higher clock rate that what is
>> advertised at power on (the havoc this causes code that depends upon
>> timing based upon CPU spin loops is left as an exercise for the reader
>> :-)
>
> Reasonably recent machines have constant rates for the timestamp counter even 
> in the face of CPU frequency variation.  Nobody should be using bare spin 
> loops.
>
>> Having said that, I think CPU frequency is a really bad metric to be
>> making any kind of scheduling decisions on.  A Core I7 running at 2
>> GHz is going to potentially run code faster than a Core I3 running at
>> 2.2 GHz (issues of micro-architecture and cache sizes impact
>> performance much more than minor variations in clock speed).  If you
>> really want to schedule based upon CPU capability you need to define
>> an abstract metric, identify how many of these abstract units apply to
>> the specific compute nodes in your cloud and do scheduling based upon
>> that.  There is actually work going to do just this, check out the BP:
>>
>> https://blueprints.launchpad.net/nova/+spec/normalized-compute-units
>
> I agree with the general concept, but I'm a bit concerned that the 
> "normalized"
> units will only be accurate for the specific units being tested.  Other 
> workloads may scale differently, especially if different CPU features are 
> exposed (potentially allowing for much more efficient low-level instructions).
>

The idea is to run a benchmark process at the start of nova-compute.
That process could be customized to be based on:
- Mega/Giga Instructions per Second (M/GIPS),
- Floating-point Operations per Second (FLOPS),
- a mix of those,
- or, in general, any benchmarking algorithm most relevant to the mix of
workloads run on the host.
The result of the benchmarking process would be the number of Normalized
Compute Units (NCUs) which a given host supports. It would also be
possible to do the benchmarking differently on different hosts.
There is a backlog spec [1] describing the idea - I encourage
everyone interested in this topic to post comments.
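
(As a toy illustration only -- this is not from the spec -- the normalization
could boil down to timing a fixed chunk of work and comparing it against a
reference host; the reference constant below is hypothetical:)

    import time

    REFERENCE_SECONDS = 1.0  # hypothetical runtime of the loop on a 1-NCU host

    def measure_ncu(iterations=10**7):
        # Time a fixed amount of floating-point work; a real benchmark would
        # pin CPUs, warm up, repeat runs, and mix workload types.
        start = time.time()
        x = 1.0001
        for _ in range(iterations):
            x = x * 1.0000001 + 0.5
        elapsed = time.time() - start
        return REFERENCE_SECONDS / elapsed

    print('This host supports ~%.1f NCUs' % measure_ncu())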

Thanks,
Andrzej

[1] https://review.openstack.org/#/c/192609/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Oleg Gelbukh
Vladimir,

Thank you, now it sounds convincing.

My understanding is that at the moment all Docker images used by Fuel are
packaged in a single RPM? Do you plan to split individual images into
separate RPMs?

Did you think about publishing those images to Dockerhub?

--
Best regards,
Oleg Gelbukh

On Thu, Jul 16, 2015 at 1:50 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Oleg,
>
> All docker containers currently are distributed as rpm packages. A little
> bit surprising, isn't it? But it works and we can easily deliver updates
> using this old plain rpm based mechanism. The package in 6.1GA is called
> fuel-docker-images-6.1.0-1.x86_64.rpm So, upgrade flow would be like this
> 0) add new (say 7.0) repository into /etc/yum.repos.d/some.repo
> 1) install fuel-upgrade package (yum install fuel-upgrade-7.0)
> 2) fuel-upgrade package has all other packages (docker, bootstrap image,
> target images, puppet modules) as its dependencies
> 3) run fuel-upgrade script (say /usr/bin/fuel-upgrade) and it performs all
> necessary actions like moving files, run new containers, upload fixtures
> into nailgun via REST API.
>
> It is necessary to note that we are talking here about Fuel master node
> upgrades, not about Openstack cluster upgrades (which is the feature you
> are working on).
>
> Vladimir Kozhukalov
>
> On Thu, Jul 16, 2015 at 1:22 PM, Oleg Gelbukh 
> wrote:
>
>> Vladimir,
>>
>> I am fully support moving fuel-upgrade-system into repository of its own.
>> However, I'm not 100% sure how docker containers are going to appear on the
>> upgraded master node. Do we have public repository of Docker images
>> already? Or we are going to build them from scratch during the upgrade?
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Thu, Jul 16, 2015 at 11:46 AM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> By the way, first step for this to happen is to move
>>>  stackforge/fuel-web/fuel_upgrade_system into a separate repository.
>>> Fortunately, this directory is not the place where the code is continuously
>>> changing (changes are rather seldom) and moving this project is going to
>>> barely affect the whole development flow. So, action flow is as follows
>>>
>>> 0) patch to openstack-infra for creating new repository (workflow -1)
>>> 1) patch to Fuel CI to create verify jobs
>>> 2) freeze stackforge/fuel-web/fuel_upgrade_system directory
>>> 3) create upstream repository which is to be sucked in by openstack infra
>>> 4) patch to openstack-infra for creating new repository (workflow +1)
>>> 5) patch with rpm spec for fuel-upgrade package and other infrastructure
>>> files like run_tests.sh
>>> 6) patch to perestroika to build fuel-upgrade package from new repo
>>> 7) patch to fuel-main to remove upgrade tarball
>>> 8) patch to Fuel CI to remove upgrade tarball
>>> 9) patch to fuel-web to remove fuel_upgrade_system directory
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Thu, Jul 16, 2015 at 11:13 AM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Dear colleagues,

 I'd like to suggest to get rid of Fuel upgrade tarball and convert this
 thing into fuel-upgrade rpm package. Since we've switched to online rpm/deb
 based upgrades, it seems we can stop packaging rpm/deb repositories and
 docker containers into tarball and instead package upgrade python script
 into rpm. It's gonna decrease the complexity of build process as well as
 make it a little bit faster.

 What do you think of this?


 Vladimir Kozhukalov

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][release] New library release request process

2015-07-16 Thread Doug Hellmann
Excerpts from Andreas Jaeger's message of 2015-07-16 08:11:48 +0200:
> Doug,
> 
> I'm missing openstackdocstheme and openstack-doc-tools in your import. 
> How do you want to handle these?

There are some tools in the repository to extract the history from a
repo. I'll see what I can do for those 2 today.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Should we document the using of "device:owner" of the PORT ?

2015-07-16 Thread Akihiro Motoki
I think it is better to block PUT for device_owner/device_id by regular
users. It can be controlled by policy.json.
If we make this change, we need to do it carefully, because nova calls
neutron port operations with regular user privileges if the port binding
extension is not supported.

I agree that it is a good idea for the API layer to check that new values
do not affect the neutron control plane.

IMHO, blocking changes to device_owner/device_id is simpler.
Multiple security bugs due to the handling of this attribute were reported,
and blocking updates to it makes things simpler.
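
(For illustration, the policy.json approach could be a couple of
attribute-level rules like the following; a sketch only -- the exact rule
keys and how strictly they are enforced vary by Neutron release:)

    "update_port:device_owner": "rule:admin_only",
    "update_port:device_id": "rule:admin_only",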


2015-07-16 18:26 GMT+09:00 Kevin Benton :

> What do you think of just blocking all PUTs to that field? Is that a
> feasible change without inducing widespread riots about breaking changes?
>
> On Thu, Jul 16, 2015 at 2:53 AM, Salvatore Orlando 
> wrote:
>
>> It is not possible to constrain this attribute to an enum, because there
>> is no fixed list of device owners. Nevertheless it's good to document known
>> device owners.
>>
>> Likewise the API layer should have checks in place to ensure accidental
>> updates to this attribute do not impact control plane functionality or at
>> least do not leave the system in an inconsistent state.
>>
>> Salvatore
>>
>>
>> On 16 July 2015 at 07:51, Kevin Benton  wrote:
>>
>>> I'm guessing Salvatore might just be suggesting that we restrict users
>>> from populating values that have special meaning (e.g. l3 agent router
>>> interface ports). I don't think we could constrain the owner
>>> field to essentially an enum at this point.
>>>
>>> On Wed, Jul 15, 2015 at 10:22 PM, Mike Kolesnik 
>>> wrote:
>>>

 --

 Yes please.

 This would be a good starting point.
 I also think that the ability to edit it, as well as the value it
 could be set to, should be constrained.

 FYI the oVirt project uses this field to identify ports it creates and
 manages.
 So if you're going to constrain it to something, it should probably be
 configurable so that managers other than Nova can continue to use Neutron.


 As you have surely noticed, there are several code paths which rely on
 an appropriate value being set in this attribute.
 This means a user can potentially trigger malfunctioning by sending PUT
 requests to edit this attribute.

 Summarizing, I think that documenting its usage is a good starting point,
 but I believe we should address the way this attribute is exposed at the
 API layer as well.

 Salvatore



 On 13 July 2015 at 11:52, Wang, Yalei  wrote:

> Hi all,
> The device:owner of the port is defined as a 255-byte string, and is
> widely used now, indicating the use of the port.
> It seems we can fill it freely, and a user can also update/set it from the
> cmd line (port-update $PORT_ID --device_owner), and I don't find a
> guideline for its use.
>
> What is its function? It indicates the use of the port, and it seems
> horizon also uses it to show the topology.
> And nova really needs it editable; should we at least document all of
> the possible values in some guide to make it clear? If yes, I can do it.
>
> I got these uses from the code (maybe not complete, pls point it out):
>
> From constants.py,
> DEVICE_OWNER_ROUTER_HA_INTF = "network:router_ha_interface"
> DEVICE_OWNER_ROUTER_INTF = "network:router_interface"
> DEVICE_OWNER_ROUTER_GW = "network:router_gateway"
> DEVICE_OWNER_FLOATINGIP = "network:floatingip"
> DEVICE_OWNER_DHCP = "network:dhcp"
> DEVICE_OWNER_DVR_INTERFACE = "network:router_interface_distributed"
> DEVICE_OWNER_AGENT_GW = "network:floatingip_agent_gateway"
> DEVICE_OWNER_ROUTER_SNAT = "network:router_centralized_snat"
> DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
>
> And from debug_agent.py
> DEVICE_OWNER_NETWORK_PROBE = 'network:probe'
> DEVICE_OWNER_COMPUTE_PROBE = 'compute:probe'
>
> And setting from nova/network/neutronv2/api.py,
> 'compute:%s' % instance.availability_zone
>
>
> Thanks all!
> /Yalei
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions

[openstack-dev] [nova] Nova API Meeting

2015-07-16 Thread Alex Xu
Hi,

We have our weekly Nova API meeting this week. The meeting is being held
tomorrow, Friday, at 1200 UTC.

In other timezones the meeting is at:

EST 08:00 (Fri)
Japan 21:00 (Fri)
China 20:00 (Fri)
United Kingdom 13:00 (Fri)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Vladimir Kozhukalov
Oleg,

Yes, you are right. At the moment all docker containers are packaged into a
single rpm package. Yes, it would be great to split them into several
individual rpms, but it is not my current priority. I'll definitely think
of this when I move the so-called "late" packages (which depend on
other packages) into "perestroika". Yet another thing is that eventually
all those packages and containers will be "artifacts" and will be treated
differently according to their nature. That will be the time when we'll be
thinking of a docker registry and other stuff like this.

Vladimir Kozhukalov

On Thu, Jul 16, 2015 at 2:58 PM, Oleg Gelbukh  wrote:

> Vladimir,
>
> Thank you, now it sounds concieving.
>
> My understanding that at the moment all Docker images used by Fuel are
> packaged in single RPM? Do you plan to split individual images into
> separate RPMs?
>
> Did you think about publishing those images to Dockerhub?
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Thu, Jul 16, 2015 at 1:50 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Oleg,
>>
>> All docker containers currently are distributed as rpm packages. A little
>> bit surprising, isn't it? But it works and we can easily deliver updates
>> using this old plain rpm based mechanism. The package in 6.1GA is called
>> fuel-docker-images-6.1.0-1.x86_64.rpm So, upgrade flow would be like this
>> 0) add new (say 7.0) repository into /etc/yum.repos.d/some.repo
>> 1) install fuel-upgrade package (yum install fuel-upgrade-7.0)
>> 2) fuel-upgrade package has all other packages (docker, bootstrap image,
>> target images, puppet modules) as its dependencies
>> 3) run fuel-upgrade script (say /usr/bin/fuel-upgrade) and it performs
>> all necessary actions like moving files, run new containers, upload
>> fixtures into nailgun via REST API.
>>
>> It is necessary to note that we are talking here about Fuel master node
>> upgrades, not about Openstack cluster upgrades (which is the feature you
>> are working on).
>>
>> Vladimir Kozhukalov
>>
>> On Thu, Jul 16, 2015 at 1:22 PM, Oleg Gelbukh 
>> wrote:
>>
>>> Vladimir,
>>>
>>> I am fully support moving fuel-upgrade-system into repository of its
>>> own. However, I'm not 100% sure how docker containers are going to appear
>>> on the upgraded master node. Do we have public repository of Docker images
>>> already? Or we are going to build them from scratch during the upgrade?
>>>
>>> --
>>> Best regards,
>>> Oleg Gelbukh
>>>
>>> On Thu, Jul 16, 2015 at 11:46 AM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 By the way, first step for this to happen is to move
  stackforge/fuel-web/fuel_upgrade_system into a separate repository.
 Fortunately, this directory is not the place where the code is continuously
 changing (changes are rather seldom) and moving this project is going to
 barely affect the whole development flow. So, action flow is as follows

 0) patch to openstack-infra for creating new repository (workflow -1)
 1) patch to Fuel CI to create verify jobs
 2) freeze stackforge/fuel-web/fuel_upgrade_system directory
 3) create upstream repository which is to be sucked in by openstack
 infra
 4) patch to openstack-infra for creating new repository (workflow +1)
 5) patch with rpm spec for fuel-upgrade package and other
 infrastructure files like run_tests.sh
 6) patch to perestroika to build fuel-upgrade package from new repo
 7) patch to fuel-main to remove upgrade tarball
 8) patch to Fuel CI to remove upgrade tarball
 9) patch to fuel-web to remove fuel_upgrade_system directory



 Vladimir Kozhukalov

 On Thu, Jul 16, 2015 at 11:13 AM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> I'd like to suggest to get rid of Fuel upgrade tarball and convert
> this thing into fuel-upgrade rpm package. Since we've switched to online
> rpm/deb based upgrades, it seems we can stop packaging rpm/deb 
> repositories
> and docker containers into tarball and instead package upgrade python
> script into rpm. It's gonna decrease the complexity of build process as
> well as make it a little bit faster.
>
> What do you think of this?
>
>
> Vladimir Kozhukalov
>



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> _

Re: [openstack-dev] [neutron][db] online-schema-migrations patch landed

2015-07-16 Thread Ihar Hrachyshka

On 07/15/2015 04:03 PM, Salvatore Orlando wrote:
> Do you reckon that the process that led to creating a migration
> like [1] should also be documented in devref?

https://review.openstack.org/202534

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Oleg Gelbukh
Vladimir,

Good, thank you for extended answer.

--
Best regards,
Oleg Gelbukh

On Thu, Jul 16, 2015 at 3:30 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Oleg,
>
> Yes, you are right. At the moment all docker containers are packaged into
> a single rpm package. Yes, it would be great to split them into several
> one-by-one rpms, but it is not my current priority. I'll definitely think
> of this when I'll be moving so called "late" packages (which depend on
> other packages) into "perestroika". Yet another thing is that eventually
> all those packages and containers will be "artifacts" and will be treated
> differently according to their nature. That will be the time when we'll be
> thinking of a docker registry and other stuff like this.
>
>
>
>
>
>
> Vladimir Kozhukalov
>
> On Thu, Jul 16, 2015 at 2:58 PM, Oleg Gelbukh 
> wrote:
>
>> Vladimir,
>>
>> Thank you, now it sounds concieving.
>>
>> My understanding that at the moment all Docker images used by Fuel are
>> packaged in single RPM? Do you plan to split individual images into
>> separate RPMs?
>>
>> Did you think about publishing those images to Dockerhub?
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Thu, Jul 16, 2015 at 1:50 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Oleg,
>>>
>>> All docker containers currently are distributed as rpm packages. A
>>> little bit surprising, isn't it? But it works and we can easily deliver
>>> updates using this old plain rpm based mechanism. The package in 6.1GA is
>>> called fuel-docker-images-6.1.0-1.x86_64.rpm So, upgrade flow would be like
>>> this
>>> 0) add new (say 7.0) repository into /etc/yum.repos.d/some.repo
>>> 1) install fuel-upgrade package (yum install fuel-upgrade-7.0)
>>> 2) fuel-upgrade package has all other packages (docker, bootstrap image,
>>> target images, puppet modules) as its dependencies
>>> 3) run fuel-upgrade script (say /usr/bin/fuel-upgrade) and it performs
>>> all necessary actions like moving files, run new containers, upload
>>> fixtures into nailgun via REST API.
>>>
>>> It is necessary to note that we are talking here about Fuel master node
>>> upgrades, not about Openstack cluster upgrades (which is the feature you
>>> are working on).
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Thu, Jul 16, 2015 at 1:22 PM, Oleg Gelbukh 
>>> wrote:
>>>
 Vladimir,

 I am fully support moving fuel-upgrade-system into repository of its
 own. However, I'm not 100% sure how docker containers are going to appear
 on the upgraded master node. Do we have public repository of Docker images
 already? Or we are going to build them from scratch during the upgrade?

 --
 Best regards,
 Oleg Gelbukh

 On Thu, Jul 16, 2015 at 11:46 AM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> By the way, first step for this to happen is to move
>  stackforge/fuel-web/fuel_upgrade_system into a separate repository.
> Fortunately, this directory is not the place where the code is 
> continuously
> changing (changes are rather seldom) and moving this project is going to
> barely affect the whole development flow. So, action flow is as follows
>
> 0) patch to openstack-infra for creating new repository (workflow -1)
> 1) patch to Fuel CI to create verify jobs
> 2) freeze stackforge/fuel-web/fuel_upgrade_system directory
> 3) create upstream repository which is to be sucked in by openstack
> infra
> 4) patch to openstack-infra for creating new repository (workflow +1)
> 5) patch with rpm spec for fuel-upgrade package and other
> infrastructure files like run_tests.sh
> 6) patch to perestroika to build fuel-upgrade package from new repo
> 7) patch to fuel-main to remove upgrade tarball
> 8) patch to Fuel CI to remove upgrade tarball
> 9) patch to fuel-web to remove fuel_upgrade_system directory
>
>
>
> Vladimir Kozhukalov
>
> On Thu, Jul 16, 2015 at 11:13 AM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> I'd like to suggest to get rid of Fuel upgrade tarball and convert
>> this thing into fuel-upgrade rpm package. Since we've switched to online
>> rpm/deb based upgrades, it seems we can stop packaging rpm/deb 
>> repositories
>> and docker containers into tarball and instead package upgrade python
>> script into rpm. It's gonna decrease the complexity of build process as
>> well as make it a little bit faster.
>>
>> What do you think of this?
>>
>>
>> Vladimir Kozhukalov
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>



Re: [openstack-dev] [openstack-announce] End of life for managed stable/icehouse branches

2015-07-16 Thread Thomas Goirand
On 07/15/2015 12:05 PM, Thierry Carrez wrote:
> The "cost" of keeping stable branches around without CI is more a
> branding cost than a technical cost, I think.

Which is why I suggested renaming the branches if that poses a problem.
For example, eol/icehouse would have been fine.

> An OpenStack upstream
> stable branch means a number of things, and lack of CI isn't one of
> them. We also have tooling that looks at "stable/*" and applies rules to
> it. If we have kept stable/icehouse upstream, it would have been renamed
> no-more-tested/icehouse or something to make sure we don't call two
> completely different things under the same name.

Sure.

> It feels like you're (or were) mostly after a private zone to share
> icehouse security patches

Yes. And I was expecting a private security gerrit for that.

Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Matthew Mosesohn
One item that will impact this separation is that fuel_upgrade
implicitly depends on the openstack.yaml release file from
fuel-nailgun. Without it, the upgrade process won't work. We should
refactor fuel-nailgun to implement this functionality on its own, but
then have fuel_upgrade call this piece. Right now, we're copying the
openstack.yaml for the target version of Fuel and embedding it in the
tarball [1].
Instead, the version should be taken from the new version of
fuel-nailgun that is installed inside the nailgun container.

The other file which gets embedded in the upgrade tarball is the
version.yaml file, but I think that's okay to embed during RPM build.

[1]https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/openstack.py#L211-L213
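
(To illustrate that direction, the upgrade code could read the fixture from
the installed fuel-nailgun package instead of carrying a copy; a sketch with
a hypothetical install path:)

    import yaml

    # Hypothetical location of the fixture inside the nailgun container.
    FIXTURE = '/usr/lib/python2.7/site-packages/nailgun/fixtures/openstack.yaml'

    with open(FIXTURE) as f:
        releases = yaml.safe_load(f)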

On Thu, Jul 16, 2015 at 3:55 PM, Oleg Gelbukh  wrote:
> Vladimir,
>
> Good, thank you for extended answer.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Thu, Jul 16, 2015 at 3:30 PM, Vladimir Kozhukalov
>  wrote:
>>
>> Oleg,
>>
>> Yes, you are right. At the moment all docker containers are packaged into
>> a single rpm package. Yes, it would be great to split them into several
>> one-by-one rpms, but it is not my current priority. I'll definitely think of
>> this when I'll be moving so called "late" packages (which depend on other
>> packages) into "perestroika". Yet another thing is that eventually all those
>> packages and containers will be "artifacts" and will be treated differently
>> according to their nature. That will be the time when we'll be thinking of a
>> docker registry and other stuff like this.
>>
>>
>>
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Thu, Jul 16, 2015 at 2:58 PM, Oleg Gelbukh 
>> wrote:
>>>
>>> Vladimir,
>>>
>>> Thank you, now it sounds concieving.
>>>
>>> My understanding that at the moment all Docker images used by Fuel are
>>> packaged in single RPM? Do you plan to split individual images into separate
>>> RPMs?
>>>
>>> Did you think about publishing those images to Dockerhub?
>>>
>>> --
>>> Best regards,
>>> Oleg Gelbukh
>>>
>>> On Thu, Jul 16, 2015 at 1:50 PM, Vladimir Kozhukalov
>>>  wrote:

 Oleg,

 All docker containers currently are distributed as rpm packages. A
 little bit surprising, isn't it? But it works and we can easily deliver
 updates using this old plain rpm based mechanism. The package in 6.1GA is
 called fuel-docker-images-6.1.0-1.x86_64.rpm So, upgrade flow would be like
 this
 0) add new (say 7.0) repository into /etc/yum.repos.d/some.repo
 1) install fuel-upgrade package (yum install fuel-upgrade-7.0)
 2) fuel-upgrade package has all other packages (docker, bootstrap image,
 target images, puppet modules) as its dependencies
 3) run fuel-upgrade script (say /usr/bin/fuel-upgrade) and it performs
 all necessary actions like moving files, run new containers, upload 
 fixtures
 into nailgun via REST API.

 It is necessary to note that we are talking here about Fuel master node
 upgrades, not about Openstack cluster upgrades (which is the feature you 
 are
 working on).

 Vladimir Kozhukalov

 On Thu, Jul 16, 2015 at 1:22 PM, Oleg Gelbukh 
 wrote:
>
> Vladimir,
>
> I am fully support moving fuel-upgrade-system into repository of its
> own. However, I'm not 100% sure how docker containers are going to appear 
> on
> the upgraded master node. Do we have public repository of Docker images
> already? Or we are going to build them from scratch during the upgrade?
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Thu, Jul 16, 2015 at 11:46 AM, Vladimir Kozhukalov
>  wrote:
>>
>> By the way, first step for this to happen is to move
>> stackforge/fuel-web/fuel_upgrade_system into a separate repository.
>> Fortunately, this directory is not the place where the code is 
>> continuously
>> changing (changes are rather seldom) and moving this project is going to
>> barely affect the whole development flow. So, action flow is as follows
>>
>> 0) patch to openstack-infra for creating new repository (workflow -1)
>> 1) patch to Fuel CI to create verify jobs
>> 2) freeze stackforge/fuel-web/fuel_upgrade_system directory
>> 3) create upstream repository which is to be sucked in by openstack
>> infra
>> 4) patch to openstack-infra for creating new repository (workflow +1)
>> 5) patch with rpm spec for fuel-upgrade package and other
>> infrastructure files like run_tests.sh
>> 6) patch to perestroika to build fuel-upgrade package from new repo
>> 7) patch to fuel-main to remove upgrade tarball
>> 8) patch to Fuel CI to remove upgrade tarball
>> 9) patch to fuel-web to remove fuel_upgrade_system directory
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Thu, Jul 16, 2015 at 11:13 AM, Vladimir Kozhukalov
>>  wrote:
>>>
>>> Dear collea

Re: [openstack-dev] [openstack-announce] End of life for managed stable/icehouse branches

2015-07-16 Thread Thomas Goirand
On 07/15/2015 12:37 PM, Ihar Hrachyshka wrote:
> On 07/14/2015 09:14 PM, Thomas Goirand wrote:
>> On 07/14/2015 10:29 AM, Ihar Hrachyshka wrote:
>>> On 07/14/2015 12:33 AM, Thomas Goirand wrote:
 I missed this announce...
>>>
 On 07/02/2015 05:32 AM, Jeremy Stanley wrote:
> Per the Icehouse EOL discussion[1] last month, now that the 
> final 2014.1.5 release[2] is behind us I have followed our
> usual end of life steps for stable/icehouse branches on repos
> under the control of the OpenStack Release Cycle Management
> project-team. Specifically, for any repos with the
> release:managed[3] tag, icehouse-specific test jobs were
> removed from our CI system and all open change reviews were
> abandoned for stable/icehouse. Then the final states of the
> branches were tagged as "icehouse-eol" and the branches
> subsequently deleted.
>>>
 I believe I asked you about 10 times to keep these branches
 alive, so that distributions could work together on a longer
 support, even without a CI behind it.
>>>
 I have also asked for a private gerrit for maintaining the 
 Icehouse patches after EOL.
>>>
 While I understand the later means some significant work, I
 don't understand why you have deleted the Icehouse branches.
>>>
 Effectively, under these conditions, I am giving up doing any
 kind of coordination between distros for security patches of
 Icehouse. :(
>>>
>>> As far as I know, there was no real coordination on those
>>> patches before, neither I saw any real steps from any side to get
>>> it up.
> 
>> Well... as far as I know, you were not there during the
>> conversations we had at the summits about this. Neither you are on
>> my list of Icehouse security persons. So I fail to see how you
>> could be in the loop for this indeed.
> 
> 
> Indeed, in Openstack, people work in public, and publish details about
> their (private?) talks on summits on the mailing list. This is the
> place where decisions are made, not summits, and it's a pity that some
> people see chats on summits as something defining the future.

I do understand that not writing about it on the list (or anywhere else
which everyone could read) was my mistake. It won't happen twice, I swear.

> If you don't think I (a member of stable-maint-core) should have been
> in the loop, fine for me.

I regret you haven't been in the loop indeed.

>>> That said, anyone can come up with an initiative to maintain
>>> those branches under some 3party roof (just push -eol tag into
>>> github and advertise), and if there is real (and not just
>>> anticipated) collaboration going on around it, then the project
>>> may reconsider getting it back in the big stadium.
> 
>> I have a list of contacts for each and every downstream
>> distributions.
> 
> Whom have you contacted on RDO side? Just curious.

I personally know Haikel Gemmar, Alan Pevec and Mathias Grunge. Alan
Pevec was (is?) my contact here.

> I am not sure RDO would be interested in consuming pieces of unclear
> quality (no CI) thru rebase only to realize that half of those are not
> valid. I would not dare to lower quality of 'after-eol' releases of
> RDO by rebasing on top of unvalidated patches.

What we discussed was that distributions would run their own CI and
validate patches away from upstream CI, then we would agree on a patch
and share it.

If you see a better way to work out things, I'd be happy to define a new
procedure.

Now, about maintenance of the stable CI: I really would like to have
enough time to participate in maintaining it. Just like *many* other things
for which I don't have time. :(

Hopefully, starting with Liberty, I won't be the only one doing the
packaging work in Debian, and I'll have more time for other things.

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of launchpad group fuel-astute

2015-07-16 Thread Evgeniy L
Hi,

Agreed, most of the python developers are familiar with Astute.
But at the same time it can look strange when we assign Astute bugs (Astute
is written in Ruby) to a group which is called fuel-python :)

Thanks,

On Thu, Jul 16, 2015 at 2:21 PM, Alexander Kislitsky <
akislit...@mirantis.com> wrote:

> Dear colleagues,
>
> I'd like to get rid of group fuel-astute on launchpad. In group only 2
> active members. Actually they are members of fuel-python team. Bugs for
> fuel-astute project always concern to fuel-web project. Bugs assigned to
> fuel-astute can stay without attention for a long time. Thus I propose to
> use fuel-python team instead fuel-astute.
>
> First of all we should reassign team for bugs [1]. After that we can
> remove or disable fuel-astute launchpad group.
>
> What do you think about this?
>
> [1] https://goo.gl/ap35t9
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][release] New library release request process

2015-07-16 Thread Anne Gentle
On Thu, Jul 16, 2015 at 6:58 AM, Doug Hellmann 
wrote:

> Excerpts from Andreas Jaeger's message of 2015-07-16 08:11:48 +0200:
> > Doug,
> >
> > I'm missing openstackdocstheme and openstack-doc-tools in your import.
> > How do you want to handle these?
>
> There are some tools in the repository to extract the history from a
> repo. I'll see what I can do for those 2 today.
>


Thanks Doug (and Andreas for asking). I was going to look myself since we
need a release of openstackdocstheme pretty soon.

Much appreciation,
Anne


>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Hongbin Lu
I am OK with server_type as well.

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-16-15 3:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?


+1 for server_type.

I also think it is OK.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Adrian Otto mailto:adrian.o...@rackspace.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 07/16/2015 03:18 PM
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?





I’d be comfortable with server_type.

Adrian
On Jul 15, 2015, at 11:51 PM, Jay Lau 
mailto:jay.lau@gmail.com>> wrote:

After more thinking, I agree with Hongbin that instance_type might make 
customer confused with flavor, what about using server_type?

Actually, nova has concept of server group, the "servers" in this group can be 
vm. pm or container.

Thanks!

2015-07-16 11:58 GMT+08:00 Kai Qiang Wu 
mailto:wk...@cn.ibm.com>>:
Hi Hong Bin,

Thanks for your reply.


I think it is better to discuss the 'platform' Vs instance_type Vs others case 
first.
Attach:  initial patch (about the discussion): 
https://review.openstack.org/#/c/200401/

My other patches all depend on the above patch; if the above patch cannot 
reach a meaningful agreement, my following patches will be blocked by it.



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
   No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Hongbin Lu ---07/16/2015 11:47:30 AM---Kai, Sorry for the 
confusion. To clarify, I was thinking how to name the field you proposed in 
baymo

From: Hongbin Lu mailto:hongbin...@huawei.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 07/16/2015 11:47 AM

Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?





Kai,

Sorry for the confusion. To clarify, I was thinking how to name the field you 
proposed in baymodel [1]. I prefer to drop it and use the existing field 
‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?


Hi HongBin,

I think flavor introduces more confusion than nova_instance_type or 
instance_type,


as flavors have no binding to 'vm' or 'baremetal'.

Let me summarize the initial question:
We have two kinds of templates for kubernetes now,
(as templates in heat not flexible like programming language, if else etc. And 
separate templates are easy to maintain)
Of the two kinds of kubernetes templates, one boots a VM and the other boots 
Baremetal. 'VM' or Baremetal here is just used for heat template selection.


1> If we used flavor, it is a nova-specific concept. Take two as an example:
   m1.small and m1.middle.
  m1.small < 'VM', m1.middle < 'VM'
  Both m1.small and m1.middle can be used in a 'VM' environment.
So we should not use m1.small as a template identification. That's why I think 
flavor is not a good choice.


2> @Adrian, we have the --flavor-id field for baymodel now; it would be picked 
up by the heat templates to boot instances with that flavor.


3> Finally, I think instance_type is better. instance_type can be used as the 
heat template identification parameter.

instance_type = 'vm' means such templates fit a normal 'VM' heat stack 
deploy.

instance_type = 'baremetal' means such templates fit an ironic baremetal 
heat stack deploy.





Thanks!


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM 

Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas v2

2015-07-16 Thread Jain, Vivek
A quick reminder that we will be meeting today at 16:00UTC (9:00 am PDT) in 
#openstack-lbaas to discuss Horizon LBaaS v2 UI.

Thanks,
Vivek

From: "Balle, Susanne" mailto:susanne.ba...@hp.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, July 15, 2015 at 10:35 AM
To: "Eichberger, German" 
mailto:german.eichber...@hp.com>>, "OpenStack 
Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Tonse, Milan" mailto:mto...@ebay.com>>
Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas 
v2

I agree with German. Let’s keep things together for now. Susanne

From: Eichberger, German
Sent: Wednesday, July 15, 2015 1:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Balle, Susanne; Tonse, Milan
Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas 
v2

Hi,

Let’s move it into the LBaaS repo that seems like the right place for me —

Thanks,
German

From: "Jain, Vivek" mailto:vivekj...@ebay.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, July 14, 2015 at 10:22 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Balle Balle, Susanne" mailto:susanne.ba...@hp.com>>, 
"Tonse, Milan" mailto:mto...@ebay.com>>
Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas 
v2

Thanks Akihiro. Currently the lbaas panels are part of the horizon repo. Is there 
an easy way to de-couple the lbaas dashboard from horizon? I think that would 
simplify development efforts. What does it take to separate the lbaas dashboard 
from horizon?

Thanks,
Vivek

From: Akihiro Motoki mailto:amot...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, July 14, 2015 at 10:09 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Balle, Susanne" mailto:susanne.ba...@hp.com>>, 
"Tonse, Milan" mailto:mto...@ebay.com>>
Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas 
v2

Another option is to create a project under openstack.
The designate-dashboard project takes this approach,
and the core team of the project is both horizon-core and designate-core.
We could take a similar approach. Thoughts?

I have one question.
Do we keep a separate place forever, or do we want to merge it back into the
horizon repo once the implementation is available?
If we have a separate repo for the LBaaS v2 panel, we need to release it separately.

I am not sure I am available at the LBaaS meeting, but I would like to help
this effort as a core from horizon and neutron.

Akihiro


2015-07-15 1:52 GMT+09:00 Doug Wiegley 
mailto:doug...@parksidesoftware.com>>:
I’d be good with submitting it to the neutron-lbaas repo, under a horizon/ 
directory. We can iterate there, and talk with the Horizon team about how best 
to integrate. Would that work?

Thanks,
doug

> On Jul 13, 2015, at 3:05 PM, Jain, Vivek 
> mailto:vivekj...@ebay.com>> wrote:
>
> Hi German,
>
> We integrated UI with LBaaS v2 GET APIs. We have created all panels for
> CREATE and UPDATE as well.
> Plan is to share our code with community on stackforge for more
> collaboration from the community.
>
> So far Ganesh from cisco has shown interest in helping with some work. It
> will be great if we can get more hands.
>
> Q: what is the process for hosting in-progress project on stackforge? Can
> someone help me here?
>
> Thanks,
> Vivek
>
> On 7/10/15, 11:40 AM, "Eichberger, German" 
> mailto:german.eichber...@hp.com>>
> wrote:
>
>> Hi Vivek,
>>
>> Hope things are well. With the Midccyle next week I am wondering if you
>> made any progress and/or how we can best help with the panels.
>>
>> Thanks,
>> German
>>
>> From: "Jain, Vivek" 
>> mailto:vivekj...@ebay.com>>>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> mailto:openstack-dev@lists.openstack.org>
>> g>>
>> Date: Wednesday, April 8, 2015 at 3:32 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> mailto:openstack-dev@lists.openstack.org>
>> g>>
>> Cc: "Balle Balle, Susanne"
>> mailto:susanne.ba...@hp.com>>>,
>>  "Tonse, Milan"
>> mailto:mto...@ebay.com>>>
>> Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for
>> neutron-lbaas v2
>>
>> Thanks German for the etherpad link. If you have any documentation for
>> flows, please share those too.
>>
>> I will work with my team at 

Re: [openstack-dev] [openstack-announce] End of life for managed stable/icehouse branches

2015-07-16 Thread Ihar Hrachyshka

On 07/16/2015 03:06 PM, Thomas Goirand wrote:
> What we discussed was that distributions would run their own CI
> and validate patches away from upstream CI, then we would agree on
> a patch and share it.
> 

That's fine. I think a good start would be to get them voting on
existing *supported* branches. It will only give stable maintainers
another way to ensure backports make sense.

But I am not convinced that those CI systems are in place everywhere:
I don't know of anything that works at patch granularity in RDO (bulk rpm
updates are indeed validated).

It's my assumption that getting all those per-distro CIs working is a
tough nut to crack. It would be easier for everyone to provide
resources to keep the gate passing for existing stable branches. If
the gate is passing for a stable branch, and the project has enough
human resources to expect that it will still work in the next month, I
don't see particular reason to kill the branch if there is indeed
demand to keep it.

Working on upstream gate stability obviously does not invalidate any
effort to introduce distribution CI votes in gerrit, and I would be
happy to see RDO or Ubuntu meaningfully voting on backports. It's my
belief though that distribution CI votes cannot serve as a replacement
for upstream gate.

Ihar



Re: [openstack-dev] [nova] Why is osapi_v3.enabled = False by default?

2015-07-16 Thread Ed Leafe

On 07/16/2015 03:27 AM, Alex Xu wrote:
> 
>> Honestly, we should probably deprecate out osapi_v3.enabled make
>> it osapi_v21 (or osapi_v2_microversions) so as to not confuse
>> people further.
> 
> +1 for renaming it to osapi_v21 (or osapi_v2_microversions).
> 
> 
> Why we still need this option?

Agreed - it isn't needed anymore.
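
(For reference, the option being discussed lives in nova.conf as a boolean
under its own group; deployments currently have to flip it by hand:

  [osapi_v3]
  enabled = False

Dropping the option would make v2.1 always-on.)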

-- 
Ed Leafe



Re: [openstack-dev] [Fuel][Fuel-library] Using librarian-puppet to manage upstream fuel-library modules

2015-07-16 Thread Aleksandr Didenko
Hi,

guys, what if we "simplify" things a bit? All we need is:

   1. Remove all the community modules from fuel-library.
   2. Create 'Puppetfile' with list of community modules and their versions
   that we currently use.
   3. Make sure all our customizations are proposed to the upstream modules
   (via gerrit or github pull-requests).
   4. Create a separate file with list of patches for each module we need
   to cherry-pick (we need to support gerrit reviews and github pull-requests).
   5. Update 'make iso' scripts:
  1. Make them use 'r10k' (or other tool) to download upstream modules
  based on 'Puppetfile'
   2. Iterate over the list of patches for each module and cherry-pick them
  (just like we do for custom ISO builds; I'm not sure if librarian provides
  such a possibility)

Eventually, when all the functionality we rely on is accepted in the upstream
modules, we'll get rid of the file with the list of patches. But meanwhile it
should be much easier to manage the modules and our customizations this way.
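
For illustration, a minimal sketch of what such a 'Puppetfile' could look
like (the module names, versions, and the patch-list format below are
invented for the example):

  mod 'stdlib',
    :git => 'https://github.com/puppetlabs/puppetlabs-stdlib.git',
    :ref => '4.5.1'

  mod 'apache',
    :git => 'https://github.com/puppetlabs/puppetlabs-apache.git',
    :ref => '1.4.0'

and the separate patch list could simply map a module to the reviews to
cherry-pick on top of it:

  apache:
    - https://review.openstack.org/#/c/123456/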

Regards,

Alex



On Fri, Jul 10, 2015 at 5:25 PM, Alex Schultz  wrote:

> Done. Sorry about that.
>
> -Alex
>
> On Fri, Jul 10, 2015 at 9:22 AM, Simon Pasquier 
> wrote:
>
>> Alex, could you enable the comments for all on your document?
>> Thanks!
>> Simon
>>
>> On Thu, Jul 9, 2015 at 11:07 AM, Bogdan Dobrelya 
>> wrote:
>>
>>> > Hello everyone,
>>> >
>>> > I took some time this morning to write out a document[0] that outlines
>>> > one possible ways for us to manage our upstream modules in a more
>>> > consistent fashion. I know we've had a few emails bouncing around
>>> > lately around this topic of our use of upstream modules and how can we
>>> > improve this. I thought I would throw out my idea of leveraging
>>> > librarian-puppet to manage the upstream modules within our
>>> > fuel-library repository. Ideally, all upstream modules should come
>>> > from upstream sources and be removed from the fuel-library itself.
>>> > Unfortunately because of the way our repository sits today, this is a
>>> > very large undertaking and we do not currently have a way to manage
>>> > the inclusion of the modules in an automated way. I believe this is
>>> > where librarian-puppet can come in handy and provide a way to manage
>>> > the modules. Please take a look at my document[0] and let me know if
>>> > there are any questions.
>>> >
>>> > Thanks,
>>> > -Alex
>>> >
>>> > [0]
>>> https://docs.google.com/document/d/13aK1QOujp2leuHmbGMwNeZIRDr1bFgJi88nxE642xLA/edit?usp=sharing
>>>
>>> The document is great, Alex!
>>> I fully support the idea of starting to adapt fuel-library to
>>> the suggested scheme. The "monitoring" feature of librarian looks
>>> non-intrusive and we have no blockers to start using the librarian
>>> immediately.
>>>
>>> --
>>> Best regards,
>>> Bogdan Dobrelya,
>>> Irc #bogdando
>>>


[openstack-dev] [Mercador] Meeting this week

2015-07-16 Thread Geoff Arnold
The Mercador meeting this week will be a break-out from the Keystone sprint (as 
well as on IRC). See 

https://wiki.openstack.org/wiki/Meetings/MercadorTeamMeeting 


Geoff


[openstack-dev] [Fuel] Add support for Keystone's Fernet encryption keys management: initialization, rotation

2015-07-16 Thread Adam Heczko
Hi Folks,
Keystone supports Fernet tokens, whose payload is encrypted with a 128-bit
AES key.
Although a 128-bit AES key looks secure enough for most OpenStack deployments
[2], one may want to rotate encryption keys according to the already proposed
3-step key rotation scheme (in case keys get compromised, or to meet an
organizational security policy requirement).
Creation and initial AES key distribution between the Keystone HA nodes could
also be challenging, and this complexity could be handled by the Fuel
deployment tool.

In regards to Fuel, I'd like to:
1. Add support for initializing Keystone's Fernet signing keys to Fuel
during OpenStack cluster (Keystone) deployment
2. Add support for rotating Keystone's Fernet signing keys to Fuel
according to some automatic schedule (for example one rotation per week) or
triggered from the Fuel web user interface or through Fuel API.

These two capabilities will be implemented in Fuel by related blueprint [1].
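
For reference, a rough sketch of what the underlying mechanics could look
like on a Keystone node (the keystone-manage subcommands below exist
upstream; the paths, user/group names, and schedule are deployment details
Fuel would own):

  # one-time initialization during cluster deployment (capability 1)
  keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

  # periodic rotation, triggered on schedule or via the Fuel API (capability 2)
  keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone

  # related keystone.conf settings
  [fernet_tokens]
  key_repository = /etc/keystone/fernet-keys/
  max_active_keys = 3

After each rotation the key repository still has to be synced to all
Keystone HA nodes before the new key starts encrypting tokens, which is the
distribution part Fuel would orchestrate.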

[1] https://blueprints.launchpad.net/fuel/+spec/fernet-tokens-support
[2] http://www.eetimes.com/document.asp?doc_id=1279619


Regards,

-- 
Adam Heczko
Security Engineer @ Mirantis Inc.


Re: [openstack-dev] [nova] schedule instance based on CPU frequency ?

2015-07-16 Thread Dugger, Donald D
In re: Normalized units.  I agree that this is dependent upon benchmarking and 
there are many issues with the reliability of benchmarks.  Having said that, I 
think normalized units is a much better metric to be using than CPU frequency.  
The reality is that the end user doesn't care about CPU freq., the end user 
cares about how fast the computer will run his job.  That is the goal of the 
normalized compute units BP, to provide a consistent way of measuring the 
performance of the compute node.
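
To make that concrete, a toy sketch of what scheduling on normalized units
could look like (the numbers, names, and mechanism here are invented for
illustration; the blueprint defines the real design):

  # each compute node reports a benchmark-derived score instead of raw GHz
  REFERENCE_SCORE = 1000.0  # score of an agreed-upon reference machine

  def normalized_units(benchmark_score):
      # 1.0 means "as fast as the reference node"
      return benchmark_score / REFERENCE_SCORE

  # a Core i7 @ 2.0 GHz may well beat a Core i3 @ 2.2 GHz on such a score
  host_scores = {'node1': 1450.0, 'node2': 980.0}
  best = max(host_scores, key=lambda h: normalized_units(host_scores[h]))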

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com] 
Sent: Thursday, July 16, 2015 12:07 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] schedule instance based on CPU frequency ?

On 07/15/2015 04:57 PM, Dugger, Donald D wrote:
> In re: Static CPU frequency.  For modern Intel CPUs this really isn't true.
> Turbo Boost is a feature that allows certain CPUs in certain 
> conditions to actually run at a higher clock rate than what is 
> advertised at power on (the havoc this causes code that depends upon 
> timing based upon CPU spin loops is left as an exercise for the reader 
> :-)

Reasonably recent machines have constant rates for the timestamp counter even 
in the face of CPU frequency variation.  Nobody should be using bare spin loops.

> Having said that, I think CPU frequency is a really bad metric to be 
> making any kind of scheduling decisions on.  A Core I7 running at 2 
> GHz is going to potentially run code faster than a Core I3 running at 
> 2.2 GHz (issues of micro-architecture and cache sizes impact 
> performance much more than minor variations in clock speed).  If you 
> really want to schedule based upon CPU capability you need to define 
> an abstract metric, identify how many of these abstract units apply to 
> the specific compute nodes in your cloud and do scheduling based upon 
> that.  There is actually work going to do just this, check out the BP:
>
> https://blueprints.launchpad.net/nova/+spec/normalized-compute-units

I agree with the general concept, but I'm a bit concerned that the "normalized" 
units will only be accurate for the specific workloads being tested.  Other 
workloads may scale differently, especially if different CPU features are 
exposed (potentially allowing for much more efficient low-level instructions).

Chris



[openstack-dev] [all] Non-responsive upstream libraries (python34 specifically)

2015-07-16 Thread Davanum Srinivas
Hi all,

I ended up here:
https://github.com/linsomniac/python-memcached/issues/54
https://github.com/linsomniac/python-memcached/pull/67

while chasing a keystone py34 CI problem since memcached is running in
our CI VM:
https://review.openstack.org/#/c/177661/

and got word from @zigo that this library and several other libraries
have a long lag time for responses (or never respond!)

What do we do in these situations?

Thanks,
dims

-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-16 Thread Adrian Otto
Jay,

Hyper is a substitute for a Docker host, so I expect it could work equally well 
for all of the current bay types. Hyper’s idea of a “pod” and a Kubernetes 
“pod” are similar, but different. I’m not yet convinced that integrating Hyper 
host creation direct with Magnum (and completely bypassing nova) is a good 
idea. It probably makes more sense to use nova with the ironic virt 
driver to provision Hyper hosts so we can use those as substitutes for Bay 
nodes in our various Bay types. This would fit in the place where we use Fedora 
Atomic today. We could still rely on nova to do all of the machine instance 
management and accounting like we do today, but produce bays that use Hyper 
instead of a Docker host. Everywhere we currently offer CoreOS as an option we 
could also offer Hyper as an alternative, with some caveats.

There may be some caveats/drawbacks to consider before committing to a Hyper 
integration. I’ll be asking those of Peng also on this thread, so keep an eye 
out.

Thanks,

Adrian

On Jul 16, 2015, at 3:23 AM, Jay Lau 
mailto:jay.lau@gmail.com>> wrote:

Thanks Peng, then I can see two integration points for Magnum and Hyper:

1) Once the Hyper and k8s integration is finished, we can deploy k8s in two 
modes: docker mode and hyper mode; the end user can select which mode they want 
to use. For such a case, we do not need to create a new bay, but we may need 
some enhancements to the current k8s bay.

2) After the mesos and hyper integration, we can treat mesos plus hyper as a new 
bay type in magnum, just like what we are doing now for mesos+marathon.

Thanks!

2015-07-16 17:38 GMT+08:00 Peng Zhao mailto:p...@hyper.sh>>:
Hi Jay,

Yes, we are working with the community to integrate Hyper with Mesos and K8S. 
Since Hyper uses Pod as the default job unit, it is quite easy to integrate 
with K8S. Mesos takes a bit more efforts, but still straightforward.

We expect to finish both integration in v0.4 early August.

Best,
Peng

-
Hyper - Make VM run like Container



On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau 
mailto:jay.lau@gmail.com>> wrote:
Hi Peng,


Just want to learn more about Hyper. If we create a hyper bay, can I set up 
multiple hosts in it? If so, who will do the scheduling? Does mesos or 
something else integrate with hyper?

I did not find much info about hyper cluster management.

Thanks.

2015-07-16 9:54 GMT+08:00 Peng Zhao mailto:p...@hyper.sh>>:






-- Original --
From:  “Adrian 
Otto”mailto:adrian.o...@rackspace.com>>;
Date:  Wed, Jul 15, 2015 02:31 AM
To:  “OpenStack Development Mailing List (not for usage 
questions)“mailto:openstack-dev@lists.openstack.org>>;

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

Peng,

On Jul 13, 2015, at 8:37 PM, Peng Zhao mailto:p...@hyper.sh>> 
wrote:

Thanks Adrian!

Hi, all,

Let me recap what is hyper and the idea of hyperstack.

Hyper is a single-host runtime engine. Technically,
Docker = LXC + AUFS
Hyper = Hypervisor + AUFS
where AUFS is the Docker image.

I do not understand the last line above. My understanding is that AUFS == 
UnionFS, which is used to implement a storage driver for Docker. Others exist 
for btrfs, and devicemapper. You select which one you want by setting an option 
like this:

DOCKEROPTS=”-s devicemapper”

Are you trying to say that with Hyper, AUFS is used to provide layered Docker 
image capability that are shared by multiple hypervisor guests?

Peng >>> Yes, AUFS implies the Docker images here.

My guess is that you are trying to articulate that a host running Hyper is a 
1:1 substitute for a host running Docker, and will respond using the Docker 
remote API. This would result in containers running on the same host that have 
a superior security isolation than they would if LXC was used as the backend to 
Docker. Is this correct?

Peng>>> Exactly

Due to the shared-kernel nature of LXC, Docker lacks the necessary isolation 
in a multi-tenant CaaS platform, and this is what Hyper/hypervisor is good at.

And because of this, most CaaS today run on top of IaaS: 
https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/388x275/e286dea1266b46c1999d566b0f9e326b/iaas.png
Hyper enables the native, secure, bare-metal CaaS  
https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/395x244/828ad577dafb3f357e95899e962651b2/caas.png

From the tech stack perspective, Hyperstack turns Magnum to run in parallel with 
Nova, not running atop it.

For this to work, we’d expect to get a compute host from Heat, so if the bay 
type were set to “hyper”, we’d need to use a template that can produce a 
compute host running Hyper. How would that host be produced, if we do not get 
it from nova? Might it make more sense to make a virt driver for nova that 
could produce a Hyper guest on a host already running the nova-compute agent? 
That way Magnum would not need to re-create any of Nova’s functionality.

Re: [openstack-dev] [openstack-announce] End of life for managed stable/icehouse branches

2015-07-16 Thread Thomas Goirand
On 07/16/2015 03:29 PM, Ihar Hrachyshka wrote:
> Working on upstream gate stability obviously does not invalidate any
> effort to introduce distribution CI votes in gerrit, and I would be
> happy to see RDO or Ubuntu meaningfully voting on backports. It's my
> belief though that distribution CI votes cannot serve as a replacement
> for upstream gate.

To me, it'd be way easier to work out a distribution CI *after* a
release than one following trunk. Following trunk is nuts; there's
always the need for new packages and upgrading everything. Just like this
week, upgrading python-taskflow made me try to:
- upgrade mock
- as a consequence setuptools
- package 2 or 3 new 3rd party things
- upgrade some other stuff

For a given release of OpenStack, things aren't moving, so it's easier
to set it up. If the gate is always broken by upstream changes,
distributions would not be able to keep up.

Also, having a CI which builds packages on each commit, and deploys and
tests all of that on a multi-node setup, is exactly what the
Mirantis CI does. I'm not pretending it's easy to do (and in fact, it's
not...), but at least we do it for MOS, so it should be possible to do
for the community version of OpenStack.

Let's hope we find the time to get this done.

Cheers,

Thomas Goirand (zigo)



[openstack-dev] [os-ansible-deployment] [openstack-ansible] Using commit flags appropriately

2015-07-16 Thread Ian Cordasco
Hey everyone,

Now that the project is starting to grow and has some amount of
documentation, we should really start using flags in our commits more
appropriately, e.g., "UpgradeImpact", "DocImpact", etc.

For example, my own recent change to upgrade our keystone module to use v3
should have also had an "UpgradeImpact" flag but only had a DocImpact flag.

This will help consumers of openstack-ansible in the future and it will
help us if/when we start writing release notes for openstack-ansible.
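
For example, a commit message carrying the flags could look like this (the
subject, body, and bug number are made up):

  Convert keystone usage to the v3 API

  All users are now created via the keystone v3 API, so existing
  deployments must re-run the keystone playbooks.

  UpgradeImpact
  DocImpact
  Closes-Bug: #1234567

The flags simply need to appear on their own line in the commit message for
reviewers and tooling to pick them up.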

Cheers,
Ian
sigmavirus24 (irc.freenode.net)



Re: [openstack-dev] [all] Non-responsive upstream libraries (python34 specifically)

2015-07-16 Thread Victor Stinner

Hi,

Le 16/07/2015 17:00, Davanum Srinivas a écrit :

I ended up here:
https://github.com/linsomniac/python-memcached/issues/54
https://github.com/linsomniac/python-memcached/pull/67


Oh, that's my pull request :-) Multiple people asked to merge my pull 
request, and I explicitly pinged the maintainer without _any_ kind of 
feedback. He's been away, or just hasn't cared anymore, since April (2015).


For me, there are two options:

* Fork python-memcached, but keep the same Python module name. This is a 
similar approach to suds/suds-jurko and pam/pam3.


* Switch to pymemcache, which is already compatible with Python 3.
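
For what it's worth, the pymemcache API is close enough that porting is
mostly mechanical; a minimal sketch (assuming a memcached listening on
localhost):

  from pymemcache.client.base import Client

  client = Client(('127.0.0.1', 11211))
  client.set('some_key', 'some_value')
  value = client.get('some_key')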

Victor



Re: [openstack-dev] [all] Non-responsive upstream libraries (python34 specifically)

2015-07-16 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2015-07-16 11:00:33 -0400:
> Hi all,
> 
> I ended up here:
> https://github.com/linsomniac/python-memcached/issues/54
> https://github.com/linsomniac/python-memcached/pull/67
> 
> while chasing a keystone py34 CI problem since memcached is running in
> our CI VM:
> https://review.openstack.org/#/c/177661/
> 
> and got word from @zigo that this library and several other libraries
> have a long lag time for responses (or never respond!)
> 
> What do we do in these situations?

If we identify projects like this, I think it makes sense for us to
start looking into alternatives and porting over to more actively
maintained libraries.

Doug



[openstack-dev] [Fuel] Nailgun agent core reviewers nomination

2015-07-16 Thread Vladimir Sharshov
Hi,

we have created a separate project for fuel-nailgun-agent. At the moment only
I have core-reviewer rights. We badly need more core reviewers here.

I want to nominate Vladimir Kozhukalov for fuel-nailgun-agent core. At the
moment Vladimir is one of the main contributors to nailgun-agent.

Please reply with +1/-1.


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Ton Ngo
+1, seems like the best choice.
Ton Ngo,



From:   Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   07/16/2015 06:33 AM
Subject:Re: [openstack-dev] [magnum] Magnum template manage use
platform VS others as a type?



I am OK with server_type as well.

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-16-15 3:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?



+ 1 about server_type.

I also think it is OK.


Thanks

Best Wishes,


Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193


Follow your heart. You are miracle!


From: Adrian Otto 
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date: 07/16/2015 03:18 PM
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?




I’d be comfortable with server_type.

Adrian
  On Jul 15, 2015, at 11:51 PM, Jay Lau  wrote:

  After more thinking, I agree with Hongbin that instance_type might
  make customer confused with flavor, what about using server_type?

  Actually, nova has concept of server group, the "servers" in this
  group can be vm. pm or container.

  Thanks!

  2015-07-16 11:58 GMT+08:00 Kai Qiang Wu :
Hi Hong Bin,

Thanks for your reply.


I think it is better to discuss the 'platform' Vs instance_type
Vs others case first.
Attach:  initial patch (about the discussion):
https://review.openstack.org/#/c/200401/

My other patches all depend on above patch, if above patch can
not reach a meaningful agreement.

My following patches would be blocked by that.



Thanks


Best Wishes,



Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software
Park,
   No.8 Dong Bei Wang West Road, Haidian District Beijing
P.R.China 100193



Follow your heart. You are miracle!

Hongbin Lu ---07/16/2015 11:47:30 AM---Kai, Sorry
for the confusion. To clarify, I was thinking how to name the
field you proposed in baymo

From: Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage
questions)" 
Date: 07/16/2015 11:47 AM



Subject: Re: [openstack-dev] [magnum] Magnum template manage
use platform VS others as a type?




Kai,

Sorry for the confusion. To clarify, I was thinking how to name
the field you proposed in baymodel [1]. I prefer to drop it and
use the existing field ‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage
questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage
use platform VS others as a type?


Hi HongBin,

I think flavors introduces more confusion than
nova_instance_type or instance_type.


As flavors not have binding with 'vm' or 'baremetal',

Let me summary the initial question:
We have two kinds of templates for kubernetes now,
(as templates in heat not flexible like programming language,
if else etc. And separate templates are easy to maintain)
The two kinds of kubernets templates,  One for boot VM, another
boot Baremetal. 'VM' or Baremetal here is just used for heat
template selection.


1> If used flavor, it is nova specific concept: take two as
example,
  m1.small, or m1.middle.
 m1.small < 'VM' m1.middl

Re: [openstack-dev] [Neutron]Request for help to review a patch

2015-07-16 Thread Damon Wang
Hi Neil,

Nice suggestion :-)

Thanks,
Wei Wang

2015-07-16 15:46 GMT+08:00 :

> As it is a bug fix, perhaps you could add this to the agenda for the next
> Neutron IRC meeting, in the Bugs section?
>
> Regards,
>   Neil
>
>
>   *From: *Damon Wang
> *Sent: *Thursday, 16 July 2015 07:18
> *To: *OpenStack Development Mailing List (not for usage questions)
> *Reply To: *OpenStack Development Mailing List (not for usage questions)
> *Subject: *[openstack-dev] [Neutron]Request for help to review a patch
>
> Hi,
>
> I know that requesting review is not good on the mailing list, but the review
> process of this patch seems frozen despite having gained two +1s :-)
>
> The review url is: https://review.openstack.org/#/c/172875/
>
> Thanks a lot,
> Wei wang


Re: [openstack-dev] [Fuel] Add support for Keystone's Fernet encryption keys management: initialization, rotation

2015-07-16 Thread Davanum Srinivas
Adam,

For 1, do we let the user configure max_active_keys? What's the default?

Please note that there is a risk that an active token may be
invalidated if Fernet key rotation removes keys early. So that's a
potential issue to keep in mind (the relation of token expiry to the
key rotation period).
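
As a rough worked example of that relation (this is my reading of the
rotation scheme, not authoritative guidance): a key is staged, then becomes
the primary encryption key, then stays around as a decryption-only
secondary until max_active_keys pushes it out. That gives roughly:

  token_expiration <= (max_active_keys - 2) * rotation_period

so with max_active_keys=3 and, say, weekly rotation, tokens should be set
to expire within one week to be safe.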

thanks,
dims


On Thu, Jul 16, 2015 at 10:22 AM, Adam Heczko  wrote:
> Hi Folks,
> Keystone supports Fernet tokens which have payload encrypted by AES 128 bit
> key.
> Although AES 128 bit key looks secure enough for most OpenStack deployments
> [2], one may would like to rotate encryption keys according to already
> proposed 3 step key rotation scheme (in case keys get compromised or
> organizational security policy requirement).
> Also creation and initial AES key distribution between Keystone HA nodes
> could be challenging and this complexity could be handled by Fuel deployment
> tool.
>
> In regards to Fuel, I'd like to:
> 1. Add support for initializing Keystone's Fernet signing keys to Fuel
> during OpenStack cluster (Keystone) deployment
> 2. Add support for rotating Keystone's Fernet signing keys to Fuel according
> to some automatic schedule (for example one rotation per week) or triggered
> from the Fuel web user interface or through Fuel API.
>
> These two capabilities will be implemented in Fuel by related blueprint [1].
>
> [1] https://blueprints.launchpad.net/fuel/+spec/fernet-tokens-support
> [2] http://www.eetimes.com/document.asp?doc_id=1279619
>
>
> Regards,
>
> --
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [os-ansible-deployment] [openstack-ansible] Using commit flags appropriately

2015-07-16 Thread Kevin Carter
+1 - I think we should start doing this immediately. 

--

Kevin Carter
Racker, Developer, Hacker @ The Rackspace Private Cloud.


From: Ian Cordasco 
Sent: Thursday, July 16, 2015 10:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [os-ansible-deployment] [openstack-ansible] Using 
commit flags appropriately

Hey everyone,

Now that the project is starting to grow and has some amount of
documentation. We should really start using flags in our commits more
appropriately, e.g., "UpgradeImpact", "DocImpact", etc.

For example, my own recent change to upgrade our keystone module to use v3
should have also had an "UpgradeImpact" flag but only had a DocImpact flag.

This will help consumers of openstack-ansible in the future and it will
help us if/when we start writing release notes for openstack-ansible.

Cheers,
Ian
sigmavirus24 (irc.freenode.net)



Re: [openstack-dev] [Fuel] Nominate Denys Klepikov for fuel-docs core

2015-07-16 Thread Miroslav Anashkin
+1

-- 

*Kind Regards*

*Miroslav Anashkin**L2 support engineer**,*
*Mirantis Inc.*
*+7(495)640-4944 (office receptionist)*
*+1(650)587-5200 (office receptionist, call from US)*
*35b, Bld. 3, Vorontsovskaya St.*
*Moscow**, Russia, 109147.*

www.mirantis.com

manash...@mirantis.com


Re: [openstack-dev] [Fuel] Add support for Keystone's Fernet encryption keys management: initialization, rotation

2015-07-16 Thread Dolph Mathews
On Thu, Jul 16, 2015 at 10:29 AM, Davanum Srinivas 
wrote:

> Adam,
>
> For 1, do we let user configure max_active_keys? what's the default?
>

The default in keystone is 3, simply to support having one key in each of
the three phases of rotation. You can increase it from there per your
desired rotation frequency and token lifespan.


>
> Please note that there is a risk that an active token may be
> invalidated if Fernet key rotation removes keys early. So that's a
> potential issue to keep in mind (relation of token expiry to period of
> key rotation).
>

Keystone's three phase rotation scheme avoids this by allowing you to
pre-stage keys across the cluster before using them for encryption.
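
To illustrate (a sketch assuming the default key repository location; key 0
is always the staged key and the highest-numbered key is the primary):

  $ ls /etc/keystone/fernet-keys/
  0  1        # 1 = primary (encrypts new tokens), 0 = staged
  $ keystone-manage fernet_rotate ...
  $ ls /etc/keystone/fernet-keys/
  0  1  2     # 2 = new primary, 1 = secondary (decrypt-only), 0 = re-staged

Because the staged key reaches every node before it is ever promoted to
primary anywhere, no node is asked to decrypt a token with a key it has not
yet seen.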


>
> thanks,
> dims
>
>
> On Thu, Jul 16, 2015 at 10:22 AM, Adam Heczko 
> wrote:
> > Hi Folks,
> > Keystone supports Fernet tokens which have payload encrypted by AES 128
> bit
> > key.
> > Although AES 128 bit key looks secure enough for most OpenStack
> deployments
> > [2], one may would like to rotate encryption keys according to already
> > proposed 3 step key rotation scheme (in case keys get compromised or
> > organizational security policy requirement).
> > Also creation and initial AES key distribution between Keystone HA nodes
> > could be challenging and this complexity could be handled by Fuel
> deployment
> > tool.
> >
> > In regards to Fuel, I'd like to:
> > 1. Add support for initializing Keystone's Fernet signing keys to Fuel
> > during OpenStack cluster (Keystone) deployment
> > 2. Add support for rotating Keystone's Fernet signing keys to Fuel
> according
> > to some automatic schedule (for example one rotation per week) or
> triggered
> > from the Fuel web user interface or through Fuel API.
> >
> > These two capabilities will be implemented in Fuel by related blueprint
> [1].
> >
> > [1] https://blueprints.launchpad.net/fuel/+spec/fernet-tokens-support
> > [2] http://www.eetimes.com/document.asp?doc_id=1279619
> >
> >
> > Regards,
> >
> > --
> > Adam Heczko
> > Security Engineer @ Mirantis Inc.
> >
> >
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>


Re: [openstack-dev] [neutron][security-group] rules for filter mac-addresses

2015-07-16 Thread Sean M. Collins
On Tue, Jul 14, 2015 at 03:31:49AM PDT, Kevin Benton wrote:
> Unfortunately the security groups API does not have mac-level rules right
> now.

There is also the fact that the Security Group API is limited (by
design) to do fairly simple things, and also that the model has similar
fields to the AWS API for Security Groups.

Overall, I want to try and minimize (or even avoid) any changes to the
Security Group API, and try to collect use cases for more complex
filtering to see if the FwaaS API can satisfy them - since it is an API that
we have more freedom to change and modify compared to an API that is
named the same as something in AWS - and it is meant for more complex
use cases.

http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html
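
As a hedged illustration of where such use cases would land, a FwaaS v1
rule today is expressed like this (note there is no mac-address match
either, which is exactly the kind of gap worth recording in the thread
above):

  neutron firewall-rule-create --protocol tcp --destination-port 80 \
      --action deny --name deny-http
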
-- 
Sean M. Collins



Re: [openstack-dev] [Fuel] Nailgun agent core reviewers nomination

2015-07-16 Thread Sergii Golovatiuk
+1

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Thu, Jul 16, 2015 at 10:20 AM, Vladimir Sharshov 
wrote:

> Hi,
>
> we have created separate project for fuel-nailgun-agent. At now moment
> only i have core-reviewer rights.We hardly need more core reviewers here.
>
> I want to nominate Vladimir Kozhukalov to fuel-nailgun-agent core. At now
> moment Vladimir is one of the main contributor in nailgun-agent.
>
> Please reply with +1/-1.
>


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Fox, Kevin M
Wait... so the issue is if you were to just use nova flavor, you don't have 
enough information to choose a set of templates that may be more optimal for 
that flavor type (like vm's or bare metal)? Is this a NaaS vs flatdhcp kind of 
thing? I just took a quick skim of the heat templates and it wasn't really 
clear why the template needs to know.

If that sort of thing is needed, maybe allow a heat environment or the template 
set to be tagged onto nova flavors in Magnum by the admin, and then the user 
can be concerned only with nova flavors? They are used to dealing with them. 
Sahara and Trove do some similar things I think.

Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Wednesday, July 15, 2015 8:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Kai,

Sorry for the confusion. To clarify, I was thinking how to name the field you 
proposed in baymodel [1]. I prefer to drop it and use the existing field 
‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?


Hi HongBin,

I think flavors introduces more confusion than nova_instance_type or 
instance_type.


As flavors not have binding with 'vm' or 'baremetal',

Let me summary the initial question:
  We have two kinds of templates for kubernetes now,
(as templates in heat not flexible like programming language, if else etc. And 
separate templates are easy to maintain)
The two kinds of kubernets templates,  One for boot VM, another boot Baremetal. 
'VM' or Baremetal here is just used for heat template selection.


1> If used flavor, it is nova specific concept: take two as example,
m1.small, or m1.middle.
   m1.small < 'VM' m1.middle < 'VM'
   Both m1.small and m1.middle can be used in 'VM' environment.
So we should not use m1.small as a template identification. That's why I think 
flavor not good to be used.


2> @Adrian, we have --flavor-id field for baymodel now, it would picked up by 
heat-templates, and boot instances with such flavor.


3> Finally, I think instance_type is better.  instance_type can be used as heat 
templates identification parameter.

instance_type = 'vm', it means such templates fit for normal 'VM' heat stack 
deploy

instance_type = 'baremetal', it means such templates fit for ironic baremetal 
heat stack deploy.





Thanks!


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu mailto:hongbin...@huawei.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 07/16/2015 04:44 AM
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?





+1 for the idea of using Nova flavor directly.

The reason we introduced the “platform” field to indicate “vm” or “baremetal” is 
that magnum needs to map a bay to a Heat template (which will be used to provision 
the bay). Currently, Magnum has three layers of mapping:
* platform: vm or baremetal
* os: atomic, coreos, …
* coe: kubernetes, swarm or mesos

I think we could just replace “platform” with “flavor”, if we can populate a 
list of flavors for VM and another list of flavors for baremetal (we may need 
an additional list of flavors for container in the future for the nested 
container use case). Then, the new three layers would be:
* flavor: baremetal, m1.small, m1.medium,  …
* os: atomic, coreos, ...
* coe: kubernetes, swarm or mesos

This approach can avoid introducing a new field in baymodel to indicate what 
Nova flavor already indicates.
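
To make the mapping concrete, a toy sketch (the file names are invented and
this is not how the magnum code is actually organized):

  # the three layers above form the lookup key for the Heat template
  TEMPLATE_MAP = {
      ('vm', 'atomic', 'kubernetes'):        'kubecluster-vm.yaml',
      ('baremetal', 'atomic', 'kubernetes'): 'kubecluster-ironic.yaml',
      ('vm', 'coreos', 'swarm'):             'swarmcluster-vm.yaml',
  }

  def heat_template_for(flavor_kind, os, coe):
      return TEMPLATE_MAP[(flavor_kind, os, coe)]

The whole debate is then only about what the first element of that key is
called, and whether a flavor name can stand in for it.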

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: July-15-15 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Maybe somehow I missed the point, but why not just use ra

[openstack-dev] [nova] Non-Priorty Feature Proposal Freeze has happened

2015-07-16 Thread John Garbutt
Hi,

Just a quick heads up about today's blueprint activities.

The main aims here are:
* focus on the agreed priorities (including bug fixes) during liberty-3
* optimise for max number of complete blueprints
* focus review effort on the things that are already being reviewed now

The following documents most of the movements:
https://etherpad.openstack.org/p/liberty-nova-non-priority-feature-proposal-freeze

Removing all the non-priority blueprints that don't have code up for
review moved us from 104 approved blueprints down to 87. A small
handful I have marked as partial and up for review.

Now is a great time to start helping out with more code reviews, and
to try to get as many of these features merged before the Non-Priority
Feature Freeze on 30th July.

As always, any questions or issues, please do get in touch.

Thanks,
Johnthetubaguy



[openstack-dev] [Fuel] Zabbix in deployment tasks

2015-07-16 Thread Sergii Golovatiuk
Hi,

Working on granular deployment, I realized we still call zabbix.pp in
deployment tasks. As far as I know, zabbix was moved to a plugin. Should we
remove zabbix from:
1. Deployment graph
2. fixtures
3. Tests
4. Any other places

Are we going to clean up zabbix code as part of migration to plugin?

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser


Re: [openstack-dev] [Nova] Device names supplied to the boot request

2015-07-16 Thread Nikola Đipanov
On 07/16/2015 11:24 AM, Sean Dague wrote:
> On 07/15/2015 01:41 PM, Andrew Laski wrote:
>> On 07/15/15 at 12:19pm, Matt Riedemann wrote:
> 
>>> The other part of the discussion is around the API changes, not just
>>> for libvirt, but having a microversion that removes the device from
>>> the request so it's no longer optional and doesn't provide some false
>>> sense that it works properly all of the time.  We talked about this in
>>> the nova channel yesterday and I think the thinking was we wanted to
>>> get agreement on dropping that with a microversion before moving
>>> forward with the libvirt change you have to ignore the requested
>>> device name.
>>>
>>> From what I recall, this was supposed to really only work reliably for
>>> xen but now it actually might not, and would need to be tested again.
>>> Seems we could start by checking the xen CI to see if it is running
>>> the test_minimum_basic scenario test or anything in
>>> test_attach_volume.py in Tempest.
>>
>> This doesn't really work reliably for xen either, depending on what is
>> being done.  For the xenapi driver Nova converts the device name
>> provided into an integer based on the trailing letter, so 'vde' becomes
>> 4, and asks xen to mount the device based on that int.  Xen does honor
>> that integer request so you'll get an 'e' device, but you could be
>> asking for hde and get an xvde or vice versa.
> 
> So this sounds like it's basically not working today. For Linux guests
> it really can't work without custom in guest code anyway, given how
> device enumeration works.
> 
> That feels to me like we remove it from the API with a microversion, and
> when we do that just comment that trying to use this before that
> microversion is highly unreliable (possibly dangerous) and may just
> cause tears.
> 

The problem with outright banning it is that we still have to support
people who want to use the older version, meaning all of the code would
have to support it indefinitely (3.0 is not even on the horizon). Given
the shady gains, I can't help but feel that this is needless complexity.

Also, not being able to specify device names would make it impossible to
implement certain features that the EC2 API can provide, such as overriding
the image block devices without significant effort.

> ...
> 
> On a slight tangent, probably a better way to provide mount stability to
> the guest is with FS labels. libvirt is already labeling the filesystems
> it creates, and xenserver probably could as well. The infra folks ran
> into an issue yesterday
> http://status.openstack.org//elastic-recheck/#1475012 where using that
> info was their fix.
> 
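
(For reference, the label-based approach mentioned above looks roughly like
this; the device and label names are invented for the example:

  mkfs.ext4 -L data0 /dev/vdb     # label assigned when the filesystem is made
  mount LABEL=data0 /mnt          # guest mounts by label, not by device name

so the guest no longer cares whether the disk enumerates as vdb or sdb.)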

I think the reason device_names are exposed in the API is that that was
the quickest way to provide a sort of an ID of a block device attached
to a certain instance that further API calls can then act upon.

> It's not the same thing as deterministic devices, but deterministic
> devices really aren't a thing on first boot unless you have guest agent
> code, or only boot with one disk and hot plug the rest carefully.
> Neither are really fun answers.
> 
>   -Sean
> 




[openstack-dev] [all][api] New API Guidelines ready for cross project review

2015-07-16 Thread michael mccune

hey all,

we have 4 API Guidelines that are ready for final review.

1. Add generic name of each project for terms
https://review.openstack.org/#/c/196918/

2. Add new guideline for HTTP Headers
https://review.openstack.org/#/c/186526/

3. Adds guidance on request bodies with different Methods
https://review.openstack.org/#/c/184358/

4. Adding 5xx guidance
https://review.openstack.org/#/c/183698/

if the API Working Group hasn't received any further feedback, we'll 
merge them on July 23.


thanks,
mike



Re: [openstack-dev] [Nova] Device names supplied to the boot request

2015-07-16 Thread Nikola Đipanov
On 07/16/2015 05:47 PM, Nikola Đipanov wrote:
> 
> Also, not being able to specify device names would make it impossible to
> implement certain features that EC2 API can provide, such as overriding
> the image block devices without significant effort.
> 

I forgot to add links that explain this in more detail [1][2]

[1] https://review.openstack.org/#/c/190324/
[2] https://bugs.launchpad.net/nova/+bug/1370250






[openstack-dev] [rpm-packaging] Meeting minutes IRC meeting July 16th

2015-07-16 Thread Dirk Müller
Hi,

extraction of the meeting minutes from
https://etherpad.openstack.org/p/openstack-rpm-packaging

we agreed to switch to the standard way of doing meeting minutes once
the infra review gets merged. The minutes are a bit short; feel free
to reach out to number80 or me in case you have questions.

Greetings,
Dirk


Attendees:
derekh, dirk, toabctl, number80, apevec, jruzicka


* Regular meeting schedule: Thursday 4pm CET every two weeks
* define short term goal clearer vs long term goal
* topics that are ongoing:
* continuous builds
* generic rpm specs
* shared stable maintenance

agreed:

short term goal for liberty:
* have openstackclient and its deps templatized and packaged so that
we can directly create downstream spec files that build and work
* deliverable will be spec files

AI: need to define testing criteria for gating

longer term goal:

* kill downstream packaging efforts if package exists in rpm-packaging
* maintain packaging for the stable/ lifecycle
** perhaps also extend stable/ branch lifecycle to satisfy our needs
(not sure, worth a try)
* continuous builds
* revisit importing downstream packages in a two-month timeframe
* gather gating idea on wiki page

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why is osapi_v3.enabled = False by default?

2015-07-16 Thread Matt Riedemann



On 7/16/2015 4:57 AM, Sean Dague wrote:

On 07/15/2015 08:12 PM, GHANSHYAM MANN wrote:

On Thu, Jul 16, 2015 at 3:03 AM, Sean Dague  wrote:

On 07/15/2015 01:44 PM, Matt Riedemann wrote:

The osapi_v3.enabled option is False by default [1] even though it's
marked as the CURRENT API and the v2 API is marked as SUPPORTED (and
we've frozen it for new feature development).

I got looking at this because osapi_v3.enabled is True in nova.conf in
both the check-tempest-dsvm-nova-v21-full job and non-v21
check-tempest-dsvm-full job, but only in the v21 job is
"x-openstack-nova-api-version: '2.1'" used.

Shouldn't the v2.1 API be enabled by default now?

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n44


Honestly, we should probably deprecate osapi_v3.enabled and make it
osapi_v21 (or osapi_v2_microversions) so as not to confuse people further.



Nice catch. We might have just forgotten to make it default to True.

How about just deprecating it, removing it in N, and making v2.1 enabled all
the time (irrespective of osapi_v3.enabled), since it is CURRENT now.


Yeh, that's probably a fine approach as well. I don't think we need an
option any more here.

-Sean



OK, I'll push up a change, thanks everyone!
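
For reference, the deprecation could look something like this with oslo.config
(a sketch only; the option wiring and help text are illustrative, not the
actual patch):

    from oslo_config import cfg

    v21_opts = [
        cfg.BoolOpt('enabled',
                    default=True,  # flip the default so v2.1 is on
                    deprecated_for_removal=True,
                    help='Whether the v2.1 API is enabled or not'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(v21_opts, group='osapi_v3')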

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-announce] End of life for managed stable/icehouse branches

2015-07-16 Thread Ihar Hrachyshka
On 07/16/2015 05:08 PM, Thomas Goirand wrote:
> On 07/16/2015 03:29 PM, Ihar Hrachyshka wrote:
>> Working on upstream gate stability obviously does not invalidate
>> any effort to introduce distribution CI votes in gerrit, and I
>> would be happy to see RDO or Ubuntu meaningfully voting on
>> backports. It's my belief though that distribution CI votes
>> cannot serve as a replacement for upstream gate.
> 
> To me, it'd be way easier to work out a distribution CI *after* a
> release than one following trunk. Following trunk is nuts; there's
> always the need for new packages and to upgrade everything. Just like
> this week, upgrading python-taskflow made me try to: - upgrade mock -
> as a consequence, setuptools - package 2 or 3 new 3rd party things -
> upgrade some other stuff
> 
> For a given release of OpenStack, things aren't moving, so it's
> easier to set up. If the upstream gate is always broken by trunk
> changes, a distribution CI on a released version would not be.
> 
> Also, having a CI which does build of packages on each commit, and
> the deployment + test of all that on a multi-node setup, is exactly
> what the Mirantis CI does. I'm not pretending it's easy to do (and
> in fact, it's not...), but at least, we do it for MOS, so it should
> be possible to do for the community version of OpenStack.
> 
> Let's hope we find the time to get this done.
> 

It would be great to see that one voting in upstream. And the gate
working. And then there won't be a particular reason to drop older
branches.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][release] New library release request process

2015-07-16 Thread Doug Hellmann
Excerpts from Anne Gentle's message of 2015-07-16 08:14:54 -0500:
> On Thu, Jul 16, 2015 at 6:58 AM, Doug Hellmann 
> wrote:
> 
> > Excerpts from Andreas Jaeger's message of 2015-07-16 08:11:48 +0200:
> > > Doug,
> > >
> > > I'm missing openstackdocstheme and openstack-doc-tools in your import.
> > > How do you want to handle these?
> >
> > There are some tools in the repository to extract the history from a
> > repo. I'll see what I can do for those 2 today.
> >
> 
> 
> Thanks Doug (and Andreas for asking). I was going to look myself since we
> need a release of openstackdocstheme pretty soon.
> 
> Much appreciation,
> Anne

One sticking point with these tools is that they don't fit into our
current definition of a deliverable, which is "N repos that share a
launchpad project and version number." I think we have a couple of
options to deal with that:

1. Create separate launchpad projects for each of them, so they can be
managed independently like the other projects.

2. Start releasing and versioning them together.

3. Add support for a deliverable type with no launchpad project, which
would skip the launchpad updates.

I like option 1, with 3 being a fallback. I don't really see option 2 as
viable.

What does everyone else think?

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Device names supplied to the boot request

2015-07-16 Thread Matt Riedemann



On 7/16/2015 11:47 AM, Nikola Đipanov wrote:

On 07/16/2015 11:24 AM, Sean Dague wrote:

On 07/15/2015 01:41 PM, Andrew Laski wrote:

On 07/15/15 at 12:19pm, Matt Riedemann wrote:



The other part of the discussion is around the API changes, not just
for libvirt, but having a microversion that removes the device from
the request so it's no longer optional and doesn't provide some false
sense that it works properly all of the time.  We talked about this in
the nova channel yesterday and I think the thinking was we wanted to
get agreement on dropping that with a microversion before moving
forward with the libvirt change you have to ignore the requested
device name.

 From what I recall, this was supposed to really only work reliably for
xen but now it actually might not, and would need to be tested again.
Seems we could start by checking the xen CI to see if it is running
the test_minimum_basic scenario test or anything in
test_attach_volume.py in Tempest.


This doesn't really work reliably for xen either, depending on what is
being done.  For the xenapi driver Nova converts the device name
provided into an integer based on the trailing letter, so 'vde' becomes
4, and asks xen to mount the device based on that int.  Xen does honor
that integer request so you'll get an 'e' device, but you could be
asking for hde and get an xvde or vice versa.
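
As a toy illustration of that conversion (not the driver's actual code): only
the trailing letter of the requested name survives, which is why 'hde' and
'xvde' collide.

    def device_number(device_name):
        # 'vde' -> 4; the disk prefix is ignored entirely
        return ord(device_name[-1]) - ord('a')

    assert device_number('vde') == 4
    assert device_number('hde') == device_number('xvde') == 4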


So this sounds like it's basically not working today. For Linux guests
it really can't work without custom in guest code anyway, given how
device enumeration works.

That feels to me like we remove it from the API with a microversion, and
when we do that just comment that trying to use this before that
microversion is highly unreliable (possibly dangerous) and may just
cause tears.



The problem with outright banning it is that we still have to support
people who want to use the older version meaning all of the code would
have to support it indefinitely (3.0 is not even on the horizon), given
the shady gains, I can't help but feel that this is needless complexity.


Huh?  That's what the microversion in the v2.1 API is for - we add a
microversion that drops support for the device name in the API request;
if you're using a version of the API before that, we log a warning that
it's unreliable and probably shouldn't be used.  With the microversion
you're opting in to using it.




Also, not being able to specify device names would make it impossible to
implement certain features that EC2 API can provide, such as overriding
the image block devices without significant effort.


Huh? (x2)  With your change you're ignoring the requested device name 
anyway, so how does this matter?  Also, the ec2 API is moving out of 
tree so do we care what that means for the openstack compute API?





...

On a slight tangent, probably a better way to provide mount stability to
the guest is with FS labels. libvirt is already labeling the filesystems
it creates, and xenserver probably could as well. The infra folks ran
into an issue yesterday
http://status.openstack.org//elastic-recheck/#1475012 where using that
info was their fix.



I think the reason device_names are exposed in the API is that that was
the quickest way to provide a sort of an ID of a block device attached
to a certain instance that further API calls can then act upon.


It's not the same thing as deterministic devices, but deterministic
devices really aren't a thing on first boot unless you have guest agent
code, or only boot with one disk and hot plug the rest carefully.
Neither are really fun answers.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Barbican : Need help as to how to test the integration of Barbican with the HSM

2015-07-16 Thread Asha Seshagiri
Hi All ,

I would need help to test the integration of Barbican with HSM.
I have configured the Barbican client to connect to the HSM server by
registering the Barbican IP with the HSM server and have assigned the
partition. I have modified the barbican.conf file with the following changes:

# = Secret Store Plugin ===
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = store_crypto

# = Crypto plugin ===
[crypto]
namespace = barbican.crypto.plugin
enabled_crypto_plugins = p11_crypto

[simple_crypto_plugin]
# the kek should be a 32-byte value which is base64 encoded
kek = 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY='

[dogtag_plugin]
pem_path = '/etc/barbican/kra_admin_cert.pem'
dogtag_host = localhost
dogtag_port = 8443
nss_db_path = '/etc/barbican/alias'
nss_db_path_ca = '/etc/barbican/alias-ca'
nss_password = 'password123'
simple_cmc_profile = 'caOtherCert'

[p11_crypto_plugin]
# Path to vendor PKCS11 library
library_path = '/usr/lib/libCryptoki2_64.so'
# Password to login to PKCS11 session
login = 'test123'
# Label to identify master KEK in the HSM (must not be the same as HMAC label)
mkek_label = 'an_mkek'
# Length in bytes of master KEK
mkek_length = 32
# Label to identify HMAC key in the HSM (must not be the same as MKEK label)
hmac_label = 'my_hmac_label'
# HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
# slot_id = 1

I would need help as to how to test whether the integration of Barbican with
HSM is successful.
Where is the encrypted KEK stored, and how do we know the KEK is generated
on the HSM side and that the same KEK is used for encryption/decryption of
secrets in Barbican?
I would also like to know if I have made the right changes required for
integration with HSM.

I was able to generate and retrieve the secret .

[root@HSM-Client ~]# curl -X POST -H 'content-type:application/json' -H
'X-Project-Id: 12345' -d '{"secret": {"name":"secretname", "algorithm":
"aes", "bit_length": 256, "mode": "cbc"}}'
http://184.172.96.189:9311/v1/secrets

{"secret_ref":
"http://184.172.96.189:9311/v1/secrets/275b99ad-71f5-4e4c-8bda-5c2b011c265b"}
[root@HSM-Client ~]#
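
One way to sanity-check the round trip through the p11_crypto plugin is to
store and then retrieve an actual payload (a metadata-only secret like the one
above never exercises decryption), along these lines:

    # retrieve the decrypted payload of a stored secret; if it comes back
    # intact, encryption/decryption via the HSM-held KEK is working
    curl -H 'X-Project-Id: 12345' -H 'Accept: text/plain' \
      http://184.172.96.189:9311/v1/secrets/<secret-uuid>/payload

The mkek_label and hmac_label objects should also be visible in the HSM
partition via the vendor's administration tools, and the Barbican database
should only ever contain wrapped (encrypted) KEKs, never the master KEK itself.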

Any help would highly be appreciated.
-- 
Thanks and Regards,
Asha Seshagiri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Time to remove Python2.6 jobs from master

2015-07-16 Thread Andreas Jaeger

On 07/14/2015 11:58 AM, Luigi Toscano wrote:

On Tuesday 14 of July 2015 10:33:18 Ihar Hrachyshka wrote:

On 07/14/2015 01:46 AM, Perry, Sean wrote:

-Original Message- From: Doug Hellmann

I don't *want* to keep 2.6 support around, and I do understand
that the requirements work will be made easier.  I'm just trying
to understand what other impact dropping it will have.


It will break RHEL 5 (maybe early 6 too) and older RH systems.
Ubuntu older than 9 I think (which is beyond unsupported). Not sure
about other Linux dists.

Basically if RHEL 5 is no longer a valid target and we are sure all
of the 6s have updated then let's nuke it from orbit.


I don't believe there was any release of RHEL-OSP that targeted RHEL
5. As for RHEL 6, the last version that got support for it was OSP5
which is based on Icehouse.

Speaking of RDO, there were attempts to get nova pieces of Juno
backported to RHEL 6 (mostly for CERN). Other than that, I don't think
anyone considers to run anything Kilo+ on RHEL 6, and it will probably
fail to work properly since a lot of underlying platform components in
RHEL 6 would be not ready for new features. (RHEL-OSP could
potentially get all the needed pieces in place, but there was a
decision not to go this route and instead require RHEL 7 for anything
Juno+).


Some Sahara plugins (HDP, CDH) support only CentOS6/RHEL6. In order to
generate images for them, even with diskimage-builder, some scripts need to
run on the guest directly. So at least diskimage-builder should keep Python
2.6 support for guests (RHEL 6 ships with Python 2.6).


In that case, please comment on
https://review.openstack.org/#/c/201295/

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps

2015-07-16 Thread gord chung



On 16/07/2015 12:05 AM, Angus Salkeld wrote:
Will this be hidden within the client so that, as long as we have aodh
enabled in our gate's devstack, this will just work?


yes, we discussed this last week during our midcycle. the plan going 
forward is to allow the existing Ceilometer alarm functionality to 
persist as is until we have a documented process to transition over to 
the split-out Aodh service. We are currently looking at the existing 
integration cases and have them prioritised. Once we have all integrations 
resolved we will announce the code removal. It is currently targeted to be 
removed in the M* cycle, dependent on the current integration work.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Service Chain project IRC meeting minutes - 07/16/2015

2015-07-16 Thread Cathy Zhang
Hi Everyone,

Thanks for joining the service chaining project meeting on 7/16/2015. Here is 
the link to the meeting logs:
http://eavesdrop.openstack.org/meetings/service_chaining/2015/

Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel Zabbix in deployment tasks

2015-07-16 Thread Mike Scherbakov
I thought it was done...
Stas - do you know anything about it?

On Thu, Jul 16, 2015 at 9:18 AM Sergii Golovatiuk 
wrote:

> Hi,
>
> Working on granular deployment, I realized we still call zabbix.pp in
> deployment tasks. As far as I know zabbix was moved to a plugin. Should we
> remove zabbix from
> 1. Deployment graph
> 2. fixtures
> 3. Tests
> 4. Any other places
>
> Are we going to clean up zabbix code as part of migration to plugin?
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>  __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Denys Klepikov for fuel-docs core

2015-07-16 Thread Mike Scherbakov
+1

On Thu, Jul 16, 2015 at 8:40 AM Miroslav Anashkin 
wrote:

> +1
>
> --
>
> *Kind Regards*
>
> *Miroslav Anashkin**L2 support engineer**,*
> *Mirantis Inc.*
> *+7(495)640-4944 (office receptionist)*
> *+1(650)587-5200 (office receptionist, call from US)*
> *35b, Bld. 3, Vorontsovskaya St.*
> *Moscow**, Russia, 109147.*
>
> www.mirantis.com
>
> manash...@mirantis.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Device names supplied to the boot request

2015-07-16 Thread Nikola Đipanov
On 07/16/2015 06:35 PM, Matt Riedemann wrote:
> 
> 
> On 7/16/2015 11:47 AM, Nikola Đipanov wrote:
>> On 07/16/2015 11:24 AM, Sean Dague wrote:
>>> On 07/15/2015 01:41 PM, Andrew Laski wrote:
 On 07/15/15 at 12:19pm, Matt Riedemann wrote:
>>> 
> The other part of the discussion is around the API changes, not just
> for libvirt, but having a microversion that removes the device from
> the request so it's no longer optional and doesn't provide some false
> sense that it works properly all of the time.  We talked about this in
> the nova channel yesterday and I think the thinking was we wanted to
> get agreement on dropping that with a microversion before moving
> forward with the libvirt change you have to ignore the requested
> device name.
>
>  From what I recall, this was supposed to really only work reliably
> for
> xen but now it actually might not, and would need to be tested again.
> Seems we could start by checking the xen CI to see if it is running
> the test_minimum_basic scenario test or anything in
> test_attach_volume.py in Tempest.

 This doesn't really work reliably for xen either, depending on what is
 being done.  For the xenapi driver Nova converts the device name
 provided into an integer based on the trailing letter, so 'vde' becomes
 4, and asks xen to mount the device based on that int.  Xen does honor
 that integer request so you'll get an 'e' device, but you could be
 asking for hde and get an xvde or vice versa.
>>>
>>> So this sounds like it's basically not working today. For Linux guests
>>> it really can't work without custom in guest code anyway, given how
>>> device enumeration works.
>>>
>>> That feels to me like we remove it from the API with a microversion, and
>>> when we do that just comment that trying to use this before that
>>> microversion is highly unreliable (possibly dangerous) and may just
>>> cause tears.
>>>
>>
>> The problem with outright banning it is that we still have to support
>> people who want to use the older version meaning all of the code would
>> have to support it indefinitely (3.0 is not even on the horizon), given
>> the shady gains, I can't help but feel that this is needless complexity.
> 
> Huh?  That's what the microversion in the v2.1 API is for - we add a
> microversion that drops support for the device name in the API request,
> if you're using a version of the API before that we log a warning that
> it's unreliable and probably shouldn't be used.  With the microversion
> you're opting in to using it.
> 

so are you saying that we don't have to support actually persisting the
user-supplied device names for requests that ask for version < N? If so,
then my change can be accompanied with a version bump and we're good to go.

If we have to support both and somehow notify the compute that it should
persist the requested device names some of the time, then I am very much
against that.

IMHO microversions should not be used for fixing utter brokenness; it
should just be fixed. Keeping bug-for-bug compatibility is not something we
should do, but that's a different discussion.

>>
>> Also, not being able to specify device names would make it impossible to
>> implement certain features that EC2 API can provide, such as overriding
>> the image block devices without significant effort.
> 
> Huh? (x2)  With your change you're ignoring the requested device name
> anyway, so how does this matter?  Also, the ec2 API is moving out of
> tree so do we care what that means for the openstack compute API?
>

Please look at the patch and the bug I link in the follow up email
(copied here for your convenience). It should be clearer then which
features cannot possibly work [1][2].

As for supporting the EC2 API - I don't know the answer to that if we
decide we don't care about them - that's cool with me. Even without that
as a consideration, I still think the current proposed patch is the best
way forward.

[1] https://review.openstack.org/#/c/190324/
[2] https://bugs.launchpad.net/nova/+bug/1370250

>>
>>> ...
>>>
>>> On a slight tangent, probably a better way to provide mount stability to
>>> the guest is with FS labels. libvirt is already labeling the filesystems
>>> it creates, and xenserver probably could as well. The infra folks ran
>>> into an issue yesterday
>>> http://status.openstack.org//elastic-recheck/#1475012 where using that
>>> info was their fix.
>>>
>>
>> I think the reason device_names are exposed in the API is that that was
>> the quickest way to provide a sort of an ID of a block device attached
>> to a certain instance that further API calls can then act upon.
>>
>>> It's not the same thing as deterministic devices, but deterministic
>>> devices really aren't a thing on first boot unless you have guest agent
>>> code, or only boot with one disk and hot plug the rest carefully.
>>> Neither are really fun answers.
>>>
>>> -Sean
>>>
>>
>>
>> _

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Adrian Otto
To be clear we have two pursuits on this thread:

1) What to rename bay.platform to.
2) How we might eliminate the attribute, or replace it with something more 
intuitive.
intuitive

We have a consensus now on how to address #1. My direction to Kennan is to 
proceed using server_type as the new attribute name. If anyone disagrees, you 
can let us know now, or submit a subsequent patch to address that concern, and 
we can vote on it in Gerrit.

On this subject of potentially eliminating, or replacing this attribute with 
something else, let’s continue to discuss that.

One key issue is that our current HOT file format does not have any facility 
for conditional logic evaluation, so if the Bay orchestration differs between 
various server_type values, we need to select the appropriate value based on 
the way the bay is created. I’m open to hearing suggestions for implementing 
any needed conditional logic, if we can put it into a better place.
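
As a sketch of where the attribute bites (names here are illustrative, not
Magnum's actual code), template selection today is essentially a lookup keyed
on server_type:

    TEMPLATE_FOR_SERVER_TYPE = {
        'vm': 'templates/kubecluster.yaml',
        'baremetal': 'templates/kubecluster-ironic.yaml',
    }

    def template_for(baymodel):
        # default to 'vm' when the attribute is not set
        return TEMPLATE_FOR_SERVER_TYPE[getattr(baymodel, 'server_type', 'vm')]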

Adrian

On Jul 16, 2015, at 8:54 AM, Fox, Kevin M 
mailto:kevin@pnnl.gov>> wrote:

Wait... so the issue is if you were to just use nova flavor, you don't have 
enough information to choose a set of templates that may be more optimal for 
that flavor type (like vm's or bare metal)? Is this a NaaS vs flatdhcp kind of 
thing? I just took a quick skim of the heat templates and it wasn't really 
clear why the template needs to know.

If that sort of thing is needed, maybe allow a heat environment or the template 
set to be tagged onto nova flavors in Magnum by the admin, and then the user 
can be concerned only with nova flavors? They are used to dealing with them. 
Sahara and Trove do some similar things I think.

Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Wednesday, July 15, 2015 8:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Kai,

Sorry for the confusion. To clarify, I was thinking how to name the field you 
proposed in baymodel [1]. I prefer to drop it and use the existing field 
‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Hi HongBin,

I think flavors introduces more confusion than nova_instance_type or 
instance_type.


As flavors not have binding with 'vm' or 'baremetal',

Let me summary the initial question:
  We have two kinds of templates for kubernetes now,
(as templates in heat not flexible like programming language, if else etc. And 
separate templates are easy to maintain)
The two kinds of kubernets templates,  One for boot VM, another boot Baremetal. 
'VM' or Baremetal here is just used for heat template selection.


1> If used flavor, it is nova specific concept: take two as example,
m1.small, or m1.middle.
   m1.small < 'VM' m1.middle < 'VM'
   Both m1.small and m1.middle can be used in 'VM' environment.
So we should not use m1.small as a template identification. That's why I think 
flavor not good to be used.


2> @Adrian, we have --flavor-id field for baymodel now, it would picked up by 
heat-templates, and boot instances with such flavor.


3> Finally, I think instance_type is better.  instance_type can be used as heat 
templates identification parameter.

instance_type = 'vm', it means such templates fit for normal 'VM' heat stack 
deploy

instance_type = 'baremetal', it means such templates fit for ironic baremetal 
heat stack deploy.





Thanks!


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Hongbin Lu ---07/16/2015 04:44:14 AM---+1 for the idea of using 
Nova flavor directly. Why we introduced the “platform” field to indicate “v

From: Hongbin Lu mailto:hongbin...@huawei.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 07/16/2015 04:44 AM
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?




+1 for the idea of using Nova flavor directly.

Why we introduced the “platform” field to indicate “vm” or “baremetal” is that 
Magnum needs to map a bay to a Heat template (which will be used to provision 
the bay). Currently, Magnum has three layers of mapping:
• platform: vm or baremetal

[openstack-dev] [fuel] NodeGroups vs network-templates and static routes

2015-07-16 Thread Andrew Woodward
In 6.0 we added nodegroups as part of the multiple-cluster-networks
feature. With these you can add additional sets of networks so that
the nodes can exist on different network segments. When these are used you
will also need to set the gateway for each of your networks. When you do
this, you get routes set up between the matching network names across the
nodegroups.

For example network.yaml that looks like (shortened)

networks:
- cidr: 172.16.0.0/24
  gateway: 172.16.0.1
  group_id: 2
  id: 6
- cidr: 172.16.10.0/24
  gateway: 172.16.10.1
  group_id: 3
  id: 9

Will result in mappings like this in a nodes yaml (in nodegroup 2)

network_scheme:
  endpoints:
br-ex:
  IP:
  - 172.16.0.4/24
  routes:
  - net: 172.16.10.0/24
via: 172.16.0.1
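
In other words (a toy sketch, with a 'name' field added for illustration since
the real yaml keys the networks by name/role), the route generation boils down
to:

    networks = [
        {'name': 'public', 'cidr': '172.16.0.0/24',
         'gateway': '172.16.0.1', 'group_id': 2},
        {'name': 'public', 'cidr': '172.16.10.0/24',
         'gateway': '172.16.10.1', 'group_id': 3},
    ]

    def routes_for(local):
        # every same-named network in another nodegroup is reached
        # via the local network's own gateway
        return [{'net': other['cidr'], 'via': local['gateway']}
                for other in networks
                if other['name'] == local['name']
                and other['group_id'] != local['group_id']]

    print(routes_for(networks[0]))
    # [{'net': '172.16.10.0/24', 'via': '172.16.0.1'}]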


With the introduction of templates we may no longer need nodegroups. They
served two functions: 1) they allowed us to create additional networks; 2)
they created additional routes between networks of the same name. Comparing
with what is in templates, #1 is taken care of, but what about #2? I think
that we need the routes configured anyway. Nodes with the same network role
should have a route for it when it crosses network segments.

This would traditionally have been handled by nodegroups, but it can now
be coded with templates. In this case (such as the yaml above) we must have
routes for the nodes to communicate on the correct interface. Since we need
code for routes between segments of the same network role, it might behoove
us to compute all of the routes (though maybe not use them when they are on
the local interface). This serves two functions: it allows us to visualize
the routing topology instead of just relying on the default route, and when
we get to using a routing protocol it gives us the data necessary to
validate the routing protocol against what we expected.

Regardless of computing all the routes, we need to compute same-role but
multi-segment routes. In this case I see that nodegroups become
redundant. Their only value is that they may be a simpler interface than
templates, but they impose the old network topology, which I could see people
wanting to get away from.
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Meeting July 16

2015-07-16 Thread Andrew Woodward
Meeting minutes are available at

http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-07-16-16.00.html

On Wed, Jul 15, 2015 at 3:36 PM Andrew Woodward  wrote:

> Please note the IRC meeting is scheduled for 16:00 UTC in
> #openstack-meeting-alt
>
> Please review meeting agenda and update if there is something you wish to
> discuss.
>
> https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] librarian-puppet integration, need help with build tasks for fuel-library

2015-07-16 Thread Alex Schultz
Hello everyone,

I have committed the initial configuration required to start leveraging
librarian-puppet as part of the way we pull in upstream puppet modules[0].
Additionally, I have also committed a change that would pull in the
openstack-ironic module[1].  The one piece that is missing from this being
a complete solution is the ability to run librarian-puppet as part of our
build process for the fuel-library.  I've looked into the fuel-main build
scripts and I think it's over my head to figure this out just by looking.
Can anyone explain to me or assist me in how I could go about modifying the
existing build system to be able to run librarian-puppet to prepare the
source for the package?  In my initial investigation, it looks like it
would be a modification of the fuel-main/packages/module.mk[2] file.  I
basically need to run the prepare_library[3] function from the 202763
review[0] after we've pulled all the sources together, to fetch the upstream
modules.
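
To make the ask concrete, here is roughly the shape of what I mean (purely
illustrative; these are not actual fuel-main targets or variables):

    # in fuel-main/packages/module.mk: after the fuel-library sources are
    # collected, run librarian-puppet to fetch the upstream modules
    $(BUILD_DIR)/packages/source_fuel-library.done: \
            $(BUILD_DIR)/repos/fuel-library.done
        cd $(BUILD_DIR)/repos/fuel-library/deployment && \
            bundle exec librarian-puppet install --path=puppet
        $(ACTION.TOUCH)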


Thanks,
-Alex

[0] https://review.openstack.org/202763
[1] https://review.openstack.org/202767
[2]
https://github.com/stackforge/fuel-main/blob/master/packages/module.mk#L63-L82
[3]
https://review.openstack.org/#/c/202763/1/utils/jenkins/fuel_noop_tests.rb
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-16 Thread Mathieu Gagné
Hi,

I stumbled on this review [1] which proposes adding info about provider
networks in network_data.json.

Concerns were raised by Kevin Benton about how this information
shouldn't be exposed to virtual instances, for various reasons you can
read in the review, and I totally agree with those concerns.

Monty Taylor mentioned a valid use case with Ironic where the node needs
the provider networks info to properly configure itself.

While I totally agree this is clearly a valid use case, I do not believe
the proposed implementation is the right answer.

(copying and rewording my comment found in the review)

For one, it breaks the virtual instance use case as I do not understand
how cloud-init or any similar tools will now be able to consume that
information.

If you boot a virtual instance where the traffic is already decapsulated
by the hypervisor, how is cloud-init supposed to know that it doesn't
have to configure vlan network interfaces?
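
To make that concrete, imagine a network_data.json fragment along these lines
(field names here are purely illustrative, not the exact ones from the patch):

    {
      "links": [
        {"id": "interface0",
         "type": "vlan",
         "vlan_id": 1000,
         "ethernet_mac_address": "fa:16:3e:00:11:22"}
      ],
      "networks": [
        {"id": "network0", "link": "interface0", "type": "ipv4_dhcp"}
      ]
    }

A VM guest reading this could reasonably create a tagged sub-interface, even
though the hypervisor already strips the VLAN tag, and end up with no
connectivity at all.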

This important distinction between virtual and metal is not addressed in
the proposed implementation. In fact, it breaks the virtual use case and
I strongly believe it should be reverted now.

I do understand that the baremetal use case is valid, but I do not
understand how indiscriminately exposing this information to virtual
instances will not introduce confusion and errors.

So it looks like there is a missing part in this feature. There should
be a way to "hide" this information if the instance does not need to
configure VLAN interfaces for the network to be functional.

Furthermore, John Garbutt mentioned "Part of me worries a little about
leaking this information, but I know some guests need this info. [...]
But even in those cases its security by obscurity reasons only, and I
want to ignore those.".

To that, I answer that as a public provider operator, I do not wish to
expose such information to my users. It's not up to Nova to decide for
me that exposing provider networks info is "ok" and that those concerns can
be "ignored". Please do not make such decisions lightly and without asking
for a second opinion from operators, who are the ones who will consume your
project.

[1] https://review.openstack.org/#/c/152703/5

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][fwaas] Usecase classification

2015-07-16 Thread Sean M. Collins
Hi all,

Over the past day at the Advanced Services midcycle in Seattle[1], a
group of us gathered to try and categorize the usecases collected in the
etherpad[2] into more specific buckets.

The work product of that effort is located at
https://trello.com/b/TIWf4dBJ/fwaas-usecase-categorization

The motivation for Trello was that cards could be moved around between
lists and it has good features that could capture the verbal discussions we
had, as well as the ability to use tags to group related items and link
related items.

We used the following methodology:

We started by placing all the usecases from the etherpad into the
usecase column, then discussed each usecase - to determine if it was

* Already covered by the Security Group API

* Covered by the Firewall as a Service API, as it exists today

* A gap in the Firewall as a Service API, as it exists today

* A gap in both the Security Group API, and the Firewall as a Service API

* Currently out of scope

* WONTFIX

For the "Currently out of scope" list, the metric we used for
placing usecases in this list was that there were questions or
complexities involved with creating features, which meant that we would
try to defer implementing them, or perhaps gather more data before
making a more permanent decision. In some cases, there were complex
interactions with other APIs or projects that would need to be mapped
out.

WONTFIX was a list that we used for usecases from the etherpad that just
didn't fit with our mission, which was to define a RESTful API that
could express more advanced filtering operations than the Security Group
API. Some of the decisions are based on strong opinions, as well as
trying to limit what we could commit to doing as the FwaaS API - and in
most cases we tried to capture the discussion that led to us placing
this usecase in the WONTFIX list. We were not glib with this list; many
of the cards that we placed on it had strong discussion.

We also employed a number of tags that we added to each usecase, since
there were a couple of common themes to some of the usecases, such as L7
filtering, an implementation detail of a specific driver, a user-oriented
usecase, or an operator-oriented usecase.

One of the important tags we also used was the red color "Need to
revisit" tag for things that we placed in a list, but could easily see a
decision in another direction, or perhaps didn't feel that we had firm
consensus, even within our small group. Viewers will notice that there
were cards in the WONTFIX list that also were tagged with this tag.

The overall objective of this exercise was to try and categorize the
usecases, down into smaller and more manageable pieces. Notably,
identifying usecases where we identified them as demonstrating a gap in
the current Firewall as a Service API, we could use those to guide an
effort for proposing changes to the existing API.

There is currently a spec that I jotted some of my thoughts down on[3],
but I plan on continuing to discuss it at the midcycle, in order to distill
some of the thoughts that have been shared at the midcycle and turn it
into a proposal for future work.

Finally, if you would like to be added to the trello board, I would be
happy to add you, although at this point it may be useful to start creating RFE
bugs with the usecases and continue discussion there.

[1]: https://etherpad.openstack.org/p/LBaaS-FWaaS-VPNaaS_Summer_Midcycle_meetup

[2]: https://etherpad.openstack.org/p/fwaas_use_cases

[3]: https://etherpad.openstack.org/p/fwaas-api-evolution-spec


-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Non-responsive upstream libraries (python34 specifically)

2015-07-16 Thread Thomas Goirand
On 07/16/2015 05:19 PM, Doug Hellmann wrote:
> Excerpts from Davanum Srinivas (dims)'s message of 2015-07-16 11:00:33 -0400:
>> Hi all,
>>
>> I ended up here:
>> https://github.com/linsomniac/python-memcached/issues/54
>> https://github.com/linsomniac/python-memcached/pull/67
>>
>> while chasing a keystone py34 CI problem since memcached is running in
>> our CI VM:
>> https://review.openstack.org/#/c/177661/
>>
and got word from @zigo that this library and several other libraries
have a long lag time for responses (or never respond!)
>>
>> What do we do in these situations?
> 
> If we identify projects like this, I think it makes sense for us to
> start looking into alternatives and porting over to more actively
> maintained libraries.
> 
> Doug

I have sent bugs against cinder, nova, and oslo.vmware because they use
Suds, which is unmaintained upstream, and which we would like to get out
of Debian (at least for the next release: Stretch). And no, suds-jurko
isn't better; it is unmaintained upstream as well.

IMO, we should, for the Liberty release, get rid of:
- suds & suds-jurko
- memcached (in favor of pymemcache)
- mysqldb (this has been discussed at large already)
- cliff-tablib and tablib (not ported to Py3, used only for testing)
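
For the memcached case, the swap is small. A minimal pymemcache sketch,
assuming a memcached instance on localhost:11211:

    from pymemcache.client.base import Client

    client = Client(('127.0.0.1', 11211))
    client.set('some_key', 'some_value')
    print(client.get('some_key'))  # b'some_value'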

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] keystone session upgrade

2015-07-16 Thread michael mccune

hi all,

i've been researching, and coding, how to upgrade sahara to use 
keystone sessions for authentication instead of our current method. i'm 
running into some issues that i believe might make the currently proposed 
approach[1] unfeasible.


one issue i'm running into is the nature of how we change the context to 
the admin user at some points, and in general how we change information 
in the context as we pass it around. this creates some issues with the 
currently proposed spec.


i think we might be better served by taking an approach where the 
context holds an auth plugin object, which would be populated 
from the keystonemiddleware for user requests and could be changed to 
the admin plugin when necessary.


in this manner we would create sessions as necessary for each client, 
and then associate the auth plugin object with the session as we create 
the clients. this would also allow us to drop the session cache from the 
context, and we would still be able to have specific sessions for 
clients that require unique options (for example certs).
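
a rough sketch of the idea (not actual sahara code; the names are made up):

    from keystoneclient import session as keystone_session
    from keystoneclient.auth.identity import v3

    def session_for(context, cacert=None):
        # each client builds its own session around whichever auth
        # plugin the context currently holds
        return keystone_session.Session(auth=context.auth_plugin,
                                        verify=cacert or True)

    def switch_to_admin(context, auth_url, username, password, project_name):
        # changing to the admin user becomes a plugin swap on the context
        context.auth_plugin = v3.Password(auth_url=auth_url,
                                          username=username,
                                          password=password,
                                          project_name=project_name,
                                          user_domain_id='default',
                                          project_domain_id='default')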


i'm curious if anyone has thoughts on this matter?

i will also likely be rewriting the spec to encompass these changes if i 
can get them working locally.


thanks,
mike

[1]: https://review.openstack.org/#/c/197743/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Fox, Kevin M
Good point.

+1 on server_type. it seems reasonable.

As for the need, I'd really rather not have my users have to know how to map 
flavors to server_types themselves. It's something they will get wrong at 
times, and we'll get emails about it/spend time explaining.

The lack of heat conditionals has been unpleasant. I know it's being worked on 
now, but it's not there yet.

In talking with the heat developers, their current recommendation has been to 
put the conditional stuff in provider resources in different environment files 
and make the template generic (ala 
http://hardysteven.blogspot.com/2013/10/heat-providersenvironments-101-ive.html).
You can then switch out one environment for another to switch things somewhat 
conditionally. I'm not sure if this is flexible enough to handle the 
concern you have though.
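
For example (hypothetical resource and file names), the same generic template
can point at different implementations per environment file:

    # vm.env
    resource_registry:
      "Magnum::BayServer": "kubecluster-server-vm.yaml"

    # baremetal.env
    resource_registry:
      "Magnum::BayServer": "kubecluster-server-ironic.yaml"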

But, I think the conditional thing is not the real issue. Whether it supported 
proper conditionals, worked with environments, or worked with 
separate templates, any way you slice it, you need some way to fetch which of 
the choices you want to specify: either it is specified manually by the 
user, or there is a stored mapping in a config file, nova flavor metadata, or a 
flavor mapping stored in the magnum db.

So does the user provide that piece of information, or does the admin attach it 
to the flavor somehow? I'm all for the admin doing it, since I can do it when 
I set up the flavors/magnum and never have to worry about it again. Maybe even 
support a default = 'vm' so that I only have to go in and tag the ironic 
flavors as such. That means I only have to worry about tagging 1 or 2 flavors 
by hand, and the users don't have to do anything. A way better user experience 
for all involved.

Thanks,
Kevin


From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Thursday, July 16, 2015 12:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

To be clear we have two pursuits on this thread:

1) What to rename bay.platform to.
2) How we might eliminate the attribute, or replace it with something more 
intuitive.

We have a consensus now on how to address #1. My direction to Kennan is to 
proceed using server_type as the new attribute name. If anyone disagrees, you 
can let us know now, or submit a subsequent patch to address that concern, and 
we can vote on it in Gerrit.

On this subject of potentially eliminating, or replacing this attribute with 
something else, let’s continue to discuss that.

One key issue is that our current HOT file format does not have any facility 
for conditional logic evaluation, so if the Bay orchestration differs between 
various server_type values, we need to select the appropriate value based on 
the way the bay is created. I’m open to hearing suggestions for implementing 
any needed conditional logic, if we can put it into a better place.

Adrian

On Jul 16, 2015, at 8:54 AM, Fox, Kevin M 
mailto:kevin@pnnl.gov>> wrote:

Wait... so the issue is if you were to just use nova flavor, you don't have 
enough information to choose a set of templates that may be more optimal for 
that flavor type (like vm's or bare metal)? Is this a NaaS vs flatdhcp kind of 
thing? I just took a quick skim of the heat templates and it wasn't really 
clear why the template needs to know.

If that sort of thing is needed, maybe allow a heat environment or the template 
set to be tagged onto nova flavors in Magnum by the admin, and then the user 
can be concerned only with nova flavors? They are used to dealing with them. 
Sahara and Trove do some similar things I think.

Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Wednesday, July 15, 2015 8:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Kai,

Sorry for the confusion. To clarify, I was thinking how to name the field you 
proposed in baymodel [1]. I prefer to drop it and use the existing field 
‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Hi HongBin,

I think flavors introduces more confusion than nova_instance_type or 
instance_type.


As flavors not have binding with 'vm' or 'baremetal',

Let me summary the initial question:
  We have two kinds of templates for kubernetes now,
(as templates in heat not flexible like programming language, if else etc. And 
separate templates are easy to maintain)
The two kinds of kubernets templates,  One for boot VM, another boot Baremetal. 
'VM' or Baremetal here is just used for heat template selection.

Re: [openstack-dev] [Fuel][Fuel-library] Using librarian-puppet to manage upstream fuel-library modules

2015-07-16 Thread Sergii Golovatiuk
Hi,


On Thu, Jul 16, 2015 at 9:01 AM, Aleksandr Didenko 
wrote:

> Hi,
>
> guys, what if we "simplify" things a bit? All we need is:
>
>1. Remove all the community modules from fuel-library.
>2. Create 'Puppetfile' with list of community modules and their
>versions that we currently use.
>3. Make sure all our customizations are proposed to the upstream
>modules (via gerrit or github pull-requests).
>4. Create a separate file with list of patches for each module we need
>to cherry-pick (we need to support gerrit reviews and github 
> pull-requests).
>5. Update 'make iso' scripts:
>   1. Make them use 'r10k' (or other tool) to download upstream
>   modules based on 'Puppetfile'
>
> I am giving +1 to librarian here.

>
>1. Iterate over list of patches for each module and cherry-pick them
>   (just like we do for custom ISO build. I'm not sure if librarian 
> provides
>   such possibility)
>
>
Puppetlabs is in the process of moving all modules to OpenStack. We may use
pull requests here, just specifying the repository. However, I am thinking
about hacking librarian to add a cherry-pick option.
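
For the Puppetfile (step 2 in your list), something like this would do it
(module names and refs here are illustrative):

    forge 'https://forgeapi.puppetlabs.com'

    mod 'puppetlabs/stdlib', '4.5.1'
    mod 'openstack/ironic',
      :git => 'https://github.com/openstack/puppet-ironic.git',
      :ref => 'stable/kilo'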


> Eventually, when all the functionality we rely on is accepted in upstream
> modules, we'll get rid of file with list of patches for modules. But
> meanwhile it should be much easier to manage modules and customization in
> such way.
>
> Regards,
>
> Alex
>
>
>
> On Fri, Jul 10, 2015 at 5:25 PM, Alex Schultz 
> wrote:
>
>> Done. Sorry about that.
>>
>> -Alex
>>
>> On Fri, Jul 10, 2015 at 9:22 AM, Simon Pasquier 
>> wrote:
>>
>>> Alex, could you enable the comments for all on your document?
>>> Thanks!
>>> Simon
>>>
>>> On Thu, Jul 9, 2015 at 11:07 AM, Bogdan Dobrelya >> > wrote:
>>>
 > Hello everyone,
 >
 > I took some time this morning to write out a document[0] that outlines
 > one possible ways for us to manage our upstream modules in a more
 > consistent fashion. I know we've had a few emails bouncing around
 > lately around this topic of our use of upstream modules and how can we
 > improve this. I thought I would throw out my idea of leveraging
 > librarian-puppet to manage the upstream modules within our
 > fuel-library repository. Ideally, all upstream modules should come
 > from upstream sources and be removed from the fuel-library itself.
 > Unfortunately because of the way our repository sits today, this is a
 > very large undertaking and we do not currently have a way to manage
 > the inclusion of the modules in an automated way. I believe this is
 > where librarian-puppet can come in handy and provide a way to manage
 > the modules. Please take a look at my document[0] and let me know if
 > there are any questions.
 >
 > Thanks,
 > -Alex
 >
 > [0]
 https://docs.google.com/document/d/13aK1QOujp2leuHmbGMwNeZIRDr1bFgJi88nxE642xLA/edit?usp=sharing

 The document is great, Alex!
I fully support the idea of starting to adapt fuel-library to
the suggested scheme. The "monitoring" feature of librarian looks
non-intrusive and we have no blockers to start using librarian
immediately.

 --
 Best regards,
 Bogdan Dobrelya,
 Irc #bogdando


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Sergii Golovatiuk
Hi,

Let's put openstack.yaml into the package if it is required for the master node
upgrade. The environment update part should be removed, as it never reached
production state.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Thu, Jul 16, 2015 at 8:07 AM, Matthew Mosesohn 
wrote:

> One item that will impact this separation is that fuel_upgrade
> implicitly depends on the openstack.yaml release file from
> fuel-nailgun. Without it, the upgrade process won't work. We should
> refactor fuel-nailgun to implement this functionality on its own, but
> then have fuel_upgrade call this piece. Right now, we're copying the
> openstack.yaml for the target version of Fuel and embedding it in the
> tarball[1].
> Instead, the version should be taken from the new version of
> fuel-nailgun that is installed inside the nailgun container.
>
> The other file which gets embedded in the upgrade tarball is the
> version.yaml file, but I think that's okay to embed during RPM build.
>
> [1]
> https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/openstack.py#L211-L213
>
> On Thu, Jul 16, 2015 at 3:55 PM, Oleg Gelbukh 
> wrote:
> > Vladimir,
> >
> > Good, thank you for extended answer.
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> >
> > On Thu, Jul 16, 2015 at 3:30 PM, Vladimir Kozhukalov
> >  wrote:
> >>
> >> Oleg,
> >>
> >> Yes, you are right. At the moment all docker containers are packaged
> into
> >> a single rpm package. Yes, it would be great to split them into several
> >> one-by-one rpms, but it is not my current priority. I'll definitely
> think of
> >> this when I'll be moving so called "late" packages (which depend on
> other
> >> packages) into "perestroika". Yet another thing is that eventually all
> those
> >> packages and containers will be "artifacts" and will be treated
> differently
> >> according to their nature. That will be the time when we'll be thinking
> of a
> >> docker registry and other stuff like this.
> >>
> >>
> >>
> >>
> >>
> >>
> >> Vladimir Kozhukalov
> >>
> >> On Thu, Jul 16, 2015 at 2:58 PM, Oleg Gelbukh 
> >> wrote:
> >>>
> >>> Vladimir,
> >>>
> >>> Thank you, now it sounds concieving.
> >>>
> >>> My understanding that at the moment all Docker images used by Fuel are
> >>> packaged in single RPM? Do you plan to split individual images into
> separate
> >>> RPMs?
> >>>
> >>> Did you think about publishing those images to Dockerhub?
> >>>
> >>> --
> >>> Best regards,
> >>> Oleg Gelbukh
> >>>
> >>> On Thu, Jul 16, 2015 at 1:50 PM, Vladimir Kozhukalov
> >>>  wrote:
> 
>  Oleg,
> 
>  All docker containers currently are distributed as rpm packages. A
>  little bit surprising, isn't it? But it works and we can easily
> deliver
>  updates using this old plain rpm based mechanism. The package in
> 6.1GA is
>  called fuel-docker-images-6.1.0-1.x86_64.rpm So, upgrade flow would
> be like
>  this
>  0) add new (say 7.0) repository into /etc/yum.repos.d/some.repo
>  1) install fuel-upgrade package (yum install fuel-upgrade-7.0)
>  2) fuel-upgrade package has all other packages (docker, bootstrap
> image,
>  target images, puppet modules) as its dependencies
>  3) run fuel-upgrade script (say /usr/bin/fuel-upgrade) and it performs
>  all necessary actions like moving files, run new containers, upload
> fixtures
>  into nailgun via REST API.
> 
>  It is necessary to note that we are talking here about Fuel master
> node
>  upgrades, not about Openstack cluster upgrades (which is the feature
> you are
>  working on).
> 
>  Vladimir Kozhukalov
> 
>  On Thu, Jul 16, 2015 at 1:22 PM, Oleg Gelbukh 
>  wrote:
> >
> > Vladimir,
> >
> > I am fully support moving fuel-upgrade-system into repository of its
> > own. However, I'm not 100% sure how docker containers are going to
> appear on
> > the upgraded master node. Do we have public repository of Docker
> images
> > already? Or we are going to build them from scratch during the
> upgrade?
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> >
> > On Thu, Jul 16, 2015 at 11:46 AM, Vladimir Kozhukalov
> >  wrote:
> >>
> >> By the way, the first step for this to happen is to move
> >> stackforge/fuel-web/fuel_upgrade_system into a separate repository.
> >> Fortunately, this directory is not a place where the code is
> >> continuously changing (changes are rather rare), and moving this
> >> project will barely affect the whole development flow. So, the action
> >> flow is as follows:
> >>
> >> 0) patch to openstack-infra for creating new repository (workflow -1)
> >> 1) patch to Fuel CI to create verify jobs
> >> 2) freeze stackforge/fuel-web/fuel_upgrade_system directory
> >> 3) create upstream repository which is to be sucked in by openstack
> >> infra
> >> 4) patch to openstack-infra for creating new repository (

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Adrian Otto
Kevin,

You make a really good point. Reducing required inputs from users in exchange
for a little more setup by cloud operators is a well-justified tradeoff. I'm
pretty sure flavors in Nova can have tag metadata added without a Nova
extension, right? Can someone check to be sure?

If we do have a way to tag flavors, then let's default the value (as you said) 
to use in cases where the flavor is untagged, and make that configurable as a 
Magnum config directive. We could also log a warning each time the default is 
used unless the administrator disables the log notices in our config. That way 
we have a way to direct them to relevant documentation if they start using 
Magnum without tagging any flavors first.
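
For reference, a minimal sketch of tagging a flavor with python-novaclient;
the 'magnum:server_type' key name, flavor name, endpoint, and credentials
are illustrative assumptions, not a settled convention:

    from novaclient import client

    # Illustrative placeholder credentials and endpoint.
    nova = client.Client('2', 'admin', 'password', 'admin',
                         'http://keystone:5000/v2.0')

    # Tag only the Ironic flavors; anything untagged falls back to the
    # configurable Magnum default discussed above.
    flavor = nova.flavors.find(name='baremetal.large')
    flavor.set_keys({'magnum:server_type': 'baremetal'})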

We should also mention flavor tagging in our various setup guides with 
references to detailed instructions.

Let's also make sure that the flavor and image args to bay_create have a
configurable default in Magnum for when they are omitted by the user.

Adrian


 Original message 
From: "Fox, Kevin M" 
Date: 07/16/2015 1:32 PM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Good point.

+1 on server_type. It seems reasonable.

As for the need, I'd really rather not have my users have to know how to map
flavors to server_types themselves. It's something they will get wrong at
times, and we'll get emails about it and spend time explaining.

The lack of Heat conditionals has been unpleasant. I know it's being worked
on now, but it's not there yet.

In talking with the Heat developers, their current recommendation has been to
put the conditional stuff in provider resources in different environment
files and make the template generic (a la
http://hardysteven.blogspot.com/2013/10/heat-providersenvironments-101-ive.html).
You can then switch out one environment for another to switch things somewhat
conditionally. I'm not sure this is flexible enough to handle the concern you
have, though.

But I think the conditional thing is not the real issue. Whether it supports
proper conditionals, works with environments, or works with separate
templates, any way you slice it you need some way to fetch which of the
choices you want: either specified manually by the user, or via some stored
mapping in a config file, Nova flavor metadata, or a flavor mapping stored in
the Magnum DB.

So does the user provide that piece of information, or does the admin attach
it to the flavor somehow? I'm all for the admin doing it, since I can do it
when I set up the flavors/Magnum and never have to worry about it again.
Maybe even support a default of 'vm' so that I only have to go in and tag the
Ironic flavors as such. That means I only have to worry about tagging one or
two flavors by hand, and the users don't have to do anything. A way better
user experience for all involved.
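
A rough sketch of that resolution order on the Magnum side; the tag key,
helper name, and warning text are illustrative assumptions:

    import logging

    LOG = logging.getLogger(__name__)

    # Hypothetical configurable default, 'vm' per the suggestion above.
    DEFAULT_SERVER_TYPE = 'vm'

    def resolve_server_type(flavor):
        """Prefer the admin's flavor tag; otherwise warn and use the default."""
        tag = flavor.get_keys().get('magnum:server_type')
        if tag:
            return tag
        LOG.warning('Flavor %s is untagged; defaulting server_type to %r. '
                    'See the flavor tagging documentation.',
                    flavor.name, DEFAULT_SERVER_TYPE)
        return DEFAULT_SERVER_TYPE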

Thanks,
Kevin


From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Thursday, July 16, 2015 12:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

To be clear, we have two pursuits on this thread:

1) What to rename bay.platform to.
2) How we might eliminate the attribute, or replace it with something more
intuitive.

We have a consensus now on how to address #1. My direction to Kennan is to
proceed using server_type as the new attribute name. If anyone disagrees, you 
can let us know now, or submit a subsequent patch to address that concern, and 
we can vote on it in Gerrit.

On the subject of potentially eliminating or replacing this attribute with
something else, let's continue to discuss.

One key issue is that our current HOT file format does not have any facility
for conditional logic evaluation, so if the Bay orchestration differs between
various server_type values, we need to select the appropriate template based
on the way the bay is created. I'm open to hearing suggestions for
implementing any needed conditional logic, if we can put it in a better place.
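
Lacking HOT conditionals, one option is a simple selection table outside the
templates; a sketch with made-up template paths:

    # Hypothetical template layout; the mapping stands in for the
    # conditional logic that HOT itself cannot express.
    TEMPLATES = {
        'vm': 'templates/kubecluster-vm.yaml',
        'baremetal': 'templates/kubecluster-ironic.yaml',
    }

    def template_for(server_type):
        try:
            return TEMPLATES[server_type]
        except KeyError:
            raise ValueError('unsupported server_type: %s' % server_type)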

Adrian

On Jul 16, 2015, at 8:54 AM, Fox, Kevin M wrote:

Wait... so the issue is that if you were to just use the Nova flavor, you
wouldn't have enough information to choose a set of templates that may be
more optimal for that flavor type (like VMs or bare metal)? Is this a NaaS
vs. flatdhcp kind of thing? I just took a quick skim of the Heat templates,
and it wasn't really clear why the template needs to know.

If that sort of thing is needed, maybe allow a heat environment or the
template set to be tagged onto Nova flavors in Magnum by the admin, and then
the user can be concerned only with Nova flavors? They are used to dealing
with them. Sahara and Trove do some similar things, I think.

Thanks,
Kevin


Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-16 Thread Sean M. Collins
On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
> So it looks like there is a missing part in this feature. There should
> be a way to "hide" this information if the instance does not need to
> configure VLAN interfaces to make the network functional.

I just commented on the review, but the provider network API extension
is admin-only, most likely for the reasons that I think someone has
already mentioned: it exposes details of the physical network layout
that should not be exposed to tenants.

-- 
Sean M. Collins



Re: [openstack-dev] [neutron] What does flavor mean for a network?

2015-07-16 Thread Itsuro ODA
Neil,

flavor:network is for the Metaplugin. It is unrelated to the flavor framework.

FYI, the Metaplugin will be removed in Liberty.
https://review.openstack.org/#/c/192056/

Thanks.
Itsuro Oda (oda-g)

On Thu, 16 Jul 2015 10:44:01 +0100
Neil Jerram  wrote:

> Thanks everyone for your responses...
> 
> On 15/07/15 21:01, Doug Wiegley wrote:
> > That begins to look like nova's metadata tags and scheduler, which is
> > a valid use case. The underpinnings of flavors could do this, but it's
> > not in the initial implementation.
> >
> > doug
> >
> >> On Jul 15, 2015, at 12:38 PM, Kevin Benton wrote:
> >>
> >> Wouldn't it be valid to assign flavors to groups of provider networks?
> >> e.g. a tenant wants to attach to a network that is wired up to a 40g
> >> router, so he/she chooses a network of the "fat pipe" flavor.
> 
> Indeed.
> 
> Otherwise, why does 'flavor:network' exist at all in the current codebase?
> 
> As the code currently stands, 'flavor:network' appears to be consumed only by 
> agent/linux/interface.py, with the logic that if the interface_driver setting 
> is set to MetaInterfaceDriver, the interface driver class that is actually 
> used for a particular network will be derived by using the network's 
> 'flavor:network' value as a lookup key in the dict specified by the 
> meta_flavor_driver_mappings setting.
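
In outline, that mechanism amounts to the following simplified sketch
(illustrative only, not the actual Neutron code):

    # Dispatch on the network's 'flavor:network' value, as described above;
    # the real driver lives in neutron/agent/linux/interface.py.
    class MetaInterfaceSketch(object):
        def __init__(self, flavor_driver_mappings, default_driver):
            # e.g. {'openvswitch': ovs_driver, 'linuxbridge': bridge_driver}
            self.mappings = flavor_driver_mappings
            self.default_driver = default_driver

        def driver_for(self, network):
            flavor = network.get('flavor:network')
            return self.mappings.get(flavor, self.default_driver)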
> 
> Is that an intended part of the flavors design?
> 
> I hope it doesn't sound like I'm just complaining!  My reason for asking
> these questions is that I'm working at
> https://review.openstack.org/#/c/198439/ on a type of network that works
> through routing on each compute host instead of bridging, and two
> consequences of that are:
> 
> (1) there will not be L2 broadcast connectivity between the instances 
> attached to such a network, whereas there would be with all existing Neutron 
> network types
> 
> (2) the DHCP agent needs some changes to provide DHCP service on unbridged 
> TAP interfaces.
> 
> Probably best here not to worry too much about the details.  But, at a high 
> level:
> 
> - there is an aspect of the network's behavior that needs to be portrayed in 
> the UI, so that tenants/projects can know when it is appropriate to attach 
> instances to that network
> 
> - there is an aspect of the network's implementation that the DHCP agent 
> needs to be aware of, so that it can adjust accordingly.
> 
> I believe the flavor:network 'works', for these purposes, in the senses that 
> it is portrayed in the UI, and that it is available to software components 
> such as the DHCP agent.  So I was wondering whether 'flavor:network' would be 
> the correct location in principle for a value identifying this kind of 
> network, according to the intention of the flavors enhancement.
> 
> 
> >>
> >> On Wed, Jul 15, 2015 at 10:40 AM, Madhusudhan Kandadai wrote:
> >>
> >>
> >>
> >> On Wed, Jul 15, 2015 at 9:25 AM, Kyle Mestery wrote:
> >>
> >> On Wed, Jul 15, 2015 at 10:54 AM, Neil Jerram wrote:
> >>
> >> I've been reading available docs about the forthcoming
> >> Neutron flavors framework, and am not yet sure I
> >> understand what it means for a network.
> >>
> >>
> >> In reality, this is envisioned more for service plugins (e.g.
> >> flavors of LBaaS, VPNaaS, and FWaaS) than core neutron resources.
> >>
> >> Yes, rightly put. This is for service plugins, and it's part of
> >> extensions rather than core network resources.
> >>
> >>
> >> Is it a way for an admin to provide a particular kind of
> >> network, and then for a tenant to know what they're
> >> attaching their VMs to?
> >>
> >>
> >> I'll defer to Madhu who is implementing this, but I don't
> >> believe that's the intention at all.
> >>
> >> Currently, an admin will be able to assign particular flavors;
> >> unfortunately, these are not going to be tenant-specific flavors.
> >>
> 
> To be clear - I wasn't suggesting or asking for tenant-specific flavors.  I 
> only meant that a tenant might choose which network to attach a particular 
> set of VMs to, depending on the flavors of the available networks.  (E.g. as 
> in Kevin's example above.)
> 
> >> As you might have seen in the review, we are just using tenant_id
> >> to bypass the keystone checks implemented in base.py, and it is
> >> not stored in the DB either. It is something to do in the future,
> >> and the same is documented in the blueprint.
> >>
> >>
> >> How does it differ from provider:network-type?  (I guess,
> >> because the latter is supposed to be for implementation
> >> consumption only - but is that correct?)
> >>
> >>
> >> Flavors are created and curated by operators, and

Re: [openstack-dev] [qa] identity v3 issue causing non-admin job to fail

2015-07-16 Thread Andrea Frittoli
Hi David,

admin_domain_name is used at the moment to fill in the domain when missing
in a few cases. The get_credentials method is one of them.
Another two are setting the default domain for the users in tempest.conf
[0], and setting the domain for credentials loaded from a YAML file [1].

[0]
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/config.py#n1246
[1]
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/common/accounts.py#n219

There is also a tenant_isolation_domain_name, which is used when
provisioning v3 isolated credentials.
Because tenant_isolation and pre-provisioned credentials are mutually
exclusive, and to avoid having too many config options, I would suggest
renaming tenant_isolation_domain_name to default_credentials_domain_name (or
something similar), and using it in [0], [1], and in the code you quoted.

The admin_domain_name would then become fully optional; it should be assumed
equal to default_credentials_domain_name unless configured otherwise.
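
A minimal sketch of that suggestion with oslo.config; the option is
hypothetical and does not exist in tempest today:

    from oslo_config import cfg

    CONF = cfg.CONF

    CONF.register_opt(
        cfg.StrOpt('default_credentials_domain_name',
                   default='Default',
                   help='Domain assumed for credentials that do not '
                        'specify one.'),
        group='auth')

    def fill_user_domain(params):
        # Fall back to the shared default rather than admin_domain_name,
        # so the non-admin case no longer depends on admin settings.
        params.setdefault('user_domain_name',
                          CONF.auth.default_credentials_domain_name)
        return params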

andrea



On Tue, Jul 14, 2015 at 8:49 PM David Kranz  wrote:

> Now that the tempest periodic jobs are back (thanks infra!), I was
> looking into the real failures. It seems the main one is caused by the
> fact that the v3 check for primary creds fails if 'admin_domain_name' in
> the identity section is None, which it is when devstack configures
> tempest for non-admin.
>
> The problem is with this code and there is even a comment related to
> this issue. There are various ways to fix this but I'm not sure what the
> value should be for the non-admin case. Andrea, any ideas?
>
>   -David
>
> def get_credentials(fill_in=True, identity_version=None, **kwargs):
>     params = dict(DEFAULT_PARAMS, **kwargs)
>     identity_version = identity_version or CONF.identity.auth_version
>     # In case of "v3" add the domain from config if not specified
>     if identity_version == 'v3':
>         domain_fields = set(x for x in
>                             auth.KeystoneV3Credentials.ATTRIBUTES
>                             if 'domain' in x)
>         if not domain_fields.intersection(kwargs.keys()):
>             # TODO(andreaf) It might be better here to use a dedicated
>             # config option such as
>             # CONF.auth.tenant_isolation_domain_name
>             params['user_domain_name'] = CONF.identity.admin_domain_name
>         auth_url = CONF.identity.uri_v3
>     else:
>         auth_url = CONF.identity.uri
>     return auth.get_credentials(auth_url,
>                                 fill_in=fill_in,
>                                 identity_version=identity_version,
>                                 **params)
>
>


Re: [openstack-dev] [QA][Tempest] Proposing Jordan Pittier for Tempest Core

2015-07-16 Thread Andrea Frittoli
I'm late in my reply, but I can still give my +1 and say welcome to the
team Jordan!

On Mon, Jun 29, 2015 at 4:26 PM Jordan Pittier 
wrote:

> Thanks a lot !
> I just want to say that I am happy about this and I look forward to
> continue working on Tempest with you all.
>
> Cheers,
> Jordan
>
> On Mon, Jun 29, 2015 at 3:59 PM, Matthew Treinish 
> wrote:
>
>> On Mon, Jun 22, 2015 at 04:23:30PM -0400, Matthew Treinish wrote:
>> >
>> >
>> > Hi Everyone,
>> >
>> > I'd like to propose we add Jordan Pittier (jordanP) to the tempest
>> > core team. Jordan has been a steady contributor and reviewer on
>> > tempest over the past few cycles, and he's been actively engaged in
>> > the Tempest community. Jordan has had one of the higher review counts
>> > on Tempest for the past cycle, and he has consistently been providing
>> > reviews that show insight into both the project internals and its
>> > future direction. I feel that Jordan will make an excellent addition
>> > to the core team.
>> >
>> > As per the usual, if the current Tempest core team members would
>> > please vote +1 or -1 (veto) to the nomination when you get a chance.
>> > We'll keep the polls open for 5 days or until everyone has voted.
>> >
>>
>> So, after more than 5 days, it's been all positive feedback. Welcome to
>> the team, Jordan.
>>
>> -Matt Treinish
>>


Re: [openstack-dev] [qa] Status of account creds in the [identity] section of tempest.conf

2015-07-16 Thread Andrea Frittoli
On Fri, Jun 19, 2015 at 10:30 AM Matthew Treinish 
wrote:

> On Thu, Jun 18, 2015 at 02:13:56PM -0400, David Kranz wrote:
> > We had a discussion about this at the qa meeting today around the
> > following proposal:
> >
> > tl;dr The test accounts feature provides the same functionality as the
> > embedded credentials. We should deprecate the account information
> > embedded directly in tempest.conf in favor of test-accounts, and remove
> > those options at the beginning of the M cycle. We would also rework the
> > non-isolated jobs to use parallel test accounts, with and without admin
> > creds.

+1


> > Starting now, new features such as cleanup and tempest config will not
> > be required to work well (or at all) if the embedded creds are used
> > instead of test accounts.
>
> So this was always the long-term plan when we started work on the
> test-accounts mechanism about a year ago. We were holding off on
> deprecating the original config-option-based approach until we finished
> the role and network support for test accounts and had jobs set up using
> the mechanism. Now that we've finished both role and network support,
> all that's left is setting up the jobs. I don't think there would be any
> opposition to marking the user and alt user options as deprecated after
> that. We could also leave inline comments (and maybe emit a warning)
> marking the non-locking provider mechanism as going away, probably in M.
> That way we start clearly signaling to users that these options will be
> going away.
>
+1. Because the options are moving out of the conf entirely, we cannot use
the deprecation mechanism from oslo.config; emitting a warning is a good idea.
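
A minimal sketch of such a warning; the option names are hypothetical
stand-ins for the embedded-credential options:

    import logging

    LOG = logging.getLogger(__name__)

    DEPRECATED_OPTS = ('username', 'password', 'alt_username', 'alt_password')

    def warn_on_embedded_creds(conf):
        # Warn once at startup if any embedded-credential option is set.
        if any(getattr(conf.identity, opt, None) for opt in DEPRECATED_OPTS):
            LOG.warning('Credentials embedded in tempest.conf are deprecated '
                        'in favor of a test accounts file and are planned '
                        'for removal in the M cycle.')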


> >
> > We have (at least) three use cases that are important, and we want
> > tempest to work well with all of them, but that means something
> > different in each case:
> >
> > 1. throw-away clouds (ci, gate)
> > 2. test clouds
> > 3. production clouds
>
> Well, tempest is designed to, and tries to, support running against any
> OpenStack cloud. I'm not sure if there are more deployment types than
> these three categories, but if there are, we should support those too.
>
> >
> > For (1), the most important thing is that failing tests not cause false
> > negatives in other tests due to re-using a tenant. This makes tenant
> > isolation continue to be a good choice here, and requiring admin is not
> > an issue. In a perfect world where tempest never left behind any
> > resources regardless of an error at any line of code, test accounts
> > could be used. But we are probably a long way from that.
>
> So the cleanup issue here is actually a wider OpenStack issue. Tempest
> will *always* call delete on created projects and users. This was a
> requirement for making test accounts work (the mechanism for calling
> delete or freeing a credential set from the list is shared). With tenant
> isolation this means we'll be deleting a project and users while
> resources scoped to either may not have been deleted first (if there is
> a tempest or OpenStack bug). This is a wider issue for all OpenStack
> projects; there was a thread a few months ago discussing it.
>
> >
> > For (3), we cannot use admin creds for tempest runs, and test accounts
> > with cleanup allow parallel execution, accepting the risk of a leak
> > causing a false negative. The only way to avoid that risk is to stamp
> > out all leak bugs in tempest.
>
> Well, depending on the leaked resource in question, test accounts would
> likely catch the issue if a later test lists that resource. But I agree,
> resource leaks have always been treated as bugs, and we'll continue to
> treat them that way.
>
> >
> > For (2), either isolation or test accounts with cleanup can be used.
> >
> > The tempest.conf values are not used in any of these scenarios. Is
> > there a reason they are needed for anything?
> >
>
> So the only thing that uses config options for credentials is actually
> tenant isolation, which uses them to provide admin credentials for the
> dynamic creation of accounts. The real advantage of tenant isolation,
> besides not reusing anything, is actually its configuration simplicity.
> An accounts file can be tricky to use; there are a lot of little gotchas
> and assumptions in how you write the file (which we try to document in
> both the config guide and the sample accounts.yaml file). It also
> requires a large number of users to be provided, depending on the
> concurrency you're running with, while tenant isolation requires just
> setting 3-5 config options and you're fine after that.
>
> I don't think requiring the use of an accounts file for tenant isolation
> makes much sense; it's really heavyweight for one set of admin creds.
> Which probably means we should keep the admin user config option around,
> although it might make more sense to move those options to the auth
> section (and maybe rename them to make it clear that they're for tenant
> isolation only).
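
For illustration, generating such an accounts file for a given concurrency
is straightforward; a sketch using the sample accounts.yaml field names
(all values are made up):

    import yaml

    def make_accounts(count, prefix='tempest'):
        # One credential set per test worker.
        return [{'username': '%s-user-%d' % (prefix, i),
                 'tenant_name': '%s-tenant-%d' % (prefix, i),
                 'password': 'secretpass'} for i in range(count)]

    with open('accounts.yaml', 'w') as f:
        yaml.safe_dump(make_accounts(4), f, default_flow_style=False)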

Re: [openstack-dev] [glance] Progress of the Python 3 port

2015-07-16 Thread Louis Taylor
On Wed, Jul 15, 2015 at 10:39:11AM +0200, Victor Stinner wrote:
> Hi,
> Any update on this release?

This has now been released in glance_store 0.7.0.

Cheers,
Louis




  1   2   >