Re: [openstack-dev] [fuel] [FFE] FF Exception request for Deploy nova-compute (VCDriver) feature

2015-07-27 Thread Vladimir Kuklin
There is a slight change needed, i.e. fixing the noop tests. Then we can
merge it and accept it for FFE, I think.

On Fri, Jul 24, 2015 at 1:34 PM, Andrian Noga an...@mirantis.com wrote:

 Colleagues,
 actually, I totally agree with Mike. We can merge
 https://review.openstack.org/#/c/196114/ without the additional Ceilometer
 support (it will be moved to the next release). So if we merge it today we don't
 need an FFE for this feature.


 Regards,
 Andrian

 On Fri, Jul 24, 2015 at 1:18 AM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 Since we are in FF state already, I'd like to have an urgent estimate from
 one of the fuel-library cores:
 - holser
 - alex_didenko
 - aglarendil
 - bogdando

 aglarendil is on vacation though. Guys, please take a look at
 https://review.openstack.org/#/c/196114/ - can we accept it as
 exception? Seems to be good to go...

 I still think that additional Ceilometer support should be moved to the
 next release.

 Thanks,

 On Thu, Jul 23, 2015 at 1:56 PM Mike Scherbakov mscherba...@mirantis.com
 wrote:

 Hi Andrian,
 this is a High priority blueprint [1] for the 7.0 timeframe. It seems we still
 haven't merged the main part [2], and need an FF exception for the additional work.

 The question is about quality. If we focus on enhancements, then we
 don't focus on bugs. Which either means delivering work with lower quality
 or slipping the release.

 My opinion is rather not to give an FF exception in this case, and not to
 have Ceilometer support for this new feature.

 [1] https://blueprints.launchpad.net/fuel/+spec/compute-vmware-role
 [2] https://review.openstack.org/#/c/196114/

 On Thu, Jul 23, 2015 at 1:39 PM Andrian Noga an...@mirantis.com wrote:

 Hi,

 The patch for fuel-library
 https://review.openstack.org/#/c/196114/ that implements the
 'compute-vmware' role (https://mirantis.jira.com/browse/PROD-627) requires
 additional work (Ceilometer support), but as far as I can see it
 doesn't affect any other parts of the product.

 We plan to implement it in 3 working days (2 for implementation, 1 day
 for writing the system test and test runs); it should not be hard since we
 already support Ceilometer compute agent deployment on controller
 nodes.

 We need 1 DevOps engineer and 1 QA engineer to be engaged for this work.

 So I think it's OK to accept this feature as an exception for the feature
 freeze.

 Regards,
 Andrian Noga
 Project manager
 Partner Centric Engineering
 Mirantis, Inc

 Mob.phone: +38 (063) 966-21-24

 Email: an...@mirantis.com
 Skype: bigfoot_ua

 --
 Mike Scherbakov
 #mihgen

 --
 Mike Scherbakov
 #mihgen




 --
 --
 Regards,
 Andrian
 Mirantis, Inc

 Mob.phone: +38 (063) 966-21-24
 Email: an...@mirantis.com
 Skype: bigfoot_ua





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] Python 3: 5 more projects with a py34 voting gate, only 4 remaining

2015-07-27 Thread Victor Stinner

Hi,

On 27/07/2015 12:35, Roman Vasilets wrote:

Hi, just want to share with you: the Rally project also has voting py34
jobs. Thank you.


Cool! I don't know if the Rally port to Python 3 is complete or not, so I 
wrote "work in progress". Please update the wiki page if the port is done:

https://wiki.openstack.org/wiki/Python3#OpenStack_applications

Victor



Re: [openstack-dev] [Fuel] version.yaml in the context of packages

2015-07-27 Thread Matthew Mosesohn
 2) production - It is always equal to docker, which means we deploy docker 
 containers on the master node. Formally it comes from one of the fuel-main 
 variables, which is set to docker by default, but not a single job in CI 
 customizes this variable. It looks like it makes no sense to have this.
This gets set to docker-build during fuel ISO creation because several
tasks cannot be done in the containers during the docker build phase. We
can replace this by moving it to astute.yaml easily enough.
 4) openstack_version - It is just an extraction from openstack.yaml [2].
Without installing nailgun, it's impossible to know what the repo
directories should be. Burying it in some other package makes
puppet tasks laborious. Keeping it in a YAML file keeps it accessible.

The rest won't impact Fuel Master deployment significantly.

On Fri, Jul 24, 2015 at 8:21 PM, Vladimir Kozhukalov
vkozhuka...@mirantis.com wrote:
 Dear colleagues,

 Although we are focused on fixing bugs during the next few weeks, I still have to
 ask everyone's opinion about /etc/fuel/version.yaml. We introduced this file
 when the all-inclusive ISO image was the only way of delivering Fuel. We had to
 record somewhere the SHA commits for all Fuel related git
 repos. But everything is changing and we are close to a flexible package based
 delivery approach. And this file is becoming kind of a fifth wheel.

 Here is what version.yaml looks like:

 VERSION:
   feature_groups:
 - mirantis
   production: docker
   release: 7.0
   openstack_version: 2015.1.0-7.0
   api: 1.0
   build_number: 82
   build_id: 2015-07-23_10-59-34
   nailgun_sha: d1087923e45b0e6d946ce48cb05a71733e1ac113
   python-fuelclient_sha: 471948c26a8c45c091c5593e54e6727405136eca
   fuel-agent_sha: bc25d3b728e823e6154bac0442f6b88747ac48e1
   astute_sha: b1f37a988e097175cbbd14338286017b46b584c3
   fuel-library_sha: 58d94955479aee4b09c2b658d90f57083e668ce4
   fuel-ostf_sha: 94a483c8aba639be3b96616c1396ef290dcc00cd
   fuelmain_sha: 68871248453b432ecca0cca5a43ef0aad6079c39


 Let's go through this file.

 1) feature_groups - This is, in fact, a runtime parameter rather than a build
 one, so we'd better store it in astute.yaml or another runtime config file.
 2) production - It is always equal to docker which means we deploy docker
 containers on the master node. Formally it comes from one of fuel-main
 variables, which is set into docker by default, but not a single job in CI
 customizes this variable. Looks like it makes no sense to have this.
 3) release - It is the number of the Fuel release. Frankly, we don't need this
 because it is nothing more than the version of the fuel meta package [1].
 4) openstack_version - It is just an extraction from openstack.yaml [2].
 5) api - It is 1.0 currently. And we still don't have other versions of the API.
 Frankly, it contradicts the common practice of making several different
 versions available at the same time. And a user should be able to ask the API
 which versions are currently available.
 6) build_number and build_id - These are the only parameters that relate to
 the build process. But let's think about whether we actually need these parameters
 if we switch to a package based approach. RPM/DEB repositories are going to
 become the main way of delivering Fuel, not the ISO. So, it also makes little
 sense to store them, especially if we upgrade some of the packages.
 7) X_sha - This does not even require any explanation. It should be rpm -qa
 instead.
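
 For illustration, a hedged sketch of what that runtime query could look
 like (package names are illustrative, not the actual Fuel package set):

     # Query the installed version of each Fuel component from RPM
     # instead of baking X_sha values into version.yaml at build time.
     rpm -q --queryformat '%{NAME} %{VERSION}-%{RELEASE}\n' \
         fuel-nailgun fuel-library fuel-agent fuel-ostf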

 I am raising this topic because it is kind of a blocker for switching to
 package based upgrades. Our current upgrade script assumes we have this file
 version.yaml in the tarball, and we put this new file in place of the old one
 during upgrade. But this file cannot be packaged into an RPM because it can
 only be built together with the ISO.


 [1] https://github.com/stackforge/fuel-main/blob/master/specs/fuel-main.spec
 [2]
 https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml

 Vladimir Kozhukalov



Re: [openstack-dev] [nova] Create a network filter

2015-07-27 Thread Silvia Fichera
Hi Sahid.
Thank you for your answer.
What I want to do is to check if the link that connects the compute node
with the switch is up, and then collect information about the available BW
using OpenFlow's API. This is because I want information related to the
real physical network. That's why I want to use ODL.
So is there a way, using monitors, to check if a physical link is up?

Silvia

2015-07-27 12:34 GMT+02:00 Sahid Orentino Ferdjaoui 
sahid.ferdja...@redhat.com:

 On Thu, Jul 23, 2015 at 04:44:01PM +0200, Silvia Fichera wrote:
  Hi all,
 
  I'm using OpenStack together with OpenDaylight to add a network awareness
  feature to the scheduler.
  I have 3 compute nodes (one of these is also the OpenStack controller)
  connected by an Open vSwitch controlled by OpenDaylight.
  What I would like to do is to write a filter to check if a link is up and
  then assign weights according to the available bandwidth (I think I will collect
  this data via ODL and update an entry in a DB).

 So you would like to check if a link is up on the compute nodes and order
 compute nodes by BW, right? I do not think you can use OpenDaylight
 for something like that; that would be too specific.

 One solution could be to create a new monitor; monitors run on compute
 nodes and are used to collect any kind of data.

   nova/compute/monitors

 Then you may want to create a new weigher to order the eligible hosts by the
 data you have collected from the monitor.

   nova/scheduler/weights
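
 As a rough illustration, a minimal sketch of both pieces (hedged: the
 BaseHostWeigher hook follows the kilo-era Nova convention, and
 get_available_bw is a hypothetical helper standing in for whatever
 ODL/DB lookup you build):

     # Link-state check that could feed a compute-node monitor; assumes a
     # Linux host where sysfs exposes the interface operstate.
     def link_is_up(iface='eth1'):
         with open('/sys/class/net/%s/operstate' % iface) as f:
             return f.read().strip() == 'up'

     # Custom weigher sketch for nova/scheduler/weights.
     from nova.scheduler import weights

     def get_available_bw(host_name):
         # Hypothetical helper: return bandwidth collected from ODL
         # (e.g. cached in a DB keyed by host name).
         return 0.0

     class BWWeigher(weights.BaseHostWeigher):
         minval = 0

         def _weigh_object(self, host_state, weight_properties):
             # Higher available bandwidth -> higher weight -> preferred host.
             return get_available_bw(host_state.host)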

  For each host I have a management interface (eth0) and an interface
  connected to the OVS switch to build the physical network (eth1).
  Have you got any suggestions for checking the link status?
  I thought I could take inspiration from the second script at this link
 http://stackoverflow.com/questions/17434079/python-check-to-see-if-host-is-connected-to-network
  to verify if the iface is up and then check the connectivity, but it has to
  be run on the compute node and I don't know which IP address I could
  point at.
 
 
  Thank you
 
  --
  Silvia Fichera

 




-- 
Silvia Fichera


Re: [openstack-dev] [fuel] [FFE] FF Exception request for Deploy nova-compute (VCDriver) feature

2015-07-27 Thread Sergii Golovatiuk
Hi,

I have checked the code. After fixing the tests, this patch may be included in the
FFE as it has minimal impact on core functionality. +1 for FFE for
https://review.openstack.org/#/c/196114/

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Jul 27, 2015 at 1:38 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 There is a slight change needed, i.e. fixing the noop tests. Then we can
 merge it and accept it for FFE, I think.

 [snip: rest of quoted thread, reproduced in full above]


Re: [Openstack] Accelerating the Enterprise Adoption of OpenStack

2015-07-27 Thread Erlon Cruz
That's great news!! I'm really glad to hear it!
Congratulations, Rackspace and Intel, on this effort!

On Thu, Jul 23, 2015 at 2:01 PM, Egle Sigler egle.sig...@rackspace.com
wrote:

  Hello OpenStack Community,



 I am very excited to let you know that today Rackspace and Intel announced
 our plans to form the “OpenStack Innovation Center,” which is an exciting
 community-oriented initiative focused on accelerating the enterprise
 features and adoption of OpenStack.  This initiative includes:



 ·  *Largest OpenStack Developer Cloud* – We are building and making
 available to the community two 1,000 node clusters to support advanced,
 large-scale testing of OpenStack.  The clusters should be available to
 the community within six months and you can sign up here
 http://goo.gl/forms/vCkfNBmXm4 to receive updates on this effort.

 ·  *OpenStack Developer Training* – We are creating a new training
 curriculum designed to onboard and significantly increase the number of
 developers working upstream in the community.

 ·  *Joint OpenStack Engineering* – Rackspace and Intel developers
 will work together in collaboration with the Enterprise Work Group and
 community to eliminate bugs and develop new enterprise features.  Both
 companies will recruit new developers to help further OpenStack development.

 ·  *OpenStack Innovation Center* – The center will be composed of
 Rackspace and Intel developers who will work upstream, using existing
 community tools and processes to improve the scalability, manageability and
 usability of OpenStack.



 To find out more, please check out the following resources:



 Rackspace press release
 http://www.rackspace.com/blog/newsarticles/rackspace-collaborates-with-intel-to-accelerate-openstack-enterprise-feature-development-and-adoption/

 Rackspace blog
 http://www.rackspace.com/blog/rackspace-and-intel-form-the-openstack-innovation-center

 Intel release
 http://newsroom.intel.com/community/intel_newsroom/blog/2015/07/23/intel-announces-cloud-for-all-initiative-to-deliver-benefits-of-the-cloud-to-more-businesses

 Intel blog
 https://communities.intel.com/community/itpeernetwork/datastack/blog/2015/07/23/cloud-for-all



 We look forward to working with you to continue advancing the leading open
 source cloud platform and welcome your feedback!



 Best regards,



 Egle Sigler

 Rackspace Principal Architect

 OpenStack Foundation Board Member



Re: [openstack-dev] Announcing HyperStack project

2015-07-27 Thread Peng Zhao
Adrian and all,


I believe that Magnum and HyperStack are targeting different problems, 
though it certainly makes sense to integrate HyperStack as a bay type in 
Magnum, which we would love to explore later. I've set up a separate project for 
HyperStack: https://launchpad.net/hyperstack. My apologies for the confusion.


I understand the concern about duplicating Nova and others. But imagine a vision 
where applications can seamlessly migrate or scale out/in between an LXC-based 
private CaaS and a hypervisor-based public CaaS, without the need to pre-build a 
bay.


This ultimate portability and simplicity simply outweighs the rest!


HyperStack advocates a true multi-tenant, secure, public CaaS; it is 
really the first one built within the OpenStack framework. I think 
HyperStack provides a seamless and probably the best path for upgrading to the 
container era.


For the team meeting, it is sometimes very late for me (2am Beijing). I'll try 
to join more often and look forward to speaking with you and others in person.


Sorry again for the misunderstanding,
Peng


 
 
------ Original ------
From: Adrian Otto adrian.o...@rackspace.com
Date: Mon, Jul 27, 2015 12:43 PM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Announcing HyperStack project

 
 Peng,

 For the record, the Magnum team is not yet comfortable with this proposal. 
This arrangement is not the way we think containers should be integrated with 
OpenStack. It completely bypasses Nova, and offers no Bay abstraction, so there 
is no user-selectable choice of a COE (Container Orchestration Engine). We 
advised that it would be smarter to build a nova virt driver for Hyper, and 
integrate that with Magnum so that it could work with all the different bay 
types. It also produces a situation where operators cannot effectively bill 
for the services that are in use by the consumers; there is no sensible 
infrastructure-layer capacity management (scheduler), no encryption management 
solution for the communication between k8s minions/nodes and the k8s master, 
and a number of other weaknesses. I'm not convinced the single-tenant approach 
here makes sense.
 
 To be fair, the concept is interesting, and we are discussing how it could be 
integrated with Magnum. It's appropriate for experimentation, but I would not 
characterize it as a "solution for cloud providers" for the above reasons, and 
the callouts I mentioned here:

 http://lists.openstack.org/pipermail/openstack-dev/2015-July/069940.html
 
 
 Positioning it that way is simply premature. I strongly suggest that you 
attend the Magnum team meetings and work through these concerns; we had 
Hyper on the agenda last Tuesday, but you did not show up to discuss it. The ML 
thread was confused by duplicate responses, which makes it rather hard to 
follow.
 
 
 I think it’s a really bad idea to basically re-implement Nova in Hyper. Your’e 
already re-implementing Docker in Hyper. With a scope that’s too wide, you 
won’t be able to keep up with the rapid changes in these projects, and anyone 
using them  will be unable to use new features that they would expect from 
Docker and Nova while you are busy copying all of that functionality each time 
new features are added. I think there’s a better approach available that does 
not require you to duplicate such a  wide range of functionality. I suggest we 
work together on this, and select an approach that sets you up for success, and 
gives OpenStack could operators what they need to build services on Hyper.
  
 
 Regards,
 
 
 Adrian
 
   On Jul 26, 2015, at 7:40 PM, Peng Zhao p...@hyper.sh wrote:
 
Hi all,

 I am glad to introduce the HyperStack project to you.

 HyperStack is a native, multi-tenant CaaS solution built on top of OpenStack. 
In terms of architecture, HyperStack = Bare-metal + Hyper + Kubernetes + Cinder 
+ Neutron.

 HyperStack is different from Magnum in that HyperStack doesn't employ the Bay 
concept. Instead, HyperStack pools all bare-metal servers into one single 
cluster. Due to the hypervisor nature of Hyper, different tenants' applications 
are completely isolated (no shared kernel), and thus co-exist without security 
concerns in the same cluster.

 Given this, HyperStack is a solution for public cloud providers who want to 
offer a secure, multi-tenant CaaS.

 Ref: 
https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/1258x535/1c85a755dcb5e4a4147d37e6aa22fd40/upload_7_23_2015_at_11_00_41_AM.png

 The next step is to present a working beta of HyperStack at the Tokyo summit, 
for which we submitted a presentation: 
https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/Presentation/4030.
Please vote if you are interested.

 In the future, we want to integrate HyperStack with Magnum and Nova to make 
sure one OpenStack deployment can offer both IaaS and native CaaS services.

 Best,
 Peng
  -- 

Re: [openstack-dev] [requirements] propose adding Robert Collins to requirements-core

2015-07-27 Thread Sean Dague
+1

On 07/24/2015 12:31 PM, Davanum Srinivas wrote:
 +1 from me. Thanks for the hard work @lifeless
 
 -- dims
 
 On Fri, Jul 24, 2015 at 12:21 PM, Doug Hellmann d...@doughellmann.com wrote:
 Requirements reviewers,

 I propose that we add Robert Collins (lifeless) to the requirements-core
 review team.

 Robert has been doing excellent work this cycle with updating pip and
 our requirements repository to support constraints. As a result he has a
 full understanding of the sorts of checks we should be doing for new
 requirements, and I think he would make a good addition to the team.

 Please indicate +1 or -1 with concerns on this thread, as usual.

 Doug

 
 
 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Fuel] version.yaml in the context of packages

2015-07-27 Thread Vitaly Kramskikh
Vladimir,

2015-07-24 20:21 GMT+03:00 Vladimir Kozhukalov vkozhuka...@mirantis.com:

 Dear colleagues,

  Although we are focused on fixing bugs during the next few weeks, I still have
  to ask everyone's opinion about /etc/fuel/version.yaml. We introduced this
  file when the all-inclusive ISO image was the only way of delivering Fuel. We
  had to record somewhere the SHA commits for all Fuel
  related git repos. But everything is changing and we are close to a flexible
  package based delivery approach. And this file is becoming kind of a fifth
  wheel.

  Here is what version.yaml looks like:

 VERSION:
   feature_groups:
 - mirantis
   production: docker
   release: 7.0
   openstack_version: 2015.1.0-7.0
   api: 1.0
   build_number: 82
   build_id: 2015-07-23_10-59-34
   nailgun_sha: d1087923e45b0e6d946ce48cb05a71733e1ac113
   python-fuelclient_sha: 471948c26a8c45c091c5593e54e6727405136eca
   fuel-agent_sha: bc25d3b728e823e6154bac0442f6b88747ac48e1
   astute_sha: b1f37a988e097175cbbd14338286017b46b584c3
   fuel-library_sha: 58d94955479aee4b09c2b658d90f57083e668ce4
   fuel-ostf_sha: 94a483c8aba639be3b96616c1396ef290dcc00cd
   fuelmain_sha: 68871248453b432ecca0cca5a43ef0aad6079c39


 Let's go through this file.

  1) *feature_groups* - This is, in fact, a runtime parameter rather than a
  build one, so we'd better store it in astute.yaml or another runtime config
  file.

This parameter must be available in nailgun - there is code in nailgun and
the UI which relies on it.

  2) *production* - It is always equal to docker, which means we deploy
  docker containers on the master node. Formally it comes from one of the
  fuel-main variables, which is set to docker by default, but not a
  single job in CI customizes this variable. It looks like it makes no sense to
  have this.

This parameter can be set to other values when used for the fake UI and for
functional tests of the UI and fuelclient.

  3) *release* - It is the number of the Fuel release. Frankly, we don't need this
  because it is nothing more than the version of the fuel meta package [1].

It is shown in the UI.

  4) *openstack_version* - It is just an extraction from openstack.yaml [2].
  5) *api* - It is 1.0 currently. And we still don't have other versions of
  the API. Frankly, it contradicts the common practice of making several
  different versions available at the same time. And a user should be able to
  ask the API which versions are currently available.
  6) *build_number* and *build_id* - These are the only parameters that
  relate to the build process. But let's think about whether we actually need
  these parameters if we switch to a package based approach. RPM/DEB
  repositories are going to become the main way of delivering Fuel, not the
  ISO. So, it also makes little sense to store them, especially if we upgrade
  some of the packages.
  7) *X_sha* - This does not even require any explanation. It should be rpm
  -qa instead.


  I am raising this topic because it is kind of a blocker for switching to
  package based upgrades. Our current upgrade script assumes we have this
  file version.yaml in the tarball, and we put this new file in place of the old
  one during upgrade. But this file cannot be packaged into an RPM because it
  can only be built together with the ISO.


 [1]
 https://github.com/stackforge/fuel-main/blob/master/specs/fuel-main.spec
 [2]
 https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml

 Vladimir Kozhukalov





-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.


Re: [openstack-dev] [neutron][testing] How to modify DSVM tests to use a DevStack plugin?

2015-07-27 Thread Sean Dague
You would build variants of the jobs you want that specifically enable
your plugin.

That being said, you should focus on jobs that substantially test your
component, not just the giant list of all jobs. Part of our focus is on
decoupling, so that for something like vpnaas you can start with the
assumption that neutron base services are sufficiently tested elsewhere,
and the only thing you should test is the additional function and
complexity that your component brings to the mix.

-Sean

On 07/27/2015 07:44 AM, Paul Michali wrote:
 Yes, the plugin enables the service, and for the neutron-vpnaas DSVM
 based jobs, I have the enable_plugin line added to the job so that
 everything works.
 
 However, for the DevStack repo, which runs a bunch of other DSVM jobs,
 this fails, as there is (obviously) no enable_plugin line:
 
   [snip: full gate job list, reproduced in the original message further down]
 
 
 I'm wondering what's the best way to modify those jobs... is there some
 common location where I can enable the plugin to handle all DSVM based
 jobs, do I just update the 5 failing tests, do I update just the 3
 voting tests, or do I update all 16 DSVM based jobs?
 
 Regards,
 PCM
 
 On Fri, Jul 24, 2015 at 5:12 PM Clark Boylan cboy...@sapwetik.org
 mailto:cboy...@sapwetik.org wrote:
 
 On Fri, Jul 24, 2015, at 02:05 PM, Paul Michali wrote:
  Hi,
 
  I've created a DevStack plugin for the neutron-vpnaas repo. Now, I'm
  trying
  to remove the q-vpn service setup from the DevStack repo (
  https://review.openstack.org/#/c/201119/).
 
  However, I'm hitting an issue in that (almost) every test that uses
  DevStack fails, because it is no longer setting up q-vpn.
 
  How should I modify the tests, so that they set up the q-vpn
 service, in
  light of the fact that there is a DevStack plugin available for it. Is
  there some common place that I can do the enable_plugin
  neutron-vpnaas...
  line?
 
 Your devstack plugin should enable the service. Then in your jobs you
 just need to enable the plugin which will then enable the vpn service.
There should be plenty of prior art with the ec2api plugin, glusterfs
plugin, and others.

Clark

Re: [openstack-dev] [neutron][testing] How to modify DSVM tests to use a DevStack plugin?

2015-07-27 Thread Paul Michali
Maybe I'm not explaining myself well (sorry)...

For VPN commits, there are functional jobs that (now) enable the devstack
plugin for neutron-vpnaas as needed (and grenade job will do the same).
From the neutron-vpnaas repo standpoint everything is in place.

Now that there is a devstack plugin for neutron-vpnaas, I want to remove
all the VPN setup from the *DevStack* repo's setup, as the user of DevStack
can specify the enable_plugin in their local.conf file now. The commit is
https://review.openstack.org/#/c/201119/.

The issue I see, though, is that the DevStack repo's jobs are failing,
because they use devstack, rely on VPN being set up, and the
enable_plugin line for VPN isn't part of any of the jobs shown in my last
post.

How do we resolve that issue?

Regards,

PCM


On Mon, Jul 27, 2015 at 8:09 AM Sean Dague s...@dague.net wrote:

 You would build variants of the jobs you want that specifically enable
 your plugin.

 That being said, you should focus on jobs that substantially test your
 component, not just the giant list of all jobs. Part of our focus in on
 decoupling so that for something like vpnaas you can start with the
 assumption that neutron base services are sufficiently tested elsewhere,
 and the only thing you should test is the additional function and
 complexity that your component brings to the mix.

 -Sean

 On 07/27/2015 07:44 AM, Paul Michali wrote:
  Yes, the plugin enables the service, and for the neutron-vpnaas DSVM
  based jobs, I have the enable_plugin line added to the job so that
  everything works.
 
  However, for the DevStack repo, which runs a bunch of other DSVM jobs,
  this fails, as there is (obviously) no enable_plugin line:
 
    [snip: full gate job list]
 
 
  I'm wondering what's the best way to modify those jobs... is there some
  common location where I can enable the plugin to handle all DSVM based
  jobs, do I just update the 5 failing tests, do I update just the 3
  voting tests, or do I update all 16 DSVM based jobs?
 
  Regards,
  PCM
 
  On Fri, Jul 24, 2015 at 5:12 PM Clark Boylan 

Re: [openstack-dev] [stable][neutron] dvr job for kilo?

2015-07-27 Thread Kyle Mestery
On Mon, Jul 27, 2015 at 6:57 AM, Thierry Carrez thie...@openstack.org
wrote:

 Ihar Hrachyshka wrote:
  I noticed that the dvr job is now voting for all stable branches, and
  failing, because the branch misses some important fixes from master.
 
  Initially, I tried to just disable votes for stable branches for the
  job: https://review.openstack.org/#/c/205497/ Due to limitations of
  project-config, we would need to rework the patch, though, to split the
  job into a stable non-voting and a liberty+ voting one, and disable the
  votes just for the first one.
 
  My gut feeling is that since the job never actually worked for kilo,
  we should just kill it for all stable branches. It does not provide
  any meaningful actionable feedback anyway.
 
  Thoughts?

 +1 to kill it.


Agreed, let's get rid of it for stable branches.
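
For anyone curious what the split Ihar mentioned would look like: in
project-config's zuul/layout.yaml a job can be limited to newer branches with
a branch regex. A hedged sketch (job name and regex illustrative, not the
actual change):

    # zuul/layout.yaml: only run/vote the dvr job on non-stable branches.
    jobs:
      - name: gate-tempest-dsvm-neutron-dvr
        branch: ^(?!stable/).*$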


 --
 Thierry Carrez (ttx)




Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Vladimir Kuklin
Folks

We saw several High issues with how keystone manages regular memcached
tokens. I know this is not the perfect time, as you already decided to push
it out of 7.0, but I would reconsider and declare it an FFE, as the current
behavior affects HA and UX poorly. If we can enable Fernet tokens simply by
altering configuration, let's do it. I see the commit for this feature is
pretty trivial.
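
For context, on the keystone side the switch is indeed mostly configuration.
A hedged sketch of the kilo-era setup (option values as I recall them from
upstream docs; verify before relying on this, and note that key distribution
and rotation across controllers is the part Fuel would still have to solve):

    # keystone.conf: switch the token provider to Fernet.
    [token]
    provider = keystone.token.providers.fernet.Provider

    # One-time key setup on each node:
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone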

On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Fuel Library team, I expect your immediate reply here.

 I'd like upgrades team to take a look at this one, as well as at the one
 which moves Keystone under Apache, in order to check that there are no
 issues here.

 -1 from me for this time in the cycle. I'm concerned about:

   1. I don't see any reference to a blueprint or bug which explains (with
   measurements) why we need this change in the reference architecture, and what
   the thoughts about it are in puppet-openstack and OpenStack Keystone. We
   need to get datapoints, and point to them. Just knowing that the Keystone team
   implemented support for it doesn't yet mean that we need to rush into
   enabling it.
   2. It is quite a noticeable change, not a simple enhancement. I reviewed
   the patch; there are questions raised.
   3. It doesn't pass CI, and I don't have information on the risks
   involved, or the additional effort required to get this done (how long would
   it take to get it done).
   4. This feature increases the complexity of the reference architecture. Now
   I'd like every complexity increase to be optional. I have feedback from the
   field that our prescriptive architecture just doesn't fit users' needs,
   and it is so painful to then decouple what is needed vs what is not. Let's
   start extending stuff with an easy switch, propagated from Fuel
   Settings. Is it possible to do? How complex would it be?

 If we get answers to all of this, and decide that we still want the
 feature, then it would be great to have it. I just don't feel that it's
 the right timing anymore - we've entered FF.

 Thanks,

 On Thu, Jul 23, 2015 at 11:53 AM Alexander Makarov amaka...@mirantis.com
 wrote:

 Colleagues,

 I would like to request an exception from the Feature Freeze for Fernet
 tokens support added to the fuel-library in the following CR:
 https://review.openstack.org/#/c/201029/

 Keystone part of the feature is implemented in the upstream and the
 change impacts setup configuration only.

 Please, respond if you have any questions or concerns related to this
 request.

 Thanks in advance.

 --
 Kind Regards,
 Alexander Makarov,
 Senior Software Developer,

 Mirantis, Inc.
 35b/3, Vorontsovskaya St., 109147, Moscow, Russia

 Tel.: +7 (495) 640-49-04
 Tel.: +7 (926) 204-50-60

 Skype: MAKAPOB.AJIEKCAHDP


 --
 Mike Scherbakov
 #mihgen





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] [neutron][testing] How to modify DSVM tests to use a DevStack plugin?

2015-07-27 Thread Paul Michali
Yes, the plugin enables the service, and for the neutron-vpnaas DSVM based
jobs, I have the enable_plugin line added to the job so that everything
works.

However, for the DevStack repo, which runs a bunch of other DSVM jobs, this
fails, as there is (obviously) no enable_plugin line:


   - gate-tempest-dsvm-full: SUCCESS in 58m 37s
     http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-full/98be491/
   - gate-tempest-dsvm-postgres-full: SUCCESS in 50m 45s
     http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-postgres-full/85c5b92/
   - gate-tempest-dsvm-neutron-full: FAILURE in 1h 25m 30s
     http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-neutron-full/0050bfe/
   - gate-grenade-dsvm: SUCCESS in 44m 23s
     http://logs.openstack.org/19/201119/1/check/gate-grenade-dsvm/b224606/
   - gate-tempest-dsvm-large-ops: SUCCESS in 26m 49s
     http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-large-ops/a250cf5/
   - gate-tempest-dsvm-neutron-large-ops: SUCCESS in 25m 51s
     http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-neutron-large-ops/6faa1be/
   - gate-devstack-bashate: SUCCESS in 13s
     http://logs.openstack.org/19/201119/1/check/gate-devstack-bashate/65ad952/
   - gate-devstack-unit-tests: SUCCESS in 1m 02s
     http://logs.openstack.org/19/201119/1/check/gate-devstack-unit-tests/ccdbe4e/
   - gate-devstack-dsvm-cells: SUCCESS in 24m 08s
     http://logs.openstack.org/19/201119/1/check/gate-devstack-dsvm-cells/a6ca00c/
   - gate-grenade-dsvm-partial-ncpu: SUCCESS in 48m 36s
     http://logs.openstack.org/19/201119/1/check/gate-grenade-dsvm-partial-ncpu/744deb8/
   - gate-tempest-dsvm-ironic-pxe_ssh: FAILURE in 40m 10s
     http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-ironic-pxe_ssh/8eb4315/
   - gate-devstack-dsvm-updown: SUCCESS in 21m 12s
     http://logs.openstack.org/19/201119/1/check/gate-devstack-dsvm-updown/85f1de5/
   - gate-tempest-dsvm-f21: FAILURE in 51m 01s (non-voting)
     http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-f21/35a04c4/
   - gate-tempest-dsvm-centos7: SUCCESS in 30m 23s (non-voting)
     http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-centos7/b9c99c9/
   - gate-devstack-publish-docs: SUCCESS in 2m 23s
     http://docs-draft.openstack.org/19/201119/1/check/gate-devstack-publish-docs/f794b1c//doc/build/html/
   - gate-swift-dsvm-functional-nv: SUCCESS in 27m 12s (non-voting)
     http://logs.openstack.org/19/201119/1/check/gate-swift-dsvm-functional-nv/13d2c58/
   - gate-grenade-dsvm-neutron: FAILURE in 47m 49s
     http://logs.openstack.org/19/201119/1/check/gate-grenade-dsvm-neutron/8675f0c/
   - gate-tempest-dsvm-multinode-smoke: SUCCESS in 36m 53s (non-voting)
     http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-multinode-smoke/bd69c45/
   - gate-tempest-dsvm-neutron-multinode-smoke: FAILURE in 44m 16s (non-voting)
     http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-neutron-multinode-smoke/01e1d45/


I'm wondering what's the best way to modify those jobs... is there some
common location where I can enable the plugin to handle all DSVM based
jobs, do I just update the 5 failing tests, do I update just the 3 voting
tests, or do I update all 16 DSVM based jobs?

Regards,
PCM

On Fri, Jul 24, 2015 at 5:12 PM Clark Boylan cboy...@sapwetik.org wrote:

 On Fri, Jul 24, 2015, at 02:05 PM, Paul Michali wrote:
  Hi,
 
  I've created a DevStack plugin for the neutron-vpnaas repo. Now, I'm
  trying
  to remove the q-vpn service setup from the DevStack repo (
  https://review.openstack.org/#/c/201119/).
 
  However, I'm hitting an issue in that (almost) every test that uses
  DevStack fails, because it is no longer setting up q-vpn.
 
  How should I modify the tests, so that they set up the q-vpn service, in
  light of the fact that there is a DevStack plugin available for it? Is
  there some common place that I can do the enable_plugin
  neutron-vpnaas...
  line?
 
 Your devstack plugin should enable the service. Then in your jobs you
 just need to enable the plugin which will then enable the vpn service.
 There should be plenty of prior art with the ec2api plugin, glusterfs
 plugin, and others.
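
 For reference, a hedged example of what that looks like in a devstack
 local.conf (repo URL as commonly used at the time; adjust to your setup):

     [[local|localrc]]
     # Pull and enable the neutron-vpnaas devstack plugin; the plugin's
     # settings then enable the q-vpn service itself.
     enable_plugin neutron-vpnaas git://git.openstack.org/openstack/neutron-vpnaas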

 Clark



Re: [Openstack] Use of injected-files in compute service

2015-07-27 Thread nithish B
Hi Priyanka,
Those options are used when arbitrary files have to be placed within the
instance. For example, if you wish to use your own authorized_keys file,
instead of the regular keys that exist, you may use this method. The
only limitation is that you may add up to 5 files only (the default quota).
So to summarize, these options are used for injecting SSH keys, network
info, a root password, or arbitrary files.
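
As a concrete example, a hedged sketch of driving file injection from the
nova CLI (image/flavor names are illustrative):

    # Inject a local public key file into the guest at boot time; up to
    # the injected-files quota (5 by default) of --file arguments is allowed.
    nova boot --image cirros --flavor m1.tiny \
        --file /root/.ssh/authorized_keys=/home/me/mykey.pub my-instance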

Hope this helps.

Regards,
Nitish B.

On Mon, Jul 27, 2015 at 5:02 PM, Priyanka ppn...@cse.iitb.ac.in wrote:

  Hi,

 What is the use of injected-files, injected-file-content-bytes and
 injected-file-path-bytes in the compute service? The OpenStack guide says it is
 the "Number of injected files allowed per tenant". I did not get the actual
 meaning of this.

 Thanks,

 Priyanka



Re: [Openstack] Storing Heat Templates on Glance Artifact Repo on Kilo

2015-07-27 Thread Kuvaja, Erno
Hi Thiago,

As it is now, we do not support storing anything other than images in Glance. 
Obviously nothing really prevents it either, as we do not check that the images 
uploaded actually are images content-wise anyway.

You might want to look into the ongoing work around Images API v3, also 
known as Artifacts. The planned implementation would most probably address your 
use case, even though it does not initially provide images at all ((V1)/V2/V3 can and 
probably need to be running concurrently to address all the needs).


-  Erno

From: Martinx - ジェームズ [mailto:thiagocmarti...@gmail.com]
Sent: Friday, July 24, 2015 11:04 PM
To: openstack@lists.openstack.org
Subject: [Openstack] Storing Heat Templates on Glance Artifact Repo on Kilo

Guys,

 I have a bunch of Heat Templates and I would like to know if it is possible to 
store those templates on Glance.

 Is it possible?

 If yes, how?

 I'm using OpenStack Kilo on top of Ubuntu Trusty (using Ubuntu Cloud Archive).

Thanks!
Thiago


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Boris Bobrov
I agree. The configuration with memcache that Fuel makes now has issues which badly 
affect the overall OpenStack experience.

On Monday 27 July 2015 14:34:59 Vladimir Kuklin wrote:
 Folks
 
 We saw several High issues with how keystone manages regular memcached
 tokens. I know this is not the perfect time, as you already decided to push
 it out of 7.0, but I would reconsider and declare it an FFE, as it affects HA
 and UX poorly. If we can enable Fernet tokens simply by altering configuration,
 let's do it. I see the commit for this feature is pretty trivial.
 
 On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov mscherba...@mirantis.com
 
 wrote:
  [snip: rest of quoted thread, reproduced in full above]

-- 
Best regards,
Boris Bobrov



[openstack-dev] [neutron][vpnaas] No VPNaaS IRC meeting Tuesday, July 27th.

2015-07-27 Thread Paul Michali
I'm on vacation tomorrow (yeah!), and there wasn't much new to discuss, so
I was planning on canceling the meeting this week. If you have something
pressing and want to host the meeting, let everyone know by updating the
agenda and responding to this message. Otherwise you can use the neutron IRC
channel or the ML to discuss items.

Regards,

Paul Michali (pc_m)


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Alexander Makarov
Actually, the Fernet token IS the best bet for stability and quality.

On Mon, Jul 27, 2015 at 3:23 PM, Sergii Golovatiuk sgolovat...@mirantis.com
 wrote:

 Guys, I object to merging Fernet tokens. I set -2 for any Fernet-related
 activities. Firstly, there are ongoing discussions about how we should
 distribute, revoke, and rotate SSL keys for Fernet. Secondly, there is some
 discussion in the community about potential security concerns where a user may
 renew a token instantly. Additionally, we've already introduced apache wsgi,
 which may have its own implications for keystone itself. It's a bit late for 7.0.
 Let's focus on stability and quality.



 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Mon, Jul 27, 2015 at 1:52 PM, Alexander Makarov amaka...@mirantis.com
 wrote:

 I've filed a ticket to test Fernet tokens in the scale lab:
 https://mirantis.jira.com/browse/MOSS-235

 If this feature is not granted an FFE, we can still configure it manually by
 changing the keystone config.
 So I think an internal how-to document, backed up by scale and BVT testing,
 will allow our deployers to deliver Fernet to our customers.
 One more thing: in the community this feature is considered experimental,
 so maybe setting it as the default is a bit premature?

 On Mon, Jul 27, 2015 at 2:34 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Folks

 We saw several High-priority issues with how keystone manages regular
 memcached tokens. I know this is not the perfect time, as you already
 decided to push it out of 7.0, but I would reconsider declaring it an FFE,
 as the current behaviour affects HA and UX badly. If we can enable Fernet
 tokens simply by altering configuration, let's do it. The commit for this
 feature looks pretty trivial.

 On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Fuel Library team, I expect your immediate reply here.

 I'd like upgrades team to take a look at this one, as well as at the
 one which moves Keystone under Apache, in order to check that there are no
 issues here.

 -1 from me for this time in the cycle. I'm concerned about:

   1. I don't see any reference to a blueprint or bug which explains
   (with measurements) why we need this change in the reference
   architecture, and what the thoughts about it are in puppet-openstack
   and OpenStack Keystone. We need to get data points, and point to them.
   Just knowing that the Keystone team implemented support for it doesn't
   yet mean that we need to rush into enabling this.
   2. It is quite a noticeable change, not a simple enhancement. I
   reviewed the patch; there are questions raised.
   3. It doesn't pass CI, and I don't have information on the risks
   associated, or the additional effort required to get this done (how
   long would it take to get it done).
   4. This feature increases the complexity of the reference architecture.
   Now I'd like every complexity increase to be optional. I have feedback
   from the field that our prescriptive architecture just doesn't fit
   users' needs, and it is painful to then decouple what is needed vs what
   is not. Let's start extending stuff with an easy switch, propagated
   from Fuel Settings. Is it possible to do? How complex would it be?

 If we get answers to all of this, and decide that we still want the
 feature, then it would be great to have it. I just don't feel that it's the
 right timing anymore - we've entered FF.

 Thanks,

 On Thu, Jul 23, 2015 at 11:53 AM Alexander Makarov 
 amaka...@mirantis.com wrote:

 Colleagues,

 I would like to request an exception from the Feature Freeze for
 Fernet tokens support added to the fuel-library in the following CR:
 https://review.openstack.org/#/c/201029/

 Keystone's part of the feature is implemented upstream, and the
 change impacts setup configuration only.

 Please respond if you have any questions or concerns related to this
 request.

 Thanks in advance.

 --
 Kind Regards,
 Alexander Makarov,
 Senior Software Developer,

 Mirantis, Inc.
 35b/3, Vorontsovskaya St., 109147, Moscow, Russia

 Tel.: +7 (495) 640-49-04
 Tel.: +7 (926) 204-50-60

 Skype: MAKAPOB.AJIEKCAHDP

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Mike Scherbakov
 #mihgen


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com


 

Re: [openstack-dev] [Fuel] Switching keystone to apache wsgi

2015-07-27 Thread Sergii Golovatiuk
Hi,

Do we have any results from the Scale team? I would like to compare the
Apache results with eventlet. We also need to perform destructive tests
and get numbers when one controller is down.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Jul 24, 2015 at 12:29 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Just to ensure that everyone knows - the patch is merged. I hope that we
 will see performance improvements, and am looking forward to test results :)

 On Thu, Jul 23, 2015 at 1:13 PM Aleksandr Didenko adide...@mirantis.com
 wrote:

 Hi,

 guys, we're about to switch keystone to apache wsgi by merging [0]. Just
 wanted to make sure everyone is aware of this change.
 If you have any questions or concerns, let's discuss them in this thread.

 Regards,
 Alex

 [0] https://review.openstack.org/#/c/204111/
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Mike Scherbakov
 #mihgen

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [puppet] module dependencies and different openstack versions

2015-07-27 Thread Emilien Macchi


On 07/27/2015 02:32 AM, Sam Morrison wrote:
 We currently use our own custom puppet modules to deploy OpenStack. I have
 been looking into the official OpenStack modules and have a few barriers to
 switching.
 
 We are looking at doing this one project at a time, but the modules have a
 lot of dependencies. E.g. they all depend on the keystone module and try to
 do things in keystone such as creating users, service endpoints, etc.
 
 This is a pain as I don't want it to mess with keystone (for one, we don't
 support setting endpoints via an API), but we also don't want to move to the
 official keystone module at the same time. We have some custom keystone
 stuff, which means we may never move to the official keystone puppet module.

Well, in that case it's going to be very hard for you to use the
modules. Trying to give up forks and catch up to upstream is really
expensive and challenging (Fuel is currently working on this).

What I suggest is:
1/ have a look at the diff between your manifests and upstream ones.
2/ try to use upstream modules with the maximum number of classes, and
put the rest in a custom module (or a manifest somewhere).
3/ submit patches if you think we're missing something in the modules.

 The neutron module pulls in the vswitch module, but we don't use vswitch,
 and it doesn't seem to be a requirement of the module, so maybe it doesn't
 need to be in the metadata dependencies?

AFAIK there is no conditional in metadata.json, so we need the module
anyway. It should not cause any trouble for you, unless you have a
custom 'vswitch' module.

 
 It looks as if all the OpenStack puppet modules are designed to be used all
 at once? Does anyone else have these kinds of issues? It would be great if,
 e.g., the neutron module would just manage neutron and not try to do things
 in nova, keystone, mysql, etc.

We try to design our modules to work together because Puppet OpenStack
is a single project composed of modules that are supposed to -together-
deploy OpenStack.

In your case, I would just install the modules from source (git) and not
try to pull them from the Puppet Forge.

 
 The other issue we have is that we have different services in OpenStack
 running different versions. Currently we have Kilo, Juno and Icehouse
 versions of different bits in the same cloud. It seems as if the puppet
 modules are designed to manage just one OpenStack version? Are there any
 thoughts on making them support different versions at the same time? Does
 this work?

1/ you're running Kilo, Juno and Icehouse in the same cloud? Wow. You're
brave!

2/ Puppet modules do not hardcode OpenStack packages version. Though our
current master is targeting Liberty, but we have stable/kilo,
stable/juno, etc. You can even disable the package dependency in most of
the classes.

I'm not sure this is an issue here; maybe it is a misunderstanding of how
to use the modules.

Good luck,
-- 
Emilien Macchi



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all] [oslo] Suggestion to change verbose=false to true in oslo.log by default

2015-07-27 Thread Sean Dague
Honestly, I think deprecating and removing 'verbose' is probably the
best option. INFO is probably the right default behavior, and it's not
really verbose in any real OpenStack usage. It is unlikely that anyone
would want it to be off, and if they do, they can do that via Python
logging config.
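
(As an illustration of that escape hatch: an operator who really wants
WARN-only output could point the oslo.log option log_config_append at a
standard Python logging config file. A hedged sketch, paths illustrative:)

    # /etc/nova/logging.conf, referenced via log_config_append in nova.conf
    [loggers]
    keys = root

    [handlers]
    keys = stderr

    [formatters]
    keys = default

    [logger_root]
    level = WARNING
    handlers = stderr

    [handler_stderr]
    class = StreamHandler
    level = WARNING
    formatter = default
    args = (sys.stderr,)

    [formatter_default]
    format = %(asctime)s %(levelname)s %(name)s %(message)s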

On 07/27/2015 06:32 AM, Dmitry Tantsur wrote:
 Hi all!
 
 I didn't find the discussion on the ML so I feel like starting one.
 What was the reason for setting verbose to false by default? The patch
 [1] does not provide any reasoning for it.
 
 We all know that software does fail from time to time. While the default
 level of WARN might give some signal to an operator that *something* is
 wrong, it usually does not give many clues about *what* and *why*. Our
 logging guidelines define INFO as units of work, and the default means that
 operators/people debugging their logs won't even be able to track the
 transitions in their system that lead to an error/warning.
 
 Of the people I know, 100% use the DEBUG level by default; the only
 post I've found here on this topic [2] seems to state the same. I
 realize that DEBUG might give too much information to process (though I
 always ask people to enable debug logging before sending me any bug
 reports). But is there really a compelling reason to disable INFO?
 
 Examples of INFO logs from ironic tempest run:
 ironic cond:
 http://logs.openstack.org/62/202562/7/check/gate-tempest-dsvm-ironic-pxe_ssh/090871b/logs/screen-ir-cond.txt.gz?level=INFO
 
 nova cpu:
 http://logs.openstack.org/62/202562/7/check/gate-tempest-dsvm-ironic-pxe_ssh/090871b/logs/screen-n-cpu.txt.gz?level=INFO
 
 and the biggest one neutron agt:
 http://logs.openstack.org/62/202562/7/check/gate-tempest-dsvm-ironic-pxe_ssh/090871b/logs/screen-q-agt.txt.gz?level=INFO
 
 
 As you can see, these logs are so small you can just read through them
 without any tooling! Of course it's not a real-world example, but I'm
 dealing with hundreds of megabytes of debug-level text logs from nova +
 ironic nearly every day. It's still manageable for me; grep handles it
 pretty well (to say nothing of journalctl).
 
 WDYT about changing this default on oslo.log level?
 
 Thanks,
 Dmitry
 
 [1] https://review.openstack.org/#/c/18110/
 [2]
 http://lists.openstack.org/pipermail/openstack-operators/2014-March/004156.html
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Zabbix in deployment tasks

2015-07-27 Thread Sergii Golovatiuk
Hi,

An experimental feature may be removed at any time; that's why it's
experimental. However, I agree that upgrades of such environments should be
disabled.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #44

2015-07-27 Thread Emilien Macchi
Hello team,

Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
in #openstack-meeting-4:

https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150728

Please add any additional items you'd like to discuss.
If our schedule allows it, we'll do bug triage during the meeting.

See you there!
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Alexander Makarov
I've filed a ticket to test Fernet tokens in the scale lab:
https://mirantis.jira.com/browse/MOSS-235

If this feature is not granted FFE, we can still configure it manually by
changing the keystone config.
So I think an internal how-to document, backed up by scale and BVT testing,
will allow our deployers to deliver Fernet to our customers.
One more thing: in the community this feature is considered experimental,
so maybe setting it as the default is a bit premature?
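
(For reference, a minimal sketch of that manual configuration, using
Kilo-era option names; illustrative and untested, not a supported recipe:)

    # keystone.conf: switch the token provider to Fernet
    [token]
    provider = keystone.token.providers.fernet.Provider

    [fernet_tokens]
    key_repository = /etc/keystone/fernet-keys/

    # then initialize the key repository once:
    #   keystone-manage fernet_setup --keystone-user keystone \
    #                                --keystone-group keystone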

On Mon, Jul 27, 2015 at 2:34 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Folks

 We saw several High-priority issues with how keystone manages regular
 memcached tokens. I know this is not the perfect time, as you already
 decided to push it out of 7.0, but I would reconsider declaring it an FFE,
 as the current behaviour affects HA and UX badly. If we can enable Fernet
 tokens simply by altering configuration, let's do it. The commit for this
 feature looks pretty trivial.

 On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 Fuel Library team, I expect your immediate reply here.

 I'd like upgrades team to take a look at this one, as well as at the one
 which moves Keystone under Apache, in order to check that there are no
 issues here.

 -1 from me for this time in the cycle. I'm concerned about:

   1. I don't see any reference to a blueprint or bug which explains
   (with measurements) why we need this change in the reference
   architecture, and what the thoughts about it are in puppet-openstack
   and OpenStack Keystone. We need to get data points, and point to them.
   Just knowing that the Keystone team implemented support for it doesn't
   yet mean that we need to rush into enabling this.
   2. It is quite a noticeable change, not a simple enhancement. I
   reviewed the patch; there are questions raised.
   3. It doesn't pass CI, and I don't have information on the risks
   associated, or the additional effort required to get this done (how
   long would it take to get it done).
   4. This feature increases the complexity of the reference architecture.
   Now I'd like every complexity increase to be optional. I have feedback
   from the field that our prescriptive architecture just doesn't fit
   users' needs, and it is painful to then decouple what is needed vs what
   is not. Let's start extending stuff with an easy switch, propagated
   from Fuel Settings. Is it possible to do? How complex would it be?

 If we get answers to all of this, and decide that we still want the
 feature, then it would be great to have it. I just don't feel that it's the
 right timing anymore - we've entered FF.

 Thanks,

 On Thu, Jul 23, 2015 at 11:53 AM Alexander Makarov amaka...@mirantis.com
 wrote:

 Colleagues,

 I would like to request an exception from the Feature Freeze for Fernet
 tokens support added to the fuel-library in the following CR:
 https://review.openstack.org/#/c/201029/

 Keystone's part of the feature is implemented upstream, and the
 change impacts setup configuration only.

 Please respond if you have any questions or concerns related to this
 request.

 Thanks in advance.

 --
 Kind Regards,
 Alexander Makarov,
 Senior Software Developer,

 Mirantis, Inc.
 35b/3, Vorontsovskaya St., 109147, Moscow, Russia

 Tel.: +7 (495) 640-49-04
 Tel.: +7 (926) 204-50-60

 Skype: MAKAPOB.AJIEKCAHDP

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Mike Scherbakov
 #mihgen

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Plugins on separate launchpad projects

2015-07-27 Thread Patrick Petit

On 26 Jul 2015 at 20:25:43, Sheena Gregson (sgreg...@mirantis.com) wrote:

Patrick –

 

Are you suggesting one project for all Fuel plugins, or individual projects
for each plugin? I believe it is the former, which I prefer – but I wanted
to check.

Sheena,

I meant one individual project for each plugin, or one individual project
for several plugins when it makes sense to regroup them under one umbrella,
like the LMA toolchain as stated earlier.


 

Sheena

 

From: Patrick Petit [mailto:ppe...@mirantis.com]
Sent: Saturday, July 25, 2015 12:25 PM
To: Igor Kalnitsky; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fuel][Plugins] Plugins on separate launchpad 
projects

 

Igor, thanks for your comments. Please see below.

Patrick

On 25 Jul 2015 at 13:08:24, Igor Kalnitsky (ikalnit...@mirantis.com) wrote:

Hello Patrick,

Thank you for raising this topic. I think that it'd be nice to create
a separate projects for Fuel plugins if it wasn't done yet.

Yes, there is a launchpad project for fuel plugins, although it's currently
not possible to create blueprints in that project.

But that’s not what I meant. I meant dedicated projects for each Fuel plugin
or for a group of Fuel plugins if desired.

For example a project for LMA series of fuel plugins.

Fuel plugins have different release cycles and do not share a core group.
So it makes a lot of sense to me to create separate projects.

Correct. We are on the same page.




Otherwise, I have no idea how to work with LP's milestones since again
- plugins have different release cycles.

Thanks,
Igor

On Fri, Jul 24, 2015 at 8:27 PM, Patrick Petit ppe...@mirantis.com wrote:
 Hi There,

 I have been thinking that it would make a lot of sense to have separate
 launchpad projects for Fuel plugins.

 The main benefits I foresee….

 - The Fuel project will be less of a bottleneck for bug triage, and it
 should be more effective to have team members do the bug triage. After all,
 they are best placed to make the required judgement call.
 - A feature can be spread across multiple plugins, as is the case with the
 LMA toolchain, so it would be better to have a separate project to
 regroup them.
 - It is counterintuitive and awkward to create blueprints for plugins in
 the Fuel project itself, in addition to cluttering it with stuff that is
 unrelated to Fuel.

 Can you please tell me what’s your thinking about this?
 Thanks
 Patrick

 --
 Patrick Petit
 Mirantis France


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Cloud Foundry Service Broker Api in Murano

2015-07-27 Thread Nikolay Starodubtsev
If you're interested in this feature, you can join us at the Murano
weekly meeting tomorrow at 17:00 UTC in #openstack-meeting-alt.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-06-16 18:26 GMT+03:00 Nikolay Starodubtsev nstarodubt...@mirantis.com
:

 Here is a draft spec for this: https://review.openstack.org/#/c/192250/



 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1

 2015-06-16 13:11 GMT+03:00 Nikolay Starodubtsev 
 nstarodubt...@mirantis.com:

 Hi all,
 I've started work on a bp:
 https://blueprints.launchpad.net/murano/+spec/cloudfoundry-api-support
 I plan to publish a spec in a day or two. If anyone is interested in
 cooperating, please drop me a message here or on IRC: Nikolay_St



 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to debug neutron using eclipse pydev?

2015-07-27 Thread Assaf Muller
We need to update that page. I haven't used PyDev in years; I use PyCharm.
There's an option in PyCharm called 'Enable Gevent debugging' (Gevent is
a green-threads library very similar to eventlet, which is what we use
in OpenStack). I read that PyDev 3.7+ has support for Gevent debugging
as well. Can you check if simply enabling that (and not editing any code)
solves your issue? If so, I can update the wiki with your conclusions.
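
(For reference, the old workaround amounted to guarding the monkey patching
so the debugger's own threading machinery keeps working; roughly the sketch
below, where the environment variable is made up for illustration:)

    import os

    import eventlet

    # Hypothetical guard: skip os/thread patching only while a debugger
    # such as PyDev or PyCharm is attached, so its helper threads survive.
    if os.environ.get('DEBUGGER_ATTACHED'):
        eventlet.monkey_patch(os=False, thread=False)
    else:
        eventlet.monkey_patch()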

- Original Message -
 Hi,All
 
 I followed the wiki page:
 https://wiki.openstack.org/wiki/NeutronDevelopment
 
 
 * Eclipse pydev - Free. It works! (Thanks to gong yong sheng). You need
 to modify quantum-server and __init__.py as follows. From:
 eventlet.monkey_patch() To: eventlet.monkey_patch(os=False,
 thread=False)
 
 but the instructions about Eclipse PyDev are invalid, as the file has
 changed; there is no monkey_patch any more.
 So what can I do?
 
 --
 Regards,
 kkxue
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [ptls] Liberty-2 development milestone coming up

2015-07-27 Thread Thierry Carrez
Hi PTLs with deliverables using the development milestone model,

This week is the *liberty-2* development milestone week. That means you
should plan to reach out to the release team on #release-mgmt-office
during office hours tomorrow:

08:00 - 10:00 UTC: ttx
18:00 - 20:00 UTC: dhellmann

During this sync point we'll be adjusting the completed blueprints and
fixed bugs list in preparation for the tag.

The tag itself should be communicated through a proposed change to the
openstack/releases repository, sometime between Tuesday and Thursday.
We'll go through the process during the sync tomorrow.

If you can't make it to the office hours tomorrow, please reach out on
the channel so that we can arrange another time.

Regards,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception for LP-1464656 fix (update ceph PG calculation algorithm)

2015-07-27 Thread Vitaly Kramskikh
+1 to Stanislav's proposal.

2015-07-27 3:05 GMT+03:00 Stanislav Makar sma...@mirantis.com:

 Hello,
 I went through LP-1464656 and I see that it covers two things:
 1. A bad pg num calculation algorithm.
 2. Adding the possibility to set pg num via the GUI.

 The first is the most important, and a BUG in itself; the second is a
 nice-to-have feature and no more.
 Hence we should split it into a bug and a feature.

 As the main part is a bug, it does not impact FF at all.

 My +1 is to close the bad pg num calculation algorithm as a bug and
 postpone specifying pg_num via the GUI to the next release.

 All the best,
 Stanislav Makar
 +1 for FFE
 Given how broken pg_num calculations are now, this is essential to the
 ceph story, and there is no point in testing ceph at scale without it.

 The only work-around for not having this is to delete all of the pools by
 hand after deployment and calculate the values by hand, and re-create the
 pools by hand. The story from that alone makes it high on the UX scale,
 which means we might as well fix it as a bug.
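
(For context, the rule of thumb the fix is built around is roughly "total
PGs for a pool = OSDs * 100 / replica count, rounded up to a power of two".
A sketch of that heuristic, not the actual nailgun code:)

    def suggest_pg_num(num_osds, pool_size=3, pgs_per_osd=100):
        """Upstream rule-of-thumb pg_num for a pool (illustrative only)."""
        target = max(1, (num_osds * pgs_per_osd) // pool_size)
        pg_num = 1
        while pg_num < target:
            pg_num *= 2  # round up to the next power of two
        return pg_num

    # e.g. 20 OSDs, 3 replicas: target 666, suggested pg_num 1024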

 The scope of impact is limited to only ceph, the testing plan needs more
 detail, and we are still coming to terms with some of the data format to
 process between nailgun calculating and puppet consuming.

 We would need about 1.2 weeks to get these landed.

 On Fri, Jul 24, 2015 at 3:51 AM Konstantin Danilov kdani...@mirantis.com
 wrote:

 Team,

 I would like to request an exception from the Feature Freeze for the [1]
 fix. It requires changes in
 fuel-web [2], fuel-library [3] and in the UI. [2] and [3] are already
 tested; I'm fixing the unit tests now.
 BP - [4]

 Code has a backward-compatibility mode. I need one more week to finish it.
 I'm also asking someone to be an assigned code reviewer for this ticket to
 speed up the review process.

 Thanks

 [1] https://bugs.launchpad.net/fuel/+bug/1464656
 [2] https://review.openstack.org/#/c/204814
 [3] https://review.openstack.org/#/c/204811
 [4] https://review.openstack.org/#/c/203062

 --
 Kostiantyn Danilov aka koder.ua
 Principal software engineer, Mirantis

 skype:koder.ua
 http://koder-ua.blogspot.com/
 http://mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --

 --

 Andrew Woodward

 Mirantis

 Fuel Community Ambassador

 Ceph Community

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Use of injected-files in compute service

2015-07-27 Thread Priyanka

Hi,

What is the use of injected-files, injected-file-content-bytes and
injected-file-path-bytes in the compute service? The OpenStack guide
says it is the "Number of injected files allowed per tenant". I did not
get the actual meaning of this.


Thanks,

Priyanka
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [stable][neutron] dvr job for kilo?

2015-07-27 Thread Thierry Carrez
Ihar Hrachyshka wrote:
 I noticed that the dvr job is now voting for all stable branches, and
 failing, because the branch misses some important fixes from master.
 
 Initially, I tried to just disable votes for stable branches for the
 job: https://review.openstack.org/#/c/205497/ Due to limitations of
 project-config, we would need to rework the patch, though, to split the
 job into a stable non-voting job and a liberty+ voting one, and disable
 the votes just for the first one.
 
 My gut feeling is that since the job never actually worked for kilo,
 we should just kill it for all stable branches. It does not provide
 any meaningful actionable feedback anyway.
 
 Thoughts?

+1 to kill it.

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Sergii Golovatiuk
Guys, I object to merging Fernet tokens. I am setting -2 on any
Fernet-related activities. Firstly, there are ongoing discussions about how
we should distribute, revoke, and rotate Fernet keys. Secondly, there is
discussion in the community about potential security concerns where a user
may renew a token instantly. Additionally, we've already introduced apache
wsgi, which may have its own implications for keystone itself. It's a bit
late for 7.0. Let's focus on stability and quality.



--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Jul 27, 2015 at 1:52 PM, Alexander Makarov amaka...@mirantis.com
wrote:

 I've filed a ticket to test Fernet tokens in the scale lab:
 https://mirantis.jira.com/browse/MOSS-235

 If this feature is not granted FFE, we can still configure it manually by
 changing the keystone config.
 So I think an internal how-to document, backed up by scale and BVT testing,
 will allow our deployers to deliver Fernet to our customers.
 One more thing: in the community this feature is considered experimental,
 so maybe setting it as the default is a bit premature?

 On Mon, Jul 27, 2015 at 2:34 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Folks

 We saw several High-priority issues with how keystone manages regular
 memcached tokens. I know this is not the perfect time, as you already
 decided to push it out of 7.0, but I would reconsider declaring it an FFE,
 as the current behaviour affects HA and UX badly. If we can enable Fernet
 tokens simply by altering configuration, let's do it. The commit for this
 feature looks pretty trivial.

 On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Fuel Library team, I expect your immediate reply here.

 I'd like upgrades team to take a look at this one, as well as at the one
 which moves Keystone under Apache, in order to check that there are no
 issues here.

 -1 from me for this time in the cycle. I'm concerned about:

   1. I don't see any reference to a blueprint or bug which explains
   (with measurements) why we need this change in the reference
   architecture, and what the thoughts about it are in puppet-openstack
   and OpenStack Keystone. We need to get data points, and point to them.
   Just knowing that the Keystone team implemented support for it doesn't
   yet mean that we need to rush into enabling this.
   2. It is quite a noticeable change, not a simple enhancement. I
   reviewed the patch; there are questions raised.
   3. It doesn't pass CI, and I don't have information on the risks
   associated, or the additional effort required to get this done (how
   long would it take to get it done).
   4. This feature increases the complexity of the reference architecture.
   Now I'd like every complexity increase to be optional. I have feedback
   from the field that our prescriptive architecture just doesn't fit
   users' needs, and it is painful to then decouple what is needed vs what
   is not. Let's start extending stuff with an easy switch, propagated
   from Fuel Settings. Is it possible to do? How complex would it be?

 If we get answers to all of this, and decide that we still want the
 feature, then it would be great to have it. I just don't feel that it's the
 right timing anymore - we've entered FF.

 Thanks,

 On Thu, Jul 23, 2015 at 11:53 AM Alexander Makarov 
 amaka...@mirantis.com wrote:

 Colleagues,

 I would like to request an exception from the Feature Freeze for Fernet
 tokens support added to the fuel-library in the following CR:
 https://review.openstack.org/#/c/201029/

 Keystone's part of the feature is implemented upstream, and the
 change impacts setup configuration only.

 Please respond if you have any questions or concerns related to this
 request.

 Thanks in advance.

 --
 Kind Regards,
 Alexander Makarov,
 Senior Software Developer,

 Mirantis, Inc.
 35b/3, Vorontsovskaya St., 109147, Moscow, Russia

 Tel.: +7 (495) 640-49-04
 Tel.: +7 (926) 204-50-60

 Skype: MAKAPOB.AJIEKCAHDP

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Mike Scherbakov
 #mihgen


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 

Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-07-27 Thread Sean Dague
On 07/24/2015 02:15 PM, Michael Still wrote:
 On Fri, Jul 24, 2015 at 3:55 AM, Daniel P. Berrange berra...@redhat.com
 mailto:berra...@redhat.com wrote:
 
 On Thu, Jul 23, 2015 at 11:57:01AM -0500, Michael Still wrote:
  In fact, I did an example of what I thought it would look like already:
 
  https://review.openstack.org/#/c/205154/
 
  I welcome discussion on this, especially from people who couldn't make
  it to the mid-cycle. Its up to y'all if you do that on this thread or
  in that review.
 
 I think this kind of thing needs to have a spec proposed for it, so we
 can go through the details of the problem and the design considerations
 for it. This is especially true considering this proposal comes out of
 a f2f meeting where the majority of the community was not present to
 participate in the discussion.
 
  
 So, I think discussion is totally fair here -- I want to be clear that
 what is in the review was a worked example of what we were thinking
 about, not a finished product. For example, I hit circular dependency
 issues which caused the proposal to change.
 
 However, we weren't trying to solve all issues with flags ever here.
 Specifically what we were trying to address was ops feedback that the
 help text for our config options was unhelpfully terse, and that docs
 weren't covering the finer details that ops need to understand. Adding
 more help text is fine, but we were working through how to avoid having
 hundreds of lines of help text at the start of code files.
 
 I don't personally think that passing configuration options around as
 arguments really buys us much apart from an annoying user interface
 though. We already have to declare where we use a flag (especially if we
 move the flag definitions out of the code). That gives us a framework
 to enforce the interdependencies better, which in fact we partially do
 already via hacking rules.

I think there is also a trade off here. Config options can be close to
the code they are used in, or close to other config options. And
locality is going to impact things.

Right now, with config options being local to code, we get the incentive
to grow lots of little config options to tweak everything under the
sun, and they end up buried away from a global view of whether that makes
sense. But config is global, for better or worse, and it's an interface
to our operators. Pulling it all together as an interface into a
dedicated part of the code might make it simpler to keep it consistent,
and realize how big a scope of the problem is of conf sprawl.

Because it would be nice to get detailed help into config options,
instead of it living randomly in our heads, or having to read the code. It
would also be nice to actually do the thing that markmc proposed a long
time ago of categorizing config options: the ones that you expect to
change, the ones that are really only for debug, the ones that open up
experimental stuff, etc.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][testing] How to modify DSVM tests to use a DevStack plugin?

2015-07-27 Thread Sean Dague
On 07/27/2015 08:21 AM, Paul Michali wrote:
 Maybe I'm not explaining myself well (sorry)...
 
 For VPN commits, there are functional jobs that (now) enable the
 devstack plugin for neutron-vpnaas as needed (and the grenade job will do
 the same). From the neutron-vpnaas repo standpoint, everything is in place.
 
 Now that there is a devstack plugin for neutron-vpnaas, I want to remove
 all the VPN setup from the *DevStack* repo's setup, as the user of
 DevStack can specify the enable_plugin in their local.conf file now. The
 commit is https://review.openstack.org/#/c/201119/.
 
 The issue I see, though, is that the DevStack repo's jobs are failing
 because they use devstack, rely on VPN being set up, and the
 enable_plugin line for VPN isn't part of any of the jobs shown in my
 last post.
 
 How do we resolve that issue?

Presumably there is a flag in Tempest for whether or not this service
should be tested? That would be where I'd look.
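
(Presumably via the standard availability/extension flags; an illustrative
tempest.conf sketch, not verified against the vpnaas jobs:)

    [service_available]
    neutron = True

    [network-feature-enabled]
    # only advertise vpnaas when the devstack plugin was enabled:
    api_extensions = vpnaas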

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Can't launch docker instance, Unexpected vif_type=binding_failed.

2015-07-27 Thread Assaf Muller
Also, can you paste the configuration for both the OVS agent and your
neutron server? A binding failure is almost always a configuration mismatch.

- Original Message -
 
 
 Is the neutron openvswitch agent running on host compute2? What do the logs
 say for the agent there?
 On Jul 22, 2015 07:22, Asmaa Chebba  ache...@cdta.dz  wrote:
 
 
 
 Hi,
 I installed Docker with the Juno release on Ubuntu.
 All compute/networking services are up and enabled, and I can add Docker
 images with Glance; however, I can't launch an instance (it is stuck at the
 spawning step).
 In the nova-compute log, I found:
 Instance failed to spawn
 InstanceDeployFailure: Cannot setup network: Unexpected
 vif_type=binding_failed
 and when verifying the neutron-server log:
 Failed to bind port 5d299cc9-e3f3-48a0-a80f-f204910a47e7 on host compute2
 
 Any idea how to solve this?
 I appreciate your help.
 Thanks.
 
 
 
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [fuel] Fuel plugin as docker containers images

2015-07-27 Thread Konstantin Danilov
Hi,

Are there plans to allow plugins to be delivered as Docker container images?

Thanks

-- 
Kostiantyn Danilov aka koder.ua
Principal software engineer, Mirantis

skype:koder.ua
http://koder-ua.blogspot.com/
http://mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-07-27 Thread Daniel P. Berrange
On Fri, Jul 24, 2015 at 09:48:15AM +0100, Daniel P. Berrange wrote:
 On Thu, Jul 23, 2015 at 05:55:36PM +0300, mhorban wrote:
  Hi all,
  
  During development in nova I ran into an issue related to config
  options. Now we have lists of config options, and option registration
  mixed with source code in regular files.
  On one hand it can be convenient: to have module-encapsulated config
  options. But problems appear when we need to use some config option in
  different modules/packages.
  
  If some option is registered in module X, and module X imports module Y
  for some reason...
  and one day we need to import this option in module Y, we will get a
  NoSuchOptError exception on import_opt in module Y,
  because of the circular dependency.
  To resolve it we can move the registration of this option into module Y
  (an inappropriate place) or use other tricks.
  
  I propose creating a file options.py in each package and moving all of the
  package's config options and registration code there.
  Such an approach allows us to import any option anywhere in nova without
  problems.
  
  This refactoring can be implemented piece by piece, where a piece is
  one package.
  
  What is your opinion about this idea?
 
 I tend to think that focusing on problems with dependency ordering when
 modules import each other's config options is merely attacking a symptom
 of the real root-cause problem.
 
 The way we use config options is really entirely wrong. We have gone
 to the trouble of creating (or trying to create) structured code with
 isolated functional areas, files and object classes, and then we throw
 in these config options which are essentially global variables which are
 allowed to be accessed by any code anywhere. This destroys the isolation
 of the various classes we've created, and means their behaviour is often
 based on side effects of config options from unrelated pieces of code.
 It is total madness in terms of good design practices to have such use
 of global variables.
 
 So IMHO, if we want to fix the real big problem with config options, we
 need to be looking at a solution where we stop using config options as
 global variables. We should change our various classes so that the
 necessary configurable options are passed into object constructors
 and/or methods as parameters.
 
 As an example in the libvirt driver.
 
 I would set it up so that /only/ the LibvirtDriver class in driver.py
 was allowed to access the CONF config options. In its constructor it
 would load all the various config options it needs, and either set
 class attributes for them, or pass them into other methods it calls.
 So in the driver.py, instead of calling CONF.libvirt.libvirt_migration_uri
 everywhere in the code,  in the constructor we'd save that config param
 value to an attribute 'self.mig_uri = CONF.libvirt.libvirt_migration_uri'
 and then where needed, we'd just call self.mig_uri.
 
 Now in the various other libvirt files, imagebackend.py, volume.py
 vif.py, etc. None of those files would /ever/ access CONF.*. Any time
 they needed a config parameter, it would be passed into their constructor
 or method, by the LibvirtDriver or whatever invoked them.
 
 Getting rid of the global CONF object usage in all these files trivially
 now solves the circular dependency import problem, as well as improving
 the overall structure and isolation of our code, freeing all these methods
 from unexpected side-effects from global variables.

Another significant downside of using CONF objects as global variables
is that it is largely impossible to say which nova.conf setting is
used by which service. Figuring out whether a setting affects nova-compute
or nova-api or nova-conductor, or ... largely comes down to guesswork or
reliance on tribal knowledge. It would make life significantly easier for
both developers and administrators if we could clear this up and in fact
have separate configuration files for each service, holding only the
options that are relevant for that service.  Such a cleanup is not going
to be practical, though, as long as we're using global variables for config,
as it requires control-flow analysis to find out what affects what :-(
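
(A minimal sketch of the constructor-injection pattern described above,
with illustrative names rather than the actual nova code:)

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.StrOpt('libvirt_migration_uri',
                    default='qemu+tcp://%s/system')],
        group='libvirt')


    class VifDriver(object):
        """Never touches CONF; receives what it needs as parameters."""

        def __init__(self, mig_uri):
            self.mig_uri = mig_uri


    class LibvirtDriver(object):
        """Only this class reads the global CONF, once, in its constructor."""

        def __init__(self):
            self.mig_uri = CONF.libvirt.libvirt_migration_uri
            # Collaborators get plain values, not the CONF global.
            self.vif = VifDriver(mig_uri=self.mig_uri)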

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Glance Issue

2015-07-27 Thread Karan Chhabra
Hi,

I am facing a problem with Glance. I am working on Kilo and trying to
upload an image to Glance, but the status stays queued indefinitely. I have
tried to reinstall the packages, but the problem still exists.

Can anyone help me?

-- 
Thanks  Regards
Karan Chhabra

Email | karanchhabra2...@gmail.com

Phone | +353 (0) 89 940 2705

LinkedIn | https://ie.linkedin.com/in/karanchhabra1
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Jay Pipes

On 07/27/2015 04:52 AM, Alexander Makarov wrote:

I've filed a ticket to test Fernet token on the scale lab:
https://mirantis.jira.com/browse/MOSS-235


This is good, but keep in mind that the broader community does not have 
access to the Mirantis JIRA :) Probably better to just mention you have 
submitted a request to our scale lab than provide a link that only a 
subset of the community may access.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-27 Thread Assaf Muller


- Original Message -
 
 On 7/23/15, 9:42 AM, Carl Baldwin c...@ecbaldwin.net wrote:
 
 On Wed, Jul 22, 2015 at 3:21 PM, Kevin Benton blak...@gmail.com wrote:
  The issue with the availability zone solution is that we now force
  availability zones in Nova to be constrained to network configuration. In
  the L3 ToR/no-overlay configuration, this means every rack is its own
  availability zone. This is pretty annoying for users to deal with because
  they have to choose from potentially hundreds of availability zones, and it
  rules out making AZs based on other things (e.g. current phase, cooling
  systems, etc).
 
  I may be misunderstanding, and you could be suggesting not to expose this
  availability zone to the end user and only make it available to the
  scheduler. However, this defeats one of the purposes of availability zones,
  which is to let users select different AZs to spread their instances across
  failure domains.
 
 I was actually talking with some guys at dinner during the Nova
 mid-cycle last night (Andrew ??, Robert Collins, Dan Smith, probably
 others I can't remember) about the relationship of these network
 segments to AZs and cells.  I think we were all in agreement that we
 can't confine segments to AZs or cells nor the other way around.
 
 
 I just want to +1 this one from the operators’ perspective.  Network
 segments, availability zones, and cells are all separate constructs which
 are used for different purposes.  We prefer to not have any relationships
 forced between them.

I agree, which is why I later corrected to expose physical_networks details
via the API instead, and feed that info to the Nova scheduler.

 
 Mike
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-27 Thread Mike Dorman

On 7/23/15, 8:54 AM, Carl Baldwin c...@ecbaldwin.net wrote:

On Thu, Jul 23, 2015 at 8:51 AM, Kevin Benton blak...@gmail.com wrote:
Or, migration scheduling would need to respect the constraint that a
 port may be confined to a set of hosts.  How can we assign a port to a
 different network?  The VM would wake up and what?  How would it know
 to reconfigure its network stack?

  Right, that's a big mess. Once a network is picked for a port I think we
  just need to rely on a scheduler filter that limits the migration to where
  that network is available.

+1.  That's where I was going.

Agreed, this seems reasonable to me for the migration scheduling case.

I view the pre-created port scenario as an edge case.  By explicitly 
pre-creating a port and using it for a new instance (rather than letting 
nova create a port for you), you are implicitly stating that you have more 
knowledge about the networking setup.  In so doing, you’re removing the 
guard rails (of nova scheduling the instance to a good network for the 
host it's on), and therefore are at higher risk to crash and burn.  To me 
that’s an acceptable trade-off.

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Nova migrate-flavor-data woes

2015-07-27 Thread Jay Pipes

On 07/26/2015 01:15 PM, Lars Kellogg-Stedman wrote:

So, the Kilo release notes say:

 nova-manage migrate-flavor-data

But nova-manage says:

 nova-manage db migrate_flavor_data

But that says:

 Missing arguments: max_number

And the help says:

 usage: nova-manage db migrate_flavor_data [-h]
   [--max-number number]

Which indicates that --max-number is optional, but whatever, so you
try:

 nova-manage db migrate_flavor_data --max-number 100

And that says:

 Missing arguments: max_number

So just for kicks you try:

 nova-manage db migrate_flavor_data --max_number 100

And that says:

 nova-manage: error: unrecognized arguments: --max_number

So finally you try:

 nova-manage db migrate_flavor_data 100

 And holy poorly implemented client, Batman, it works.


LOL. Well, the important thing is that the thing eventually worked. ;P

In all seriousness, though, yeah, the nova-manage CLI tool is entirely 
different from the main python-novaclient CLI tool. It's not been a 
priority whatsoever to clean it up, but I think it would be some pretty 
low-hanging fruit to make the CLI consistent with the design of, say, 
python-openstackclient...


Perhaps something we should develop a backlog spec for.
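
(For the curious, all four failure modes above can be reproduced in plain
argparse when a command advertises an optional flag while actually
consuming a positional argument; a generic illustration, not the actual
nova-manage source:)

    import argparse

    parser = argparse.ArgumentParser(
        prog='nova-manage db migrate_flavor_data')
    # Advertised as an optional flag in the usage string...
    parser.add_argument('--max-number', metavar='number', dest='flag_max')
    # ...while the code actually consumes a positional argument.
    parser.add_argument('max_number', nargs='?')
    args = parser.parse_args()

    if args.max_number is None:
        parser.error('Missing arguments: max_number')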

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Fuel] version.yaml in the context of packages

2015-07-27 Thread Vladimir Kozhukalov
Vitaly,

1) feature_groups - This is, in fact, a runtime parameter rather than a
build-time one, so we'd better store it in astute.yaml or another runtime
config file. This parameter must be available in nailgun - there is code in
nailgun and the UI which relies on it.

Sure it must, but since it is a runtime parameter, it should be defined in a
config file, which is to be part of an rpm package. Let's say it will be
/etc/sysconfig/fuel.

2) production - It is always equal to docker, which means we deploy
docker containers on the master node. Formally it comes from one of the
fuel-main variables, which is set to docker by default, but not a
single job in CI customizes this variable. It looks like it makes no sense
to have this.
This parameter can be set to other values when used for the fake UI and for
functional tests of the UI and fuelclient.

If so, then it is also a runtime parameter and it should be moved into a
config file. Again, /etc/sysconfig/fuel seems fine.

3) release - It is the number of the Fuel release. Frankly, we don't need
this, because it is nothing more than the version of the fuel meta
package [1]. It is shown in the UI.

It is the version of this package:
https://github.com/stackforge/fuel-main/blob/master/specs/fuel-main.spec
Again, let's put it into /etc/sysconfig/fuel.
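
(To make that concrete, such a file could look like the hypothetical sketch
below; names and format to be settled in the spec:)

    # /etc/sysconfig/fuel (hypothetical sketch)
    FEATURE_GROUPS="mirantis"
    PRODUCTION="docker"
    RELEASE="7.0"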

Matt,

 4) openstack_version - It is just an extraction from openstack.yaml [2].
Without installing nailgun, it's impossible to know what the repo
directories should be. Abstracting it away, buried in some other package,
makes puppet tasks laborious. Keeping it in a YAML file keeps it
accessible.

I think we can put openstack.yaml into a separate package. Other packages
(including nailgun) will require this package.

Andrew,

6) build_number and build_id - These are the only parameters that relate
to the build process. But let's think about whether we actually need these
parameters if we switch to a package-based approach. RPM/DEB repositories
are going to become the main way of delivering Fuel, not the ISO. So it
also makes little sense to store them, especially if we upgrade some of
the packages.
These are useful to track which ISO an issue occurred in, and whether my
ISO or another might have the issue. These are the attributes I use the
most in this data. Again, this is unrelated to packages, so it should only
be copied off the ISO for development reasons.

Yes, we can copy it from the ISO to /etc/fuel-build or something like this.

7) X_sha - This does not even require any explanation. It should be rpm
-qa instead.
We need this information. It can easily be replaced with the identifier
from the package, but it still needs to describe the source. We need to
know exactly which commit was the head. It has solved many otherwise
hard-to-find problems, which is what we added it for in the first place.

We certainly need to substitute it with rpm package versions. As far as I
know, we have plans to append the sha to the name of a package. Something
like fuel-7.0.0-1.mos6065-a38b34.noarch.rpm will be fine.

OK, I think no one is against deprecating this file and moving some
parameters into package-related files. I'll describe this in detail in a
spec.



Vladimir Kozhukalov

On Mon, Jul 27, 2015 at 1:57 PM, Matthew Mosesohn mmoses...@mirantis.com
wrote:

  2) production - It is always equal to docker which means we deploy
 docker containers on the master node. Formally it comes from one of
 fuel-main variables, which is set into docker by default, but not a
 single job in CI customizes this variable. Looks like it makes no sense to
 have this.
 This gets set to docker-build during Fuel ISO creation because several
 tasks cannot be done in the containers during the docker build phase. We
 can replace this by moving it to astute.yaml easily enough.
  4) openstack_version - It is just an extraction from openstack.yaml [2].
 Without installing nailgun, it's impossible to know what the repo
 directories should be. Abstracting it away, buried in some other package,
 makes puppet tasks laborious. Keeping it in a YAML file keeps it
 accessible.

 The rest won't impact Fuel Master deployment significantly.

 On Fri, Jul 24, 2015 at 8:21 PM, Vladimir Kozhukalov
 vkozhuka...@mirantis.com wrote:
  Dear colleagues,
 
  Although we are focused on fixing bugs during the next few weeks, I still
  have to ask everyone's opinion about /etc/fuel/version.yaml. We introduced
  this file when the all-inclusive ISO image was the only way of delivering
  Fuel. We had to keep somewhere the information about SHA commits for all
  Fuel-related git repos. But everything is changing, and we are close to a
  flexible package-based delivery approach. And this file is becoming kind
  of a fifth wheel.
 
  Here is what version.yaml looks like:
 
  VERSION:
feature_groups:
  - mirantis
production: docker
release: 7.0
openstack_version: 2015.1.0-7.0
api: 1.0
build_number: 82
build_id: 2015-07-23_10-59-34
nailgun_sha: d1087923e45b0e6d946ce48cb05a71733e1ac113
python-fuelclient_sha: 471948c26a8c45c091c5593e54e6727405136eca

Re: [openstack-dev] [TripleO] diskimage-builder 1.0.0

2015-07-27 Thread Chris Jones
Hey

 On 27 Jul 2015, at 16:22, Gregory Haynes g...@greghaynes.net wrote:
 I just cut the 1.0.0 release, so no going back now. Enjoy!

woot!

Cheers,
Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [designate] Associate dynamically project name as domain

2015-07-27 Thread Jaime Fernández
I would like to register DNS records with the following format:
name.interface.projectName.baseDomain
to avoid collision between IP addresses for the same host but on different
interfaces, and to reserve a domain per project. However, it's not an easy
task.

The notifications received by designate-sink report the tenant-id (but not
the project name) along with other valuable information needed to register a
virtual machine.

After reading nova (see
https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py)
and neutron handlers, these handlers register the IP addresses as managed
records, associating the resource_id (i.e. host instance_id). It simplifies
the process of removing the records when the host is removed.

I would like to register (via designate-api) a domain per project (or
tenant) using the project name, and to assign the tenant_id when
registering the domain. When a host is created, designate-sink receives a
notification with its tenant_id, and we could search the domain by
tenant_id in order to register the host record. However, I'm afraid that
these managed attributes are not available via the REST API (only via the
Python API).

It would be nice to have the possibility to register or access these
managed attributes via the REST API. Otherwise, I don't know how to proceed
with already-registered hosts. I don't think it's feasible to ask for these
virtual hosts to be reinstalled. I would prefer to register manually, via
designate-api, those hosts that were already registered, but with the
managed attribute resource_id, so that when designate-sink receives the
notification about VM destruction, it is capable of unregistering the host
entry by searching for its resource_id.

Do you have any suggestion about how to proceed to configure a subdomain
for each project?
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [keystone] token revocation woes

2015-07-27 Thread Dolph Mathews
Adam Young shared a patch to convert the tree back to a linear list:

  https://review.openstack.org/#/c/205266/

This shouldn't be merged without benchmarking as it's purely a
performance-oriented change.

On Thu, Jul 23, 2015 at 11:40 AM, Matt Fischer m...@mattfischer.com wrote:

 Morgan asked me to post some of my numbers here. From my staging
 environment:

 With 0 revocations:
 Requests per second:104.67 [#/sec] (mean)
 Time per request:   191.071 [ms] (mean)

 With 500 revocations:
 Requests per second:52.48 [#/sec] (mean)
 Time per request:   381.103 [ms] (mean)

 I have some more numbers up on my blog post about this but that's from a
 virtual test environment and focused on throughput.

 Thanks for the attention on this.

 On Thu, Jul 23, 2015 at 8:45 AM, Lance Bragstad lbrags...@gmail.com
 wrote:


 On Wed, Jul 22, 2015 at 10:06 PM, Adam Young ayo...@redhat.com wrote:

  On 07/22/2015 05:39 PM, Adam Young wrote:

 On 07/22/2015 03:41 PM, Morgan Fainberg wrote:

 This is an indicator that the bottleneck is not the db strictly
 speaking, but also related to the way we match. This means we need to spend
 some serious cycles on improving both the stored record(s) for revocation
 events and the matching algorithm.


 The simplest approach to revocation checking is to do a linear search
 through the events.  I think the old version of the code that did that is
 in a code review, and I will pull it out.

 If we remove the tree, then the matching will have to run through each
 of the records and see if there is a match;  the test will be linear with
 the number of records (slightly shorter if a token is actually revoked).


 This was the original, linear search version of the code.


 https://review.openstack.org/#/c/55908/50/keystone/contrib/revoke/model.py,cm
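
For illustration only, linear matching amounts to something like the sketch
below (simplified; the real model matches specific token attributes, not
arbitrary dicts):

    # Simplified sketch of linear revocation matching, not keystone's code.
    # Each event is assumed to be a dict of token attributes that must all match.
    def is_revoked(token, events):
        for event in events:
            if all(token.get(attr) == value for attr, value in event.items()):
                return True  # the token matches every attribute of this event
        return False  # cost is linear in the number of events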



 What initially landed for Revocation Events was the tree-structure,
 right? We didn't land a linear approach prior to that and then switch to
 the tree, did we?







 Sent via mobile

 On Jul 22, 2015, at 11:51, Matt Fischer m...@mattfischer.com wrote:

   Dolph,

  Per our IRC discussion, I was unable to see any performance
 improvement here although not calling DELETE so often will reduce the
 number of deadlocks when we're under heavy load especially given the
 globally replicated DB we use.



 On Tue, Jul 21, 2015 at 5:26 PM, Dolph Mathews dolph.math...@gmail.com
 wrote:

 Well, you might be in luck! Morgan Fainberg actually implemented an
 improvement that was apparently documented by Adam Young way back in
 March:

   https://bugs.launchpad.net/keystone/+bug/1287757

  There's a link to the stable/kilo backport in comment #2 - I'd be
 eager to hear how it performs for you!

 On Tue, Jul 21, 2015 at 5:58 PM, Matt Fischer m...@mattfischer.com
 wrote:

  Dolph,

  Excuse the delayed reply, was waiting for a brilliant solution from
 someone. Without one, personally I'd prefer the cronjob as it seems to be
 the type of thing cron was designed for. That will be a painful change as
 people now rely on this behavior, so I don't know if it's feasible. I will be
 setting up monitoring for the revocation count, alerting if it
 crosses probably 500 or so. If the problem gets worse then I think a custom
 no-op or sql driver is the next step.

  Thanks.
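
 To make the cron-based approach above concrete, a hypothetical standalone
 cleanup job might look like the sketch below. The table and column names
 follow the DELETE statement quoted later in this message; the connection
 URL, batch size and the MySQL-style DELETE ... LIMIT are assumptions,
 chosen to keep transactions, and therefore lock hold times, short.

    # Hypothetical cron-friendly cleanup job, pruning revocation events in
    # batches instead of on every token validation.
    import datetime
    import sqlalchemy as sa

    engine = sa.create_engine("mysql+pymysql://keystone:secret@dbhost/keystone")

    def prune_revocation_events(max_age_seconds=3600, batch_size=500):
        cutoff = (datetime.datetime.utcnow()
                  - datetime.timedelta(seconds=max_age_seconds))
        while True:
            # One short transaction per batch keeps lock hold times low.
            with engine.begin() as conn:
                result = conn.execute(
                    sa.text("DELETE FROM revocation_event "
                            "WHERE revoked_at < :cutoff LIMIT :batch"),
                    {"cutoff": cutoff, "batch": batch_size})
            if result.rowcount < batch_size:
                break

    if __name__ == "__main__":
        prune_revocation_events()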


 On Wed, Jul 15, 2015 at 4:00 PM, Dolph Mathews 
 dolph.math...@gmail.com wrote:



 On Wed, Jul 15, 2015 at 4:51 PM, Matt Fischer m...@mattfischer.com
 wrote:

 I'm having some issues with keystone revocation events. The bottom
 line is that due to the way keystone handles the clean-up of these
 events[1], having more than a few leads to:

   - bad performance, up to 2x slower token validation with about
 600 events based on my perf measurements.
  - database deadlocks, which cause API calls to fail, more likely
 with more events it seems

  I am seeing this behavior in code from trunk on June 11 using
 Fernet tokens, but the token backend does not seem to make a difference.

  Here's what happens to the db in terms of deadlock:
 2015-07-15 21:25:41.082 31800 TRACE keystone.common.wsgi DBDeadlock:
 (OperationalError) (1213, 'Deadlock found when trying to get lock; try
 restarting transaction') 'DELETE FROM revocation_event WHERE
 revocation_event.revoked_at < %s' (datetime.datetime(2015, 7, 15, 18, 55, 41, 55186),)

  When this starts happening, I just go truncate the table, but this
 is not ideal. If [1] is really true then the design is not great, it 
 sounds
 like keystone is doing a revocation event clean-up on every token
 validation call. Reading and deleting/locking from my db cluster is not
 something I want to do on every validate call.


  Unfortunately, that's *exactly* what keystone is doing. Adam and I
 had a conversation about this problem in Vancouver which directly 
 resulted
 in opening the bug referenced on the operator list:

   https://bugs.launchpad.net/keystone/+bug/1456797

  Neither of us remembered the actual implemented behavior, which is
 

Re: [openstack-dev] [fuel] FF Exception for LP-1464656 fix (update ceph PG calculation algorithm)

2015-07-27 Thread Eugene Bogdanov
Good, thanks everyone for your feedback. As suggested, let's merge the 
pg_num calculation as a bugfix (no exception needed). With regards to the UI 
part, I do agree that it's just a nice-to-have feature, and I don't see the 
review with the GUI part amongst those nominated for exception. So let's 
put it into the next release.


--
EugeneB


Vitaly Kramskikh mailto:vkramsk...@mirantis.com
27 July 2015, 13:57
+1 to Stanislav's proposal.




--
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Stanislav Makar mailto:sma...@mirantis.com
27 July 2015, 3:05

Hello
I went through LP-1464656 and I see that it covers two things:
1. Bad pg num calculation algorithm.
2. Add the possibility to set pg num via GUI.

The first is the most important and a BUG by itself; the second is a 
nice-to-have feature and no more.

Hence we should split it into a bug and a feature.

As the main part is a bug, it does not impact FF at all.

My +1 to close the bad pg_num calculation algorithm as a bug and postpone 
specifying pg_num via GUI to the next release.
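
(For context, the rule of thumb commonly cited for Ceph PG counts looks 
roughly like the sketch below; this is background only, and not necessarily 
the algorithm the fix implements.)

    # Rule-of-thumb PG count often cited in Ceph guidance; background only.
    def pg_num(osd_count, pool_size, target_pgs_per_osd=100):
        """Round (osds * target) / replicas up to the next power of two."""
        raw = osd_count * target_pgs_per_osd / float(pool_size)
        power = 1
        while power < raw:
            power *= 2
        return power

    print(pg_num(40, 3))  # 40 OSDs, 3 replicas -> raw ~1333 -> 2048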


All the best
Stanislav Makar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Andrew Woodward mailto:xar...@gmail.com
25 July 2015, 1:58
+1 for FFE
Given how broken pg_num calculations are now, this is essential to the 
ceph story and there is no point in testing ceph at scale without it.


The only work-around for not having this is to delete all of the pools 
by hand after deployment, calculate the values by hand, and 
re-create the pools by hand. The story from that alone makes it high 
on the UX scale, which means we might as well fix it as a bug.


The scope of impact is limited to only ceph, the testing plan needs 
more detail, and we are still coming to terms with some of the data 
format to process between nailgun calculating and puppet consuming.


We would need about 1.2 weeks to get these landed.

--

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Konstantin Danilov mailto:kdani...@mirantis.com
24 July 2015, 13:46
Team,

I would like to request an exception from the Feature Freeze for the fix
for [1]. It requires changes in fuel-web [2], fuel-library [3] and in the UI.
[2] and [3] are already tested; I'm fixing the unit tests now.
BP - [4]

The code has a backward-compatibility mode. I need one more week to finish 
it. I'm also asking for someone to be an assigned code reviewer for this 
ticket to speed up the review process.

Thanks

[1] https://bugs.launchpad.net/fuel/+bug/1464656
[2] https://review.openstack.org/#/c/204814
[3] https://review.openstack.org/#/c/204811
[4] https://review.openstack.org/#/c/203062




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] thinking additional tags

2015-07-27 Thread Thierry Carrez
Thierry Carrez wrote:
 The next tags workgroup will be having a meeting this week on Friday
 at 14:00 UTC in #openstack-meeting. Join us if you're interested !
 
 In the mean time, we are braindumping at:
 https://etherpad.openstack.org/p/next-tags-wg

The work group met 10 days ago and decided to tackle tags in the
following categories:

* Integration tags

Those describe whether the project is integrated with something else.
Sean Dague proposed to kick off this category with devstack support
tags, and proposed them at https://review.openstack.org/#/c/203785/

* Team tags

Team tags communicate whether the people behind a given project form a
sustainable team. Russell Bryant agreed to draft tags about team size
and team monoculture to further improve on our communication there.

* Contract tags

Contract tags are promises that project teams make about their
deliverables. For example, I'll draft three tags describing feature/API
deprecation policies that projects teams may opt to follow. These
communicate clearly what to expect from a given project. In this same
category, Zane Bitter will draft a tag about forward-compatible
configuration files.

* QA tags

QA tags communicate what a given project actually tests. Does it do full
stack testing, upgrade testing, partial upgrade testing?

* Horizontal team support tags

These communicate which projects are directly supported by horizontal
teams. We already have the release:managed tag and the
vulnerability:managed tags to describe which projects are directly
supported by the release management or the vulnerability management
teams. I'll draft a tag to describe which projects have stable branches
that follow the stable branch maintenance team policy.

Sorry for the late report,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [Neutron] [Large Deployments Team] Discussion around routed networks

2015-07-27 Thread Mike Dorman
I wanted to bring this to the attention of anybody who may have missed it 
on openstack-dev.  Particularly the LDT team folks who have been talking 
about the routed networks/disparate L2 domains stuff [1] [2].

http://lists.openstack.org/pipermail/openstack-dev/2015-July/thread.html#70028

This is a discussion stemming from Carl’s segmented, routed networks spec 
[3].  I think the “ask” from operators has been somewhat well represented, 
but if others could review and chime in as appropriate, I think that could 
be useful.

Also somewhat related is this patch [4] for better scheduling of DHCP agents 
on the appropriate L2 segment. It might be worth a +1 if it would be useful 
to you as an operator.

[1] https://bugs.launchpad.net/neutron/+bug/1458890
[2] https://etherpad.openstack.org/p/Network_Segmentation_Usecases
[3] https://review.openstack.org/#/c/196812/
[4] https://review.openstack.org/#/c/205631/

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Fuel][python-fuelclient] Implementing new commands

2015-07-27 Thread Sergii Golovatiuk
Hi,

Every piece of functionality should be applied to both clients. Core developers
should set -1 if it's not applied to the second version of the client. Though I
believe we should completely get rid of the first version of the CLI in Fuel 8.0.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Jul 24, 2015 at 11:41 AM, Oleg Gelbukh ogelb...@mirantis.com
wrote:

 FWIW, I'm for option B, combined with clear timeline for porting features
 of fuel-variant to fuel2. For example, we are developing client-side
 functions for fuel-octane (cluster upgrade) extensions only for fuel2, and
 don't plan to implement it for 'fuel'.

 The main reason why we can't just drop 'fuel', or rather switch it to
 fuel2 syntax, IMO, is the possibility that someone somewhere uses its
 current syntax for automation. However, if the function is completely new,
 the automation of this function should be implemented with the new version
 of syntax.

 --
 Best regards,
 Oleg Gelbukh

 On Fri, Jul 24, 2015 at 12:09 PM, Fedor Zhadaev fzhad...@mirantis.com
 wrote:

 Hi all,

  I think that in the current situation the best solution is to add new
  features to both versions of the client. It may slow down the development
  of each feature a little, but we won't have to return to them in the
  future.

 2015-07-24 11:58 GMT+03:00 Igor Kalnitsky ikalnit...@mirantis.com:

 Hello,

 My 2 cents on it.

  Following plan C requires a huge effort from developers, and it may be
  unacceptable when FF is close and there's a lot of work to do. So it
  looks like plan B is the most convenient for us, and eventually we will
  have all features in fuel2.

  Alternatively, we can go with C, but only if implementing support in
  either fuel or fuel2 can be postponed to SCF.

 Thanks,
 Igor

 On Fri, Jul 24, 2015 at 10:58 AM, Evgeniy L e...@mirantis.com wrote:
   Hi Sebastian, thanks for the clarification. In this case I think we
   should follow plan C; new features should not slow us down
   in the migration from the old to the new version of the client.
 
  On Thu, Jul 23, 2015 at 8:52 PM, Sebastian Kalinowski
  skalinow...@mirantis.com wrote:
 
  2015-07-23 18:28 GMT+02:00 Stanislaw Bogatkin sbogat...@mirantis.com
 :
 
  Hi,
 
   can we just add all the needed functionality from the old fuel client
   that fuel2 needs, then say that the old fuel client is deprecated now and
   will not support some new features, then add new features to fuel2 only?
   It seems like the best way to me, because with this approach:
   1. Users will be able to use only one version of the client (the new
   one) w/o switching between 2 clients with different syntax
   2. We won't have to add new features to two clients.
 
 
   Stas, of course moving it all to the new fuel2 would be the best way to
  do it, but this refactoring took place in the previous release. No one has
  ported a single command (except new ones) since then, and there is no plan
  for doing so, since other activities have higher priority. And features are
  still coming, so it would be nice to have a policy for the time when all
  commands will move to the new fuel2.
 
 
 
  On Thu, Jul 23, 2015 at 9:19 AM, Evgeniy L e...@mirantis.com wrote:
 
  Hi,
 
   The best option is to add new functionality to fuel2 only, but I
   don't think that we can do that if there is not enough functionality
   in fuel2; we should not ask users to switch between fuel and fuel2
   to get some specific functionality.
   Do we have a list of commands which are not covered in fuel2?
   I'm just wondering how much time it will take to implement all
   required commands in fuel2.
 
 
   So to compare: this is the help message for the old fuel [1] and the new
  fuel2 [2]. There are only node, env and task actions covered, and even
  they are not covered 100%.
 
  [1] http://paste.openstack.org/show/404439/
  [2] http://paste.openstack.org/show/404440/
 
 
 
 
  Thanks,
 
  On Thu, Jul 23, 2015 at 1:51 PM, Sebastian Kalinowski
  skalinow...@mirantis.com wrote:
 
  Hi folks,
 
   For some time in python-fuelclient we have had two CLI apps: `fuel`
 and
  `fuel2`. It was done as an implementation of blueprint [1].
  Right now there is a situation where some new features are added
 just
  to old `fuel`, some to just `fuel2`, some to both. We cannot
 simply switch
  completely to new `fuel2` as it doesn't cover all old commands.
  As far as I remember there was no agreement how we should proceed
 with
  adding new things to python-fuelclient, so to keep all development
 for new
  commands I would like us to choose what will be our approach.
 There are 3
  ways to do it (with some pros and cons):
 
  A) Add new features only to old `fuel`.
  Pros:
   - Implement feature in one place
   - Almost all features are covered there
  Cons:
   - Someone will need to port this features to new `fuel2`
   - Issues that forced us to reimplement whole `fuel` as `fuel2`
 
  B) Add new features only to new `fuel2`
  Pros:
   - Implement feature in one place
   - No need to cope with issues in old `fuel` (like worse UX, etc.)
  Cons:
   

Re: [openstack-dev] Announcing HyperStack project

2015-07-27 Thread Jay Lau
Adrian,

Can we put Hyper on the agenda for this week's (tomorrow's) meeting? I want to
have some discussion with you.

Thanks

2015-07-27 0:43 GMT-04:00 Adrian Otto adrian.o...@rackspace.com:

  Peng,

  For the record, the Magnum team is not yet comfortable with this
 proposal. This arrangement is not the way we think containers should be
 integrated with OpenStack. It completely bypasses Nova, and offers no Bay
 abstraction, so there is no user selectable choice of a COE (Container
 Orchestration Engine). We advised that it would be smarter to build a nova
 virt driver for Hyper, and integrate that with Magnum so that it could work
 with all the different bay types. It also produces a situation where
 operators can not effectively bill for the services that are in use by the
 consumers, there is no sensible infrastructure layer capacity management
 (scheduler), no encryption management solution for the communication
 between k8s minions/nodes and the k8s master, and a number of other
 weaknesses. I’m not convinced the single-tenant approach here makes sense.

  To be fair, the concept is interesting, and we are discussing how it
 could be integrated with Magnum. It’s appropriate for experimentation, but
 I would not characterize it as a “solution for cloud providers” for the
 above reasons, and the callouts I mentioned here:

  http://lists.openstack.org/pipermail/openstack-dev/2015-July/069940.html

  Positioning it that way is simply premature. I strongly suggest that you
 attend the Magnum team meetings, and work through these concerns as we had
 Hyper on the agenda last Tuesday, but you did not show up to discuss it.
 The ML thread was confused by duplicate responses, which makes it rather
 hard to follow.

  I think it's a really bad idea to basically re-implement Nova in Hyper.
 You're already re-implementing Docker in Hyper. With a scope that's too
 wide, you won't be able to keep up with the rapid changes in these
 projects, and anyone using them will be unable to use new features that
 they would expect from Docker and Nova while you are busy copying all of
 that functionality each time new features are added. I think there's a
 better approach available that does not require you to duplicate such a
 wide range of functionality. I suggest we work together on this, and select
 an approach that sets you up for success, and gives OpenStack cloud
 operators what they need to build services on Hyper.

  Regards,

  Adrian

  On Jul 26, 2015, at 7:40 PM, Peng Zhao p...@hyper.sh wrote:

   Hi all,
  I am glad to introduce the HyperStack project to you.
  HyperStack is a native, multi-tenant CaaS solution built on top of
 OpenStack. In terms of architecture, HyperStack = Bare-metal + Hyper +
 Kubernetes + Cinder + Neutron.
  HyperStack is different from Magnum in that HyperStack doesn't employ the
 Bay concept. Instead, HyperStack pools all bare-metal servers into one
 single cluster. Due to the hypervisor nature of Hyper, different tenants'
 applications are completely isolated (no shared kernel) and thus co-exist
 without security concerns in the same cluster.
  Given this, HyperStack is a solution for public cloud providers who want
 to offer the secure, multi-tenant CaaS.
  Ref:
 https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/1258x535/1c85a755dcb5e4a4147d37e6aa22fd40/upload_7_23_2015_at_11_00_41_AM.png
  The next step is to present a working beta of HyperStack at Tokyo summit,
 which we submitted a presentation:
 https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/Presentation/4030.
 Please vote if you are interested.
  In the future, we want to integrate HyperStack with Magnum and Nova to
 make sure one OpenStack deployment can offer both IaaS and native CaaS
 services.
  Best,
 Peng
  -- Background
 ---
  Hyper is a hypervisor-agnostic Docker runtime. It allows running Docker
 images with any hypervisor (KVM, Xen, VBox, ESX). Hyper is different from
 minimalist Linux distros like CoreOS in that Hyper runs on the physical
 box and loads the Docker images from the metal into the VM instance, in
 which no guest OS is present. Instead, Hyper boots a minimalist kernel in
 the VM to host the Docker images (Pod).
  With this approach, Hyper is able to bring some encouraging results,
 which are similar to container:
 - 300ms to boot a new HyperVM instance with a pod of Docker images
 - 20MB for min mem footprint of a HyperVM instance
 - Immutable HyperVM, only kernel+images, serves as atomic unit (Pod) for
 scheduling
 - Immune from the shared kernel problem in LXC, isolated by VM
 - Work seamlessly with OpenStack components, Neutron, Cinder, due to the
 hypervisor nature
 - BYOK, bring-your-own-kernel is somewhat mandatory for a public cloud
 platform


 __
 OpenStack Development Mailing List (not for usage questions)
 

Re: [openstack-dev] [stable][neutron] dvr job for kilo?

2015-07-27 Thread Ryan Moats
+1

Kyle Mestery mest...@mestery.com wrote on 07/27/2015 08:16:07 AM [with a
bit of cleanup]:

  On Mon, Jul 27, 2015 at 6:57 AM, Thierry Carrez thie...@openstack.org
wrote:
  Ihar Hrachyshka wrote:
   I noticed that dvr job is now voting for all stable branches, and
   failing, because the branch misses some important fixes from master.
  
   Initially, I tried to just disable votes for stable branches for the
   job: https://review.openstack.org/#/c/205497/ Due to limitations of
   project-config, we would need to rework the patch though to split the
   job into stable non-voting and liberty+ voting one, and disable the
   votes just for the first one.
  
   My gut feeling is that since the job never actually worked for kilo,
   we should just kill it for all stable branches. It does not provide
   any meaningful actionable feedback anyway.
  
   Thoughts?
 
  +1 to kill it.

 Agreed, lets get rid of it for stable branches.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-07-27 Thread Sean Dague
On 07/27/2015 10:05 AM, Daniel P. Berrange wrote:
 On Fri, Jul 24, 2015 at 09:48:15AM +0100, Daniel P. Berrange wrote:
 On Thu, Jul 23, 2015 at 05:55:36PM +0300, mhorban wrote:
 Hi all,

 During the development process in nova I faced an issue related to config
 options. Now we have lists of config options, and option registration, mixed
 with source code in regular files.
 On one hand it can be convenient: to have module-encapsulated config
 options. But problems appear when we need to use some config option in
 different modules/packages.

 If some option is registered in module X, and module X imports module Y for
 some reason...
 and one day we need to import this option in module Y, we will get a
 NoSuchOptError exception on import_opt in module Y,
 because of the circular dependency.
 To resolve it we can move the registration of this option into module Y (in
 an inappropriate place) or use other tricks.

 I propose creating a file options.py in each package and moving all of the
 package's config options and registration code there.
 Such an approach allows us to import any option anywhere in nova without
 problems.

 This refactoring can be implemented piece by piece, where a piece is
 one package.

 What is your opinion about this idea?

 I tend to think that focusing on problems with dependency ordering when
 modules import each other's config options is merely attacking a symptom
 of the real root cause problem.

 The way we use config options is really entirely wrong. We have gone
 to the trouble of creating (or trying to create) structured code with
 isolated functional areas, files and object classes, and then we throw
 in these config options which are essentially global variables which are
 allowed to be accessed by any code anywhere. This destroys the isolation
 of the various classes we've created, and means their behaviour is often
 based on side effects of config options from unrelated pieces of code.
 It is total madness in terms of good design practices to have such use
 of global variables.

 So IMHO, if we want to fix the real big problem with config options, we
 need to be looking to a solution where we stop using config options as
 global variables. We should change our various classes so that the
 necessary configuration options are passed into object constructors
 and/or methods as parameters.

 As an example in the libvirt driver.

 I would set it up so that /only/ the LibvirtDriver class in driver.py
 was allowed to access the CONF config options. In its constructor it
 would load all the various config options it needs, and either set
 class attributes for them, or pass them into other methods it calls.
 So in the driver.py, instead of calling CONF.libvirt.libvirt_migration_uri
 everywhere in the code,  in the constructor we'd save that config param
 value to an attribute 'self.mig_uri = CONF.libvirt.libvirt_migration_uri'
 and then where needed, we'd just call self.mig_uri.

 Now in the various other libvirt files, imagebackend.py, volume.py
 vif.py, etc. None of those files would /ever/ access CONF.*. Any time
 they needed a config parameter, it would be passed into their constructor
 or method, by the LibvirtDriver or whatever invoked them.

 Getting rid of the global CONF object usage in all these files trivially
 solves the circular dependency import problem, as well as improving
 the overall structure and isolation of our code, freeing all these methods
 from unexpected side effects of global variables.
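
 In sketch form, the pattern described above looks like this (class and
 option names are illustrative, not Nova's actual code):

    # Only the top-level driver touches the config object; collaborators
    # receive plain values.
    class LibvirtDriver(object):
        def __init__(self, conf):
            # Read config once, at the edge:
            self.mig_uri = conf.libvirt.libvirt_migration_uri
            # Pass plain values down, not the config object:
            self.vif_driver = VifDriver(mig_uri=self.mig_uri)

    class VifDriver(object):
        def __init__(self, mig_uri):
            # No CONF access here: behaviour is fully determined by its
            # arguments, which also sidesteps the circular-import problem
            # described at the start of the thread.
            self.mig_uri = mig_uri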

How does that address config reload on SIGHUP? It seems like that
approach would break that feature.

 Another significant downside of using CONF objects as global variables
 is that it is largely impossible to say which nova.conf setting is
 used by which service. Figuring out whether a setting affects nova-compute
 or nova-api or nova-conductor, or ... largely comes down to guesswork or
 reliance on tribal knowledge. It would make life significantly easier for
 both developers and administrators if we could clear this up and in fact
 have separate configuration files for each service, holding only the
 options that are relevant for that service.  Such a cleanup is not going
 to be practical though as long as we're using global variables for config
 as it requires control-flow analysis find out what affects what :-(

Part of the idea that came up in the room is to annotate variables with
the service they are used in, and deny access to them in services they are
not for.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] diskimage-builder 1.0.0

2015-07-27 Thread Gregory Haynes
Excerpts from Gregory Haynes's message of 2015-06-29 12:44:18 +:
 Hello all,
 
 DIB has come a long way and we seem to have a fairly stable interface
 for the elements and the image creation scripts. As such, I think it's
 about time we commit to a major version release. Hopefully this can give
 our users the (correct) impression that DIB is ready for use by folks
 who want some level of interface stability.
 
 AFAICT our bug list does not have any major issues that might require us
 to break our interface, so I dont see any harm in 'just going for it'.
 If anyone has input on fixes/features we should consider including
 before a 1.0.0 release please speak up now. If there are no objections
 by next week I'd like to try and cut a release then. :)
 
 Cheers,
 Greg

I just cut the 1.0.0 release, so no going back now. Enjoy!

-Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [Fuel] changing assigned VLANs after install

2015-07-27 Thread Alexander Liemieshko
Hello, the guide below details how you can change assigned VLANs after
installation:


Changing VLANs (post-deployment environment) on Fuel 6.1 (Ubuntu)

Note: Before performing any operations, you should schedule a maintenance
window and perform backups of all databases.

Preparation: in the worst case, the new VLANs you are going to run
OpenStack networks through may already be reserved as tenant VLANs in
Neutron. Please re-configure Neutron and re-create the affected tenant
networks first in order to free up the needed VLANs for host machines.

1. Modify the nailgun DB (on the master node):

(first of all, create a backup of the nailgun DB)

dockerctl shell postgres

su - postgres

pg_dump nailgun > nailgun_back.sql


vlan_range for Neutron L2 Configuration:

Note: For example, we are changing: vlan_range='[1700, 1730]',

vlan_start='104' for Storage, vlan_start='103' for Management

dockerctl shell postgres

su - postgres

psql -d nailgun

nailgun=# select * from neutron_config ;

id | vlan_range | gre_id_range | base_mac | internal_cidr |
internal_gateway | ...

...

3 | [1000, 1030] | [2, 65535] | fa:16:3e:00:00:00 | 192.168.111.0/24 |
192.168.111.1 | vlan | ovs

(1 row)

nailgun=# update neutron_config set vlan_range='[1700, 1730]' where id=3;

UPDATE 1

nailgun=# select * from neutron_config;

id | vlan_range | gre_id_range | base_mac | internal_cidr |
internal_gateway | ...

...

3 | [1700, 1730] | [2, 65535] | fa:16:3e:00:00:00 | 192.168.111.0/24 |
192.168.111.1 | vlan | ovs

(1 row)

Vlan for Management:

nailgun=# select * from network_groups;

id | name | release | vlan_start | cidr ...

…

11 | management | 2 | 101 | 192.168.0.0/24 |

…

nailgun=# update network_groups set vlan_start='103' where id=11;

UPDATE 1

nailgun=# select * from network_groups;

id | name | release | vlan_start | cidr ...

…

11 | management | 2 | 103 | 192.168.0.0/24 |

…

Vlan for Storage:

nailgun=# select * from network_groups;

id | name | release | vlan_start | cidr ...

…

12 | storage | 2 | 102 | 192.168.1.0/24 |

…

nailgun=# update network_groups set vlan_start='104' where id=12;

UPDATE 1

nailgun=# select * from network_groups;

id | name | release | vlan_start | cidr ...

…

12 | storage | 2 | 104 | 192.168.1.0/24 |

…

You can see all the changes in the Fuel UI → Networks tab (before/after
screenshots were attached to the original message).


2. Modify 'ml2_conf.ini' (on all nodes in environment):

(Note: run on master node)

# for i in $(fuel nodes --env env_ID | awk '/ready.*True/{print $1}'); do
ssh node-$i 'cd /etc/neutron/plugins/ml2 && sed -i
"s/^network_vlan_ranges.*/network_vlan_ranges=physnet2:1700:1730/"
ml2_conf.ini';done

Restart neutron-server on all controllers:

# for i in $(fuel nodes --env env_ID | awk
'/ready.*controller.*True/{print $1}'); do ssh node-$i initctl restart
neutron-server;done

3. Modify astute.yaml (on all nodes in environment):

(Note: run on master node)

Change 'vlan_range' and 'vlans':

# for i in $(fuel nodes --env env_ID | awk '/ready.*True/{print $1}'); do
ssh node-$i 'cd /etc/ && sed -i "s/vlan_range.*/vlan_range: 1700:1730/"
astute.yaml';done

# for i in $(fuel nodes --env env_ID | awk
'/ready.*controller.*True/{print $1}'); do ssh node-$i 'cd /etc/ && sed -i
"s/vlan_range.*/vlan_range: 1700:1730/" primary-controller.yaml';done

# for i in $(fuel nodes --env env_ID | awk '/ready.*True/{print $1}'); do
ssh node-$i 'cd /etc/ && sed -i "s/vlans: 1000:1030/vlans: 1700:1730/"
astute.yaml';done

# for i in $(fuel nodes --env env_ID | awk
'/ready.*controller.*True/{print $1}'); do ssh node-$i 'cd /etc/ && sed -i
"s/vlans: 1000:1030/vlans: 1700:1730/" primary-controller.yaml';done

Change 'vlans' for 'br-storage' and 'br-mgmt':

# for i in $(fuel nodes --env env_ID | awk '/ready.*True/{print $1}'); do
ssh node-$i 'cd /etc/ && sed -i "s/vlans: 102/vlans: 104/" astute.yaml';done

# for i in $(fuel nodes --env env_ID | awk
'/ready.*controller.*True/{print $1}'); do ssh node-$i 'cd /etc/ && sed -i
"s/vlans: 102/vlans: 104/" primary-controller.yaml';done

# for i in $(fuel nodes --env env_ID | awk '/ready.*True/{print $1}'); do
ssh node-$i 'cd /etc/ && sed -i "s/vlans: 101/vlans: 103/" astute.yaml';done

# for i in $(fuel nodes --env env_ID | awk
'/ready.*controller.*True/{print $1}'); do ssh node-$i 'cd /etc/ && sed -i
"s/vlans: 101/vlans: 103/" primary-controller.yaml';done

# for i in $(fuel nodes --env env_ID | awk '/ready.*True/{print $1}'); do
ssh node-$i 'cd /etc/ && sed -i "s/eth0.101/eth0.103/" astute.yaml';done

# for i in $(fuel nodes --env env_ID | awk
'/ready.*controller.*True/{print $1}'); do ssh node-$i 'cd /etc/ && sed -i
"s/eth0.101/eth0.103/" primary-controller.yaml';done

# for i in $(fuel nodes --env env_ID | awk '/ready.*True/{print $1}'); do
ssh node-$i 'cd /etc/ && sed -i "s/eth0.102/eth0.104/" astute.yaml';done

# for i in $(fuel nodes --env env_ID | awk
'/ready.*controller.*True/{print $1}'); do ssh node-$i 'cd /etc/ && sed -i
"s/eth0.102/eth0.104/" primary-controller.yaml';done

Check:

# for i in $(fuel nodes --env env_ID | awk '/ready.*True/{print $1}'); 

Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-27 Thread Mike Dorman

On 7/23/15, 9:42 AM, Carl Baldwin c...@ecbaldwin.net wrote:

On Wed, Jul 22, 2015 at 3:21 PM, Kevin Benton blak...@gmail.com wrote:
 The issue with the availability zone solution is that we now force
 availability zones in Nova to be constrained to network configuration. 
In
 the L3 ToR/no overlay configuration, this means every rack is its own
 availability zone. This is pretty annoying for users to deal with 
because
 they have to choose from potentially hundreds of availability zones and 
it
 rules out making AZs based on other things (e.g.  current phase, cooling
 systems, etc).

 I may be misunderstanding and you could be suggesting to not expose this
 availability zone to the end user and only make it available to the
 scheduler. However, this defeats one of the purposes of availability 
zones
 which is to let users select different AZs to spread their instances 
across
 failure domains.

I was actually talking with some guys at dinner during the Nova
mid-cycle last night (Andrew ??, Robert Collins, Dan Smith, probably
others I can't remember) about the relationship of these network
segments to AZs and cells.  I think we were all in agreement that we
can't confine segments to AZs or cells nor the other way around.


I just want to +1 this one from the operators’ perspective.  Network 
segments, availability zones, and cells are all separate constructs which 
are used for different purposes.  We prefer to not have any relationships 
forced between them.

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-27 Thread Fox, Kevin M
A lot of heat templates pre-create the ports, though; it's sometimes easier to 
build the template that way.

It may not matter too much. Just pointing out it's more common than you might think.

Thanks,
Kevin

From: Mike Dorman [mdor...@godaddy.com]
Sent: Monday, July 27, 2015 7:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][L3] Representing a networks connected by 
routers

On 7/23/15, 8:54 AM, Carl Baldwin c...@ecbaldwin.net wrote:

On Thu, Jul 23, 2015 at 8:51 AM, Kevin Benton blak...@gmail.com wrote:
Or, migration scheduling would need to respect the constraint that a
 port may be confined to a set of hosts.  How can we assign a port to a
 different network?  The VM would wake up and what?  How would it know
 to reconfigure its network stack?

 Right, that's a big mess. Once a network is picked for a port I think we
 just need to rely on a scheduler filter that limits the migration to
where
 that network is available.

+1.  That's where I was going.

Agreed, this seems reasonable to me for the migration scheduling case.

I view the pre-created port scenario as an edge case.  By explicitly
pre-creating a port and using it for a new instance (rather than letting
nova create a port for you), you are implicitly stating that you have more
knowledge about the networking setup.  In so doing, you’re removing the
guard rails (of nova scheduling the instance to a good network for the
host it's on), and therefore are at higher risk to crash and burn.  To me
that’s an acceptable trade-off.

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Nova migrate-flavor-data woes

2015-07-27 Thread Mike Dorman
I had this frustration, too, when doing this the first time.

FYI (and for the Googlers who stumble across this in the future), this 
patch [1] fixes the --max_number thing.

[1] https://review.openstack.org/#/c/175890/






On 7/27/15, 8:45 AM, Jay Pipes jaypi...@gmail.com wrote:

On 07/26/2015 01:15 PM, Lars Kellogg-Stedman wrote:
 So, the Kilo release notes say:

  nova-manage migrate-flavor-data

 But nova-manage says:

  nova-manage db migrate_flavor_data

 But that says:

  Missing arguments: max_number

 And the help says:

  usage: nova-manage db migrate_flavor_data [-h]
[--max-number number]

 Which indicates that --max-number is optional, but whatever, so you
 try:

  nova-manage db migrate_flavor_data --max-number 100

 And that says:

  Missing arguments: max_number

 So just for kicks you try:

  nova-manage db migrate_flavor_data --max_number 100

 And that says:

  nova-manage: error: unrecognized arguments: --max_number

 So finally you try:

  nova-manage db migrate_flavor_data 100

 And holy poorly implemented client, Batman, it works.

LOL. Well, the important thing is that the thing eventually worked. ;P

In all seriousness, though, yeah, the nova-manage CLI tool is entirely 
different from the main python-novaclient CLI tool. It's not been a 
priority whatsoever to clean it up, but I think it would be some pretty 
low-hanging fruit to make the CLI consistent with the design of, say, 
python-openstackclient...

Perhaps something we should develop a backlog spec for.

Best,
-jay

___
Mailing list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [mistral] Team meeting minutes

2015-07-27 Thread Nikolay Makhotkin
Thanks for joining the meeting today!

Meeting minutes:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-07-27-16.01.html
Meeting log:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-07-27-16.01.log.html

The next meeting will be on Aug 3. You can post your agenda items at
https://wiki.openstack.org/wiki/Meetings/MistralAgenda

-- 
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [designate] Associate dynamically project name as domain

2015-07-27 Thread Hayes, Graham
Hi Jaime,

What you want to do should be possible, but it will require some custom
code and some investigation.

See inline for a few suggestions.

On 27/07/15 16:32, Jaime Fernández wrote:
 I would like to register DNS records with the following format:
 name.interface.projectName.baseDomain
 to avoid collision between IP addresses for the same host but on
 different interfaces, and to reserve a domain per project. However, it's
 not an easy task.

In keystone you can set up notifications (like nova), so when a project
(or tenant) is created / deleted you can get a similar event.

(http://docs.openstack.org/developer/keystone/event_notifications.html)

It does not look like it gives back the project name though - you
might need to call the keystone API to get it.

You could use this to trigger a Designate domain create / delete, using
the X-Auth-Sudo-Project-ID Header to impersonate the project (which
would make the new project the owner of the domain)
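
For example, a hypothetical handler for a keystone project-created event
could create the per-project zone like this (the endpoint and payload follow
the Designate v2 API; every name in the sketch is illustrative):

    # Hypothetical sink/handler snippet: create a per-project zone when a
    # keystone project-created notification arrives.
    import requests

    def create_project_zone(project_id, project_name, token,
                            designate_url="http://designate.example.com:9001",
                            base_domain="example.com."):
        zone = {"name": "%s.%s" % (project_name, base_domain),
                "email": "dnsadmin@example.com"}
        resp = requests.post(
            "%s/v2/zones" % designate_url,
            json=zone,
            headers={"X-Auth-Token": token,
                     # Make the new project the owner of the zone:
                     "X-Auth-Sudo-Project-ID": project_id})
        resp.raise_for_status()
        return resp.json()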

There may be issues with users creating a domain that is a subdomain of
another designate managed domain (e.g. baseDomain).

If you do not need to have Designate manage this domain you could set
baseDomain to be a tld (blocking all users from creating this domain
in designate).

With the v2 client bindings there is also zone transfer requests
which allows domains to be moved between tenants / projects.

If you need the baseDomain to be managed as part of Designate you
could do the following:

Create Domain (in Admin Project)
   |
   V
Create a Zone Transfer Request (in Admin Project)
   |
   V
Accept the Zone Transfer Request (in newly created project,
using the X-Auth-Sudo-Project-ID header)

It is a bit long winded, but should work.

 The notifications received by designate-sink report the tenant-id (but
 not the project name) along with other valuable information needed to
 register a virtual machine.
 
 After reading nova (see
 https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py)
 and neutron handlers, these handlers register the IP addresses as
 managed records, associating the resource_id (i.e. host instance_id).
 It simplifies the process of removing the records when the host is removed.
 
 I would like to register (via designate-api) a domain per project (or
 tenant) using the project name, and to assign the tenant_id when
 registering the domain. When a host is created, designate-sink receives
 a notification with its tenant_id, and we could search the domain by
 tenant_id in order to register the host record. However, I'm afraid that
 these managed attributes are not available via REST API (only by
 Python API).

You can edit managed records with the edit_managed_records URL
parameter, or the X-Designate-Edit-Managed-Records HTTP Header

(http://docs.openstack.org/developer/designate/rest.html#http-headers)

The newer versions of the client support this as a flag as well.

Unfortunately this will not allow you to set the managed_* fields, just
edit the record data.

 It would be nice to have the possibility to register or access these
 managed attributes via the REST API. Otherwise, I don't know how to proceed
 with already-registered hosts. I don't think it's feasible to ask for these
 virtual hosts to be reinstalled. I would prefer to register manually,
 via designate-api, those hosts that were already registered, but with the
 managed attribute resource_id, so that when designate-sink receives
 the notification about VM destruction, it is capable of unregistering the
 host entry by searching for its resource_id.

As a one-off starter, you could write a script that uses the internal
RPC API to create these, but that could prove problematic to maintain
and could end up being a significant amount of work.

 Do you have any suggestion about how to proceed to configure a subdomain
 for each project?

I hope this helps!

 - Graham


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
Although using a node's *local* filesystem requires external configuration
management to manage the distribution of rotated keys, it's always
available, easy to secure, and can be updated atomically per node. Note
that Fernet's rotation strategy uses a staged key that can be distributed
to all nodes in advance of it being used to create new tokens.
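
Roughly, the rotation scheme looks like this (a simplified sketch, not
keystone's actual implementation; key 0 is the staged key and the highest
index is the primary):

    # Simplified sketch of staged-key rotation. keys is a dict {index: key}:
    # index 0 is the staged key, the highest index is the primary (encrypts
    # new tokens), the rest are secondaries (decrypt only).
    import os, base64

    def new_key():
        return base64.urlsafe_b64encode(os.urandom(32))

    def rotate(keys):
        """Promote the staged key to primary and stage a fresh key."""
        keys[max(keys) + 1] = keys.pop(0)  # staged -> new primary
        keys[0] = new_key()                # stage a brand-new key
        return keys

    keys = {0: new_key(), 1: new_key()}  # staged + initial primary
    rotate(keys)  # {0: fresh staged, 1: old primary (now secondary), 2: primary}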

Also be aware that you wouldn't want to store encryption keys in plaintext
in a shared database, so you must introduce an additional layer of
complexity to solve that problem.

Barbican seems like much more logical next-step beyond the local
filesystem, as it shifts the burden onto a system explicitly designed to
handle this issue (albeit in a multitenant environment).

On Mon, Jul 27, 2015 at 12:01 PM, Alexander Makarov amaka...@mirantis.com
wrote:

 Greetings!

  I'd like to discuss the pros and cons of having Fernet encryption keys
  stored in a database backend.
  The idea itself emerged during a discussion about synchronizing rotated keys
  in an HA environment.
  Currently, Fernet keys are stored in the filesystem, which has some
  availability issues in an unstable cluster.
  OTOH, making SQL highly available is considered easier than doing so for a
  filesystem.

 --
 Kind Regards,
 Alexander Makarov,
 Senior Software Developer,

 Mirantis, Inc.
 35b/3, Vorontsovskaya St., 109147, Moscow, Russia

 Tel.: +7 (495) 640-49-04
 Tel.: +7 (926) 204-50-60

 Skype: MAKAPOB.AJIEKCAHDP

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Looking at _sync_instance_power_state makes me want to gouge my eyes out

2015-07-27 Thread Matt Riedemann
garyk has a change up [1] which proposes to add a config option to log a 
warning rather than call the stop API when nova thinks that an instance 
is in an inconsistent state between the database and hypervisor and 
decides to stop it.


Regardless of that proposal, it brings up the fact that this code is a 
big pile of spaghetti and I kind of hate it. :)


It's called from the periodic task and the virt driver lifecycle event 
callback (implemented by libvirt and hyperv).


I was thinking it'd be nice to abstract some of that state -> action 
logic into objects. Like, you create a factory which, given some state 
value(s), yields an action (logging, calling the stop API, etc.), but the 
point is that the logic is abstracted away from 
_sync_instance_power_state so we don't have that giant mess of conditionals.
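
Something like the following, perhaps (a rough sketch with hypothetical 
states and handlers, not Nova code):

    # Rough sketch of a state -> action factory.
    def log_only(instance):
        print("instance %s out of sync; logging only" % instance)

    def call_stop_api(instance):
        print("stopping instance %s via the stop API" % instance)

    SYNC_ACTIONS = {
        # (power state in the db, power state on the hypervisor): action
        ("running", "shutdown"): call_stop_api,
        ("stopped", "running"): call_stop_api,
        ("running", "running"): None,  # consistent, nothing to do
    }

    def sync_power_state(instance, db_state, hv_state):
        action = SYNC_ACTIONS.get((db_state, hv_state), log_only)
        if action is not None:
            action(instance)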


I don't really have a clear picture in my head for this, but wanted to 
dump it in the mailing list for something to think about if people want 
something to work on.


[1] https://review.openstack.org/#/c/190047/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [glance] Removal of Catalog Index Service from Glance

2015-07-27 Thread Louis Taylor
On Fri, Jul 17, 2015 at 07:50:55PM +0100, Louis Taylor wrote:
 Hi operators,
 
 In Kilo, we added the Catalog Index Service as an experimental API in Glance.
 It soon became apparent this would be better suited as a separate project, so
 it was split into the Searchlight project:
 
 https://wiki.openstack.org/wiki/Searchlight
 
 We've now started the process of removing the service from Glance for the
 Liberty release. Since the service was originally had the status of being
 experimental, we felt it would be okay to remove it without a cycle of
 deprecation.
 
 Is this something that would cause issues for any existing deployments? If you
 have any feelings about this one way or the other, feel free to share your
 thoughts on this mailing list or in the review to remove the code:
 
 https://review.openstack.org/#/c/197043/

Some time has passed and no one has complained about this, so I propose we go
ahead and remove it in liberty.

Cheers,
Louis


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [neutron][lbaas] questions about scenario tests

2015-07-27 Thread Phillip Toohill
Wonder if this is the same behavior as the TLS scenario? I have some higher 
priorities, but I am attempting to debug the TLS test in between doing other 
things. I'll let you know if I come across anything.


Phillip V. Toohill III
Software Developer
phone: 210-312-4366
mobile: 210-440-8374


From: Madhusudhan Kandadai madhusudhan.openst...@gmail.com
Sent: Sunday, July 26, 2015 10:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][lbaas] questions about scenario tests

Hi there,

We have two working scenario tests in the neutron-lbaas tree, and they succeed 
locally; however, when running them in Jenkins, the behavior differs: one of 
them passed, while the other one hits time-out issues when trying to curl two 
backend servers after setting up two simple webservers. Both tests use the 
same base.py to set up the backend servers. 
For info: 
https://github.com/openstack/neutron-lbaas/tree/master/neutron_lbaas/tests/tempest/v2/scenario

Tried increasing the default ping_timeout from 120 to 180, but no luck. Their 
logs are shown here: 
http://logs.openstack.org/13/205713/4/experimental/gate-neutron-lbaasv2-dsvm-scenario/09bbbd1/

If anyone has any idea about this, could you shed some light on it?

Thanks!

Madhu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Implementation of ABC MetaClasses

2015-07-27 Thread Mike Perez
On 14:26 Jul 15, John Griffith wrote:
 Ok, so I spent a little time on this; first gathering some detail around
 what's been done as well as proposing a patch to sort of step back a bit
 and take another look at this [1].
 
 Here's some more detail on what is bothering me here:
 * Inheritance model

Some good discussions happened in the Cinder IRC channel today [1] about this.

To sum things up:

1) Cinder has a matrix of optional features.
2) Majority of people in Cinder are OK with the cost of having multiple classes
   representing features that a driver can choose to support.
3) The benefit of this is seeing which drivers support [2] which features.

People are still interested in discussing this at the next Cinder midcycle
sprint [3].

My decision is going to be that unless folks want to go and remove optional
features like consistency groups, replication, etc., we need something to keep
track of things. I think there are problems with the current
implementation [4], and I would see value in John's proposal if we didn't have
these optional features.
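
For readers outside the review, the multiple-classes approach looks roughly 
like the sketch below (illustrative names, not Cinder's exact interfaces; 
see [2] for the real support matrix):

    # Per-feature ABCs that drivers mix in to advertise optional features.
    import abc

    class BaseVD(abc.ABC):
        @abc.abstractmethod
        def create_volume(self, volume): ...

    class ConsistencyGroupVD(abc.ABC):
        @abc.abstractmethod
        def create_consistencygroup(self, context, group): ...

    # A driver advertises consistency-group support by inheriting the ABC:
    class MyDriver(BaseVD, ConsistencyGroupVD):
        def create_volume(self, volume):
            print("volume created")

        def create_consistencygroup(self, context, group):
            print("consistency group created")

    # ...which reduces the support matrix to isinstance() checks:
    def supports_cg(driver):
        return isinstance(driver, ConsistencyGroupVD)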


[1] - 
http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2015-07-27.log.html#t2015-07-27T16:30:28
[2] - https://review.openstack.org/#/c/160346/
[3] - https://wiki.openstack.org/wiki/Sprints/CinderLibertySprint
[4] - http://lists.openstack.org/pipermail/openstack-dev/2015-June/067572.html

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][testing] How to modify DSVM tests to use a DevStack plugin?

2015-07-27 Thread Paul Michali
Not being very familiar with how this all works, can someone provide a bit
more hand holding here?

The overall question is, do we remove VPN from all the DevStack based tests
(except for those run by VPN repo)?

Thanks,

PCM


On Mon, Jul 27, 2015 at 8:26 AM Sean Dague s...@dague.net wrote:

 On 07/27/2015 08:21 AM, Paul Michali wrote:
  Maybe I'm not explaining myself well (sorry)...
 
  For VPN commits, there are functional jobs that (now) enable the
  devstack plugin for neutron-vpnaas as needed (and grenade job will do
  the same). From the neutron-vpnaas repo standpoint everything is in
 place.
 
  Now that there is a devstack plugin for neutron-vpnaas, I want to remove
  all the VPN setup from the *DevStack* repo's setup, as the user of
  DevStack can specify the enable_plugin in their local.conf file now. The
  commit is https://review.openstack.org/#/c/201119/.
 
  The issue I see though, is that the DevStack repo's jobs are failing,
  because they are using devstack, are relying on VPN being set up, and
  the enable_plugin line for VPN isn't part of any of the jobs shown in my
  last post.
 
  How do we resolve that issue?

 Presumably there is a flag in Tempest for whether or not this service
 should be tested? That would be where I'd look.
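
 For reference: Tempest gates service tests on flags in its
 [service_available] config section, so the DevStack jobs could flip the
 VPN flag off once the setup moves into the plugin. A hedged example --
 the exact option name for VPNaaS is an assumption, check the tree:

     [service_available]
     neutron = True
     vpnaas = False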

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Removal of Catalog Index Service from Glance

2015-07-27 Thread Louis Taylor
On Fri, Jul 17, 2015 at 07:50:55PM +0100, Louis Taylor wrote:
 Hi operators,
 
 In Kilo, we added the Catalog Index Service as an experimental API in Glance.
 It soon became apparent this would be better suited as a separate project, so
 it was split into the Searchlight project:
 
 https://wiki.openstack.org/wiki/Searchlight
 
 We've now started the process of removing the service from Glance for the
 Liberty release. Since the service originally had experimental status, we
 felt it would be okay to remove it without a cycle of deprecation.
 
 Is this something that would cause issues for any existing deployments? If you
 have any feelings about this one way or the other, feel free to share your
 thoughts on this mailing list or in the review to remove the code:
 
 https://review.openstack.org/#/c/197043/

Some time has passed and no one has complained about this, so I propose we go
ahead and remove it in Liberty.

Cheers,
Louis


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Alexander Makarov
Greetings!

I'd like to discuss the pros and cons of having Fernet encryption keys
stored in a database backend.
The idea emerged during a discussion about synchronizing rotated keys
in an HA environment.
Today, Fernet keys are stored in the filesystem, which has availability
issues in an unstable cluster.
OTOH, making SQL highly available is considered easier than doing the same
for a filesystem.

-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [glance] Removal of Catalog Index Service from Glance

2015-07-27 Thread Ian Cordasco


On 7/27/15, 11:29, Louis Taylor lo...@kragniz.eu wrote:

On Fri, Jul 17, 2015 at 07:50:55PM +0100, Louis Taylor wrote:
 Hi operators,
 
 In Kilo, we added the Catalog Index Service as an experimental API in
Glance.
 It soon became apparent this would be better suited as a separate
project, so
 it was split into the Searchlight project:
 
 https://wiki.openstack.org/wiki/Searchlight
 
 We've now started the process of removing the service from Glance for
the
 Liberty release. Since the service originally had the status of
being
 experimental, we felt it would be okay to remove it without a cycle of
 deprecation.
 
 Is this something that would cause issues for any existing deployments?
If you
 have any feelings about this one way or the other, feel free to share
your
 thoughts on this mailing list or in the review to remove the code:
 
 https://review.openstack.org/#/c/197043/

Some time has passed and no one has complained about this, so I propose
we go
ahead and remove it in liberty.

Cheers,
Louis


+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Clint Byrum
Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
 Greetings!
 
 I'd like to discuss pro's and contra's of having Fernet encryption keys
 stored in a database backend.
 The idea itself emerged during discussion about synchronizing rotated keys
 in HA environment.
 Now Fernet keys are stored in the filesystem that has some availability
 issues in unstable cluster.
 OTOH, making SQL highly available is considered easier than that for a
 filesystem.
 

I don't think HA is the root of the problem here. The problem is
synchronization. If I have 3 keystone servers (n+1), and I rotate keys on
them, I must very carefully restart them all at the exact right time to
make sure one of them doesn't issue a token which will not be validated
on another. This is quite a real possibility because the validation
will not come from the user, but from the service, so it's not like we
can use simple persistence rules. One would need a layer 7 capable load
balancer that can find the token ID and make sure it goes back to the
server that issued it.

A database will at least ensure that it is updated in one place,
atomically, assuming each server issues a query to find the latest
key at every key validation request. That would be a very cheap query,
but not free. A cache would be fine, with the cache being invalidated
on any failed validation, but then that opens the service up to DoS'ing
the database simply by throwing tons of invalid tokens at it.

So an alternative approach is to try to reload the filesystem based key
repository whenever a validation fails. This is quite a bit cheaper than a
SQL query, so the DoS would have to be a full-capacity DoS (overwhelming
all the nodes, not just the database) which you can never prevent. And
with that, you can simply sync out new keys at will, and restart just
one of the keystones, whenever you are confident the whole repository is
synchronized. This is also quite a bit simpler, as one basically needs
only to add a single piece of code that issues load_keys and retries
inside validation.
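
A minimal sketch of that reload-and-retry idea, using the cryptography
library's Fernet primitives directly (the on-disk layout -- one base64 key per
numbered file, highest number = primary -- is an assumption about the
repository format):

    import os

    from cryptography.fernet import Fernet, InvalidToken, MultiFernet

    def load_keys(repo_dir):
        """Re-read every key file in the repository, primary (highest) first."""
        fernets = []
        for name in sorted(os.listdir(repo_dir), key=int, reverse=True):
            with open(os.path.join(repo_dir, name), 'rb') as f:
                fernets.append(Fernet(f.read().strip()))
        return fernets

    def validate_with_reload(token, repo_dir):
        try:
            return MultiFernet(load_keys(repo_dir)).decrypt(token)
        except InvalidToken:
            # The repository may have rotated under us: re-read from disk
            # once and retry before rejecting the token for real.
            return MultiFernet(load_keys(repo_dir)).decrypt(token)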

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Glance] Fixing up Cinder store in Glance

2015-07-27 Thread Mike Perez
On 23:04 Jul 02, Tomoki Sekiyama wrote:
 Hi Cinder experts,
 
 Currently Glance has a cinder backend, but it has been broken for a long time.
 I am proposing a glance-spec/patch to fix it by implementing
 uploading/downloading images to/from cinder volumes.
 
 Glance-spec: https://review.openstack.org/#/c/183363/

CCing Nikhil,

This spec has some approval from the Cinder team. Is there anything else
needed?

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
  Greetings!
 
  I'd like to discuss pro's and contra's of having Fernet encryption keys
  stored in a database backend.
  The idea itself emerged during discussion about synchronizing rotated
 keys
  in HA environment.
  Now Fernet keys are stored in the filesystem that has some availability
  issues in unstable cluster.
  OTOH, making SQL highly available is considered easier than that for a
  filesystem.
 

 I don't think HA is the root of the problem here. The problem is
 synchronization. If I have 3 keystone servers (n+1), and I rotate keys on
 them, I must very carefully restart them all at the exact right time to
 make sure one of them doesn't issue a token which will not be validated
 on another. This is quite a real possibility because the validation
 will not come from the user, but from the service, so it's not like we
 can use simple persistence rules. One would need a layer 7 capable load
 balancer that can find the token ID and make sure it goes back to the
 server that issued it.


This is not true (or if it is, I'd love to see a bug report). keystone-manage
fernet_rotate uses a three-phase rotation strategy (staged -> primary ->
secondary) that allows you to distribute a staged key (used only for token
validation) throughout your cluster before it becomes a primary key (used
for token creation and validation) anywhere. Secondary keys are only used
for token validation.

All you have to do is atomically replace the fernet key directory with a
new key set.

You also don't have to restart keystone for it to pickup new keys dropped
onto the filesystem beneath it.
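
For illustration, here is one way that atomic replacement can be done when the
live key directory path is a symlink (that layout is my assumption; keystone
only cares that the directory contents never appear half-written):

    import os

    def swap_key_repository(new_keys_dir, live_link='/etc/keystone/fernet-keys'):
        # Stage the new key set somewhere on the same filesystem, then flip
        # the symlink: rename() over an existing symlink is atomic on POSIX,
        # so readers see either the old key set or the new one, never a mix.
        tmp_link = live_link + '.tmp'
        os.symlink(new_keys_dir, tmp_link)
        os.rename(tmp_link, live_link)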



 A database will at least ensure that it is updated in one place,
 atomically, assuming each server issues a query to find the latest
 key at every key validation request. That would be a very cheap query,
 but not free. A cache would be fine, with the cache being invalidated
 on any failed validation, but then that opens the service up to DoS'ing
 the database simply by throwing tons of invalid tokens at it.

 So an alternative approach is to try to reload the filesystem based key
 repository whenever a validation fails. This is quite a bit cheaper than a
 SQL query, so the DoS would have to be a full-capacity DoS (overwhelming
 all the nodes, not just the database) which you can never prevent. And
 with that, you can simply sync out new keys at will, and restart just
 one of the keystones, whenever you are confident the whole repository is
 synchronized. This is also quite a bit simpler, as one basically needs
 only to add a single piece of code that issues load_keys and retries
 inside validation.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] Liberty-2 development milestone coming up

2015-07-27 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-07-27 15:48:52 +0200:
 Hi PTLs with deliverables using the development milestone model,
 
 This week is the *liberty-2* development milestone week. That means you
 should plan to reach out to the release team on #release-mgmt-office
 during office hours tomorrow:
 
 08:00 - 10:00 UTC: ttx
 18:00 - 20:00 UTC: dhellmann

I have an appointment tomorrow, so I'll actually be available 19:00-21:00 UTC.

Doug

 
 During this sync point we'll be adjusting the completed blueprints and
 fixed bugs list in preparation for the tag.
 
 The tag itself should be communicated through a proposed change to the
 openstack/releases repository, sometime between Tuesday and Thursday.
 We'll go through the process during the sync tomorrow.
 
 If you can't make it to the office hours tomorrow, please reach out on
 the channel so that we can arrange another time.
 
 Regards,
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Unable to create the secret after Integrating Barbican with HSM HA

2015-07-27 Thread John Vrbanac
Asha,

I've used the Safenet HSM HA virtual slot setup and it does work. However, 
the setup is very interesting because you need to generate the MKEK and HMAC on 
a single HSM and then replicate them to the other HSMs out of band of anything we 
have in Barbican. If I recall correctly, the Safenet Luna docs mention how to 
replicate keys or partitions between HSMs.


John Vrbanac

From: Asha Seshagiri asha.seshag...@gmail.com
Sent: Monday, July 27, 2015 2:00 PM
To: openstack-dev
Cc: John Wood; Douglas Mendizabal; John Vrbanac; Reller, Nathan S.
Subject: Barbican : Unable to create the secret after Integrating Barbican with 
HSM HA

Hi All ,

I am working on Integrating Barbican with HSM HA set up.
I have configured slot 1 and slot 2 to be on HA on Luna SA set up . Slot 6 is a 
virtual slot on the client side which acts as the proxy for the slot 1 and 2. 
Hence on the Barbican side , I mentioned the slot number 6 and its password 
which is identical to that of the passwords of slot1 and slot 2 in 
barbican.conf file.

Please find the contents of the file  :

# = Secret Store Plugin ===
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = store_crypto

# = Crypto plugin ===
[crypto]
namespace = barbican.crypto.plugin
enabled_crypto_plugins = p11_crypto

[simple_crypto_plugin]
# the kek should be a 32-byte value which is base64 encoded
kek = 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY='

[dogtag_plugin]
pem_path = '/etc/barbican/kra_admin_cert.pem'
dogtag_host = localhost
dogtag_port = 8443
nss_db_path = '/etc/barbican/alias'
nss_db_path_ca = '/etc/barbican/alias-ca'
nss_password = 'password123'
simple_cmc_profile = 'caOtherCert'

[p11_crypto_plugin]
# Path to vendor PKCS11 library
library_path = '/usr/lib/libCryptoki2_64.so'
# Password to login to PKCS11 session
login = 'test5678'
# Label to identify master KEK in the HSM (must not be the same as HMAC label)
mkek_label = 'ha_mkek'
# Length in bytes of master KEK
mkek_length = 32
# Label to identify HMAC key in the HSM (must not be the same as MKEK label)
hmac_label = 'ha_hmac'
# HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
slot_id = 6

I was able to create the MKEK and HMAC successfully for slots 1 and 2 on the 
HSM when running the pkcs11-key-generation script for slot 6, which is the 
expected behaviour.

[root@HSM-Client bin]# python pkcs11-key-generation --library-path 
'/usr/lib/libCryptoki2_64.so'  --passphrase 'test5678' --slot-id 6 mkek --label 
'ha_mkek'
Verified label !
MKEK successfully generated!
[root@HSM-Client bin]# python pkcs11-key-generation --library-path 
'/usr/lib/libCryptoki2_64.so' --passphrase 'test5678' --slot-id 6 hmac --label 
'ha_hmac'
HMAC successfully generated!
[root@HSM-Client bin]#

Please find the HSM commands and responses to show the details of the 
partitions and partitions contents :

root@HSM-Client bin]# ./vtl verify


The following Luna SA Slots/Partitions were found:


Slot Serial # Label

  =

1 489361010 barbican2

2 489361011 barbican3


[HSMtestLuna1] lunash: partition showcontents -partition barbican2



Please enter the user password for the partition:

 



Partition Name: barbican2

Partition SN: 489361010

Storage (Bytes): Total=1046420, Used=256, Free=1046164

Number objects: 2


Object Label: ha_mkek

Object Type: Symmetric Key


Object Label: ha_hmac

Object Type: Symmetric Key



Command Result : 0 (Success)

[HSMtestLuna1] lunash: partition showcontents -partition barbican3



Please enter the user password for the partition:

 



Partition Name: barbican3

Partition SN: 489361011

Storage (Bytes): Total=1046420, Used=256, Free=1046164

Number objects: 2


Object Label: ha_mkek

Object Type: Symmetric Key


Object Label: ha_hmac

Object Type: Symmetric Key




[root@HSM-Client bin]# ./lunacm


LunaCM V2.3.3 - Copyright (c) 2006-2013 SafeNet, Inc.


Available HSM's:


Slot Id - 1

HSM Label - barbican2

HSM Serial Number - 489361010

HSM Model - LunaSA

HSM Firmware Version - 6.2.1

HSM Configuration - Luna SA Slot (PW) Signing With Cloning Mode

HSM Status - OK


Slot Id - 2

HSM Label - barbican3

HSM Serial Number - 489361011

HSM Model - LunaSA

HSM Firmware Version - 6.2.1

HSM Configuration - Luna SA Slot (PW) Signing With Cloning Mode

HSM Status - OK


Slot Id - 6

HSM Label - barbican_ha

HSM Serial Number - 1489361010

HSM Model - LunaVirtual

HSM Firmware Version - 6.2.1

HSM Configuration - Virtual HSM (PW) Signing With Cloning Mode

HSM Status - N/A - HA Group


Current Slot Id: 1

Tried creating the secrets using the below command :

root@HSM-Client barbican]# curl -X POST -H 'content-type:application/json' -H 
'X-Project-Id:12345' -d '{"payload": "my-secret-here", "payload_content_type": 
"text/plain"}' http://localhost:9311/v1/secrets
{"code": 500, "description": "Secret creation failure seen - please contact 

[openstack-dev] [puppet] Proposing Yanis Guenane core

2015-07-27 Thread Emilien Macchi
Puppet group,

Yanis has been working in our group for a while now.
He has been involved in a lot of tasks, let me highlight some of them:

* Repeatedly involved in improving consistency across our modules.
* Strong focus on data binding, backward compatibility and flexibility.
* Leadership on the cookiecutter project [1].
* Active participation in meetings, always with actions and thoughts.
* Beyond the stats, he has a good understanding of our modules and does a
good number of reviews, regularly.

Yanis is a strong asset for our group and I would like him to be part of
our core team.
I really think his involvement, regularity and strong knowledge of
Puppet OpenStack will really help make our project better and stronger.

I would like to open the vote to promote Yanis to Puppet OpenStack
core reviewer.

Best regards,

[1] https://github.com/openstack/puppet-openstack-cookiecutter
-- 
Emilien Macchi





signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
Matt Fischer also discusses key rotation here:

  http://www.mattfischer.com/blog/?p=648

And here:

  http://www.mattfischer.com/blog/?p=665

On Mon, Jul 27, 2015 at 2:30 PM, Dolph Mathews dolph.math...@gmail.com
wrote:



 On Mon, Jul 27, 2015 at 2:03 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Dolph Mathews's message of 2015-07-27 11:48:12 -0700:
  On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum cl...@fewbar.com wrote:
 
   Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34
 -0700:
Greetings!
   
I'd like to discuss pro's and contra's of having Fernet encryption
 keys
stored in a database backend.
The idea itself emerged during discussion about synchronizing
 rotated
   keys
in HA environment.
Now Fernet keys are stored in the filesystem that has some
 availability
issues in unstable cluster.
OTOH, making SQL highly available is considered easier than that
 for a
filesystem.
   
  
   I don't think HA is the root of the problem here. The problem is
   synchronization. If I have 3 keystone servers (n+1), and I rotate
 keys on
   them, I must very carefully restart them all at the exact right time
 to
   make sure one of them doesn't issue a token which will not be
 validated
   on another. This is quite a real possibility because the validation
   will not come from the user, but from the service, so it's not like we
   can use simple persistence rules. One would need a layer 7 capable
 load
   balancer that can find the token ID and make sure it goes back to the
   server that issued it.
  
 
  This is not true (or if it is, I'd love to see a bug report).
 keystone-manage
  fernet_rotate uses a three phase rotation strategy (staged - primary -
  secondary) that allows you to distribute a staged key (used only for
 token
  validation) throughout your cluster before it becomes a primary key
 (used
  for token creation and validation) anywhere. Secondary keys are only
 used
  for token validation.
 
  All you have to do is atomically replace the fernet key directory with a
  new key set.
 
  You also don't have to restart keystone for it to pickup new keys
 dropped
  onto the filesystem beneath it.
 

 That's great news! Is this documented anywhere? I dug through the
 operators guides, security guide, install guide, etc. Nothing described
 this dance, which is impressive and should be written down!


 (BTW, your original assumption would normally have been an accurate one!)

 I don't believe it's documented in any of those places, yet. The best
 explanation of the three phases in tree I'm aware of is probably this
 (which isn't particularly accessible..):


 https://github.com/openstack/keystone/blob/6a6fcc2/keystone/cmd/cli.py#L208-L223

 Lance Bragstad and I also gave a small presentation at the Vancouver
 summit on the behavior and he mentions the same on one of his blog posts:

  https://www.youtube.com/watch?v=duRBlm9RtCw&feature=youtu.be
   http://lbragstad.com/?p=133


 I even tried to discern how it worked from the code but it actually
 looks like it does not work the way you describe on casual investigation.


 I don't blame you! I'll work to improve the user-facing docs on the topic.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[OpenStack-Infra] [Infra] Meeting Tuesday July 28th at 19:00 UTC

2015-07-27 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday July 28th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full log from our last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-07-21-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-07-21-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-07-21-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[openstack-dev] [Ironic] weekly subteam status report

2015-07-27 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted. (There wasn't any
subteam report last week since the meeting was cancelled.)

Bugs (dtantsur)

Dashboard moved to a new home on openshift:
- http://ironic-divius.rhcloud.com/
- source: https://github.com/dtantsur/ironic-bug-dashboard

As of Mon, Jul 27 (diff with Jul 13 as we skipped one meeting):
- Open: 147. 6 new, 53 in progress (+1), 0 critical, 11 high (+1) and 8
incomplete
- Nova bugs with Ironic tag: 24. 0 new, 0 critical, 0 high


Neutron/Ironic work (jroll)

Specs landed, code being worked on


Testing (adam_g/jlvillal)
==
(dtantsur) WIP devstack patch to support ENROLL:
https://review.openstack.org/#/c/206055/

(dtantsur) tempest test for microversions passed for the 1st time:
https://review.openstack.org/#/c/166386/
- good time to review


Inspector (dtantsur)
===
Our job worked fine in experimental pipeline, proposing for non-voting
check:
- https://review.openstack.org/#/c/202682/


Bifrost (TheJulia)
=
Currently investigating issues with simple-init


Drivers
==

DRAC (ifarkas/lucas)

introducing python-dracclient under openstack/ironic:
https://review.openstack.org/#/c/204609/ and
https://review.openstack.org/#/c/203991/

iLO (wanyen)
--
secure boot for pxe-ilo driver spec https://review.openstack.org/#/c/174295/
got a +2.  It still needs one more core reviewer's approval.  This work is
a carry-over item from Kilo.  Please review.

iRMC (naohirot)
-
https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z

Status: Active (spec and vendor passthru reviews are ongoing)
- Enhance Power Interface for Soft Reboot and NMI
- bp/enhance-power-interface-for-soft-reboot-and-nmi

Status: Active (code review is ongoing)
- iRMC out of band inspection
- bp/ironic-node-properties-discovery

Status: TODO
- iRMC Virtual Media Deploy Driver
- document update Enabling drivers page
- follow up patch to fix nits



Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Unable to create the secret after Integrating Barbican with HSM HA

2015-07-27 Thread Asha Seshagiri
Hi John ,

Thanks a lot for the response :)
I followed the link [1] for configuring the HA setup:
[1] http://docs.aws.amazon.com/cloudhsm/latest/userguide/ha-setup.html

The final step in the above link is the haAdmin command, which is run on the
client side (on Barbican).
Slot 6 is the virtual slot (only on the client side and not visible on
Luna SA), and 1 and 2 are actual slots on the Luna SA HSM.

Please find the response below :

[root@HSM-Client bin]# ./vtl haAdmin show



=== HA Global Configuration Settings ===


 HA Proxy: disabled

HA Auto Recovery: disabled

Maximum Auto Recovery Retry: 0

Auto Recovery Poll Interval: 60 seconds

HA Logging: disabled

Only Show HA Slots: no



=== HA Group and Member Information ===


 HA Group Label: barbican_ha

HA Group Number: 1489361010

HA Group Slot #: 6

Synchronization: enabled

Group Members: 489361010, 489361011

Standby members: none


 Slot # Member S/N Member Label Status

== ==  ==

1 489361010 barbican2 alive

2 489361011 barbican3 alive

After finding the virtual HA slot number, I ran pkcs11-key-generation with
slot number 6, which did create the MKEK and HMAC in slots/partitions 1 and 2
automatically, so I am not sure why we would have to replicate the keys
between partitions. I configured slot 6 in barbican.conf as mentioned in my
first email.

I am not sure what the issue might be.

It would be great if you could tell me the steps, or where I might have gone
wrong.

Thanks and Regards,

Asha Seshagiri

On Mon, Jul 27, 2015 at 2:36 PM, John Vrbanac john.vrba...@rackspace.com
wrote:

  Asha,

 I've used the Safenet HSM HA virtual slot setup and it does work.
 However, the setup is very interesting because you need to generate the
 MKEK and HMAC on a single HSM and then replicate it to the other HSMs out
 of band of anything we have in Barbican. If I recall correctly, the Safenet
 Luna docs mention how to replicate keys or partitions between HSMs.


 John Vrbanac
  --
 *From:* Asha Seshagiri asha.seshag...@gmail.com
 *Sent:* Monday, July 27, 2015 2:00 PM
 *To:* openstack-dev
 *Cc:* John Wood; Douglas Mendizabal; John Vrbanac; Reller, Nathan S.
 *Subject:* Barbican : Unable to create the secret after Integrating
 Barbican with HSM HA

Hi All ,

  I am working on Integrating Barbican with HSM HA set up.
  I have configured slot 1 and slot 2 to be on HA on Luna SA set up . Slot
 6 is a virtual slot on the client side which acts as the proxy for the slot
 1 and 2. Hence on the Barbican side , I mentioned the slot number 6 and its
 password which is identical to that of the passwords of slot1 and slot 2 in
 barbican.conf file.

  Please find the contents of the file  :

 # = Secret Store Plugin ===
 [secretstore]
 namespace = barbican.secretstore.plugin
 enabled_secretstore_plugins = store_crypto

 # = Crypto plugin ===
 [crypto]
 namespace = barbican.crypto.plugin
 enabled_crypto_plugins = p11_crypto

 [simple_crypto_plugin]
 # the kek should be a 32-byte value which is base64 encoded
 kek = 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY='

 [dogtag_plugin]
 pem_path = '/etc/barbican/kra_admin_cert.pem'
 dogtag_host = localhost
 dogtag_port = 8443
 nss_db_path = '/etc/barbican/alias'
 nss_db_path_ca = '/etc/barbican/alias-ca'
 nss_password = 'password123'
 simple_cmc_profile = 'caOtherCert'















 [p11_crypto_plugin]
 # Path to vendor PKCS11 library
 library_path = '/usr/lib/libCryptoki2_64.so'
 # Password to login to PKCS11 session
 login = 'test5678'
 # Label to identify master KEK in the HSM (must not be the same as HMAC label)
 mkek_label = 'ha_mkek'
 # Length in bytes of master KEK
 mkek_length = 32
 # Label to identify HMAC key in the HSM (must not be the same as MKEK label)
 hmac_label = 'ha_hmac'
 # HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
 slot_id = 6

 I was able to create the MKEK and HMAC successfully for slots 1 and 2 on
 the HSM when running the pkcs11-key-generation script for slot 6, which is
 the expected behaviour.

 [root@HSM-Client bin]# python pkcs11-key-generation --library-path
 '/usr/lib/libCryptoki2_64.so'  --passphrase 'test5678' --slot-id 6 mkek
 --label 'ha_mkek'
 Verified label !
 MKEK successfully generated!
 [root@HSM-Client bin]# python pkcs11-key-generation --library-path
 '/usr/lib/libCryptoki2_64.so' --passphrase 'test5678' --slot-id 6 hmac
 --label 'ha_hmac'
 HMAC successfully generated!
 [root@HSM-Client bin]#

 Please find the HSM commands and responses to show the details of the
 partitions and partitions contents :

 root@HSM-Client bin]# ./vtl verify


  The following Luna SA Slots/Partitions were found:


  Slot Serial # Label

   =

 1 489361010 barbican2

 2 489361011 barbican3


  [HSMtestLuna1] lunash: partition showcontents -partition barbican2



  Please 


Re: [openstack-dev] [puppet] Proposing Yanis Guenane core

2015-07-27 Thread Rich Megginson

On 07/27/2015 01:06 PM, Emilien Macchi wrote:

Puppet group,

Yanis has been working in our group for a while now.
He has been involved in a lot of tasks, let me highlight some of them:

* Many times, involved in improving consistency across our modules.
* Strong focus on data binding, backward compatibility and flexibility.
* Leadership on the cookiecutter project [1].
* Active participation to meetings, always with actions, and thoughts.
* Beyond the stats, he has a good understanding of our modules, and
quite good number of reviews, regularly.

Yanis is for our group a strong asset and I would like him part of our
core team.
I really think his involvement, regularity and strong knowledge in
Puppet OpenStack will really help to make our project better and stronger.

I would like to open the vote to promote Yanis part of Puppet OpenStack
core reviewers.


+1



Best regards,

[1] https://github.com/openstack/puppet-openstack-cookiecutter


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing HyperStack project

2015-07-27 Thread Matt Riedemann



On 7/26/2015 11:43 PM, Adrian Otto wrote:

Peng,

For the record, the Magnum team is not yet comfortable with this
proposal. This arrangement is not the way we think containers should be
integrated with OpenStack. It completely bypasses Nova, and offers no
Bay abstraction, so there is no user selectable choice of a COE
(Container Orchestration Engine). We advised that it would be smarter to
build a nova virt driver for Hyper, and integrate that with Magnum so
that it could work with all the different bay types. It also produces a


The nova-hyper virt driver idea has already been proposed:

http://lists.openstack.org/pipermail/openstack-dev/2015-June/067501.html


situation where operators cannot effectively bill for the services that
are in use by the consumers, there is no sensible infrastructure layer
capacity management (scheduler), no encryption management solution for
the communication between k8s minions/nodes and the k8s master, and a
number of other weaknesses. I’m not convinced the single-tenant approach
here makes sense.

To be fair, the concept is interesting, and we are discussing how it
could be integrated with Magnum. It’s appropriate for experimentation,
but I would not characterize it as a “solution for cloud providers” for
the above reasons, and the callouts I mentioned here:

http://lists.openstack.org/pipermail/openstack-dev/2015-July/069940.html

Positioning it that way is simply premature. I strongly suggest that you
attend the Magnum team meetings, and work through these concerns as we
had Hyper on the agenda last Tuesday, but you did not show up to discuss
it. The ML thread was confused by duplicate responses, which makes it
rather hard to follow.

I think it’s a really bad idea to basically re-implement Nova in Hyper.
You’re already re-implementing Docker in Hyper. With a scope that’s too
wide, you won’t be able to keep up with the rapid changes in these
projects, and anyone using them will be unable to use new features that
they would expect from Docker and Nova while you are busy copying all of
that functionality each time new features are added. I think there’s a
better approach available that does not require you to duplicate such a
wide range of functionality. I suggest we work together on this, and
select an approach that sets you up for success, and gives OpenStack
cloud operators what they need to build services on Hyper.

Regards,

Adrian


On Jul 26, 2015, at 7:40 PM, Peng Zhao p...@hyper.sh wrote:

Hi all,
I am glad to introduce the HyperStack project to you.
HyperStack is a native, multi-tenant CaaS solution built on top of
OpenStack. In terms of architecture, HyperStack = Bare-metal + Hyper +
Kubernetes + Cinder + Neutron.
HyperStack is different from Magnum in that HyperStack doesn't employ
the Bay concept. Instead, HyperStack pools all bare-metal servers into
one single cluster. Due to the hypervisor nature of Hyper, different
tenants' applications are completely isolated (no shared kernel), thus
co-exist without security concerns in a same cluster.
Given this, HyperStack is a solution for public cloud providers who
want to offer the secure, multi-tenant CaaS.
Ref:
https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/1258x535/1c85a755dcb5e4a4147d37e6aa22fd40/upload_7_23_2015_at_11_00_41_AM.png
The next step is to present a working beta of HyperStack at the Tokyo
summit, for which we submitted a presentation:
https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/Presentation/4030.
Please vote if you are interested.
In the future, we want to integrate HyperStack with Magnum and Nova to
make sure one OpenStack deployment can offer both IaaS and native CaaS
services.
Best,
Peng
-- Background
---
Hyper is a hypervisor-agnostic Docker runtime. It allows running Docker
images with any hypervisor (KVM, Xen, VBox, ESX). Hyper is different
from minimalist Linux distros like CoreOS in that Hyper runs on
the physical box and loads the Docker images from the metal into the VM
instance, in which no guest OS is present. Instead, Hyper boots a
minimalist kernel in the VM to host the Docker images (Pod).
With this approach, Hyper is able to bring some encouraging results,
which are similar to container:
- 300ms to boot a new HyperVM instance with a pod of Docker images
- 20MB for min mem footprint of a HyperVM instance
- Immutable HyperVM, only kernel+images, serves as atomic unit (Pod)
for scheduling
- Immune from the shared kernel problem in LXC, isolated by VM
- Work seamlessly with OpenStack components, Neutron, Cinder, due to
the hypervisor nature
- BYOK, bring-your-own-kernel is somewhat mandatory for a public cloud
platform

__
OpenStack 

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Fox, Kevin M
Barbican depends on Keystone for authentication, though. It's not a silver
bullet here.

Kevin

From: Dolph Mathews [dolph.math...@gmail.com]
Sent: Monday, July 27, 2015 10:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

Although using a node's *local* filesystem requires external configuration 
management to manage the distribution of rotated keys, it's always available, 
easy to secure, and can be updated atomically per node. Note that Fernet's 
rotation strategy uses a staged key that can be distributed to all nodes in 
advance of it being used to create new tokens.

Also be aware that you wouldn't want to store encryption keys in plaintext in a 
shared database, so you must introduce an additional layer of complexity to 
solve that problem.

Barbican seems like a much more logical next step beyond the local filesystem, 
as it shifts the burden onto a system explicitly designed to handle this issue 
(albeit in a multitenant environment).

On Mon, Jul 27, 2015 at 12:01 PM, Alexander Makarov amaka...@mirantis.com
wrote:
Greetings!

I'd like to discuss pro's and contra's of having Fernet encryption keys stored 
in a database backend.
The idea itself emerged during discussion about synchronizing rotated keys in 
HA environment.
Now Fernet keys are stored in the filesystem that has some availability issues 
in unstable cluster.
OTOH, making SQL highly available is considered easier than that for a 
filesystem.

--
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Barbican : Unable to create the secret after Integrating Barbican with HSM HA

2015-07-27 Thread Asha Seshagiri
Hi All ,

I am working on Integrating Barbican with HSM HA set up.
I have configured slot 1 and slot 2 to be on HA on Luna SA set up . Slot 6
is a virtual slot on the client side which acts as the proxy for the slot 1
and 2. Hence on the Barbican side , I mentioned the slot number 6 and its
password which is identical to that of the passwords of slot1 and slot 2 in
barbican.conf file.

Please find the contents of the file  :

# = Secret Store Plugin ===
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = store_crypto

# = Crypto plugin ===
[crypto]
namespace = barbican.crypto.plugin
enabled_crypto_plugins = p11_crypto

[simple_crypto_plugin]
# the kek should be a 32-byte value which is base64 encoded
kek = 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY='

[dogtag_plugin]
pem_path = '/etc/barbican/kra_admin_cert.pem'
dogtag_host = localhost
dogtag_port = 8443
nss_db_path = '/etc/barbican/alias'
nss_db_path_ca = '/etc/barbican/alias-ca'
nss_password = 'password123'
simple_cmc_profile = 'caOtherCert'















[p11_crypto_plugin]
# Path to vendor PKCS11 library
library_path = '/usr/lib/libCryptoki2_64.so'
# Password to login to PKCS11 session
login = 'test5678'
# Label to identify master KEK in the HSM (must not be the same as HMAC label)
mkek_label = 'ha_mkek'
# Length in bytes of master KEK
mkek_length = 32
# Label to identify HMAC key in the HSM (must not be the same as MKEK label)
hmac_label = 'ha_hmac'
# HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
slot_id = 6

I was able to create the MKEK and HMAC successfully for slots 1 and 2 on the
HSM when running the pkcs11-key-generation script for slot 6, which is the
expected behaviour.

[root@HSM-Client bin]# python pkcs11-key-generation --library-path
'/usr/lib/libCryptoki2_64.so'  --passphrase 'test5678' --slot-id 6 mkek
--label 'ha_mkek'
Verified label !
MKEK successfully generated!
[root@HSM-Client bin]# python pkcs11-key-generation --library-path
'/usr/lib/libCryptoki2_64.so' --passphrase 'test5678' --slot-id 6 hmac
--label 'ha_hmac'
HMAC successfully generated!
[root@HSM-Client bin]#

Please find the HSM commands and responses to show the details of the
partitions and partitions contents :

root@HSM-Client bin]# ./vtl verify


 The following Luna SA Slots/Partitions were found:


 Slot Serial # Label

  =

1 489361010 barbican2

2 489361011 barbican3


 [HSMtestLuna1] lunash: partition showcontents -partition barbican2



 Please enter the user password for the partition:

 



 Partition Name: barbican2

Partition SN: 489361010

Storage (Bytes): Total=1046420, Used=256, Free=1046164

Number objects: 2


 Object Label: ha_mkek

Object Type: Symmetric Key


 Object Label: ha_hmac

Object Type: Symmetric Key



 Command Result : 0 (Success)

[HSMtestLuna1] lunash: partition showcontents -partition barbican3



 Please enter the user password for the partition:

 



 Partition Name: barbican3

Partition SN: 489361011

Storage (Bytes): Total=1046420, Used=256, Free=1046164

Number objects: 2


 Object Label: ha_mkek

Object Type: Symmetric Key


 Object Label: ha_hmac

Object Type: Symmetric Key




[root@HSM-Client bin]# ./lunacm


 LunaCM V2.3.3 - Copyright (c) 2006-2013 SafeNet, Inc.


 Available HSM's:


 Slot Id - 1

HSM Label - barbican2

HSM Serial Number - 489361010

HSM Model - LunaSA

HSM Firmware Version - 6.2.1

HSM Configuration - Luna SA Slot (PW) Signing With Cloning Mode

HSM Status - OK


 Slot Id - 2

HSM Label - barbican3

HSM Serial Number - 489361011

HSM Model - LunaSA

HSM Firmware Version - 6.2.1

HSM Configuration - Luna SA Slot (PW) Signing With Cloning Mode

HSM Status - OK


 Slot Id - 6

HSM Label - barbican_ha

HSM Serial Number - 1489361010

HSM Model - LunaVirtual

HSM Firmware Version - 6.2.1

HSM Configuration - Virtual HSM (PW) Signing With Cloning Mode

HSM Status - N/A - HA Group


 Current Slot Id: 1

Tried creating the secrets using the command below:

root@HSM-Client barbican]# curl -X POST -H 'content-type:application/json'
-H 'X-Project-Id:12345' -d '{"payload": "my-secret-here",
"payload_content_type": "text/plain"}' http://localhost:9311/v1/secrets
{"code": 500, "description": "Secret creation failure seen - please contact
site administrator.", "title": "Internal Server Error"}[root@HSM-

Please find the logs below:

2015-07-27 11:57:07.586 16362 ERROR barbican.api.controllers Traceback
(most recent call last):
2015-07-27 11:57:07.586 16362 ERROR barbican.api.controllers   File
"/root/barbican/barbican/api/controllers/__init__.py", line 104, in handler
2015-07-27 11:57:07.586 16362 ERROR barbican.api.controllers return
fn(inst, *args, **kwargs)
2015-07-27 11:57:07.586 16362 ERROR barbican.api.controllers   File
"/root/barbican/barbican/api/controllers/__init__.py", line 90, in enforcer
2015-07-27 11:57:07.586 16362 ERROR barbican.api.controllers return
fn(inst, *args, 

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
On Mon, Jul 27, 2015 at 2:03 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Dolph Mathews's message of 2015-07-27 11:48:12 -0700:
  On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum cl...@fewbar.com wrote:
 
   Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
Greetings!
   
I'd like to discuss pro's and contra's of having Fernet encryption
 keys
stored in a database backend.
The idea itself emerged during discussion about synchronizing rotated
   keys
in HA environment.
Now Fernet keys are stored in the filesystem that has some
 availability
issues in unstable cluster.
OTOH, making SQL highly available is considered easier than that for
 a
filesystem.
   
  
   I don't think HA is the root of the problem here. The problem is
   synchronization. If I have 3 keystone servers (n+1), and I rotate keys
 on
   them, I must very carefully restart them all at the exact right time to
   make sure one of them doesn't issue a token which will not be validated
   on another. This is quite a real possibility because the validation
   will not come from the user, but from the service, so it's not like we
   can use simple persistence rules. One would need a layer 7 capable load
   balancer that can find the token ID and make sure it goes back to the
   server that issued it.
  
 
  This is not true (or if it is, I'd love to see a bug report).
 keystone-manage
  fernet_rotate uses a three phase rotation strategy (staged - primary -
  secondary) that allows you to distribute a staged key (used only for
 token
  validation) throughout your cluster before it becomes a primary key (used
  for token creation and validation) anywhere. Secondary keys are only used
  for token validation.
 
  All you have to do is atomically replace the fernet key directory with a
  new key set.
 
  You also don't have to restart keystone for it to pickup new keys dropped
  onto the filesystem beneath it.
 

 That's great news! Is this documented anywhere? I dug through the
 operators guides, security guide, install guide, etc. Nothing described
 this dance, which is impressive and should be written down!


(BTW, your original assumption would normally have been an accurate one!)

I don't believe it's documented in any of those places, yet. The best
explanation of the three phases in tree I'm aware of is probably this
(which isn't particularly accessible..):


https://github.com/openstack/keystone/blob/6a6fcc2/keystone/cmd/cli.py#L208-L223

Lance Bragstad and I also gave a small presentation at the Vancouver summit
on the behavior and he mentions the same on one of his blog posts:

  https://www.youtube.com/watch?v=duRBlm9RtCw&feature=youtu.be
  http://lbragstad.com/?p=133


 I even tried to discern how it worked from the code but it actually
 looks like it does not work the way you describe on casual investigation.


I don't blame you! I'll work to improve the user-facing docs on the topic.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-27 Thread Gorka Eguileor
Hi all,

I know we've all been looking at the HA Active-Active problem in Cinder
and trying our best to figure out possible solutions to the different
issues, and since current plan is going to take a while (because it
requires that we finish first fixing Cinder-Nova interactions), I've been
looking at alternatives that allow Active-Active configurations without
needing to wait for those changes to take effect.

And I think I have found a possible solution, but since the HA A-A
problem has a lot of moving parts I ended up upgrading my initial
Etherpad notes to a post [1].

Even if we decide that this is not the way to go, which we'll probably
do, I still think that the post brings a little clarity on all the
moving parts of the problem, even some that are not reflected on our
Etherpad [2], and it can help us not miss anything when deciding on a
different solution.

Cheers,
Gorka.

[1]: http://gorka.eguileor.com/a-cinder-road-to-activeactive-ha/
[2]: https://etherpad.openstack.org/p/cinder-active-active-vol-service-issues

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-27 Thread Duncan Thomas
Thanks for this work, Gorka. Even if we don't end up taking the approach you
suggest, there are parts that are undoubtedly useful pieces of quality,
well-thought-out code, posted in clean patches, that can be used to easily try
out ideas that were not possible previously. I'm both impressed and
enthusiastic about moving forward on this for the first time in a while.
Appreciated.

-- 
Duncan Thomas

On 27 July 2015 at 22:35, Gorka Eguileor gegui...@redhat.com wrote:

 Hi all,

 I know we've all been looking at the HA Active-Active problem in Cinder
 and trying our best to figure out possible solutions to the different
 issues, and since current plan is going to take a while (because it
 requires that we finish first fixing Cinder-Nova interactions), I've been
 looking at alternatives that allow Active-Active configurations without
 needing to wait for those changes to take effect.

 And I think I have found a possible solution, but since the HA A-A
 problem has a lot of moving parts I ended up upgrading my initial
 Etherpad notes to a post [1].

 Even if we decide that this is not the way to go, which we'll probably
 do, I still think that the post brings a little clarity on all the
 moving parts of the problem, even some that are not reflected on our
 Etherpad [2], and it can help us not miss anything when deciding on a
 different solution.

 Cheers,
 Gorka.

 [1]: http://gorka.eguileor.com/a-cinder-road-to-activeactive-ha/
 [2]:
 https://etherpad.openstack.org/p/cinder-active-active-vol-service-issues

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Let's talk about API versions

2015-07-27 Thread Jim Rollenhagen
Hi friends.

Ironic implemented API micro versions in Kilo. We originally did this
to allow for breaking changes in the API while allowing users to opt in
to the breakage.

Since then, we've had a default version for our client that we bump to
something sensible with each release. Currently it is at 1.8.
Negotiation is done with the server to figure out what is supported and
adjust accordingly.

Now we've landed a patch[0] with a new version (1.11) that is not
backward compatible. It causes newly added Node objects to begin life in
the ENROLL state, rather than AVAILABLE. This is a good thing, and
people should want this! However, it is a breaking change. Automation
that adds nodes to Ironic will need to do different things after the
node-create call.
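
For example, enrollment automation that used to stop at node-create now has to
drive the node the rest of the way to AVAILABLE itself. A hedged sketch,
assuming `ironic` is a python-ironicclient handle speaking API 1.11+ (the call
names are from memory, treat them as assumptions):

    # The new node begins life in ENROLL rather than AVAILABLE:
    node = ironic.node.create(driver='agent_ipmitool')
    ironic.node.set_provision_state(node.uuid, 'manage')   # ENROLL -> MANAGEABLE
    ironic.node.set_provision_state(node.uuid, 'provide')  # MANAGEABLE -> AVAILABLE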

Our API versioning scheme makes this opt-in (by specifying the API
version). However, some folks have a problem with releasing this change
as-is. The logic is that we might release a client that defaults to 1.11
or higher, or the user may request 1.12 later to get a new feature, thus
breaking their application that enrolls nodes.

This is clearly backwards. Users should read release notes and be aware
of what changes between versions in the API. Users need to be aware of
the fact that our API is versioned, and use that to their advantage.

It seems to me that the goal of the version negotiation in our client
has been to pretend that our API versions don't exist, from a user
perspective. We need to stop doing this and force users to think about
what they are doing when they interact with our API.

It seems to me we have a few options here:

1) Default the python client and CLI to the earliest supported version.
This will never break users by default.

2) Default the python client and CLI to use the special version
'latest'. This will always use the latest API version, and always
break people when a new server version (that is not backwards
compatible) is deployed.

3) Do what Nova does[1]. Default CLI to latest and python client to
earliest. This assumes that CLI is typically used for one-time commands
(and it isn't a big deal if we break a one-off command once), and the
python client is used for applications.

4) Require a version to use the client at all. This would be a one-time
break with how applications initialize the client (perhaps we could fall
back to the earliest version or something for a deprecation period).
This isn't a great user experience, however, it's the best way to get
users to think about versioning. And no, "this requires typing another
argument every time!" is not a valid argument against this; we already
require a number of arguments, and anyone sane doesn't type --ironic-api-url
or --os-username every time they use the client.

5) Do what we're doing now. Bump the client's default version with every
release. This mostly hides these versions from end users, and in general
those users probably won't know they exist. And then we run into
arguments every time we want to make a breaking change to the API. :)

I think I like option 1 or 3 the best. I certainly don't like option 5
because we're going to break users every time we release a new client.

What do other folks think we should do?

// jim

[0] 
https://github.com/openstack/ironic/commit/1410e59228c3835cfc4f89db1ec482137a3cfa10
[1] 
http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/novaclient-api-microversions.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The proposed Neutron API extension for packet forwarding has a lot of duplication to the Neutron SFC API

2015-07-27 Thread Anita Kuno
On 07/24/2015 06:50 PM, Cathy Zhang wrote:
 Hi Everyone,
 In our last networking-sfc project IRC meeting, an issue was brought up that 
 the API proposed in https://review.openstack.org/#/c/186663/ duplicates much 
 of the SFC API https://review.openstack.org/#/c/192933/ that is currently 
 being implemented. In the IRC meeting, the project team reached consensus 
 that we only need one API and that the service chain API can cover the 
 functionality needed by https://review.openstack.org/#/c/186663/. Please 
 refer to the meeting log 
 http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-07-23-17.02.log.html
  for more of the discussion. Please let us know if you have a different opinion.
 Thanks,
 Cathy
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

I think you need to acknowledge, in both the email topic and its content,
that Sean tried to draw attention to this duplication on July 16th.
Collaboration is much more than "our meeting decided you shouldn't do your
work." Perhaps taking a step back and acknowledging the work of others
might set a nicer tone for your efforts.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [Infra] Meeting Tuesday July 28th at 19:00 UTC

2015-07-27 Thread James E. Blair
Elizabeth K. Joseph l...@princessleia.com writes:

 Hi everyone,

 The OpenStack Infrastructure (Infra) team is having our next weekly
 meeting on Tuesday July 28th, at 19:00 UTC in #openstack-meeting

 Meeting agenda available here:
 https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
 welcome to add agenda items)

 Everyone interested in infrastructure and process surrounding
 automated testing and deployment is encouraged to attend.

I know we're generally not keen on status reports, but I think it might
be a good idea to check in on the priority efforts this week and find
out what we can do to clear some of them off our plate.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to debug neutron using eclipse pydev?

2015-07-27 Thread Daniel Comnea
+100 on what Sean said

On Mon, Jul 27, 2015 at 9:39 PM, Sean M. Collins s...@coreitpro.com wrote:

 We should have the Wiki page redirect, or link to:

 https://github.com/openstack/neutron/blob/master/TESTING.rst#debugging

 And then update that RST file to add any info we have about
 debugging under IDEs. Generally, I dislike wikis because they go stale
 very quickly and aren't as well maintained as files in the code repo
 (hopefully).


 --
 Sean M. Collins

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to debug neutron using eclipse pydev?

2015-07-27 Thread Madhusudhan Kandadai
I agree with Sean's note.

On Mon, Jul 27, 2015 at 1:49 PM, Assaf Muller amul...@redhat.com wrote:

 +1

 - Original Message -
  We should have the Wiki page redirect, or link to:
 
  https://github.com/openstack/neutron/blob/master/TESTING.rst#debugging
 
  And then update that RST file to add any info we have about
  debugging under IDEs. Generally, I dislike wikis because they go stale
  very quickly and aren't as well maintained as files in the code repo
  (hopefully).
 
 
  --
  Sean M. Collins
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-announce] [release][oslo] taskflow release 1.17.0 (liberty)

2015-07-27 Thread davanum
We are content to announce the release of:

taskflow 1.17.0: Taskflow structured state management library.

This release is part of the liberty release series.

With source available at:

http://git.openstack.org/cgit/openstack/taskflow

With package available at:

https://pypi.python.org/pypi/taskflow

For more details, please see the git log history below and:

http://launchpad.net/taskflow/+milestone/1.17.0

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

Changes in taskflow 1.16.0..1.17.0
--

9ad7ec6 Modify listeners to handle the results now possible from revert()
16d9914 Updated from global requirements
7e1d330 Fix lack of space between functions
359cc49 Create and use a serial retry executor
e5092b6 Just link to the worker engine docs instead of including a TOC inline
5bbf8cf Link to run() method in engines doc
58fbfd0 Add ability to reset an engine via a `reset` method
02c83d4 Remove **most** usage of taskflow.utils in examples
7bc1be0 Unify the zookeeper/redis jobboard iterators
e004197 Use io.open vs raw open
14de80d Make currently implemented jobs use @functools.total_ordering
0d884a2 Use encodeutils for exception - string function
db7af3f Remove kazoo hack/fix for issue no longer needed
3e16e24 Update states comment to refer to task section
9e6ef18 Document more of the retry subclasses special keyword arguments

Diffstat (except docs and test files)
-

requirements.txt  |   2 +-
taskflow/engines/action_engine/actions/base.py|  16 ++
taskflow/engines/action_engine/actions/retry.py   |  66 +++-
taskflow/engines/action_engine/actions/task.py|   6 +-
taskflow/engines/action_engine/completer.py   |  11 +-
taskflow/engines/action_engine/engine.py  |  37 +++--
taskflow/engines/action_engine/executor.py|  48 +-
taskflow/engines/action_engine/runner.py  |   3 +-
taskflow/engines/action_engine/runtime.py |  10 +-
taskflow/engines/action_engine/scheduler.py   |   6 +-
taskflow/engines/base.py  |  15 +-
taskflow/examples/dump_memory_backend.py  |  14 +-
taskflow/examples/hello_world.py  |  14 +-
taskflow/examples/parallel_table_multiply.py  |   6 +-
taskflow/examples/persistence_example.py  |   3 +-
taskflow/examples/resume_from_backend.py  |  31 ++--
taskflow/examples/resume_vm_boot.py   |  18 ++-
taskflow/examples/resume_volume_create.py |  13 +-
taskflow/examples/run_by_iter.py  |   7 +-
taskflow/examples/run_by_iter_enumerate.py|   7 +-
taskflow/examples/switch_graph_flow.py|  12 +-
taskflow/exceptions.py|  33 ++--
taskflow/jobs/backends/impl_redis.py  |  64 +---
taskflow/jobs/backends/impl_zookeeper.py  |  95 +++
taskflow/jobs/base.py |  73 +
taskflow/listeners/base.py|   8 +-
taskflow/listeners/logging.py |  26 +--
taskflow/persistence/backends/impl_dir.py |   8 +-
taskflow/persistence/models.py|  94 +++
taskflow/retry.py |  39 -
taskflow/states.py|   1 +
taskflow/types/failure.py |  12 +-
taskflow/utils/kazoo_utils.py |   8 +-
taskflow/utils/mixins.py  |  35 
taskflow/utils/persistence_utils.py   |  75 -
test-requirements.txt |   3 +-
45 files changed, 776 insertions(+), 427 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 4c9740c..7ae9099 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -17 +17 @@ enum34;python_version=='2.7' or python_version=='2.6'
-futurist>=0.1.1 # Apache-2.0
+futurist>=0.1.2 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index b0ed54c..5a06c57 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7,2 +7 @@ oslotest>=1.7.0 # Apache-2.0
-mock!=1.1.4,>=1.1;python_version!='2.6'
-mock==1.0.1;python_version=='2.6'
+mock>=1.2



___
OpenStack-announce mailing list
OpenStack-announce@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce


[openstack-announce] [release][oslo] oslo.service release 0.5.0 (liberty)

2015-07-27 Thread davanum
We are pleased to announce the release of:

oslo.service 0.5.0: oslo.service library

This release is part of the liberty release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.service

With package available at:

https://pypi.python.org/pypi/oslo.service

For more details, please see the git log history below and:

http://launchpad.net/oslo.service/+milestone/0.5.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.service

Changes in oslo.service 0.4.0..0.5.0


87cdb36 Updated from global requirements
3d9ae77 Updated from global requirements
2b01b95 Updated from global requirements
6726c25 Add oslo_debug_helper to tox.ini
a6cd5df Add usage documentation for oslo_service.service module
390b934 save docstring, name etc using six.wraps

Diffstat (except docs and test files)
-

oslo_service/loopingcall.py |  2 +
oslo_service/service.py | 13 +--
requirements.txt|  2 +-
setup.py|  2 +-
test-requirements.txt   |  5 +--
tox.ini |  5 ++-
8 files changed, 103 insertions(+), 19 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 1ed395c..b4d7b3d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ monotonic>=0.1 # Apache-2.0
-oslo.utils>=1.6.0 # Apache-2.0
+oslo.utils>=1.9.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 2d275f9..3455de1 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -6,3 +6,2 @@ hacking<0.11,>=0.10.0
-mock>=1.1;python_version!='2.6'
-mock==1.0.1;python_version=='2.6'
-oslotest>=1.5.1 # Apache-2.0
+mock>=1.2
+oslotest>=1.7.0 # Apache-2.0



___
OpenStack-announce mailing list
OpenStack-announce@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce


[openstack-announce] [release][oslo] oslo.cache release 0.4.0 (liberty)

2015-07-27 Thread davanum
We are gleeful to announce the release of:

oslo.cache 0.4.0: Cache storage for OpenStack projects.

This release is part of the liberty release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.cache

With package available at:

https://pypi.python.org/pypi/oslo.cache

For more details, please see the git log history below and:

http://launchpad.net/oslo.cache/+milestone/0.4.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.cache

Changes in oslo.cache 0.3.0..0.4.0
--

5881ad7 Added NO_VALUE to core file
212af3d Updated from global requirements
9e20d07 Updated from global requirements
789daa9 Updated from global requirements
a267390 Fix some reminders of 'keystone' in oslo.cache
3d4e2d6 Updated from global requirements

Diffstat (except docs and test files)
-

oslo_cache/_opts.py  |  4 ++--
oslo_cache/backends/dictionary.py| 14 ++--
oslo_cache/backends/mongo.py | 19 +---
oslo_cache/backends/noop.py  |  7 +++---
oslo_cache/core.py   | 10 +
requirements.txt |  4 ++--
setup.py |  2 +-
test-requirements.txt|  5 ++---
11 files changed, 64 insertions(+), 41 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 29bd4fa..d2bd1d5 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10,2 +10,2 @@ oslo.i18n>=1.5.0 # Apache-2.0
-oslo.log>=1.2.0 # Apache-2.0
-oslo.utils>=1.6.0 # Apache-2.0
+oslo.log>=1.6.0 # Apache-2.0
+oslo.utils>=1.9.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 5be8937..773ccda 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5,3 +5,2 @@ hacking<0.11,>=0.10.0
-mock>=1.1;python_version!='2.6'
-mock==1.0.1;python_version=='2.6'
-oslotest>=1.5.1 # Apache-2.0
+mock>=1.2
+oslotest>=1.7.0 # Apache-2.0



___
OpenStack-announce mailing list
OpenStack-announce@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce


[openstack-announce] [release][oslo] oslo.policy release 0.8.0 (liberty)

2015-07-27 Thread davanum
We are stoked to announce the release of:

oslo.policy 0.8.0: Oslo Policy library

This release is part of the liberty release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.policy

With package available at:

https://pypi.python.org/pypi/oslo.policy

For more details, please see the git log history below and:

http://launchpad.net/oslo.policy/+milestone/0.8.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.policy

Changes in oslo.policy 0.7.0..0.8.0
---

92cc71d Updated from global requirements
b5f78aa Fix typo of 'available' in token_fixture.py
59ebc39 Fixes up the API docs and module index

Diffstat (except docs and test files)
-

requirements.txt   |  2 +-
setup.cfg  |  1 +
setup.py   |  2 +-
test-requirements.txt  |  2 +-
8 files changed, 9 insertions(+), 25 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 62ebd6c..6016da0 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.serialization>=1.4.0 # Apache-2.0
-oslo.utils>=1.6.0 # Apache-2.0
+oslo.utils>=1.9.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index eb6240f..d649626 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -6 +6 @@ hacking<0.11,>=0.10.0
-oslotest>=1.5.1 # Apache-2.0
+oslotest>=1.7.0 # Apache-2.0



___
OpenStack-announce mailing list
OpenStack-announce@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce


[openstack-announce] [release][oslo] tooz release 1.20.0 (liberty)

2015-07-27 Thread davanum
We are excited to announce the release of:

tooz 1.20.0: Coordination library for distributed systems.

This release is part of the liberty release series.

With source available at:

http://git.openstack.org/cgit/openstack/tooz

With package available at:

https://pypi.python.org/pypi/tooz

For more details, please see the git log history below and:

http://launchpad.net/python-tooz/+milestone/1.20.0

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

Changes in tooz 1.19.0..1.20.0
--

7accc78 Updated from global requirements
a07405d Updated from global requirements
8511d4c Use futurist to allow for executor providing and unifying
3216c90 Use a lua script(s) instead of transactions
c5bda82 Update .gitignore

Diffstat (except docs and test files)
-

.gitignore|  10 ++-
requirements.txt  |   1 +
test-requirements.txt |   8 +-
tooz/drivers/file.py  |  37 +++--
tooz/drivers/ipc.py   |   5 +-
tooz/drivers/memcached.py |   8 +-
tooz/drivers/redis.py | 196 --
tooz/utils.py |  87 
9 files changed, 309 insertions(+), 96 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 340681d..f849a5c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -13,0 +14 @@ futures>=3.0;python_version=='2.7' or python_version=='2.6'
+futurist>=0.1.2 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 5c2e6bc..8e27295 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -10,2 +10 @@ doc8 # Apache-2.0
-mock!=1.1.4,>=1.1;python_version!='2.6'
-mock==1.0.1;python_version=='2.6'
+mock>=1.2
@@ -26 +25 @@ kazoo>=2.2
-pymemcache>=1.2.9 # Apache 2.0 License
+pymemcache!=1.3.0,>=1.2.9 # Apache 2.0 License
@@ -27,0 +27,3 @@ redis>=2.10.0
+
+# Ensure that the eventlet executor continues to operate...
+eventlet>=0.17.4



___
OpenStack-announce mailing list
OpenStack-announce@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce


[openstack-announce] [release][oslo] oslo.i18n release 2.2.0 (liberty)

2015-07-27 Thread davanum
We are happy to announce the release of:

oslo.i18n 2.2.0: Oslo i18n library

This release is part of the liberty release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.i18n

With package available at:

https://pypi.python.org/pypi/oslo.i18n

For more details, please see the git log history below and:

http://launchpad.net/oslo.i18n/+milestone/2.2.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.i18n

Changes in oslo.i18n 2.1.0..2.2.0
-

011da7d Imported Translations from Transifex
4da12d3 Updated from global requirements
aa48eb7 Updated from global requirements
8bbcfb9 Updated from global requirements
77e504d Updated from global requirements
1b0b713 Fix mock use for 1.1.0
75b76ee Add requirements for pre-release test scripts
def5cfc Imported Translations from Transifex
ecb9120 Only define CONTEXT_SEPARATOR once

Diffstat (except docs and test files)
-

oslo.i18n/locale/de/LC_MESSAGES/oslo.i18n.po| 4 ++--
oslo.i18n/locale/en_GB/LC_MESSAGES/oslo.i18n.po | 4 ++--
oslo.i18n/locale/es/LC_MESSAGES/oslo.i18n.po| 2 +-
oslo.i18n/locale/fr/LC_MESSAGES/oslo.i18n.po| 4 ++--
oslo.i18n/locale/it/LC_MESSAGES/oslo.i18n.po| 4 ++--
oslo.i18n/locale/ko_KR/LC_MESSAGES/oslo.i18n.po | 4 ++--
oslo.i18n/locale/pl_PL/LC_MESSAGES/oslo.i18n.po | 4 ++--
oslo.i18n/locale/zh_CN/LC_MESSAGES/oslo.i18n.po | 2 +-
oslo_i18n/_factory.py   | 2 +-
requirements.txt| 2 +-
setup.py| 2 +-
test-requirements.txt   | 6 +-
14 files changed, 24 insertions(+), 20 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 7a4021d..c76df5d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@
-pbr<2.0,>=0.11
+pbr<2.0,>=1.3
diff --git a/test-requirements.txt b/test-requirements.txt
index 6a5e8d9..cb4b355 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -10 +10,2 @@ oslosphinx>=2.5.0 # Apache-2.0
-oslotest>=1.5.1 # Apache-2.0
+mock>=1.2
+oslotest>=1.7.0 # Apache-2.0
@@ -11,0 +13,3 @@ coverage>=3.6
+
+# for pre-release tests
+oslo.config>=1.11.0 # Apache-2.0



___
OpenStack-announce mailing list
OpenStack-announce@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce

