Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-28 Thread Lars Kellogg-Stedman
On Tue, Jun 19, 2018 at 05:17:36PM +0200, Jiří Stránský wrote:
> For the puppet modules specifically, we might also add another
> directory+mount into the docker-puppet container, which would be blank by
> default (unlike the existing, already populated /etc/puppet and
> /usr/share/openstack-puppet/modules). And we'd put that directory at the
> very start of modulepath. Then i *think* puppet would use a particular
> module from that dir *only*, not merge the contents with the rest of
> modulepath...

No, you would still have the problem that types/providers from *all*
available paths are activated, so if in your container you have
/etc/puppet/modules/themodule/lib/puppet/provider/something/foo.rb,
and you mount into the container
/container/puppet/modules/themodule/lib/puppet/provider/something/bar.rb,
then you end up with both foo.rb and bar.rb active and possibly
conflicting.

This only affects module lib directories. As Alex pointed out, puppet
classes themselves behave differently and don't conflict in this
fashion.
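To make the failure mode concrete, here's a toy layout (hypothetical paths, shell sketch): both lib trees are visible to Puppet's loader, because pluginsync-style lookup walks lib/ under *every* modulepath entry rather than stopping at the first module match:

```shell
# Hypothetical layout: the same module exists in two modulepath entries,
# each shipping a different provider file for the same type.
base=$(mktemp -d)
mkdir -p "$base/etc-puppet/modules/themodule/lib/puppet/provider/something"
mkdir -p "$base/container-puppet/modules/themodule/lib/puppet/provider/something"
touch "$base/etc-puppet/modules/themodule/lib/puppet/provider/something/foo.rb"
touch "$base/container-puppet/modules/themodule/lib/puppet/provider/something/bar.rb"

# Puppet walks lib/ under *every* modulepath entry, so both provider files
# end up active -- there is no "first match wins" for lib directories.
find "$base" -path '*/lib/puppet/provider/*' -name '*.rb' | sort
```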

-- 
Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
http://blog.oddbit.com/|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-28 Thread Lars Kellogg-Stedman
On Tue, Jun 19, 2018 at 10:12:54AM -0600, Alex Schultz wrote:
> -1 to more services. We take a Heat time penalty for each new
> composable service we add and in this case I don't think this should
> be a service itself.  I think for this case, it would be better suited
> as a host prep task than a defined service.  Providing a way for users
> to define external host prep tasks might make more sense.

But right now, the only way to define a host_prep_task is via a
service template, right?  What I've done for this particular case is
create a new service template that exists only to provide a set of
host_prep_tasks:

  
https://github.com/CCI-MOC/rhosp-director-config/blob/master/templates/services/patch-puppet-modules.yaml

Is there a better way to do this?
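For reference, such a host-prep-only template boils down to something like this (a trimmed, illustrative sketch, not the actual file; the set of boilerplate parameters TripleO requires varies by release, and the service name and rsync step are made up):

```yaml
heat_template_version: queens

description: >
  No-op "service" whose only purpose is to ship host_prep_tasks.

parameters:
  # TripleO passes these to every service template; they must be declared
  # even if unused (the exact set depends on the release).
  ServiceData:
    default: {}
    type: json
  ServiceNetMap:
    default: {}
    type: json
  EndpointMap:
    default: {}
    type: json
  RoleName:
    default: ''
    type: string
  RoleParameters:
    default: {}
    type: json
  DefaultPasswords:
    default: {}
    type: json

outputs:
  role_data:
    description: Role data for a host-prep-only "service"
    value:
      service_name: patch_puppet_modules
      config_settings: {}
      host_prep_tasks:
        # Ordinary Ansible tasks, run on the host at deploy time.
        - name: apply local puppet module patches
          shell: |
            rsync -a /opt/puppet-overrides/ /etc/puppet/modules/
```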

-- 
Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
http://blog.oddbit.com/|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun][zun-ui] Priorities of new feature on Zun UI

2018-06-28 Thread Shu M.
Hi Hongbin,

Thank you for filling in your opinions! I'd like to consider the plan for
Zun UI in Stein.

Best regards,
Shu

2018年6月27日(水) 21:45 Hongbin Lu :

> Hi Shu,
>
> Thanks for raising this discussion. I have filled in my opinion in the
> etherpad. In general, I am quite satisfied by the current feature set
> provided by the Zun UI. Thanks for the great work from the UI team.
>
> Best regards,
> Hongbin
>
> On Wed, Jun 27, 2018 at 12:18 AM Shu M.  wrote:
>
>> Hi folks,
>>
>> Could you let me know your thoughts on the priorities of new features on Zun
>> UI?
>> Could you jump to the following pad and fill in your opinions?
>> https://etherpad.openstack.org/p/zun-ui
>>
>> Best regards,
>> Shu
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation

2018-06-28 Thread Waines, Greg
In-lined comments / questions below,
Greg.



From: "Csatari, Gergely (Nokia - HU/Budapest)" <gergely.csat...@nokia.com>
Date: Thursday, June 28, 2018 at 3:35 AM
To: "ekuv...@redhat.com" <ekuv...@redhat.com>, Greg Waines <greg.wai...@windriver.com>, "openstack-dev@lists.openstack.org", "edge-comput...@lists.openstack.org"
Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation

Hi,

I’ve added the following pros and cons to the different options:




  *   One Glance with multiple backends [1]
[Greg]
I’m not sure I understand this option.

Is each Glance backend completely independent?  e.g. when I do a “glance
image-create ...” am I specifying a backend, and that's where the image is to be
stored?
This is what I was originally thinking.
So I was thinking that synchronization of images to edge clouds is simply done
by doing “glance image-create ...” against the appropriate backends.

But then you say “The synchronisation of the image data is the responsibility of
the backend (e.g. CEPH).” ... which makes it sound like my thinking above is
wrong and the backends are NOT completely independent, but are instead in some
sort of replication configuration ... is this leveraging the Ceph replication
factor or something (for example)?



 *   Pros:
*   Relatively easy to implement based on the current Glance 
architecture
 *   Cons:
*   Requires the same Glance backend in every edge cloud instance
*   Requires the same OpenStack version in every edge cloud instance 
(apart from during upgrade)
*   Sensitivity to network connection loss is not clear
[Greg] I could be wrong, but even though the OpenStack services in the edge
clouds use the images in their Glance backend via a direct URL,
I think the OpenStack services (e.g. nova) still need to get that direct URL via
the Glance API, which is ONLY available at the central site.
So I don't think this option supports autonomy of an edge subcloud when
connectivity to the central site is lost.



  *   Several Glances with an independent synchronisation service, sync via the
Glance API [2]
 *   Pros:
*   Every edge cloud instance can have a different Glance backend
*   Can support multiple OpenStack versions in the different edge cloud 
instances
*   Can be extended to support multiple VIM types
 *   Cons:
*   Needs a new synchronisation service
[Greg] Don’t believe this is a big con ... suspect we are going to need this 
new synchronization service for synchronizing resources of a number of other 
openstack services ... not just glance.



  *   Several Glances with an independent synchronisation service, sync using
the backend [3]
[Greg] This option seems a little odd to me.
We are syncing the Glance DB via some new synchronization service, but
syncing the images themselves via the backend ... I think it would be tricky
to ensure consistency.
 *   Pros:
*   I could not find any
 *   Cons:
*   Needs a new synchronisation service



  *   One Glance and multiple Glance API servers [4]
 *   Pros:
*   Implicitly location aware
 *   Cons:
*   First usage of an image always takes a long time
*   In case of a network connection error to the central Glance, Nova will
still have access to the images, but will not be able to figure out whether the
user has rights to use the image, and will not have a path to the image data
[Greg] Yeah, we tripped over the issue that although the Glance API can cache
the image itself, it does NOT cache the image metadata (which I am guessing
has info like “user access” etc.) ... so this option improves latency of access
to the image itself but does NOT provide autonomy.

We plan on looking at options to resolve this, as we like the “implicit
location awareness” of this option ... and believe it is an option that some
customers will like.
If anyone has any ideas, please share!
Are these correct? Did I miss anything?

Thanks,
Gerg0

[1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_with_multiple_backends
[2]: 

[openstack-dev] [Puppet] Requirements for running puppet unit tests?

2018-06-28 Thread Lars Kellogg-Stedman
Hey folks,

I'm looking for some guidance on how to successfully run rspec tests
for openstack puppet modules (specifically, puppet-keystone).  I
started with CentOS 7, but running the 'bundle install' command told
me:

  Gem::InstallError: public_suffix requires Ruby version >= 2.1.
  An error occurred while installing public_suffix (3.0.2), and Bundler cannot continue.
  Make sure that `gem install public_suffix -v '3.0.2'` succeeds before bundling.

So I tried it on my Fedora 28 system, and while the 'bundle install'
completed successfully, running `bundle exec rake lint` told me:

  $ bundle exec rake lint
  /home/lars/vendor/bundle/ruby/2.4.0/gems/puppet-2.7.26/lib/puppet/util/monkey_patches.rb:93: warning: constant ::Fixnum is deprecated
  rake aborted!
  NoMethodError: undefined method `<<' for nil:NilClass

...followed by a traceback.

So then I tried it on Ubuntu 18.04, and the bundle install fails with:

  Gem::RuntimeRequirementNotMetError: grpc requires Ruby version < 2.5, >= 2.0. The current ruby version is 2.5.0.
  An error occurred while installing grpc (1.7.0), and Bundler cannot continue.

And finally I tried Ubuntu 17.10.  The bundle install completed
successfully, but the 'rake lint' failed with:

  $ bundle exec rake lint
  /home/lars/vendor/bundle/ruby/2.3.0/gems/puppet-2.7.26/lib/puppet/defaults.rb:164: warning: key :queue_type is duplicated and overwritten on line 165
  rake aborted!
  can't modify frozen Symbol

What is required to successfully run the rspec tests?
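For what it's worth, one pattern that has worked elsewhere (a hedged sketch, not an authoritative answer; supported ruby/puppet combinations vary per module and branch) is to pin the puppet gem via PUPPET_GEM_VERSION before bundling, since every failure above shows bundler resolving the ancient puppet 2.7.26:

```shell
# Sketch: constrain the puppet gem before resolving the Gemfile, so bundler
# doesn't fall back to an ancient release (note puppet-2.7.26 in the
# tracebacks above). The '~> 5.5' value is an assumption, not gospel.
export PUPPET_GEM_VERSION='~> 5.5'

# Keep gems local to the checkout instead of polluting the system ruby.
echo 'bundle install --path .bundled_gems'
echo 'bundle exec rake lint'
echo 'bundle exec rake spec'
```

(The commands are echoed rather than executed here; drop the echoes in a real checkout.)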

-- 
Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
http://blog.oddbit.com/|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-28 Thread Ghanshyam Mann



  On Fri, 29 Jun 2018 00:05:09 +0900 Dmitry Tantsur  
wrote  
 > On 06/27/2018 03:17 AM, Ghanshyam Mann wrote:
 > > 
 > > 
 > > 
 > >    On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann 
 > >  wrote 
 > >   > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400:
 > >   > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote:
 > >   > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 
 > > +0100:
 > >   > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, 
 > >  wrote:
 > >   > > > >
 > >   > > > > > Dmitry Tantsur wrote:
 > >   > > > > > > [...]
 > >   > > > > > > My suggestion: tempest has to be compatible with all 
 > > supported releases
 > >   > > > > > > (of both services and plugins) OR be branched.
 > >   > > > > > > [...]
 > >   > > > > > I tend to agree with Dmitry... We have a model for things that 
 > > need
 > >   > > > > > release alignment, and that's the cycle-bound series. The 
 > > reason tempest
 > >   > > > > > is branchless was because there was no compatibility issue. If 
 > > the split
 > >   > > > > > of tempest plugins introduces a potential incompatibility, 
 > > then I would
 > >   > > > > > prefer aligning tempest to the existing model rather than 
 > > introduce a
 > >   > > > > > parallel tempest-specific cycle just so that tempest can stay
 > >   > > > > > release-independent...
 > >   > > > > >
 > >   > > > > > I seem to remember there were drawbacks in branching tempest, 
 > > though...
 > >   > > > > > Can someone with functioning memory brain cells summarize them 
 > > again ?
 > >   > > > > >
 > >   > > > >
 > >   > > > >
 > >   > > > > Branchless Tempest enforces api stability across branches.
 > >   > > >
 > >   > > > I'm sorry, but I'm having a hard time taking this statement 
 > > seriously
 > >   > > > when the current source of tension is that the Tempest API itself
 > >   > > > is breaking for its plugins.
 > >   > > >
 > >   > > > Maybe rather than talking about how to release compatible things
 > >   > > > together, we should go back and talk about why Tempest's API is 
 > > changing
 > >   > > > in a way that can't be made backwards-compatible. Can you give 
 > > some more
 > >   > > > detail about that?
 > >   > > >
 > >   > >
 > >   > > Well it's not, if it did that would violate all the stability 
 > > guarantees
 > >   > > provided by Tempest's library and plugin interface. I've not ever 
 > > heard of
 > >   > > these kind of backwards incompatibilities in those interfaces and we 
 > > go to
 > >   > > all effort to make sure we don't break them. Where did the idea that
 > >   > > backwards incompatible changes where being introduced come from?
 > >   >
 > >   > In his original post, gmann said, "There might be some changes in
 > >   > Tempest which might not work with older version of Tempest Plugins."
 > >   > I was surprised to hear that, but I'm not sure how else to interpret
 > >   > that statement.
 > > 
 > > I did not mean to say that Tempest will introduce the changes in backward 
 > > incompatible way which can break plugins. That cannot happen as all 
 > > plugins and tempest are branchless and they are being tested with master 
 > > Tempest so if we change anything backward incompatible then it break the 
 > > plugins gate. Even we have to remove any deprecated interfaces from 
 > > Tempest, we fix all plugins first like - 
 > > https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged)
 > > 
 > > What I mean to say here is that adding new or removing deprecated 
 > > interface in Tempest might not work with all released version or 
 > > unreleased Plugins. That point is from point of view of using Tempest and 
 > > Plugins in production cloud testing not gate(where we keep the 
 > > compatibility). Production Cloud user use Tempest cycle based version. 
 > > Pike based Cloud will be tested by Tempest 17.0.0 not latest version 
 > > (though latest version might work).
 > > 
 > > This thread is not just for gate testing point of view (which seems to be 
 > > always interpreted), this is more for user using Tempest and Plugins for 
 > > their cloud testing. I am looping  operator mail list also which i forgot 
 > > in initial post.
 > > 
 > > We do not have any tag/release from plugins to know what version of plugin 
 > > can work with what version of tempest. For Example If There is new 
 > > interface introduced by Tempest 19.0.0 and pluginX start using it. Now it 
 > > can create issues for pluginX in both release model 1. plugins with no 
 > > release (I will call this PluginNR), 2. plugins with independent release 
 > > (I will call it PluginIR).
 > > 
 > > Users (Not Gate) will face below issues:
 > > - User cannot use PluginNR with Tempest <19.0.0 (where that new interface 
 > > was not present). And there is no PluginNR release/tag as this is 
 > > unreleased and not branched software.
 > > - User cannot find a PluginIR particular 

[openstack-dev] [neutron] Canceling Neutron drivers meeting on June 29th

2018-06-28 Thread Miguel Lavalle
Dear Neutron Team,

This week we don't have RFEs in the triaged stage to be discussed during
our weekly drivers meeting. As a consequence, I am canceling the meeting on
June 29th at 1400 UTC.

We have new RFEs and RFEs in the confirmed stage. I encourage the team to
look at them, add your opinions, and help move them to the triaged stage.


Best regards

Miguel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][heat] Identifying secrets in Barbican

2018-06-28 Thread Zane Bitter

On 28/06/18 15:00, Douglas Mendizabal wrote:

Replying inline.

[snip]

IIRC, using URIs instead of UUIDs was a federation pre-optimization
done many years ago when Barbican was brand new and we knew we wanted
federation but had no idea how it would work.  The rationale was that
the URI would contain both the ID of the secret as well as the location
of where it was stored.

In retrospect, that was a terrible idea, and using UUIDs for
consistency with the rest of OpenStack would have been a better choice.
I've added a story to the python-barbicanclient storyboard to enable
usage of UUIDs instead of URIs:

https://storyboard.openstack.org/#!/story/2002754


Cool, thanks for clearing that up. If UUID is going to become the/a 
standard way to reference stuff in the future then we'll just use the 
UUID for the property value.



I'm sure you've noticed, but the URI that identifies the secret
includes the UUID that Barbican uses to identify the secret internally:

http://{barbican-host}:9311/v1/secrets/{UUID}

So you don't actually need to store the URI, since it can be
reconstructed by just saving the UUID and then using whatever URL
Barbican has in the service catalog.
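In other words, client code can treat the trailing path segment of the ref as the identifier (a minimal sketch; the host and UUID below are illustrative):

```python
from urllib.parse import urlparse

def secret_uuid(secret_ref: str) -> str:
    """Return the UUID portion of a Barbican secret URI.

    The ref looks like http://host:9311/v1/secrets/<uuid>; the UUID is
    simply the last path component.
    """
    return urlparse(secret_ref).path.rstrip("/").rsplit("/", 1)[-1]

ref = "http://barbican.example.com:9311/v1/secrets/0ec4a9d7-6a3b-4b5a-8b2e-7d3f1a2b3c4d"
print(secret_uuid(ref))  # -> 0ec4a9d7-6a3b-4b5a-8b2e-7d3f1a2b3c4d
```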



In a tangentially related question, since secrets are immutable once
they've been uploaded, what's the best way to handle a case where
you
need to rotate a secret without causing a temporary condition where
there is no version of the secret available? (The fact that there's
no
way to do this for Nova keypairs is a perpetual problem for people,
and
I'd anticipate similar use cases for Barbican.) I'm going to guess
it's:

* Create a new secret with the same name
* GET /v1/secrets/?name={name}&sort=created:desc&limit=1 to find out the
URL for the newest secret with that name
* Use that URL when accessing the secret
* Once the new secret is created, delete the old one

Should this, or whatever the actual recommended way of doing it is,
be
baked in to the client somehow so that not every user needs to
reimplement it?



When you store a secret (e.g. using POST /v1/secrets), the response
includes the URI both in the JSON body and in the Location: header.

There is no need for you to mess around with searching by name, since
Barbican does not use the name to identify a secret.  You should just
save the URI (or UUID) from the response, and then update the resource
using the old secret to point to the new secret instead.


Sometimes users will want to be able to rotate secrets without updating
all of the places they're referenced from, though.
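The create-before-delete ordering under discussion can be sketched generically; this is a pure-Python toy store, not the barbicanclient API (the class and method names are invented for illustration):

```python
import uuid

class SecretStore:
    """Toy stand-in for an immutable secret store keyed by UUID."""
    def __init__(self):
        self._secrets = {}  # ref -> (name, payload, serial)
        self._serial = 0

    def create(self, name, payload):
        self._serial += 1
        ref = str(uuid.uuid4())
        self._secrets[ref] = (name, payload, self._serial)
        return ref

    def delete(self, ref):
        del self._secrets[ref]

    def newest(self, name):
        """Newest secret with this name (what ?name=...&sort=created:desc&limit=1 asks for)."""
        matches = [(s, r) for r, (n, _, s) in self._secrets.items() if n == name]
        return max(matches)[1] if matches else None

def rotate(store, name, new_payload):
    """Rotate without a gap: create the replacement first, delete the old after."""
    old = store.newest(name)
    new = store.create(name, new_payload)   # both versions briefly coexist
    if old is not None:
        store.delete(old)                   # only now is the old one removed
    return new

store = SecretStore()
store.create("db-password", "hunter2")
ref = rotate(store, "db-password", "correct horse")
```

The point of the ordering is that a reader resolving "newest secret named X" sees a valid version at every instant of the rotation.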


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Problems with multi-regional OpenStack installation

2018-06-28 Thread Fei Long Wang
Hi Andrei,

Thanks for raising this issue. I'm keen to review and happy to help. I
just did a quick look at https://review.openstack.org/#/c/578356 and it
looks good to me.

As for the heat-container-engine issue, it's probably a bug. I will test
and propose a patch, which will then need a new image release. Will update
progress here. Cheers.
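As an illustration of the missing piece Andrei describes below, here is a plain-Python sketch of a Keystone-style catalog lookup with and without a region filter; it is not the actual magnum/heat-agent code, and the catalog contents are invented:

```python
def get_endpoint(catalog, service_type, interface="public", region_name=None):
    """Pick an endpoint URL from a Keystone-v3-style service catalog."""
    for service in catalog:
        if service["type"] != service_type:
            continue
        for ep in service["endpoints"]:
            if ep["interface"] != interface:
                continue
            # Without a region filter the *first* match wins, which in a
            # multi-region cloud may well be the wrong region's Heat.
            if region_name is None or ep["region"] == region_name:
                return ep["url"]
    return None

catalog = [{
    "type": "orchestration",
    "endpoints": [
        {"interface": "public", "region": "RegionOne", "url": "https://heat.r1.example:8004"},
        {"interface": "public", "region": "RegionTwo", "url": "https://heat.r2.example:8004"},
    ],
}]

print(get_endpoint(catalog, "orchestration"))                           # first region wins
print(get_endpoint(catalog, "orchestration", region_name="RegionTwo"))  # correct region
```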



On 28/06/18 19:11, Andrei Ozerov wrote:
> Greetings.
>
> Has anyone successfully deployed Magnum in a multi-regional
> OpenStack installation?
> In my case different services (Nova, Heat) have different public
> endpoints in every region. I couldn't start kube-apiserver until I
> added "region" to kube_openstack_config.
> I created a story with full description of that problem:
> https://storyboard.openstack.org/#!/story/2002728
>  and opened a
> review with a small fix: https://review.openstack.org/#/c/578356.
>
> But apart from that, I have another problem with this kind of OpenStack
> installation.
> Say I have two regions. When I create a cluster in the second
> OpenStack region, Heat-container-engine tries to fetch Stack data from
> the first region.
> It then throws the following error: "The Stack (name-uuid) could not
> be found". I can see GET requests for that stack in the logs of Heat-API
> in the first region but I don't see them in the second one (where that
> Heat stack actually exists).
>
> I'm assuming that Heat-container-engine doesn't pass "region_name"
> when it searches for Heat endpoints:
> https://github.com/openstack/magnum/blob/master/magnum/drivers/common/image/heat-container-agent/scripts/heat-config-notify#L149.
> I've tried to change it, but it's tricky because the
> heat-container-engine is installed via a Docker system image and it
> won't work after a restart if it failed during the initial bootstrap
> (because /var/run/heat-config/heat-config is empty).
> Can someone help me with that? I guess it's better to create a
> separate story for that issue?
>
> -- 
> Ozerov Andrei
> oze...@selectel.com 
> +7 (800) 555 06 75
> 
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc][python-openstackclient] osc-included image signing

2018-06-28 Thread Dean Troyer
On Thu, Jun 28, 2018 at 8:04 AM, Josephine Seifert
 wrote:
>> Go ahead and post WIP reviews and we can look at it further.  To merge
>> I'll want all of the usual tests, docs, release notes, etc but don't
>> wait if that is not all done up front.
> Here are the two WIP reviews:
>
> cursive: https://review.openstack.org/#/c/578767/
> osc: https://review.openstack.org/#/c/578769/

So one problem I have here is the dependencies of cursive, all of
which become OSC dependencies if cursive is added.  It includes
oslo.log which OSC does not use and doesn't want to use for $REASONS
that boil down to assumptions it makes for server-side use that are
not good for client-side use.

cursive includes castellan which also includes oslo.log and
oslo.context, which I must admit I don't know how it affects a CLI
because we've never tried to include it before.  python-barbicanclient
is also included by cursive, which would make that a new permanent
dependency.  This may be acceptable, it is partially up to the
barbican team if they want to be subject to OSC testing that they may
not have now.

Looking at the changes you have to cursive, if that is all you need
from it those bits could easily go somewhere in osc or osc-lib if you
don't also need them elsewhere.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] dropping selinux support

2018-06-28 Thread Mohammed Naser
Hi Paul:

On Thu, Jun 28, 2018 at 5:03 PM, Paul Belanger  wrote:
> On Thu, Jun 28, 2018 at 12:56:22PM -0400, Mohammed Naser wrote:
>> Hi everyone:
>>
>> This email is to ask if there is anyone out there opposed to removing
>> SELinux bits from OpenStack ansible, it's blocking some of the gates
>> and the maintainers for them are no longer working on the project
>> unfortunately.
>>
>> I'd like to propose removing any SELinux stuff from OSA based on the 
>> following:
>>
>> 1) We don't gate on it, we don't test it, we don't support it.  If
>> you're running OSA with SELinux enforcing, please let us know how :-)
>> 2) It extends beyond the scope of the deployment project and there are
>> no active maintainers with the resources to deal with them
>> 3) With the work currently in place to let OpenStack Ansible install
>> distro packages, we can rely on upstream `openstack-selinux` package
>> to deliver deployments that run with SELinux on.
>>
>> Is there anyone opposed to removing it?  If so, please let us know. :-)
>>
> While I don't use OSA, I would be surprised to learn that selinux wouldn't be
> supported.  I also understand it requires time and care to maintain. Have you
> tried reaching out to people in #RDO? IIRC all those packages should support
> selinux.

Indeed, the support from RDO for SELinux works very well.  In this case, however,
OpenStack Ansible deploys from source and therefore places binaries in different
locations than those expected by the upstream `openstack-selinux` package.

As we work towards adding 'distro' support (which, to clarify, means
installing from RPMs or DEBs rather than from source), we'll be able to pull
in that package and automagically get SELinux support maintained by an
upstream that tracks it.

> As for gating, maybe default to SELinux permissive mode so it reports errors
> but does not fail.  And if anybody is interested in supporting it, they can
> enable enforcing again when everything is fixed.

That's reasonable.  However, right now we have bugs around the distribution
of SELinux modules and how they are compiled inside the containers,
which means the problem is not so much the rules themselves as uploading
the rules and getting them compiled on the target.

I hope I've cleared up our side of things a bit; I'm actually looking forward
to us being able to support upstream distro packages.

> - Paul
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] dropping selinux support

2018-06-28 Thread Paul Belanger
On Thu, Jun 28, 2018 at 12:56:22PM -0400, Mohammed Naser wrote:
> Hi everyone:
> 
> This email is to ask if there is anyone out there opposed to removing
> SELinux bits from OpenStack ansible, it's blocking some of the gates
> and the maintainers for them are no longer working on the project
> unfortunately.
> 
> I'd like to propose removing any SELinux stuff from OSA based on the 
> following:
> 
> 1) We don't gate on it, we don't test it, we don't support it.  If
> you're running OSA with SELinux enforcing, please let us know how :-)
> 2) It extends beyond the scope of the deployment project and there are
> no active maintainers with the resources to deal with them
> 3) With the work currently in place to let OpenStack Ansible install
> distro packages, we can rely on upstream `openstack-selinux` package
> to deliver deployments that run with SELinux on.
> 
> Is there anyone opposed to removing it?  If so, please let us know. :-)
> 
While I don't use OSA, I would be surprised to learn that selinux wouldn't be
supported.  I also understand it requires time and care to maintain. Have you
tried reaching out to people in #RDO? IIRC all those packages should support
selinux.

As for gating, maybe default to SELinux permissive mode so it reports errors but
does not fail.  And if anybody is interested in supporting it, they can enable
enforcing again when everything is fixed.

- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-28 Thread Fox, Kevin M
I'll weigh in a bit with my operator hat on, as recent experience pertains to
the current conversation.

Kubernetes has largely succeeded in building common distribution tools where
OpenStack has not been able to.
kubeadm was created as a way to centralize deployment best practices, config,
and upgrade logic into a common code base that other deployment tools can
build on.

I think this has been successful for a few reasons:
 * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. (Eating 
its own dogfood)
 * was willing to make their api robust enough to handle that self enhancement. 
(secrets are a thing, orchestration is not optional, etc)
 * they decided to produce a reference product (very important to adoption IMO. 
You don't have to "build from source" to kick the tires.)
 * made the barrier to testing/development as low as 'curl 
http://..minikube; minikube start' (this spurs adoption and contribution)
 * not having large silos in deployment projects allowed better communication
on common tooling.
 * Operator focused architecture, not project based architecture. This 
simplifies the deployment situation greatly.
 * try whenever possible to focus on just the commons and push vendor specific 
needs to plugins so vendors can deal with vendor issues directly and not 
corrupt the core.

I've upgraded many OpenStacks since Essex and usually it is multiple weeks of
prep and a 1-2 day outage to perform the deed. In about 50% of the upgrades,
something broke only on the production system and needed hot patching on the
spot. About 10% of the time, I've had to write the patch personally.

I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For comparison, 
what did I have to do? A couple hours of looking at release notes and trying to 
dig up examples of where things broke for others. Nothing popped up. Then:

on the controller, I ran:
yum install -y kubeadm #get the newest kubeadm
kubeadm upgrade plan #check things out

It told me I had 2 choices. I could:
 * kubeadm upgrade v1.9.8
 * kubeadm upgrade v1.10.5

I ran:
kubeadm upgrade v1.10.5

The control plane was down for under 60 seconds and then the cluster was 
upgraded. The rest of the services did a rolling upgrade live and took a few 
more minutes.

I can take my time to upgrade kubelets, as mixed kubelet versions work well.

Upgrading kubelet is about as easy.
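For reference, the per-node kubelet step is roughly the following sketch (the drain/uncordon dance and the package naming follow the upstream kubeadm upgrade docs, not this exact cluster; <node> is a placeholder):

```shell
kubectl drain <node> --ignore-daemonsets   # move workloads off the node
yum install -y kubelet                     # upgrade the kubelet package
systemctl restart kubelet                  # run the new version
kubectl uncordon <node>                    # let workloads schedule there again
```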

Done.

There are a lot of things to learn from the governance / architecture of 
Kubernetes.

Fundamentally, there aren't huge differences in what Kubernetes and OpenStack 
try to provide users. Scheduling a VM or a container via an API with some kind 
of networking and storage is the same kind of thing in either case.

How you get the software (OpenStack or k8s) running, though, is about as polar 
opposite as you can get.

I think if OpenStack wants to gain back some of the steam it had before, it 
needs to adjust to the new world it is living in. This means:
 * Consider abolishing the project walls. They are driving bad architecture 
(not intentionally, but as a side effect of structure)
 * focus on the commons first.
 * simplify the architecture for ops:
   * make as much as possible stateless and centralize remaining state.
   * stop moving config options around with every release. Make them carry 
forward automatically and persist somewhere.
   * improve serial performance before sharding. k8s can do 5000 nodes on one 
control plane; there's no reason to do nova cells and make ops deal with it 
except for the very largest of clouds
 * consider a reference product (think of the vanilla Linux kernel; distros 
can provide their own variants, and that's OK)
 * come up with an architecture team for the whole, not the subsystem. The 
whole thing needs to work well.
 * encourage current OpenStack devs to test/deploy Kubernetes. It has some very 
good ideas that OpenStack could benefit from. If you don't know what they are, 
you can't adopt them.

And I know it's hard to talk about, but consider just adopting k8s as the 
commons and building on top of it. OpenStack's APIs are good. The 
implementations right now are very, very heavy for ops. You could tie in k8s's 
pod scheduler with VM workloads running in containers and get a vastly simpler 
architecture for operators to deal with. Yes, this would be a major disruptive 
change to OpenStack. But long term, I think it would make for a much healthier 
OpenStack.

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Wednesday, June 27, 2018 4:23 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 27/06/18 07:55, Jay Pipes wrote:
> WARNING:
>
> Danger, Will Robinson! Strong opinions ahead!

I'd have been disappointed with anything less :)

> On 06/26/2018 10:00 PM, Zane Bitter wrote:
>> On 26/06/18 09:12, Jay Pipes wrote:
>>> Is (one of) the problem(s) with our community that we have too small
>>> of a scope/footprint? No. Not in the slightest.
>>
>> 

Re: [openstack-dev] [barbican][heat] Identifying secrets in Barbican

2018-06-28 Thread Douglas Mendizabal
Replying inline.

On Wed, 2018-06-27 at 16:39 -0400, Zane Bitter wrote:
> We're looking at using Barbican to implement a feature in Heat[1]
> and 
> ran into some questions about how secrets are identified in the
> client.
> 
> With most openstack clients, resources are identified by a UUID. You 
> pass the UUID on the command line (or via the Python API or
> whatever) 
> and the client combines that with the endpoint of the service
> obtained 
> from the service catalog and a path to the resource type to generate
> the 
> URL used to access the resource.
> 
> While there appears to be no technical reason that barbicanclient 
> couldn't also do this, instead of just the UUID it uses the full URL
> as 
> the identifier for the resource. This is extremely cumbersome for
> the 
> user, and invites confused-deputy attacks where if the attacker can 
> control the URL, they can get barbicanclient to send a token to an 
> arbitrary URL. What is the rationale for doing it this way?
> 

IIRC, using URIs instead of UUIDs was a federation pre-optimization
done many years ago when Barbican was brand new and we knew we wanted
federation but had no idea how it would work.  The rationale was that
the URI would contain both the ID of the secret as well as the location
of where it was stored.

In retrospect, that was a terrible idea, and using UUIDs for
consistency with the rest of OpenStack would have been a better choice.
 I've added a story to the python-barbicanclient storyboard to enable
usage of UUIDs instead of URLs:

https://storyboard.openstack.org/#!/story/2002754

I'm sure you've noticed, but the URI that identifies the secret
includes the UUID that Barbican uses to identify the secret internally:

http://{barbican-host}:9311/v1/secrets/{UUID}

So you don't actually need to store the URI, since it can be
reconstructed by just saving the UUID and then using whatever URL
Barbican has in the service catalog.
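In other words, the client-side "reconstruction" is just string assembly, e.g. (the endpoint and UUID below are made-up placeholder values):

```shell
# Rebuild a secret URI from the key-manager endpoint in the service
# catalog plus the stored UUID. Both values here are illustrative.
BARBICAN_ENDPOINT="http://barbican-host:9311"
SECRET_UUID="6ce26c45-1f1a-4840-95a5-1e139ee50bbf"
SECRET_URI="${BARBICAN_ENDPOINT%/}/v1/secrets/${SECRET_UUID}"
echo "${SECRET_URI}"
```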

> 
> In a tangentially related question, since secrets are immutable once 
> they've been uploaded, what's the best way to handle a case where
> you 
> need to rotate a secret without causing a temporary condition where 
> there is no version of the secret available? (The fact that there's
> no 
> way to do this for Nova keypairs is a perpetual problem for people,
> and 
> I'd anticipate similar use cases for Barbican.) I'm going to guess
> it's:
> 
> * Create a new secret with the same name
> * GET /v1/secrets/?name={name}&sort=created:desc&limit=1 to find out
> the 
> URL for the newest secret with that name
> * Use that URL when accessing the secret
> * Once the new secret is created, delete the old one
> 
> Should this, or whatever the actual recommended way of doing it is,
> be 
> baked in to the client somehow so that not every user needs to 
> reimplement it?
> 

When you store a secret (e.g. using POST /v1/secrets), the response
includes the URI both in the JSON body and in the Location: header. 
 
There is no need for you to mess around with searching by name, since
Barbican does not use the name to identify a secret.  You should just
save the URI (or UUID) from the response, and then update the resource
using the old secret to point to the new secret instead.
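As a CLI sketch, that zero-downtime rotation might look like the following (the secret name, the $NEW_VALUE/$OLD_URI variables, and the manual href capture are illustrative, not a verified recipe):

```shell
# 1. Store the replacement secret; the response prints its "Secret href".
openstack secret store --name db-password --payload "$NEW_VALUE"
# 2. Save that href (or just its trailing UUID) from the output, e.g.:
#    NEW_URI=http://barbican-host:9311/v1/secrets/<new-uuid>
# 3. Repoint the consuming resource at $NEW_URI; the old secret stays
#    readable throughout, so there is no window with no secret available.
# 4. Only then delete the old secret.
openstack secret delete "$OLD_URI"
```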

> 
> Bottom line: how should Heat expect/require a user to refer to a 
> Barbican secret in a property of a Heat resource, given that:
> - We don't want Heat to become the deputy in "confused deputy
> attack".
> - We shouldn't do things radically differently to the way Barbican
> does 
> them, because users will need to interact with Barbican first to
> store 
> the secret.
> - Many services will likely end up implementing integration with 
> Barbican and we'd like them all to have similar user interfaces.
> - Users will need to rotate credentials without downtime.
> 
> cheers,
> Zane.
> 
> BTW the user documentation for Barbican is really hard to find.
> Y'all 
> might want to look in to cross-linking all of the docs you have 
> together. e.g. there is no link from the Barbican docs to the 
> python-barbicanclient docs or vice-versa.
> 
> [1] https://storyboard.openstack.org/#!/story/2002126
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-06-28 Thread Michael McCune
Greetings OpenStack community,

Today's meeting covered a few topics, but was mainly focused on a few
updates to the errors guideline.

We began with a review of last week's actions. Ed Leafe has sent an
email[7] to the mailing list to let the folks working on the GraphQL
experiments know that the API-SIG StoryBoard was available for them to
use to track their progress.

We mentioned the proposed time slot[7] for the API-SIG at the upcoming
PTG, but as we are still a few months out from that event no other
actions were proposed.

Next we moved into a discussion of two guideline updates[8][9] that
Chris Dent is proposing. The first review adds concrete examples for
the error responses described in the guideline, and the second adds
some clarifying language and background on the intent of error codes.
During discussion among the group, a few minor areas of improvement
were identified and recorded on the reviews with updates to be made by
Chris.

We discussed the transition to StoryBoard[10] during our bug
discussion, noting the places where the workflow differs from
Launchpad. Chris also showed us a Board[11] that he created to help
figure out how to best use this new tool.

As always if you're interested in helping out, in addition to coming
to the meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for
changes over time. If you find something that's not quite right,
submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made
things easier but couldn't find any? Submit a patch and help others
[6].

# Newly Published Guidelines

None

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None

# Guidelines Currently Under Review [3]

* Add links to errors-example.json
  https://review.openstack.org/#/c/578369/

* Expand error code document to expect clarity
  https://review.openstack.org/#/c/577118/

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and
service discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet
ready for review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs
that you are developing or changing, please address your concerns in
an email to the OpenStack developer mailing list[1] with the tag
"[api]" in the subject. In your email, you should include any relevant
reviews, links, and comments to help guide the discussion of the
specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our
wiki page [4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131881.html
[8] https://review.openstack.org/#/c/578369/
[9] https://review.openstack.org/#/c/577118/
[10] https://storyboard.openstack.org/#!/project/1039
[11] https://storyboard.openstack.org/#!/board/91

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg



[openstack-dev] [release] Release countdown for week R-8, July 2-6

2018-06-28 Thread Sean McGinnis
Your long awaited countdown email...

Development Focus
-

Teams should be focused on implementing planned work for the cycle. It is also
a good time to review those plans and reprioritize if needed, based on what
progress has been made and what looks realistic to complete in the next few
weeks.

General Information
---

We have a few deadlines coming up as we get closer to the end of the cycle:

* Non-client libraries (generally, any library that is not
  python-${PROJECT}client) must have a final release by July 19. Only
  critical bugfixes will be allowed past this point. Please make sure any
  important feature work has its required library changes in by this time.

* Client libraries must have a final release by July 26.

Thierry posted an initial schedule for the PTG in September. Please take a look
and make sure it looks OK for your team:

http://lists.openstack.org/pipermail/openstack-dev/2018-June/131881.html

Upcoming Deadlines & Dates
--

Final non-client library release deadline: July 19
Final client library release deadline: July 26
Rocky-3 Milestone: July 26

-- 
Sean McGinnis (smcginnis)



Re: [openstack-dev] [openstack-ansible] dropping selinux support

2018-06-28 Thread Mohammed Naser
Also, this is the change that drops it, so feel free to vote with your
opinion there too:

https://review.openstack.org/578887 Drop SELinux support from os_swift

On Thu, Jun 28, 2018 at 12:56 PM, Mohammed Naser  wrote:
> Hi everyone:
>
> This email is to ask if there is anyone out there opposed to removing
> SELinux bits from OpenStack ansible, it's blocking some of the gates
> and the maintainers for them are no longer working on the project
> unfortunately.
>
> I'd like to propose removing any SELinux stuff from OSA based on the 
> following:
>
> 1) We don't gate on it, we don't test it, we don't support it.  If
> you're running OSA with SELinux enforcing, please let us know how :-)
> 2) It extends beyond the scope of the deployment project and there are
> no active maintainers with the resources to deal with them
> 3) With the work currently in place to let OpenStack Ansible install
> distro packages, we can rely on upstream `openstack-selinux` package
> to deliver deployments that run with SELinux on.
>
> Is there anyone opposed to removing it?  If so, please let us know. :-)
>
> Thanks!
> Mohammed
>
> --
> Mohammed Naser — vexxhost
> -
> D. 514-316-8872
> D. 800-910-1726 ext. 200
> E. mna...@vexxhost.com
> W. http://vexxhost.com



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com



[openstack-dev] [openstack-ansible] dropping selinux support

2018-06-28 Thread Mohammed Naser
Hi everyone:

This email is to ask if there is anyone out there opposed to removing
SELinux bits from OpenStack-Ansible; they are blocking some of the gates,
and the maintainers for them are unfortunately no longer working on the
project.

I'd like to propose removing any SELinux stuff from OSA based on the following:

1) We don't gate on it, we don't test it, we don't support it.  If
you're running OSA with SELinux enforcing, please let us know how :-)
2) It extends beyond the scope of the deployment project and there are
no active maintainers with the resources to deal with them
3) With the work currently in place to let OpenStack Ansible install
distro packages, we can rely on upstream `openstack-selinux` package
to deliver deployments that run with SELinux on.

Is there anyone opposed to removing it?  If so, please let us know. :-)

Thanks!
Mohammed

-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com



Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-28 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2018-06-28 17:05:09 +0200:
> On 06/27/2018 03:17 AM, Ghanshyam Mann wrote:
> > Users (Not Gate) will face below issues:
> > - User cannot use PluginNR with Tempest <19.0.0 (where that new interface 
> > was not present). And there is no PluginNR release/tag as this is 
> > unreleased and not branched software.
> > - User cannot find a PluginIR particular tag/release which can work with 
> > tempest <19.0.0 (where that new interface was not present). Only way for 
> > user to make it work is to manually find out the PluginIR tag/commit before 
> > PluginIR started consuming the new interface.
> 
> In these discussions I always think: how is it solved outside of the 
> openstack 
> world. And the solutions seem to be:
> 1. for PluginNR - do releases
> 2. for PluginIR - declare their minimum version of tempest in requirements.txt
> 
> Why isn't it sufficient for us?

It is. We just haven't been doing it; in part I think because most
developers interact with the plugins via the CI system and don't realize
they are also "libraries" that need to be released so that refstack
users can install them.
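For the PluginIR case, Dmitry's suggestion amounts to one line in the plugin's own requirements.txt (the version number here is purely illustrative):

```
# oldest Tempest providing the interfaces this plugin uses
tempest>=19.0.0
```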

Doug



Re: [openstack-dev] [nova] Shared envdir across tox environments

2018-06-28 Thread Jay Pipes

On 06/28/2018 11:18 AM, Stephen Finucane wrote:

Just a quick heads up that an upcoming change to nova's 'tox.ini' will
change the behaviour of multiple environments slightly.

   https://review.openstack.org/#/c/534382/9

With this change applied, tox will start sharing environment
directories (e.g. '.tox/py27') among environments with identical
requirements and Python versions. This will mean you won't need to
download dependencies for every environment, which should massively
reduce the amount of time taken to (re)initialize many environments and
  save a bit of disk space to boot. This shouldn't affect most people
but it could affect people that use some fancy tooling that depends on
these directories. If this _is_ going to affect you, be sure to make
your concerns known on the review sooner rather than later so we can
resolve said concerns.


+100



[openstack-dev] [nova] Shared envdir across tox environments

2018-06-28 Thread Stephen Finucane
Just a quick heads up that an upcoming change to nova's 'tox.ini' will
change the behaviour of multiple environments slightly.

  https://review.openstack.org/#/c/534382/9

With this change applied, tox will start sharing environment
directories (e.g. '.tox/py27') among environments with identical
requirements and Python versions. This will mean you won't need to
download dependencies for every environment, which should massively
reduce the amount of time taken to (re)initialize many environments and
 save a bit of disk space to boot. This shouldn't affect most people
but it could affect people that use some fancy tooling that depends on
these directories. If this _is_ going to affect you, be sure to make
your concerns known on the review sooner rather than later so we can
resolve said concerns.
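For the curious, the sharing mechanism boils down to pointing environments at a common envdir — an illustrative tox.ini sketch (not the actual nova patch):

```ini
[testenv]
# Environments that inherit this section and have identical requirements
# and Python version can reuse one virtualenv instead of each populating
# its own .tox/<envname> directory.
envdir = {toxworkdir}/shared
```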

Cheers,
Stephen



Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-28 Thread Dmitry Tantsur

On 06/27/2018 03:17 AM, Ghanshyam Mann wrote:




   On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann  
wrote 
  > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400:
  > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote:
  > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100:
  > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez,  
wrote:
  > > > >
  > > > > > Dmitry Tantsur wrote:
  > > > > > > [...]
  > > > > > > My suggestion: tempest has to be compatible with all supported 
releases
  > > > > > > (of both services and plugins) OR be branched.
  > > > > > > [...]
  > > > > > I tend to agree with Dmitry... We have a model for things that need
  > > > > > release alignment, and that's the cycle-bound series. The reason 
tempest
  > > > > > is branchless was because there was no compatibility issue. If the 
split
  > > > > > of tempest plugins introduces a potential incompatibility, then I 
would
  > > > > > prefer aligning tempest to the existing model rather than introduce 
a
  > > > > > parallel tempest-specific cycle just so that tempest can stay
  > > > > > release-independent...
  > > > > >
  > > > > > I seem to remember there were drawbacks in branching tempest, 
though...
  > > > > > Can someone with functioning memory brain cells summarize them 
again ?
  > > > > >
  > > > >
  > > > >
  > > > > Branchless Tempest enforces api stability across branches.
  > > >
  > > > I'm sorry, but I'm having a hard time taking this statement seriously
  > > > when the current source of tension is that the Tempest API itself
  > > > is breaking for its plugins.
  > > >
  > > > Maybe rather than talking about how to release compatible things
  > > > together, we should go back and talk about why Tempest's API is changing
  > > > in a way that can't be made backwards-compatible. Can you give some more
  > > > detail about that?
  > > >
  > >
  > > Well it's not, if it did that would violate all the stability guarantees
  > > provided by Tempest's library and plugin interface. I've not ever heard of
  > > these kind of backwards incompatibilities in those interfaces and we go to
  > > all effort to make sure we don't break them. Where did the idea that
  > > backwards incompatible changes where being introduced come from?
  >
  > In his original post, gmann said, "There might be some changes in
  > Tempest which might not work with older version of Tempest Plugins."
  > I was surprised to hear that, but I'm not sure how else to interpret
  > that statement.

I did not mean to say that Tempest will introduce changes in a 
backward-incompatible way that can break plugins. That cannot happen, as all 
plugins and Tempest are branchless and are tested with master Tempest, so if we 
changed anything backward-incompatible it would break the plugins' gates. Even 
when we have to remove a deprecated interface from Tempest, we fix all plugins 
first, like - 
https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged)

What I meant to say is that adding a new interface to Tempest, or removing a 
deprecated one, might not work with all released versions (or unreleased 
checkouts) of plugins. That point is from the point of view of using Tempest 
and plugins for production cloud testing, not the gate (where we keep 
compatibility). Production cloud users use the cycle-based Tempest versions: a 
Pike-based cloud will be tested by Tempest 17.0.0, not the latest version 
(though the latest version might work).

This thread is not just about the gate testing point of view (which seems to be 
how it always gets interpreted); it is more about users using Tempest and 
plugins for their cloud testing. I am looping in the operators mailing list, 
which I forgot in the initial post.

We do not have any tag/release from plugins to know what version of a plugin 
can work with what version of Tempest. For example, if there is a new interface 
introduced by Tempest 19.0.0 and pluginX starts using it, that can create 
issues for pluginX under both release models: 1. plugins with no releases (I 
will call these PluginNR), 2. plugins with independent releases (I will call 
these PluginIR).

Users (not the gate) will face the issues below:
- A user cannot use PluginNR with Tempest <19.0.0 (where that new interface was 
not present), and there is no PluginNR release/tag to fall back on, as it is 
unreleased and unbranched software.
- A user cannot find a particular PluginIR tag/release which works with Tempest 
<19.0.0. The only way for a user to make it work is to manually find the 
PluginIR tag/commit from before PluginIR started consuming the new interface.


In these discussions I always think: how is it solved outside of the openstack 
world. And the solutions seem to be:

1. for PluginNR - do releases
2. for PluginIR - declare their minimum version of tempest in requirements.txt

Why isn't it sufficient for us?

Dmitry



Let me make it more clear via diagram:
  

Re: [openstack-dev] [tripleo]Testing ironic in the overcloud

2018-06-28 Thread Derek Higgins
On 23 February 2018 at 14:48, Derek Higgins  wrote:

>
>
> On 1 February 2018 at 16:18, Emilien Macchi  wrote:
>
>> On Thu, Feb 1, 2018 at 8:05 AM, Derek Higgins  wrote:
>> [...]
>>
>>> o Should I create a new tempest test for baremetal as some of the
> networking stuff is different?
>

 I think we would need to run baremetal tests for this new featureset,
 see existing files for examples.

>>> Do you mean that we should use existing tests somewhere or create new
>>> ones?
>>>
>>
>> I mean we should use existing tempest tests from ironic, etc. Maybe just
>> a baremetal scenario that spawn a baremetal server and test ssh into it,
>> like we already have with other jobs.
>>
> Done, the current set of patches sets up a new non voting job
> "tripleo-ci-centos-7-scenario011-multinode-oooq-container" which setup up
> ironic in the overcloud and run the ironic tempest job
> "ironic_tempest_plugin.tests.scenario.test_baremetal_basic_
> ops.BaremetalBasicOps.test_baremetal_server_ops"
>
> its currently passing so I'd appreciate a few eyes on it before it becomes
> out of date again
> there are 4 patches starting here https://review.openstack.
> org/#/c/509728/19
>

This is now working again, so if anybody has the time I'd appreciate some
reviews while it's still current.
See scenario011 on https://review.openstack.org/#/c/509728/




>
>
>>
>> o Is running a script on the controller with NodeExtraConfigPost the best
> way to set this up or should I be doing something with quickstart? I don't
> think quickstart currently runs things on the controler does it?
>

 What kind of thing do you want to run exactly?

>>> The contents to this file will give you an idea, somewhere I need to
>>> setup a node that ironic will control with ipmi
>>> https://review.openstack.org/#/c/485261/19/ci/common/vbmc_setup.yaml
>>>
>>
>> extraconfig works for me in that case, I guess. Since we don't productize
>> this code and it's for CI only, it can live here imho.
>>
>> Thanks,
>> --
>> Emilien Macchi
>>
>> 
>>
>>
>


Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-28 Thread Ghanshyam Mann



  On Thu, 28 Jun 2018 04:08:35 +0900 Sean McGinnis  
wrote  
 > > 
 > > There is no issue of backward incompatibility from Tempest and on Gate. 
 > > GATE
 > > is always good as it is going with mater version or minimum supported 
 > > version
 > > in plugins as you mentioned. We take care of all these things you mentioned
 > > which is our main goal also. 
 > > 
 > > But If we think from Cloud tester perspective where they use older version 
 > > of
 > > tempest for particular OpenStack release but there is no corresponding
 > > tag/version from plugins to use them for that OpenStack release. 
 > > 
 > > Idea is here to have a tag from Plugins also like Tempest does currently 
 > > for
 > > each OpenStack release so that user can pickup those tag and test their
 > > Complete Cloud. 
 > > 
 > 
 > Thanks for the further explanation Ghanshyam. So it's not so much that newer
 > versions of tempest may break the current repo plugins, it's more to the fact
 > that any random plugin that gets pulled in has no way of knowing if it can 
 > take
 > advantage of a potentially older version of tempest that had not yet 
 > introduced
 > something the plugin is relying on.
 > 
 > I think it makes sense for the tempest plugins to be following the
 > cycle-with-intermediary model. This would allow plugins to be released at any
 > point during a given cycle and would then have a way to match up a "release" 
 > of
 > the plugin.
 > 
 > Release repo deliverable placeholders are being proposed for all the tempest
 > plugin repos we could find. Thanks to Doug for pulling this all together:
 > 
 > https://review.openstack.org/#/c/578141/
 > 
 > Please comment there if you see any issues.

Thanks. That's the correct understanding and the goal of this thread, which is 
about production cloud testing, not just the *gate*. The 
cycle-with-intermediary model fulfills the requirement users asked for at the 
summit.

Doug's patch lgtm.

-gmann

 > 
 > Sean
 > 
 > 





[openstack-dev] [osc][python-openstackclient] osc-included image signing

2018-06-28 Thread Josephine Seifert
Sorry, I wrote partially German in my last mail. Here is the English
version ;)

> Go ahead and post WIP reviews and we can look at it further.  To merge
> I'll want all of the usual tests, docs, release notes, etc but don't
> wait if that is not all done up front.
Here are the two WIP reviews:

cursive: https://review.openstack.org/#/c/578767/
osc: https://review.openstack.org/#/c/578769/

On our setup the following tests succeeded:

1.A) Generate Private and Public Key without password

openssl genrsa -out image_signing_key.pem 4096
openssl rsa -pubout -in image_signing_key.pem -out image_signing_pubkey.pem

1.B) Generate Private and Public Key with password

export PASSWORD="my-little-secret"
openssl genrsa -aes256 -passout pass:$PASSWORD -out
image_signing_key.pem 4096
openssl rsa -pubout -in image_signing_key.pem -passin pass:$PASSWORD
-out image_signing_pubkey.pem

2.) generate Public Key certificate 

openssl rsa -pubout -in image_signing_key.pem -out image_signing_pubkey.pem
openssl req -new -key image_signing_key.pem -out image_signing_cert_req.csr
openssl x509 -req -days 365 -in image_signing_cert_req.csr -signkey
image_signing_key.pem -out image_signing_cert.crt

3.) upload certificate to Barbican

openstack secret store --name image-signing-cert --algorithm RSA
--expiration 2020-01-01 --secret-type certificate --payload-content-type
"application/octet-stream" --payload-content-encoding base64 --payload
"$(base64 image_signing_cert.crt)"

4.) sign & upload image to Glance

openstack image create --sign
key-path=image_signing_key.pem,cert-id=$CERT_UUID --container-format
bare --disk-format raw --file $IMAGE_FILE $IMAGE_NAME




Re: [openstack-dev] [osc][python-openstackclient] osc-included image signing

2018-06-28 Thread Josephine Seifert
Hi,

> Go ahead and post WIP reviews and we can look at it further.  To merge
> I'll want all of the usual tests, docs, release notes, etc but don't
> wait if that is not all done up front.
Here are the two WIP reviews:

cursive: https://review.openstack.org/#/c/578767/
osc: https://review.openstack.org/#/c/578769/

On our system the following test worked:

1.A) Generate Private and Public Key without password

openssl genrsa -out image_signing_key.pem 4096
openssl rsa -pubout -in image_signing_key.pem -out image_signing_pubkey.pem

1.B) Generate Private and Public Key with password

export PASSWORD="my-little-secret"
openssl genrsa -aes256 -passout pass:$PASSWORD -out
image_signing_key.pem 4096
openssl rsa -pubout -in image_signing_key.pem -passin pass:$PASSWORD
-out image_signing_pubkey.pem

2.) generate Public Key certificate 

openssl rsa -pubout -in image_signing_key.pem -out image_signing_pubkey.pem
openssl req -new -key image_signing_key.pem -out image_signing_cert_req.csr
openssl x509 -req -days 365 -in image_signing_cert_req.csr -signkey
image_signing_key.pem -out image_signing_cert.crt

3.) upload certificate to Barbican

openstack secret store --name image-signing-cert --algorithm RSA
--expiration 2020-01-01 --secret-type certificate --payload-content-type
"application/octet-stream" --payload-content-encoding base64 --payload
"$(base64 image_signing_cert.crt)"

4.) sign & upload image to Glance

openstack image create --sign
key-path=image_signing_key.pem,cert-id=$CERT_UUID --container-format
bare --disk-format raw --file $IMAGE_FILE $IMAGE_NAME


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][heat] Identifying secrets in Barbican

2018-06-28 Thread Rico Lin
For now we found two ways to get a secret: with the secret href or with the
secret URI (which is `secrets/UUID`).
We will use the secret URI for now for Heat multi-cloud support, but is
there any reason for the Barbican client not to
accept just the secret UUID (a "Secret incorrectly specified" error shows up
when only the UUID is provided)?



On Thu, Jun 28, 2018 at 4:40 AM Zane Bitter  wrote:

> We're looking at using Barbican to implement a feature in Heat[1] and
> ran into some questions about how secrets are identified in the client.
>
> With most openstack clients, resources are identified by a UUID. You
> pass the UUID on the command line (or via the Python API or whatever)
> and the client combines that with the endpoint of the service obtained
> from the service catalog and a path to the resource type to generate the
> URL used to access the resource.
>
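Right -- with UUID-style clients the user never has to see the full href, because the client assembles it. A sketch with made-up values:

```shell
# What most OpenStack clients do: catalog endpoint + resource-type path + UUID
ENDPOINT="http://barbican.example.com:9311"   # from the service catalog (made up)
UUID="0db4bb6a-d1c2-4f18-b04d-4a4a4a4a4a4a"   # made-up secret UUID

# The client, not the user, builds the URL
URL="$ENDPOINT/v1/secrets/$UUID"
echo "$URL"
```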
> While there appears to be no technical reason that barbicanclient
> couldn't also do this, instead of just the UUID it uses the full URL as
> the identifier for the resource. This is extremely cumbersome for the
> user, and invites confused-deputy attacks where if the attacker can
> control the URL, they can get barbicanclient to send a token to an
> arbitrary URL. What is the rationale for doing it this way?
>
>
> In a tangentially related question, since secrets are immutable once
> they've been uploaded, what's the best way to handle a case where you
> need to rotate a secret without causing a temporary condition where
> there is no version of the secret available? (The fact that there's no
> way to do this for Nova keypairs is a perpetual problem for people, and
> I'd anticipate similar use cases for Barbican.) I'm going to guess it's:
>
> * Create a new secret with the same name
> * GET /v1/secrets/?name=<name>&sort=created:desc&limit=1 to find out the
> URL for the newest secret with that name
> * Use that URL when accessing the secret
> * Once the new secret is created, delete the old one
>
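FWIW, that recipe maps onto the REST API roughly like this. A sketch only -- the endpoint, token and secret name are made up, the live call is left commented out, and the JSON parsing is deliberately crude:

```shell
BARBICAN="http://barbican.example.com:9311"   # illustrative endpoint
NAME="db-password"                            # illustrative secret name

# 1. Store the new version under the same name:
#    openstack secret store --name "$NAME" --payload "$NEW_VALUE"

# 2. Ask for the newest secret with that name (my reconstruction of the query):
#    GET /v1/secrets?name=<name>&sort=created:desc&limit=1
# resp=$(curl -s -H "X-Auth-Token: $TOKEN" \
#     "$BARBICAN/v1/secrets?name=$NAME&sort=created:desc&limit=1")
resp='{"secrets": [{"secret_ref": "'"$BARBICAN"'/v1/secrets/1234-new"}], "total": 2}'

# 3. Pull the href out and use it from here on; delete the old secret afterwards
newest=$(printf '%s' "$resp" | sed -n 's/.*"secret_ref": *"\([^"]*\)".*/\1/p')
echo "$newest"
```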
> Should this, or whatever the actual recommended way of doing it is, be
> baked in to the client somehow so that not every user needs to
> reimplement it?
>
>
> Bottom line: how should Heat expect/require a user to refer to a
> Barbican secret in a property of a Heat resource, given that:
> - We don't want Heat to become the deputy in "confused deputy attack".
> - We shouldn't do things radically differently to the way Barbican does
> them, because users will need to interact with Barbican first to store
> the secret.
> - Many services will likely end up implementing integration with
> Barbican and we'd like them all to have similar user interfaces.
> - Users will need to rotate credentials without downtime.
>
> cheers,
> Zane.
>
> BTW the user documentation for Barbican is really hard to find. Y'all
> might want to look in to cross-linking all of the docs you have
> together. e.g. there is no link from the Barbican docs to the
> python-barbicanclient docs or vice-versa.
>
> [1] https://storyboard.openstack.org/#!/story/2002126
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> May The Force of OpenStack Be With You,
>
> *Rico Lin*  irc: ricolin
>
>
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [ptg] PTG high-level schedule

2018-06-28 Thread Thierry Carrez

Hi everyone,

In the attached picture you will find the proposed schedule for the 
various tracks at the Denver PTG in September.


We did our best to avoid the key conflicts that the track leads (PTLs, 
SIG leads...) mentioned in their PTG survey responses, although there 
was no perfect solution that would avoid all conflicts. If there is a 
critical conflict that was missed, please let us know, but otherwise we 
are not planning to change this proposal.


You'll notice that:

- The Ops meetup team is still evaluating what days would be best for 
the Ops meetup that will be co-located with the PTG. We'll communicate 
about it as soon as we have the information.


- Keystone track is split in two: one day on Monday for cross-project 
discussions around identity management, and two days on Thursday/Friday 
for team discussions.


- The "Ask me anything" project helproom on Monday/Tuesday is for 
horizontal support teams (infrastructure, release management, stable 
maint, requirements...) to provide support for other teams, SIGs and 
workgroups and answer their questions. Goal champions should also be 
available there to help with Stein goal completion questions.


- Like in Dublin, a number of tracks do not get pre-allocated time, and 
will be scheduled on the spot in available rooms at the time that makes 
the most sense for the participants.


- Every track will be able to book extra time and space in available 
extra rooms at the event.


To find more information about the event, register or book a room at the 
event hotel, visit: https://www.openstack.org/ptg


Note that the first round of applications for travel support to the 
event is closing at the end of this week! Apply if you need financial 
help attending the event:


https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018

See you there!

--
Thierry Carrez (ttx)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation

2018-06-28 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi,

I’ve added the following pros and cons to the different options:

  *   One Glance with multiple backends [1]
 *   Pros:
*   Relatively easy to implement based on the current Glance architecture
 *   Cons:
*   Requires the same Glance backend in every edge cloud instance
*   Requires the same OpenStack version in every edge cloud instance (apart from during upgrade)
*   Sensitivity to network connection loss is not clear
  *   Several Glances with an independent synchronisation service, sync via Glance API [2]
 *   Pros:
*   Every edge cloud instance can have a different Glance backend
*   Can support multiple OpenStack versions in the different edge cloud instances
*   Can be extended to support multiple VIM types
 *   Cons:
*   Needs a new synchronisation service
  *   Several Glances with an independent synchronisation service, sync using the backend [3]
 *   Pros:
*   I could not find any
 *   Cons:
*   Needs a new synchronisation service
  *   One Glance and multiple Glance API servers [4]
 *   Pros:
*   Implicitly location aware
 *   Cons:
*   First usage of an image always takes a long time
*   In case of a network connection error to the central Glance, Nova will have access to the images, but will not be able to figure out whether the user has rights to use the image and will not have a path to the image data

Are these correct? Did I miss anything?

Thanks,
Gerg0

[1]: 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_with_multiple_backends
[2]: 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_sych_via_Glance_API
[3]: 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend
[4]: 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_and_multiple_Glance_API_servers






From: Csatari, Gergely (Nokia - HU/Budapest)
Sent: Monday, June 11, 2018 4:29 PM
To: Waines, Greg ; OpenStack Development Mailing 
List (not for usage questions) ; 
edge-comput...@lists.openstack.org
Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible 
architectures for image synchronisation

Hi,

Thanks for the comments.
I’ve updated the wiki: 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend

Br,
Gerg0

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Friday, June 8, 2018 1:46 PM
To: Csatari, Gergely (Nokia - HU/Budapest) <gergely.csat...@nokia.com>; OpenStack 
Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>; 
edge-comput...@lists.openstack.org
Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible 
architectures for image synchronisation

Responses in-lined below,
Greg.

From: "Csatari, Gergely (Nokia - HU/Budapest)" <gergely.csat...@nokia.com>
Date: Friday, June 8, 2018 at 3:39 AM
To: Greg Waines <greg.wai...@windriver.com>, 
"openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>, 
"edge-comput...@lists.openstack.org" <edge-comput...@lists.openstack.org>
Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible 
architectures for image synchronisation

Hi,

Going inline.

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Thursday, June 7, 2018 2:24 PM
I had some additional questions/comments on the Image Synchronization Options ( 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ):


One Glance with multiple backends

  *   In this scenario, are all Edge Clouds simply configured with the one 
central Glance for their GLANCE ENDPOINT?
 *   i.e. GLANCE is a typical shared service in a multi-region environment?

[G0]: In my understanding yes.


  *   If so,
how does this OPTION support the requirement for Edge Cloud Operation when 
disconnected from the Central Location?

[G0]: This is an open question for me also.


Several Glances with an independent synchronization service (PUSH)

  *   I refer to this as the PUSH 

[openstack-dev] [magnum] Problems with multi-regional OpenStack installation

2018-06-28 Thread Andrei Ozerov
Greetings.

Has anyone successfully deployed Magnum in the multi-regional OpenStack
installation?
In my case different services (Nova, Heat) have different public endpoints
in every region. I couldn't start the Kube-apiserver until I added "region" to
the kube_openstack_config.
I created a story with full description of that problem:
https://storyboard.openstack.org/#!/story/2002728 and opened a review with
a small fix: https://review.openstack.org/#/c/578356.

But apart from that I have another problem with such kind of OpenStack
installation.
Say I have two regions. When I create a cluster in the second OpenStack
region, Heat-container-engine tries to fetch Stack data from the first
region.
It then throws the following error: "The Stack (hame-uuid) could not be
found". I can see GET requests for that stack in logs of Heat-API in the
first region but I don't see them in the second one (where that Heat stack
actually exists).

I'm assuming that the Heat-container-engine doesn't pass "region_name" when it
searches for Heat endpoints:
https://github.com/openstack/magnum/blob/master/magnum/drivers/common/image/heat-container-agent/scripts/heat-config-notify#L149
I've tried to change it, but it's tricky because the Heat-container-engine
is installed via a Docker system-image and won't work after a restart if
it failed during the initial bootstrap (because
/var/run/heat-config/heat-config is empty).
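
To illustrate why the missing region matters -- a toy sketch of the endpoint lookup (region names and URLs are invented), showing that without a region filter the first catalog entry wins, which matches the behaviour I'm seeing:

```shell
# Fake two-region catalog: region, service type, public URL
catalog='RegionOne orchestration http://heat.r1.example.com:8004
RegionTwo orchestration http://heat.r2.example.com:8004'

# No region filter: the first orchestration endpoint wins,
# even for a stack that lives in RegionTwo
first=$(printf '%s\n' "$catalog" | awk '$2 == "orchestration" { print $3; exit }')
echo "$first"

# With a region filter -- what the lookup would need to do instead
r2=$(printf '%s\n' "$catalog" | awk '$1 == "RegionTwo" && $2 == "orchestration" { print $3; exit }')
echo "$r2"
```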
Can someone help me with that? I guess it's better to create a separate
story for that issue?

-- 
Ozerov Andrei
oze...@selectel.com
+7 (800) 555 06 75

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][karbor] Can karbor restore resource's backup to other region?

2018-06-28 Thread 何健乐
Hi All,
There are two questions that are bothering me now.
Firstly, can Karbor restore a resource to another region? 
Secondly, I have noticed that when we create a restore with the CLI, the command 
looks like this:
openstack data protection restore create cf56bd3e-97a7-4078-b6d5-f36246333fd9 
c2ddf803-3655-4e26-8605-de36bdbeb701 --restore_target 
http://xx.xx.xx.xx/indentity --restore_username demo --restore_password admin 
--parameters 
resource_type=OS::Nova::Server,restore_net_id=c6b392d4-20ec-483f-9411-d188b3ba79ae,restore_name=vm_restore
What are the "--restore_target", "--restore_username" and "--restore_password" 
parameters used for?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][glance_store] Functional testing of multiple backend

2018-06-28 Thread Abhishek Kekane
Hi All,

In Rocky I have proposed a spec [1] for adding support for multiple backends
in glance. I have completed the coding part and so far tested this feature
with the file, rbd and swift stores. However, I need help testing this
feature thoroughly, so kindly help me with functional testing of the
remaining drivers (or provide a way to configure the cinder, sheepdog and
vmware stores using devstack).

I have created an etherpad [2] with steps to configure this feature and
some scenarios I have tested with the file, rbd and swift drivers.

Please do the needful.

[1] https://review.openstack.org/562467
[2] https://etherpad.openstack.org/p/multi-store-scenarios
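
For anyone who wants a starting point before opening the etherpad, here is a minimal glance-api.conf fragment along the lines of the spec, for a file + rbd pair. A sketch only -- the backend identifiers "fast" and "ceph" and the paths are illustrative names, not prescribed values:

```ini
[DEFAULT]
enabled_backends = fast:file, ceph:rbd

[glance_store]
default_backend = fast

[fast]
filesystem_store_datadir = /opt/stack/data/glance/images/

[ceph]
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_pool = images
rbd_store_user = glance
```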

Summary of upstream patches:
https://review.openstack.org/#/q/topic:bp/multi-store+(status:open+OR+status:merged)



Thanks & Best Regards,

Abhishek Kekane
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev