[openstack-dev] [python-keystoneclient] Return request-id to caller

2016-04-05 Thread koshiya maho
Hi Keystone devs,

Blueprint [1], related to request-ids, was approved for Mitaka, and
its implementation [2] has been up for review for quite a long time.

I would like to re-propose this blueprint as a goal so that
it can be included in Newton.

[1]
https://blueprints.launchpad.net/python-keystoneclient/+spec/return-request-id-to-caller

[2]
https://review.openstack.org/#/c/261188/
https://review.openstack.org/#/c/267449/
https://review.openstack.org/#/c/267456/
https://review.openstack.org/#/c/268003/
https://review.openstack.org/#/c/276644/
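
For anyone not familiar with the spec, the general shape of the change is
that client methods return objects which also carry the request ID(s)
parsed from the response headers. A minimal sketch of that pattern (the
class and attribute names here are illustrative assumptions, not
necessarily what the patches above use):

    # Sketch: a list result that also exposes the x-openstack-request-id
    # header value(s) from the response(s) that produced it.
    class ListWithMeta(list):
        def __init__(self, values, request_ids=None):
            super(ListWithMeta, self).__init__(values)
            self.request_ids = request_ids or []

    # A caller can then correlate client calls with server-side logs:
    #   users = client.users.list()
    #   LOG.debug('listed users, request_ids=%s', users.request_ids)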

Thank you,
--
Maho Koshiya
E-Mail : koshiya.m...@po.ntts.co.jp





Re: [openstack-dev] [Nova] FPGA as a resource

2016-04-05 Thread Roman Dobosz
On Tue, 5 Apr 2016 13:58:44 +0100
"Daniel P. Berrange"  wrote:

> Along similar lines we have proposals to add vGPU support to Nova,
> where the vGPUs may or may not be exposed using SR-IOV. We also want
> to be able to on the fly decide whether any physical GPU is assigned
> entirely to a guest as a full PCI device, or whether we only assign
> individual "virtual functions" of the GPU. This means that even if
> the GPU in question does *not* use SR-IOV, we still need to track
> the GPU and vGPUs in the same way as we track PCI devices, so that
> we can avoid assigning a vGPU to the guest, if the underlying physical
> PCI device is already assigned to the guest.

That's correct. I'd like to mention that FPGAs can also be exposed
in ways other than PCI (as in Xeon+FPGA). I'm not sure whether this also
applies to GPUs.

> I can see we will have much the same issue with FPGAs, where we may
> either want to assign the entire physical PCI device to a guest, or
> just assign a particular slot in the FPGA to the guest. So even if
> the FPGA is not using SR-IOV, we need to tie this all into the PCI
> device tracking code, as we are intending for vGPUs.
> 
> All in all, I think we probably ought to generalize the PCI device
> assignment modelling so that we're actually modelling generic
> hardware devices which may or may not be PCI based, so that we can
> accurately track the relationships between the devices.
> 
> With NIC devices we're also seeing a need to expose capabilities
> against the PCI devices, so that the scheduler can be more selective
> in deciding which particular devices to assign, e.g. so we can distinguish
> between NICs which support RDMA and those which don't, or identify NICs
> with hardware offload features, and so on. I can see this need to
> associate capabilities with devices is something that will likely
> be needed for the FPGA scenario, and vGPUs too. So again this points
> towards more general purpose modelling of assignable hardware devices
> beyond the limited PCI device modelling we've got today.
> 
> Looking to the future I think we'll see more usecases for device
> assignment appearing for other types of device.
> 
> IOW, I think it would be a mistake to model FPGAs as a distinct
> object type on their own. Generalization of assignable devices
> is the way to go

That's why I've brought the topic up here on the list, so we can think about
similar devices which could be generalized into one common accelerator
type, or even think about modeling PCI devices as such.
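
As a rough illustration of what such a generalized model might look like (a
sketch only; every class and field name here is hypothetical, not anything
that exists in Nova today):

    # Hypothetical sketch of a generic "assignable device" model.
    class AssignableDevice(object):
        def __init__(self, dev_type, address, parent=None, capabilities=None):
            self.dev_type = dev_type      # e.g. 'pci', 'vgpu', 'fpga-slot'
            self.address = address        # PCI address or vendor-specific id
            self.parent = parent          # physical device this is carved from
            self.capabilities = capabilities or set()  # e.g. {'rdma', 'sriov'}
            self.assigned_to = None       # instance UUID when in use

        def assignable(self):
            # A slot/VF is only assignable if its parent physical device
            # has not itself been assigned as a whole (and vice versa),
            # which is exactly the tracking problem Daniel describes.
            parent_free = self.parent is None or self.parent.assigned_to is None
            return self.assigned_to is None and parent_free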

> > All of that makes modelling the resource extremely complicated, in contrast
> > to the CPU resource for example. I'd like to discuss how the goal of having
> > reprogrammable accelerators in OpenStack can be achieved. Ideally I'd 
> > like to fit it into Jay and Chris work on resource-*.
> I think you shouldn't look at the FPGAs as being like the CPU resource, but
> rather look at them as a generalization of PCI device assignment.

CPU in this context was only an example of an "easy" resource, one which
doesn't need any preparation before a VM can use it :)

-- 
Cheers,
Roman Dobosz



Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-05 Thread Shinobu Kinjo
On Wed, Apr 6, 2016 at 1:44 PM, Qiming Teng  wrote:
> I am not a Nova expert, but I am really shocked by such a change. Because
> I'm not a Nova expert, I don't have a say on the *huge* effort of
> maintaining some built-in/default flavors. As a user I don't care where
> the data have been stored, but I do care that they are gone. They are
> gone because they **WILL** be supported by devstack. They are gone with
> the workflow +1'ed **BEFORE** the devstack patch gets merged (many
> thanks to the Depends-On tag). They are gone in the hope that all deployment
> tools will notice this when they fail, or fortunately read this email,
> or happen to be reviewing nova patches.
>
> It would have been a little nicer to initiate a discussion on the mailing list
> before such a change was introduced.

Agree.

Cheers,
Shinobu

>
> Regards,
>   Qiming
>
> On Tue, Apr 05, 2016 at 08:09:50AM -0700, Dan Smith wrote:
>> Just as a heads up, we are removing the default flavors from nova in
>> this patch:
>>
>>   https://review.openstack.org/#/c/300127/
>>
>> Since long ago, Nova's default flavors were baked in at the database
>> migration level. Now that we have moved them to another database
>> entirely, this means we have to migrate them from the old/original place
>> to the new one, even for new deployments. It also means that our tests
>> have flavor assumptions that run (way too) deep.
>>
>> Devstack will get support for auto-creating the flavors you are used to,
>> as well as some less-useless ones:
>>
>>   https://review.openstack.org/#/c/301257/
>>
>> Normal developers shouldn't really notice, but the deployment tool
>> projects may want/need to do something here along the same lines.
>>
>> --Dan
>>
>
>
>



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource



Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Juan Antonio Osorio
On Wed, Apr 6, 2016 at 4:06 AM, Dan Prince  wrote:

> On Sat, 2016-04-02 at 17:28 -0400, Adam Young wrote:
> > I finally have enough understanding of what is going on with Tripleo
> > to
> > reasonably discuss how to implement solutions for some of the main
> > security needs of a deployment.
> >
> >
> > FreeIPA is an identity management solution that can provide support
> > for:
> >
> > 1. TLS on all network communications:
> > A. HTTPS for web services
> > B. TLS for the message bus
> > C. TLS for communication with the Database.
> > 2. Identity for all Actors in the system:
> >A.  API services
> >B.  Message producers and consumers
> >C.  Database consumers
> >D.  Keystone service users
> > 3. Secure DNS (DNSSEC)
> > 4. Federation Support
> > 5. SSH Access control to Hosts for both undercloud and overcloud
> > 6. SUDO management
> > 7. Single Sign On for Applications running in the overcloud.
> >
> >
> > The main pieces of FreeIPA are
> > 1. LDAP (the 389 Directory Server)
> > 2. Kerberos
> > 3. DNS (BIND)
> > 4. Certificate Authority (CA) server (Dogtag)
> > 5. WebUI/Web Service Management Interface (HTTPD)
> >
> > Of these, the CA is the most critical.  Without a centralized CA, we
> > have no reasonable way to do certificate management.
>
> Would using Barbican to provide an API to manage the certificates make
> more sense for our deployment tooling? This could be useful for both
> undercloud and overcloud cases.
>

Actually, even if we start using Barbican (which wouldn't be a bad idea
either), it still needs a backend. The default settings for Barbican are not
suitable for production use. One of the supported backends is Dogtag
(which is deployed with FreeIPA). But elaborating on this, using only
Barbican (with Dogtag as a backend) doesn't really address the issue; it
merely puts a REST interface on top of the problem. We would then need to
come up with a solution that we can propose to nova, and further propose a
blueprint, which is similar to what Kevin Fox once tried to do.

>
> As for the rest of this, how invasive is the implementation of
> FreeIPA? Is this something that we can layer on top of an existing
> deployment such that users wishing to use FreeIPA can opt in?
>
> >
> > Now, I know a lot of people have an allergic reaction to some, maybe
> > all, of these technologies. They should not be required to be running
> > in
> > a development or testbed setup.  But we need to make it possible to
> > secure an end deployment, and FreeIPA was designed explicitly for
> > these
> > kinds of distributed applications.  Here is what I would like to
> > implement.
> >
> > Assuming that the Undercloud is installed on a physical machine, we
> > want
> > to treat the FreeIPA server as a managed service of the undercloud
> > that
> > is then consumed by the rest of the overcloud. Right now, there are
> > conflicts for some ports (8080 used by both swift and Dogtag) that
> > prevent a drop-in run of the server on the undercloud
> > controller.  Even
> > if we could deconflict, there is a possible battle between Keystone
> > and
> > the FreeIPA server on the undercloud.  So, while I would like to see
> > the
> > ability to run the FreeIPA server on the Undercloud machine
> eventually, I
> > think a more realistic deployment is to build a separate virtual
> > machine, parallel to the overcloud controller, and install FreeIPA
> > there. I've been able to modify Tripleo Quickstart to provision this
> > VM.
> >
> > I was also able to run FreeIPA in a container on the undercloud
> > machine,
> > but this is, I think, not how we want to migrate to a container
> > based
> > strategy. It should be more deliberate.
> >
> >
> > While the ideal setup would be to install the IPA layer first, and
> > create service users in there, this produces a different install
> > path
> > between with-FreeIPA and without-FreeIPA. Thus, I suspect the right
> > approach is to run the overcloud deploy, then "harden" the
> > deployment
> > with the FreeIPA steps.
> >
> >
> > The IdM team did just this last summer in preparing for the Tokyo
> > summit, using Ansible and Packstack.  The Rippowam project
> > https://github.com/admiyo/rippowam was able to fully lock down a
> > Packstack based install.  I'd like to reuse as much of Rippowam as
> > possible, but called from Heat Templates as part of an overcloud
> deploy.  I do not really want to re-implement Rippowam in Puppet.
>
> As we are using Puppet for our configuration I think this is currently
> a requirement. There are many good puppet examples out there of various
> servers and a quick google search showed some IPA modules are available
> as well.
>
> I think most TripleO users are quite happy in using puppet modules for
> configuration in that the puppet openstack modules are quite mature and
> well tested. Making a one-off exception for FreeIPA at this point
> doesn't make sense to me.
>
> >
> > So, big question: is 

Re: [openstack-dev] [Nova] FPGA as a resource

2016-04-05 Thread Gerard Braad
Hi,

On Tue, Apr 5, 2016 at 8:58 PM, Daniel P. Berrange  wrote:
>> As you can see, it seems uncomplicated at this point; however it
>> becomes more complex due to the following things we also have to take into
>> consideration:

I agree with this. Many kinds of resources could be considered, not
just to augment processing, but also to provide input or other forms
of I/O, such as video devices, etc.

regards,

Gerard

-- 
Gerard Braad — 吉拉德
   F/OSS & IT Consultant in Beijing
   http://gbraad.nl  gpg: 0x592CFE75



Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-05 Thread Qiming Teng
I am not a Nova expert, but I am really shocked by such a change. Because
I'm not a Nova expert, I don't have a say on the *huge* effort of
maintaining some built-in/default flavors. As a user I don't care where
the data have been stored, but I do care that they are gone. They are
gone because they **WILL** be supported by devstack. They are gone with
the workflow +1'ed **BEFORE** the devstack patch gets merged (many
thanks to the Depends-On tag). They are gone in the hope that all deployment
tools will notice this when they fail, or fortunately read this email,
or happen to be reviewing nova patches.

It would have been a little nicer to initiate a discussion on the mailing list
before such a change was introduced.

Regards,
  Qiming

On Tue, Apr 05, 2016 at 08:09:50AM -0700, Dan Smith wrote:
> Just as a heads up, we are removing the default flavors from nova in
> this patch:
> 
>   https://review.openstack.org/#/c/300127/
> 
> Since long ago, Nova's default flavors were baked in at the database
> migration level. Now that we have moved them to another database
> entirely, this means we have to migrate them from the old/original place
> to the new one, even for new deployments. It also means that our tests
> have flavor assumptions that run (way too) deep.
> 
> Devstack will get support for auto-creating the flavors you are used to,
> as well as some less-useless ones:
> 
>   https://review.openstack.org/#/c/301257/
> 
> Normal developers shouldn't really notice, but the deployment tool
> projects may want/need to do something here along the same lines.
> 
> --Dan
> 
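
For deployment tools that need to recreate the classic flavors themselves,
a minimal sketch with python-novaclient; the flavor values below are the
historical defaults as I remember them, so verify before relying on them:

    # Assumes `nova` is an already-authenticated novaclient Client.
    CLASSIC_FLAVORS = [
        # (name, ram_mb, vcpus, disk_gb, flavorid)
        ('m1.tiny',     512, 1,   1, '1'),
        ('m1.small',   2048, 1,  20, '2'),
        ('m1.medium',  4096, 2,  40, '3'),
        ('m1.large',   8192, 4,  80, '4'),
        ('m1.xlarge', 16384, 8, 160, '5'),
    ]
    for name, ram, vcpus, disk, fid in CLASSIC_FLAVORS:
        nova.flavors.create(name, ram, vcpus, disk, flavorid=fid)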
 




Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-05 Thread Steve Martinelli

+1000. Totally in favor of this; if anything it seems overdue, and I'm a bit
surprised that they aren't already deprecated. Two alternatives exist for
the CLI (osc and glanceclient), and two alternatives exist for python API
bindings (SDK and glanceclient).

This should follow the same path as the nova volume-* commands: deprecate
for 2 releases [1] and then remove [2]. The deprecation message can point
users to OSC and glanceclient.

[1]
https://github.com/openstack/python-novaclient/commit/23f13437dd64496fcbc138bbaa9b0ac615a3cf23
[2]
https://github.com/openstack/python-novaclient/commit/a42570268915f42405ed0b0a67c25686b5db22ce
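
As an aside, the deprecation pattern being suggested is roughly the
following (a generic sketch, not the exact mechanism novaclient uses):

    import functools
    import warnings

    def deprecated(alternative):
        # Hypothetical helper: warn callers and point at the replacement.
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                warnings.warn('%s is deprecated; use %s instead.'
                              % (func.__name__, alternative),
                              category=DeprecationWarning)
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @deprecated('"openstack image list" or glanceclient')
    def do_image_list(cs, args):
        pass  # existing implementation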

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead



From:   Matt Riedemann 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   2016/04/05 03:49 PM
Subject: [openstack-dev] [nova][glance] Proposal to remove `nova
image-*` commands from novaclient



As we discuss the glance v2 spec for nova, questions are coming up
around what to do about the nova images API which is a proxy for glance v1.

I don't want to add glance v2 support to the nova API since that's just
more proxy gorp.

I don't think we can just make the nova images API fail if we're using
glance v2 in the backend, but we'll need some translation later for:

user->nova-images->glance.v2->glance.v1(ish)->user
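
As a hedged illustration of what that translation involves: glance v2
replaced the v1 boolean 'is_public' with a 'visibility' string, so the
shim has to map such fields back, roughly like this (field names from the
two API versions; the helper itself is hypothetical):

    def v2_image_to_v1ish(v2_image):
        # Map glance v2 'visibility' back to the v1-style 'is_public'
        # boolean that the nova images API exposes.
        image = dict(v2_image)
        image['is_public'] = image.pop('visibility', 'private') == 'public'
        return image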

But we definitely want people to stop using the nova images API.

I think we can start by deprecating the nova image-* commands in
python-novaclient, and probably the python API bindings in novaclient too.

People should be using python-glanceclient or python-openstackclient for
the CLI, and python-glanceclient or some SDK for the python API bindings.

We recently removed the deprecated nova volume-* stuff from novaclient,
this would be the same idea.

Does anyone have an issue with this?

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova] FPGA as a resource

2016-04-05 Thread Qiming Teng
Emm... finally this is brought up. We from IBM have already done some
work on FPGA/GPU resource management [1]. Let me bring the SMEs into
this discussion and see if we together can work out a concrete roadmap
to land this upstream.

Fei and Yonghua, this is indeed a very interesting topic for us.


[1] SuperVessel Cloud: https://ptopenlab.com/

Regards,
  Qiming

On Tue, Apr 05, 2016 at 02:27:30PM +0200, Roman Dobosz wrote:
> Hey all,
> 
> At yesterday's scheduler meeting I raised the idea of bringing
> FPGAs into OpenStack as a resource, which could then be exposed
> to VMs.
> 
> The motivating use cases are pretty broad -
> having such a chip ready on the computes might be beneficial both for
> consumers of the technology and for data center administrators. The
> range of uses for the hardware is very wide - the only limitations are
> human imagination and hardware capability - since it might be used for
> accelerating algorithms from compression and cryptography,
> through pattern recognition and transcoding, to voice/video analysis and
> processing and everything in between. Using an FPGA to perform data
> processing may significantly reduce CPU utilization, time and power
> consumption, which is a benefit on its own.
> 
> On the OpenStack side, unlike the CPU or memory, an FPGA has to be
> programmed before a specific algorithm can actually be used. So
> in a simplified scenario, it might go like this:
> 
> * User selects a VM image which supports acceleration,
> * Scheduler selects an appropriate compute host with an FPGA available,
> * Compute gets the request, programs the IP into the FPGA and then boots
>   the VM with the accelerator attached.
> * If the VM is removed, the FPGA may optionally be erased.
> 
> As you can see, it seems uncomplicated at this point; however it
> becomes more complex due to the following things we also have to take
> into consideration:
> 
> * recent FPGAs are divided into regions (or slots), each of which
>   can be programmed separately
> * slots may or may not fit the same bitstream (the program the FPGA
>   is fed; the IP)
> * there are several products around (Altera, Xilinx, others) whose
>   bitstreams are incompatible, even between products of the same company
> * libraries which abstract the hardware layer, like AAL[1], and their
>   versions
> * for some products, there is a need to track usage of the memory
>   located on the PCI boards
> * some FPGAs can be exposed using SR-IOV while others cannot,
>   which implies different usage models
> 
> In other words, it may be necessary to incorporate additional actions:
> 
> * properly discover the FPGA and its capabilities
> * schedule the right bitstream onto a matching unoccupied FPGA
>   device/slot
> * actually program the FPGA
> * provide libraries to the VM which are necessary for the user program
>   to interact with the exposed FPGA (or AAL) (this may be optional,
>   since the user can upload a complete image with everything in place)
> * bitstream images have to be kept in some kind of service (Glance?)
>   with some way of identifying which image matches which FPGA
> 
> All of that makes modelling the resource extremely complicated, in
> contrast to the CPU resource for example. I'd like to discuss how the
> goal of having reprogrammable accelerators in OpenStack can be achieved.
> Ideally I'd like to fit it into Jay and Chris's work on resource-*.
> 
> Looking forward to any comments :)
> 
> [1] 
> http://www.intel.com/content/dam/doc/white-paper/quickassist-technology-aal-white-paper.pdf
> 
> -- 
> Cheers,
> Roman Dobosz
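
To make Roman's simplified scenario above concrete, a pseudocode-style
sketch; every name here is hypothetical:

    # Hypothetical flow: boot a VM with an FPGA-backed accelerator.
    def boot_accelerated_vm(request, scheduler, glance):
        # 1. The user picked an image flagged as acceleration-capable.
        bitstream = glance.get_bitstream_for(request.image)
        # 2. The scheduler picks a host with a free, compatible FPGA slot.
        host, slot = scheduler.pick_host_with_free_slot(bitstream)
        # 3. The compute host programs the IP into the slot, then boots
        #    the VM with the accelerator attached.
        host.program_fpga(slot, bitstream)
        return host.boot(request, attached_device=slot)
        # 4. On VM deletion, the slot may optionally be erased.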
> 
> 




Re: [openstack-dev] [neutron] OVSDB native interface as default in gate jobs

2016-04-05 Thread IWAMOTO Toshihiro
At Tue, 5 Apr 2016 12:57:33 -0400,
Assaf Muller wrote:
> 
> On Tue, Apr 5, 2016 at 12:35 PM, Sean M. Collins  wrote:
> > Russell Bryant wrote:
> >> because they are related to two different command line utilities
> >> (ovs-vsctl vs ovs-ofctl) that speak two different protocols (OVSDB vs
> >> OpenFlow) that talk to two different daemons on the system (ovsdb-server vs
> >> ovs-vswitchd) ?
> >
> > True, they influence two different daemons - but it's really two options
> > that both have two settings:
> >
> > * "talk to it via the CLI tool"
> > * "talk to it via a native interface"
> >
> > How likely is it to have one talking via native interface and the other
> > via CLI?
> 
> The ovsdb native interface is a couple of cycles more mature than the
> openflow one, so I can see how some users would use one but not the other.

The native of_interface has been tested by periodic jobs and seems
pretty stable.

http://graphite.openstack.org/dashboard/#neutron-ovs-native
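
For reference, a sketch of the relevant bits of the OVS agent
configuration, assuming the section and option names are as I remember
them:

    [ovs]
    # How the agent talks to ovsdb-server: 'vsctl' (CLI) or 'native'
    ovsdb_interface = native
    # How the agent talks to ovs-vswitchd: 'ovs-ofctl' (CLI) or 'native'
    of_interface = native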

> > Also, if the native interface is faster, I think we should consider
> > making it the default.
> 
> Definitely. I'd prefer to deprecate and delete the cli interfaces and
> keep only the native interfaces in the long run.
> 
> >
> > --
> > Sean M. Collins

The native of_interface is definitely faster than the CLI alternative,
but (un?)fortunately that's not where the performance bottleneck is.

The transition would be a gain, but it will come with uncovering a few
as-yet-unidentified bugs, etc.

Anyway, I'll post an updated version of the performance comparison shortly.

--
IWAMOTO Toshihiro



Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-05 Thread Monty Taylor

On 04/05/2016 05:07 PM, Michael Still wrote:

As a recent newcomer to using our client libraries, my only real
objection to this plan is that our client libraries are a mess [1][2].
The interfaces we expect users to use are quite different for basic
things like initial auth between the various clients, and by introducing
another library we insist people use, we're going to force a lot of devs
to eventually go through having to understand how those other people did
that thing.


The big misunderstanding is thinking that our client libs are for our
users. I made that mistake early on. Our client libs are really for
server-to-server communication.


If you want a client lib that works - use either openstacksdk or shade. 
Which of them you use depends on what you want to accomplish.
openstacksdk is a well designed interface to the OpenStack APIs. Of 
course, the OpenStack APIs are really hard to use across clouds. Shade 
hides the incompatibilities between the clouds, but is not a direct 
mapping to REST API docs.



I guess I could ease my concerns here if we could agree to some sort of
standard for what auth in a client library looks like...

Some examples of auth at the moment:


If you want a client object from one of the client libraries, I STRONGLY 
suggest using os-client-config to construct them instead of using the 
constructors directly. This is because the constructors are really only 
for other OpenStack services.


http://inaugust.com/posts/simple-openstack-clients.html




self.glance = glance_client.Client('2', endpoint, token=token)


There are next to zero cases where the thing you want to do is talk to 
glance using a token and an endpoint. In fact, outside of being inside 
of nova and making proxy calls, the only 'valid' use case is keystone 
service catalog bootstrapping ... and that got replaced with something 
sane in mitaka.


What you want is this:

import os_client_config

client = os_client_config.make_client(
    'network',
    auth_url='https://example.com', username='my-user',
    password='awesome-password', project_name='my-project',
    region_name='the-best-region')

You should ALWAYS pass an auth_url, a username, a password and a 
project_name. Otherwise, the keystoneauth Session will not be able to 
re-auth you. Also, it saves you having to construct a token yourself.
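
If I recall the os-client-config API correctly (treat this as a hedged
sketch), you can also keep those values in a clouds.yaml file and select
them by name:

    import os_client_config

    # Reads ~/.config/openstack/clouds.yaml (and OS_* environment
    # variables) and builds the same client as above.
    client = os_client_config.make_client('network', cloud='my-cloud')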



self.ironic = ironic_client.get_client(1, ironic_url=endpoint,
os_auth_token=token)
self.nova = nova_client.Client('2', bypass_url=endpoint, auth_token=token)

Note how we can't decide if the version number is a string or an int,
and the argument names for the endpoint and token are different in all
three. It's like we're _trying_ to make this hard.


The python-*client libraries are exceptionally painful to use. Any time 
you want to use them, I recommend not using them, as above.



Michael

1: I guess I might be doing it wrong, but in that case I'll just mutter
about the API docs instead.
2: I haven't looked at the unified openstack client library to see if
it's less crazy.

On Wed, Apr 6, 2016 at 5:46 AM, Matt Riedemann
wrote:

As we discuss the glance v2 spec for nova, questions are coming up
around what to do about the nova images API which is a proxy for
glance v1.

I don't want to add glance v2 support to the nova API since that's
just more proxy gorp.

I don't think we can just make the nova images API fail if we're
using glance v2 in the backend, but we'll need some translation
later for:

user->nova-images->glance.v2->glance.v1(ish)->user

But we definitely want people to stop using the nova images API.

I think we can start by deprecating the nova image-* commands in
python-novaclient, and probably the python API bindings in
novaclient too.

People should be using python-glanceclient or python-openstackclient
for the CLI, and python-glanceclient or some SDK for the python API
bindings.

We recently removed the deprecated nova volume-* stuff from
novaclient, this would be the same idea.

Does anyone have an issue with this?

--

Thanks,

Matt Riedemann






--
Rackspace Australia







Re: [openstack-dev] [Horizon][Searchlight] Plans for Horizon cross-region view

2016-04-05 Thread Tripp, Travis S
Sorry for the delayed response on this message. Finishing out Mitaka has been
quite time-consuming!

Cross-region searching is a high-priority item for Searchlight in Newton.
Steve has begun work on the spec [1] with initial prototyping. We are also
considering this as a likely candidate for the design summit. Please take a
look and help us work through the design!

[1] https://review.openstack.org/#/c/301227/

Thanks,
Travis

From: Brad Pokorny
Reply-To: OpenStack List
Date: Thursday, February 25, 2016 at 3:17 PM
To: OpenStack List
Subject: [openstack-dev] [Horizon][Searchlight] Plans for Horizon cross-region 
view

The last info I've found on the ML about a cross-region view in Horizon is [1], 
which mentions making asynchronous calls to the APIs. Has anyone done further 
work on such a view?

If not, I think it would make sense to only show the view if Searchlight is 
enabled. One of the Searchlight use cases is cross-region searching, and only 
using the searchlight APIs would cut down on the slowness of going directly to 
the service APIs for what would potentially be a lot of records. Thoughts?

[1] http://openstack.markmail.org/message/huk5l73un7t255ox


Re: [openstack-dev] [ironic] [inspector] Proposing Anton Arefiev (aarefiev) for ironic-inspector-core

2016-04-05 Thread Haomeng, Wang
+1 from me, and congrats!

On Wed, Apr 6, 2016 at 12:45 AM, Jim Rollenhagen 
wrote:

> +1 from me :)
>
> // jim
>
> > On Apr 5, 2016, at 03:24, Dmitry Tantsur  wrote:
> >
> > Hi!
> >
> > I'd like to propose Anton to the ironic-inspector core reviewers team.
> His stats are pretty nice [1], he's making meaningful reviews and he's
> pushing important things (discovery, now tempest).
> >
> > Members of the current ironic-inspector-team and everyone interested,
> please respond with your +1/-1. A lazy consensus will be applied: if nobody
> objects by next Tuesday, the change will take effect.
> >
> > Thanks
> >
> > [1] http://stackalytics.com/report/contribution/ironic-inspector/60
> >
> >


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Adam Young

On 04/05/2016 09:06 PM, Dan Prince wrote:

On Sat, 2016-04-02 at 17:28 -0400, Adam Young wrote:

I finally have enough understanding of what is going on with Tripleo
to
reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support
for:

1. TLS on all network communications:
 A. HTTPS for web services
 B. TLS for the message bus
 C. TLS for communication with the Database.
2. Identity for all Actors in the system:
A.  API services
B.  Message producers and consumers
C.  Database consumers
D.  Keystone service users
3. Secure DNS (DNSSEC)
4. Federation Support
5. SSH Access control to Hosts for both undercloud and overcloud
6. SUDO management
7. Single Sign On for Applications running in the overcloud.


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)

Of these, the CA is the most critical.  Without a centralized CA, we
have no reasonable way to do certificate management.

Would using Barbican to provide an API to manage the certificates make
more sense for our deployment tooling? This could be useful for both
undercloud and overcloud cases.
Barbican is not a CA.  However, it can use the KRA deployed with Dogtag 
to store its secrets, so this actually supports Barbican nicely.




As for the rest of this, how invasive is the implementation of
FreeIPA? Is this something that we can layer on top of an existing
deployment such that users wishing to use FreeIPA can opt in?
Yep.  The big thing it gives you is the Cert management, and I don't 
want to rewrite that, but the rest can stay out of the way.


I do suspect that, once it is there, we will want to use more of IPA, 
but that is not the goal.







Now, I know a lot of people have an allergic reaction to some, maybe
all, of these technologies. They should not be required to be running
in
a development or testbed setup.  But we need to make it possible to
secure an end deployment, and FreeIPA was designed explicitly for
these
kinds of distributed applications.  Here is what I would like to
implement.

Assuming that the Undercloud is installed on a physical machine, we
want
to treat the FreeIPA server as a managed service of the undercloud
that
is then consumed by the rest of the overcloud. Right now, there are
conflicts for some ports (8080 used by both swift and Dogtag) that
prevent a drop-in run of the server on the undercloud
controller.  Even
if we could deconflict, there is a possible battle between Keystone
and
the FreeIPA server on the undercloud.  So, while I would like to see
the
ability to run the FreeIPA server on the Undercloud machine
eventually, I
think a more realistic deployment is to build a separate virtual
machine, parallel to the overcloud controller, and install FreeIPA
there. I've been able to modify Tripleo Quickstart to provision this
VM.

I was also able to run FreeIPA in a container on the undercloud
machine,
but this is, I think, not how we want to migrate to a container
based
strategy. It should be more deliberate.


While the ideal setup would be to install the IPA layer first, and
create service users in there, this produces a different install
path
between with-FreeIPA and without-FreeIPA. Thus, I suspect the right
approach is to run the overcloud deploy, then "harden" the
deployment
with the FreeIPA steps.


The IdM team did just this last summer in preparing for the Tokyo
summit, using Ansible and Packstack.  The Rippowam project
https://github.com/admiyo/rippowam was able to fully lock down a
Packstack based install.  I'd like to reuse as much of Rippowam as
possible, but called from Heat Templates as part of an overcloud
deploy.  I do not really want to re-implement Rippowam in Puppet.

As we are using Puppet for our configuration I think this is currently
a requirement. There are many good puppet examples out there of various
servers and a quick google search showed some IPA modules are available
as well.

I think most TripleO users are quite happy in using puppet modules for
configuration in that the puppet openstack modules are quite mature and
well tested. Making a one-off exception for FreeIPA at this point
doesn't make sense to me.


Yeah, and I think I am fine with that.  It just means I have to rewrite
some stuff, and that makes sense in keeping things consistent.  Just
figured I'd ask first before I had to start getting deep into Puppet.





So, big question: is Heat->ansible (instead of Puppet) for an
overcloud
deployment an acceptable path?  We are talking Ansible 1.0
Playbooks,
which should be relatively straightforward ports to 2.0 when the time
comes.

Thus, the sequence would be:

1. Run existing overcloud deploy steps.
2. Install IPA server on the allocated VM
3. Register the compute nodes and the controller as IPA clients
4. Convert 

Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Adam Young

On 04/05/2016 08:02 AM, Hayes, Graham wrote:

On 02/04/2016 22:33, Adam Young wrote:

I finally have enough understanding of what is going on with Tripleo to
reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support for:

1. TLS on all network communications:
  A. HTTPS for web services
  B. TLS for the message bus
  C. TLS for communication with the Database.
2. Identity for all Actors in the system:
 A.  API services
 B.  Message producers and consumers
 C.  Database consumers
 D.  Keystone service users
3. Secure DNS (DNSSEC)
4. Federation Support
5. SSH Access control to Hosts for both undercloud and overcloud
6. SUDO management
7. Single Sign On for Applications running in the overcloud.


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)






There are a couple ongoing efforts that will tie in with this:

1. Designate should be able to use the DNS from FreeIPA.  That was the
original implementation.

Designate cannot use FreeIPA - we haven't had a driver for it since
Kilo.

There have been various efforts since to support FreeIPA, but FreeIPA
requires that it be the point of truth for DNS information, as does
Designate.

If FreeIPA supported the traditional Notify and Zone Transfer mechanisms
then we would be fine, but unfortunately it does not.

[1] Actually points out that the goal of FreeIPA's DNS integration
"... is NOT to provide general-purpose DNS server. Features beyond
easing FreeIPA deployment and maintenance are explicitly out of scope."

1 - http://www.freeipa.org/page/DNS#Goals



Let's table that for now. There is no reason they should not be able to
interoperate somehow.






2.  Juan Antonio Osorio  has been working on TLS everywhere.  The issue
thus far has been Certificate management.  This provides a Dogtag server
for Certs.

3. Rob Crittenden has been working on auto-registration of virtual
machines with an Identity Provider upon launch.  This gives that efforts
an IdM to use.

4. Keystone can make use of the Identity store for administrative users
in their own domain.

5. Many of the compliance audits have complained about cleartext
passwords in config files. This removes most of them.  MySQL supports
X509 based authentication today, and there is Kerberos support in the
works, which should remove the last remaining cleartext Passwords.

I mentioned Centralized SUDO and HBAC.  These are both tools that may be
used by administrators if so desired on the install. I would recommend
that they be used, but there is no requirement to do so.









Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Adam Young

On 04/05/2016 11:42 AM, Fox, Kevin M wrote:
Yeah, and they just deprecated vendor data plugins too, which 
eliminates my other workaround. :/


We need to really discuss this problem at the summit and get a viable 
path forward. It's just getting worse. :/


Thanks,
Kevin

*From:* Juan Antonio Osorio [jaosor...@gmail.com]
*Sent:* Tuesday, April 05, 2016 5:16 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [TripleO] FreeIPA integration



On Tue, Apr 5, 2016 at 2:45 PM, Fox, Kevin M wrote:


This sounds suspiciously like, "how do you get a secret to the
instance to get a secret from the secret store" issue :)

Yeah, sounds pretty familiar. We were using the nova hooks mechanism
for this purpose, but it was deprecated recently. Bummer :/



Nova instance user spec again?

Thanks,
Kevin



Yep, and we need a solution.  I think the right solution is a keypair
generated on the instance, with the public key posted by the instance to
the hypervisor and stored with the instance data in the database.  I wrote
that to the mailing list earlier today.


A basic rule of a private key is that it never leaves the machine on 
which it is generated.  The rest falls out from there.
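
To make that concrete, a hedged sketch of the guest-side half, generating
the keypair and serializing the public key for posting (the posting
channel itself is the open question here):

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generated on the instance; the private key never leaves it.
    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048,
                                           backend=default_backend())
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo)
    # public_pem would then be posted to the hypervisor and stored
    # alongside the instance record.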


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Adam Young

On 04/05/2016 09:01 AM, Steven Hardy wrote:

On Tue, Apr 05, 2016 at 02:07:06PM +0300, Juan Antonio Osorio wrote:

On Tue, Apr 5, 2016 at 11:36 AM, Steven Hardy  wrote:

  On Sat, Apr 02, 2016 at 05:28:57PM -0400, Adam Young wrote:
  > I finally have enough understanding of what is going on with Tripleo
  to
  > reasonably discuss how to implement solutions for some of the main
  security
  > needs of a deployment.
  >
  >
  > FreeIPA is an identity management solution that can provide support
  for:
  >
  > 1. TLS on all network communications:
  >    A. HTTPS for web services
  >    B. TLS for the message bus
  >    C. TLS for communication with the Database.
  > 2. Identity for all Actors in the system:
  >    A. API services
  >    B. Message producers and consumers
  >    C. Database consumers
  >    D. Keystone service users
  > 3. Secure DNS (DNSSEC)
  > 4. Federation Support
  > 5. SSH Access control to Hosts for both undercloud and overcloud
  > 6. SUDO management
  > 7. Single Sign On for Applications running in the overcloud.
  >
  >
  > The main pieces of FreeIPA are
  > 1. LDAP (the 389 Directory Server)
  > 2. Kerberos
  > 3. DNS (BIND)
  > 4. Certificate Authority (CA) server (Dogtag)
  > 5. WebUI/Web Service Management Interface (HTTPD)
  >
  > Of these, the CA is the most critical.  Without a centralized CA, we
  have no
  > reasonable way to do certificate management.
  >
  > Now, I know a lot of people have an allergic reaction to some, maybe
  all, of
  > these technologies. They should not be required to be running in a
  > development or testbed setup.  But we need to make it possible to
  secure an
  > end deployment, and FreeIPA was designed explicitly for these kinds of
  > distributed applications.  Here is what I would like to implement.
  >
  > Assuming that the Undercloud is installed on a physical machine, we
  want to
  > treat the FreeIPA server as a managed service of the undercloud that
  is then
  > consumed by the rest of the overcloud. Right now, there are conflicts
  for
  > some ports (8080 used by both swift and Dogtag) that prevent a drop-in
  run
  > of the server on the undercloud controller.  Even if we could
  deconflict,
  > there is a possible battle between Keystone and the FreeIPA server on
  the
  > undercloud.  So, while I would like to see the ability to run the
  FreeIPA
  > server on the Undercloud machine eventuall, I think a more realistic
  > deployment is to build a separate virtual machine, parallel to the
  overcloud
  > controller, and install FreeIPA there. I've been able to modify
  Tripleo
  > Quickstart to provision this VM.

  IMO these services shouldn't be deployed on the undercloud - we only
  support a single node undercloud, and atm it's completely possible to
  take
  the undercloud down without any impact to your deployed cloud (other
  than
  losing the ability to manage it temporarily).

This is fair enough, however, for CI purposes, would it be acceptable to
deploy it there? Or where do you recommend we have it?

We're already well beyond capacity in CI, so to me this seems like
something that's probably appropriate for a third-party CI job?

To me it just doesn't make sense to integrate these pieces on the
undercloud, and integrating it there just because we need it available for
CI purposes seems like a poor argument, because we're not testing a
representative/realistic environment.

If we have to wire this in to TripleO CI I'd favor spinning up an extra
node with the FreeIPA pieces in, e.g a separate Heat stack (so, e.g the
nonha job takes 3 nodes, a "freeipa" stack of 1 and an overcloud of 2).
So, this is actually what I proposed.  If you reread my original, put 
the emphasis on the first part:


"Assuming that the Undercloud is installed on a physical machine, we 
want to treat the FreeIPA server as a managed service of the undercloud
that is then consumed by the rest of the overcloud." Running it on the
Undercloud machine was only an


"I would like ...  ability ... eventually"

As I said, with quickstart, I have the ability to deploy an additional 
VM and throw IPA on there.  I have it all set with Ansible.  This 
machine could avoid using Puppet itself.


But, it is possible to install IPA using Puppet, and we could do that 
too, its just new code to be written.


The ability to run with an existing IPA server is also important. Either 
way, though, what is important is that IPA be available, or we lose the 
Certificate management.  So, for CI, I would like to drive on with 
running it this way.


There are a couple of efforts to make this happen out there;


Re: [openstack-dev] [oslo] oslo.context and name fields

2016-04-05 Thread Jamie Lennox
from_environ was mine; it's reasonably new, and at the time I was blocked
on getting a release before pushing it out to services. Since then I've
been distracted with other things. The intent at the time was exactly this:
to standardize the values on the context object, though in my case I was
particularly interested in how we could handle authentication plugins.

The problems I hit were specifically around how we could separate values
that were relevant to things like policy from values that were relevant for
RPC etc., rather than the big to_dict that is used for everything at the
moment.

There were a number of problems with this, but nothing that would
prevent more standardization of the base attributes and using from_environ
now.
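
For anyone who hasn't seen it, a minimal sketch of what consuming
from_environ looks like (assuming the classmethod behaves as I remember):

    from oslo_context import context

    def make_context(environ):
        # Builds a RequestContext from the auth headers that
        # keystonemiddleware placed into the WSGI environ
        # (X-User-Id, X-Project-Id, X-Roles, ...).
        return context.RequestContext.from_environ(environ)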


On 6 April 2016 at 07:39, Ronald Bradford  wrote:

> I have created a version that uses constructor arguments [5].
> I will review in more detail across projects the use of keystone
> middleware to see if we can utilize a constructor environment attribute to
> simplify constructor usage.
>
> [5] https://review.openstack.org/301918
>
> Ronald Bradford
>
> Web Site: http://ronaldbradford.com
> LinkedIn:  http://www.linkedin.com/in/ronaldbradford
> Twitter:@RonaldBradford 
> Skype: RonaldBradford
> GTalk: Ronald.Bradford
> IRC: rbradfor
>
>
> On Tue, Apr 5, 2016 at 3:49 PM, Sean Dague  wrote:
>
>> Cool. Great.
>>
>> In looking at this code a bit more I think we're missing out on some
>> commonality by the fact that this nice bit of common parsing -
>>
>> https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L138-L161
>> is actually hidden behind a factory pattern, and not used by anyone in
>> OpenStack -
>> http://codesearch.openstack.org/?q=from_environ=nope==
>>
>> If, instead of standardizing the args to the context constructor, we
>> could remove a bunch of them and extract that data from a passed
>> environment during construction, that should remove a bunch of parsing
>> code in every project.
>> things like project_name and user_name in, and they would be available
>> to all consumers.
>>
>> -Sean
>>
>> On 04/05/2016 03:39 PM, Ronald Bradford wrote:
>> > Sean,
>> >
>> > I cannot speak to why they were historically not there, but I am
>> > working through the app-agnostic-logging-parameters blueprint [1] right
>> > now and it's very related to this.  As part of this work I would be
>> > reviewing attributes that are more commonly used in subclassed context
>> > objects for inclusion into the base oslo.context class, a step before a
>> > more kwargs init() approach that many subclassed context objects utilize
>> now.
>> >
>> > I am also proposing the standardization of context arguments [2]
>> > (specifically ids), and names are not mentioned but I would like to
>> > follow the proposed convention.
>> >
>> > However, as you point out in the middleware [3], if the information is
>> > already available I see no reason not to enable the base oslo.context
>> > class to consume this for subsequent use by logging.  FYI the
>> > get_logging_values() work in [4] is specifically to add logging-only values
>> > and this can be the first use case.
>> >
>> > While devstack uses these logging format string options, the defaults
>> > (which I presume are operator-centric) do not.  One of my goals for the
>> > Austin Ops summit is to talk with actual operators and find out
>> > what is really in use.   Regardless, the capacity to choose should be
>> > available when possible if the information is already identified without
>> > subsequent lookup.
>> >
>> >
>> > Ronald
>> >
>> >
>> > [1]
>> https://blueprints.launchpad.net/oslo.log/+spec/app-agnostic-logging-parameters
>> > [2] https://review.openstack.org/#/c/290907/
>> > [3]
>> http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
>> > [4] https://review.openstack.org/#/c/274186/
>> >
>> >
>> >
>> >
>> >
>> > On Tue, Apr 5, 2016 at 2:31 PM, Sean Dague wrote:
>> >
>> > I was trying to clean up the divergent logging definitions in
>> devstack
>> > as part of scrubbing out 'tenant' references -
>> > https://review.openstack.org/#/c/301801/ and in doing so stumbled
>> over
>> > the fact that the extremely useful project_name and user_name
>> fields are
>> > not in base oslo.context.
>> >
>> >
>> https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L148-L159
>> >
>> > These are always available to be set -
>> >
>> http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
>> >
>> > And they 

Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Rich Megginson

On 04/05/2016 07:06 PM, Dan Prince wrote:

On Sat, 2016-04-02 at 17:28 -0400, Adam Young wrote:

I finally have enough understanding of what is going on with Tripleo
to
reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support
for:

1. TLS on all network communications:
 A. HTTPS for web services
 B. TLS for the message bus
 C. TLS for communication with the Database.
2. Identity for all Actors in the system:
A.  API services
B.  Message producers and consumers
C.  Database consumers
D.  Keystone service users
3. Secure DNS (DNSSEC)
4. Federation Support
5. SSH Access control to Hosts for both undercloud and overcloud
6. SUDO management
7. Single Sign On for Applications running in the overcloud.


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)

Of these, the CA is the most critical.  Without a centralized CA, we
have no reasonable way to do certificate management.

Would using Barbican to provide an API to manage the certificates make
more sense for our deployment tooling? This could be useful for both
undercloud and overcloud cases.

As for the rest of this, how invasive is the implementation of
FreeIPA? Is this something that we can layer on top of an existing
deployment such that users wishing to use FreeIPA can opt in?


Now, I know a lot of people have an allergic reaction to some, maybe
all, of these technologies. They should not be required to be running
in
a development or testbed setup.  But we need to make it possible to
secure an end deployment, and FreeIPA was designed explicitly for
these
kinds of distributed applications.  Here is what I would like to
implement.

Assuming that the Undercloud is installed on a physical machine, we
want
to treat the FreeIPA server as a managed service of the undercloud
that
is then consumed by the rest of the overcloud. Right now, there are
conflicts for some ports (8080 used by both swift and Dogtag) that
prevent a drop-in run of the server on the undercloud
controller.  Even
if we could deconflict, there is a possible battle between Keystone
and
the FreeIPA server on the undercloud.  So, while I would like to see
the
ability to run the FreeIPA server on the Undercloud machine
eventually, I
think a more realistic deployment is to build a separate virtual
machine, parallel to the overcloud controller, and install FreeIPA
there. I've been able to modify Tripleo Quickstart to provision this
VM.

I was also able to run FreeIPA in a container on the undercloud
machine,
but this is, I think, not how we want to migrate to a container
based
strategy. It should be more deliberate.


While the ideal setup would be to install the IPA layer first, and
create service users in there, this produces a different install
path
between with-FreeIPA and without-FreeIPA. Thus, I suspect the right
approach is to run the overcloud deploy, then "harden" the
deployment
with the FreeIPA steps.


The IdM team did just this last summer in preparing for the Tokyo
summit, using Ansible and Packstack.  The Rippowam project
https://github.com/admiyo/rippowam was able to fully lock down a
Packstack based install.  I'd like to reuse as much of Rippowam as
possible, but called from Heat Templates as part of an overcloud
deploy.  I do not really want to re-implement Rippowam in Puppet.

As we are using Puppet for our configuration I think this is currently
a requirement. There are many good puppet examples out there of various
servers and a quick google search showed some IPA modules are available
as well.

I think most TripleO users are quite happy in using puppet modules for
configuration in that the puppet openstack modules are quite mature and
well tested. Making a one-off exception for FreeIPA at this point
doesn't make sense to me.


What about calling an Ansible playbook from a Puppet module?


So, big question: is Heat->ansible (instead of Puppet) for an
overcloud
deployment an acceptable path?  We are talking Ansible 1.0
Playbooks,
which should be relatively straightforward ports to 2.0 when the time
comes.

Thus, the sequence would be:

1. Run existing overcloud deploy steps.
2. Install IPA server on the allocated VM
3. Register the compute nodes and the controller as IPA clients
4. Convert service users over to LDAP backed services, complete with
necessary kerberos steps to do password-less authentication.
5. Register all web services with IPA and allocate X509 certificates
for
HTTPS.
6. Set up Host based access control (HBAC) rules for SSH access to
overcloud machines.


When we did the Rippowam demo, we used the Proton driver and
Kerberos
for securing the message broker.  Since Rabbit seems to be the tool
of
choice,  we would use X509 authentication and TLS for
encryption.  ACLs,
for now, would stay in the 

Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Dan Prince
On Sat, 2016-04-02 at 17:28 -0400, Adam Young wrote:
> I finally have enough understanding of what is going on with Tripleo
> to 
> reasonably discuss how to implement solutions for some of the main 
> security needs of a deployment.
> 
> 
> FreeIPA is an identity management solution that can provide support
> for:
> 
> 1. TLS on all network communications:
> A. HTTPS for web services
> B. TLS for the message bus
> C. TLS for communication with the Database.
> 2. Identity for all Actors in the system:
>    A.  API services
>    B.  Message producers and consumers
>    C.  Database consumers
>    D.  Keystone service users
> 3. Secure DNS (DNSSEC)
> 4. Federation Support
> 5. SSH Access control to Hosts for both undercloud and overcloud
> 6. SUDO management
> 7. Single Sign On for Applications running in the overcloud.
> 
> 
> The main pieces of FreeIPA are
> 1. LDAP (the 389 Directory Server)
> 2. Kerberos
> 3. DNS (BIND)
> 4. Certificate Authority (CA) server (Dogtag)
> 5. WebUI/Web Service Management Interface (HTTPD)
> 
> Of these, the CA is the most critical.  Without a centralized CA, we 
> have no reasonable way to do certificate management.

Would using Barbican to provide an API to manage the certificates make
more sense for our deployment tooling? This could be useful for both
undercloud and overcloud cases.

As for the rest of this, how invasive is the implementation of
> FreeIPA? Is this something that we can layer on top of an existing
> deployment such that users wishing to use FreeIPA can opt in?

> 
> Now, I know a lot of people have an allergic reaction to some, maybe 
> all, of these technologies. They should not be required to be running
> in 
> a development or testbed setup.  But we need to make it possible to 
> secure an end deployment, and FreeIPA was designed explicitly for
> these 
> kinds of distributed applications.  Here is what I would like to
> implement.
> 
> Assuming that the Undercloud is installed on a physical machine, we want
> to treat the FreeIPA server as a managed service of the undercloud that
> is then consumed by the rest of the overcloud. Right now, there are
> conflicts for some ports (8080 used by both swift and Dogtag) that
> prevent a drop-in run of the server on the undercloud controller.  Even
> if we could deconflict, there is a possible battle between Keystone and
> the FreeIPA server on the undercloud.  So, while I would like to see the
> ability to run the FreeIPA server on the Undercloud machine eventually, I
> think a more realistic deployment is to build a separate virtual
> machine, parallel to the overcloud controller, and install FreeIPA
> there. I've been able to modify TripleO Quickstart to provision this VM.
> 
> I was also able to run FreeIPA in a container on the undercloud machine,
> but this is, I think, not how we want to migrate to a container-based
> strategy. It should be more deliberate.
> 
> 
> While the ideal setup would be to install the IPA layer first, and
> create service users in there, this produces a different install path
> between with-FreeIPA and without-FreeIPA. Thus, I suspect the right
> approach is to run the overcloud deploy, then "harden" the deployment
> with the FreeIPA steps.
> 
> 
> The IdM team did just this last summer in preparing for the Tokyo 
> summit, using Ansible and Packstack.  The Rippowam project 
> https://github.com/admiyo/rippowam was able to fully lock down a 
> Packstack based install.  I'd like to reuse as much of Rippowam as 
> possible, but called from Heat Templates as part of an overcloud 
> deploy.  I do not really want to re implement Rippowam in Puppet.

As we are using Puppet for our configuration I think this is currently
a requirement. There are many good puppet examples out there of various
servers and a quick google search showed some IPA modules are available
as well.

I think most TripleO users are quite happy in using puppet modules for
configuration in that the puppet openstack modules are quite mature and
well tested. Making a one-off exception for FreeIPA at this point
doesn't make sense to me.

> 
> So, big question: is Heat->ansible (instead of Puppet) for an overcloud
> deployment an acceptable path?  We are talking Ansible 1.0 Playbooks,
> which should be relatively straightforward ports to 2.0 when the time
> comes.
> 
> Thus, the sequence would be:
> 
> 1. Run existing overcloud deploy steps.
> 2. Install IPA server on the allocated VM
> 3. Register the compute nodes and the controller as IPA clients
> 4. Convert service users over to LDAP backed services, complete with 
> necessary kerberos steps to do password-less authentication.
> 5. Register all web services with IPA and allocate X509 certificates for
> HTTPS.
> 6. Set up Host based access control (HBAC) rules for SSH access to 
> overcloud machines.
> 
> 
> When we did the Rippowam demo, we used the Proton driver and Kerberos
> for securing the message broker.  Since Rabbit seems to be the tool of
> choice, we would use X509 authentication and TLS for encryption.  ACLs,
> for now, would stay in the

Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-05 Thread Joshua Harlow

Adam Young wrote:

We have a use case where we want to register a newly spawned Virtual
machine with an identity provider.

Heat also has a need to provide some form of Identity for a new VM.


Looking at the set of utilities right now, there does not seem to be a
secure way to do this. Injecting files does not provide a path that
cannot be seen by other VMs or machines in the system.

For our use case, a short lived One-Time-Password is sufficient, but for
others, I think asymmetric key generation makes more sense.

Is the following possible:

1. In cloud-init, the VM generates a keypair, then notifies the Nova
infrastructure (somehow) that it has done so.


So this can be somewhat done already:

https://cloudinit.readthedocs.org/en/latest/topics/examples.html#call-a-url-when-finished

But it's unclear what endpoint you want that thing to call (and the data it 
sends might need to be tweaked); and said calling of a URL might need 
https, which then raises the question of what certs and such https is 
using to ensure it's calling a URL that is 'really nova'.
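To make that concrete, a hypothetical guest-side sketch (the registration 
endpoint and payload shape are invented; the final POST is the part a 
phone_home-style hook would do):

    # Sketch only: generate a keypair at boot and report the public key.
    import requests
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    def register_public_key(instance_id, endpoint):
        # The private key never leaves the guest.
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                       backend=default_backend())
        pub_pem = key.public_key().public_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PublicFormat.SubjectPublicKeyInfo,
        ).decode('utf-8')
        # verify=True assumes the guest already trusts a CA for the
        # endpoint, which is exactly the bootstrapping problem above.
        requests.post(endpoint, json={'instance_id': instance_id,
                                      'public_key': pub_pem},
                      timeout=30, verify=True)
        return key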




2. Nova Compute reads the public Key off the device and sends it to
conductor, which would then associate the public key with the server?

3. A third party system could then validate the association of the
public key and the server, and build a work flow based on some signed
document from the VM?


Seems like a useful idea, if we can figure out how to do it.

-Josh







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton blueprints call for action

2016-04-05 Thread Sean M. Collins
Armando M. wrote:
> I would like to understand if these need new owners (both assignees and
> approvers). Code submitted [5,6] has not been touched in a while, and
> whilst I appreciate people have been busy focussing on Mitaka (myself
> included), the Newton master branch has been open for a while.
> 
> With this email I would like to appeal to the people in CC to report back
> their interest in continuing working on these items in their respective
> capacities, and/or the wider community, in case new owners need to be
> identified.

My involvement in the FwaaS project has been reduced significantly over
the past couple of months, and I think because of this, someone should
probably step in and replace me.

I would like to apologize for my failure, because I advocated publicly for
using the FwaaS API as the vehicle for more granular and more advanced
packet filtering features, as opposed to extending the security group
API. It is a failure on my part to advocate for a solution,
then not be able to deliver the required work. I am sorry for this,
truly.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][release][keystone] Mitaka RC3 available

2016-04-05 Thread Davanum Srinivas
Hello everyone,

Due to a release-critical issue spotted in Keystone during RC2 testing, a
new release candidate was created for Mitaka. You can find the RC3
source code tarball at:

https://tarballs.openstack.org/keystone/keystone-9.0.0.0rc3.tar.gz

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, this tarball will be formally released as the
final "Mitaka" version on April 7th. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the mitaka release branch at:
http://git.openstack.org/cgit/openstack/keystone/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/keystone/+filebug

and tag it *mitaka-rc-potential* to bring it to the Keystone release crew's
attention.

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-05 Thread Dean Troyer
On Tue, Apr 5, 2016 at 4:35 PM, Ian Cordasco  wrote:

> The goal is to centralize the server expertise and have that be combined
> with the folks who know how to better design a library for a good developer
> experience (kind of like with osc). That said, I think the SDK would have
> gained more traction if the OSC project had adopted it (which it is
> hesitant to do given the troubles it has had with conflicting versions of
> the service clients).
>

Actually it is simple timing.  I didn't want OSC to require the SDK until
they had a 1.0 release, but we went ahead and implemented Network commands
using the SDK anyway because there were no compatibility issues.  OSC
pre-dates the SDK by at least 2 years...

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Collaborating topics for Magnum design summit

2016-04-05 Thread Hongbin Lu
Hi team,
As mentioned in the team meeting, the Magnum team is using an etherpad [1] to 
collect topics for the design summit. If you are interested in joining us at 
the Newton design summit, I would request your input in the etherpad. In 
particular, you can do the following:
* Propose new topics that you want to discuss.
* Vote on existing topics (+1 on the topics you like).
Magnum has 5 fishbowl and 5 workroom sessions, so I will select 10 topics based 
on the feedback (the rest will be placed in the Friday meetup session). Your 
input is greatly appreciated. Thanks.
[1] https://etherpad.openstack.org/p/magnum-newton-design-summit-topics

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] Implementing tempest test for Keystone federation functional tests

2016-04-05 Thread Rodrigo Duarte
On Tue, Apr 5, 2016 at 11:53 AM, Minying Lu  wrote:

> Thank you for your awesome feedback!
>
>
>> Another option is to add those tests to keystone itself (if you are not
>>> including tests that triggers other components APIs). See
>>> https://blueprints.launchpad.net/keystone/+spec/keystone-tempest-plugin-tests
>>>
>>>
>>
>>
> knikolla and I are looking into the keystone-tempest-plugin too, thanks
> Rodrigo!
>
>
>> Again though, the problem is not where the tests live but where we run
>> them. To practically run these tests we need to either add K2K testing
>> support to devstack (not sure this is appropriate) or come up with a new
>> test environment that deploys 2 keystones and federation support that we
>> can CI against in the gate. This is doable, but I think it's something we need
>> support with from infra before worrying about tempest.
>>
>>
> We have engineers in the team that are communicating with the infra team
> on how to set up an environment that runs the federation tests. We're
> thinking about creating a two-devstack setup with the keystones configured as
> identity provider and service provider with federation support. Meanwhile, I
> can just work on writing the tests in a pre-configured environment that's
> the same as the two-devstack setup.
>

Awesome, please share your work so I can help on that front too!


>
>
>>
>>
>>>
 The fly in the ointment for this case will be CI though. For tests to
 live in
 tempest they need to be verified by a CI system before they can land.
 So to
 land the additional testing in tempest you'll have to also ensure there
 is a
 CI job setup in infra to configure the necessary environment. While I
 think
 this is a good thing to have in the long run, it's not necessarily a
 small
 undertaking.

>>>
 -Matt Treinish


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Rodrigo Duarte Sousa
>>> Senior Quality Engineer @ Red Hat
>>> MSc in Computer Science
>>> http://rodrigods.com
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-05 Thread Adam Young
We have a use case where we want to register a newly spawned Virtual 
machine with an identity provider.


Heat also has a need to provide some form of Identity for a new VM.


Looking at the set of utilities right now, there does not seem to be a 
secure way to do this.  Injecting files does not provide a path that 
cannot be seen by other VMs or machines in the system.


For our use case, a short lived One-Time-Password is sufficient, but for 
others, I think asymmetric key generation makes more sense.


Is the following possible:

1.  In cloud-init, the VM generates a keypair, then notifies the Nova 
infrastructure (somehow) that it has done so.


2.  Nova Compute reads the public Key off the device and sends it to 
conductor, which would then associate the public key with the server?


3.  A third party system could then validate the association of the 
public key and the server, and build a work flow based on some signed 
document from the VM?
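As a rough sketch of the verification in step 3 (function and argument 
names are invented here; this is not an existing API):

    # Sketch only: check that a document was signed by the private key
    # matching the public key recorded for the server.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def is_signed_by_server(recorded_public_key_pem, document, signature):
        public_key = serialization.load_pem_public_key(
            recorded_public_key_pem, backend=default_backend())
        try:
            public_key.verify(signature, document,
                              padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False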






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca][release] missing build artifacts

2016-04-05 Thread Hochmuth, Roland M
Thanks for the pointer Clark. We will look into using that, although we were 
running out of time on the Mitaka release to get something like that 
implemented.

Craig Bryant is in the process of manually updating the versions in the pom 
files to match the Mitaka tags. A couple of reviews are available at

monasca-common: https://review.openstack.org/#/c/301925/
monasca-api: https://review.openstack.org/#/c/301925


Assuming that is an acceptable method, we can also update 

monasca-thresh
monasca-persister

using the same approach. 

Note, I thought he was going to try and pull the tag from git, but maybe that 
didn't turn out OK.

Assuming those work, then at least we would have jars available that match the 
tags for the Mitaka release.








On 4/5/16, 1:10 PM, "Clark Boylan"  wrote:

>On Tue, Apr 5, 2016, at 11:56 AM, Hochmuth, Roland M wrote:
>> Thanks Doug, Thierry and Davanum. Sorry about all the extra work that
>> I've caused.
>> 
>> It sounds like all Python projects/deliverables are in reasonable shape,
>> but if not, please let me know. 
>> 
>> Not sure what we should do about the jars at this point. We had started
>> to discuss a plan to manually copy the jars over to the proper location.
>> I was hoping we could just do this temporarily for Mitaka. Unfortunately,
>> there are a few steps that need to be resolved prior to doing that.
>> 
>> Currently, the java builds overwrite the previous build. The version
>> number of the jar that is built matches the version in the pom file.
>> See, http://tarballs.openstack.org/ci/monasca-thresh/, for an example.
>> 
>> What we are looking into is modifying the pom files for the java repos,
>> so that the version number of the jar matches the tag when built (not
>> what is in the pom), and modifying the name of the jar, by removing the
>> word SNAPSHOT.
>> 
>> If we do that, we think we can get a name for the jar with a version that
>> matches the latest tag on whatever branch is being used. This should be
>> similar to how the Python wheels are named.
>> 
>> We could manually copy in the short-term. But the goal is to add an
>> automatic copy to the appropriate location in,
>> http://tarballs.openstack.org/.
>> 
>> Unfortunately, for the java related deliverables, it sounds like we are a
>> little late for all this to get done prior to Mitaka. Not sure if this
>> can be added post Mitaka.
>
>The infra team has to publish jars as well and has a set of jobs for
>that at [0]. It should figure out your versions from git as well and set
>them all automagically with the correct maven configuration. You might
>be able to just add these jobs assuming you use maven.
>
>[0]
>https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/maven-plugin-jobs.yaml
>
>Clark
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-05 Thread Nikhil Komawar

I think, in the interest of supporting the OSC effort, I am for
deprecating the CLI stuff (possibly glanceclient too -- BUT in a
different thread).

I believe removing the bindings/modules that possibly support OSC and
other libs might be a lot trickier. (The nova proxy stuff for glance may be
an exception, but I don't know all the libs that use it.)

And on Matt's question of glanceclient supporting image-meta stuff,
Glance and in turn glanceclient should be a superset of the Images API
that Nova and other services support. If that's not the case, then we have
a DefCore problem, but AFAIK we don't.

On the note of adding / removing support for the Glance v2 API and proxy
jazz in Nova: last time I had a discussion with johnthetubaguy, and we
agreed that the proxy API won't change for Nova (the changes needed for
the Glance v2 adoption would ensure the proxy API remains the same). Also,
the purpose of Glance v2 adoption (in Nova and everywhere else) is to
promote the "right" public-facing Glance API (which is in development and
supposed to be v2).

I'm glad we're chatting about deprecating the Nova proxy API proactively
but I think we should not tie it (or get confused that it's tied with)
Nova's adoption of the Glance v2 API.

Yours sincerely!

On 4/5/16 5:30 PM, Michael Still wrote:
> On Wed, Apr 6, 2016 at 7:28 AM, Ian Cordasco wrote:
>
>  
>
>
>
> -----Original Message-----
> From: Michael Still
> Reply: OpenStack Development Mailing List (not for usage questions)
> Date: April 5, 2016 at 16:11:05
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][glance] Proposal to remove
> `nova image-*` commands from novaclient
>
>
>
> > As a recent newcomer to using our client libraries, my only real
> > objection to this plan is that our client libraries are a mess [1][2].
> > The interfaces we expect users to use are quite different for basic
> > things like initial auth between the various clients, and by
> > introducing another library we insist people use we're going to force
> > a lot of devs to eventually go through having to understand how those
> > other people did that thing.
> >
> > I guess I could ease my concerns here if we could agree to some sort
> > of standard for what auth in a client library looks like...
> >
> > Some examples of auth at the moment:
> >
> > self.glance = glance_client.Client('2', endpoint, token=token)
> > self.ironic = ironic_client.get_client(1, ironic_url=endpoint,
> > os_auth_token=token)
> > self.nova = nova_client.Client('2', bypass_url=endpoint, auth_token=token)
> >
> > Note how we can't decide if the version number is a string or an int,
> > and the argument names for the endpoint and token are different in all
> > three. It's like we're _trying_ to make this hard.
> >
> > Michael
> >
> > 1: I guess I might be doing it wrong, but in that case I'll just
> > mutter about the API docs instead.
> > 2: I haven't looked at the unified openstack client library to see if
> > it's less crazy.
>
> What if we just recommend everyone use the openstacksdk
> (https://pypi.python.org/pypi/openstacksdk)? We could add more
> developer resources by deprecating our individual client libraries
> to use that instead? It's consistent and well-designed and would
> probably benefit from us actively helping with each service's portion.
>
>  
> So like I said, I haven't looked at it at all because I am middle
> aged, stuck in my ways, hate freedom, and because I didn't think of it.
>
> Does it include a command line interface that's not crazy as well?
>
> If so, why are we maintaining duplicate sets of libraries / clients?
> It seems like a lot of wasted effort.
>
> Michael
>
> -- 
> Rackspace Australia
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Thanks,
Nikhil



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.context and name fields

2016-04-05 Thread Ronald Bradford
I have created a version that uses constructor arguments. [5]
I will review in more detail across projects the use of keystone middleware
to see if we can utilize a constructor environment attribute to simplify
constructor usage.

[5] https://review.openstack.org/301918

Ronald Bradford

Web Site: http://ronaldbradford.com
LinkedIn:  http://www.linkedin.com/in/ronaldbradford
Twitter:@RonaldBradford 
Skype: RonaldBradford
GTalk: Ronald.Bradford
IRC: rbradfor


On Tue, Apr 5, 2016 at 3:49 PM, Sean Dague  wrote:

> Cool. Great.
>
> In looking at this code a bit more I think we're missing out on some
> commonality by the fact that this nice bit of common parsing -
>
> https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L138-L161
> is actually hidden behind a factory pattern, and not used by anyone in
> OpenStack -
> http://codesearch.openstack.org/?q=from_environ&i=nope&files=&repos=
>
> If instead of standardizing the args to the context constructor, we
> could remove a bunch of them and extract that data from a passed
> environment during the constructor that should remove a bunch of parsing
> code in every project. It would also mean that we could easily add
> things like project_name and user_name in, and they would be available
> to all consumers.
>
> -Sean
>
> On 04/05/2016 03:39 PM, Ronald Bradford wrote:
> > Sean,
> >
> > I cannot speak to historically why there were not there, but I am
> > working through the app-agnostic-logging-parameters blueprint [1] right
> > now and it's very related to this.  As part of this work I would be
> > reviewing attributes that are more commonly used in subclassed context
> > objects for inclusion into the base oslo.context class, a step before a
> > more kwargs init() approach that many subclassed context objects utilize
> > now.
> >
> > I am also proposing the standardization of context arguments [2]
> > (specifically ids), and names are not mentioned but I would like to
> > follow the proposed convention.
> >
> > However, as you point out in the middleware [3], if the information is
> > already available I see no reason not to facilitate the base oslo.context
> > class consuming this for subsequent use by logging.  FYI, the
> > get_logging_values() work in [4] is specifically to add logging-only
> > values, and this can be the first use case.
> >
> > While devstack uses these logging format string options, the defaults
> > (which I presume are operator centric) do not.  One of my goals for the
> > Austin Ops summit is to get to talk with actual operators and find out
> > what is really in use.  Regardless, the capacity to choose should be
> > available when possible if the information is already identified without
> > subsequent lookup.
> >
> >
> > Ronald
> >
> >
> > [1]
> https://blueprints.launchpad.net/oslo.log/+spec/app-agnostic-logging-parameters
> > [2] https://review.openstack.org/#/c/290907/
> > [3]
> http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
> > [4] https://review.openstack.org/#/c/274186/
> >
> >
> >
> >
> >
> > On Tue, Apr 5, 2016 at 2:31 PM, Sean Dague wrote:
> >
> > I was trying to clean up the divergent logging definitions in
> devstack
> > as part of scrubbing out 'tenant' references -
> > https://review.openstack.org/#/c/301801/ and in doing so stumbled
> over
> > the fact that the extremely useful project_name and user_name fields
> are
> > not in base oslo.context.
> >
> >
> https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L148-L159
> >
> > These are always available to be set -
> >
> http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
> >
> > And they are extremely valuable when a human is looking at logs, as
> you
> > actually can remember names when looking at various services to cross
> > reference things. Nova has a custom context that sets these things -
> >
> https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/api/auth.py#L114-L115
> >
> > I would really like these to be available in all services using
> > oslo.context.
> >
> > So the question is, were these not implemented on purpose? Is the
> > opposition to putting them into oslo.context?
> >
> > Please let me know before I start going down this path.
> >
> > -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> >
>  __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [all][trove] Trove bug scrub

2016-04-05 Thread Victoria Martínez de la Cruz
++!

2016-04-05 17:54 GMT-03:00 Amrith Kumar :

>
>
> *From:* Victoria Martínez de la Cruz [mailto:
> victo...@vmartinezdelacruz.com]
> *Sent:* Tuesday, April 05, 2016 4:35 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [all][trove] Trove bug scrub
>
>
>
> Thanks Amrith.
>
>
>
> I'll follow your lead and do some triaging as well.
>
>
>
> We should organize bug triage days to make this process easier next time.
>
>
>
> *[amrith] I’ve been meaning to do this for some time and this morning I
> finally got around to it. But in the future, it would be good to stay ahead
> of this so we don’t have so much of a backlog.*
>
>
>
> *How about bug fixing days or bug squashing days? I’d love to do that as
> well and further trim the backlog.*
>
>
>
> 2016-04-05 14:44 GMT-03:00 Flavio Percoco :
>
> On 05/04/16 15:15 +, Amrith Kumar wrote:
>
> If you are subscribed to bug notifications from Trove, you’d have received a
> lot of email from me over the past couple of days as I’ve gone through the LP
> bug list for the project and attempted to do some spring cleaning.
>
>
>
> Here’s (roughly) what I’ve tried to do:
>
>
>
> - many bugs that have been inactive, and/or assigned to people who have not
> been active in Trove for a while have been updated and are now no longer
> assigned to anyone
>
> - many bugs related to reviews that have been abandoned at some time
> and were marked as “in-progress” at the time are now updated; some are
> marked ‘confirmed’, others which appear to be no longer the case are set to
> ‘incomplete’
>
> - some bugs that were recently fixed, or are in the process of getting
> merged have been nominated for backports to mitaka and in some cases
> liberty
>
>
> Awesome! I'll go through the backport reviews.
>
> Over the next several days, I will continue this process and start
> assigning meaningful milestones for the bugs that don’t have them.
>
>
>
> There are now a number of bugs marked as ‘low-hanging-fruit’, and several
> others that are unassigned, so please feel free to pitch in and help with
> making this list shorter.
>
>
> ++
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-05 Thread Ian Cordasco
 

-Original Message-
From: Michael Still 
Reply: Michael Still 
Date: April 5, 2016 at 16:30:36
To: Ian Cordasco 
CC: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` 
commands from novaclient

> So like I said, I haven't looked at it at all because I am middle aged,
> stuck in my ways, hate freedom, and because I didn't think of it.
>  
> Does it include a command line interface that's not crazy as well?
>  
> If so, why are we maintaining duplicate sets of libraries / clients? It
> seems like a lot of wasted effort.

I think the goal for the python-openstacksdk was to create an effort, like the 
python-openstackclient, which would provide a good user experience (in this 
case for developers) with a consistent design for a single library to eliminate 
the need for every other library.

The goal is to centralize the server expertise and have that be combined with 
the folks who know how to better design a library for a good developer 
experience (kind of like with osc). That said, I think the SDK would have 
gained more traction if the OSC project had adopted it (which it is hesitant 
to do given the troubles it has had with conflicting versions of the service 
clients).

I'll let those projects developers/core reviewers speak more to all of that 
since I've been out of the loop for a while.
--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-05 Thread Michael Still
On Wed, Apr 6, 2016 at 7:28 AM, Ian Cordasco  wrote:

>
>
> -Original Message-
> From: Michael Still 
> Reply: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Date: April 5, 2016 at 16:11:05
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject:  Re: [openstack-dev] [nova][glance] Proposal to remove `nova
> image-*` commands from novaclient
>
> > As a recent newcomer to using our client libraries, my only real
> > objection to this plan is that our client libraries are a mess [1][2].
> > The interfaces
> > we expect users to use are quite different for basic things like initial
> > auth between the various clients, and by introducing another library we
> > insist people use we're going to force a lot of devs to eventually go
> > through having to understand how those other people did that thing.
> >
> > I guess I could ease my concerns here if we could agree to some sort of
> > standard for what auth in a client library looks like...
> >
> > Some examples of auth at the moment:
> >
> > self.glance = glance_client.Client('2', endpoint, token=token)
> > self.ironic = ironic_client.get_client(1, ironic_url=endpoint,
> > os_auth_token=token)
> > self.nova = nova_client.Client('2', bypass_url=endpoint,
> auth_token=token)
> >
> > Note how we can't decide if the version number is a string or an int, and
> > the argument names for the endpoint and token are different in all three.
> > It's like we're _trying_ to make this hard.
> >
> > Michael
> >
> > 1: I guess I might be doing it wrong, but in that case I'll just mutter
> > about the API docs instead.
> > 2: I haven't looked at the unified openstack client library to see if
> > it's less crazy.
> >
>
> What if we just recommend everyone use the openstacksdk (
> https://pypi.python.org/pypi/openstacksdk)? We could add more developer
> resources by deprecating our individual client libraries to use that
> instead? It's consistent and well-designed and would probably benefit from
> us actively helping with each service's portion.
>

So like I said, I haven't looked at it at all because I am middle aged,
stuck in my ways, hate freedom, and because I didn't think of it.

Does it include a command line interface that's not crazy as well?

If so, why are we maintaining duplicate sets of libraries / clients? It
seems like a lot of wasted effort.

Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-05 Thread Ian Cordasco
 

-Original Message-
From: Michael Still 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: April 5, 2016 at 16:11:05
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` 
commands from novaclient

> As a recent newcomer to using our client libraries, my only real objection
> to this plan is that our client libraries are a mess [1][2]. The interfaces
> we expect users to use are quite different for basic things like initial
> auth between the various clients, and by introducing another library we
> insist people use we're going to force a lot of devs to eventually go
> through having to understand how those other people did that thing.
>  
> I guess I could ease my concerns here if we could agree to some sort of
> standard for what auth in a client library looks like...
>  
> Some examples of auth at the moment:
>  
> self.glance = glance_client.Client('2', endpoint, token=token)
> self.ironic = ironic_client.get_client(1, ironic_url=endpoint,
> os_auth_token=token)
> self.nova = nova_client.Client('2', bypass_url=endpoint, auth_token=token)
>  
> Note how we can't decide if the version number is a string or an int, and
> the argument names for the endpoint and token are different in all three.
> It's like we're _trying_ to make this hard.
>  
> Michael
>  
> 1: I guess I might be doing it wrong, but in that case I'll just mutter
> about the API docs instead.
> 2: I haven't looked at the unified openstack client library to see if it's
> less crazy.
>  

What if we just recommend everyone use the openstacksdk 
(https://pypi.python.org/pypi/openstacksdk)? We could add more developer 
resources by deprecating our individual client libraries to use that instead? 
It's consistent and well-designed and would probably benefit from us actively 
helping with each service's portion.
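For anyone who hasn't tried it, a minimal sketch of what that looks like 
(credential values are placeholders, and exact keyword arguments may vary 
by SDK release):

    # Sketch only: one auth block, one Connection, per-service proxies
    # with consistent conventions.
    from openstack import connection

    conn = connection.Connection(auth_url='http://keystone:5000/v3',
                                 project_name='demo',
                                 username='demo',
                                 password='secret')

    for image in conn.image.images():
        print(image.name)
    for server in conn.compute.servers():
        print(server.name)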

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-05 Thread Michael Still
As a recent newcomer to using our client libraries, my only real objection
to this plan is that our client libraries are a mess [1][2]. The interfaces
we expect users to use are quite different for basic things like initial
auth between the various clients, and by introducing another library we
insist people use we're going to force a lot of devs to eventually go
through having to understand how those other people did that thing.

I guess I could ease my concerns here if we could agree to some sort of
standard for what auth in a client library looks like...

Some examples of auth at the moment:

self.glance = glance_client.Client('2', endpoint, token=token)
self.ironic = ironic_client.get_client(1, ironic_url=endpoint,
os_auth_token=token)
self.nova = nova_client.Client('2', bypass_url=endpoint, auth_token=token)

Note how we can't decide if the version number is a string or an int, and
the argument names for the endpoint and token are different in all three.
It's like we're _trying_ to make this hard.
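For contrast, a rough sketch of what this could look like if everything 
took a keystoneauth1 Session (glance and nova already accept one; the 
credential values below are placeholders):

    # Sketch only: one Session shared across clients.
    from keystoneauth1 import identity, session
    from glanceclient import client as glance_client
    from novaclient import client as nova_client

    auth = identity.Password(auth_url='http://keystone:5000/v3',
                             username='demo', password='secret',
                             project_name='demo',
                             user_domain_id='default',
                             project_domain_id='default')
    sess = session.Session(auth=auth)

    glance = glance_client.Client('2', session=sess)
    nova = nova_client.Client('2', session=sess)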

Michael

1: I guess I might be doing it wrong, but in that case I'll just mutter
about the API docs instead.
2: I haven't looked at the unified openstack client library to see if it's
less crazy.

On Wed, Apr 6, 2016 at 5:46 AM, Matt Riedemann 
wrote:

> As we discuss the glance v2 spec for nova, questions are coming up around
> what to do about the nova images API which is a proxy for glance v1.
>
> I don't want to add glance v2 support to the nova API since that's just
> more proxy gorp.
>
> I don't think we can just make the nova images API fail if we're using
> glance v2 in the backend, but we'll need some translation later for:
>
> user->nova-images->glance.v2->glance.v1(ish)->user
>
> But we definitely want people to stop using the nova images API.
>
> I think we can start by deprecating the nova images-* commands in
> python-novaclient, and probably the python API bindings in novaclient too.
>
> People should be using python-glanceclient or python-openstackclient for
> the CLI, and python-glanceclient or some SDK for the python API bindings.
>
> We recently removed the deprecated nova volume-* stuff from novaclient,
> this would be the same idea.
>
> Does anyone have an issue with this?
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][trove] Trove bug scrub

2016-04-05 Thread Amrith Kumar

From: Victoria Martínez de la Cruz [mailto:victo...@vmartinezdelacruz.com]
Sent: Tuesday, April 05, 2016 4:35 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [all][trove] Trove bug scrub

Thanks Amrith.

I'll follow your lead and do some triaging as well.

We should organize bug triage days to make this process easier next time.

[amrith] I’ve been meaning to do this for some time and this morning I finally 
got around to it. But in the future, it would be good to stay ahead of this so 
we don’t have so much of a backlog.

How about bug fixing days or bug squashing days? I’d love to do that as well 
and further trim the backlog.

2016-04-05 14:44 GMT-03:00 Flavio Percoco:
On 05/04/16 15:15 +, Amrith Kumar wrote:
If you are subscribed to bug notifications from Trove, you’d have received a
lot of email from me over the past couple of days as I’ve gone through the LP
bug list for the project and attempted to do some spring cleaning.



Here’s (roughly) what I’ve tried to do:



- many bugs that have been inactive, and/or assigned to people who have not
been active in Trove for a while have been updated and are now no longer
assigned to anyone

- many bugs related to reviews that have been abandoned at some time
and were marked as “in-progress” at the time are now updated; some are marked
‘confirmed’, others which appear to be no longer the case are set to
‘incomplete’

- some bugs that were recently fixed, or are in the process of getting
merged have been nominated for backports to mitaka and in some cases liberty

Awesome! I'll go through the backport reviews.
Over the next several days, I will continue this process and start assigning
meaningful milestones for the bugs that don’t have them.



There are now a number of bugs marked as ‘low-hanging-fruit’, and several
others that are unassigned so please feel free to pitch in and help with making
this list shorter.

++

Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][trove] Trove bug scrub

2016-04-05 Thread Victoria Martínez de la Cruz
Thanks Amrith.

I'll follow your lead and do some triaging as well.

We should organize bug triage days to make this process easier next time.

2016-04-05 14:44 GMT-03:00 Flavio Percoco :

> On 05/04/16 15:15 +, Amrith Kumar wrote:
>
>> If you are subscribed to bug notifications from Trove, you’d have received a
>> lot of email from me over the past couple of days as I’ve gone through the LP
>> bug list for the project and attempted to do some spring cleaning.
>>
>>
>>
>> Here’s (roughly) what I’ve tried to do:
>>
>>
>>
>> - many bugs that have been inactive, and/or assigned to people who have not
>> been active in Trove for a while have been updated and are now no longer
>> assigned to anyone
>>
>> - many bugs related to reviews that have been abandoned at some time
>> and were marked as “in-progress” at the time are now updated; some are
>> marked ‘confirmed’, others which appear to be no longer the case are set to
>> ‘incomplete’
>>
>> - some bugs that were recently fixed, or are in the process of getting
>> merged have been nominated for backports to mitaka and in some cases
>> liberty
>>
>>
> Awesome! I'll go through the backport reviews.
>
>> Over the next several days, I will continue this process and start
>> assigning meaningful milestones for the bugs that don’t have them.
>>
>>
>>
>> There are now a number of bugs marked as ‘low-hanging-fruit’, and several
>> others that are unassigned, so please feel free to pitch in and help with
>> making this list shorter.
>>
>>
> ++
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-05 Thread Ian Cordasco
On Apr 5, 2016 2:49 PM, "Matt Riedemann"  wrote:
>
> As we discuss the glance v2 spec for nova, questions are coming up around
what to do about the nova images API which is a proxy for glance v1.
>
> I don't want to add glance v2 support to the nova API since that's just
more proxy gorp.
>
> I don't think we can just make the nova images API fail if we're using
glance v2 in the backend, but we'll need some translation later for:
>
> user->nova-images->glance.v2->glance.v1(ish)->user
>
> But we definitely want people to stop using the nova images API.
>
> I think we can start by deprecating the nova images-* commands in
python-novaclient, and probably the python API bindings in novaclient too.
>
> People should be using python-glanceclient or python-openstackclient for
the CLI, and python-glanceclient or some SDK for the python API bindings.
>
> We recently removed the deprecated nova volume-* stuff from novaclient,
this would be the same idea.
>
> Does anyone have an issue with this?

Seems reasonable to me. :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.context and name fields

2016-04-05 Thread Sean Dague
Cool. Great.

In looking at this code a bit more I think we're missing out on some
commonality by the fact that this nice bit of common parsing -
https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L138-L161
is actually hidden behind a factory pattern, and not used by anyone in
OpenStack -
http://codesearch.openstack.org/?q=from_environ&i=nope&files=&repos=

If instead of standardizing the args to the context constructor, we
could remove a bunch of them and extract that data from a passed
environment during the constructor that should remove a bunch of parsing
code in every project. It would also mean that we could easily add
things like project_name and user_name in, and they would be available
to all consumers.
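A consumer could then be as small as this (a plain WSGI app, purely for 
illustration):

    # Sketch only: build the context straight from the WSGI environ that
    # keystonemiddleware has already populated.
    from oslo_context import context

    def application(environ, start_response):
        ctxt = context.RequestContext.from_environ(environ)
        # request_id, user, tenant, auth_token, etc. are populated
        # without any per-project header parsing.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [ctxt.request_id.encode('utf-8')]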

-Sean

On 04/05/2016 03:39 PM, Ronald Bradford wrote:
> Sean,
> 
> I cannot speak to historically why there were not there, but I am
> working through the app-agnostic-logging-parameters blueprint [1] right
> now and it's very related to this.  As part of this work I would be
> reviewing attributes that are more commonly used in subclassed context
> objects for inclusion into the base oslo.context class, a step before a
> more kwargs init() approach that many subclassed context objects utilize now.
> 
> I am also proposing the standardization of context arguments [2]
> (specifically ids), and names are not mentioned but I would like to
> follow the proposed convention.
> 
> However, as you point out in the middleware [3], if the information is
> already available I see no reason not to facilitate the base oslo.context
> class consuming this for subsequent use by logging.  FYI, the
> get_logging_values() work in [4] is specifically to add logging-only values,
> and this can be the first use case.
> 
> While devstack uses these logging format string options, the defaults
> (which I presume are operator centric) do not.  One of my goals for the
> Austin Ops summit is to get to talk with actual operators and find out
> what is really in use.  Regardless, the capacity to choose should be
> available when possible if the information is already identified without
> subsequent lookup.
> 
> 
> Ronald 
> 
> 
> [1] 
> https://blueprints.launchpad.net/oslo.log/+spec/app-agnostic-logging-parameters
> [2] https://review.openstack.org/#/c/290907/
> [3] 
> http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
> [4] https://review.openstack.org/#/c/274186/
> 
> 
> 
> 
> 
> On Tue, Apr 5, 2016 at 2:31 PM, Sean Dague wrote:
> 
> I was trying to clean up the divergent logging definitions in devstack
> as part of scrubbing out 'tenant' references -
> https://review.openstack.org/#/c/301801/ and in doing so stumbled over
> the fact that the extremely useful project_name and user_name fields are
> not in base oslo.context.
> 
> 
> https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L148-L159
> 
> These are always available to be set -
> 
> http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
> 
> And they are extremely valuable when a human is looking at logs, as you
> actually can remember names when looking at various services to cross
> reference things. Nova has a custom context that sets these things -
> 
> https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/api/auth.py#L114-L115
> 
> I would really like these to be available in all services using
> oslo.context.
> 
> So the question is, were these not implemented on purpose? Is the
> opposition to putting them into oslo.context?
> 
> Please let me know before I start going down this path.
> 
> -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-05 Thread Matt Riedemann
As we discuss the glance v2 spec for nova, questions are coming up 
around what to do about the nova images API which is a proxy for glance v1.


I don't want to add glance v2 support to the nova API since that's just 
more proxy gorp.


I don't think we can just make the nova images API fail if we're using 
glance v2 in the backend, but we'll need some translation later for:


user->nova-images->glance.v2->glance.v1(ish)->user

But we definitely want people to stop using the nova images API.

I think we can start by deprecating the nova images-* commands in 
python-novaclient, and probably the python API bindings in novaclient too.


People should be using python-glanceclient or python-openstackclient for 
the CLI, and python-glanceclient or some SDK for the python API bindings.
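For example, the glanceclient path looks roughly like this (endpoint and 
token are placeholders; a keystoneauth session works as well):

    # Sketch only: the replacement for `nova image-list` at the Python level.
    from glanceclient import client as glance_client

    glance = glance_client.Client('2', 'http://glance:9292', token='TOKEN')
    for image in glance.images.list():
        print(image.id, image.name)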


We recently removed the deprecated nova volume-* stuff from novaclient, 
this would be the same idea.


Does anyone have an issue with this?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.context and name fields

2016-04-05 Thread Ronald Bradford
Sean,

I cannot speak to historically why there were not there, but I am working
through the app-agnostic-logging-parameters blueprint [1] right now and
it's very related to this.  As part of this work I would be reviewing
attributes that are more commonly used in subclassed context objects for
inclusion into the base oslo.context class, a step before a more kwargs
init() approach that many subclassed context objects utilize now.

I am also proposing the standardization of context arguments [2]
(specifically ids), and names are not mentioned but I would like to follow
the proposed convention.

However, as you point out in the middleware [3], if the information is
already available I see no reason not to facilitate the base oslo.context
class consuming this for subsequent use by logging.  FYI, the
get_logging_values() work in [4] is specifically to add logging-only values,
and this can be the first use case.

While devstack uses these logging format string options, the defaults
(which I presume are operator centric) do not.  One of my goals for the
Austin Ops summit is to get to talk with actual operators and find out what
is really in use.  Regardless, the capacity to choose should be available
when possible if the information is already identified without subsequent
lookup.
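To make the direction concrete, a sketch of the kind of subclass Nova 
effectively carries today; get_logging_values() here is the method 
proposed in [4], so treat it as an assumption until that lands:

    # Sketch only: a context that accepts and logs human-readable names.
    from oslo_context import context

    class NamedRequestContext(context.RequestContext):
        def __init__(self, user_name=None, project_name=None, **kwargs):
            super(NamedRequestContext, self).__init__(**kwargs)
            self.user_name = user_name
            self.project_name = project_name

        def get_logging_values(self):
            values = self.to_dict()
            values['user_name'] = self.user_name
            values['project_name'] = self.project_name
            return values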


Ronald


[1]
https://blueprints.launchpad.net/oslo.log/+spec/app-agnostic-logging-parameters
[2] https://review.openstack.org/#/c/290907/
[3]
http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
[4] https://review.openstack.org/#/c/274186/





On Tue, Apr 5, 2016 at 2:31 PM, Sean Dague  wrote:

> I was trying to clean up the divergent logging definitions in devstack
> as part of scrubbing out 'tenant' references -
> https://review.openstack.org/#/c/301801/ and in doing so stumbled over
> the fact that the extremely useful project_name and user_name fields are
> not in base oslo.context.
>
>
> https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L148-L159
>
> These are always available to be set -
>
> http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
>
> And they are extremely valuable when a human is looking at logs, as you
> actually can remember names when looking at various services to cross
> reference things. Nova has a custom context that sets these things -
>
> https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/api/auth.py#L114-L115
>
> I would really like these to be available in all services using
> oslo.context.
>
> So the question is, were these not implemented on purpose? Is the
> opposition to putting them into oslo.context?
>
> Please let me know before I start going down this path.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-04-05 Thread Hongbin Lu
Hi Monty,

Thanks for your guidance. I have appended your inputs to the blueprint [1].

[1] https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips

Best regards,
Hongbin

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com] 
Sent: March-31-16 1:18 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

A few things:

Public IPs and Floating IPs are not the same thing.
Some clouds have public IPs. Some have floating ips. Some have both.

I think it's important to be able to have magnum work with all of the above.

If the cloud does not require using a floating IP (as most do not) to get 
externally routable network access, magnum should work with that.

If the cloud does require using a floating IP (as some do) to get externally 
routable network access, magnum should be able to work with that.

In either case, it's also possible the user will not desire the thing they are 
deploying in magnum to be assigned an IP on a network that routes off of the 
cloud. That should also be supported.

Shade has code to properly detect most of those situations that you can look at 
for all of the edge cases - however, since magnum is installed by the operator, 
I'd suggest making a config value for it that allows the operator to express 
whether or not the cloud in question requires floating ips as it's 
EXCEPTIONALLY hard to accurately detect.
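A minimal sketch of such a knob (the option name and group are invented 
for illustration):

    # Sketch only: let the operator declare whether floating IPs are
    # required for externally routable access.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('floating_ip_enabled',
                    default=True,
                    help='Whether this cloud requires a floating IP for '
                         'externally routable network access.'),
    ], group='bay')

    def needs_floating_ip():
        return CONF.bay.floating_ip_enabled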

On 03/31/2016 12:42 PM, Guz Egor wrote:
> Hongbin,
> It's correct, I was involved in two big OpenStack private cloud 
> deployments and we never had public ips.
> In such a case Magnum shouldn't create any private networks; the operator
> needs to provide a network id/name or it should use the default (we used to
> have networking selection logic in
> the scheduler).
>
> ---
> Egor
>
> --
> --
> *From:* Hongbin Lu 
> *To:* Guz Egor ; OpenStack Development Mailing 
> List (not for usage questions) 
> *Sent:* Thursday, March 31, 2016 7:29 AM
> *Subject:* RE: [openstack-dev] [magnum] Are Floating IPs really needed 
> for all nodes?
>
> Egor,
> I agree with what you said, but I think we need to address the problem 
> that some clouds lack public IP addresses. It is not uncommon for a 
> private cloud to run without public IP addresses, having already figured 
> out how to route traffic in and out. In such a case, a bay doesn’t need 
> to have floating IPs, and the NodePort feature seems to work with the 
> private IP address.
> Generally speaking, I think it is useful to have a feature that allows 
> bays to work without public IP addresses. I don’t want to end up in a 
> situation where Magnum is unusable because the clouds don’t have enough 
> public IP addresses.
> Best regards,
> Hongbin
> *From:*Guz Egor [mailto:guz_e...@yahoo.com]
> *Sent:* March-31-16 12:08 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] Are Floating IPs really needed 
> for all nodes?
> -1
> who is going to run/support this proxy? also keep in mind that 
> Kubernetes Service/NodePort
> (http://kubernetes.io/docs/user-guide/services/#type-nodeport)
> functionality is not going to work without a public IP, and this is a 
> very handy feature.
> ---
> Egor
> --
> --
> *From:*王华>
> *To:* OpenStack Development Mailing List (not for usage questions) 
>  >
> *Sent:* Wednesday, March 30, 2016 8:41 PM
> *Subject:* Re: [openstack-dev] [magnum] Are Floating IPs really needed 
> for all nodes?
> Hi yuanying,
> I agree with reducing the usage of floating IPs. But as far as I know, if 
> nodes need to pull docker images from Docker Hub, floating IPs are 
> needed. To reduce the usage of floating IPs, we can use a proxy: only 
> some nodes have floating ips, and the other nodes can access Docker Hub 
> through the proxy.
> Best Regards,
> Wanghua
> On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao  > wrote:
>
> Hi Yuanying,
> +1
> I think we can add an option for whether to use floating IP addresses, 
> since IP addresses are the kind of resource it is not wise to waste.
> On 2016年03月31日 10:40, 大塚元央 wrote:
>
> Hi team,
> Previously, we had a reason why all nodes should have floating ips [1].
> But now we have LoadBalancer features for masters [2] and minions [3],
> and minions do not necessarily need to have floating ips [4].
> I think it’s time to remove floating ips from all nodes.
> I know we are using floating ips in the gate to get log files,
> so it’s not a good idea to remove floating ips entirely.
> I want to introduce 

Re: [openstack-dev] [osc] Meeting time preferences for OSC team

2016-04-05 Thread Dean Troyer
On Tue, Apr 5, 2016 at 10:44 AM, Richard Theis  wrote:

> Here are my preferences:
>
> 1. even week: E.1 or E.4
> 2. odd week: O.3
>

I really don't want to go against the API-WG but want to keep it on
Thursday if possible:

E.4
O.3

Thanks for doing this, Sheel

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca][release] missing build artifacts

2016-04-05 Thread Clark Boylan
On Tue, Apr 5, 2016, at 11:56 AM, Hochmuth, Roland M wrote:
> Thanks Doug, Thierry and Davanum. Sorry about all the extra work that
> I've caused.
> 
> It sounds like all Python projects/deliverables are in reasonable shape,
> but if not, please let me know. 
> 
> Not sure what we should do about the jars at this point. We had started
> to discuss a plan to manually copy the jars over to the proper location.
> I was hoping we could just do this temporarily for Mitaka. Unfortunately,
> there are a few steps that need to be resolved prior to doing that.
> 
> Currently, the java builds overwrite the previous build. The version
> number of the jar that is built matches the version in the pom file.
> See, http://tarballs.openstack.org/ci/monasca-thresh/, for an example.
> 
> What we are looking into is modifying the pom files for the java repos,
> so that the version number of the jar matches the tag when built (not
> what is in the pom), and modifying the name of the jar, by removing the
> word SNAPSHOT.
> 
> If we do that, we think we can get a name for the jar with a version that
> matches the latest tag on whatever branch is being used. This should be
> similar to how the Python wheels are named.
> 
> We could manually copy in the short-term. But the goal is to add an
> automatic copy to the appropriate location in
> http://tarballs.openstack.org/.
> 
> Unfortunately, for the java related deliverables, it sounds like we are a
> little late for all this to get done prior to Mitaka. Not sure if this
> can be added post Mitaka.

The infra team has to publish jars as well and has a set of jobs for
that at [0]. It should figure out your versions from git as well and set
them all automagically with the correct maven configuration. You might
be able to just add these jobs assuming you use maven.

[0]
https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/maven-plugin-jobs.yaml
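
For reference, opting a repo into that job set is usually a small
project-config change along these lines (illustrative only -- take the
exact job/template names from the maven-plugin-jobs.yaml file above):

    # sketch of a Jenkins Job Builder project entry in project-config;
    # the job group name must match what maven-plugin-jobs.yaml defines
    - project:
        name: monasca-thresh
        jobs:
          - maven-plugin-jobs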

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-04-05 Thread Hongbin Lu
Egor,
I agree with you. I think Magnum should support another option to connect a bay 
to an existing neutron private network instead of creating one. If you like, we 
can discuss it separately in our next team meeting or at the design summit.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-31-16 12:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hongbin,

It's correct, I was involved in two big OpenStack private cloud deployments and 
we never had public ips.
In such a case Magnum shouldn't create any private networks; the operator needs
to provide a network id/name, or it should use a default (we used to have
network selection logic in the scheduler).

---
Egor


From: Hongbin Lu >
To: Guz Egor >; OpenStack 
Development Mailing List (not for usage questions) 
>
Sent: Thursday, March 31, 2016 7:29 AM
Subject: RE: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Egor,

I agree with what you said, but I think we need to address the problem that 
some clouds lack public IP addresses. It is not uncommon for a private cloud 
to run without public IP addresses, having already figured out how to route 
traffic in and out. In such a case, a bay doesn’t need to have floating IPs, 
and the NodePort feature seems to work with the private IP address.

Generally speaking, I think it is useful to have a feature that allows bays to 
work without public IP addresses. I don’t want to end up in a situation where 
Magnum is unusable because the clouds don’t have enough public IP addresses.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-31-16 12:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

-1

who is going to run/support this proxy? also keep in mind that Kubernetes 
Service/NodePort (http://kubernetes.io/docs/user-guide/services/#type-nodeport)
functionality is not going to work without a public IP, and this is a very 
handy feature.

---
Egor


From: 王华 >
To: OpenStack Development Mailing List (not for usage questions) 
>
Sent: Wednesday, March 30, 2016 8:41 PM
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hi yuanying,

I agree with reducing the usage of floating IPs. But as far as I know, if nodes
need to pull docker images from Docker Hub, floating IPs are needed. To reduce
the usage of floating IPs, we can use a proxy: only some nodes have floating
ips, and the other nodes can access Docker Hub through the proxy.

Best Regards,
Wanghua

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao 
> wrote:
Hi Yuanying,
+1
I think we can add an option for whether to use floating IP addresses, since 
IP addresses are the kind of resource it is not wise to waste.

On 2016年03月31日 10:40, 大塚元央 wrote:
Hi team,

Previously, we had a reason why all nodes should have floating ips [1].
But now we have LoadBalancer features for masters [2] and minions [3],
and minions do not necessarily need to have floating ips [4].
I think it’s time to remove floating ips from all nodes.

I know we are using floating ips in the gate to get log files,
so it’s not a good idea to remove floating ips entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.
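
(As a purely hypothetical illustration of the proposed knob -- the flag
below is not an implemented option, and the exact spelling is of course up
for review:

    magnum baymodel-create --name k8s-internal ... --disable-floating-ips-to-nodes
)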

Thoughts?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html

Thanks
-yuanying

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Best Regards, Eli Qiao (乔立勇)
Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack 

Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-04-05 Thread Carl Baldwin
I think the only thing standing in our way is this bug [1].  Ryan
Tidwell and I are working on this.

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1543094

On Mon, Apr 4, 2016 at 3:48 PM, John Belamaric  wrote:
> I was on vacation last week so I am just seeing this now. I agree that now 
> that we are in Newton we should just remove the non-pluggable code and move 
> forward with the migration.
>
> John
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-04-05 Thread Hongbin Lu
Hi all,
Thanks for your inputs. We discussed this proposal in our team meeting [1], and 
we all agreed to support an option to remove the need for floating IPs. A 
blueprint was created for implementing this feature: 
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips . Please 
feel free to assign it to yourselves if you are interested in implementing it. Thanks.
[1] 
http://eavesdrop.openstack.org/meetings/containers/2016/containers.2016-04-05-16.00.txt

Best regards,
Hongbin

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: April-01-16 1:57 AM
To: Guz Egor; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hi Egor,

I know some people still want to use floating ips to connect to nodes,
so I will not remove the floating ip feature completely.
I will only introduce an option that disables assigning floating ips to nodes,
because some people don’t want floating ips assigned to their nodes.

Thanks
-yuanying
2016年3月31日(木) 13:11 Guz Egor >:
-1

who is going to run/support this proxy? also keep in mind that Kubernetes 
Service/NodePort (http://kubernetes.io/docs/user-guide/services/#type-nodeport)
functionality is not going to work without a public IP, and this is a very 
handy feature.

---
Egor


From: 王华 >
To: OpenStack Development Mailing List (not for usage questions) 
>
Sent: Wednesday, March 30, 2016 8:41 PM

Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hi yuanying,

I agree with reducing the usage of floating IPs. But as far as I know, if nodes
need to pull docker images from Docker Hub, floating IPs are needed. To reduce
the usage of floating IPs, we can use a proxy: only some nodes have floating
ips, and the other nodes can access Docker Hub through the proxy.

Best Regards,
Wanghua

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao 
> wrote:
Hi Yuanying,
+1
I think we can add an option for whether to use floating IP addresses, since 
IP addresses are the kind of resource it is not wise to waste.

On 2016年03月31日 10:40, 大塚元央 wrote:
Hi team,

Previously, we had a reason why all nodes should have floating ips [1].
But now we have LoadBalancer features for masters [2] and minions [3],
and minions do not necessarily need to have floating ips [4].
I think it’s time to remove floating ips from all nodes.

I know we are using floating ips in the gate to get log files,
so it’s not a good idea to remove floating ips entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.

Thoughts?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html

Thanks
-yuanying


__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca][release] missing build artifacts

2016-04-05 Thread Hochmuth, Roland M
Thanks Doug, Thierry and Davanum. Sorry about all the extra work that I've 
caused.

It sounds like all Python projects/deliverables are in reasonable shape, but if 
not, please let me know. 

Not sure what we should do about the jars at this point. We had started to 
discuss a plan to manually copy the jars over to the proper location. I was 
hoping we could just do this temporarily for Mitaka. Unfortunately, there are a 
few steps that need to be resolved prior to doing that.

Currently, the java builds overwrite the previous build. The version number of 
the jar that is built matches the version in the pom file. See 
http://tarballs.openstack.org/ci/monasca-thresh/ for an example.

What we are looking into is modifying the pom files for the java repos, so that 
the version number of the jar matches the tag when built (not what is in the 
pom), and modifying the name of the jar, by removing the word SNAPSHOT.

If we do that, we think we can get a name for the jar with a version that 
matches the latest tag on whatever branch is being used. This should be similar 
to how the Python wheels are named.

We could manually copy in the short-term. But the goal is to add an automatic 
copy to the appropriate location in http://tarballs.openstack.org/.

Unfortunately, for the java related deliverables, it sounds like we are a 
little late for all this to get done prior to Mitaka. Not sure if this can be 
added post Mitaka.
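
For what it's worth, the tag-derived version could probably be set at build
time with the maven versions plugin, roughly like this (a sketch, assuming
the repos build with plain maven and that the latest reachable tag is the
wanted version):

    # sketch: derive the jar version from the most recent git tag
    # instead of the static -SNAPSHOT version hard-coded in the pom
    VERSION=$(git describe --tags --abbrev=0)
    mvn versions:set -DnewVersion="${VERSION}"
    mvn clean package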

Regards --Roland



On 4/5/16, 3:15 AM, "Thierry Carrez"  wrote:

>Davanum Srinivas wrote:
>> Please see below:
>>
>> On Sat, Apr 2, 2016 at 8:41 AM, Doug Hellmann  wrote:
>>> Excerpts from Hochmuth, Roland M's message of 2016-04-02 01:35:35 +:
 Hi Doug, You had mentioned issues with three repos:

 1. monasca-ceilometer
 2. monasca-log-api
 3. monasca-thresh

 All the repos that have Python code I believe are in reasonable shape with 
 respect to the Python deliverables except for the following two repos:

 1. monasca-ceilometer
 2. monasca-log-api

 I'm not sure we should attempt to resolve these two repos for the Mitaka 
 release, but we can try. It might be too late. They aren't in heavy usage 
 and are relatively new.
>>>
>>> I think for those you were missing the "venv" environment in tox.ini
>>> that the jobs use to run arbitrary commands. Have a look at some of the
>>> other repos for an example of how to set that up, or ask the infra team
>>> (I'm not sure where it's documented, unfortunately, or I would direct
>>> you there).
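>>>
>>> For reference, the conventional stanza is tiny -- a sketch of what most
>>> OpenStack repos carry in tox.ini:
>>>
>>>     [testenv:venv]
>>>     commands = {posargs}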
>>
>> The monasca-log-api venv problem has been fixed in:
>> https://review.openstack.org/#/c/299936/
>
>Thanks dims!
>
>Roland: Now we'll need a 0.0.3 tag request on stable/mitaka to trigger a 
>tarball rebuild.
>
>If done quickly, that should let us keep Monasca deliverables in Mitaka.
>
>-- 
>Thierry Carrez (ttx)
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Launchpad bug spring cleaning day Monday 4/18

2016-04-05 Thread Matt Riedemann
We're going to have a day of just cleaning out the launchpad bugs for 
Nova on Monday 4/18.


This isn't a bug squashing day where people are proposing patches and 
the core team is reviewing them.


This is purely about cleaning the garbage out of launchpad.

Markus Zoeller has a nice dashboard we can use. I'd like to specifically 
focus on trimming these two tabs:


1. Inconsistent: 
http://45.55.105.55:8082/bugs-dashboard.html#tabInconsistent (142 bugs 
today)


2. Stale Incomplete: 
http://45.55.105.55:8082/bugs-dashboard.html#tabIncompleteStale (59 bugs 
today)


A lot of these are probably duplicates by now, or fixed, or just invalid 
and we should close them out. That's what we'll focus on.


I'd really like to see solid participation from the core team given the 
core team should know a lot of what's already fixed or invalid, and 
being part of the core team is more than just reviewing code; it's also 
making sure our bug backlog is reasonably sane.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.context and name fields

2016-04-05 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-04-05 14:31:46 -0400:
> I was trying to clean up the divergent logging definitions in devstack
> as part of scrubbing out 'tenant' references -
> https://review.openstack.org/#/c/301801/ and in doing so stumbled over
> the fact that the extremely useful project_name and user_name fields are
> not in base oslo.context.
> 
> https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L148-L159
> 
> These are always available to be set -
> http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
> 
> And they are extremely valuable when a human is looking at logs, as you
> actually can remember names when looking at various services to cross
> reference things. Nova has a custom context that sets these things -
> https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/api/auth.py#L114-L115
> 
> I would really like these to be available in all services using
> oslo.context.
> 
> So the question is, were these not implemented on purpose? Is the
> opposition to putting them into oslo.context?

It's probably just a matter of not getting to them. We have a couple of
incomplete specs related to contexts and logging. Ronald and I were
going to try to make some progress on those during Newton.

http://specs.openstack.org/openstack/oslo-specs/specs/kilo/app-agnostic-logging-parameters.html
http://specs.openstack.org/openstack/oslo-specs/specs/liberty/oslo.log-user-identity-format-flexibility.html

Doug

> 
> Please let me know before I start going down this path.
> 
> -Sean
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara][release] missing build artifacts

2016-04-05 Thread michael mccune

On 04/05/2016 03:56 AM, Thierry Carrez wrote:

We seem to have a sahara-extra build alright:
http://tarballs.openstack.org/sahara-extra/

Please ignore the noise :)



ah, excellent!

thanks for the clarification Thierry


regards,
mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo.context and name fields

2016-04-05 Thread Sean Dague
I was trying to clean up the divergent logging definitions in devstack
as part of scrubbing out 'tenant' references -
https://review.openstack.org/#/c/301801/ and in doing so stumbled over
the fact that the extremely useful project_name and user_name fields are
not in base oslo.context.

https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L148-L159

These are always available to be set -
http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service

And they are extremely valuable when a human is looking at logs, as you
actually can remember names when looking at various services to cross
reference things. Nova has a custom context that sets these things -
https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/api/auth.py#L114-L115
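
For reference, the shape of what nova does is tiny. A rough sketch (not
nova's literal code; it assumes oslo.context's RequestContext accepts the
remaining kwargs):

    from oslo_context import context

    class NamedRequestContext(context.RequestContext):
        """RequestContext that also carries the human-readable names."""

        def __init__(self, project_name=None, user_name=None, **kwargs):
            super(NamedRequestContext, self).__init__(**kwargs)
            self.project_name = project_name
            self.user_name = user_name

        def to_dict(self):
            d = super(NamedRequestContext, self).to_dict()
            d['project_name'] = self.project_name
            d['user_name'] = self.user_name
            return d

So pulling these fields up into the base class looks mechanical.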

I would really like these to be available in all services using
oslo.context.

So the question is, were these not implemented on purpose? Is the
opposition to putting them into oslo.context?

Please let me know before I start going down this path.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][trove] Trove bug scrub

2016-04-05 Thread Flavio Percoco

On 05/04/16 15:15 +, Amrith Kumar wrote:

If you are subscribed to bug notifications from Trove, you’d have received a
lot of email from me over the past couple of days as I’ve gone through the LP
bug list for the project and attempted to do some spring cleaning.



Here’s (roughly) what I’ve tried to do:



- many bugs that have been inactive, and/or assigned to people who have not
been active in Trove for a while, have been updated and are now no longer
assigned to anyone

- many bugs related to reviews that were abandoned at some point and were
marked as “in-progress” at the time are now updated; some are marked
‘confirmed’, while others which no longer appear to apply are set to
‘incomplete’

- some bugs that were recently fixed, or are in the process of getting
merged, have been nominated for backports to mitaka and in some cases liberty



Awesome! I'll go through the backport reviews.


Over the next several days, I will continue this process and start assigning
meaningful milestones for the bugs that don’t have them.



There are now a number of bugs marked as ‘low-hanging-fruit’, and several
others that are unassigned so please feel free to pitch in and help with making
this list shorter.



++

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OVSDB native interface as default in gate jobs

2016-04-05 Thread Russell Bryant
On Tue, Apr 5, 2016 at 12:57 PM, Assaf Muller  wrote:

> On Tue, Apr 5, 2016 at 12:35 PM, Sean M. Collins 
> wrote:
> > Russell Bryant wrote:
> >> because they are related to two different command line utilities
> >> (ovs-vsctl vs ovs-ofctl) that speak two different protocols (OVSDB vs
> >> OpenFlow) that talk to two different daemons on the system
> (ovsdb-server vs
> >> ovs-vswitchd) ?
> >
> > True, they influence two different daemons - but it's really two options
> > that both have two settings:
> >
> > * "talk to it via the CLI tool"
> > * "talk to it via a native interface"
> >
> > How likely is it to have one talking via native interface and the other
> > via CLI?
>
> The ovsdb native interface is a couple of cycles more mature than the
> openflow one; I can see how some users would use one but not the other.


​and they use separate Python libraries, as well (ovs vs ryu).



>
> >
> > Also, if the native interface is faster, I think we should consider
> > making it the default.
>
> Definitely. I'd prefer to deprecate and delete the cli interfaces and
> keep only the native interfaces in the long run.
>

​+1.

-- 
Russell Bryant​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OVSDB native interface as default in gate jobs

2016-04-05 Thread Assaf Muller
On Tue, Apr 5, 2016 at 12:35 PM, Sean M. Collins  wrote:
> Russell Bryant wrote:
>> because they are related to two different command line utilities
>> (ovs-vsctl vs ovs-ofctl) that speak two different protocols (OVSDB vs
>> OpenFlow) that talk to two different daemons on the system (ovsdb-server vs
>> ovs-vswitchd) ?
>
> True, they influence two different daemons - but it's really two options
> that both have two settings:
>
> * "talk to it via the CLI tool"
> * "talk to it via a native interface"
>
> How likely is it to have one talking via native interface and the other
> via CLI?

The ovsdb native interface is a couple of cycles more mature than the
openflow one; I can see how some users would use one but not the other.

>
> Also, if the native interface is faster, I think we should consider
> making it the default.

Definitely. I'd prefer to deprecate and delete the cli interfaces and
keep only the native interfaces in the long run.

>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kloudbuster] authorization failed problem

2016-04-05 Thread Alec Hothan (ahothan)

Akshay,

Note that version 6 is now released, so please use the official image from the 
OpenStack App Catalog and update your code to the latest.
The doc has also been updated; you might want to have a look at the new arch 
section and gallery - those should help you with the questions you had below 
regarding the scale test staging and traffic flows.
http://kloudbuster.readthedocs.org/en/latest/index.html

Thanks

   Alec


From: Akshay Kumar Sanghai 
>
Date: Wednesday, March 30, 2016 at 11:11 AM
To: "Yichen Wang (yicwang)" >
Cc: Alec Hothan >, OpenStack List 
>
Subject: Re: [openstack-dev] [kloudbuster] authorization failed problem

Hi Yichen,
Thanks a lot . I will try with v6 and reach out to you for further help.

Regards,
Akshay

On Wed, Mar 30, 2016 at 11:35 PM, Yichen Wang (yicwang) 
> wrote:
Hi, Akshay,

From the log you attached, the good news is you got KloudBuster installed and 
running fine! The problem is the image you are using (v5) is outdated for the 
latest KloudBuster main code. ☺

Normally, every version of KloudBuster needs a certain version of the image to 
support the full functionality. When a new feature is brought in, we tag the 
main code with a new version and bump up the image version. For example, from 
v5 to v6 we added the capability to support storage testing on cinder volumes 
and ephemeral disks as well. We are right on track for publishing the v6 
image to the OpenStack App Catalog, which may take another day or two. This is 
why you are seeing the connection to the redis agent in KB-Proxy failing…

In order to unblock you, here is the RC image of v6 we are using right now, 
replace it in your cloud and KloudBuster should be good to go:
https://cisco.box.com/s/xelzx15swjra5qr0ieafyxnbyucnnsa0

Now back to your question.
- Does the server side mean the cloud generating the traffic, and the client 
side the cloud on which connections are established? Can you please 
elaborate on client, server and proxy?
[Yichen] It is the other way around. The server is running nginx, and the 
client is running the traffic generator (wrk2), just as we would normally 
understand the terms. Since there might be lots of servers and clients in the 
same cloud, KB-Proxy is an additional VM that runs on the client side to 
orchestrate all client VMs to generate traffic, collect the results from each 
VM, and send them back to the main KloudBuster for processing. KB-Proxy is 
where the redis server sits, and it acts as the proxy node to connect all 
internal VMs to the external network. This is why a floating IP is needed for 
the proxy node.

- While running kloudbuster, I saw "setting up redis connection". Can you 
please explain which connection is established and why? Is it KB-Proxy?
[Yichen] As I explained above, KB-Proxy is the bridge between the internal VMs 
and the external world (like the host you are running KloudBuster from). 
“Setting up redis connection” means KloudBuster is trying to connect to the 
redis server on the KB-Proxy node. You may see some retries because it does 
take some time for the VM to be up and running.

Thanks very much!

Regards,
Yichen

From: Akshay Kumar Sanghai 
[mailto:akshaykumarsang...@gmail.com]
Sent: Wednesday, March 30, 2016 7:31 AM
To: Alec Hothan (ahothan) >
Cc: OpenStack List 
>; 
Yichen Wang (yicwang) >

Subject: Re: [openstack-dev] [kloudbuster] authorization failed problem

Hi Alec,
Thanks for clarifying. I did not have the cinder service previously. It was not 
a complete setup. Now I have set up the cinder service.
Output of keystone service list.
[Inline image 1]
I installed the setup of openstack using the installation guide for ubuntu and 
for kloudbuster, its a pypi based installation. So, I am running kloudbuster 
using the CLI option.
kloudbuster --tested-rc keystone-openrc.sh --tested-passwd * --config kb.cfg

contents of kb.cfg:
image_name: 'kloudbuster'

I added the kloudbuster v5 version as glance image with name as kloudbuster.

I don't understand some basic things. If you can help, then that would be great.
- Does the server side mean the cloud generating the traffic, and the client 
side the cloud on which connections are established? Can you please 
elaborate on client, server and proxy?
- While running kloudbuster, I saw "setting up redis connection". Can you 
please explain which connection is established and why? Is it KB-Proxy?

Please find attached the run of kloudbuster as a file. I have still not 
succeeded in running the kloudbuster, 

Re: [openstack-dev] [bifrost][ironic][release] missing build artifacts

2016-04-05 Thread Jim Rollenhagen

> On Apr 2, 2016, at 05:46, Doug Hellmann  wrote:
> 
> Excerpts from Julia Kreger's message of 2016-04-01 18:21:24 -0400:
>>> On Fri, Apr 1, 2016 at 4:48 PM, Doug Hellmann  wrote:
>>> 
>>> Excerpts from Doug Hellmann's message of 2016-04-01 16:25:07 -0400:
 Ironic/Bifrost team,
>> 
>>> 
 It would be good to understand what your intent is for builds. Can
 you follow up here on this thread with some details?
>> 
>> First time partial user of the deliverables release tagging and didn't
>> realize that we were
>> missing the configuration necessary to push a tag up to tarballs.o.o. Would
>> landing
>> 300646 and then reverting it after 300474 has landed result in the
>> artifacts being built
>> and pushed to tarballs.o.o?
> 
> The tarball job is triggered when a new tag is added, so no. Now that
> the correct jobs are in place you could apply a new tag to produce an
> artifact, though. If you re-tag the same SHA as 1.0.0 with version
> 1.0.1, we will at least have that. We can decide what to do about the
> missing artifacts separately (either remove them from the record or
> disable the links to those versions).
> 
> You can either tag directly and then record the results in the releases
> repo, or you can submit a release request and  let the release team
> process it.
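> 
> The direct-tag route is roughly this (a sketch -- it assumes a signed tag
> on the same SHA that carried 1.0.0, pushed to the gerrit remote):
> 
>     git tag -s 1.0.1 $(git rev-list -n 1 1.0.0) -m "bifrost 1.0.1"
>     git push gerrit 1.0.1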

Just to follow up on this, ttx and I released 1.0.1 this morning through the 
usual release process. Thanks for pointing this out, Doug. :)

// jim 

> 
> Doug
> 
>> 
>>> 
>>> I forgot to mention that https://review.openstack.org/#/c/300474
>>> includes some changes to add the simple server publishing jobs. If you
>>> want to use those, please +1 the patch.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Proposing Anton Arefiev (aarefiev) for ironic-inspector-core

2016-04-05 Thread Jim Rollenhagen
+1 from me :)

// jim 

> On Apr 5, 2016, at 03:24, Dmitry Tantsur  wrote:
> 
> Hi!
> 
> I'd like to propose Anton to the ironic-inspector core reviewers team. His 
> stats are pretty nice [1], he's making meaningful reviews and he's pushing 
> important things (discovery, now tempest).
> 
> Members of the current ironic-inspector-team and everyone interested, please 
> respond with your +1/-1. A lazy consensus will be applied: if nobody objects 
> by the next Tuesday, the change will be in effect.
> 
> Thanks
> 
> [1] http://stackalytics.com/report/contribution/ironic-inspector/60
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OVSDB native interface as default in gate jobs

2016-04-05 Thread Sean M. Collins
Russell Bryant wrote:
> because they are related to two different command line utilities
> (ovs-vsctl vs ovs-ofctl) that speak two different protocols (OVSDB vs
> OpenFlow) that talk to two different daemons on the system (ovsdb-server vs
> ovs-vswitchd) ?

True, they influence two different daemons - but it's really two options
that both have two settings:

* "talk to it via the CLI tool"
* "talk to it via a native interface"

How likely is it to have one talking via native interface and the other
via CLI? 

Also, if the native interface is faster, I think we should consider
making it the default.
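
For context, these are the two settings in question as an operator would set
them in the OVS agent configuration (a sketch -- worth double-checking the
exact section and option names for your branch):

    [ovs]
    ovsdb_interface = native
    of_interface = native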

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Juan Antonio Osorio
I was planning to bring it up informally for TripleO. But it would be cool
to have a slot to talk about this.

BR
On 5 Apr 2016 18:51, "Fox, Kevin M"  wrote:

> Yeah, and they just deprecated vendor data plugins too, which eliminates
> my other workaround. :/
>
> We need to really discuss this problem at the summit and get a viable path
> forward. It's just getting worse. :/
>
> Thanks,
> Kevin
> --
> *From:* Juan Antonio Osorio [jaosor...@gmail.com]
> *Sent:* Tuesday, April 05, 2016 5:16 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [TripleO] FreeIPA integration
>
>
>
> On Tue, Apr 5, 2016 at 2:45 PM, Fox, Kevin M  wrote:
>
>> This sounds suspiciously like, "how do you get a secret to the instance
>> to get a secret from the secret store" issue :)
>>
> Yeah, sounds pretty familiar. We were using the nova hooks mechanism for
> this purpose, but it was deprecated recently. So bummer :/
>
>>
>> Nova instance user spec again?
>>
>> Thanks,
>> Kevin
>>
>> --
>> *From:* Juan Antonio Osorio
>> *Sent:* Tuesday, April 05, 2016 4:07:06 AM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [TripleO] FreeIPA integration
>>
>>
>>
>> On Tue, Apr 5, 2016 at 11:36 AM, Steven Hardy  wrote:
>>
>>> On Sat, Apr 02, 2016 at 05:28:57PM -0400, Adam Young wrote:
>>> > I finally have enough understanding of what is going on with Tripleo to
>>> > reasonably discuss how to implement solutions for some of the main
>>> security
>>> > needs of a deployment.
>>> >
>>> >
>>> > FreeIPA is an identity management solution that can provide support
>>> for:
>>> >
>>> > 1. TLS on all network communications:
>>> >A. HTTPS for web services
>>> >B. TLS for the message bus
>>> >C. TLS for communication with the Database.
>>> > 2. Identity for all Actors in the system:
>>> >   A.  API services
>>> >   B.  Message producers and consumers
>>> >   C.  Database consumers
>>> >   D.  Keystone service users
>>> > 3. Secure  DNS DNSSEC
>>> > 4. Federation Support
>>> > 5. SSH Access control to Hosts for both undercloud and overcloud
>>> > 6. SUDO management
>>> > 7. Single Sign On for Applications running in the overcloud.
>>> >
>>> >
>>> > The main pieces of FreeIPA are
>>> > 1. LDAP (the 389 Directory Server)
>>> > 2. Kerberos
>>> > 3. DNS (BIND)
>>> > 4. Certificate Authority (CA) server (Dogtag)
>>> > 5. WebUI/Web Service Management Interface (HTTPD)
>>> >
>>> > Of these, the CA is the most critical.  Without a centralized CA, we
>>> have no
>>> > reasonable way to do certificate management.
>>> >
>>> > Now, I know a lot of people have an allergic reaction to some, maybe
>>> all, of
>>> > these technologies. They should not be required to be running in a
>>> > development or testbed setup.  But we need to make it possible to
>>> secure an
>>> > end deployment, and FreeIPA was designed explicitly for these kinds of
>>> > distributed applications.  Here is what I would like to implement.
>>> >
>>> > Assuming that the Undercloud is installed on a physical machine, we
>>> want to
>>> > treat the FreeIPA server as a managed service of the undercloud that
>>> is then
>>> > consumed by the rest of the overcloud. Right now, there are conflicts
>>> for
>>> > some ports (8080 used by both swift and Dogtag) that prevent a drop-in
>>> run
>>> > of the server on the undercloud controller.  Even if we could
>>> deconflict,
>>> > there is a possible battle between Keystone and the FreeIPA server on
>>> the
>>> > undercloud.  So, while I would like to see the ability to run the
>>> FreeIPA
>>> > server on the Undercloud machine eventually, I think a more realistic
>>> > deployment is to build a separate virtual machine, parallel to the
>>> overcloud
>>> > controller, and install FreeIPA there. I've been able to modify Tripleo
>>> > Quickstart to provision this VM.
>>>
>>> IMO these services shouldn't be deployed on the undercloud - we only
>>> support a single node undercloud, and atm it's completely possible to
>>> take
>>> the undercloud down without any impact to your deployed cloud (other than
>>> losing the ability to manage it temporarily).
>>>
>> This is fair enough, however, for CI purposes, would it be acceptable to
>> deploy it there? Or where do you recommend we have it?
>>
>>>
>>> These auth pieces all appear critical to the operation of the deployed
>>> cloud, thus I'd assume you really want them independently managed
>>> (probably
>>> in an HA configuration on multiple nodes)?
>>>
>>> So, I'd say we support one of:
>>>
>>> 1. Document that FreeIPA must exist, installed by existing non-TripleO
>>> tooling
>>>
>>> 2. Support a heat template (in addition to overcloud.yaml) that can
>>> deploy
>>> FreeIPA.
>>>
>>> I feel like we should do (1), as it fits better with the TripleO vision
>>> (which is to deploy OpenStack), 

[openstack-dev] [TripleO] Austin summit sessions

2016-04-05 Thread Steven Hardy
All,

As discussed briefly today in our weekly meeting, I've started this
etherpad:

https://etherpad.openstack.org/p/newton-tripleo-sessions

We have two fishbowl sessions and 3 working sessions, plus a half-day
contributor meetup.

Please can everyone dump ideas of what topics you'd like to cover over the
next few days, then I'll attempt to refine the list into a shortlist of
sessions (I'll aim to do this Friday but please go ahead and brain-dump
ideas in there now so we've got time to discuss if needed).

If we can reach consensus on the final list on the ML that would be great;
if we don't achieve it, we can revisit in next week's TripleO meeting.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] post-copy live migration

2016-04-05 Thread Paul Carlton

On 05/04/16 16:33, Daniel P. Berrange wrote:

On Tue, Apr 05, 2016 at 05:17:41PM +0200, Luis Tomas wrote:

Hi,

We are working on the possibility of including post-copy live migration into
Nova (https://review.openstack.org/#/c/301509/)

At the libvirt level, post-copy live migration works as follows:
 - Start live migration with a post-copy enabler flag
(VIR_MIGRATE_POSTCOPY). Note this does not mean the migration is performed
in post-copy mode, just that you can switch it to post-copy at any given
time.
 - Change the migration from pre-copy to post-copy mode.
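
In libvirt-python terms that flow is roughly the following sketch (domain
name and URI are illustrative; migrateToURI() blocks, so the switch has to
be issued from another thread):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    # post-copy is *enabled*, but the migration still starts in pre-copy
    flags = (libvirt.VIR_MIGRATE_LIVE |
             libvirt.VIR_MIGRATE_PEER2PEER |
             libvirt.VIR_MIGRATE_POSTCOPY)
    dom.migrateToURI('qemu+tcp://dest-host/system', flags, None, 0)

    # ...from another thread, if pre-copy is not converging:
    dom.migrateStartPostCopy(0)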

However, we are not sure what's the most convenient way of providing this
functionality at Nova level.
The current specs, propose to include an optional flag at the live migration
API to include the VIR_MIGRATE_POSTCOPY flag when starting the live
migration. Then we propose a second API to actually switch the migration
from pre-copy to post-copy mode similarly to how it is done in LibVirt. This
is also similar to how the new "force-migrate" option works to ensure
migrations completion. In fact, this method could be an extension of the
force-migrate, by switching to postcopy if the migration was started with
the VIR_MIGRATE_POSTCOPY libvirt flag, or pausing it otherwise.

The cons of this approach are that we expose a too specific mechanism
through the API. To alleviate this, we could remove the "switch" API, and
automatize the switch based on data transferred, available bandwidth or
other related metrics. However we will still need the extension to the
live-migration API to include the proper libvirt postcopy flag.

No we absolutely don't want to expose that in the API as a concept, as it
is private technical implementation detail of the KVM migration code.


The other solution is to start all the migrations with the
VIR_MIGRATE_POSTCOPY mode, and therefore no new APIs would be needed. The
system could automatically detect the migration is taking too long (or is
dirtying memory faster than the sending rate), and automatically switch to
post-copy.

Yes this is what we should be doing as default behaviour with new enough
QEMU IMHO.


The cons of this are that including the VIR_MIGRATE_POSTCOPY flag has an
overhead, and it will not be desirable to include it for all migrations,
especially if they can be migrated nicely in pre-copy mode. In addition, if
the migration fails after the switch, the VM will be lost. Therefore,
admins may want to ensure that post-copy is not used for some specific VMs.

We shouldn't be trying to run before we can walk. Even if post-copy
hurts some guests, it'll still be a net win overall because it will
give a guarantee that migration can complete without needing to stop
guest CPUs entirely. All we need to start with is a nova.conf setting
to let the admin turn off use of post-copy for the host for cases where
we want to priortize performance over the ability to migrate successfully.

Any plan wrt changing migration behaviour on a per-VM basis needs to
consider a much broader set of features than just post-copy. For example,
compression, autoconverge and max-downtime settings all have an overhead
or impact on the guest too. We don't want to end up exposing API flags to
turn any of these on/off individually. So any solution to this will have
to look at a combination of usage context and some kind of SLA marker on
the guest. eg if the migration is in the context of host-evacuate which
absolutely must always complete in finite time, we should always use
post-copy. If the migration is in the context of load-balancing workloads
across hosts, then some aspect of guest SLA must inform whether Nova chooses
to use post-copy, or compression or auto-converge, etc.

Regards,
Daniel

We talked about the SLA issue at the mid cycle.  I seem to recall saying
I'd propose a spec for Newton so I should probably get to that.

The idea discussed then was to define instances as Cattle, Pets and
Pandas, where cattle are expendable, Pets are less so, and Pandas are
high-value instances.

I also believe we need to know how important the migration is. For
example, if the operator is trying to empty a node because they are
concerned it is likely to fail, then they set the migration as a
high-importance task.  On the other hand, if they are moving instances as
part of a monthly maintenance task they may be more relaxed about the
outcome.  If the migration is part of a de-fragmentation exercise the
operator might be fine with some instances not being able to be moved.

So my suggestion is we add a flag to the live-migration operation
to allow the operator to specify high, medium or low importance. When
the migration is in progress the compute manager can use this setting
in conjunction with the instance SLA to determine how aggressive it
should be in trying to get the migration completed.
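
(Sketch of the idea -- the flag syntax below is purely hypothetical:

    nova live-migration --importance high <server> <target-host>

with the default presumably being medium.)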

Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:

Re: [openstack-dev] [osc] Meeting time preferences for OSC team

2016-04-05 Thread Steve Martinelli

I am OK with any of the proposed times.

stevemar



From:   Richard Theis 
To: 
Date:   2016/04/05 11:46 AM
Subject:[openstack-dev] [osc] Meeting time preferences for OSC team



Here are my preferences:

1. even week: E.1 or E.4
2. odd week: O.3

Thanks,
Richard

> Dear All,
>
> This is regarding deciding meeting time for OSC team to facilitate
> appropriate time for APAC guys.
> We are planning to have meeting on alternate weeks for this purpose.
>
> Some of the suggestions are:
>
> *E.1* Every two weeks (on even weeks) on Thursday at 1300 UTC in
> #openstack-meeting (IRC webclient)
> *E.2* Every two weeks (on even weeks) on Tuesday at 1500 UTC in
> #openstack-meeting (IRC webclient)
> *E.3* Every two weeks (on even weeks) on Friday at 1500 UTC in
> #openstack-meeting-4 (IRC webclient)
> *E.4* Every two weeks (on even weeks) on Thursday at 1600 UTC in
> #openstack-meeting (IRC webclient)
>
> *O.1* Every two weeks (on odd weeks) on Tuesday at 1400 UTC in
> #openstack-meeting-4 (IRC webclient)
> *O.2* Every two weeks (on odd weeks) on Wednesday at 1400 UTC in
> #openstack-meeting-3 (IRC webclient)
> *O.3* Every two weeks (on odd/even weeks) on Thursday at 1900 UTC in
> #openstack-meeting (IRC webclient)  -- our regular meeting time
>
> Kindly share your preferred time (select only one for each even/odd week
> and share your response in the format below).
>
> 1. even week: *E.1/E.2/E.3/E.4*/  ?
> 2. odd week:  *O.1/O.2/O.3*  ?
>
> (response sample):
> 1. even week: *E.2*
> 2. odd week: *O.3*
>
> Best Regards,
> Sheel
Rana__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api] advanced search criteria

2016-04-05 Thread Kyle Mestery
On Mon, Apr 4, 2016 at 7:36 PM, Armando M.  wrote:
>
>
> On 4 April 2016 at 17:08, Jay Pipes  wrote:
>>
>> On 04/04/2016 06:57 PM, Ihar Hrachyshka wrote:
>>>
>>> - why do we even need to control those features with configuration
>>> options? can we deprecate and remove them?
>>
>>
>> This would be my choice. Configuration options that make the Neutron API
>> behave differently from one deployment to another should be put out to
>> pasture.
>
>
> So long as we figure out a way to provide support these capabilities
> natively, I agree that we should stop using config options to alter API
> behavior. This is undiscoverable by the end user, who is left with the only
> choice of poking at the API to see how it responds.
>
Which is at best an awful user experience.

So +1 to removing them.

Kyle

> Cheers,
> Armando
>
>>
>>
>> Best,
>> -jay
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Fox, Kevin M
Yeah, and they just deprecated vendor data plugins too, which eliminates my 
other workaround. :/

We need to really discuss this problem at the summit and get a viable path 
forward. It's just getting worse. :/

Thanks,
Kevin

From: Juan Antonio Osorio [jaosor...@gmail.com]
Sent: Tuesday, April 05, 2016 5:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] FreeIPA integration



On Tue, Apr 5, 2016 at 2:45 PM, Fox, Kevin M 
> wrote:
This sounds suspiciously like, "how do you get a secret to the instance to get 
a secret from the secret store" issue :)
Yeah, sounds pretty familiar. We were using the nova hooks mechanism for this 
purpose, but it was deprecated recently. So bummer :/

Nova instance user spec again?

Thanks,
Kevin


From: Juan Antonio Osorio
Sent: Tuesday, April 05, 2016 4:07:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] FreeIPA integration



On Tue, Apr 5, 2016 at 11:36 AM, Steven Hardy 
> wrote:
On Sat, Apr 02, 2016 at 05:28:57PM -0400, Adam Young wrote:
> I finally have enough understanding of what is going on with Tripleo to
> reasonably discuss how to implement solutions for some of the main security
> needs of a deployment.
>
>
> FreeIPA is an identity management solution that can provide support for:
>
> 1. TLS on all network communications:
>A. HTTPS for web services
>B. TLS for the message bus
>C. TLS for communication with the Database.
> 2. Identity for all Actors in the system:
>   A.  API services
>   B.  Message producers and consumers
>   C.  Database consumers
>   D.  Keystone service users
> 3. Secure  DNS DNSSEC
> 4. Federation Support
> 5. SSH Access control to Hosts for both undercloud and overcloud
> 6. SUDO management
> 7. Single Sign On for Applications running in the overcloud.
>
>
> The main pieces of FreeIPA are
> 1. LDAP (the 389 Directory Server)
> 2. Kerberos
> 3. DNS (BIND)
> 4. Certificate Authority (CA) server (Dogtag)
> 5. WebUI/Web Service Management Interface (HTTPD)
>
> Of these, the CA is the most critical.  Without a centralized CA, we have no
> reasonable way to do certificate management.
>
> Now, I know a lot of people have an allergic reaction to some, maybe all, of
> these technologies. They should not be required to be running in a
> development or testbed setup.  But we need to make it possible to secure an
> end deployment, and FreeIPA was designed explicitly for these kinds of
> distributed applications.  Here is what I would like to implement.
>
> Assuming that the Undercloud is installed on a physical machine, we want to
> treat the FreeIPA server as a managed service of the undercloud that is then
> consumed by the rest of the overcloud. Right now, there are conflicts for
> some ports (8080 used by both swift and Dogtag) that prevent a drop-in run
> of the server on the undercloud controller.  Even if we could deconflict,
> there is a possible battle between Keystone and the FreeIPA server on the
> undercloud.  So, while I would like to see the ability to run the FreeIPA
> server on the Undercloud machine eventually, I think a more realistic
> deployment is to build a separate virtual machine, parallel to the overcloud
> controller, and install FreeIPA there. I've been able to modify Tripleo
> Quickstart to provision this VM.

IMO these services shouldn't be deployed on the undercloud - we only
support a single node undercloud, and atm it's completely possible to take
the undercloud down without any impact to your deployed cloud (other than
losing the ability to manage it temporarily).
This is fair enough, however, for CI purposes, would it be acceptable to deploy 
it there? Or where do you recommend we have it?

These auth pieces all appear critical to the operation of the deployed
cloud, thus I'd assume you really want them independently managed (probably
in an HA configuration on multiple nodes)?

So, I'd say we support one of:

1. Document that FreeIPA must exist, installed by existing non-TripleO
tooling

2. Support a heat template (in addition to overcloud.yaml) that can deploy
FreeIPA.

I feel like we should do (1), as it fits better with the TripleO vision
(which is to deploy OpenStack), and it removes the need for us to maintain
a bunch of non-openstack stuff.

The path I'm imagining is we have a documented integration with FreeIPA,
and perhaps some third-party CI, but we don't support deploying these
pieces directly via TripleO.

> I was also able to run FreeIPA in a container on the undercloud machine, but
> this is, I think, not how we want to migrate to a container based strategy.
> It should be more deliberate.
>
>
> While the ideal setup would be to install the IPA layer first, and create
> service users in there, this produces a different 

[openstack-dev] [osc] Meeting time preferences for OSC team

2016-04-05 Thread Richard Theis
Here are my preferences:

1. even week: E.1 or E.4
2. odd week: O.3

Thanks,
Richard

> Dear All,
> 
> This is regarding deciding a meeting time for the OSC team to facilitate
> an appropriate time for APAC guys.
> We are planning to have meeting on alternate weeks for this purpose.
> 
> Some of the suggestions are:
> 
> *E.1* Every two weeks (on even weeks) on Thursday at 1300 UTC in
> #openstack-meeting (IRC webclient)
> *E.2* Every two weeks (on even weeks) on Tuesday at 1500 UTC in
> #openstack-meeting (IRC webclient)
> *E.3* Every two weeks (on even weeks) on Friday at 1500 UTC in
> #openstack-meeting-4 (IRC webclient)
> *E.4* Every two weeks (on even weeks) on Thursday at 1600 UTC in
> #openstack-meeting (IRC webclient)
> 
> *O.1* Every two weeks (on odd weeks) on Tuesday at 1400 UTC in
> #openstack-meeting-4 (IRC webclient)
> *O.2* Every two weeks (on odd weeks) on Wednesday at 1400 UTC in
> #openstack-meeting-3 (IRC webclient)
> *O.3* Every two weeks (on odd/even weeks) on Thursday at 1900 UTC in
> #openstack-meeting (IRC webclient)  -- our regular meeting time
> 
> Kindly share your preferred time (select only one for each even/odd week
> and share your response in the below format).
> 
> 1. even week: *E.1/E.2/E.3/E.4*/  ?
> 2. odd week:  *O.1/O.2/O.3*  ?
> 
> (response sample):
> 1. even week: *E.2*
> 2. odd week: *O.3*
> 
> Best Regards,
> Sheel Rana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] post-copy live migration

2016-04-05 Thread Daniel P. Berrange
On Tue, Apr 05, 2016 at 05:17:41PM +0200, Luis Tomas wrote:
> Hi,
> 
> We are working on the possibility of including post-copy live migration into
> Nova (https://review.openstack.org/#/c/301509/)
> 
> At libvirt level, post-copy live migration works as follows:
> - Start live migration with a post-copy enabler flag
> (VIR_MIGRATE_POSTCOPY). Note this does not mean the migration is performed
> in post-copy mode, just that you can switch it to post-copy at any given
> time.
> - Change the migration from pre-copy to post-copy mode.
> 
> However, we are not sure what's the most convenient way of providing this
> functionality at Nova level.
> The current spec proposes to include an optional flag in the live migration
> API to include the VIR_MIGRATE_POSTCOPY flag when starting the live
> migration. Then we propose a second API to actually switch the migration
> from pre-copy to post-copy mode similarly to how it is done in LibVirt. This
> is also similar to how the new "force-migrate" option works to ensure
> migrations completion. In fact, this method could be an extension of the
> force-migrate, by switching to postcopy if the migration was started with
> the VIR_MIGRATE_POSTCOPY libvirt flag, or pause it otherwise.
> 
> The cons of this approach are that we expose an overly specific mechanism
> through the API. To alleviate this, we could remove the "switch" API, and
> automate the switch based on data transferred, available bandwidth or
> other related metrics. However we will still need the extension to the
> live-migration API to include the proper libvirt postcopy flag.

No, we absolutely don't want to expose that in the API as a concept, as it
is a private technical implementation detail of the KVM migration code.

> The other solution is to start all the migrations with the
> VIR_MIGRATE_POSTCOPY mode, and therefore no new APIs would be needed. The
> system could automatically detect the migration is taking too long (or is
> dirtying memory faster than the sending rate), and automatically switch to
> post-copy.

Yes this is what we should be doing as default behaviour with new enough
QEMU IMHO.

> The cons of this are that including the VIR_MIGRATE_POSTCOPY flag has an
> overhead, and it will not be desirable to include it for all migrations,
> especially if they can be nicely migrated in pre-copy mode. In addition, if
> the migration fails after the switch, the VM will be lost. Therefore,
> admins may want to ensure that post-copy is not used for some specific VMs.

We shouldn't be trying to run before we can walk. Even if post-copy
hurts some guests, it'll still be a net win overall because it will
give a guarantee that migration can complete without needing to stop
guest CPUs entirely. All we need to start with is a nova.conf setting
to let admins turn off use of post-copy for the host for cases where
we want to prioritize performance over the ability to migrate successfully.
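
For illustration, the kind of nova.conf knob described above might look
like this with oslo.config (the option name below is hypothetical, not
an agreed nova setting):

    from oslo_config import cfg

    # Hypothetical option letting an admin opt a host out of post-copy
    # when raw guest performance matters more than a guarantee that
    # migrations complete.
    live_migration_opts = [
        cfg.BoolOpt('live_migration_permit_post_copy',
                    default=True,
                    help='Allow switching a running live migration to '
                         'post-copy mode if libvirt/QEMU on this host '
                         'support it.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(live_migration_opts, group='libvirt')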

Any plan wrt changing migration behaviour on a per-VM basis needs to
consider a much broader set of features than just post-copy. For example,
compression, autoconverge and max-downtime settings all have an overhead
or impact on the guest too. We don't want to end up exposing API flags to
turn any of these on/off individually. So any solution to this will have
to look at a combination of usage context and some kind of SLA marker on
the guest. eg if the migration is in the context of host-evacuate which
absolutely must always complete in finite time, we should always use
post-copy. If the migration is in the context of load-balancing workloads
across hosts, then some aspect of guest SLA must inform whether Nova chooses
to use post-copy, or compression or auto-converge, etc.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] post-copy live migration

2016-04-05 Thread Luis Tomas

Hi,

We are working on the possibility of including post-copy live migration 
into Nova (https://review.openstack.org/#/c/301509/)


At libvirt level, post-copy live migration works as follows:
- Start live migration with a post-copy enabler flag 
(VIR_MIGRATE_POSTCOPY). Note this does not mean the migration is 
performed in post-copy mode, just that you can switch it to post-copy at 
any given time.

- Change the migration from pre-copy to post-copy mode.
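
For illustration, a minimal sketch of this two-step flow with the
libvirt python bindings (assuming libvirt >= 1.3.3 for post-copy; the
domain name and destination URI are placeholders):

    import threading

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    flags = (libvirt.VIR_MIGRATE_LIVE |
             libvirt.VIR_MIGRATE_PEER2PEER |
             libvirt.VIR_MIGRATE_POSTCOPY)

    # migrateToURI3() blocks until the migration finishes, so drive it
    # from a worker thread; despite the flag, the migration starts in
    # ordinary pre-copy mode.
    worker = threading.Thread(
        target=dom.migrateToURI3,
        args=('qemu+tcp://dest-host/system', {}, flags))
    worker.start()

    # ... later, while the migration is still running (e.g. because
    # memory is dirtied faster than it is transferred), switch it to
    # post-copy mode:
    dom.migrateStartPostCopy()
    worker.join()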

However, we are not sure what's the most convenient way of providing 
this functionality at Nova level.
The current spec proposes to include an optional flag in the live 
migration API to include the VIR_MIGRATE_POSTCOPY flag when starting the 
live migration. Then we propose a second API to actually switch the 
migration from pre-copy to post-copy mode similarly to how it is done in 
LibVirt. This is also similar to how the new "force-migrate" option 
works to ensure migrations completion. In fact, this method could be an 
extension of the force-migrate, by switching to postcopy if the 
migration was started with the VIR_MIGRATE_POSTCOPY libvirt flag, or 
pause it otherwise.


The cons of this approach are that we expose an overly specific mechanism 
through the API. To alleviate this, we could remove the "switch" API, 
and automate the switch based on data transferred, available bandwidth 
or other related metrics. However we will still need the extension to 
the live-migration API to include the proper libvirt postcopy flag.


The other solution is to start all the migrations with the 
VIR_MIGRATE_POSTCOPY mode, and therefore no new APIs would be needed. 
The system could automatically detect the migration is taking too long 
(or is dirtying memory faster than the sending rate), and automatically 
switch to post-copy.


The cons of this are that including the VIR_MIGRATE_POSTCOPY flag has an 
overhead, and it will not be desirable to include it for all migrations, 
especially if they can be nicely migrated in pre-copy mode. In 
addition, if the migration fails after the switch, the VM will be 
lost. Therefore, admins may want to ensure that post-copy is not used 
for some specific VMs.


I would like to know your opinion on this. Which option would be 
preferred? Is there a different option we are missing?


Thanks!

Best regards,
Luis

--
---
Dr. Luis Tomás
Postdoctoral Researcher
Department of Computing Science
Umeå University
l...@cs.umu.se
www.cloudresearch.se
www8.cs.umu.se/~luis



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][trove] Trove bug scrub

2016-04-05 Thread Amrith Kumar
If you are subscribed to bug notifications from Trove, you'd have received a 
lot of email from me over the past couple of days as I've gone through the LP 
bug list for the project and attempted to do some spring cleaning.

Here's (roughly) what I've tried to do:


- many bugs that have been inactive, and/or assigned to people who have not 
been active in Trove for a while, have been updated and are now no longer 
assigned to anyone

- many bugs related to reviews that were abandoned at some point and marked 
as "in-progress" at the time are now updated; some are marked 'confirmed', 
others which appear to no longer be the case are set to 'incomplete'

- some bugs that were recently fixed, or are in the process of getting 
merged, have been nominated for backports to mitaka and in some cases liberty

Over the next several days, I will continue this process and start assigning 
meaningful milestones for the bugs that don't have them.

There are now a number of bugs marked as 'low-hanging-fruit', and several 
others that are unassigned, so please feel free to pitch in and help make 
this list shorter.

Thanks,

-amrith
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #77

2016-04-05 Thread Denis Egorenko
We did our meeting, and you can read the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-04-05-15.00.html

Thanks!

2016-04-04 15:29 GMT+03:00 Emilien Macchi <emil...@redhat.com>:

> Hi,
>
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting4.
>
> https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack
>
> As usual, free free to bring topics in this etherpad:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160405
>
> We'll also have open discussion for bugs & reviews, so anyone is welcome
> to join.
>
> Note: I'll be away this day, Denis will chair the meeting.
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards,
Egorenko Denis,
Senior Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] branching Mitaka April 6, 2016

2016-04-05 Thread Igor Belikov
Hi Sergey,

According to Matthew Mosesohn the plan is to delay branching of detach-* 
plugins.
The only plugin scheduled for branching tomorrow seems to be 
fuel-plugin-murano.
--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com

> On 04 Apr 2016, at 15:11, Sergii Golovatiuk  wrote:
> 
> What about plugins? 
> 
> For instance: fuel-plugin-detach-keystone
> 
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
> 
> On Mon, Apr 4, 2016 at 1:44 PM, Igor Belikov wrote:
> Hi,
> 
> Fuel SCF will be taking place on April 6th, this means that we’re going to 
> create stable/mitaka branches for a number of Fuel repos.
> 
> PLEASE, take a look at the following list and respond if you think your 
> project should be included or excluded from the list:
> * fuel-agent
> * fuel-astute
> * fuel-library
> * fuel-main
> * fuel-menu
> * fuel-mirror
> * fuel-nailgun-agent
> * fuel-noop-fixtures
> * fuel-octane
> * fuel-ostf
> * fuel-plugin-murano
> * fuel-qa
> * fuel-ui
> * fuel-upgrade
> * fuel-virtualbox
> * fuel-web
> * network-checker
> * python-fuelclient
> * shotgun
> 
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [Heat] Re-evaluate conditions specification

2016-04-05 Thread Zane Bitter

On 01/04/16 12:31, Steven Hardy wrote:

On Fri, Apr 01, 2016 at 04:15:30PM +0200, Thomas Herve wrote:

On Fri, Apr 1, 2016 at 3:21 AM, Zane Bitter  wrote:

On 31/03/16 18:10, Zane Bitter wrote:



I'm in favour of some sort of variable-based implementation for a few
reasons. One is that (5) seems to come up fairly regularly in a complex
deployment like TripleO. Another is that Fn::If feels awkward compared
to get_variable.



I actually have to revise this last part after reviewing the patches.
get_variable can't replace Fn::If, because we'd still need to handle stuff
of the form:

 some_property: {if: [{get_variable: some_var},
                      {get_resource: res1},
                      {get_resource: res2}]}

where the alternatives can't come from a variable because they contain
resource references and we have said we'd constrain variables to be static.

In fact the intrinsic functions that could be allowed in the first argument
to the {if: } function would have to be constrained in the same way as the
condition field in the resource, because we should only validate and obtain
dependencies from _one_ of the alternates, so we need to be able to
determine which one statically and not have to wait until the actual value
is resolved. This is possibly the strongest argument for keeping on the cfn
implementation course.


We talked about another possibility on IRC: instead of having a new
section, create a new resource, OS::Heat::Value, which can hold some
data. It would look like this:

resources:
  is_prod:
    type: OS::Heat::Value
    properties:
      value: {equals: [{get_param: env}, prod]}

  my_resource:
    condition: {get_attr: [is_prod, value]}


Another alternative (which maybe you discussed, sorry I missed the IRC
chat) would be just to use parameters, as that's already conceptually where
we obtain values that are input to resources.

E.g:

parameters:
  env:
    type: string
    default: prod

  is_prod:
    type: boolean
    default: {equals: [{get_param: env}, prod]}

From an interface standpoint this seems much cleaner and more intuitive than
the other solutions discussed IMHO, but I suspect it's potentially harder to
implement.


Yeah, the problems to solve here would be:

1) Need to define some subset of functions that are valid in templates 
(for HOT, probably all except get_resource and get_attr).


  - Easy enough, just add a separate method called param_functions or 
something to the Template class. We'd have to do the same for the 
"conditions" section anyway.


2) Need to prevent circular references between parameters.

  - Hard. Since currently no templates support parsing parameters for 
functions, we can add a separate implementation without breaking the 
third-party Template API, so that helps. Lots of edge cases though.


3) Need to make clear to users that only a parameter is a valid input to 
a resource condition.


  - Could be done with something like:

 my_resource:
   condition_parameter: is_prod

instead of:

 my_resource:
   condition: {get_param: is_prod}

and something similar for {if: [is_prod, <value_if_true>, <value_if_false>]}

4) Need to implement the conditions section in CloudFormation templates.

  - Could have some sort of hacky pseudo-parameter thing.


Your original example gets much cleaner too if we allow all intrinsic functions
(except get_attr) in the scope of parameters:

parameters:
  host:
    type: string
  port:
    type: string
  endpoint:
    type: string
    default:
      str_replace:
        template: http://HOST:PORT/
        params:
          HOST: {get_param: host}
          PORT: {get_param: port}


So the alternative is:

  parameters:
host:
  type: string
port:
  type: string

  resources:
endpoint:
  type: OS::Heat::Value
  properties:
type: string
value:
  str_replace:
template:
  http://HOST:PORT/
params:
  HOST: {get_param: host}
  PORT: {get_param: port}

which is known to be trivial to implement, and has the advantage that it 
can also combine in data that is only available at runtime (e.g. it's 
easy to imagine that HOST might come from within the template, but that 
you'd still need to use the endpoint URL in enough places that it's not 
worth copying and pasting the whole str_replace function). Also, every 
template format would get it for free, which is nice.


The only downside is that you can't replace the whole value definition 
with a parameter value passed in from outside. Given that it would have 
to be a complete override including the substitute values (i.e. we can't 
parse functions from template _values_, only defaults), I'm not sure 
that is a big loss.


In my view it is almost a toss-up, but I am marginally in favour of a 
conditions section (per the current proposed implementation) plus an 
OS::Heat::Value resource over computable parameter 

Re: [openstack-dev] [congress][release] missing build artifacts

2016-04-05 Thread Tim Hinrichs
Looks like the congress job is working now.  We have a tarball for our
latest RC:

http://tarballs.openstack.org/congress/congress-3.0.0.0rc3.tar.gz

Thierry added the links back for the tarball.

https://github.com/openstack/releases/commit/beee35379b3b52ed7d444d93d7afd8b6603f69b6

So it looks like everything is back on track for the Congress Mitaka
release.  Let me know if there's anything else I need to do.

Thanks for all the help release team!
Tim


On Sat, Apr 2, 2016 at 5:39 AM Doug Hellmann  wrote:

> Excerpts from Tim Hinrichs's message of 2016-04-01 22:46:59 +:
> > Hi Doug,
> >
> > Thanks for letting us know.  Here's what we're intending...
> > - We'd like to release the server code for Mitaka.
> > - We release the client to Pypi, so that's already taken care of.
> > - We haven't moved our docs off of readthedocs yet, so we're taking care
> of
> > that as well.
> >
> > I gave a +1 to your patch for adding openstack-server-release-jobs to
> zuul.
> > I also pushed a congress-only patch for the same, thinking that's
> probably
> > what you actually wanted us to do.
> > https://review.openstack.org/#/c/300682/
>
> Now that the jobs are in place, we should tag new release candidates
> using new versions and the same SHAs as the last candidate to get the
> artifacts built properly. You can submit the request for that using the
> releases repo, or you can tag yourself and submit the info after the
> fact. We will decide what to do about the old artifacts you have on
> launchpad separately.
>
> >
> > I gave a -1 to your patch that removes all the Congress deliverables from
> > openstack/releases, thinking that we can have this sorted out quickly.
> The
> > job we're missing just produces a tarball and uploads it, right?  We've
> > been manually doing releases up to this point, which is why we didn't
> have
> > the job in place.
> > https://review.openstack.org/300644
> >
> > I didn't touch your change on the artifact-link-mode, since it seems
> like a
> > short-term solution that may go in before we get everything sorted.
> > https://review.openstack.org/300457
>
> Yes, we'll merge that as a short-term fix until we have the rest of it
> worked out.
>
> Doug
>
> >
> > Tim
> >
> > On Fri, Apr 1, 2016 at 1:23 PM Doug Hellmann 
> wrote:
> >
> > > Congress team,
> > >
> > > We noticed in our audit of the links on
> > > http://releases.openstack.org/mitaka/index.html that the links to
> > > the build artifacts for congress point to missing files. The
> > > repository doesn't seem to have any real build jobs configured in
> > > openstack-infra/project-config/zuul/layout.yaml, so it's not clear
> > > how tagging is producing a release for you.
> > >
> > > For now, we are disabling links to the artifacts for those repos
> > > via https://review.openstack.org/300457 but we're also planning to
> > > remove them from the official Mitaka page since there don't
> > > appear to be any actual related deliverables
> > > (https://review.openstack.org/300644).
> > >
> > > It would be good to understand what your intent is for builds. Can
> > > you follow up here on this thread with some details?
> > >
> > > Thanks,
> > > Doug
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] Implementing tempest test for Keystone federation functional tests

2016-04-05 Thread Minying Lu
Thank you for your awesome feedback!


> Another option is to add those tests to keystone itself (if you are not
>> including tests that triggers other components APIs). See
>> https://blueprints.launchpad.net/keystone/+spec/keystone-tempest-plugin-tests
>>
>>
>
>
knikolla and I are looking into the keystone-tempest-plugin too thanks
Rodrigo!


> Again though, the problem is not where the tests live but where we run
> them. To practically run these tests we need to either add K2K testing
> support to devstack (not sure this is appropriate) or come up with a new
> test environment that deploys 2 keystones and federation support that we
> can CI against in the gate. This is doable but i think something we need
> support with from infra before worrying about tempest.
>
>
We have engineers in the team that are communicating with the infra team on
how to set up an environment that runs the federation test. We're thinking
about creating a 2 devstack setup with the keystones configured as Identity
provider and service provider with federation support. Meanwhile I can just
work on writing the test in a pre-configured environment that's the same as
the 2 devstack setup.
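
To make that concrete, below is a rough sketch of the assertion-exchange
step using plain HTTP (shown with python-requests for brevity; in
tempest proper this would go through tempest.lib's rest_client). The
/v3/auth/OS-FEDERATION/saml2/ecp route is the documented keystone API,
but the URLs, service provider id and token value are placeholders:

    import json

    import requests

    IDP_URL = 'http://idp.example.com:5000/v3'
    SP_ID = 'remote_sp'  # service provider registered in the local keystone
    SCOPED_TOKEN = '<token from a normal v3 authentication>'

    body = {
        'auth': {
            'identity': {
                'methods': ['token'],
                'token': {'id': SCOPED_TOKEN},
            },
            'scope': {'service_provider': {'id': SP_ID}},
        }
    }

    # The local keystone returns a SAML assertion for the target service
    # provider, wrapped in an ECP envelope.
    resp = requests.post(IDP_URL + '/auth/OS-FEDERATION/saml2/ecp',
                         data=json.dumps(body),
                         headers={'Content-Type': 'application/json'})
    ecp_envelope = resp.text

    # The envelope is then POSTed to the remote Shibboleth ECP endpoint,
    # and the resulting session is used to fetch an unscoped federated
    # token from the remote keystone (omitted here for brevity).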


>
>
>>
>>> The fly in the ointment for this case will be CI though. For tests to
>>> live in
>>> tempest they need to be verified by a CI system before they can land. So
>>> to
>>> land the additional testing in tempest you'll have to also ensure there
>>> is a
>>> CI job setup in infra to configure the necessary environment. While I
>>> think
>>> this is a good thing to have in the long run, it's not necessarily a
>>> small
>>> undertaking.
>>>
>>
>>> -Matt Treinish
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Rodrigo Duarte Sousa
>> Senior Quality Engineer @ Red Hat
>> MSc in Computer Science
>> http://rodrigods.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][openstackclient] Required name option for volumes, snapshots and backups

2016-04-05 Thread Dean Troyer
On Tuesday, April 5, 2016, Duncan Thomas  wrote:

> What about commands that become ambiguous in the future? I doubt there are
> many operations or objects that are unique to Cinder - backup, snapshot,
> transfer, group, type - these are all very much generic, and even if they
> aren't ambiguous now, they might well become so in future
>

This is one reason I phrase this as "name the resource" and not "add
API name"..  The text you use may not always be the API name.  I do not
have an example of this for Volume, but in Compute "flavor" is a good
example, as it would (will someday soon) be named "server flavor".  That is
also an example of it taking >3 years for a resource to need qualifying.

dt

>

-- 
-- 
Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Juan Antonio Osorio
Having an extra node for FreeIPA spun up by heat works for me. And it's
not a hard requirement that we have to wire this into the TripleO CI. But
the most sustainable approach to having TLS everywhere (at least for the
admin and internal endpoints of Openstack, the message broker server nodes
and the database) is using FreeIPA as a CA. So if we advertise (at some
point) that TripleO will support such a feature, then it's probably a good
idea to have it in the official CI.

BR

On Tue, Apr 5, 2016 at 4:01 PM, Steven Hardy  wrote:

> On Tue, Apr 05, 2016 at 02:07:06PM +0300, Juan Antonio Osorio wrote:
> > On Tue, Apr 5, 2016 at 11:36 AM, Steven Hardy wrote:
> >
> > > On Sat, Apr 02, 2016 at 05:28:57PM -0400, Adam Young wrote:
> > > > I finally have enough understanding of what is going on with Tripleo
> > > > to reasonably discuss how to implement solutions for some of the main
> > > > security needs of a deployment.
> > > >
> > > > FreeIPA is an identity management solution that can provide support
> > > > for:
> > > >
> > > > 1. TLS on all network communications:
> > > >    A. HTTPS for web services
> > > >    B. TLS for the message bus
> > > >    C. TLS for communication with the Database.
> > > > 2. Identity for all Actors in the system:
> > > >    A. API services
> > > >    B. Message producers and consumers
> > > >    C. Database consumers
> > > >    D. Keystone service users
> > > > 3. Secure DNS (DNSSEC)
> > > > 4. Federation Support
> > > > 5. SSH Access control to Hosts for both undercloud and overcloud
> > > > 6. SUDO management
> > > > 7. Single Sign On for Applications running in the overcloud.
> > > >
> > > > The main pieces of FreeIPA are
> > > > 1. LDAP (the 389 Directory Server)
> > > > 2. Kerberos
> > > > 3. DNS (BIND)
> > > > 4. Certificate Authority (CA) server (Dogtag)
> > > > 5. WebUI/Web Service Management Interface (HTTPD)
> > > >
> > > > Of these, the CA is the most critical. Without a centralized CA, we
> > > > have no reasonable way to do certificate management.
> > > >
> > > > Now, I know a lot of people have an allergic reaction to some, maybe
> > > > all, of these technologies. They should not be required to be running
> > > > in a development or testbed setup. But we need to make it possible to
> > > > secure an end deployment, and FreeIPA was designed explicitly for
> > > > these kinds of distributed applications. Here is what I would like to
> > > > implement.
> > > >
> > > > Assuming that the Undercloud is installed on a physical machine, we
> > > > want to treat the FreeIPA server as a managed service of the
> > > > undercloud that is then consumed by the rest of the overcloud. Right
> > > > now, there are conflicts for some ports (8080 used by both swift and
> > > > Dogtag) that prevent a drop-in run of the server on the undercloud
> > > > controller. Even if we could deconflict, there is a possible battle
> > > > between Keystone and the FreeIPA server on the undercloud. So, while
> > > > I would like to see the ability to run the FreeIPA server on the
> > > > Undercloud machine eventually, I think a more realistic deployment is
> > > > to build a separate virtual machine, parallel to the overcloud
> > > > controller, and install FreeIPA there. I've been able to modify
> > > > Tripleo Quickstart to provision this VM.
> > >
> > > IMO these services shouldn't be deployed on the undercloud - we only
> > > support a single node undercloud, and atm it's completely possible to
> > > take the undercloud down without any impact to your deployed cloud
> > > (other than losing the ability to manage it temporarily).
> >
> > This is fair enough, however, for CI purposes, would it be acceptable to
> > deploy it there? Or where do you recommend we have it?
>
> We're already well beyond capacity in CI, so to me this seems like
> something that's probably appropriate for a third-party CI job?
>
> To me it just doesn't make sense to integrate these pieces on the
> undercloud, and integrating it there just because we need it available for
> CI purposes seems like a poor argument, because we're not testing a
> representative/realistic environment.
>
> If we have to wire this in to TripleO CI I'd favor spinning up an extra
> node with the FreeIPA pieces in, e.g a separate Heat stack (so, e.g the
> nonha job takes 3 nodes, a "freeipa" stack of 1 and an overcloud of 2).
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [tempest] Implementing tempest test for Keystone federation functional tests

2016-04-05 Thread Minying Lu
Thank you all for your awesome feedback!

On Sun, Apr 3, 2016 at 9:07 PM, Jamie Lennox  wrote:

>
>
> On 2 April 2016 at 09:21, Rodrigo Duarte  wrote:
>
>>
>>
>> On Thu, Mar 31, 2016 at 1:11 PM, Matthew Treinish 
>> wrote:
>>
>>> On Thu, Mar 31, 2016 at 11:38:55AM -0400, Minying Lu wrote:
>>> > Hi all,
>>> >
>>> > I'm working on resource federation at the Massachusetts Open Cloud. We
>>> want
>>> > to implement functional test on the k2k federation, which requires
>>> > authentication with both a local keystone and a remote keystone (in a
>>> > different cloud installation). It also requires a K2K/SAML assertion
>>> > exchange with the local and remote keystones. These functions are not
>>> > implemented in the current tempest.lib.service library, so I'm adding
>>> code
>>> > to the service library.
>>> >
>>> > My question is, is it possible to adapt keystoneauth python clients?
>>> Or do
>>> > you prefer implementing it with http requests.
>>>
>>> So tempest's clients have to be completely independent. That's part of
>>> tempest's
>>> design points about testing APIs, not client implementations. If you
>>> need to add
>>> additional functionality to the tempest clients that's fine, but pulling
>>> in
>>> keystoneauth isn't really an option.
>>>
>>
>> ++
>>
>>
>>>
>>> >
>>> > And since this test requires a lot of environment set up including: 2
>>> > separate cloud installations, shibboleth, creating mapping and
>>> protocols on
>>> > remote cloud, etc. Would it be within the scope of tempest's mission?
>>>
>>> From the tempest perspective it expects the environment to be setup and
>>> already
>>> exist by the time you run the test. If it's a valid use of the API,
>>> which I'd
>>> say this is and an important one too, then I feel it's fair game to have
>>> tests
>>> for this live in tempest. We'll just have to make the configuration
>>> options
>>> around how tempest will do this very explicit to make sure the necessary
>>> environment exists before the tests are executed.
>>>
>>
>> Another option is to add those tests to keystone itself (if you are not
>> including tests that triggers other components APIs). See
>> https://blueprints.launchpad.net/keystone/+spec/keystone-tempest-plugin-tests
>>
>>
>
> Again though, the problem is not where the tests live but where we run
> them. To practically run these tests we need to either add K2K testing
> support to devstack (not sure this is appropriate) or come up with a new
> test environment that deploys 2 keystones and federation support that we
> can CI against in the gate. This is doable but i think something we need
> support with from infra before worrying about tempest.
>
>
>

>

>>> The fly in the ointment for this case will be CI though. For tests to
>>> live in
>>> tempest they need to be verified by a CI system before they can land. So
>>> to
>>> land the additional testing in tempest you'll have to also ensure there
>>> is a
>>> CI job setup in infra to configure the necessary environment. While I
>>> think
>>> this is a good thing to have in the long run, it's not necessarily a
>>> small
>>> undertaking.
>>>
>>
>>> -Matt Treinish
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Rodrigo Duarte Sousa
>> Senior Quality Engineer @ Red Hat
>> MSc in Computer Science
>> http://rodrigods.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][openstackclient] Required name option for volumes, snapshots and backups

2016-04-05 Thread Jay Bryant
Duncan,

Agreed, but the OSC team is concerned about unnecessarily adding API names
into commands as much as the Cinder team wishes to make it clearer which
commands belong to our component. This is where we need to keep this
discussion open with the OSC team to find a good common ground. I am hoping
to have Slade Baumann and Jacob Gregor work this issue actively with Sheel.
We need to pull you in as well given your views on this. I will make sure
they keep you in the loop.

Jay

On Tue, Apr 5, 2016 at 10:24 AM Duncan Thomas 
wrote:

> What about commands that become ambiguous in the future? I doubt there are
> many operations or objects that are unique to Cinder - backup, snapshot,
> transfer, group, type - these are all very much generic, and even if they
> aren't ambiguous now, they might well become so in future...
>
> On 5 April 2016 at 17:15, Jay Bryant 
> wrote:
>
>> All,
>>
>> Just to document the discussion we had during the OSC IRC meeting last
>> week: I believe the consensus we reached was that it wasn't appropriate to
>> pretend "volume" before all Cinder commands but that it would be
>> appropriate to move in that direction to for any commands that may be
>> ambiguous like "snapshot". The cinder core development team will start
>> working with the OSC development teams to address such commands and move
>> them to more user friendly commands and as we move forward we will work to
>> avoid such confusion in the future.
>>
>> Jay
>>
>> On Mon, Mar 28, 2016 at 1:15 PM Dean Troyer  wrote:
>>
>>> On Sun, Mar 27, 2016 at 6:11 PM, Mike Perez  wrote:
>>>
 On 00:40 Mar 28, Jordan Pittier wrote:
 > I am going to play the devil's advocate here but why can"t
 > python-openstackclient have its own opinion on the matter ? This CLI
 seems
 > to be for humans and humans love names/labels/tags and find UUIDS
 hard to
 > remember. Advanced users who want anonymous volumes can always hit
 the API
 > directly with curl or whatever SDK.

 I suppose it could, however, names are not unique.

>>>
>>> Names are not unique in much of OpenStack.  When ambiguity exists, we
>>> exit with an error.
>>>
>>> Also, this works to produce a volume with no name should you absolutely
>>> require it:
>>>
>>> openstack volume create --size 10 ""
>>>
>>>
>>> dt
>>> --
>>>
>>> Dean Troyer
>>> dtro...@gmail.com
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> --
> Duncan Thomas
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate] Tempest Tests - in-repo or separate

2016-04-05 Thread Matthew Treinish
On Tue, Apr 05, 2016 at 01:37:21PM +0100, Kiall Mac Innes wrote:
> Agreed on a separate repo.
> 
> I'm honestly a little unsure how an in-service-repo plugin would ever work
> long term, given that tempest's requirements will match master, and may
> not be compatible with the $service requirements from N releases ago...
> 
> Thoughts on how that might work, or is it just that no service has had
> an in-repo plugin for long enough to hit this yet?

So this has come up; the plugin interface has been around for a little over a
cycle, so some projects have crossed a release boundary already. Although most
projects don't seem to worry about this and either don't run the plugin on
stable or just haven't thought about it yet. The solution I've been working with
for ironic is to install the project from master inside the tempest tox venv.
I've got a patch up to add the machinery to devstack-gate to enable this:

https://review.openstack.org/297446

But, I agree having the plugin be a separate python package makes this much
easier.
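
For reference, the plugin shim itself is small; here is a rough sketch of
what a standalone plugin package might contain (the module, class and
entry point names are invented for the example):

    # designate_tempest_plugin/plugin.py
    import os

    from tempest.test_discover import plugins


    class DesignateTempestPlugin(plugins.TempestPlugin):
        def load_tests(self):
            # Tell tempest where this package's tests live on disk.
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "designate_tempest_plugin/tests"
            full_test_dir = os.path.join(base_path, test_dir)
            return full_test_dir, base_path

        def register_opts(self, conf):
            # Register any service-specific config options here.
            pass

        def get_opt_lists(self):
            return []

    # and in setup.cfg:
    #
    #   [entry_points]
    #   tempest.test_plugins =
    #       designate_tests = designate_tempest_plugin.plugin:DesignateTempestPlugin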

-Matt Treinish

> 
> On 04/04/16 17:16, Andrea Frittoli wrote:
> > I look forward to the first (as far as I know) tempest plugin for a
> > service in its own repo! :)
> >
> >
> > On Mon, Apr 4, 2016 at 4:49 PM Hayes, Graham wrote:
> >
> > On 04/04/2016 16:36, Jordan Pittier wrote:
> > >
> > > On Mon, Apr 4, 2016 at 5:01 PM, Hayes, Graham wrote:
> > >
> > > As we have started to move to a tempest plugin for our
> > functional test
> > > suite, we have 2 choices about where it lives.
> > >
> > > 1 - In repo (as we have [0] currently)
> > > 2 - In a repo of its own (something like
> > openstack/designate-tempest)
> > >
> > > There are several advantages to a separate repo:
> > >
> > > * It will force us to make API changes compatible
> > > * * This could cause us to be slower at merging changes [1]
> > > * It allows us to be branchless (like tempest is)
> > > * It can be its own installable package, and a (much)
> > shorter list
> > > of requirements.
> > >
> > > I am not a Designate contributor, but as a Tempest contributor we
> > > recommend to use a separate repo. See
> > >
> > 
> > http://docs.openstack.org/developer/tempest/plugin.html#standalone-plugin-vs-in-repo-plugin
> > > for more details.
> >
> > Yeap - that was one of the reasons I will leaning towards the separate
> > repo. The only thing stopping us was that I cannot see any other
> > project
> > who does it [2]
> >
> > We just had a quick discussion in IRC[3] and we are going to go with
> > the separate repo anyway.
> >
> > 2 -
> > 
> > http://codesearch.openstack.org/?q=%5Etempest=nope=setup.cfg=
> > 3 -
> > 
> > http://eavesdrop.openstack.org/irclogs/%23openstack-dns/%23openstack-dns.2016-04-04.log.html#t2016-04-04T15:05:38
> >
> >
> > > If everyone is OK with a separate repo, I will go ahead and
> > start the
> > > creation process.
> > >
> > > Thanks
> > >
> > > - Graham
> > >
> > >
> > > 0 - https://review.openstack.org/283511
> > > 1 -
> > >   
> >  
> > http://docs.openstack.org/developer/tempest/HACKING.html#branchless-tempest-considerations
> > >
> > >   
> >  
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > >   
> >  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > 
> > >   
> >  
> > >   
> >  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> >
> >
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [cinder][openstackclient] Required name option for volumes, snapshots and backups

2016-04-05 Thread Jay Bryant
All,

Just to document the discussion we had during the OSC IRC meeting last
week: I believe the consensus we reached was that it wasn't appropriate to
pretend "volume" before all Cinder commands but that it would be
appropriate to move in that direction to for any commands that may be
ambiguous like "snapshot". The cinder core development team will start
working with the OSC development teams to address such commands and move
them to more user friendly commands and as we move forward we will work to
avoid such confusion in the future.

Jay

On Mon, Mar 28, 2016 at 1:15 PM Dean Troyer  wrote:

> On Sun, Mar 27, 2016 at 6:11 PM, Mike Perez  wrote:
>
>> On 00:40 Mar 28, Jordan Pittier wrote:
>> > I am going to play the devil's advocate here but why can"t
>> > python-openstackclient have its own opinion on the matter ? This CLI
>> seems
>> > to be for humans and humans love names/labels/tags and find UUIDS hard
>> to
>> > remember. Advanced users who want anonymous volumes can always hit the
>> API
>> > directly with curl or whatever SDK.
>>
>> I suppose it could, however, names are not unique.
>>
>
> Names are not unique in much of OpenStack.  When ambiguity exists, we exit
> with an error.
>
> Also, this works to produce a volume with no name should you absolutely
> require it:
>
> openstack volume create --size 10 ""
>
>
> dt
> --
>
> Dean Troyer
> dtro...@gmail.com
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][openstackclient] Required name option for volumes, snapshots and backups

2016-04-05 Thread Duncan Thomas
What about commands that become ambiguous in the future? I doubt there are
many operations or objects that are unique to Cinder - backup, snapshot,
transfer, group, type - these are all very much generic, and even if they
aren't ambiguous now, they might well become so in future...

On 5 April 2016 at 17:15, Jay Bryant  wrote:

> All,
>
> Just to document the discussion we had during the OSC IRC meeting last
> week: I believe the consensus we reached was that it wasn't appropriate to
> pretend "volume" before all Cinder commands but that it would be
> appropriate to move in that direction to for any commands that may be
> ambiguous like "snapshot". The cinder core development team will start
> working with the OSC development teams to address such commands and move
> them to more user friendly commands and as we move forward we will work to
> avoid such confusion in the future.
>
> Jay
>
> On Mon, Mar 28, 2016 at 1:15 PM Dean Troyer  wrote:
>
>> On Sun, Mar 27, 2016 at 6:11 PM, Mike Perez  wrote:
>>
>>> On 00:40 Mar 28, Jordan Pittier wrote:
>>> > I am going to play the devil's advocate here but why can"t
>>> > python-openstackclient have its own opinion on the matter ? This CLI
>>> seems
>>> > to be for humans and humans love names/labels/tags and find UUIDS hard
>>> to
>>> > remember. Advanced users who want anonymous volumes can always hit the
>>> API
>>> > directly with curl or whatever SDK.
>>>
>>> I suppose it could, however, names are not unique.
>>>
>>
>> Names are not unique in much of OpenStack.  When ambiguity exists, we
>> exit with an error.
>>
>> Also, this works to produce a volume with no name should you absolutely
>> require it:
>>
>> openstack volume create --size 10 ""
>>
>>
>> dt
>> --
>>
>> Dean Troyer
>> dtro...@gmail.com
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [Heat] Re-evaluate conditions specification

2016-04-05 Thread Zane Bitter

On 05/04/16 06:43, Steven Hardy wrote:

On Fri, Apr 01, 2016 at 04:39:02PM +, Fox, Kevin M wrote:

Why is imperative programming always brought up when discussing
conditionals in the templates? We are not wanting anything imperative. The
heat engine still picks the final ordering of things. We just want to give
it all the valid options, and then enough knowledge to eliminate
unacceptlable solutions. Maybe we are using the wrong term for them? Would
'constraint' suit the conversation better?


Since nobody is really disputing that we should implement *something*,
can we maybe just stop having this unhelpful argument about semantics?



I think it's because as soon as you start conflating specific conditions
with a declarative model (in the same place, e.g within the same template),
you end up with an imperative flow whether you want it or not.


Sorry, but all of the templates and environment files and other data 
(including parameters) together constitute the model. The distinction 
you're trying to draw between stuff that is defined in the template and 
stuff that is defined in the environment (as in your "more declarative 
approach" below) is entirely spurious.



E.g if condition=$foo, resource X is created, but resource Y isn't, which
means you no longer have an explicit definition of the required state in
the template.


You could make a similar argument about parameters, and it would be just 
as pointless.



This feels imperative because the template now contains
statements which change the resulting desired state and it influences
references (e.g attributes) elsewhere in the template.


As long as we follow the cfn model, or something in the same ballpark, 
it's completely declarative. You're *declaring* that you want this set 
of things, and simultaneously *declaring* that you don't want this other 
set of things. The only difference between this and the current 
templates is that you're being oddly specific about things you don't want :D


There's simply no hint of imperativeness here - the conditions are all 
completely determined before the model is built.


The same would arguably not be true if we allowed arbitrary data to be 
passed as the condition and for it to be evaluated at runtime (e.g. from 
get_attr), and the need to have some distinction in the template 
structure to indicate to users that we cannot, in fact, allow this is 
the reason that I now support hewing close to the cfn model.



The more declarative approach to composition I was aiming for here:

https://github.com/openstack/heat-specs/blob/master/specs/mitaka/resource-capabilities.rst

This is a model where each template retains its declarative interface, and
you select which combination of templates to use by "eliminating
unacceptable solutions" exactly as you say above.


Two problems with having this as the _only_ solution:

1) Stacks are an extremely heavyweight abstraction. This is part of the 
reason that Heat can chew up 10s of GB of RAM on a TripleO deployment 
with several hundred nodes.
2) Because moving a resource into a nested stack severs its dependency 
relationship with the surrounding resources, the declarative model is 
actually broken when it comes to deleting or replacing the now-nested 
resource.[1]


cheers,
Zane.

[1] https://etherpad.openstack.org/p/mitaka-heat-break-stack-barrier

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Rescheduling IRC meetings

2016-04-05 Thread ZhiQiang Fan
+1 to on-demand meetings and the asynchronous approach

On Thu, Mar 31, 2016 at 9:33 PM, Ildikó Váncsa 
wrote:

> Hi Gordon,
>
> >
> > ie. the new PTL should checkpoint with subteam leads regularly to review
> spec status or identify missing resources on high-priority
> > items?
>
> I would say anything that we feel as useful information to people who read
> this mailing list. In this sense features that got implemented, items we
> are focusing on, as you mentioned resource bottlenecks on important items,
> etc.
>
> >
> > as some feedback, re: news flash mails, we need a way to promote roadmap
> backlog items better. i'm not sure anyone looks at Road
> > Map page...
> > maybe we need to organise it better with priority and incentive.
>
> I had the Roadmap page in mind as well partially, we could highlight the
> plans/tasks from that page and also track progress.
>
> Thanks,
> /Ildikó
>
> >
> > On 31/03/2016 7:14 AM, Ildikó Váncsa wrote:
> > > Hi All,
> > >
> > > +1 on the on demand meeting schedule. Maybe we can also have some news
> flash mails  week to summarize the progress in our
> > sub-modules when we don't have the meeting. Just to keep people up to
> date.
> > >
> > > Will we already skip the today's meeting?
> > >
> > > Thanks,
> > > /Ildikó
> > >
> > >> -Original Message-
> > >> From: Julien Danjou [mailto:jul...@danjou.info]
> > >> Sent: March 31, 2016 11:04
> > >> To: liusheng
> > >> Cc: openstack-dev@lists.openstack.org
> > >> Subject: Re: [openstack-dev] [telemetry] Rescheduling IRC meetings
> > >>
> > >> On Thu, Mar 31 2016, liusheng wrote:
> > >>
> > >>> Another personal suggestion:
> > >>>
> > >>> maybe we can have a weekly routine mail thread to present the things
> > >>> need to be discussed or need to be notified. The mail will also list
> > >>> the topics posted in meeting agenda and ask to Telemetry folks if  a
> > >>> online IRC meeting is necessary, if there are a very few topics or
> > >>> low priority topics, or the topics can be suitably discussed
> > >>> asynchronously, we can discuss them in the mail thread.
> > >>>
> > >>> any thoughts?
> > >>
> > >> Yeah I think it's more or less the same idea that was proposed,
> > >> schedule a meeting only if needed. I'm going to amend the meeting
> wiki page with that!
> > >>
> > >> --
> > >> Julien Danjou
> > >> -- Free Software hacker
> > >> -- https://julien.danjou.info
> > >
> > > __
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > --
> > gord
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas and the need for reservation

2016-04-05 Thread Ryan McNair
>It is believed that reservations help to reserve a set of resources
>beforehand, eventually preventing any other upcoming request
>(serial or parallel) from exceeding quota when, because of the original
>request, the project has reached its quota limits.
>
>Questions :-
>1. Does reservation in its current state, as used by Nova, Cinder and Neutron,
>help to solve the above problem?

In Cinder the reservations are useful for grouping quota
for a single request, and if the request ends up failing
the reservation gets rolled back. The reservations also
rollback automatically if not committed within a certain
time. We also use reservations with Cinder nested quotas
to group a usage request that may propagate up to a parent
project in order to manage commit/rollback of the request
as a single unit.
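
As a concrete sketch of that grouping pattern (modelled loosely on
cinder.quota's QUOTAS engine; _do_create_volume is a placeholder, and the
exact signatures may differ between releases):

    from cinder import quota

    QUOTAS = quota.QUOTAS

    def create_volume(context, size):
        # Reserve all deltas for this request as a single unit; raises
        # OverQuota if the project would exceed its limits.
        reservations = QUOTAS.reserve(context, volumes=1, gigabytes=size)
        try:
            volume = _do_create_volume(context, size)  # placeholder
        except Exception:
            # The request failed, so hand the reserved usage back.
            QUOTAS.rollback(context, reservations)
            raise
        # Success: turn the reservation into committed usage.
        QUOTAS.commit(context, reservations)
        return volume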

>
>2. Is it consistent, reliable? Even with reservations, can we run into
>inconsistent behaviour?

Others can probably answer this better, but I have not
seen the reservations be a major issue. In general with
quotas we're not doing the check and set atomically which
can get us in an inconsistent state with quota-update,
but that's unrelated to the reservations.

>
>3. Do we really need it?
>

Seems like we need *some* way of keeping track of usage
reserved during a particular request and a way to easily
roll that back at a later time. I'm open to alternatives
to reservations, just wondering what the big downside of
the current reservation system is.

- Ryan McNair (mc_nair)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api] advanced search criteria

2016-04-05 Thread Ihar Hrachyshka

Hirofumi Ichihara  wrote:


Hi Ihar,

On 2016/04/05 7:57, Ihar Hrachyshka wrote:

Hi all,

in neutron, we have a bunch of configuration options to control advanced  
filtering features for API, f.e. allow_sorting, allow_pagination,  
allow_bulk, etc. Those options have default False values.
I saw the allow_bulk option is set to True by default in  
https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L66

Well, I don't think anyone sets the option to False.


Yes, indeed only allow_sorting and allow_pagination are disabled by default.



In the base API controller class, we have support for both native  
sorting/pagination/bulk operations [implemented by the plugin itself],  
as well as a generic implementation for plugins without native support.  
But if corresponding allow_* options are left with their default False  
values, those advanced search/filtering criteria just don’t work, no  
matter whether the plugin supports native filters or not.


It seems weird to me that our API behaves differently depending on  
configuration options, and that we have those useful features disabled  
by default.


My immediate interest is to add native support for sorting/pagination  
for QoS service plugin; I have a patch for that, and I planned to add  
some API tests to validate that the features work, but I hit failures  
because those features are not enabled for the -api job.


Some questions:
- can we enable those features in -api job?
- is there any reason to keep default values for allow_* as False, and  
if not, can we switch to True?
- why do we even need to control those features with configuration  
options? can we deprecate and remove them?
I agree we should deprecate and remove the options, but I think that we need
more tests if we support them as the default.

It looks like there are very few tests (unit tests only).


That’s a good suggestion. I started a patch to enable those two options,
plus add the first tests for the feature:


https://review.openstack.org/#/c/301634/

For now it covers only networks. I wonder how we envision the coverage.
Do we want to have test cases per resource? Any ideas on how to make the  
code more generic to avoid code duplication? For example, I could move  
those test cases into a base class that would require some specialization  
for each resource that we want to cover (get/create methods, primary key,  
…).
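
Something along these lines might work -- a rough sketch, with invented
class and method names, assuming each test runs against an isolated
project so only the resources it creates show up in listings:

    # `base.BaseAdminNetworkTest` stands in for neutron's API-test base
    # class; everything here is illustrative, not from the actual patch.
    class BaseSearchCriteriaTest(base.BaseAdminNetworkTest):
        resource = None   # e.g. 'network'; collection key assumed resource + 's'
        field = 'name'    # primary sort/filter key

        def _create_resources(self, names):
            raise NotImplementedError

        def _list_resources(self, **kwargs):
            raise NotImplementedError

        def _test_list_sorts(self, direction):
            self._create_resources(['abc', 'xyz', 'def'])
            body = self._list_resources(sort_key=self.field,
                                        sort_dir=direction)
            actual = [r[self.field] for r in body[self.resource + 's']]
            expected = sorted(actual, reverse=(direction == 'desc'))
            self.assertEqual(expected, actual)

    class NetworksSearchCriteriaTest(BaseSearchCriteriaTest):
        resource = 'network'

        def _create_resources(self, names):
            for name in names:
                self.create_network(name=name)

        def _list_resources(self, **kwargs):
            return self.client.list_networks(**kwargs)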


Also, do we maybe want to split the patch into two pieces:
- first one adding tests [plus enabling those features for API job];
- second one changing the default value for the options.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Newton design summit planning

2016-04-05 Thread gordon chung
seems fine to me. we still don't have a story around events. we just
sort of store and dump them right now. i guess this could probably fit
into the ceilometer split session... or maybe everyone is cool with just
pushing things to elasticsearch and letting users play with that + kibana.

On 05/04/2016 5:58 AM, Julien Danjou wrote:
> Hi folks,
>
> I've started to fill in the blank for our next design summit sessions.
> The schedule is at:
>
>
> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Telemetry%3A
>
> We still have the first 2 work sessions empty, because I ran out of
> ideas. I've used the Etherpad we previously announced¹ to build the
> schedule. Let me also know if you think we should shuffle things around.
>
> Any items/ideas we could discuss in these remaining sessions?
> Any project we could/should invite to discuss with us?
>
> Cheers,
>
> ¹  https://etherpad.openstack.org/p/newton-telemetry-summit-planning
>

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FPGA as a resource

2016-04-05 Thread Daniel P. Berrange
On Tue, Apr 05, 2016 at 02:27:30PM +0200, Roman Dobosz wrote:
> Hey all,
> 
> At yesterday's scheduler meeting I raised the idea of bringing FPGAs
> into OpenStack as a resource, which then might be exposed to the VMs.
> 
> The use cases motivating why one would want to do this are pretty broad -
> having such a chip ready on the computes might be beneficial both for
> consumers of the technology and for data center administrators. The
> utilization of the hardware is very broad - the only limitations are
> human imagination and hardware capability - since it might be used for
> accelerating execution of algorithms from compression and cryptography,
> through pattern recognition and transcoding, to voice/video analysis and
> processing and everything in between. Using an FPGA to perform data
> processing may significantly reduce CPU utilization, time and power
> consumption, which is a benefit on its own.
> 
> On the OpenStack side, unlike utilizing the CPU or memory, to actually
> use a specific algorithm with an FPGA, it has to be programmed first. So
> in a simplified scenario, it might go like this:
> 
> * User selects a VM with an image which supports acceleration,
> * Scheduler selects an appropriate compute host with an FPGA available,
> * Compute gets the request, programs the IP into the FPGA and then
>   boots up the VM with the accelerator attached.
> * If the VM is removed, it may optionally erase the FPGA.
> 
> As you can see, it seems not complicated at this point; however, it
> becomes more complex due to the following things we also have to take
> into consideration:
> 
> * recent FPGAs are divided into regions (or slots), each of which
>   can be programmed separately
> * slots may or may not fit the same bitstream (the program which the
>   FPGA is fed, the IP)
> * there are several products around (Altera, Xilinx, others), which
>   makes bitstreams incompatible, even between products of the same
>   company
> * libraries which abstract the hardware layer, like AAL[1], and their
>   versions
> * for some products, there is a need for tracking memory usage, which
>   is located on PCI boards
> * some of the FPGAs can be exposed using SR-IOV, while others cannot,
>   which implies different usage abilities

Along similar lines we have proposals to add vGPU support to Nova,
where the vGPUs may or may not be exposed using SR-IOV. We also want
to be able to decide on the fly whether any physical GPU is assigned
entirely to a guest as a full PCI device, or whether we only assign
individual "virtual functions" of the GPU. This means that even if
the GPU in question does *not* use SR-IOV, we still need to track
the GPU and vGPUs in the same way as we track PCI devices, so that
we can avoid assigning a vGPU to the guest, if the underlying physical
PCI device is already assigned to the guest.

I can see we will have much the same issue with FPGAs, where we may
either want to assign the entire physical PCI device to a guest, or
just assign a particular slot in the FPGA to the guest. So even if
the FPGA is not using SR-IOV, we need to tie this all into the PCI
device tracking code, as we are intending for vGPUs.

All in all, I think we probably ought to generalize the PCI device
assignment modelling so that we're actually modelling generic
hardware devices which may or may not be PCI based, so that we can
accurately track the relationships between the devices.

With NIC devices we're also seeing a need to expose capabilities
against the PCI devices, so that the scheduler can be more selective
in deciding which particular devices to assign, e.g. so we can distinguish
between NICs which support RDMA and those which don't, or identify NICs
with hardware offload features, and so on. I can see this need to
associate capabilities with devices is something that will likely
be needed for the FPGA scenario, and vGPUs too. So again this points
towards more general purpose modelling of assignable hardware devices
beyond the limited PCI device modelling we've got today.
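
As a very rough sketch of the shape such a generalized model might take
(illustrative only, not an actual Nova object or a concrete proposal):

    class AssignableDevice(object):
        """Generic assignable device: a PCI device, a VF, a vGPU, an
        FPGA slot, ... (all names here are invented for illustration)."""

        def __init__(self, dev_type, address, parent=None, capabilities=None):
            self.dev_type = dev_type      # 'pci', 'vgpu', 'fpga-slot', ...
            self.address = address        # PCI address, slot id, ...
            self.parent = parent          # physical device this is carved from
            self.capabilities = set(capabilities or [])  # e.g. {'rdma'}

        def is_assignable(self, claimed):
            # A child (VF/vGPU/slot) is only assignable if neither it nor
            # its parent physical device is already claimed, and a whole
            # device is only assignable if none of its children are.
            return (self not in claimed
                    and self.parent not in claimed
                    and not any(dev.parent is self for dev in claimed))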

Looking to the future I think we'll see more usecases for device
assignment appearing for other types of device.

IOW, I think it would be a mistake to model FPGAs as a distinct
object type on their own. Generalization of assignable devices
is the way to go.

> In other words, it may be necessary to incorporate other actions:
> 
> * properly discover the FPGA and its capabilities
> * schedule the right bitstream onto a matching unoccupied FPGA
>   device/slot
> * actually program the FPGA
> * provide libraries into the VM, which are necessary for interacting
>   between the user program and the exposed FPGA (or AAL) (this may be
>   optional, since the user can upload a complete image with everything
>   in place)
> * bitstream images have to be kept in some kind of service (Glance?)
>   with some way of identifying which image matches which FPGA
> 
> All of that makes modelling the resource extremely complicated, contrary
> to the CPU resource for example. I'd like to discuss how the goal 

Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Steven Hardy
On Tue, Apr 05, 2016 at 02:07:06PM +0300, Juan Antonio Osorio wrote:
>On Tue, Apr 5, 2016 at 11:36 AM, Steven Hardy  wrote:
> 
>  On Sat, Apr 02, 2016 at 05:28:57PM -0400, Adam Young wrote:
>  > I finally have enough understanding of what is going on with Tripleo
>  to
>  > reasonably discuss how to implement solutions for some of the main
>  security
>  > needs of a deployment.
>  >
>  >
>  > FreeIPA is an identity management solution that can provide support for:
>  >
>  > 1. TLS on all network communications:
>  >    A. HTTPS for web services
>  >    B. TLS for the message bus
>  >    C. TLS for communication with the Database.
>  > 2. Identity for all Actors in the system:
>  >   A.  API services
>  >   B.  Message producers and consumers
>  >   C.  Database consumers
>  >   D.  Keystone service users
>  > 3. Secure DNS (DNSSEC)
>  > 4. Federation Support
>  > 5. SSH Access control to Hosts for both undercloud and overcloud
>  > 6. SUDO management
>  > 7. Single Sign On for Applications running in the overcloud.
>  >
>  >
>  > The main pieces of FreeIPA are
>  > 1. LDAP (the 389 Directory Server)
>  > 2. Kerberos
>  > 3. DNS (BIND)
>  > 4. Certificate Authority (CA) server (Dogtag)
>  > 5. WebUI/Web Service Management Interface (HTTPD)
>  >
>  > Of these, the CA is the most critical.  Without a centralized CA, we
>  > have no reasonable way to do certificate management.
>  >
>  > Now, I know a lot of people have an allergic reaction to some, maybe
>  > all, of these technologies. They should not be required to be running
>  > in a development or testbed setup.  But we need to make it possible to
>  > secure an end deployment, and FreeIPA was designed explicitly for
>  > these kinds of distributed applications.  Here is what I would like to
>  > implement.
>  >
>  > Assuming that the Undercloud is installed on a physical machine, we
>  > want to treat the FreeIPA server as a managed service of the
>  > undercloud that is then consumed by the rest of the overcloud. Right
>  > now, there are conflicts for some ports (8080 used by both swift and
>  > Dogtag) that prevent a drop-in run of the server on the undercloud
>  > controller.  Even if we could deconflict, there is a possible battle
>  > between Keystone and the FreeIPA server on the undercloud.  So, while
>  > I would like to see the ability to run the FreeIPA server on the
>  > Undercloud machine eventually, I think a more realistic deployment is
>  > to build a separate virtual machine, parallel to the overcloud
>  > controller, and install FreeIPA there. I've been able to modify
>  > Tripleo Quickstart to provision this VM.
> 
>  IMO these services shouldn't be deployed on the undercloud - we only
>  support a single node undercloud, and atm it's completely possible to
>  take the undercloud down without any impact to your deployed cloud
>  (other than losing the ability to manage it temporarily).
> 
>This is fair enough, however, for CI purposes, would it be acceptable to
>deploy it there? Or where do you recommend we have it?

We're already well beyond capacity in CI, so to me this seems like
something that's probably appropriate for a third-party CI job?

To me it just doesn't make sense to integrate these pieces on the
undercloud, and integrating it there just because we need it available for
CI purposes seems like a poor argument, because we're not testing a
representative/realistic environment.

If we have to wire this in to TripleO CI I'd favor spinning up an extra
node with the FreeIPA pieces in, e.g. a separate Heat stack (so, e.g. the
nonha job takes 3 nodes, a "freeipa" stack of 1 and an overcloud of 2).

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][ironic] bifrost 1.0.1 release (mitaka)

2016-04-05 Thread no-reply
We are satisfied to announce the release of:

bifrost 1.0.1: Deployment of physical machines using OpenStack Ironic
and Ansible

This release is part of the mitaka stable release series.

For more details, please see below.

Changes in bifrost 1.0.0..1.0.1
-------------------------------

83f2232 Update mitaka release notes source
a4ccb66 Update default branch to stable/mitaka

Diffstat (except docs and test files)
-------------------------------------

.gitreview | 1 +
releasenotes/source/mitaka.rst | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Proposing Anton Arefiev (aarefiev) for ironic-inspector-core

2016-04-05 Thread Vladyslav Drok
Not sure if I'm eligible to vote, but his reviews are insightful, so +1
from me. Congrats! :)

On Tue, Apr 5, 2016 at 3:22 PM, Imre Farkas  wrote:

> +1 from me. Thanks for your work Anton and congrats! ;-)
>
> Imre
>
>
>
> On 04/05/2016 12:24 PM, Dmitry Tantsur wrote:
>
>> Hi!
>>
>> I'd like to propose Anton to the ironic-inspector core reviewers team.
>> His stats are pretty nice [1], he's making meaningful reviews and he's
>> pushing important things (discovery, now tempest).
>>
>> Members of the current ironic-inspector-team and everyone interested,
>> please respond with your +1/-1. A lazy consensus will be applied: if
>> nobody objects by the next Tuesday, the change will be in effect.
>>
>> Thanks
>>
>> [1] http://stackalytics.com/report/contribution/ironic-inspector/60
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Newton design summit planning

2016-04-05 Thread Balázs Gibizer
> -----Original Message-----
> From: Chris Dent [mailto:cdent...@anticdent.org]
> Sent: April 05, 2016 12:13
> 
> On Tue, 5 Apr 2016, Julien Danjou wrote:
> 
> > Any items/ideas we could discussion in this remaining sessions?
> > Any project we could/should invite to discuss with us?
> 
> Nova has started work on versioned notifications:
> 
>http://specs.openstack.org/openstack/nova-
> specs/specs/mitaka/implemented/versioned-notification-api.html
> 
> That in itself is probably something to be aware of but also:
> 
> If the group working on that (Balazs Gibizer and co) is interested they might
> be encouraged to explore changing the nova compute manager to produce,
> as notifications, some of the information the ceilo compute pollsters get.

I support this idea. However, I'd like to note that my current priority is to
transform the legacy nova notifications to the new versioned framework [1].
Of course that shall not prevent us from proposing new notifications based on
that framework. I am definitely interested in discussing this topic at the
summit. Maybe a cross-project ceilo-nova session would be a good place to have
the right audience and to set up the right priorities.
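
For readers who have not looked at the framework yet, a payload in the
new model is an oslo.versionedobjects-style class along these lines (a
paraphrased sketch, not the exact nova code -- see [1] and the spec
linked above for the real thing):

    from nova.notifications.objects import base as notification
    from nova.objects import base as nova_base
    from nova.objects import fields

    @nova_base.NovaObjectRegistry.register_notification
    class ServiceStatusPayload(notification.NotificationPayloadBase):
        # Version 1.0: Initial version
        VERSION = '1.0'
        fields = {
            'host': fields.StringField(),
            'binary': fields.StringField(),
            'disabled': fields.BooleanField(),
        }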

Cheers,
Gibi

[1] https://review.openstack.org/#/c/286675/ 

> 
> --
> Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
> freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate] Tempest Tests - in-repo or separate

2016-04-05 Thread Kiall Mac Innes
Agreed on a separate repo.

I'm honestly a little unsure how in-service-repo would ever work long
term, given that tempest's requirements will match master, and may
not be compatible with the $service requirements from N releases ago...

Thoughts on how that might work, or is it just that no service has had
an in-repo plugin for long enough to hit this yet?
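
(For what it's worth, the plugin is discovered through a setuptools entry
point either way, so the move is mostly a packaging change -- something
like this in the new repo's setup.cfg, with hypothetical names:)

    [entry_points]
    tempest.test_plugins =
        designate_tests = designate_tempest_plugin.plugin:DesignateTempestPlugin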

Thanks,
Kiall

On 04/04/16 17:16, Andrea Frittoli wrote:
> I look forward to the first (as far as I know) tempest plugin for a
> service in its own repo! :)
>
>
> On Mon, Apr 4, 2016 at 4:49 PM Hayes, Graham wrote:
>
> On 04/04/2016 16:36, Jordan Pittier wrote:
> >
> > On Mon, Apr 4, 2016 at 5:01 PM, Hayes, Graham wrote:
> >
> > As we have started to move to a tempest plugin for our functional test
> > suite, we have 2 choices about where it lives.
> >
> > 1 - In repo (as we have [0] currently)
> > 2 - In a repo of its own (something like openstack/designate-tempest)
> >
> > There are several advantages to a separate repo:
> >
> > * It will force us to make API changes compatible
> >   * This could cause us to be slower at merging changes [1]
> > * It allows us to be branchless (like tempest is)
> > * It can be its own installable package, and a (much) shorter list
> >   of requirements.
> >
> > I am not a Designate contributor, but as a Tempest contributor we
> > recommend using a separate repo. See
> > http://docs.openstack.org/developer/tempest/plugin.html#standalone-plugin-vs-in-repo-plugin
> > for more details.
>
> Yeap - that was one of the reasons I was leaning towards the separate
> repo. The only thing stopping us was that I cannot see any other
> project who does it [2]
>
> We just had a quick discussion in IRC [3] and we are going to go with
> the separate repo anyway.
>
> 2 - http://codesearch.openstack.org/?q=%5Etempest=nope=setup.cfg=
> 3 - http://eavesdrop.openstack.org/irclogs/%23openstack-dns/%23openstack-dns.2016-04-04.log.html#t2016-04-04T15:05:38
>
>
> > If everyone is OK with a separate repo, I will go ahead and
> > start the creation process.
> >
> > Thanks
> >
> > - Graham
> >
> >
> > 0 - https://review.openstack.org/283511
> > 1 - http://docs.openstack.org/developer/tempest/HACKING.html#branchless-tempest-considerations

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FPGA as a resource

2016-04-05 Thread Roman Dobosz
Hey all,

At yesterday's scheduler meeting I raised the idea of bringing FPGAs
into OpenStack as a resource, which then might be exposed to the VMs.

The use cases motivating why one would want to do this are pretty broad -
having such a chip ready on the computes might be beneficial both for
consumers of the technology and for data center administrators. The
utilization of the hardware is very broad - the only limitations are
human imagination and hardware capability - since it might be used for
accelerating execution of algorithms from compression and cryptography,
through pattern recognition and transcoding, to voice/video analysis and
processing and everything in between. Using an FPGA to perform data
processing may significantly reduce CPU utilization, time and power
consumption, which is a benefit on its own.

On the OpenStack side, unlike utilizing the CPU or memory, to actually
use a specific algorithm with an FPGA, it has to be programmed first. So
in a simplified scenario, it might go like this:

* User selects a VM with an image which supports acceleration,
* Scheduler selects an appropriate compute host with an FPGA available,
* Compute gets the request, programs the IP into the FPGA and then
  boots up the VM with the accelerator attached.
* If the VM is removed, it may optionally erase the FPGA.

As you can see, it seems not complicated at this point; however, it
becomes more complex due to the following things we also have to take
into consideration:

* recent FPGAs are divided into regions (or slots), each of which
  can be programmed separately
* slots may or may not fit the same bitstream (the program which the
  FPGA is fed, the IP)
* there are several products around (Altera, Xilinx, others), which
  makes bitstreams incompatible, even between products of the same
  company
* libraries which abstract the hardware layer, like AAL[1], and their
  versions
* for some products, there is a need for tracking memory usage, which
  is located on PCI boards
* some of the FPGAs can be exposed using SR-IOV, while others cannot,
  which implies different usage abilities

In other words, it may be necessary to incorporate other actions:

* properly discover the FPGA and its capabilities
* schedule the right bitstream onto a matching unoccupied FPGA
  device/slot
* actually program the FPGA
* provide libraries into the VM, which are necessary for interacting
  between the user program and the exposed FPGA (or AAL) (this may be
  optional, since the user can upload a complete image with everything
  in place)
* bitstream images have to be kept in some kind of service (Glance?)
  with some way of identifying which image matches which FPGA (see the
  sketch below)
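
To make the matching idea concrete, here is a purely hypothetical sketch,
assuming bitstreams are stored in Glance with custom image properties
(the fpga_* property names and the discovery dict are invented):

    def find_bitstream(glance, fpga):
        """Pick a Glance image whose fpga_* properties match an FPGA
        discovered on the host, e.g.
        fpga = {'vendor': 'altera', 'model': 'arria10', 'aal_version': '4.1'}
        """
        filters = {'fpga_vendor': fpga['vendor']}
        for image in glance.images.list(filters=filters):
            # Image records from the v2 client behave like dicts here.
            if (image.get('fpga_model') == fpga['model'] and
                    image.get('fpga_aal_version') == fpga['aal_version']):
                return image
        return None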

All of that makes modelling the resource extremely complicated, contrary
to the CPU resource for example. I'd like to discuss how the goal of having
reprogrammable accelerators in OpenStack can be achieved. Ideally I'd
like to fit it into Jay and Chris's work on resource-*.

Looking forward to any comments :)

[1] 
http://www.intel.com/content/dam/doc/white-paper/quickassist-technology-aal-white-paper.pdf

-- 
Cheers,
Roman Dobosz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Proposing Anton Arefiev (aarefiev) for ironic-inspector-core

2016-04-05 Thread Imre Farkas

+1 from me. Thanks for your work Anton and congrats! ;-)

Imre


On 04/05/2016 12:24 PM, Dmitry Tantsur wrote:

Hi!

I'd like to propose Anton to the ironic-inspector core reviewers team.
His stats are pretty nice [1], he's making meaningful reviews and he's
pushing important things (discovery, now tempest).

Members of the current ironic-inspector-team and everyone interested,
please respond with your +1/-1. A lazy consensus will be applied: if
nobody objects by the next Tuesday, the change will be in effect.

Thanks

[1] http://stackalytics.com/report/contribution/ironic-inspector/60

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Juan Antonio Osorio
On Tue, Apr 5, 2016 at 2:45 PM, Fox, Kevin M  wrote:

> This sounds suspiciously like, "how do you get a secret to the instance to
> get a secret from the secret store" issue :)
>
Yeah, sounds pretty familiar. We were using the nova hooks mechanism for
this purpose, but it was deprecated recently. So bummer :/

>
> Nova instance user spec again?
>
> Thanks,
> Kevin
>
> --
> *From:* Juan Antonio Osorio
> *Sent:* Tuesday, April 05, 2016 4:07:06 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [TripleO] FreeIPA integration
>
>
>
> On Tue, Apr 5, 2016 at 11:36 AM, Steven Hardy  wrote:
>
>> On Sat, Apr 02, 2016 at 05:28:57PM -0400, Adam Young wrote:
>> > I finally have enough understanding of what is going on with Tripleo to
>> > reasonably discuss how to implement solutions for some of the main
>> > security needs of a deployment.
>> >
>> >
>> > FreeIPA is an identity management solution that can provide support for:
>> >
>> > 1. TLS on all network communications:
>> >A. HTTPS for web services
>> >B. TLS for the message bus
>> >C. TLS for communication with the Database.
>> > 2. Identity for all Actors in the system:
>> >   A.  API services
>> >   B.  Message producers and consumers
>> >   C.  Database consumers
>> >   D.  Keystone service users
>> > 3. Secure DNS (DNSSEC)
>> > 4. Federation Support
>> > 5. SSH Access control to Hosts for both undercloud and overcloud
>> > 6. SUDO management
>> > 7. Single Sign On for Applications running in the overcloud.
>> >
>> >
>> > The main pieces of FreeIPA are
>> > 1. LDAP (the 389 Directory Server)
>> > 2. Kerberos
>> > 3. DNS (BIND)
>> > 4. Certificate Authority (CA) server (Dogtag)
>> > 5. WebUI/Web Service Management Interface (HTTPD)
>> >
>> > Of these, the CA is the most critical.  Without a centralized CA, we
>> > have no reasonable way to do certificate management.
>> >
>> > Now, I know a lot of people have an allergic reaction to some, maybe
>> > all, of these technologies. They should not be required to be running
>> > in a development or testbed setup.  But we need to make it possible to
>> > secure an end deployment, and FreeIPA was designed explicitly for
>> > these kinds of distributed applications.  Here is what I would like
>> > to implement.
>> >
>> > Assuming that the Undercloud is installed on a physical machine, we
>> > want to treat the FreeIPA server as a managed service of the
>> > undercloud that is then consumed by the rest of the overcloud. Right
>> > now, there are conflicts for some ports (8080 used by both swift and
>> > Dogtag) that prevent a drop-in run of the server on the undercloud
>> > controller.  Even if we could deconflict, there is a possible battle
>> > between Keystone and the FreeIPA server on the undercloud.  So, while
>> > I would like to see the ability to run the FreeIPA server on the
>> > Undercloud machine eventually, I think a more realistic deployment is
>> > to build a separate virtual machine, parallel to the overcloud
>> > controller, and install FreeIPA there. I've been able to modify
>> > Tripleo Quickstart to provision this VM.
>>
>> IMO these services shouldn't be deployed on the undercloud - we only
>> support a single node undercloud, and atm it's completely possible to take
>> the undercloud down without any impact to your deployed cloud (other than
>> losing the ability to manage it temporarily).
>>
> This is fair enough, however, for CI purposes, would it be acceptable to
> deploy it there? Or where do you recommend we have it?
>
>>
>> These auth pieces all appear critical to the operation of the deployed
>> cloud, thus I'd assume you really want them independently managed
>> (probably in an HA configuration on multiple nodes)?
>>
>> So, I'd say we support one of:
>>
>> 1. Document that FreeIPA must exist, installed by existing non-TripleO
>> tooling
>>
>> 2. Support a heat template (in addition to overcloud.yaml) that can deploy
>> FreeIPA.
>>
>> I feel like we should do (1), as it fits better with the TripleO vision
>> (which is to deploy OpenStack), and it removes the need for us to maintain
>> a bunch of non-openstack stuff.
>>
>> The path I'm imagining is we have a documented integration with FreeIPA,
>> and perhaps some third-party CI, but we don't support deploying these
>> pieces directly via TripleO.
>
>
>> > I was also able to run FreeIPA in a container on the undercloud
>> > machine, but this is, I think, not how we want to migrate to a
>> > container based strategy.  It should be more deliberate.
>> >
>> >
>> > While the ideal setup would be to install the IPA layer first, and
>> > create service users in there, this produces a different install path
>> > between with-FreeIPA and without-FreeIPA. Thus, I suspect the right
>> > approach is to run the overcloud deploy, then "harden" the deployment
>> > with the FreeIPA steps.

[openstack-dev] [Smaug]- IRC Meeting today (04/05) - 1400 UTC SM

2016-04-05 Thread Saggi Mizrahi
Hi All,

We will hold our bi-weekly IRC meeting today (Tuesday, 04/05) at 1400
UTC in #openstack-meeting

Please review the proposed meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/smaug

Please feel free to add to the agenda any subject you would like to discuss.

Thanks,
Saggi
-
This email and any files transmitted and/or attachments with it are 
confidential and proprietary information of
Toga Networks Ltd., and intended solely for the use of the individual or entity 
to whom they are addressed.
If you have received this email in error please notify the system manager. This 
message contains confidential
information of Toga Networks Ltd., and is intended only for the individual 
named. If you are not the named
addressee you should not disseminate, distribute or copy this e-mail. Please 
notify the sender immediately
by e-mail if you have received this e-mail by mistake and delete this e-mail 
from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing 
or taking any action in reliance on
the contents of this information is strictly prohibited.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Hayes, Graham
On 02/04/2016 22:33, Adam Young wrote:
> I finally have enough understanding of what is going on with Tripleo to
> reasonably discuss how to implement solutions for some of the main
> security needs of a deployment.
>
>
> FreeIPA is an identity management solution that can provide support for:
>
> 1. TLS on all network communications:
>  A. HTTPS for web services
>  B. TLS for the message bus
>  C. TLS for communication with the Database.
> 2. Identity for all Actors in the system:
> A.  API services
> B.  Message producers and consumers
> C.  Database consumers
> D.  Keystone service users
> 3. Secure DNS (DNSSEC)
> 4. Federation Support
> 5. SSH Access control to Hosts for both undercloud and overcloud
> 6. SUDO management
> 7. Single Sign On for Applications running in the overcloud.
>
>
> The main pieces of FreeIPA are
> 1. LDAP (the 389 Directory Server)
> 2. Kerberos
> 3. DNS (BIND)
> 4. Certificate Authority (CA) server (Dogtag)
> 5. WebUI/Web Service Management Interface (HTTPD)
>



>
>
> There are a couple ongoing efforts that will tie in with this:
>
> 1. Designate should be able to use the DNS from FreeIPA.  That was the
> original implementation.

Designate cannot use FreeIPA - we haven't had a driver for it since
Kilo.

There have been various efforts since to support FreeIPA, but it
requires that it be the point of truth for DNS information, as does
Designate.

If FreeIPA supported the traditional Notify and Zone Transfer mechanisms
then we would be fine, but unfortunately it does not.

[1] Actually points out that the goal of FreeIPA's DNS integration
"... is NOT to provide general-purpose DNS server. Features beyond
easing FreeIPA deployment and maintenance are explicitly out of scope."

1 - http://www.freeipa.org/page/DNS#Goals
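
(For context, the model Designate expects from a backend is plain DNS
NOTIFY plus zone transfer, i.e. the backend acts as a secondary for zones
mastered by Designate -- roughly this BIND-style configuration, where
192.0.2.1 is a placeholder for Designate's mDNS address:)

    zone "example.com." {
        type slave;
        masters { 192.0.2.1 port 5354; };
        allow-notify { 192.0.2.1; };
    };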


> 2.  Juan Antonio Osorio  has been working on TLS everywhere.  The issue
> thus far has been Certificate management.  This provides a Dogtag server
> for Certs.
>
> 3. Rob Crittenden has been working on auto-registration of virtual
> machines with an Identity Provider upon launch.  This gives that efforts
> an IdM to use.
>
> 4. Keystone can make use of the Identity store for administrative users
> in their own domain.
>
> 5. Many of the compliance audits have complained about cleartext
> passwords in config files. This removes most of them.  MySQL supports
> X509 based authentication today, and there is Kerberos support in the
> works, which should remove the last remaining cleartext Passwords.
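
(As a concrete illustration of the X509 point -- the account name below
is just a placeholder:)

    -- Authenticate by client certificate instead of a cleartext password.
    CREATE USER 'keystone'@'%' REQUIRE X509;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';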
>
> I mentioned Centralized SUDO and HBAC.  These are both tools that may be
> used by administrators if so desired on the install. I would recommend
> that they be used, but there is no requirement to do so.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Design summit planning

2016-04-05 Thread Thomas Herve
Hi all,

We've been allocated 12 sessions for Austin. I started an etherpad
where you can put topics that you'd like to talk about:
https://etherpad.openstack.org/p/newton-heat-sessions. Please fill it
up, we'll discuss what we have at the meeting tomorrow. Maybe we'll
wait until next week to finalize the schedule.

Thanks,

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] -2'ing all patches on every gate breakage

2016-04-05 Thread Hayes, Graham
On 04/04/2016 23:20, Carl Baldwin wrote:
> On Mon, Apr 4, 2016 at 3:22 PM, Doug Wiegley
>  wrote:
>> I don’t know, -1 really means, “there is something wrong, the submitter
>> should fix it and clear the slate.”  Whereas -2 has two meanings.  The first
>> is “procedural block”, and the second is “f*** you.”
>>
>> I really don’t see a reason not to use the procedural block as a procedural
>> block. Are you not trusting the other cores to remove them or something?
>> It’s literally what it’s there for.
>
> I'm not complaining.  I've had plenty of these -2s and I understand
> the reason behind it.  But, I thought I'd chime in.
>
> I interpret a -2 on a patch as a procedural block because of something
> related to the patch.  It is awkward as a procedural block when it is
> being applied due to circumstances that have nothing to do with the
> patch itself and the only person who can remove the block is the
> person who applied it in the first place.  That person might get
> distracted, leave for the week-end, go on vacation, etc.
>
> Would it be nice if the project itself had an easy procedural block?
> A single switch that turns off entering the gate queue for the entire
> project?  Wouldn't it also be nice if the switch could be toggled by
> any one of a group responsible for it?  I think it would be nice but
> I'm not sure how it could be easily implemented.
>
> Carl

See my response earlier. If it is too "over-engineered" there are
variations that could work (e.g. a script with shared creds / some sort
of side repo that triggers a Jenkins job to run it) - I just suggested
a bot as it seemed easier.

On a side note, I have always thought that our overloaded use of -2 to
procedurally block things was a bit weird. A new label seems like a
much better idea.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Fox, Kevin M
This sounds suspiciously like the "how do you get a secret to the instance
to get a secret from the secret store" issue :)

Nova instance user spec again?

Thanks,
Kevin


From: Juan Antonio Osorio
Sent: Tuesday, April 05, 2016 4:07:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] FreeIPA integration



On Tue, Apr 5, 2016 at 11:36 AM, Steven Hardy wrote:
On Sat, Apr 02, 2016 at 05:28:57PM -0400, Adam Young wrote:
> I finally have enough understanding of what is going on with Tripleo to
> reasonably discuss how to implement solutions for some of the main security
> needs of a deployment.
>
>
> FreeIPA is an identity management solution that can provide support for:
>
> 1. TLS on all network communications:
>A. HTTPS for web services
>B. TLS for the message bus
>C. TLS for communication with the Database.
> 2. Identity for all Actors in the system:
>   A.  API services
>   B.  Message producers and consumers
>   C.  Database consumers
>   D.  Keystone service users
> 3. Secure DNS (DNSSEC)
> 4. Federation Support
> 5. SSH Access control to Hosts for both undercloud and overcloud
> 6. SUDO management
> 7. Single Sign On for Applications running in the overcloud.
>
>
> The main pieces of FreeIPA are
> 1. LDAP (the 389 Directory Server)
> 2. Kerberos
> 3. DNS (BIND)
> 4. Certificate Authority (CA) server (Dogtag)
> 5. WebUI/Web Service Management Interface (HTTPD)
>
> Of these, the CA is the most critical.  Without a centralized CA, we have no
> reasonable way to do certificate management.
>
> Now, I know a lot of people have an allergic reaction to some, maybe all, of
> these technologies. They should not be required to be running in a
> development or testbed setup.  But we need to make it possible to secure an
> end deployment, and FreeIPA was designed explicitly for these kinds of
> distributed applications.  Here is what I would like to implement.
>
> Assuming that the Undercloud is installed on a physical machine, we want to
> treat the FreeIPA server as a managed service of the undercloud that is then
> consumed by the rest of the overcloud. Right now, there are conflicts for
> some ports (8080 used by both swift and Dogtag) that prevent a drop-in run
> of the server on the undercloud controller.  Even if we could deconflict,
> there is a possible battle between Keystone and the FreeIPA server on the
> undercloud.  So, while I would like to see the ability to run the FreeIPA
> server on the Undercloud machine eventually, I think a more realistic
> deployment is to build a separate virtual machine, parallel to the overcloud
> controller, and install FreeIPA there. I've been able to modify Tripleo
> Quickstart to provision this VM.

IMO these services shouldn't be deployed on the undercloud - we only
support a single node undercloud, and atm it's completely possible to take
the undercloud down without any impact to your deployed cloud (other than
losing the ability to manage it temporarily).
This is fair enough, however, for CI purposes, would it be acceptable to deploy 
it there? Or where do you recommend we have it?

These auth pieces all appear critical to the operation of the deployed
cloud, thus I'd assume you really want them independently managed (probably
in an HA configuration on multiple nodes)?

So, I'd say we support one of:

1. Document that FreeIPA must exist, installed by existing non-TripleO
tooling

2. Support a heat template (in addition to overcloud.yaml) that can deploy
FreeIPA.

I feel like we should do (1), as it fits better with the TripleO vision
(which is to deploy OpenStack), and it removes the need for us to maintain
a bunch of non-openstack stuff.

The path I'm imagining is we have a documented integration with FreeIPA,
and perhaps some third-party CI, but we don't support deploying these
pieces directly via TripleO.

> I was also able to run FreeIPA in a container on the undercloud machine, but
> this is, I think, not how we want to migrate to a container based strategy.
> It should be more deliberate.
>
>
> While the ideal setup would be to install the IPA layer first, and create
> service users in there, this produces a different install path between
> with-FreeIPA and without-FreeIPA. Thus, I suspect the right approach is to
> run the overcloud deploy, then "harden" the deployment with the FreeIPA
> steps.

I think we should require the IPA layer to be installed first - I mean
isn't it likely in many (most?) production environments that these services
already exist?

This simplifies things, because then you just pass inputs from the existing
proven IPA environment in as a tripleo/heat environment file - same model
we already support for all kinds of vendor integration, SSL etc etc.

> The IdM team did just this last summer in preparing for the Tokyo summit,
> using Ansible and Packstack.  The Rippowam project
> 

[openstack-dev] [nova] Nova API sub-team meeting

2016-04-05 Thread Alex Xu
Hi,

We have the weekly Nova API meeting tomorrow. The meeting is being held
Wednesday at 1300 UTC and the IRC channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

