[openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-15 Thread Edmund Rhudy (BLOOMBERG/ 120 PARK)
Hi all,

I'd like to follow up on a few discussions that took place last week in Boston, 
specifically in the Compute Instance/Volume Affinity for HPC session 
(https://etherpad.openstack.org/p/BOS-forum-compute-instance-volume-affinity-hpc).

In this session, the discussions all trended towards adding more complexity to 
the Nova UX, like adding --near and --distance flags to the nova boot command 
to have the scheduler figure out how to place an instance near some other 
resource, adding more fields to flavors or flavor extra specs, etc.
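
Just to make that concrete, the kind of invocation being floated would
presumably look something like this (purely hypothetical - neither flag
exists today, and the exact spelling wasn't settled in the room):

    nova boot --flavor m1.large --image ubuntu-16.04 \
        --near volume=<volume-uuid> mytest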

My question is: should we really be asking how to add more fine-grained 
complications to the OpenStack user experience in order to support what 
seemed like a pretty narrow use case?

The only use case I remember hearing was an operator not wanting users to be 
able to launch an instance in a particular Nova AZ and then find themselves 
unable to attach a volume from a different Cinder AZ, or to boot an instance 
from a volume in the wrong place and get a failure to launch. This seems okay 
to me, though - either the user has to rebuild their instance in the right 
place or Nova will just return an error during instance build. Is it worth 
adding all sorts of convolutions to Nova to avoid the possibility that 
somebody might have to build instances a second time?
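
For what it's worth, there is already a nova.conf knob that makes this 
contract explicit without any new scheduler smarts - if I'm remembering the 
option correctly, it's:

    [cinder]
    # refuse to attach volumes that live in a different AZ than the instance
    cross_az_attach = False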

The feedback I get from my cloud-experienced users most frequently is that they 
want to know why the OpenStack user experience in the storage area is so 
radically different from AWS, which is what they all have experience with. I 
don't really have a great answer for them, except to admit that in our clouds 
they just have to know what combination of flavors and Horizon options or BDM 
structure is going to get them the right tradeoff between storage durability 
and speed. I was pleased with how the session on expanding Cinder's role for 
Nova ephemeral storage went because of the suggestion of reducing Nova 
imagebackend's role to just the file driver and having Cinder take over for 
everything else. That, to me, is the kind of simplification that's a win-win 
for both devs and ops: devs get to radically simplify a thorny part of the Nova 
codebase, storage driver development only has to happen in Cinder, and 
operators get a storage workflow that's easier to explain to users.

Am I off base in not wanting to add more options to nova boot and more logic 
to the scheduler? I know the AWS comparison is a little North America-centric 
(it came up a few times at the summit that EMEA/APAC operators may have very 
different ideas of a normal cloud workflow), but I am striving to give my 
users a private cloud that I can define for them in terms of AWS workflows 
and vocabulary. AWS by design restricts where your volumes can live (you can 
use instance store volumes, whose data is gone when the instance is stopped 
or terminated, or you can put EBS volumes in a particular AZ and mount them 
on instances in that AZ), and I don't think that's a bad thing, because it 
makes it easy for users to understand the contract they're getting from the 
platform when it comes to where their data is stored and what instances they 
can attach it to.
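
In OpenStack terms, the same contract is easy enough to state to users today, 
something along these lines (names invented for the example):

    openstack volume create --availability-zone az1 --size 100 data-vol
    openstack server create --availability-zone az1 --flavor m1.large \
        --image ubuntu-16.04 --nic net-id=<net-uuid> myserver
    # volume and instance live in the same AZ, so the attach just works
    openstack server add volume myserver data-vol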


[openstack-dev] [kolla] Problem in Ubuntu check when building Kolla base image

2017-01-12 Thread Edmund Rhudy (BLOOMBERG/ 120 PARK)
Here at Bloomberg, we're evaluating Kolla to replace our in-house OpenStack 
deployment system, and one of our requirements is that we be able to do our 
builds without touching the Internet - everything needs to come from locally 
hosted repositories. A few weeks ago, I pushed up a PR 
(https://review.openstack.org/#/c/414639/) to start working on the ability to 
build Kolla containers while disconnected from the Internet. It doesn't provide 
complete coverage by any means (though that is the goal, to ensure that every 
container can be built offline for every base OS image), but I wanted to use it 
as a starter for further discussion, as well as to reduce the amount of stuff 
we're carrying as local changes on top of upstream Kolla.

That being said, when I pushed the PR up, it failed the Ubuntu checks. I looked 
into it, and here's what I found:

1) There is a bug in Kolla (https://bugs.launchpad.net/kolla/+bug/1633187) that 
causes it to ignore any custom sources.list provided when building 
Debian/Ubuntu containers. You can supply one, and it will be copied into the 
build context, but because of 
http://git.openstack.org/cgit/openstack/kolla/tree/docker/base/Dockerfile.j2#n215,
only the sources.list files that come with Kolla would be used anyway. 
Because using local mirrors requires providing a custom sources.list, my 
change necessarily fixes that bug as well.

2) The Ubuntu gate checks provide a custom sources.list which redirects the 
container away from Canonical's mirrors and onto OSIC-hosted mirrors. The OSIC 
mirror, for whatever reason, is unsigned. In current master Kolla, this 
sources.list just isn't used, so checks that rebuild the base image will always 
use archive.ubuntu.com, because that's the mirror that's specified in 
docker/base/sources.list.ubuntu. Take for example the output of another PR 
https://review.openstack.org/#/c/411154/ - if you examine 
http://logs.openstack.org/54/411154/12/check/gate-kolla-dsvm-build-ubuntu-binary-ubuntu-xenial-nv/26627d8/console.html.gz
 (from the very top), you can see that it's downloading packages from 
archive.ubuntu.com as part of the base container build, even though 
http://logs.openstack.org/54/411154/12/check/gate-kolla-dsvm-build-ubuntu-binary-ubuntu-xenial-nv/26627d8/logs/kolla_configs/kolla/sources.list.txt.gz
 is supplied as sources.list.

3) When I fixed the bug described in #1, it meant the unsigned OSIC mirror 
specified in sources.list suddenly started getting used, and the base container 
build now fails because the container build process does not allow 
unauthenticated packages to be installed.
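
For context, the custom sources.list we're trying to feed into these builds 
is nothing exotic - just the stock Ubuntu entries pointed at an internal 
mirror (hostname invented for the example):

    deb http://mirror.internal.example.com/ubuntu xenial main universe
    deb http://mirror.internal.example.com/ubuntu xenial-updates main universe
    deb http://mirror.internal.example.com/ubuntu xenial-security main universe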

How can this be fixed? There are a few options:

1) Remove the sources.list from the current gate configurations - the way 
things are currently set up, the Ubuntu gates actually _depend_ on the presence 
of a bug in Kolla to function if they ever need to build the base Kolla image. 
This is not good.

2) Get the OSIC Ubuntu mirror signed. I don't know why it's unsigned; I feel 
like it should be a straight clone of Canonical's repos so that the signing 
key baked into the Ubuntu base image would just work, but presumably it's 
this way for a reason?

3) Supply custom apt configuration in the gate to allow installing 
unauthenticated packages in the containers (ugly; a sketch of what that would 
mean is below).
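
To spell out option 3 (sketching from memory, so treat the exact file name as 
illustrative), it would amount to baking an apt configuration drop-in into 
the gate's build context along the lines of:

    # /etc/apt/apt.conf.d/99-gate-allow-unauthenticated
    APT::Get::AllowUnauthenticated "true";

which turns off exactly the check that is currently failing.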

Would somebody with knowledge of the Kolla testing infrastructure be so kind as 
to comment? I brought this up in IRC a few times but could not get much 
attention on it.


Re: [openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-05-12 Thread Edmund Rhudy (BLOOMBERG/ 120 PARK)
I flubbed my description of what I had in mind - I was thinking of GitHub 
personal access tokens as a model, _not_ OAuth tokens. I believe the normal 
excuse is "inadequate caffeine".

From: dolph.math...@gmail.com 
Subject: Re: [openstack-dev] [horizon][keystone] Getting Auth Token from 
Horizon when using Federation

On Thu, May 12, 2016 at 8:10 AM Edmund Rhudy (BLOOMBERG/ 120 PARK) 
<erh...@bloomberg.net> wrote:

+1 on desiring OAuth-style tokens in Keystone.

OAuth 1.0a has been supported by keystone since the havana release, you just 
have to turn it on and use it:

  http://docs.openstack.org/developer/keystone/configuration.html#oauth1-1-0a 
-- 
-Dolph



Re: [openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-05-12 Thread Edmund Rhudy (BLOOMBERG/ 120 PARK)
+1 on desiring OAuth-style tokens in Keystone. The use cases that come up 
here are people wanting to be able to execute jobs that use the APIs 
(Jenkins, Terraform, Vagrant, etc.) without having to save their personal 
credentials in plaintext somewhere, and people wanting to be able to 
associate credentials with a project instead of a specific person, so that 
if someone leaves or rotates their password it doesn't blow up their team's 
carefully crafted automation.

We can sort of work around it with LDAP service accounts, as mentioned 
previously, but the concerns there are the lack of speedy revocability in the 
event of a compromise, and the fact that the service accounts could possibly 
be used to get to non-OpenStack places until they get shut down. One thought 
I had for keeping the auth domain constrained to OpenStack alone was using 
the EC2 API: at least that way you're not saving LDAP passwords on disk, and 
the access keys are useless beyond that particular Keystone installation. But 
you run into impedance mismatches between the Nova API and the AWS EC2 API, 
and we'd like people to use the native OpenStack APIs. (Turns out the notion 
of using AWS's EC2 API to talk to a private cloud is strange to people not 
steeped in cloudy things.)
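
For clarity, the EC2-style keys I mean are the ones Keystone itself will 
mint, e.g.:

    openstack ec2 credentials create
    # prints an access/secret key pair scoped to your user and project;
    # it can be revoked later with: openstack ec2 credentials delete <access-key>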

From: openstack-dev@lists.openstack.org 
Subject: Re: [openstack-dev] [horizon][keystone] Getting Auth Token from 
Horizon when using Federation

Hi Dolph,

On Mon, 2016-04-18 at 17:50 -0500, Dolph Mathews wrote:
> 
> On Mon, Apr 18, 2016 at 11:34 AM, Martin Millnert wrote:
> Hi,
> 
> we're deploying Liberty (soon Mitaka) with heavy reliance on the SAML2
> Federation system by Keystone where we're a Service Provider (SP).
> 
> The problem in this situation is getting a token for direct API
> access.(*)
> 
> There are conceptually two methods to use the CLI:
>  1) Modify one's (each customer -- in our case O(100)) IdP to add
> support for a feature called ECP(**), and then use keystoneauth with the
> SAML2 plugin,
>  2) Go to (for example) "Access & Security / API Access / View
> Credentials" in Horizon, and check out a token from there.
> 
> 
> With a default configuration, this token would only last a short
> period of time, so this would be incredibly repetitive (and thus
> tedious).

Indeed.

> So, I assume you mean some sort of long-lived API tokens?

Right.

> API tokens, including keystone's UUID, PKI, PKIZ, and Fernet tokens
> are all bearer tokens, so we force a short lifetime by default,
> because there are always multiple parties capable of compromising the
> integrity of a token. OAuth would be a counter example, where OAuth
> access tokens can (theoretically) live forever.

This does sound very interesting. As long as the end user gets something
useful to plug into the openstack auth libraries/APIs, we're home free
(modulo security considerations, etc).

> 2) isn't implemented. 1) is a complete blocker for many
> customers.
> 
> Are there any principled and fundamental reasons why 2 is not doable?
> What I imagine needs to happen:
>   A) User is authenticated (see *) in Horizon,
>   B) User uses said authentication (token) to request another token
> from Keystone, which is displayed under the "API Access" tab on
> "Access & Security".
> 
> 
> The (token) here could be an OAuth access token.

Will look into this (also as per our discussion in Austin).

The one issue that has appeared in our continued discussions at home is the 
contrast with "service user accounts", which seem relatively prevalent/common 
among deployers today and which basically use a username/password as the API 
key credentials. Regarding the authZ of the issued token:

If AdminNameless is Domain Admin in their domain, won't their OAuth
access token yield keystone tokens with the same authZ as they otherwise
have?

My presumptive answer being 'yes' brought me to the realization that,
if one wants to avoid going the way of "service user accounts" but still
reduce authZ, one would like to be able to get OAuth access tokens for a
specific project, with a specific role (e.g. "user", or [project-]admin)
and the authZ this entails. This would keep the traceability, which is
one of the main issues with non-personal accounts.

How feasible is this last bit?


In general, the primary use case is:
 - I, as a user of openstack on my personal computer, retrieve a token to
manage openstack client operations without needing to store my federation 
username/password in local config (or type the password in on the keyboard).

An extended definition of this use case being:
 - I, as a user of openstack, can provision an automated system with these
credentials, which can continue to operate as an openstack client for a
very long