[openstack-dev] [neutron-fwaas] Re: Heat Support for FWaaSv2

2017-05-18 Thread Vikash Kumar
On Fri, May 19, 2017 at 10:54 AM, Vikash Kumar <
vikash.ku...@oneconvergence.com> wrote:

> Hi Team,
>
>    Are we planning Heat support for FWaaS v2? I see it's missing.
>
> --
> Regards,
> Vikash
>



-- 
Regards,
Vikash
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][openstack-ansible] Moving on

2017-05-18 Thread Amy
We'll miss you, Steve. :(

Amy (spotz)

Sent from my iPhone

> On May 18, 2017, at 8:55 PM, Steve Lewis  wrote:
> 
> It is clear to me now that I won't be able to work on OpenStack as a part of 
> my next day job, wherever that ends up being. As such, I’ll no longer be able 
> to invest the time and energy required to maintain my involvement in the 
> community. It's time to resign my role as a core reviewer, effective 
> immediately.
> 
> Thanks for all the fish.
> -- 
> SteveL
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM Heartbeat / Healthcheck Monitoring

2017-05-18 Thread Vikash Kumar
Hi Greg,

Please include my email in this spec as well. We are also dealing with HA
of virtual instances (especially for vendors) and will participate.

On Thu, May 18, 2017 at 11:33 PM, Waines, Greg 
wrote:

> Yes I am good with writing spec for this in masakari-spec.
>
>
>
> Do you use gerrit for this git ?
>
> Do you have a template for your specs ?
>
>
>
> Greg.
>
>
>
>
>
>
>
> *From: *Sam P 
> *Reply-To: *"openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
> *Date: *Thursday, May 18, 2017 at 1:51 PM
> *To: *"openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM
> Heartbeat / Healthcheck Monitoring
>
>
>
> Hi Greg,
>
> Thank you Adam for the follow-up.
>
> This is a new feature for masakari-monitors, and I think Masakari can
> accommodate it there. From the implementation perspective, it is not
> that hard to do. However, as you can see in our Boston presentation,
> Masakari will replace its monitoring parts (which is masakari-monitors)
> with nova-host-alerter, **-process-alerter, and **-instance-alerter
> (the ** part is not defined yet..:p). Therefore, I would like to capture
> this specification and make sure we do not miss anything in the
> transformation.
>
> Does it make sense to write a simple spec for this in masakari-spec [1],
> so we can discuss the requirements and how to implement it?
>
> [1] https://github.com/openstack/masakari-specs
>
>
>
> --- Regards,
>
> Sampath
>
>
>
>
>
>
>
> On Thu, May 18, 2017 at 2:29 AM, Adam Spiers  wrote:
>
> I don't see any reason why masakari couldn't handle that, but you'd
> have to ask Sampath and the masakari team whether they would consider
> that in scope for their roadmap.
>
>
>
> Waines, Greg  wrote:
>
>
>
> Sure.  I can propose a new user story.
>
> And then are you thinking of including this user story in the scope of
> what masakari would be looking at?
>
> Greg.
>
>
>
>
>
> From: Adam Spiers 
> Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
> Date: Wednesday, May 17, 2017 at 10:08 AM
> To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat /
> Healthcheck Monitoring
>
>
>
> Thanks for the clarification Greg.  This sounds like it has the
> potential to be a very useful capability.  May I suggest that you
> propose a new user story for it, along similar lines to this existing
> one?
>
> http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html
>
>
>
> Waines, Greg  wrote:
>
> Yes that’s correct.
> VM Heartbeating / Health-check Monitoring would introduce intrusive /
> white-box type monitoring of VMs / Instances.
>
> I realize this is somewhat in the gray-zone of what a cloud should be
> monitoring or not, but I believe it provides an alternative for
> Applications deployed in VMs that do not have an external
> monitoring/management entity like a VNF Manager in the MANO architecture.
> And even for VMs with VNF Managers, it provides a highly reliable
> alternate monitoring path that does not rely on Tenant Networking.
>
> You’re correct, that VM HB/HC Monitoring would leverage
> https://wiki.libvirt.org/page/Qemu_guest_agent
> that would require the agent to be installed in the images for talking
> back to the compute host.
> ( there are other examples of similar approaches in openstack ... the
> murano-agent for installation, the swift-agent for object store management )
> Although here, in the case of VM HB/HC Monitoring, via the QEMU Guest
> Agent, the messaging path is internal thru a QEMU virtual serial device.
> i.e. a very simple interface with very few dependencies ... it’s up and
> available very early in VM lifecycle and virtually always up.
>
>
>
> Wrt failure modes / use-cases
>
> · a VM’s response to a Heartbeat Challenge Request can be as simple as
> just ACK-ing, this alone allows for detection of:
>
>   o  a failed or hung QEMU/KVM instance, or
>   o  a failed or hung VM’s OS, or
>   o  a failure of the VM’s OS to schedule the QEMU Guest Agent daemon, or
>   o  a failure of the VM to route basic IO via linux sockets.
>
> · I have had feedback that this is similar to the virtual hardware
> watchdog of QEMU/KVM
> ( https://libvirt.org/formatdomain.html#elementsWatchdog )
>
> · However, the VM Heartbeat / Health-check Monitoring
>
>   o  provides a higher-level (i.e. application-level) heartbeating
>
>      §  i.e. if the Heartbeat requests are being 

[openstack-dev] [glance][openstack-ansible] Moving on

2017-05-18 Thread Steve Lewis
It is clear to me now that I won't be able to work on OpenStack as a part
of my next day job, wherever that ends up being. As such, I’ll no longer be
able to invest the time and energy required to maintain my involvement in
the community. It's time to resign my role as a core reviewer, effective
immediately.

Thanks for all the fish.
-- 
SteveL
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Zane Bitter

On 18/05/17 09:23, Monty Taylor wrote:


But think of the following use cases:

As a user, I want to make an API key that I'm going to use for general
automation just like I use my Password auth plugin based user account
today. I want it to be able to do everything I can do today - but I
value the revocation features.

As a user, I want to make an API key that can only upload content to
swift. I don't want to have to list every possible other API call.


What if we think about it like this:

For step one:

- A User creates an API Key in a Project. It will be a blacklist Key.
- That API Key is created with identical role assignments to the User
that created it.
- The role assignment clone is done by keystone and is not tied to the
User's ability to perform role assignments
- All API Keys are hardcoded in keystone to not be able to do
(POST,DELETE) /projects/{project_id}/api-key
- All API Keys are hardcoded in keystone to not be able to do
(POST,PATCH,DELETE) /users
- All API Keys are hardcoded in keystone to not be able to do
(POST,PATCH,DELETE) /projects/{project_id}

For step two:
- A User creates a whitelist API Key. It can't do ANYTHING by default,
no further action is needed on API key restrictions.
- A User creates a blacklist API Key. All API Key restrictions from step
one are added as initial blacklist filters.

The change in step two would allow a User to decide that they want to
opt-in to letting an API Key do *dangerous* things - but it would
require explicit action on their part ... even if they have requested a
blacklist Key.

We should also potentially add a policy check that would disallow a User
from removing the API Key blacklist exclusions, since it's possible and
reasonable that an Admin does not want a User to be able to create keys
that can manage keys.
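
As a purely illustrative toy (none of these names exist in keystone today; it only sketches the idea behind the hardcoded restrictions listed above), the blacklist could be thought of as a table of (methods, path pattern) pairs checked on every request made with an API Key:

    import re

    # Hypothetical hardcoded restrictions mirroring the list above.
    API_KEY_BLACKLIST = [
        ("POST|DELETE", r"^/projects/[^/]+/api-key$"),
        ("POST|PATCH|DELETE", r"^/users(/.*)?$"),
        ("POST|PATCH|DELETE", r"^/projects/[^/]+$"),
    ]

    def api_key_may_call(method, path):
        """Return False if the (method, path) pair is blacklisted for API Keys."""
        for methods, pattern in API_KEY_BLACKLIST:
            if method in methods.split("|") and re.match(pattern, path):
                return False
        return True

    print(api_key_may_call("DELETE", "/projects/abc123/api-key"))  # False
    print(api_key_may_call("GET", "/servers"))                     # True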


I'd encourage everyone to read this excellent blog post on how it works 
in AWS:


http://start.jcolemorrison.com/aws-iam-policies-in-a-nutshell/

TL;DR: a policy document contains a Principal, an Action, a Resource and 
a Condition (like e.g. validity time). You can attach this policy 
_either_ to an IAM account (i.e. a User or Role - 'Role' being the 
equivalent of an auto-provisioned API key), in which case that account 
is assumed to be the Principal, _or_ to a resource (e.g. an S3 bucket), 
in which case that is assumed to be the Resource. AWS services 
themselves can also be Principals. IIUC access is default-deny and you 
open up individual stuff with an "Effect": "Allow" rule, but you can open 
up everything by setting "Action": "*" and then blacklist stuff by 
adding "Effect": "Deny" rules.
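
For anyone who hasn't seen one of these policy documents, here is a small illustrative example of that shape (the specific actions and the condition are made up for the example; the Principal is implied by whichever account the policy is attached to):

    import json

    # Open everything up with Action "*", then deny a couple of specific
    # actions, with a time-based Condition limiting the Deny statement.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "*", "Resource": "*"},
            {
                "Effect": "Deny",
                "Action": ["iam:*", "s3:DeleteBucket"],
                "Resource": "*",
                "Condition": {
                    "DateGreaterThan": {"aws:CurrentTime": "2017-06-01T00:00:00Z"}
                },
            },
        ],
    }
    print(json.dumps(policy, indent=2))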


When we're designing the initial API we need to keep in mind that the 
next stage will require a comparable level of sophistication to this.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread aalvarez
Yes, but doesn't Pecan allow using a development server (pecan serve) that
accepts interface and port options? I thought this would be the
test/development server Gnocchi would use.



--
View this message in context: 
http://openstack.10931.n7.nabble.com/gnocchi-Running-Gnocchi-API-in-specific-interface-tp135004p135081.html
Sent from the Developer mailing list archive at Nabble.com.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-18 Thread Adrian Turjak
On 19 May 2017 11:43 am, Curtis wrote:
> On Thu, May 18, 2017 at 4:13 PM, Adrian Turjak wrote:
> Hello fellow OpenStackers,
>
> For the last while I've been looking at options for multi-region
> multi-master Keystone, as well as multi-master for other services I've
> been developing and one thing that always came up was there aren't many
> truly good options for a true multi-master backend. Recently I've been
> looking at Cockroachdb and while I haven't had the chance to do any
> testing I'm curious if anyone else has looked into it. It sounds like
> the perfect solution, and if it can be proved to be stable enough it
> could solve a lot of problems.
>
> So, specifically in the realm of Keystone, since we are using sqlalchemy
> we already have Postgresql support, and since Cockroachdb does talk
> Postgres it shouldn't be too hard to back Keystone with it. At that
> stage you have a Keystone DB that could be multi-region, multi-master,
> consistent, and mostly impervious to disaster. Is that not the holy
> grail for a service like Keystone? Combine that with fernet tokens and
> suddenly Keystone becomes a service you can't really kill, and can
> mostly forget about.
>
> I'm welcome to being called mad, but I am curious if anyone has looked
> at this. I'm likely to do some tests at some stage regarding this,
> because I'm hoping this is the solution I've been hoping to find for
> quite a long time.
> I was going to take a look at this a bit myself, just try it out. I
> can't completely speak for the Fog/Edge/Massively Distributed working
> group in OpenStack, but I feel like this might be something they look
> into.
>
> For standard multi-site I don't know how much it would help, say if
> you only had a couple or three clouds, but more than that maybe this
> starts to make sense. Also running Galera has gotten easier but still
> not that easy.

Multi-site with a shared Keystone was my goal because auth has to be
shared in all regions for us. Fernet solves a part of it, but user data,
roles, etc. also need to be replicated if we want a Keystone running in
each region. That's where CockroachDB could prove useful.

> I had thought that the OpenStack community was deprecating Postgres
> support though, so that could make things a bit harder here (I might
> be wrong about this).

I really hope not, because that would take CockroachDB off the table
entirely (unless they add MySQL support), and it may prove to be a great
option overall once it is known to be stable and has been tested in
larger-scale setups. I remember reading about the possibility of
deprecating Postgres, but there are people using it in production, so I
assumed we didn't go down that path. Would be good to have someone
confirm.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Boston Forum session recap - claims in the scheduler (or conductor)

2017-05-18 Thread Matt Riedemann
The etherpad for this session is here [1]. The goal for this session was 
to inform operators and get feedback on the plan for what we're doing 
with moving claims from the computes to the control layer (scheduler or 
conductor).


We mostly talked about retries, which also came up in the cells v2 
session that Dan Smith led [2] and which he will recap later.


Without getting into too many details, in the cells v2 session we came 
to a compromise on build retries and said that we could pass hosts down 
to the cell so that the cell-level conductor could retry if needed (even 
though we expect doing claims at the top will fix the majority of 
reasons you'd have a reschedule in the first place).


During the claims in the scheduler session, a new wrinkle came up which 
is the hosts that the scheduler returns to the top-level conductor may 
be in different cells. So if we have two cells, A and B, with hosts x 
and y in cell A and host z in cell B, we can't send z to A for retries, 
or x or y to B for retries. So we need some kind of post-filter/weigher 
filtering such that hosts are grouped by cell and then they can be sent 
to the cells for retries as necessary.
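
As a rough illustration of that post-filter step (purely a sketch: the cell and host names are made up, and the real scheduler deals in richer objects than bare strings):

    from collections import defaultdict

    # Hypothetical (cell, host) pairs as returned by the scheduler.
    selected_hosts = [("cell-A", "x"), ("cell-A", "y"), ("cell-B", "z")]

    # Group hosts by cell so each cell-level conductor only ever receives
    # alternates from its own cell, keeping retries from crossing cells.
    hosts_by_cell = defaultdict(list)
    for cell, host in selected_hosts:
        hosts_by_cell[cell].append(host)

    for cell, hosts in sorted(hosts_by_cell.items()):
        print("send %s to the conductor for %s" % (hosts, cell))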


There was also some side discussion asking if we somehow regressed 
pack-first strategies by using Placement in Ocata. John Garbutt and Dan 
Smith have the context on this (I think) so I'm hoping they can clarify 
if we really need to fix something in Ocata at this point, or is this 
more of a case of closing a loop-hole?


We also spent a good chunk of the session talking about overhead 
calculations for memory_mb and disk_gb which happens in the compute and 
on a per-hypervisor basis. In the absence of automating ways to adjust 
for overhead, our solution for now is operators can adjust reserved host 
resource values (vcpus, memory, disk) via config options and be 
conservative or aggressive as they see fit. Chris Dent and I also noted 
that you can adjust those reserved values via the placement REST API but 
they will be overridden by the config in a periodic task - which may be 
a bug, if not at least a surprise to an operator.


We didn't really get into this during the forum session, but there are 
different opinions within the nova dev team on how to do claims in the 
controller services (conductor vs scheduler). Sylvain Bauza has a series 
which uses the conductor service, and Ed Leafe has a series using the 
scheduler. More on that in the mailing list [3].


Next steps are going to be weighing both options between Sylvain and Ed, 
picking a path and moving forward, as we don't have a lot of time to sit 
on this fence if we're going to get it done in Pike.


As a side request, it would be great if companies that have teams doing 
performance and scale testing could help out and compare before (Ocata) 
and after (Pike with claims in the controller) results, because we 
eventually want to deprecate the caching scheduler but that currently 
outperforms the filter scheduler at scale because of the retries 
involved when using the filter scheduler, and which we expect doing 
claims at the top will fix.


[1] 
https://etherpad.openstack.org/p/BOS-forum-move-claims-from-compute-to-scheduler
[2] 
https://etherpad.openstack.org/p/BOS-forum-cellsv2-developer-community-coordination

[3] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116949.html

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-18 Thread Joshua Harlow

Chris Friesen wrote:

On 05/16/2017 10:45 AM, Joshua Harlow wrote:

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock



And always get a write lock.

It is a slightly different way of getting those locks (via a context
manager)
but the implementation underneath is a deque; so fairness should be
assured in
FIFO order...


I'm going ahead and doing this. Your docs for fasteners don't actually
say that lock.ReaderWriterLock.write_lock() provides fairness. If you're
going to ensure that stays true it might make sense to document the fact.


Sounds great, I was starting to but then got busy with other stuff :-P



Am I correct that fasteners.InterProcessLock is basically as fair as the
underlying OS-specific lock? (Which should be reasonably fair except for
process scheduler priority.)


Yup that IMHO would be fair, it's just fcntl under the covers (at least 
for linux). Though from what I remember at 
https://github.com/harlowja/fasteners/issues/26#issuecomment-253543912 
the lock class there seemed a little nicer (though more complex). That 
guy I think was going to propose some kind of merge, but that never 
seemed to appear.





Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-18 Thread Matt Riedemann
I just wanted to blurt this out since it hit me a few times at the 
summit, and see if I'm misreading the rooms.


For the last few years, Nova has pushed back on adding orchestration to 
the compute API, and even defined a policy for it since it comes up so 
much [1]. The stance is that the compute API should expose capabilities 
that a higher-level orchestration service can stitch together for a more 
fluid end user experience.


One simple example that comes up time and again is allowing a user to 
pass volume type to the compute API when booting from volume such that 
when nova creates the backing volume in Cinder, it passes through the 
volume type. If you need a non-default volume type for boot from volume, 
the way you do this today is first create the volume with said type in 
Cinder and then provide that volume to the compute API when creating the 
server. However, people claim that is bad UX or hard for users to 
understand, something like that (at least from a command line, I assume 
Horizon hides this, and basic users should probably be using Horizon 
anyway right?).


While talking about claims in the scheduler and a top-level conductor 
for cells v2 deployments, we've talked about the desire to eliminate 
"up-calls" from the compute service to the top-level controller services 
(nova-api, nova-conductor and nova-scheduler). Build retries is one such 
up-call. CERN disables build retries, but others rely on them, because 
of how racy claims in the computes are (that's another story and why 
we're working on fixing it). While talking about this, we asked, "why 
not just do away with build retries in nova altogether? If the scheduler 
picks a host and the build fails, it fails, and you have to 
retry/rebuild/delete/recreate from a top-level service."


But during several different Forum sessions, like user API improvements 
[2] but also the cells v2 and claims in the scheduler sessions, I was 
hearing about how operators only wanted to expose the base IaaS services 
and APIs and end API users wanted to only use those, which means any 
improvements in those APIs would have to be in the base APIs (nova, 
cinder, etc). To me, that generally means any orchestration would have 
to be baked into the compute API if you're not using Heat or something 
similar.


Am I missing the point, or is the pendulum really swinging away from 
PaaS layer services which abstract the dirty details of the lower-level 
IaaS APIs? Or was this always something people wanted and I've just 
never made the connection until now?


[1] https://docs.openstack.org/developer/nova/project_scope.html#api-scope
[2] 
https://etherpad.openstack.org/p/BOS-forum-openstack-user-api-improvements


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Jeremy Stanley
On 2017-05-18 18:04:35 -0400 (-0400), Paul Belanger wrote:
[...]
> if we decide to publish to docker, I don't think we'd push
> directly. Maybe push to our docker registry then mirror to docker
> hub. That is something we can figure out a little later.
[...]

Ideally by iterating on https://review.openstack.org/447524 where
the details for what that would look like are being hashed out.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Boston Forum session recap - instance/volume affinity for HPC

2017-05-18 Thread Matt Riedemann
The etherpad for this session is here [1]. This was about discussing 
ways to achieve co-location or affinity for VMs and volumes for 
high-performance, and was spurred by an older dev list discussion 
(linked in the etherpad).


This quickly grew into side discussions and it became apparent that at a 
high level we were talking about complicated solutions looking for a 
problem. That is also touched on a bit after the session in the dev ML [2].


The base use case is a user wants their server instance and volume 
located as close to each other as possible, ideally on the same compute 
host.


We talked about ways to model a sort of "distance" attribute between 
resource providers in an aggregate relationship (in the placement sense 
of 'aggregate', not compute host aggregates in nova). This distance or 
nearness idea led down a path for how you define distance in a cloud, 
i.e. does 'near' mean the same host or rack or data center in a 
particular cloud? How are these values defined - would they be custom 
per cloud and if so, how is that discoverable/inter-operable for an end 
API user? It was noted that flavors aren't inter-operable either really, 
at least not by name. Jay Pipes has an older spec [3] about generic 
scheduling which could replace server groups, so this could maybe fall 
into that.


When talking about this there are also private cloud biases, i.e. things 
people are willing to tolerate or expose to their users because they are 
running a private cloud. Those same things don't all work in a public 
cloud, e.g. mapping availability zones one-to-one for cinder-volume and 
nova-compute on the same host when you have hundreds of thousands of hosts.


Then there are other questions about if/how people have already solved 
this using things like flavors with extra specs and host aggregates and 
the AggregateInstanceExtraSpecsFilter, or setting 
[cinder]cross_az_attach=False in nova.conf on certain hosts. For 
example, setup host aggregates with nova-compute and cinder-volume 
running on the same host, define flavors with extra specs that match the 
host aggregate metadata, and then charge more for those flavors as your 
HPC type. Or, can we say, use Ironic?
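
As a rough sketch of how that existing matching works (simplified: the real logic lives in nova's AggregateInstanceExtraSpecsFilter, and the metadata key/value used here is made up):

    # Toy model of aggregate metadata set by the operator and a flavor's
    # scoped extra spec that must match it.
    aggregate_metadata = {"storage": "local_cinder"}
    flavor_extra_specs = {"aggregate_instance_extra_specs:storage": "local_cinder"}

    def host_passes(agg_meta, extra_specs):
        """A host passes if every scoped extra spec matches its aggregate metadata."""
        scope = "aggregate_instance_extra_specs:"
        for key, value in extra_specs.items():
            if key.startswith(scope) and agg_meta.get(key[len(scope):]) != value:
                return False
        return True

    print(host_passes(aggregate_metadata, flavor_extra_specs))  # True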


It's clear that we don't have a good end-user story for this 
requirement, and so I think next steps for this are going to involve 
working with the public cloud work group [4] and/or product work group 
[5] (hopefully those two groups could work together here) on defining 
the actual use cases and what the end user experience looks like.


[1] 
https://etherpad.openstack.org/p/BOS-forum-compute-instance-volume-affinity-hpc

[2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116694.html
[3] https://review.openstack.org/#/c/183837/
[4] https://wiki.openstack.org/wiki/PublicCloudWorkingGroup
[5] https://wiki.openstack.org/wiki/ProductTeam

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-18 Thread Curtis
On Thu, May 18, 2017 at 4:13 PM, Adrian Turjak  wrote:
> Hello fellow OpenStackers,
>
> For the last while I've been looking at options for multi-region
> multi-master Keystone, as well as multi-master for other services I've
> been developing and one thing that always came up was there aren't many
> truly good options for a true multi-master backend. Recently I've been
> looking at Cockroachdb and while I haven't had the chance to do any
> testing I'm curious if anyone else has looked into it. It sounds like
> the perfect solution, and if it can be proved to be stable enough it
> could solve a lot of problems.
>
> So, specifically in the realm of Keystone, since we are using sqlalchemy
> we already have Postgresql support, and since Cockroachdb does talk
> Postgres it shouldn't be too hard to back Keystone with it. At that
> stage you have a Keystone DB that could be multi-region, multi-master,
> consistent, and mostly impervious to disaster. Is that not the holy
> grail for a service like Keystone? Combine that with fernet tokens and
> suddenly Keystone becomes a service you can't really kill, and can
> mostly forget about.
>
> I'm welcome to being called mad, but I am curious if anyone has looked
> at this. I'm likely to do some tests at some stage regarding this,
> because I'm hoping this is the solution I've been hoping to find for
> quite a long time.

I was going to take a look at this a bit myself, just try it out. I
can't completely speak for the Fog/Edge/Massively Distributed working
group in OpenStack, but I feel like this might be something they look
into.

For standard multi-site I don't know how much it would help, say if
you only had a couple or three clouds, but more than that maybe this
starts to make sense. Also running Galera has gotten easier but still
not that easy.

I had thought that the OpenStack community was deprecating Postgres
support though, so that could make things a bit harder here (I might
be wrong about this).

Thanks,
Curtis.

>
> Further reading:
> https://www.cockroachlabs.com/
> https://github.com/cockroachdb/cockroach
> https://www.cockroachlabs.com/docs/build-a-python-app-with-cockroachdb-sqlalchemy.html
>
> Cheers,
> - Adrian Turjak
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Blog: serverascode.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-18 Thread Rochelle Grober
 From: Duncan Thomas 
> On 18 May 2017 at 22:26, Rochelle Grober 
> wrote:
> > If you're going to use --distance, then you should have specific values
> (standard definitions) rather than operator defined:
> > And for that matter, is there something better than distance?  Collocated
> maybe?
> >
> > colocated={local, rack, row, module, dc} Keep the standard definitions
> > that are already in use in/across data centers
> 
> There's at least 'chassis' that some people would want to add (blade based
> stuff) and I'm not sure what standard 'module' is... The trouble with standard
> definitions is that your standards rarely match the next guy's standards, and
> since some of these are entirely irrelevant to many storage topologies,
> you're likely going to need an API to discover what is relevant to a specific
> system anyway.


Dang.  Missed the chassis.  Yeah.  So, module/pod/container is the pre-built 
unit you fly or crane in to add onto your larger DC.  But I think the key 
is that if we came up with a reasonable list, based on what Ops know and use, 
then each operator can choose to use what is relevant to her and ignore the 
others.  More can be added by request.  But the key is that it is a limited set 
with a definition of each term.

I also agree that storage doesn't neatly fit into a distance relationship.  It 
can be everywhere and slow, local and slow, some distance and fast, etc.  
Actually, the more I think about this, this may be part of the placement 
conundrum.  Does this/how does this map to terms and decisions made in the 
placement subproject?

--Rocky

> 
> --
> Duncan Thomas
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][devstack][tooz][all] etcd 3.x as a base service

2017-05-18 Thread Davanum Srinivas
Team,

Please take a look at this devstack review that adds a new etcd3 service:
https://review.openstack.org/#/c/445432/

Waiting on infra team to help with creating a directory on
tarballs.openstack.org with etcd release binaries as so far i haven't
been able to get time/effort from ubuntu/debian distro folks. Fedora
already has 3.1.x so no problem there. Another twist is that the ppc64
arch support is not present in 3.1.x etcd.

Here are two options to enable the DLM use case with tooz (for
eventlet based services, Note that non-eventlet based services can
already use tooz with etcd3 with the driver added by Jay and Julien):
https://review.openstack.org/#/c/466098/
https://review.openstack.org/#/c/466109/

Please let me know here or in the review which one you would lean
towards. The first one neatly separates the etcd3+v3alpha-grpc/gateway
into a separate driver. The second one tries to be a bit more clever
about when to use grpc directly and when to use the v3alpha-grpc/gateway.
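
For anyone who hasn't used tooz yet, the DLM use case looks roughly like this from a service's point of view (a minimal sketch: the endpoint, member id and lock name are illustrative, and it assumes the etcd3 driver's URL scheme):

    from tooz import coordination

    # Connect to a local etcd 3.x via the tooz etcd3 driver.
    coordinator = coordination.get_coordinator(
        "etcd3://127.0.0.1:2379", b"example-member-1")
    coordinator.start()

    # Distributed lock: only one member in the deployment holds it at a time.
    lock = coordinator.get_lock(b"example-resource-lock")
    if lock.acquire(blocking=True):
        try:
            print("doing work while holding the distributed lock")
        finally:
            lock.release()

    coordinator.stop()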

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-18 Thread Chris Friesen

On 05/16/2017 10:45 AM, Joshua Harlow wrote:

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock


And always get a write lock.

It is a slightly different way of getting those locks (via a context manager)
but the implementation underneath is a deque; so fairness should be assured in
FIFO order...


I'm going ahead and doing this.   Your docs for fasteners don't actually say that 
lock.ReaderWriterLock.write_lock() provides fairness.  If you're going to ensure 
that stays true it might make sense to document the fact.
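
For reference, the pattern being suggested looks roughly like this (a minimal sketch against the fasteners ReaderWriterLock API linked above):

    import threading

    import fasteners

    # One shared ReaderWriterLock; always taking the write lock makes it act
    # as a mutex whose waiters are served in FIFO order.
    rw_lock = fasteners.ReaderWriterLock()

    def do_work(worker_id):
        with rw_lock.write_lock():
            # Critical section: one thread at a time, roughly in arrival order.
            print("worker %d has the lock" % worker_id)

    threads = [threading.Thread(target=do_work, args=(i,)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()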


Am I correct that fasteners.InterProcessLock is basically as fair as the 
underlying OS-specific lock?  (Which should be reasonably fair except for 
process scheduler priority.)


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-18 Thread Duncan Thomas
On 18 May 2017 at 22:26, Rochelle Grober  wrote:
> If you're going to use --distance, then you should have specific values 
> (standard definitions) rather than operator defined:
> And for that matter, is there something better than distance?  Collocated 
> maybe?
>
> colocated={local, rack, row, module, dc}
> Keep the standard definitions that are already in use in/across data centers

There's at least 'chassis' that some people would want to add (blade
based stuff) and I'm not sure what standard 'module' is... The trouble
with standard definitions is that your standards rarely match the next
guy's standards, and since some of these are entirely irrelevant to
many storage topologies, you're likely going to need an API to
discover what is relevant to a specific system anyway.

-- 
Duncan Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-18 Thread Adrian Turjak
Hello fellow OpenStackers,

For the last while I've been looking at options for multi-region
multi-master Keystone, as well as multi-master for other services I've
been developing and one thing that always came up was there aren't many
truly good options for a true multi-master backend. Recently I've been
looking at Cockroachdb and while I haven't had the chance to do any
testing I'm curious if anyone else has looked into it. It sounds like
the perfect solution, and if it can be proved to be stable enough it
could solve a lot of problems.

So, specifically in the realm of Keystone, since we are using sqlalchemy
we already have Postgresql support, and since Cockroachdb does talk
Postgres it shouldn't be too hard to back Keystone with it. At that
stage you have a Keystone DB that could be multi-region, multi-master,
consistent, and mostly impervious to disaster. Is that not the holy
grail for a service like Keystone? Combine that with fernet tokens and
suddenly Keystone becomes a service you can't really kill, and can
mostly forget about.
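
A quick sanity check of that idea is cheap, since the existing PostgreSQL dialect can simply be pointed at a CockroachDB node (a sketch only: the hostname, credentials and database name below are placeholders; CockroachDB's default SQL port is 26257):

    import sqlalchemy

    # Reuse the psycopg2/PostgreSQL dialect against a CockroachDB node.
    engine = sqlalchemy.create_engine(
        "postgresql+psycopg2://keystone:secret@cockroach-node-1:26257/keystone")

    with engine.connect() as conn:
        # Trivial round trip to prove the wire-protocol compatibility works.
        print(conn.execute(sqlalchemy.text("SELECT 1")).scalar())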

I'm welcome to being called mad, but I am curious if anyone has looked
at this. I'm likely to do some tests at some stage regarding this,
because I'm hoping this is the solution I've been hoping to find for
quite a long time.

Further reading:
https://www.cockroachlabs.com/
https://github.com/cockroachdb/cockroach
https://www.cockroachlabs.com/docs/build-a-python-app-with-cockroachdb-sqlalchemy.html

Cheers,
- Adrian Turjak


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Monty Taylor

On 05/18/2017 04:32 PM, Zane Bitter wrote:

On 18/05/17 07:53, Sean Dague wrote:



My worry about policy also is that I'm not sure how safe it is for a
project owned API key to inherit permissions from the user who created
it. I can't think of a better way to do it though, but I'm still slightly
uncomfortable with it since a user with more roles could make a key
with  a subset of those which then someone else in the project can reset
the password for and then have access to a API key that 'may' be more
powerful than their own user. In clouds like ours where we allow
customers to create/manage users within the scope of their own projects,
there are often users who have different access all in the same project
and this could be an odd issue.

This is a super interesting point I hadn't considered, thanks for
bringing it up. We could probably address it by just blocking certain
operations entirely for APIKeys. I don't think we lose much in
preventing APIKeys from self-reproducing. Blocking user/pw reset seems
like another good safety measure (it also just wouldn't work in
something like LDAP, because there is no write authority). That would be
a very good set of things to consider on Monty's spec of any APIs that
we're going to explicitly prohibit for APIKeys in iteration 1.


I can't actually think of a use case for even allowing 'password' (i.e.
key) changes/resets. If a user wants to replace one, we should make them
create a new API key, roll it out to their application, and then delete
the old one. (Bonus: Heat automates this workflow for you ;)


Totally agree.

I believe I have included all of these things in the latest rev of the 
spec - but I also possibly have not.



In the future when we have separate reader/writer roles in the default
policy then you'll definitely require the writer role to delete an API
key from the project (as you would for any other resource), so there'd
be no issue AFAICT.

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Paul Belanger
On Thu, May 18, 2017 at 09:34:44AM -0700, Michał Jastrzębski wrote:
> >> Issue with that is
> >>
> >> 1. Apache served is harder to use because we want to follow docker API
> >> and we'd have to reimplement it
> >
> > No, the idea is apache is transparent, for now we have been using proxypass
> > module in apache.  I think what Doug was mentioning was have a primary 
> > docker
> > registery, with is RW for a publisher, then proxy it to regional mirrors as 
> > RO.
> 
> That would also work, yes
> 
> >> 2. Running registry is single command
> >>
> > I've seen this mentioned a few times before, just because it is one command 
> > or
> > 'simple' to do, doesn't mean we want to or can.  Currently our 
> > infrastructure is
> > complicated, for various reasons.  I am sure we'll get to the right 
> > technical
> > solution for making jobs happy. Remember our infrastructure spans 6 clouds 
> > and 15
> > regions and want to make sure it is done correctly.
> 
> And that's why we discussed dockerhub. Remember that I was willing to
> implement proper registry, but we decided to go with dockerhub simply
> because it puts less stress on both infra and infra team. And I
> totally agree with that statement. Dockerhub publisher + apache
> caching was our working idea.
> 
Yes, we still want to implement a docker registry for openstack, maybe for
testing, maybe for production. From the technical side, we have a good handle
now on how that would look.  However, even if we decide to publish to docker, I
don't think we'd push directly. Maybe push to our docker registry then mirror to
docker hub. That is something we can figure out a little later.

> >> 3. If we host in in infra, in case someone actually uses it (there
> >> will be people like that), that will eat up lot of network traffic
> >> potentially
> >
> > We can monitor this and adjust as needed.
> >
> >> 4. With local caching of images (working already) in nodepools we
> >> loose complexity of mirroring registries across nodepools
> >>
> >> So bottom line, having dockerhub/quay.io is simply easier.
> >>
> > See comment above.
> >
> >> > Doug
> >> >
> >> > __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe: 
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry] Room during the next PTG

2017-05-18 Thread Julien Danjou
Hi team,

It's time for us to request a room (or share one) for the next PTG in
September if we want to meet. Last time we did not. Do we want one this
time?

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Zane Bitter

On 18/05/17 07:53, Sean Dague wrote:



My worry about policy also is that I'm not sure how safe it is for a
project owned API key to inherit permissions from the user who created
it. I can't think of a better way to do it though, but I'm still slightly
uncomfortable with it since a user with more roles could make a key
with  a subset of those which then someone else in the project can reset
the password for and then have access to a API key that 'may' be more
powerful than their own user. In clouds like ours where we allow
customers to create/manage users within the scope of their own projects,
there are often users who have different access all in the same project
and this could be an odd issue.

This is a super interesting point I hadn't considered, thanks for
bringing it up. We could probably address it by just blocking certain
operations entirely for APIKeys. I don't think we lose much in
preventing APIKeys from self-reproducing. Blocking user/pw reset seems
like another good safety measure (it also just wouldn't work in
something like LDAP, because there is no write authority). That would be
a very good set of things to consider on Monty's spec of any APIs that
we're going to explicitly prohibit for APIKeys in iteration 1.


I can't actually think of a use case for even allowing 'password' (i.e. 
key) changes/resets. If a user wants to replace one, we should make them 
create a new API key, roll it out to their application, and then delete 
the old one. (Bonus: Heat automates this workflow for you ;)


In the future when we have separate reader/writer roles in the default 
policy then you'll definitely require the writer role to delete an API 
key from the project (as you would for any other resource), so there'd 
be no issue AFAICT.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-18 Thread Rochelle Grober


 From: Matt Riedemann 
> On 5/15/2017 2:28 PM, Edmund Rhudy (BLOOMBERG/ 120 PARK) wrote:
> > Hi all,
> >
> > I'd like to follow up on a few discussions that took place last week
> > in Boston, specifically in the Compute Instance/Volume Affinity for
> > HPC session
> > (https://etherpad.openstack.org/p/BOS-forum-compute-instance-
> volume-affinity-hpc).
> >
> > In this session, the discussions all trended towards adding more
> > complexity to the Nova UX, like adding --near and --distance flags to
> > the nova boot command to have the scheduler figure out how to place an
> > instance near some other resource, adding more fields to flavors or
> > flavor extra specs, etc.
> >
> > My question is: is it the right question to ask how to add more
> > fine-grained complications to the OpenStack user experience to support
> > what seemed like a pretty narrow use case?
> 
> I think we can all agree we don't want to complicate the user experience.
> 
> >
> > The only use case that I remember hearing was an operator not wanting
> > it to be possible for a user to launch an instance in a particular
> > Nova AZ and then not be able to attach a volume from a different
> > Cinder AZ, or they try to boot an instance from a volume in the wrong
> > place and get a failure to launch. This seems okay to me, though -
> > either the user has to rebuild their instance in the right place or
> > Nova will just return an error during instance build. Is it worth
> > adding all sorts of convolutions to Nova to avoid the possibility that
> > somebody might have to build instances a second time?
> 
> We might have gone down this path but it's not the intention or the use case
> as I thought I had presented it, and is in the etherpad. For what you're
> describing, we already have the CONF.cinder.cross_az_attach option in nova
> which prevents you from booting or attaching a volume to an instance in a
> different AZ from the instance. That's not what we're talking about though.
> 
> The use case, as I got from the mailing list discussion linked in the 
> etherpad, is
> a user wants their volume attached as close to local storage for the instance
> as possible for performance reasons. If this could be on the same physical
> server, great. But there is the case where the operator doesn't want to use
> any local disk on the compute and wants to send everything to Cinder, and
> the backing storage might not be on the same physical server, so that's
> where we started talking about --near or --distance (host, rack, row, data
> center, etc).
> 
> >
> > The feedback I get from my cloud-experienced users most frequently is
> > that they want to know why the OpenStack user experience in the
> > storage area is so radically different from AWS, which is what they
> > all have experience with. I don't really have a great answer for them,
> > except to admit that in our clouds they just have to know what
> > combination of flavors and Horizon options or BDM structure is going
> > to get them the right tradeoff between storage durability and speed. I
> > was pleased with how the session on expanding Cinder's role for Nova
> > ephemeral storage went because of the suggestion of reducing Nova
> > imagebackend's role to just the file driver and having Cinder take over for
> everything else.
> > That, to me, is the kind of simplification that's a win-win for both
> > devs and ops: devs get to radically simplify a thorny part of the Nova
> > codebase, storage driver development only has to happen in Cinder,
> > operators get a storage workflow that's easier to explain to users.
> >
> > Am I off base in the view of not wanting to add more options to nova
> > boot and more logic to the scheduler? I know the AWS comparison is a
> > little North America-centric (this came up at the summit a few times
> > that EMEA/APAC operators may have very different ideas of a normal
> > cloud workflow), but I am striving to give my users a private cloud
> > that I can define for them in terms of AWS workflows and vocabulary.
> > AWS by design restricts where your volumes can live (you can use
> > instance store volumes and that data is gone on reboot or terminate,
> > or you can put EBS volumes in a particular AZ and mount them on
> > instances in that AZ), and I don't think that's a bad thing, because
> > it makes it easy for the users to understand the contract they're
> > getting from the platform when it comes to where their data is stored
> > and what instances they can attach it to.
> >
> 
> Again, we don't want to make the UX more complicated, but as noted in the
> etherpad, the solution we have today is if you want the same instance and
> volume on the same host for performance reasons, then you need to have a
> 1:1 relationship for AZs and hosts since AZs are exposed to the user. In a
> public cloud where you've got hundreds of thousands of compute hosts, 1:1
> AZs aren't going to be realistic, for neither the admin or user. Plus, AZs are
> really supposed to 

[openstack-dev] [nova] Boston Forum session recap - cinder ephemeral storage

2017-05-18 Thread Matt Riedemann
The etherpad for this session is here [1]. The goal for this session was 
figuring out the use cases for using Cinder as instance ephemeral 
storage and short/long-term solutions.


This really came down to a single use case, which is as an operator I 
want to use Cinder for all of my storage needs, which means minimal 
local compute disk is used for VMs.


We discussed several solutions to this problem which are detailed with 
pros/cons in the etherpad. We arrived at two solutions, one is 
short-term and one is long-term:


1. Short-term: provide a way to force automatic boot from volume in the API.

John Griffith had a POC for doing this with flavor extra specs which are 
controlled by the admin and by default are not discoverable by the API 
user. There are downsides to this, like the fact that the API user isn't 
specifying BDMs but while their server is creating, they see a volume 
pop up in Horizon which they didn't expect (which is for their root 
disk) and since they don't want to be charged for it, they delete it - 
which makes the server go into ERROR state eventually (and is not 
retried). This just makes for a weird/bad user experience and it was 
unclear how to microversion this in the API so it's discoverable, plus 
it couples two complicated debt-ridden pieces of Nova code: flavor extra 
specs and block device mappings. It is, however, fairly simple to implement.


2. Long-term: write an image backend driver which is a proxy to Cinder. 
This would not require any changes to the API, it's all configurable 
per-compute, it would remove the need for the in-tree RBD/ScaleIO/LVM 
image backends, and open up support for all other Cinder volume drivers 
- plus we'd allow passing through a volume type via flavor extra spec in 
this case. This option, however, has no owner, and is dependent on 
working it into an area of the code that is very complicated and high 
technical debt right now (the libvirt imagebackend code). So while we 
all agreed we'd love to have this, it's not even really on the horizon.


As a compromise on the short-term option, I suggested that we avoid 
using flavor extra specs to embed auto-BFV and instead put a new 
attribute directly on the flavor, e.g. is_volume_backed, or something 
like that. This would be in a microversion which makes it discoverable. 
Operators control the flavors so they can control which ones have this 
flag set, and could tie those flavors to host aggregates for compute 
hosts where they want to avoid local disk for ephemeral storage.


The next step from this session is going to be fleshing out this idea 
into a spec which can be discussed for the Queens release, which would 
also include details on the alternatives in the etherpad and the 
pros/cons for each so we don't lose that information. Unless someone 
beats me to it, I think I'm signed up for writing this spec.


[1] 
https://etherpad.openstack.org/p/BOS-forum-using-cinder-for-nova-ephemeral-storage


--

Thanks,

Matt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Boston Forum session recap - searchlight integration

2017-05-18 Thread Matt Riedemann

Hi everyone,

After previous summits where we had vertical tracks for Nova sessions I 
would provide a recap for each session.


The Forum in Boston was a bit different, so here I'm only attempting to 
recap the Forum sessions that I ran. Dan Smith led a session on Cells 
v2, John Garbutt led several sessions on the VM and Baremetal platform 
concept, and Sean Dague led sessions on hierarchical quotas and API 
microversions, and I'm going to leave recaps for those sessions to them.


I'll do these one at a time in separate emails.


Using Searchlight to list instances across cells in nova-api


The etherpad for this session is here [1]. The goal for this session was 
to explain the problem and proposed plan from the spec [2] to the 
operators in the room and get feedback.


Polling the room we found that not many people are deploying Searchlight 
but most everyone was using ElasticSearch.


An immediate concern that came up was the complexity involved with 
integrating Searchlight, especially around issues with latency for state 
changes and questioning how this does not redo the top-level cells v1 
sync issue. It admittedly does to an extent, but we don't have all of 
the weird side code paths with cells v1 and it should be self-healing. 
Kris Lindgren noted that the instance.usage.exists periodic notification 
from the computes hammers their notification bus; we suggested he report 
a bug so we can fix that.


It was also noted that if data is corrupted in ElasticSearch or is out 
of sync, you could re-sync it from nova to searchlight; however, 
searchlight syncs up with nova via the compute REST API, and if the 
compute REST API is itself using searchlight in the backend, you end up 
in an infinite loop of broken. This could probably be fixed with 
bypass query options in the compute API, but it's not a fun problem.


It was also suggested that we store a minimal set of data about 
instances in the top-level nova API database's instance_mappings table, 
where all we have today is the uuid. Anything that is set in the API 
would probably be OK for this, but operators in the room noted that they 
frequently need to filter instances by an IP, which is set in the 
compute. So this option turns into a slippery slope, and is potentially 
not inter-operable across clouds.


Matt Booth is also skeptical that we can't have a multi-cell query 
perform well, and he's proposed a POC here [3]. If that works out, then 
it defeats the main purpose for using Searchlight for listing instances 
in the compute API.


Since sorting instances across cells is the main issue, it was also 
suggested that we allow a config option to disable sorting in the API. 
It was stated this would be without a microversion, and filtering/paging 
would still be supported. I'm personally skeptical about how this could 
be considered inter-operable or discoverable for API users, and would need 
more thought and input from users like Monty Taylor and Clark Boylan.


Next steps are going to be fleshing out Matt Booth's POC for efficiently 
listing instances across cells. I think we can still continue working on 
the versioned notifications changes we're making for searchlight as 
those are useful on their own. And we should still work on enabling 
searchlight in the nova-next CI job so we can get an idea of how the 
versioned notifications work for a consumer. However, any major 
development for actually integrating searchlight into Nova is probably 
on hold at the moment until we know how Matt's POC works.


[1] 
https://etherpad.openstack.org/p/BOS-forum-using-searchlight-to-list-instances
[2] 
https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/list-instances-using-searchlight.html

[3] https://review.openstack.org/#/c/463618/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql support status patch for governance

2017-05-18 Thread Sean Dague
On 05/18/2017 01:02 PM, Mike Bayer wrote:
> 
> 
> On 05/17/2017 02:38 PM, Sean Dague wrote:
>>
>> Some of the concerns/feedback has been "please describe things that are
>> harder by this being an abstraction", so examples are provided.
> 
> so let's go through this list:
> 
> - OpenStack services taking a more active role in managing the DBMS
> 
> , "managing" is vague to me, are we referring to the database
> service itself, e.g. starting / stopping / configuring?   installers
> like tripleo do this now, pacemaker is standard in HA for control of
> services, I think I need some background here as to what the more active
> role would look like.

I will leave that one for mordred, it was his concern.

> 
> 
> - The ability to have zero down time upgrade for services such as
>   Keystone.
> 
> So "zero down time upgrades" seems to have broken into:
> 
> * "expand / contract with the code carefully dancing around the
> existence of two schema concepts simultaneously", e.g. nova, neutron.
> AFAIK there is no particular issue supporting multiple backends on this
> because we use alembic or sqlalchemy-migrate to abstract away basic
> ALTER TABLE types of feature.
> 
> * "expand / contract using server side triggers to reconcile the two
> schema concepts", e.g. keystone.   This is more difficult because there
> is currently no "trigger" abstraction layer.   Triggers represent more
> of an imperative programming model vs. typical SQL,  which is why I've
> not taken on trying to build a one-size-fits-all abstraction for this in
> upstream Alembic or SQLAlchemy.   However, it is feasible to build a
> "one-size-that-fits-openstack-online-upgrades" abstraction.  I was
> trying to gauge interest in helping to create this back in the
> "triggers" thread, in my note at
> http://lists.openstack.org/pipermail/openstack-dev/2016-August/102345.html,
> which also referred to some very raw initial code examples.  However, it
> received strong pushback from a wide range of openstack veterans, which
> led me to believe this was not a thing that was happening.   Apparently
> Keystone has gone ahead and used triggers anyway, however I was not
> pulled into that process.   But if triggers are to be "blessed" by at
> least some projects, I can likely work on this problem for MySQL /
> Postgresql agnosticism.  If keystone is using triggers right now for
> online upgrades, I would ask, are they currently working on Postgresql
> as well with PG-specific triggers, or does Postgresql degrade into a
> "non-online" migration scenario if you're running Keystone?

This is the triggers conversation, which while I have issues with, is
the only path forward now if you are doing keystone in a load balancer
and need to retain HA through the process.

No one is looking at pg here. And yes, everything not mysql would just
have to take the minimal expand / contract downtime. Data services like
Keystone / Glance whose data is their REST API definitely have different
concerns than Nova dropping it's control plane for 30s to recycle code
and apply db schema tweaks.
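
To make the trigger discussion concrete, here is a rough sketch of the kind 
of sync trigger an expand/contract migration might install. This is not 
Keystone's actual migration; the table and column names are invented, and 
the DDL is MySQL-only, which is exactly the portability gap being discussed:

# Sketch of an alembic migration body; table/column names are hypothetical.
from alembic import op


def upgrade():
    # Expand phase: add the new column alongside the old one.
    op.execute("ALTER TABLE widget ADD COLUMN display_name VARCHAR(255)")

    # Keep old and new columns in sync while old code still writes only
    # the old column. MySQL trigger syntax; Postgres would need a separate
    # trigger function, hence the agnosticism problem described above.
    op.execute(
        "CREATE TRIGGER widget_ins_sync BEFORE INSERT ON widget "
        "FOR EACH ROW SET NEW.display_name = NEW.name")
    op.execute(
        "CREATE TRIGGER widget_upd_sync BEFORE UPDATE ON widget "
        "FOR EACH ROW SET NEW.display_name = NEW.name")


def downgrade():
    op.execute("DROP TRIGGER IF EXISTS widget_ins_sync")
    op.execute("DROP TRIGGER IF EXISTS widget_upd_sync")
    op.execute("ALTER TABLE widget DROP COLUMN display_name")
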

> - Consistent UTF8 4 & 5 byte support in our APIs
> 
> "5 byte support" appears to refer to utf-8's ability to be...well a
> total of 6 bytes.But in practice, unicode itself only needs 4 bytes
> and that is as far as any database supports right now since they target
> unicode (see https://en.wikipedia.org/wiki/UTF-8#Description).  That's
> all any database we're talking about supports at most.  So...lets assume
> this means four bytes.

The 5 byte statement came in via a bug to Nova; it might have been
confused, and I might have been confused in interpreting it. Let's
assume it's invalid now and move to 4 byte.

> 
> From the perspective of database-agnosticism with regards to database
> and driver support for non-ascii characters, this problem has been
> solved by SQLAlchemy well before Python 3 existed when many DBAPIs would
> literally crash if they received a u'' string, and the rest of them
> would churn out garbage; SQLAlchemy implemented a full encode/decode
> layer on top of the Python DBAPI to fix this.  The situation is vastly
> improved now that all DBAPIs support unicode natively.
> 
> However, on the MySQL side there is this complexity that their utf-8
> support is a 3-byte only storage model, and you have to use utf8mb4 if
> you want the four byte model.   I'm not sure right now what projects are
> specifically hitting issues related to this.
> 
> Postgresql doesn't have such a limitation.   If your Postgresql server
> or specific database is set up for utf-8 (which should be the case),
> then you get full utf-8 character set support.
> 
> So I don't see the problem of "consistent utf8 support" having much to
> do with whether or not we support Postgresql - you of course need your
> "CREATE DATABASE" to include the utf8 charset like we do on MySQL, but
> that's it.

That's where we stand which means that we're doing 3 byte UTF8 on MySQL,
and 4 byte on PG. That's 

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-18 Thread Julien Danjou
On Thu, May 18 2017, Mike Bayer wrote:

> replaces oslo.service with a multiprocessing approach that doesn't use
> eventlet.  great!  any openstack service that rides on oslo.service would like
> to be able to transparently switch from eventlet to multiprocessing the same
> way they can more or less switch to mod_wsgi at the moment.IMO this should
> be part of oslo.service itself.   Docs state: "oslo.service being impossible 
> to
> fix and bringing an heavy dependency on eventlet, "  is there a discussion
> thread on that?

Yes, and many reviews around that. I'll let Mehdi comments if he feels
like it. :)

> I'm finding it hard to believe that only a few years ago, everyone saw the
> wisdom of not re-implementing everything in their own projects and using a
> common layer like oslo, and already that whole situation is becoming forgotten
> - not just for consistency, but also when a bug is found, if fixed in oslo it
> gets fixed for everyone.

I guess it depends what you mean by everyone. FTR, one of the first two
projects in OpenStack, Swift, never used anything from Oslo for anything
and always refused to depend on any of its libraries.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-18 Thread Mike Bayer



On 05/18/2017 02:37 PM, Julien Danjou wrote:

On Thu, May 18 2017, Mike Bayer wrote:


I'm not understanding this?  do you mean this?


In the long run, yes. Unfortunately, we're not happy with the way Oslo
libraries are managed, and they are too OpenStack-centric. I've tried for
the last couple of years to move things along, but it's barely possible to
deprecate anything or contribute, so I feel it's safer to start fresh and
build a better alternative. Cotyledon by Mehdi is a good example of what
can be achieved.



here's cotyledon:

https://cotyledon.readthedocs.io/en/latest/


replaces oslo.service with a multiprocessing approach that doesn't use 
eventlet.  great!  any openstack service that rides on oslo.service 
would like to be able to transparently switch from eventlet to 
multiprocessing the same way they can more or less switch to mod_wsgi at 
the moment.IMO this should be part of oslo.service itself.   Docs 
state: "oslo.service being impossible to fix and bringing an heavy 
dependency on eventlet, "  is there a discussion thread on that?
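
For readers who haven't looked at it yet, a minimal sketch of what a 
cotyledon-managed service looks like, based on its documented 
Service/ServiceManager API (the worker body here is invented for 
illustration, not taken from any real project):

import time

import cotyledon


class HeartbeatWorker(cotyledon.Service):
    # A long-running worker; cotyledon forks one OS process per worker.

    def __init__(self, worker_id):
        super(HeartbeatWorker, self).__init__(worker_id)
        self._running = True

    def run(self):
        # Placeholder work loop; a real service would poll, consume a
        # queue, serve RPC, etc.
        while self._running:
            time.sleep(1)

    def terminate(self):
        self._running = False


if __name__ == "__main__":
    manager = cotyledon.ServiceManager()
    # Two worker processes, plain multiprocessing, no eventlet involved.
    manager.add(HeartbeatWorker, workers=2)
    manager.run()
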


I'm finding it hard to believe that only a few years ago, everyone saw 
the wisdom of not re-implementing everything in their own projects and 
using a common layer like oslo, and already that whole situation is 
becoming forgotten - not just for consistency, but also when a bug is 
found, if fixed in oslo it gets fixed for everyone.


An increase in the scope of oslo is essential to dealing with the issue 
of "complexity" in openstack.  The state of openstack as dozens of 
individual software projects each with their own idiosyncratic quirks, 
CLIs, process and deployment models, and everything else that is visible 
to operators is ground zero for perceived operator complexity.









Though to comment on your example, oslo.db is probably the most useful
Oslo library that Gnocchi depends on and that won't go away in a snap.
:-(



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-18 Thread Julien Danjou
On Thu, May 18 2017, Mike Bayer wrote:

> I'm not understanding this?  do you mean this?

In the long run, yes. Unfortunately, we're not happy with the way Oslo
libraries are managed, and they are too OpenStack-centric. I've tried for
the last couple of years to move things along, but it's barely possible to
deprecate anything or contribute, so I feel it's safer to start fresh and
build a better alternative. Cotyledon by Mehdi is a good example of what
can be achieved.

Though to comment on your example, oslo.db is probably the most useful
Oslo library that Gnocchi depends on and that won't go away in a snap.
:-(

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM Heartbeat / Healthcheck Monitoring

2017-05-18 Thread Sam P
Hi Greg,

 Thank you.
> Do you use gerrit for this git ?
Yes, we use gerrit, same as other openstack projects.
https://review.openstack.org/#/admin/projects/openstack/masakari-specs
Here is the list of current and past spec work.
https://review.openstack.org/#/q/project:openstack/masakari-specs

> Do you have a template for your specs ?
Yes, please see the template in pike directory.
https://github.com/openstack/masakari-specs/blob/master/doc/source/specs/pike/template.rst


--- Regards,
Sampath



On Fri, May 19, 2017 at 3:03 AM, Waines, Greg  wrote:
> Yes I am good with writing spec for this in masakari-spec.
>
>
>
> Do you use gerrit for this git ?
>
> Do you have a template for your specs ?
>
>
>
> Greg.
>
>
>
>
>
>
>
> From: Sam P 
> Reply-To: "openstack-dev@lists.openstack.org"
> 
> Date: Thursday, May 18, 2017 at 1:51 PM
> To: "openstack-dev@lists.openstack.org" 
> Subject: Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM Heartbeat /
> Healthcheck Monitoring
>
>
>
> Hi Greg,
>
> Thank you Adam for followup.
>
> This is new feature for masakari-monitors and think  Masakari can
>
> accommodate this feature in  masakari-monitors.
>
> From the implementation prospective, it is not that hard to do.
>
> However, as you can see in our Boston presentation, Masakari will
>
> replace its monitoring parts ( which is masakari-monitors) with,
>
> nova-host-alerter, **-process-alerter, and **-instance-alerter. (**
>
> part is not defined yet..:p)...
>
> Therefore, I would like to save this specifications, and make sure we
>
> will not miss  anything in the transformation..
>
> Does is make sense to write simple spec for this in masakari-spec [1]?
>
> So we can discuss about the requirements how to implement it.
>
>
>
> [1] https://github.com/openstack/masakari-specs
>
>
>
> --- Regards,
>
> Sampath
>
>
>
>
>
>
>
> On Thu, May 18, 2017 at 2:29 AM, Adam Spiers  wrote:
>
> I don't see any reason why masakari couldn't handle that, but you'd
>
> have to ask Sampath and the masakari team whether they would consider
>
> that in scope for their roadmap.
>
>
>
> Waines, Greg  wrote:
>
>
>
> Sure.  I can propose a new user story.
>
>
>
> And then are you thinking of including this user story in the scope of
>
> what masakari would be looking at ?
>
>
>
> Greg.
>
>
>
>
>
> From: Adam Spiers 
>
> Reply-To: "openstack-dev@lists.openstack.org"
>
> 
>
> Date: Wednesday, May 17, 2017 at 10:08 AM
>
> To: "openstack-dev@lists.openstack.org"
>
> 
>
> Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat /
>
> Healthcheck Monitoring
>
>
>
> Thanks for the clarification Greg.  This sounds like it has the
>
> potential to be a very useful capability.  May I suggest that you
>
> propose a new user story for it, along similar lines to this existing
>
> one?
>
>
>
>
>
> http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html
>
>
>
> Waines, Greg >
>
> wrote:
>
> Yes that’s correct.
>
> VM Heartbeating / Health-check Monitoring would introduce intrusive /
>
> white-box type monitoring of VMs / Instances.
>
>
>
> I realize this is somewhat in the gray-zone of what a cloud should be
>
> monitoring or not,
>
> but I believe it provides an alternative for Applications deployed in VMs
>
> that do not have an external monitoring/management entity like a VNF Manager
>
> in the MANO architecture.
>
> And even for VMs with VNF Managers, it provides a highly reliable
>
> alternate monitoring path that does not rely on Tenant Networking.
>
>
>
> You’re correct, that VM HB/HC Monitoring would leverage
>
> https://wiki.libvirt.org/page/Qemu_guest_agent
>
> that would require the agent to be installed in the images for talking
>
> back to the compute host.
>
> ( there are other examples of similar approaches in openstack ... the
>
> murano-agent for installation, the swift-agent for object store management )
>
> Although here, in the case of VM HB/HC Monitoring, via the QEMU Guest
>
> Agent, the messaging path is internal thru a QEMU virtual serial device.
>
> i.e. a very simple interface with very few dependencies ... it’s up and
>
> available very early in VM lifecycle and virtually always up.
>
>
>
> Wrt failure modes / use-cases
>
>
>
> · a VM’s response to a Heartbeat Challenge Request can be as
>
> simple as just ACK-ing,
>
> this alone allows for detection of:
>
>
>
> o   a failed or hung QEMU/KVM instance, or
>
>
>
> o   a failed or hung VM’s OS, or
>
>
>
> o   a failure of the VM’s OS to schedule the QEMU Guest Agent daemon, or
>
>
>
> o   a failure of the VM to route basic IO via linux sockets.
>
>
>
> · I have had feedback that this is similar to the virtual 

Re: [openstack-dev] [masakari] Intrusive Instance Monitoring

2017-05-18 Thread Sam P
Hi Greg,

 Thank you for the proposal.
 #BTW, I replied to our discussion in [1].

 Masakari mainly focuses on black-box monitoring of VMs.
 But that does not mean Masakari cannot do white-box monitoring.
 There will be configuration options for operators for whether to
use it or not and how to configure it.
 For Masakari, this is one of the ways to extend its instance
monitoring capabilities.

 I would really appreciate it if you could write a spec for this in [2];
it will help the masakari community and the openstack-ha community to
understand the requirements and support them in future development.

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117003.html
[2] https://github.com/openstack/masakari-specs
--- Regards,
Sampath



On Thu, May 18, 2017 at 6:15 AM, Waines, Greg  wrote:
> ( I have been having a discussion with Adam Spiers on
> [openstack-dev][vitrage][nova] on this topic ... thought I would switchover
> to [masakari] )
>
>
>
> I am interested in contributing an implementation of Intrusive Instance
> Monitoring,
>
> initially specifically VM Heartbeat / Heath-check Monitoring thru the QEMU
> Guest Agent (https://wiki.libvirt.org/page/Qemu_guest_agent).
>
>
>
> I’d like to know whether Masakari project leaders would consider a blueprint
> on “VM Heartbeat / Health-check Monitoring”.
>
> See below for some more details,
>
> Greg.
>
>
>
> -
>
>
>
>
>
> VM Heartbeating / Health-check Monitoring would introduce intrusive /
> white-box type monitoring of VMs / Instances to Masakari.
>
>
>
> Briefly, “VM Heartbeat / Health-check Monitoring”
>
> · is optionally enabled thru a Nova flavor extra-spec,
>
> · is a service that runs on an OpenStack Compute Node,
>
> · it sends periodic Heartbeat / Health-check Challenge Requests to a
> VM
> over a virtio-serial-device setup between the Compute Node and the VM thru
> QEMU,
> ( https://wiki.libvirt.org/page/Qemu_guest_agent )
>
> · on loss of heartbeat or a failed health check status will result
> in fault event, against the VM, being
> reported to Masakari and any other registered reporting backends like
> Mistral, or Vitrage.
>
>
>
> I realize this is somewhat in the gray-zone of what a cloud should be
> monitoring or not,
>
> but I believe it provides an alternative for Applications deployed in VMs
> that do not have an external monitoring/management entity like a VNF Manager
> in the MANO architecture.
>
> And even for VMs with VNF Managers, it provides a highly reliable alternate
> monitoring path that does not rely on Tenant Networking.
>
>
>
> VM HB/HC Monitoring would leverage
> https://wiki.libvirt.org/page/Qemu_guest_agent
>
> that would require the agent to be installed in the images for talking back
> to the compute host.
>
> ( there are other examples of similar approaches in openstack ... the
> murano-agent for installation, the swift-agent for object store management )
>
> Although here, in the case of VM HB/HC Monitoring, via the QEMU Guest Agent,
> the messaging path is internal thru a QEMU virtual serial device.  i.e. a
> very simple interface with very few dependencies ... it’s up and available
> very early in VM lifecycle and virtually always up.
>
>
>
> Wrt failure modes / use-cases
>
> · a VM’s response to a Heartbeat Challenge Request can be as simple
> as just ACK-ing,
> this alone allows for detection of:
>
> o   a failed or hung QEMU/KVM instance, or
>
> o   a failed or hung VM’s OS, or
>
> o   a failure of the VM’s OS to schedule the QEMU Guest Agent daemon, or
>
> o   a failure of the VM to route basic IO via linux sockets.
>
> · I have had feedback that this is similar to the virtual hardware
> watchdog of QEMU/KVM (https://libvirt.org/formatdomain.html#elementsWatchdog
> )
>
> · However, the VM Heartbeat / Health-check Monitoring
>
> o   provides a higher-level (i.e. application-level) heartbeating
>
> §  i.e. if the Heartbeat requests are being answered by the Application
> running within the VM
>
> o   provides more than just heartbeating, as the Application can use it to
> trigger a variety of audits,
>
> o   provides a mechanism for the Application within the VM to report a
> Health Status / Info back to the Host / Cloud,
>
> o   provides notification of the Heartbeat / Health-check status to
> higher-level cloud entities thru Masakari, Mistral and/or Vitrage
>
> §  e.g.   VM-Heartbeat-Monitor - to - Vitrage - (EventAlarm) - Aodh - ... -
> VNF-Manager
>
> - (StateChange) - Nova - ... - VNF Manager
>
>
>
> NOTE: perhaps the reporting to Vitrage would be a separate blueprint within
> Masakari.
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM Heartbeat / Healthcheck Monitoring

2017-05-18 Thread Waines, Greg
Yes I am good with writing spec for this in masakari-spec.

Do you use gerrit for this git ?
Do you have a template for your specs ?

Greg.



From: Sam P 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Thursday, May 18, 2017 at 1:51 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM Heartbeat / 
Healthcheck Monitoring

Hi Greg,
Thank you, Adam, for the follow-up.
This is a new feature for masakari-monitors, and I think Masakari can
accommodate it in masakari-monitors.
From the implementation perspective, it is not that hard to do.
However, as you can see in our Boston presentation, Masakari will
replace its monitoring parts (which is masakari-monitors) with
nova-host-alerter, **-process-alerter, and **-instance-alerter (the **
part is not defined yet..:p)...
Therefore, I would like to save this specification and make sure we
will not miss anything in the transformation.
Does it make sense to write a simple spec for this in masakari-specs [1]?
Then we can discuss the requirements and how to implement it.

[1] https://github.com/openstack/masakari-specs

--- Regards,
Sampath



On Thu, May 18, 2017 at 2:29 AM, Adam Spiers 
> wrote:
I don't see any reason why masakari couldn't handle that, but you'd
have to ask Sampath and the masakari team whether they would consider
that in scope for their roadmap.

Waines, Greg > 
wrote:

Sure.  I can propose a new user story.

And then are you thinking of including this user story in the scope of
what masakari would be looking at ?

Greg.


From: Adam Spiers >
Reply-To: 
"openstack-dev@lists.openstack.org"
>
Date: Wednesday, May 17, 2017 at 10:08 AM
To: 
"openstack-dev@lists.openstack.org"
>
Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat /
Healthcheck Monitoring

Thanks for the clarification Greg.  This sounds like it has the
potential to be a very useful capability.  May I suggest that you
propose a new user story for it, along similar lines to this existing
one?


http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html

Waines, Greg 
>
wrote:
Yes that’s correct.
VM Heartbeating / Health-check Monitoring would introduce intrusive /
white-box type monitoring of VMs / Instances.

I realize this is somewhat in the gray-zone of what a cloud should be
monitoring or not,
but I believe it provides an alternative for Applications deployed in VMs
that do not have an external monitoring/management entity like a VNF Manager
in the MANO architecture.
And even for VMs with VNF Managers, it provides a highly reliable
alternate monitoring path that does not rely on Tenant Networking.

You’re correct, that VM HB/HC Monitoring would leverage
https://wiki.libvirt.org/page/Qemu_guest_agent
that would require the agent to be installed in the images for talking
back to the compute host.
( there are other examples of similar approaches in openstack ... the
murano-agent for installation, the swift-agent for object store management )
Although here, in the case of VM HB/HC Monitoring, via the QEMU Guest
Agent, the messaging path is internal thru a QEMU virtual serial device.
i.e. a very simple interface with very few dependencies ... it’s up and
available very early in VM lifecycle and virtually always up.
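
As a rough illustration of the liveness probe involved, the guest agent 
already answers a "guest-ping" command from the host side. A minimal 
sketch, shelling out to virsh purely for brevity (the domain name below is 
just an example, not a real instance):

import json
import subprocess


def guest_agent_ping(domain, timeout=5):
    # Return True if the guest agent inside `domain` answers a ping.
    cmd = [
        "virsh", "qemu-agent-command", domain,
        json.dumps({"execute": "guest-ping"}),
    ]
    try:
        out = subprocess.check_output(cmd, timeout=timeout)
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return False
    # A healthy agent answers with an empty "return" object.
    return json.loads(out.decode()) == {"return": {}}


# Hypothetical usage; "instance-00000001" is only an example domain name.
print(guest_agent_ping("instance-00000001"))
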

Wrt failure modes / use-cases

· a VM’s response to a Heartbeat Challenge Request can be as
simple as just ACK-ing,
this alone allows for detection of:

o   a failed or hung QEMU/KVM instance, or

o   a failed or hung VM’s OS, or

o   a failure of the VM’s OS to schedule the QEMU Guest Agent daemon, or

o   a failure of the VM to route basic IO via linux sockets.

· I have had feedback that this is similar to the virtual hardware
watchdog of QEMU/KVM (
https://libvirt.org/formatdomain.html#elementsWatchdog )

· However, the VM Heartbeat / Health-check Monitoring

o   provides a higher-level (i.e. application-level) heartbeating

§  i.e. if the Heartbeat requests are being answered by the Application
running within the VM

o   provides more than just heartbeating, as the Application can use it to
trigger a variety of audits,

o   provides a mechanism for the Application within the VM to report a
Health Status / Info back to the Host / Cloud,

o   provides notification of the Heartbeat / Health-check status to
higher-level cloud entities thru Vitrage

§  

Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM Heartbeat / Healthcheck Monitoring

2017-05-18 Thread Sam P
Hi Greg,
 Thank you, Adam, for the follow-up.
 This is a new feature for masakari-monitors, and I think Masakari can
accommodate it in masakari-monitors.
 From the implementation perspective, it is not that hard to do.
 However, as you can see in our Boston presentation, Masakari will
replace its monitoring parts (which is masakari-monitors) with
 nova-host-alerter, **-process-alerter, and **-instance-alerter (the **
part is not defined yet..:p)...
 Therefore, I would like to save this specification and make sure we
will not miss anything in the transformation.
 Does it make sense to write a simple spec for this in masakari-specs [1]?
 Then we can discuss the requirements and how to implement it.

[1] https://github.com/openstack/masakari-specs

--- Regards,
Sampath



On Thu, May 18, 2017 at 2:29 AM, Adam Spiers  wrote:
> I don't see any reason why masakari couldn't handle that, but you'd
> have to ask Sampath and the masakari team whether they would consider
> that in scope for their roadmap.
>
> Waines, Greg  wrote:
>>
>> Sure.  I can propose a new user story.
>>
>> And then are you thinking of including this user story in the scope of
>> what masakari would be looking at ?
>>
>> Greg.
>>
>>
>> From: Adam Spiers 
>> Reply-To: "openstack-dev@lists.openstack.org"
>> 
>> Date: Wednesday, May 17, 2017 at 10:08 AM
>> To: "openstack-dev@lists.openstack.org"
>> 
>> Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat /
>> Healthcheck Monitoring
>>
>> Thanks for the clarification Greg.  This sounds like it has the
>> potential to be a very useful capability.  May I suggest that you
>> propose a new user story for it, along similar lines to this existing
>> one?
>>
>>
>> http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html
>>
>> Waines, Greg >
>> wrote:
>> Yes that’s correct.
>> VM Heartbeating / Health-check Monitoring would introduce intrusive /
>> white-box type monitoring of VMs / Instances.
>>
>> I realize this is somewhat in the gray-zone of what a cloud should be
>> monitoring or not,
>> but I believe it provides an alternative for Applications deployed in VMs
>> that do not have an external monitoring/management entity like a VNF Manager
>> in the MANO architecture.
>> And even for VMs with VNF Managers, it provides a highly reliable
>> alternate monitoring path that does not rely on Tenant Networking.
>>
>> You’re correct, that VM HB/HC Monitoring would leverage
>> https://wiki.libvirt.org/page/Qemu_guest_agent
>> that would require the agent to be installed in the images for talking
>> back to the compute host.
>> ( there are other examples of similar approaches in openstack ... the
>> murano-agent for installation, the swift-agent for object store management )
>> Although here, in the case of VM HB/HC Monitoring, via the QEMU Guest
>> Agent, the messaging path is internal thru a QEMU virtual serial device.
>> i.e. a very simple interface with very few dependencies ... it’s up and
>> available very early in VM lifecycle and virtually always up.
>>
>> Wrt failure modes / use-cases
>>
>> · a VM’s response to a Heartbeat Challenge Request can be as
>> simple as just ACK-ing,
>> this alone allows for detection of:
>>
>> o   a failed or hung QEMU/KVM instance, or
>>
>> o   a failed or hung VM’s OS, or
>>
>> o   a failure of the VM’s OS to schedule the QEMU Guest Agent daemon, or
>>
>> o   a failure of the VM to route basic IO via linux sockets.
>>
>> · I have had feedback that this is similar to the virtual hardware
>> watchdog of QEMU/KVM (
>> https://libvirt.org/formatdomain.html#elementsWatchdog )
>>
>> · However, the VM Heartbeat / Health-check Monitoring
>>
>> o   provides a higher-level (i.e. application-level) heartbeating
>>
>> §  i.e. if the Heartbeat requests are being answered by the Application
>> running within the VM
>>
>> o   provides more than just heartbeating, as the Application can use it to
>> trigger a variety of audits,
>>
>> o   provides a mechanism for the Application within the VM to report a
>> Health Status / Info back to the Host / Cloud,
>>
>> o   provides notification of the Heartbeat / Health-check status to
>> higher-level cloud entities thru Vitrage
>>
>> §  e.g.   VM-Heartbeat-Monitor - to - Vitrage - (EventAlarm) - Aodh - ...
>> - VNF-Manager
>>
>> - (StateChange) - Nova - ... - VNF Manager
>>
>>
>> Greg.
>>
>>
>> From: Adam Spiers >
>> Reply-To:
>> "openstack-dev@lists.openstack.org"
>> >
>> Date: Tuesday, May 16, 2017 at 7:29 PM
>> To:
>> "openstack-dev@lists.openstack.org"
>> 

Re: [openstack-dev] [ptg] How to slice the week to minimize conflicts

2017-05-18 Thread John Dickinson


On 18 May 2017, at 2:27, Thierry Carrez wrote:

> Hi everyone,
>
> For the PTG events we have a number of rooms available for 5 days, of
> which we need to make the best usage. We also want to keep it simple and
> productive, so we want to minimize room changes (allocating the same
> room to the same group for one or more days).
>
> For the first PTG in Atlanta, we split the week into two groups.
> Monday-Tuesday for "horizontal" project team meetups (Infra, QA...) and
> workgroups (API WG, Goals helprooms...), and Wednesday-Friday for
> "vertical" project team meetups (Nova, Swift...). This kinda worked, but
> the feedback we received called for more optimizations and reduced
> conflicts.
>
> In particular, some projects which have a lot of contributors overlap
> (Storlets/Swift, or Manila/Cinder) were all considered "vertical" and
> happened at the same time. Also horizontal team members ended up having
> issues to attend workgroups, and had nowhere to go for the rest of the
> week. Finally, on Monday-Tuesday the rooms that had the most success
> were inter-project ones we didn't really anticipate (like the API WG),
> while rooms with horizontal project team meetups were a bit
> under-attended. While we have a lot of constraints, I think we can
> optimize a bit better.
>
> After giving it some thought, my current thinking is that we should
> still split the week in two, but should move away from an arbitrary
> horizontal/vertical split. My strawman proposal would be to split the
> week between inter-project work (+ teams that rely mostly on liaisons in
> other teams) on Monday-Tuesday, and team-specific work on Wednesday-Friday:
>
> Example of Monday-Tuesday rooms:
> Interop WG, Docs, QA, API WG, Packaging WG, Oslo, Goals helproom,
> Infra/RelMgt/support teams helpdesk, TC/SWG room, VM Working group...
>
> Example of Wednesday-Thursday or Wednesday-Friday rooms:
> Nova, Cinder, Neutron, Swift, TripleO, Kolla, Infra...
>
> (NB: in this example infra team members end up being available in a
> general support team helpdesk room in the first part of the week, and
> having a regular team meetup on the second part of the week)
>
> In summary, Monday-Tuesday would be mostly around themes, while
> Wednesday-Friday would be mostly around teams. In addition to that,
> teams that /prefer/ to run on Monday-Tuesday to avoid conflicting with
> another project meetup (like Manila wanting to avoid conflicting with
> Cinder, or Storlets wanting to avoid conflicting with Swift) could
> *choose* to go for Monday-Tuesday instead of Wednesday-Friday.
>
> It's a bit of a long shot (we'd still want to equilibrate both sides in
> terms of room usage, so it's likely that the teams that are late to
> decide to participate would be pushed on one side or the other), but I
> think it's a good incremental change that could solve some of the issues
> reported in the Atlanta week slicing, as well as generally make
> inter-project coordination simpler.
>
> If we adopt that format, we need to be pretty flexible in terms of what
> is a "workgroup": to me, any inter-project work that would like to have
> a one-day or two-day room should be able to get some.
> Nova-{Cinder,Neutron,Ironic} discussions would for example happen in the
> VM & BM working group room, but we can imagine others just like it.
>
> Let me know what you think. Also feel free to propose alternate creative
> ways to slice the space and time we'll have. We need to open
> registration very soon (June 1st is the current target), and we'd like
> to have a rough idea of the program before we do that (so that people
> can report which days they will attend more accurately).
>
> -- 
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Sounds like a good idea to me.

--John





signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] ptgbot: how to make "what's currently happening" emerge

2017-05-18 Thread John Dickinson


On 18 May 2017, at 2:57, Thierry Carrez wrote:

> Hi again,
>
> For the PTG events we have, by design, a pretty loose schedule. Each
> room is free to organize their agenda in whatever way they see fit, and
> take breaks whenever they need. This flexibility is key to keep our
> productivity at those events at a maximum. In Atlanta, most teams ended
> up dynamically building a loose agenda on a room etherpad.
>
> This approach is optimized for team meetups and people who strongly
> identify with one team in particular. In Atlanta during the first two
> days, where a lot of vertical team contributors did not really know
> which room to go to, it was very difficult to get a feel of what is
> currently being discussed and where they could go. Looking into 20
> etherpads and trying to figure out what is currently being discussed is
> just not practical. In the feedback we received, the need to expose the
> schedule more visibly was the #1 request.
>
> It is a thin line to walk on. We clearly don't want to publish a
> schedule in advance or be tied to pre-established timeboxes for every
> topic. We want it to be pretty fluid and natural, but we still need to
> somehow make "what's currently happening" (and "what will be discussed
> next") emerge globally.
>
> One lightweight solution I've been working on is an IRC bot ("ptgbot")
> that would produce a static webpage. Room leaders would update it on
> #openstack-ptg using commands like:
>
> #swift now discussing ring placement optimizations
> #swift next at 14:00 we plan to discuss better #keystone integration
>
> and the bot would collect all those "now" and "next" items and publish a
> single (mobile-friendly) webpage, (which would also include
> ethercalc-scheduled things, if we keep any).
>
> The IRC commands double as natural language announcements for those that
> are following activity on the IRC channel. Hashtags can be used to
> attract other teams attention. You can announce later discussions, but
> the commitment on exact timing is limited. Every "now" command would
> clear "next" entries, so that there wouldn't be any stale entries and
> the command interface would be kept dead simple (at the cost of a bit of
> repetition).
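
For illustration, the bookkeeping described just above is small. A minimal 
sketch of only the state handling (this is not the actual ptgbot code; the 
per-team clearing of "next" on a new "now" follows the description above):

class PTGSchedule(object):
    # Track "now" and "next" announcements per team/room.

    def __init__(self):
        self.now = {}    # team -> current topic
        self.next = {}   # team -> list of upcoming topics

    def set_now(self, team, topic):
        # A new "now" clears that team's stale "next" entries.
        self.now[team] = topic
        self.next[team] = []

    def add_next(self, team, topic):
        self.next.setdefault(team, []).append(topic)

    def render(self):
        lines = []
        for team in sorted(set(self.now) | set(self.next)):
            lines.append("%s: now: %s" % (team, self.now.get(team, "-")))
            for topic in self.next.get(team, []):
                lines.append("%s: next: %s" % (team, topic))
        return "\n".join(lines)


sched = PTGSchedule()
sched.set_now("swift", "ring placement optimizations")
sched.add_next("swift", "14:00 better keystone integration")
print(sched.render())
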
>
> I have POC code for this bot already. Before I publish it (and start
> work to make infra support it), I just wanted to see if this is the
> right direction and if I should continue to work on it :) I feel like
> it's an incremental improvement that preserves the flexibility and
> self-scheduling while addressing the main visibility concern. If you
> have better ideas, please let me know !
>
> -- 
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Seems like a reasonable idea and helpful tool. For the Swift team, we generally 
end up with more than one thing being discussed at a time at different 
tables/corners in the same room. A "#swift now discussing foo, bar, and baz" 
(instead of one-thing-at-a-time) would be how we'd likely use it. I'd guess 
other teams work in a similar way, too.


--John





signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-18 Thread Mike Bayer



On 05/16/2017 05:42 AM, Julien Danjou wrote:

On Wed, Apr 19 2017, Julien Danjou wrote:


So Gnocchi gate is all broken (agan) because it depends on "pbr" and
some new release of oslo.* depends on pbr!=2.1.0.


Same things happened today with Babel. As far as Gnocchi is concerned,
we're going to take the easiest route and remove all our oslo
dependencies over the next months for sanely maintained alternative at
this point.


I'm not understanding this?  do you mean this?

diff --git a/gnocchi/indexer/sqlalchemy.py b/gnocchi/indexer/sqlalchemy.py
index 3497b52..0ae99fd 100644
--- a/gnocchi/indexer/sqlalchemy.py
+++ b/gnocchi/indexer/sqlalchemy.py
@@ -22,11 +22,7 @@ import uuid

 from alembic import migration
 from alembic import operations
-import oslo_db.api
-from oslo_db import exception
-from oslo_db.sqlalchemy import enginefacade
-from oslo_db.sqlalchemy import utils as oslo_db_utils
-from oslo_log import log
+from ??? import ???
 try:
 import psycopg2
 except ImportError:








Cheers,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] ptgbot: how to make "what's currently happening" emerge

2017-05-18 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-18 11:57:04 +0200:
> Hi again,
> 
> For the PTG events we have, by design, a pretty loose schedule. Each
> room is free to organize their agenda in whatever way they see fit, and
> take breaks whenever they need. This flexibility is key to keep our
> productivity at those events at a maximum. In Atlanta, most teams ended
> up dynamically building a loose agenda on a room etherpad.
> 
> This approach is optimized for team meetups and people who strongly
> identify with one team in particular. In Atlanta during the first two
> days, where a lot of vertical team contributors did not really know
> which room to go to, it was very difficult to get a feel of what is
> currently being discussed and where they could go. Looking into 20
> etherpads and trying to figure out what is currently being discussed is
> just not practical. In the feedback we received, the need to expose the
> schedule more visibly was the #1 request.
> 
> It is a thin line to walk on. We clearly don't want to publish a
> schedule in advance or be tied to pre-established timeboxes for every
> topic. We want it to be pretty fluid and natural, but we still need to
> somehow make "what's currently happening" (and "what will be discussed
> next") emerge globally.
> 
> One lightweight solution I've been working on is an IRC bot ("ptgbot")
> that would produce a static webpage. Room leaders would update it on
> #openstack-ptg using commands like:
> 
> #swift now discussing ring placement optimizations
> #swift next at 14:00 we plan to discuss better #keystone integration
> 
> and the bot would collect all those "now" and "next" items and publish a
> single (mobile-friendly) webpage, (which would also include
> ethercalc-scheduled things, if we keep any).
> 
> The IRC commands double as natural language announcements for those that
> are following activity on the IRC channel. Hashtags can be used to
> attract other teams attention. You can announce later discussions, but
> the commitment on exact timing is limited. Every "now" command would
> clear "next" entries, so that there wouldn't be any stale entries and
> the command interface would be kept dead simple (at the cost of a bit of
> repetition).
> 
> I have POC code for this bot already. Before I publish it (and start
> work to make infra support it), I just wanted to see if this is the
> right direction and if I should continue to work on it :) I feel like
> it's an incremental improvement that preserves the flexibility and
> self-scheduling while addressing the main visibility concern. If you
> have better ideas, please let me know !
> 

I would subscribe to that twitter feed, too.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql support status patch for governance

2017-05-18 Thread Mike Bayer



On 05/17/2017 02:38 PM, Sean Dague wrote:


Some of the concerns/feedback has been "please describe things that are
harder by this being an abstraction", so examples are provided.


so let's go through this list:

- OpenStack services taking a more active role in managing the DBMS

, "managing" is vague to me, are we referring to the database 
service itself, e.g. starting / stopping / configuring?   installers 
like tripleo do this now, pacemaker is standard in HA for control of 
services, I think I need some background here as to what the more active 
role would look like.



- The ability to have zero down time upgrade for services such as
  Keystone.

So "zero down time upgrades" seems to have broken into:

* "expand / contract with the code carefully dancing around the 
existence of two schema concepts simultaneously", e.g. nova, neutron. 
AFAIK there is no particular issue supporting multiple backends on this 
because we use alembic or sqlalchemy-migrate to abstract away basic 
ALTER TABLE types of feature.


* "expand / contract using server side triggers to reconcile the two 
schema concepts", e.g. keystone.   This is more difficult because there 
is currently no "trigger" abstraction layer.   Triggers represent more 
of an imperative programming model vs. typical SQL,  which is why I've 
not taken on trying to build a one-size-fits-all abstraction for this in 
upstream Alembic or SQLAlchemy.   However, it is feasible to build a 
"one-size-that-fits-openstack-online-upgrades" abstraction.  I was 
trying to gauge interest in helping to create this back in the 
"triggers" thread, in my note at 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/102345.html, 
which also referred to some very raw initial code examples.  However, it 
received strong pushback from a wide range of openstack veterans, which 
led me to believe this was not a thing that was happening.   Apparently 
Keystone has gone ahead and used triggers anyway, however I was not 
pulled into that process.   But if triggers are to be "blessed" by at 
least some projects, I can likely work on this problem for MySQL / 
Postgresql agnosticism.  If keystone is using triggers right now for 
online upgrades, I would ask, are they currently working on Postgresql 
as well with PG-specific triggers, or does Postgresql degrade into a 
"non-online" migration scenario if you're running Keystone?



- Consistent UTF8 4 & 5 byte support in our APIs

"5 byte support" appears to refer to utf-8's ability to be...well a 
total of 6 bytes.But in practice, unicode itself only needs 4 bytes 
and that is as far as any database supports right now since they target 
unicode (see https://en.wikipedia.org/wiki/UTF-8#Description).  That's 
all any database we're talking about supports at most.  So...lets assume 
this means four bytes.


From the perspective of database-agnosticism with regards to database 
and driver support for non-ascii characters, this problem has been 
solved by SQLAlchemy well before Python 3 existed when many DBAPIs would 
literally crash if they received a u'' string, and the rest of them 
would churn out garbage; SQLAlchemy implemented a full encode/decode 
layer on top of the Python DBAPI to fix this.  The situation is vastly 
improved now that all DBAPIs support unicode natively.


However, on the MySQL side there is this complexity that their utf-8 
support is a 3-byte only storage model, and you have to use utf8mb4 if 
you want the four byte model.   I'm not sure right now what projects are 
specifically hitting issues related to this.


Postgresql doesn't have such a limitation.   If your Postgresql server 
or specific database is set up for utf-8 (which should be the case), 
then you get full utf-8 character set support.


So I don't see the problem of "consistent utf8 support" having much to 
do with whether or not we support Postgresql - you of course need your 
"CREATE DATABASE" to include the utf8 charset like we do on MySQL, but 
that's it.
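
To make the charset point concrete, the difference really is confined to 
database creation time. A rough sketch (connection URLs, credentials and 
the database name are placeholders, not anything from a real deployment):

import sqlalchemy

# MySQL needs the 4-byte utf8mb4 charset spelled out explicitly.
mysql = sqlalchemy.create_engine("mysql+pymysql://root:pw@localhost/")
with mysql.connect() as conn:
    conn.execute(
        "CREATE DATABASE nova CHARACTER SET utf8mb4 "
        "COLLATE utf8mb4_general_ci")

# A PostgreSQL database created with UTF8 encoding handles 4-byte
# characters without any special option.
pg = sqlalchemy.create_engine(
    "postgresql+psycopg2://postgres:pw@localhost/postgres",
    isolation_level="AUTOCOMMIT")  # CREATE DATABASE can't run in a txn
with pg.connect() as conn:
    conn.execute("CREATE DATABASE nova ENCODING 'UTF8'")
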



- The requirement that Postgresql libraries are compiled for new users
  trying to just run unit tests (no equiv is true for mysql because of
  the pure python driver).

I would suggest that new developers for whom the presence of things like 
postgresql client libraries is a challenge (but somehow they are running 
a MySQL server for their pure python driver to talk to?)  don't actually 
have to worry about running the tests against Postgresql, this is how 
the "opportunistic" testing model in oslo.db has always worked; it only 
runs for the backends that you have set up.


Also, openstack got all the way through Kilo approximately using the 
native python-MySQL driver, which required a compiled client library as 
well as the MySQL dependencies to be installed.  The psycopg2 driver has a 
ton of whl's up on pypi (https://pypi.python.org/pypi/psycopg2) and all 
linux distros supply it as a package in any case, so an actual "compile" 
should not be needed.   Also, this is 

Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Lance Bragstad
I followed up with Sean in IRC [0]. My last note about rebuilding role
assignment dynamically doesn't really make sense. I was approaching this
from a different perspective.


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-05-18.log.html#t2017-05-18T15:20:32

On Thu, May 18, 2017 at 9:39 AM, Lance Bragstad  wrote:

>
>
> On Thu, May 18, 2017 at 8:45 AM, Sean Dague  wrote:
>
>> On 05/18/2017 09:27 AM, Doug Hellmann wrote:
>> > Excerpts from Adrian Turjak's message of 2017-05-18 13:34:56 +1200:
>> >
>> >> Fully agree that expecting users of a particular cloud to understand
>> how
>> >> the policy stuff works is pointless, but it does fall on the cloud
>> >> provider to educate and document their roles and the permissions of
>> >> those roles. I think step 1 plus some basic role permissions for the
>> >
>> > Doesn't basing the API key permissions directly on roles also imply that
>> > the cloud provider has to anticipate all of the possible ways API keys
>> > might be used so they can then set up those roles?
>>
>> Not really. It's not explicit roles, it's inherited ones. At some point
>> an adminstrator gave a user permission to do stuff (through roles that
>> may be site specific). Don't care how we got there. The important thing
>> is those are cloned to the APIKey, otherwise, the APIKey litterally
>> would not be able to do anything, ever. Discussing roles here was an
>> attempt to look at how internals would work today, though it's
>> definitely not part of contract of this new interface.
>>
>> There is a lot more implicitness in what roles mean (see
>> https://bugs.launchpad.net/keystone/+bug/968696) which is another reason
>> I'm really skeptical that we should have roles or policy points in the
>> APIKey interface. Describing what they do in any particular installation
>> is a ton of work. And you thought ordering a Medium coffee at Starbucks
>> was annoying. :)
>>
>> The important thing is to make a clear and expressive API with the user
>> so they can be really clear about what they expect a thing should do.
>>
>> >> Keys with the expectation of operators to document their roles/policy
>> is
>> >> a safe enough place to start, and for us to document and set some
>> >> sensible default roles and policy. I don't think we currently have good
>> >
>> > This seems like an area where we want to encourage interoperability.
>> > Policy doesn't do that today, because deployers can use arbitrary
>> > names for roles and set permissions in those roles in any way they
>> > want. That's fine for human users, but doesn't work for enabling
>> > automation. If the sets of roles and permissions are different in
>> > every cloud, how would anyone write a key allocation script that
>> > could provision a key for their application on more than one cloud?
>>
>> So, this is where there are internals happening distinctly from user
>> expressed intent.
>>
>> POST /apikey {}
>>
>> Creates an APIKey, in the project the token is currently authed to, and
>> the APIKey inherits all the roles on that project that the user
>> currently has. The user may or may not even know what these are. It's
>> not a user interface.
>>
>
> If we know the user_id and project_id of the API key, then can't we build
> the roles dynamically whenever the API key is used (unless the API key is
> scoped to a single role)? This is the same approach we recently took with
> token validation because it made the revocation API sub-system *way*
> simpler (i.e. we no longer have to write revocation events anytime a role
> is removed from a user on a project, instead the revocation happens
> naturally when the token is used). Would this be helpful from a "default
> open" PoV with API keys?
>
> We touched on blacklisting certain operations a bit in Atlanta at the PTG
> (see the API key section) [0]. I attempted to document it shortly after the
> PTG, but some of those statement might be superseded at this point.
>
>
> [0] https://www.lbragstad.com/blog/keystone-pike-ptg-summary
>
>
>>
>> The contract is "Give me an APIKey that can do what I do*" (* with the
>> exception of self propogating, i.e. the skynet exception).
>>
>> That's iteration #1. APIKey can do what I can do.
>>
>> Iteration #2 is fine grained permissions that make it so I can have an
>> APIKey do far less than I can do.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [gnocchi] Migration to GitHub

2017-05-18 Thread Julien Danjou
Hi,

I've started to migrate Gnocchi itself to GitHub. The Launchpad bugs
have been re-created at https://github.com/gnocchixyz/gnocchi/issues and
I'll move the repository as soon as all opened reviews are merged.

Cheers,
-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-05-18 Thread Chris Dent


Greetings OpenStack community,

A short meeting today, mostly reflecting on the Birds of a Feather session [4] 
at Summit last week. It was well attended and engendered plenty of good 
discussion. There are notes on an etherpad at 
https://etherpad.openstack.org/p/BOS-API-WG-BOF that continue to be digested. 
One of the main takeaways was the group should work with people creating 
documentation (api-ref and otherwise) to encourage linking from those documents 
to the guidelines [2]. This will help to explain why some things are the way 
they are (for example microversions) and also highlight a path whereby people 
can contribute to improving or clarifying the guidelines.

Working on that linking will be an ongoing effort. In the meantime the primary 
action for the group (and anyone else interested in API consistency) is to 
review Monty's efforts to document client side interactions with the service 
catalog and version discovery (linked below).

# Newly Published Guidelines

Nothing new at this time.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None at this time but please check out the reviews below.

# Guidelines Currently Under Review [3]

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* A suite of several documents about using the service catalog and doing 
version discovery
  Start at https://review.openstack.org/#/c/462814/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18679/api-working-group-update-and-bof

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Michał Jastrzębski
On 18 May 2017 at 08:03, Paul Belanger  wrote:
> On Tue, May 16, 2017 at 02:11:18PM +, Sam Yaple wrote:
>> I would like to bring up a subject that hasn't really been discussed in
>> this thread yet, forgive me if I missed an email mentioning this.
>>
>> What I personally would like to see is a publishing infrastructure to allow
>> pushing built images to an internal infra mirror/repo/registry for
>> consumption of internal infra jobs (deployment tools like kolla-ansible and
>> openstack-ansible). The images built from infra mirrors with security
>> turned off are perfect for testing internally to infra.
>>
> Zuulv3 should help a little with this; it will allow for a DAG of jobs,
> which means the top-level job could be an image build and all jobs below can
> then consume said image.  The step we are still working on is artifact
> handling, but long term it should be possible for the testing jobs to set up
> the dynamic infrastructure needed themselves.
>
>> If you build images properly in infra, then you will have an image that is
>> not security checked (no gpg verification of packages) and completely
>> unverifiable. These are absolutely not images we want to push to
>> DockerHub/quay for obvious reasons. Security and verification being chief
>> among them. They are absolutely not images that should ever be run in
>> production and are only suited for testing. These are the only types of
>> images that can come out of infra.
>>
> We disable gpg for Ubuntu packaging for a specific reason, mostly because
> our APT repos are not official mirrors of upstream. We regenerate indexes
> every 2 hours so as not to break long-running jobs.  We have talked in the
> past of fixing this, but it requires openstack-infra to move to a new
> mirroring tool for APT.

So the idea to solve this particular problem goes like this:

The publish job would not be change-driven; it would be periodic (every
24h?) and run during a low-traffic window. In that job we can turn off the
infra mirrors and just use the signed upstream repositories.
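
Purely as an illustration of what I mean (pipeline and job names below are
made up, this is not an actual project-config change), in today's Zuul
layout.yaml that would look roughly like:

    pipelines:
      - name: periodic-publish
        manager: IndependentPipelineManager
        source: gerrit
        trigger:
          timer:
            - time: '0 6 * * *'

    projects:
      - name: openstack/kolla
        periodic-publish:
          - periodic-kolla-build-and-publish-images

with the publish job itself configured to skip the infra mirrors and pull
from the signed upstream repositories.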

That being said, all the technical issues we have seen so far (unless I'm
missing something) are solvable, and we (the Kolla community) would love to
do all the heavy lifting to solve them. We need to wait for the TC to
resolve the non-technical issues before we can proceed, though.

>> Thanks,
>> SamYaple
>>
>> On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski 
>> wrote:
>>
>> > On 16 May 2017 at 06:22, Doug Hellmann  wrote:
>> > > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
>> > >> Flavio Percoco wrote:
>> > >> > From a release perspective, as Doug mentioned, we've avoided
>> > releasing projects
>> > >> > in any kind of built form. This was also one of the concerns I raised
>> > when
>> > >> > working on the proposal to support other programming languages. The
>> > problem of
>> > >> > releasing built images goes beyond the infrastructure requirements.
>> > It's the
>> > >> > message and the guarantees implied with the built product itself that
>> > are the
>> > >> > concern here. And I tend to agree with Doug that this might be a
>> > problem for us
>> > >> > as a community. Unfortunately, putting your name, Michal, as contact
>> > point is
>> > >> > not enough. Kolla is not the only project producing container images
>> > and we need
>> > >> > to be consistent in the way we release these images.
>> > >> >
>> > >> > Nothing prevents people for building their own images and uploading
>> > them to
>> > >> > dockerhub. Having this as part of the OpenStack's pipeline is a
>> > problem.
>> > >>
>> > >> I totally subscribe to the concerns around publishing binaries (under
>> > >> any form), and the expectations in terms of security maintenance that it
>> > >> would set on the publisher. At the same time, we need to have images
>> > >> available, for convenience and testing. So what is the best way to
>> > >> achieve that without setting strong security maintenance expectations
>> > >> for the OpenStack community ? We have several options:
>> > >>
>> > >> 1/ Have third-parties publish images
>> > >> It is the current situation. The issue is that the Kolla team (and
>> > >> likely others) would rather automate the process and use OpenStack
>> > >> infrastructure for it.
>> > >>
>> > >> 2/ Have third-parties publish images, but through OpenStack infra
>> > >> This would allow to automate the process, but it would be a bit weird to
>> > >> use common infra resources to publish in a private repo.
>> > >>
>> > >> 3/ Publish transient (per-commit or daily) images
>> > >> A "daily build" (especially if you replace it every day) would set
>> > >> relatively-limited expectations in terms of maintenance. It would end up
>> > >> picking up security updates in upstream layers, even if not immediately.
>> > >>
>> > >> 4/ Publish images and own them
>> > >> Staff release / VMT / stable team in a way that lets us properly own
>> > >> those images and publish them 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Michał Jastrzębski
>> Issue with that is
>>
>> 1. Apache served is harder to use because we want to follow docker API
>> and we'd have to reimplement it
>
> No, the idea is apache is transparent; for now we have been using the proxypass
> module in apache.  I think what Doug was mentioning was to have a primary docker
> registry, which is RW for a publisher, then proxy it to regional mirrors as RO.

That would also work, yes

>> 2. Running registry is single command
>>
> I've seen this mentioned a few times before; just because it is one command or
> 'simple' to do, doesn't mean we want to or can.  Currently our infrastructure
> is complicated, for various reasons.  I am sure we'll get to the right technical
> solution for making jobs happy. Remember our infrastructure spans 6 clouds and
> 15 regions and we want to make sure it is done correctly.

And that's why we discussed dockerhub. Remember that I was willing to
implement a proper registry, but we decided to go with dockerhub simply
because it puts less stress on both the infra systems and the infra team.
And I totally agree with that statement. A dockerhub publisher + apache
caching was our working idea.
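
(For reference, the "single command" I mean above is roughly

    docker run -d -p 5000:5000 --restart=always --name registry registry:2

using the standard upstream registry image; running it well, with real
storage, TLS and auth behind it, is of course the harder part.)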

>> 3. If we host in in infra, in case someone actually uses it (there
>> will be people like that), that will eat up lot of network traffic
>> potentially
>
> We can monitor this and adjust as needed.
>
>> 4. With local caching of images (working already) in nodepools we
>> lose the complexity of mirroring registries across nodepools
>>
>> So bottom line, having dockerhub/quay.io is simply easier.
>>
> See comment above.
>
>> > Doug
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][swg] Updates on the TC Vision for 2019

2017-05-18 Thread John Garbutt
On 17 May 2017 at 20:02, Dean Troyer  wrote:
> On Wed, May 17, 2017 at 1:47 PM, Doug Hellmann  wrote:
>> The timeline depends on who signed up to do the next revision. Did
>> we get someone to do that, yet, or are we still looking for a
>> volunteer?  (Note that I am not volunteering here, just asking for
>> status.)
>
> I believe John (johnthetubaguy), Chris (cdent) and I (dtroyer) are the
> ones identified to drive the next steps.  Timing-wise, having this
> wrapped up by 2nd week of June suits me great as I am planning some
> time off about then.  I see that as having a solid 'final' proposal by
> then, not necessarily having it approved.

Yep, I am hoping to help.

I am away the week before you, but some kind of tag team should be fine.

I hope to read through the feedback and start digesting it properly
tomorrow, with any luck.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Blueprint process question

2017-05-18 Thread Rob Cresswell
There isn't a specific time for blueprint review at the moment. It's usually 
whenever I get time, or someone asks via email or IRC. During the weekly 
meetings we always have time for open discussion of bugs/blueprints/patches etc.

Rob

On 18 May 2017 at 16:31, Waines, Greg 
> wrote:
A blueprint question for horizon team.

I registered a new blueprint the other day.
https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar

Do I need to do anything else to get this reviewed?  I don’t think so, but 
wanted to double check.
How frequently do horizon blueprints get reviewed?  once a week?

Greg.


p.s. ... the above blueprint does depend on a Vitrage blueprint which I do have 
in review.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Blueprint process question

2017-05-18 Thread Waines, Greg
A blueprint question for horizon team.

I registered a new blueprint the other day.
https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar

Do I need to do anything else to get this reviewed?  I don’t think so, but 
wanted to double check.
How frequently do horizon blueprints get reviewed?  once a week?

Greg.


p.s. ... the above blueprint does depend on a Vitrage blueprint which I do have 
in review.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Paul Belanger
On Tue, May 16, 2017 at 02:11:18PM +, Sam Yaple wrote:
> I would like to bring up a subject that hasn't really been discussed in
> this thread yet, forgive me if I missed an email mentioning this.
> 
> What I personally would like to see is a publishing infrastructure to allow
> pushing built images to an internal infra mirror/repo/registry for
> consumption of internal infra jobs (deployment tools like kolla-ansible and
> openstack-ansible). The images built from infra mirrors with security
> turned off are perfect for testing internally to infra.
> 
Zuulv3 should help a little with this, it will allow for a DAG of jobs,
which means the top level job could be an image build then all jobs below can
now consume said image.  The steps we are still working on are artifact handling,
but long term, it should be possible for the testing jobs to set up the dynamic
infrastructure needed themselves.
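
To make the DAG idea concrete, here is a sketch of the planned Zuulv3
syntax (job names are made up, nothing like this exists in project-config
yet):

    - project:
        name: openstack/kolla
        check:
          jobs:
            - kolla-build-images
            - kolla-ansible-deploy:
                dependencies:
                  - kolla-build-images

i.e. the deploy job only starts once the image build job has succeeded and
can then consume the images it produced.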

> If you build images properly in infra, then you will have an image that is
> not security checked (no gpg verification of packages) and completely
> unverifiable. These are absolutely not images we want to push to
> DockerHub/quay for obvious reasons. Security and verification being chief
> among them. They are absolutely not images that should ever be run in
> production and are only suited for testing. These are the only types of
> images that can come out of infra.
> 
We disable gpg for Ubuntu packaging for a specific reason, mostly because
our APT repos are not official mirrors of upstream. We regenerate indexes every
2 hours so as not to break long-running jobs.  We have talked in the past of
fixing this, but it requires openstack-infra to move to a new mirroring tool for APT.

> Thanks,
> SamYaple
> 
> On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski 
> wrote:
> 
> > On 16 May 2017 at 06:22, Doug Hellmann  wrote:
> > > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
> > >> Flavio Percoco wrote:
> > >> > From a release perspective, as Doug mentioned, we've avoided
> > releasing projects
> > >> > in any kind of built form. This was also one of the concerns I raised
> > when
> > >> > working on the proposal to support other programming languages. The
> > problem of
> > >> > releasing built images goes beyond the infrastructure requirements.
> > It's the
> > >> > message and the guarantees implied with the built product itself that
> > are the
> > >> > concern here. And I tend to agree with Doug that this might be a
> > problem for us
> > >> > as a community. Unfortunately, putting your name, Michal, as contact
> > point is
> > >> > not enough. Kolla is not the only project producing container images
> > and we need
> > >> > to be consistent in the way we release these images.
> > >> >
> > >> > Nothing prevents people for building their own images and uploading
> > them to
> > >> > dockerhub. Having this as part of the OpenStack's pipeline is a
> > problem.
> > >>
> > >> I totally subscribe to the concerns around publishing binaries (under
> > >> any form), and the expectations in terms of security maintenance that it
> > >> would set on the publisher. At the same time, we need to have images
> > >> available, for convenience and testing. So what is the best way to
> > >> achieve that without setting strong security maintenance expectations
> > >> for the OpenStack community ? We have several options:
> > >>
> > >> 1/ Have third-parties publish images
> > >> It is the current situation. The issue is that the Kolla team (and
> > >> likely others) would rather automate the process and use OpenStack
> > >> infrastructure for it.
> > >>
> > >> 2/ Have third-parties publish images, but through OpenStack infra
> > >> This would allow to automate the process, but it would be a bit weird to
> > >> use common infra resources to publish in a private repo.
> > >>
> > >> 3/ Publish transient (per-commit or daily) images
> > >> A "daily build" (especially if you replace it every day) would set
> > >> relatively-limited expectations in terms of maintenance. It would end up
> > >> picking up security updates in upstream layers, even if not immediately.
> > >>
> > >> 4/ Publish images and own them
> > >> Staff release / VMT / stable team in a way that lets us properly own
> > >> those images and publish them officially.
> > >>
> > >> Personally I think (4) is not realistic. I think we could make (3) work,
> > >> and I prefer it to (2). If all else fails, we should keep (1).
> > >>
> > >
> > > At the forum we talked about putting test images on a "private"
> > > repository hosted on openstack.org somewhere. I think that's option
> > > 3 from your list?
> > >
> > > Paul may be able to shed more light on the details of the technology
> > > (maybe it's just an Apache-served repo, rather than a full blown
> > > instance of Docker's service, for example).
> >
> > Issue with that is
> >
> > 1. Apache served is harder to use because we want to follow docker API
> > 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Paul Belanger
On Tue, May 16, 2017 at 06:57:04AM -0700, Michał Jastrzębski wrote:
> On 16 May 2017 at 06:22, Doug Hellmann  wrote:
> > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
> >> Flavio Percoco wrote:
> >> > From a release perspective, as Doug mentioned, we've avoided releasing 
> >> > projects
> >> > in any kind of built form. This was also one of the concerns I raised 
> >> > when
> >> > working on the proposal to support other programming languages. The 
> >> > problem of
> >> > releasing built images goes beyond the infrastructure requirements. It's 
> >> > the
> >> > message and the guarantees implied with the built product itself that 
> >> > are the
> >> > concern here. And I tend to agree with Doug that this might be a problem 
> >> > for us
> >> > as a community. Unfortunately, putting your name, Michal, as contact 
> >> > point is
> >> > not enough. Kolla is not the only project producing container images and 
> >> > we need
> >> > to be consistent in the way we release these images.
> >> >
> >> > Nothing prevents people for building their own images and uploading them 
> >> > to
> >> > dockerhub. Having this as part of the OpenStack's pipeline is a problem.
> >>
> >> I totally subscribe to the concerns around publishing binaries (under
> >> any form), and the expectations in terms of security maintenance that it
> >> would set on the publisher. At the same time, we need to have images
> >> available, for convenience and testing. So what is the best way to
> >> achieve that without setting strong security maintenance expectations
> >> for the OpenStack community ? We have several options:
> >>
> >> 1/ Have third-parties publish images
> >> It is the current situation. The issue is that the Kolla team (and
> >> likely others) would rather automate the process and use OpenStack
> >> infrastructure for it.
> >>
> >> 2/ Have third-parties publish images, but through OpenStack infra
> >> This would allow to automate the process, but it would be a bit weird to
> >> use common infra resources to publish in a private repo.
> >>
> >> 3/ Publish transient (per-commit or daily) images
> >> A "daily build" (especially if you replace it every day) would set
> >> relatively-limited expectations in terms of maintenance. It would end up
> >> picking up security updates in upstream layers, even if not immediately.
> >>
> >> 4/ Publish images and own them
> >> Staff release / VMT / stable team in a way that lets us properly own
> >> those images and publish them officially.
> >>
> >> Personally I think (4) is not realistic. I think we could make (3) work,
> >> and I prefer it to (2). If all else fails, we should keep (1).
> >>
> >
> > At the forum we talked about putting test images on a "private"
> > repository hosted on openstack.org somewhere. I think that's option
> > 3 from your list?
> >
> > Paul may be able to shed more light on the details of the technology
> > (maybe it's just an Apache-served repo, rather than a full blown
> > instance of Docker's service, for example).
> 
> Issue with that is
> 
> 1. Apache served is harder to use because we want to follow docker API
> and we'd have to reimplement it

No, the idea is apache is transparent; for now we have been using the proxypass
module in apache.  I think what Doug was mentioning was to have a primary docker
registry, which is RW for a publisher, then proxy it to regional mirrors as RO.
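
Roughly what that looks like, purely as an illustration (hostnames and the
port below are made up):

    <VirtualHost *:8082>
      ServerName mirror.regionone.example.org
      ProxyRequests Off
      ProxyPass        /v2/ https://registry.example.org/v2/
      ProxyPassReverse /v2/ https://registry.example.org/v2/
      # regional mirrors only answer reads; publishers push straight to
      # the primary registry
      <Location /v2/>
        <LimitExcept GET HEAD>
          Require all denied
        </LimitExcept>
      </Location>
    </VirtualHost>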

> 2. Running registry is single command
>
I've seen this mentioned a few times before; just because it is one command or
'simple' to do, doesn't mean we want to or can.  Currently our infrastructure is
complicated, for various reasons.  I am sure we'll get to the right technical
solution for making jobs happy. Remember our infrastructure spans 6 clouds and
15 regions and we want to make sure it is done correctly.

> 3. If we host in in infra, in case someone actually uses it (there
> will be people like that), that will eat up lot of network traffic
> potentially

We can monitor this and adjust as needed.

> 4. With local caching of images (working already) in nodepools we
> lose the complexity of mirroring registries across nodepools
> 
> So bottom line, having dockerhub/quay.io is simply easier.
> 
See comment above.

> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Lance Bragstad
On Thu, May 18, 2017 at 8:45 AM, Sean Dague  wrote:

> On 05/18/2017 09:27 AM, Doug Hellmann wrote:
> > Excerpts from Adrian Turjak's message of 2017-05-18 13:34:56 +1200:
> >
> >> Fully agree that expecting users of a particular cloud to understand how
> >> the policy stuff works is pointless, but it does fall on the cloud
> >> provider to educate and document their roles and the permissions of
> >> those roles. I think step 1 plus some basic role permissions for the
> >
> > Doesn't basing the API key permissions directly on roles also imply that
> > the cloud provider has to anticipate all of the possible ways API keys
> > might be used so they can then set up those roles?
>
> Not really. It's not explicit roles, it's inherited ones. At some point
> an administrator gave a user permission to do stuff (through roles that
> may be site specific). Don't care how we got there. The important thing
> is those are cloned to the APIKey, otherwise, the APIKey literally
> would not be able to do anything, ever. Discussing roles here was an
> attempt to look at how internals would work today, though it's
> definitely not part of contract of this new interface.
>
> There is a lot more implicitness in what roles mean (see
> https://bugs.launchpad.net/keystone/+bug/968696) which is another reason
> I'm really skeptical that we should have roles or policy points in the
> APIKey interface. Describing what they do in any particular installation
> is a ton of work. And you thought ordering a Medium coffee at Starbucks
> was annoying. :)
>
> The important thing is to make a clear and expressive API with the user
> so they can be really clear about what they expect a thing should do.
>
> >> Keys with the expectation of operators to document their roles/policy is
> >> a safe enough place to start, and for us to document and set some
> >> sensible default roles and policy. I don't think we currently have good
> >
> > This seems like an area where we want to encourage interoperability.
> > Policy doesn't do that today, because deployers can use arbitrary
> > names for roles and set permissions in those roles in any way they
> > want. That's fine for human users, but doesn't work for enabling
> > automation. If the sets of roles and permissions are different in
> > every cloud, how would anyone write a key allocation script that
> > could provision a key for their application on more than one cloud?
>
> So, this is where there are internals happening distinctly from user
> expressed intent.
>
> POST /apikey {}
>
> Creates an APIKey, in the project the token is currently authed to, and
> the APIKey inherits all the roles on that project that the user
> currently has. The user may or may not even know what these are. It's
> not a user interface.
>

If we know the user_id and project_id of the API key, then can't we build
the roles dynamically whenever the API key is used (unless the API key is
scoped to a single role)? This is the same approach we recently took with
token validation because it made the revocation API sub-system *way*
simpler (i.e. we no longer have to write revocation events anytime a role
is removed from a user on a project, instead the revocation happens
naturally when the token is used). Would this be helpful from a "default
open" PoV with API keys?

We touched on blacklisting certain operations a bit in Atlanta at the PTG
(see the API key section) [0]. I attempted to document it shortly after the
PTG, but some of those statements might be superseded at this point.


[0] https://www.lbragstad.com/blog/keystone-pike-ptg-summary


>
> The contract is "Give me an APIKey that can do what I do*" (* with the
> exception of self-propagating, i.e. the skynet exception).
>
> That's iteration #1. APIKey can do what I can do.
>
> Iteration #2 is fine grained permissions that make it so I can have an
> APIKey do far less than I can do.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Sean Dague
On 05/18/2017 09:27 AM, Doug Hellmann wrote:
> Excerpts from Adrian Turjak's message of 2017-05-18 13:34:56 +1200:
> 
>> Fully agree that expecting users of a particular cloud to understand how
>> the policy stuff works is pointless, but it does fall on the cloud
>> provider to educate and document their roles and the permissions of
>> those roles. I think step 1 plus some basic role permissions for the
> 
> Doesn't basing the API key permissions directly on roles also imply that
> the cloud provider has to anticipate all of the possible ways API keys
> might be used so they can then set up those roles?

Not really. It's not explicit roles, it's inherited ones. At some point
an administrator gave a user permission to do stuff (through roles that
may be site specific). Don't care how we got there. The important thing
is those are cloned to the APIKey, otherwise, the APIKey literally
would not be able to do anything, ever. Discussing roles here was an
attempt to look at how internals would work today, though it's
definitely not part of contract of this new interface.

There is a lot more implicitness in what roles mean (see
https://bugs.launchpad.net/keystone/+bug/968696) which is another reason
I'm really skeptical that we should have roles or policy points in the
APIKey interface. Describing what they do in any particular installation
is a ton of work. And you thought ordering a Medium coffee at Starbucks
was annoying. :)

The important thing is to make a clear and expressive API with the user
so they can be really clear about what they expect a thing should do.

>> Keys with the expectation of operators to document their roles/policy is
>> a safe enough place to start, and for us to document and set some
>> sensible default roles and policy. I don't think we currently have good
> 
> This seems like an area where we want to encourage interoperability.
> Policy doesn't do that today, because deployers can use arbitrary
> names for roles and set permissions in those roles in any way they
> want. That's fine for human users, but doesn't work for enabling
> automation. If the sets of roles and permissions are different in
> every cloud, how would anyone write a key allocation script that
> could provision a key for their application on more than one cloud?

So, this is where there are internals happening distinctly from user
expressed intent.

POST /apikey {}

Creates an APIKey, in the project the token is currently authed to, and
the APIKey inherits all the roles on that project that the user
currently has. The user may or may not even know what these are. It's
not a user interface.
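
To make that concrete, iteration #1 on the wire might look something like
this (nothing below exists yet; the path, field names and values are all
made up for illustration):

    POST /apikey
    X-Auth-Token: <token scoped to the project>
    {}

    HTTP/1.1 201 Created
    {
        "apikey": {
            "id": "a1b2c3",
            "project_id": "d4e5f6",
            "secret": "<shown once, then only stored hashed>",
            "roles": ["Member"]
        }
    }

where "roles" is simply whatever the creating user had on the project at
creation time.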

The contract is "Give me an APIKey that can do what I do*" (* with the
exception of self-propagating, i.e. the skynet exception).

That's iteration #1. APIKey can do what I can do.

Iteration #2 is fine grained permissions that make it so I can have an
APIKey do far less than I can do.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Doug Hellmann
Excerpts from Adrian Turjak's message of 2017-05-18 13:34:56 +1200:

> Fully agree that expecting users of a particular cloud to understand how
> the policy stuff works is pointless, but it does fall on the cloud
> provider to educate and document their roles and the permissions of
> those roles. I think step 1 plus some basic role permissions for the

Doesn't basing the API key permissions directly on roles also imply that
the cloud provider has to anticipate all of the possible ways API keys
might be used so they can then set up those roles?

> Keys with the expectation of operators to document their roles/policy is
> a safe enough place to start, and for us to document and set some
> sensible default roles and policy. I don't think we currently have good

This seems like an area where we want to encourage interoperability.
Policy doesn't do that today, because deployers can use arbitrary
names for roles and set permissions in those roles in any way they
want. That's fine for human users, but doesn't work for enabling
automation. If the sets of roles and permissions are different in
every cloud, how would anyone write a key allocation script that
could provision a key for their application on more than one cloud?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Monty Taylor

On 05/18/2017 06:53 AM, Sean Dague wrote:

On 05/17/2017 09:34 PM, Adrian Turjak wrote:



On 17/05/17 23:20, Sean Dague wrote:

On 05/16/2017 07:34 PM, Adrian Turjak wrote:


Anyway that aside, I'm sold on API keys as a concept in this case
provided they are project owned rather than user owned, I just don't
think we should make them too unique, and we shouldn't be giving them a
unique policy system because that way madness lies.

Policy is already a complicated system, lets not have to maintain two
systems. Any policy system we make for API keys ought to be built on top
of the policy systems we end up with using roles. An explosion of roles
will happen with dynamic policy anyway, and yes sadly it will be a DSL
for some clouds, but no sensible cloud operator is going to allow a
separate policy system in for API keys unless they can control it. I
don't think we can solve the "all clouds have the same policy for API
keys" problem and I'd suggest putting that in the "too hard for now
box". Thus we do your step 1, and leave step 2 until later when we have
a better idea of how to do it without pissing off a lot of operators,
breaking standard policy, or maintaining an entirely separate policy system.

This is definitely steps. And I agree we do step 1 to get us at least
revokable keys. That's Pike (hopefully), and then figure out the path
through step 2.

The thing about the policy system today, is it's designed for operators.
Honestly, the only way you really know what policy is really doing is if
you read the source code of openstack as well. That is very very far
from a declarative way of a user to further drop privileges. If we went
straight forward from here we're increasing the audience for this by a
factor of 1000+, with documentation, guarantees that policy points don't
ever change. No one has been thinking about microversioning on a policy
front, for instance. It now becomes part of a much stricter contract,
with a much wider audience.

I think the user experience of API use is going to be really bad if we
have to teach the world about our policy names. They are non mnemonic
for people familiar with the API. Even just in building up testing in
the Nova tree over the years mistakes have been made because it wasn't
super clear what routes the policies in question were modifying. Nova
did a giant replacement of all the policy names 2 cycles ago because of
that. It's better now, but still not what I'd want to thrust on people
that don't have at least 20% of the Nova source tree in their head.

We also need to realize there are going to be 2 levels of permissions
here. There is going to be what the operator allows (which is policy +
roles they have built up on there side), and then what the user allows
in their API Key. I would imagine that an API Key created by a user
inherits any roles that user has (the API Key is still owned by a
project). The user at any time can change the allowed routes on the key.
The admin at any time can change the role / policy structure. *both*
have to be checked on operations, and only if both succeed do we move
forward.

I think another question where we're clearly in a different space, is if
we think about granting an API Key user the ability to create a server.
In a classical role/policy move, that would require not just (compute,
"os_compute_api:servers:create"), but also (image, "get_image"), (image,
"download_image"), (network, "get_port"), (network, "create_port"), and
possibly much more. Missing one of these policies means a deep late
fail, which is not debugable unless you have the source code in front of
you. And not only requires knowledge of the OpenStack API, but deep
knowledge of the particular deployment, because the permissions needed
around networking might be different on different clouds.

Clearly, that's not the right experience for someone that just wants to
write a cloud native application that works on multiple clouds.

So we definitely are already doing something a bit different, that is
going to need to not be evaluated everywhere that policy is current
evaluated, but only *on the initial inbound request*. The user would
express this as (region1, compute, /servers, POST), which means that's
the API call they want this API Key to be able to make. Subsequent
requests wrapped in service tokens bypass checking API Key permissions.
The role system is still in play, keeping the API Key in the box the
operator wanted to put it in.

Given that these systems are going to act differently, and at different
times, I don't actually see it being a path to madness. I actually see
it as less confusing to manage correctly in the code, because they two
things won't get confused, and the wrong permissions checks get made. I
totally agree that policy today is far too complicated, and I fear
making it a new related, but very different task, way more than building
a different declarative approach that is easier for users to get right.

But... that being said, all of this 

Re: [openstack-dev] [Heat] Heat template example repository

2017-05-18 Thread Mehdi Abaakouk

On Thu, May 18, 2017 at 11:26:41AM +0200, Lance Haig wrote:



This is not only an Aodh/Ceilometer alarm issue. I can confirm that
whatever the resource prefix, this works well.

But an alarm description also contains a query to an external API to
retrieve statistics. Aodh alarms are currently able to
query the deprecated Ceilometer-API and the Gnocchi-API. Creating alarms
that query the deprecated Ceilometer-API is obviously deprecated too.

Unfortunately, I have seen that all templates still use the deprecated
Ceilometer-API. Since Ocata, this API doesn't even run by default.

I just propose an update for one template as example here:

https://review.openstack.org/#/c/465817/

I can't really do the others, I don't have enough knowledge in
Mistral/Senlin/Openshift.
One of the challenges we have is that we have users who are on 
different versions of heat and so if we change the examples to 
accommodate the new features then we effectively block them from being 
able to use these or learn from them.


I think it's too late to use the term 'new feature' for
Aodh/Telemetry/Gnocchi. It's not new anymore, it's current. The current
templates just haven't worked for at least 3 cycles... And the repo
still doesn't have templates that use the currently supported APIs.

How many previous versions do you want to support in this repo? I doubt it's
more than 2-3 cycles, so you could just fix all the autoscaling/autohealing
templates today.
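
For reference, a Gnocchi-backed alarm in a template looks roughly like the
following. This is a from-memory sketch (resource names like scaleup_policy
are invented, and the property names should be double-checked against the
OS::Aodh::GnocchiAggregationByResourcesAlarm documentation):

    cpu_alarm_high:
      type: OS::Aodh::GnocchiAggregationByResourcesAlarm
      properties:
        description: Scale up if the average CPU usage is too high
        metric: cpu_util
        aggregation_method: mean
        granularity: 600
        evaluation_periods: 1
        threshold: 80
        comparison_operator: gt
        resource_type: instance
        query:
          str_replace:
            template: '{"=": {"server_group": "stack_id"}}'
            params:
              stack_id: {get_param: "OS::stack_id"}
        alarm_actions:
          - {get_attr: [scaleup_policy, alarm_url]}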

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] How to slice the week to minimize conflicts

2017-05-18 Thread Thierry Carrez
Dmitry Tantsur wrote:
>> [...]
>> After giving it some thought, my current thinking is that we should
>> still split the week in two, but should move away from an arbitrary
>> horizontal/vertical split. My strawman proposal would be to split the
>> week between inter-project work (+ teams that rely mostly on liaisons in
>> other teams) on Monday-Tuesday, and team-specific work on
>> Wednesday-Friday:
>>
>> Example of Monday-Tuesday rooms:
>> Interop WG, Docs, QA, API WG, Packaging WG, Oslo, Goals helproom,
>> Infra/RelMgt/support teams helpdesk, TC/SWG room, VM Working group...
>>
>> Example of Wednesday-Thursday or Wednesday-Friday rooms:
>> Nova, Cinder, Neutron, Swift, TripleO, Kolla, Infra...
> 
> Two objections here:
> 
> 1. In Atlanta moving cross-project things to the first 2 days resulted in
> a big share of people arriving on Tuesday and just skipping it. This is
> partly because of budget, partly because they did not associate
> themselves with any cross-project group. I wonder if we should motivate
> people to participate in at least some of these by moving one of the
> days to the middle of the week.

That was a feature rather than a bug. Give people the option to pick a
smaller timespan, rather than forcing them to attend all 5 days. I mean,
we totally could have the Nova team meetup on Monday, Thursday and
Friday, but I suspect that would result in some people only attending
the last two days.

> 2. Doing TripleO in parallel to other projects was quite unfortunate
> IMO. TripleO is an integration project. I would love TripleO people to
> come to Ironic sessions, and I'd like to attend TripleO sessions myself.
> It is essentially impossible with this suggestion.

That's funny, because they were originally programmed on Mon-Tue (for
the reasons you mention) but they (the TripleO team) explicitly
requested to be moved to Wed-Fri, to be able to attend the inter-project
stuff instead. So I guess the mileage really varies :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Sean Dague
On 05/17/2017 09:34 PM, Adrian Turjak wrote:
> 
> 
> On 17/05/17 23:20, Sean Dague wrote:
>> On 05/16/2017 07:34 PM, Adrian Turjak wrote:
>> 
>>> Anyway that aside, I'm sold on API keys as a concept in this case
>>> provided they are project owned rather than user owned, I just don't
>>> think we should make them too unique, and we shouldn't be giving them a
>>> unique policy system because that way madness lies.
>>>
>>> Policy is already a complicated system, lets not have to maintain two
>>> systems. Any policy system we make for API keys ought to be built on top
>>> of the policy systems we end up with using roles. An explosion of roles
>>> will happen with dynamic policy anyway, and yes sadly it will be a DSL
>>> for some clouds, but no sensible cloud operator is going to allow a
>>> separate policy system in for API keys unless they can control it. I
>>> don't think we can solve the "all clouds have the same policy for API
>>> keys" problem and I'd suggest putting that in the "too hard for now
>>> box". Thus we do your step 1, and leave step 2 until later when we have
>>> a better idea of how to do it without pissing off a lot of operators,
>>> breaking standard policy, or maintaining an entirely separate policy system.
>> This is definitely steps. And I agree we do step 1 to get us at least
>> revokable keys. That's Pike (hopefully), and then figure out the path
>> through step 2.
>>
>> The thing about the policy system today, is it's designed for operators.
>> Honestly, the only way you really know what policy is really doing is if
>> you read the source code of openstack as well. That is very very far
>> from a declarative way of a user to further drop privileges. If we went
>> straight forward from here we're increasing the audience for this by a
>> factor of 1000+, with documentation, guarantees that policy points don't
>> ever change. No one has been thinking about microversioning on a policy
>> front, for instance. It now becomes part of a much stricter contract,
>> with a much wider audience.
>>
>> I think the user experience of API use is going to be really bad if we
>> have to teach the world about our policy names. They are non mnemonic
>> for people familiar with the API. Even just in building up testing in
>> the Nova tree over the years mistakes have been made because it wasn't
>> super clear what routes the policies in question were modifying. Nova
>> did a giant replacement of all the policy names 2 cycles ago because of
>> that. It's better now, but still not what I'd want to thrust on people
>> that don't have at least 20% of the Nova source tree in their head.
>>
>> We also need to realize there are going to be 2 levels of permissions
>> here. There is going to be what the operator allows (which is policy +
>> roles they have built up on there side), and then what the user allows
>> in their API Key. I would imagine that an API Key created by a user
>> inherits any roles that user has (the API Key is still owned by a
>> project). The user at any time can change the allowed routes on the key.
>> The admin at any time can change the role / policy structure. *both*
>> have to be checked on operations, and only if both succeed do we move
>> forward.
>>
>> I think another question where we're clearly in a different space, is if
>> we think about granting an API Key user the ability to create a server.
>> In a classical role/policy move, that would require not just (compute,
>> "os_compute_api:servers:create"), but also (image, "get_image"), (image,
>> "download_image"), (network, "get_port"), (network, "create_port"), and
>> possibly much more. Missing one of these policies means a deep late
>> fail, which is not debugable unless you have the source code in front of
>> you. And not only requires knowledge of the OpenStack API, but deep
>> knowledge of the particular deployment, because the permissions needed
>> around networking might be different on different clouds.
>>
>> Clearly, that's not the right experience for someone that just wants to
>> write a cloud native application that works on multiple clouds.
>>
>> So we definitely are already doing something a bit different, that is
>> going to need to not be evaluated everywhere that policy is current
>> evaluated, but only *on the initial inbound request*. The user would
>> express this as (region1, compute, /servers, POST), which means that's
>> the API call they want this API Key to be able to make. Subsequent
>> requests wrapped in service tokens bypass checking API Key permissions.
>> The role system is still in play, keeping the API Key in the box the
>> operator wanted to put it in.
>>
>> Given that these systems are going to act differently, and at different
>> times, I don't actually see it being a path to madness. I actually see
>> it as less confusing to manage correctly in the code, because they two
>> things won't get confused, and the wrong permissions checks get made. I
>> totally agree that policy today is far too 

Re: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-18 Thread Dan Prince
On Thu, 2017-05-18 at 03:29 +, Steven Dake (stdake) wrote:
> My experience with BTRFS has been flawless.  My experience with
> overlayfs is that occasionally (older centos kernels) returned
>  as permissions (rather the drwxrwrw).  This most often
> happened after using the yum overlay driver.  I’ve found overlay to
> be pretty reliable as a “read-only” filesystem – eg just serving up
> container images, not persistent storage.

We've now switched to 'overlay2' and things seem happier. CI passes and
for me locally I'm not seeing any issues in TripleO CI yet either.
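
For anyone who wants to try the same thing locally, the switch itself is
small. One way to select the driver on CentOS 7 with docker 1.12 is via
/etc/docker/daemon.json (illustrative only, not necessarily how the TripleO
patch does it):

    {
        "storage-driver": "overlay2",
        "storage-opts": [
            "overlay2.override_kernel_check=true"
        ]
    }

The override_kernel_check option is there because the RHEL/CentOS 3.10
kernel reports an older version than overlay2 nominally expects.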

Curious to see if the Kolla tests upstream work with it as well:

https://review.openstack.org/#/c/465920/

Dan

>  
> YMMV.  Overlayfs is the long-term filesystem of choice for the use
> case you outlined.  I’ve heard overlayfs has improved over the last
> year in terms of backport quality so maybe it is approaching ready.
>  
> Regards
> -steve
>  
>  
> From: Steve Baker 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Date: Wednesday, May 17, 2017 at 7:30 PM
> To: "OpenStack Development Mailing List (not for usage questions)"  penstack-...@lists.openstack.org>, "dwa...@redhat.com"  .com>
> Subject: Re: [openstack-dev] [TripleO][Kolla] default docker storage
> backend for TripleO
>  
>  
>  
> On Thu, May 18, 2017 at 12:38 PM, Fox, Kevin M 
> wrote:
> I've only used btrfs and devicemapper on el7. btrfs has worked well.
> devicemapper ate my data on multiple occasions. Is redhat supporting
> overlay in the el7 kernels now?
>  
> overlay2 is documented as a Technology Preview graph driver in the
> Atomic Host 7.3.4 release notes:
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/release_notes/
>  
>  
>  
> _
> From: Dan Prince [dpri...@redhat.com]
> Sent: Wednesday, May 17, 2017 5:24 PM
> To: openstack-dev
> Subject: [openstack-dev] [TripleO][Kolla] default docker storage
> backend for    TripleO
> 
> TripleO currently uses the default "loopback" docker storage device.
> This is not recommended for production (see 'docker info').
> 
> We've been poking around with docker storage backends in TripleO for
> almost 2 months now here:
> 
>  https://review.openstack.org/#/c/451916/
> 
> For TripleO there are a couple of considerations:
> 
>  - we intend to support in place upgrades from baremetal to
> containers
> 
>  - when doing in place upgrades re-partitioning disks is hard, if not
> impossible. This makes using devicemapper hard.
> 
>  - we'd like to to use a docker storage backend that is production
> ready.
> 
>  - our target OS is latest Centos/RHEL 7
> 
> As we approach pike 2 I'm keen to move towards a more production
> docker
> storage backend. Is there consensus that 'overlay2' is a reasonable
> approach to this? Or is it too early to use that with the
> combinations
> above?
> 
> Looking around at what is recommended in other projects it seems to
> be
> a mix as well from devicemapper to btrfs.
> 
> [1] https://docs.openshift.com/container-platform/3.3/install_config/install/host_preparation.html#configuring-docker-storage
> [2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedHat.sh#n30
> 
>  
> I'd love to be able to use overlay2. I've CCed Daniel Walsh with the
> hope we can get a general overview of the maturity of overlay2 on
> rhel/centos.
>  
> I tried using overlay2 recently to create an undercloud and hit an
> issue doing a "cp -a *" on deleted files. This was with kernel-
> 3.10.0-514.16.1 and docker-1.12.6.
>  
> I want to get to the bottom of it so I'll reproduce and raise a bug
> as appropriate.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Networking-vSphere]

2017-05-18 Thread pravin ghuge
Hi,
I am trying to configure OpenStack with VMware as the compute backend and
the network driver set to vlan/vxlan. My instances are being created in
vCenter, but they are not getting an IP.
The bridges created by the neutron node show up in vCenter, but they are
not attached to any ESXi host. I figured out that a plugin is required,
i.e. OVSvApp or NSX, and I want to use OVSvApp, but there is no proper
installation doc for it. Please provide the steps to be done on the
OpenStack neutron/controller/compute nodes and in vCenter (VDS and
uplinks) related to OVSvApp.


Thanks and Regards.

Praveen Ghuge
(askopenstack ID - jarvis@openstack)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Bogdan Dobrelya
On 16.05.2017 20:57, Michał Jastrzębski wrote:
> On 16 May 2017 at 11:49, Doug Hellmann  wrote:
>> Excerpts from Michał Jastrzębski's message of 2017-05-16 11:38:19 -0700:
>>> On 16 May 2017 at 11:27, Doug Hellmann  wrote:
 Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
> So another consideration. Do you think whole rule of "not building
> binares" should be reconsidered? We are kind of new use case here. We
> aren't distro but we are packagers (kind of). I don't think putting us
> on equal footing as Red Hat, Canonical or other companies is correct
> here.
>
> K8s is something we want to work with, and what we are discussing is
> central to how k8s is used. K8s community creates this culture of
> "organic packages" built by anyone, most of companies/projects already
> have semi-official container images and I think expectations on
> quality of these are well...none? You get what you're given and if you
> don't agree, there is always way to reproduce this yourself.
>
> [Another huge snip]
>

 I wanted to have the discussion, but my position for now is that
 we should continue as we have been and not change the policy.

 I don't have a problem with any individual or group of individuals
 publishing their own organic packages. The issue I have is with
 making sure it is clear those *are* "organic" and not officially
 supported by the broader community. One way to do that is to say
 they need to be built somewhere other than on our shared infrastructure.
 There may be other ways, though, so I'm looking for input on that.
>>>
>>> What I was trying to say here is, current discussion aside, maybe we
>>> should revise this "not supported by broader community" rule. They may
>>> very well be supported to a certain point. Support is not just yes or
>>> no, it's all the levels in between. I think we can afford *some* level
>>> of official support, even if that some level means best effort made by
>>> community. If Kolla community, not an individual like myself, would
>>> like to support these images best to our ability, why aren't we
>>> allowed? As long as we are crystal clear what is scope of our support,
>>> why can't we do it? I think we've already proven that it's going to be
>>> tremendously useful for a lot of people, even in a shape we discuss
>>> today, that is "best effort, you still need to validate it for
>>> yourself"...
>>
>> Right, I understood that. So far I haven't heard anything to change
>> my mind, though.
>>
>> I think you're underestimating the amount of risk you're taking on
>> for yourselves and by extension the rest of the community, and
>> introducing to potential consumers of the images, by promising to
>> support production deployments with a small team of people without
>> the economic structure in place to sustain the work.
> 
> Again, we tell what it is and what it is not. I think support is
> loaded term here. Instead we can create lengthy documentation
> explaining to a detail lifecycle and testing certain container had to
> pass before it lands in dockerhub. Maybe add link to particular set of
> jobs that container had passed. Only thing we can offer is automated
> and transparent process of publishing. On top of that? You are on your
> own. But even within these boundaries, a lot of people could have
> better experience of running OpenStack...

That totally makes sense. Supporting builds like "a published container
passed some test scenarios in our CI gates, here is a link to the
particular set of jobs that container passed" benefits everyone, has
nothing to do with the production use cases, and guarantees nothing in
terms of supporting them.

> 
>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] How to slice the week to minimize conflicts

2017-05-18 Thread Dmitry Tantsur

Hi Thierry, thanks for raising it. I think it's very important to discuss 
indeed.

On 05/18/2017 11:27 AM, Thierry Carrez wrote:

Hi everyone,

For the PTG events we have a number of rooms available for 5 days, of
which we need to make the best usage. We also want to keep it simple and
productive, so we want to minimize room changes (allocating the same
room to the same group for one or more days).

For the first PTG in Atlanta, we split the week into two groups.
Monday-Tuesday for "horizontal" project team meetups (Infra, QA...) and
workgroups (API WG, Goals helprooms...), and Wednesday-Friday for
"vertical" project team meetups (Nova, Swift...). This kinda worked, but
the feedback we received called for more optimizations and reduced
conflicts.

In particular, some projects which have a lot of contributors overlap
(Storlets/Swift, or Manila/Cinder) were all considered "vertical" and
happened at the same time. Also horizontal team members ended up having
issues to attend workgroups, and had nowhere to go for the rest of the
week. Finally, on Monday-Tuesday the rooms that had the most success
were inter-project ones we didn't really anticipate (like the API WG),
while rooms with horizontal project team meetups were a bit
under-attended. While we have a lot of constraints, I think we can
optimize a bit better.

After giving it some thought, my current thinking is that we should
still split the week in two, but should move away from an arbitrary
horizontal/vertical split. My strawman proposal would be to split the
week between inter-project work (+ teams that rely mostly on liaisons in
other teams) on Monday-Tuesday, and team-specific work on Wednesday-Friday:

Example of Monday-Tuesday rooms:
Interop WG, Docs, QA, API WG, Packaging WG, Oslo, Goals helproom,
Infra/RelMgt/support teams helpdesk, TC/SWG room, VM Working group...

Example of Wednesday-Thursday or Wednesday-Friday rooms:
Nova, Cinder, Neutron, Swift, TripleO, Kolla, Infra...


Two objections here:

1. In Atlanta moving cross-project things to the first 2 days resulted in a big 
share of people arriving on Tuesday and just skipping it. This is partly because 
of budget, partly because they did not associate themselves with any 
cross-project group. I wonder if we should motivate people to participate in at 
least some of these by moving one of the days to the middle of the week.


2. Doing TripleO in parallel to other projects was quite unfortunate IMO. 
TripleO is an integration project. I would love TripleO people to come to Ironic 
sessions, and I'd like to attend TripleO sessions myself. It is essentially 
impossible with this suggestion.




(NB: in this example infra team members end up being available in a
general support team helpdesk room in the first part of the week, and
having a regular team meetup on the second part of the week)

In summary, Monday-Tuesday would be mostly around themes, while
Wednesday-Friday would be mostly around teams. In addition to that,
teams that /prefer/ to run on Monday-Tuesday to avoid conflicting with
another project meetup (like Manila wanting to avoid conflicting with
Cinder, or Storlets wanting to avoid conflicting with Swift) could
*choose* to go for Monday-Tuesday instead of Wednesday-Friday.

It's a bit of a long shot (we'd still want to equilibrate both sides in
terms of room usage, so it's likely that the teams that are late to
decide to participate would be pushed on one side or the other), but I
think it's a good incremental change that could solve some of the issues
reported in the Atlanta week slicing, as well as generally make
inter-project coordination simpler.

If we adopt that format, we need to be pretty flexible in terms of what
is a "workgroup": to me, any inter-project work that would like to have
a one-day or two-day room should be able to get some.
Nova-{Cinder,Neutron,Ironic} discussions would for example happen in the
VM & BM working group room, but we can imagine others just like it.


Big +1 to being flexible in defining work groups. Actually, we had to organize 
an Ironic-Neutron room on Monday anyway. The only problem, as I note above, is that 
many people may opt out of being there on Mon-Tue.




Let me know what you think. Also feel free to propose alternate creative
ways to slice the space and time we'll have. We need to open
registration very soon (June 1st is the current target), and we'd like
to have a rough idea of the program before we do that (so that people
can report which days they will attend more accurately).




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Proposal to change the timing of Feature Freeze

2017-05-18 Thread Thierry Carrez
Chris Jones wrote:
> I have a fairly simple proposal to make - I'd like to suggest that
> Feature Freeze move to being much earlier in the release cycle (no
> earlier than M.1 and no later than M.2 would be my preference).
> [...]

Hey Chris,

From my (admittedly too long) experience in release management, forcing
more time for stabilization work does not magically yield better
results. There is nothing like a "perfect" release, it's always a "good
enough" trade-off. Holding releases in the hope that more bugs will be
discovered and fixed only works so far: some bugs will only emerge once
people start deploying software in their unique environments and use
cases. It's better to put it out there when it's "good enough".

So a Feature Freeze should be placed early enough to give you an
opportunity to slow down, fix known blockers, have documentation and
translations catch up. Currently that means 5-6 weeks. Moving it earlier
than this reasonable trade-off just brings more pain for little benefit.
It is hard enough to get people to stop pushing features and feature
freeze exceptions and do stabilization work for 5 weeks. Forcing a
longer freeze would just see an explosion of local feature branches, not
a more "stable" release.

Furthermore, we have a number of projects (newly-created ones that need
to release early, or mature ones that want to push that occasional new
feature more often) that bypass the feature freeze / RC system
completely. With more constraints, I'd expect most projects to switch to
that model instead.

> Rather than getting hung up on the specific numbers of weeks, perhaps it
> would be helpful to start with opinions on whether or not there is
> enough stabilisation time in the current release schedules.

Compared to the early days of OpenStack (where we'd still use a 5-6-week
freeze period) our automated testing has come a long way. The cases
where we need to respin release candidates due to a major blocker that
was not caught in automated testing are becoming rarer. If anything, the
data points to a need for shorter freezes rather than longer ones. The
main reason we are still at 5-6 weeks these days is for translations and
docs, rather than real stabilization work. I'm not advocating for making
it shorter, I still think it's the right trade-off :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] ptgbot: how to make "what's currently happening" emerge

2017-05-18 Thread Davanum Srinivas
On Thu, May 18, 2017 at 5:57 AM, Thierry Carrez  wrote:
> Hi again,
>
> For the PTG events we have, by design, a pretty loose schedule. Each
> room is free to organize their agenda in whatever way they see fit, and
> take breaks whenever they need. This flexibility is key to keep our
> productivity at those events at a maximum. In Atlanta, most teams ended
> up dynamically building a loose agenda on a room etherpad.
>
> This approach is optimized for team meetups and people who strongly
> identify with one team in particular. In Atlanta during the first two
> days, where a lot of vertical team contributors did not really know
> which room to go to, it was very difficult to get a feel of what is
> currently being discussed and where they could go. Looking into 20
> etherpads and trying to figure out what is currently being discussed is
> just not practical. In the feedback we received, the need to expose the
> schedule more visibly was the #1 request.
>
> It is a thin line to walk on. We clearly don't want to publish a
> schedule in advance or be tied to pre-established timeboxes for every
> topic. We want it to be pretty fluid and natural, but we still need to
> somehow make "what's currently happening" (and "what will be discussed
> next") emerge globally.
>
> One lightweight solution I've been working on is an IRC bot ("ptgbot")
> that would produce a static webpage. Room leaders would update it on
> #openstack-ptg using commands like:
>
> #swift now discussing ring placement optimizations
> #swift next at 14:00 we plan to discuss better #keystone integration
>
> and the bot would collect all those "now" and "next" items and publish a
> single (mobile-friendly) webpage, (which would also include
> ethercalc-scheduled things, if we keep any).
>
> The IRC commands double as natural language announcements for those that
> are following activity on the IRC channel. Hashtags can be used to
> attract other teams' attention. You can announce later discussions, but
> the commitment on exact timing is limited. Every "now" command would
> clear "next" entries, so that there wouldn't be any stale entries and
> the command interface would be kept dead simple (at the cost of a bit of
> repetition).
>
> I have POC code for this bot already. Before I publish it (and start
> work to make infra support it), I just wanted to see if this is the
> right direction and if I should continue to work on it :) I feel like
> it's an incremental improvement that preserves the flexibility and
> self-scheduling while addressing the main visibility concern. If you
> have better ideas, please let me know !

Love it! +1 to try this at the next ptg.

-- Dims

> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptg] ptgbot: how to make "what's currently happening" emerge

2017-05-18 Thread Thierry Carrez
Hi again,

For the PTG events we have, by design, a pretty loose schedule. Each
room is free to organize their agenda in whatever way they see fit, and
take breaks whenever they need. This flexibility is key to keep our
productivity at those events at a maximum. In Atlanta, most teams ended
up dynamically building a loose agenda on a room etherpad.

This approach is optimized for team meetups and people who strongly
identify with one team in particular. In Atlanta during the first two
days, where a lot of vertical team contributors did not really know
which room to go to, it was very difficult to get a feel of what is
currently being discussed and where they could go. Looking into 20
etherpads and trying to figure out what is currently being discussed is
just not practical. In the feedback we received, the need to expose the
schedule more visibly was the #1 request.

It is a thin line to walk on. We clearly don't want to publish a
schedule in advance or be tied to pre-established timeboxes for every
topic. We want it to be pretty fluid and natural, but we still need to
somehow make "what's currently happening" (and "what will be discussed
next") emerge globally.

One lightweight solution I've been working on is an IRC bot ("ptgbot")
that would produce a static webpage. Room leaders would update it on
#openstack-ptg using commands like:

#swift now discussing ring placement optimizations
#swift next at 14:00 we plan to discuss better #keystone integration

and the bot would collect all those "now" and "next" items and publish a
single (mobile-friendly) webpage, (which would also include
ethercalc-scheduled things, if we keep any).

The IRC commands double as natural language announcements for those that
are following activity on the IRC channel. Hashtags can be used to
attract other teams' attention. You can announce later discussions, but
the commitment on exact timing is limited. Every "now" command would
clear "next" entries, so that there wouldn't be any stale entries and
the command interface would be kept dead simple (at the cost of a bit of
repetition).
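
For the curious, here is a very rough sketch of the kind of state handling
described above. This is not the actual POC code, just an illustration in
Python; the names and the JSON dump are placeholders for whatever the real
bot renders into the static page:

    # Rough sketch of the "now"/"next" handling, NOT the actual ptgbot code.
    import json

    state = {}  # team name -> {"now": str or None, "next": [str]}

    def handle(nick, line):
        """Handle a '#team now ...' or '#team next ...' announcement."""
        if not line.startswith('#'):
            return
        try:
            track, verb, rest = line.split(' ', 2)
        except ValueError:
            return
        room = state.setdefault(track.lstrip('#'), {"now": None, "next": []})
        if verb == 'now':
            # A new "now" clears any stale "next" entries.
            room["now"] = rest
            room["next"] = []
        elif verb == 'next':
            room["next"].append(rest)

    def render():
        """Dump the collected state; the real bot would render a webpage."""
        return json.dumps(state, indent=2, sort_keys=True)

    handle('ttx', '#swift now discussing ring placement optimizations')
    handle('ttx', '#swift next at 14:00 we plan to discuss better #keystone integration')
    print(render())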

I have POC code for this bot already. Before I publish it (and start
work to make infra support it), I just wanted to see if this is the
right direction and if I should continue to work on it :) I feel like
it's an incremental improvement that preserves the flexibility and
self-scheduling while addressing the main visibility concern. If you
have better ideas, please let me know !

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] Proposing Fanglei Zhu for Tempest core

2017-05-18 Thread Attila Fazekas
+1, Totally agree.

Best Regards,
Attila

On Tue, May 16, 2017 at 10:22 AM, Andrea Frittoli  wrote:

> Hello team,
>
> I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.
>
> Over the past two cycles Fanglei has been steadily contributing to Tempest
> and its community.
> She's done a great deal of work in making Tempest code cleaner, easier to
> read, maintain and
> debug, fixing bugs and removing cruft. Both her code and her
> reviews demonstrate a
> very good understanding of Tempest internals and of the project future
> direction.
> I believe Fanglei will make an excellent addition to the team.
>
> As per the usual, if the current Tempest core team members would please
> vote +1
> or -1(veto) to the nomination when you get a chance. We'll keep the polls
> open
> for 5 days or until everyone has voted.
>
> References:
> https://review.openstack.org/#/q/owner:zhu.fanglei%2540zte.com.cn
> https://review.openstack.org/#/q/reviewer:zhufl
>
> Thank you,
>
> Andrea (andreaf)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-18 Thread Lance Haig



On 18.05.17 08:00, Mehdi Abaakouk wrote:

Hi,

On Mon, May 15, 2017 at 01:01:57PM -0400, Zane Bitter wrote:

On 15/05/17 12:10, Steven Hardy wrote:

On Mon, May 15, 2017 at 04:46:28PM +0200, Lance Haig wrote:

Hi Steve,

I am happy to assist in any way to be honest.


It was great to meet you in Boston, and thanks very much for 
volunteering to help out.


BTW one issue I'm aware of is that the autoscaling template examples 
we have all use OS::Ceilometer::* resources for alarms. We have a 
global environment thingy that maps those to OS::Aodh::*, so at least 
in theory those templates should continue to work, but there are 
actually no examples that I can find of autoscaling templates doing 
things the way we want everyone to do them.


This is not only an Aodh/Ceilometer alarm issue. I can confirm that
whatever the resource prefix, this works well.

But an alarm description also contains a query to an external API to
retrieve statistics. Aodh alarms are currently able to
query the deprecated Ceilometer-API and the Gnocchi-API. Creating alarms
that query the deprecated Ceilometer-API is obviously deprecated too.

Unfortunately, I have seen that all templates still use the deprecated
Ceilometer-API. Since Ocata, this API doesn't even run by default.

I just proposed an update for one template as an example here:

 https://review.openstack.org/#/c/465817/

I can't really do the others, I don't have enough knowledge in
Mistral/Senlin/Openshift.
One of the challenges we have is that we have users who are on different 
versions of Heat, so if we change the examples to accommodate the new 
features we effectively block them from being able to use these or 
learn from them.


I think we need to have a discussion about what is the most effective 
way forward.


Lance

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptg] How to slice the week to minimize conflicts

2017-05-18 Thread Thierry Carrez
Hi everyone,

For the PTG events we have a number of rooms available for 5 days, of
which we need to make the best usage. We also want to keep it simple and
productive, so we want to minimize room changes (allocating the same
room to the same group for one or more days).

For the first PTG in Atlanta, we split the week into two groups.
Monday-Tuesday for "horizontal" project team meetups (Infra, QA...) and
workgroups (API WG, Goals helprooms...), and Wednesday-Friday for
"vertical" project team meetups (Nova, Swift...). This kinda worked, but
the feedback we received called for more optimizations and reduced
conflicts.

In particular, some projects which have a lot of contributors overlap
(Storlets/Swift, or Manila/Cinder) were all considered "vertical" and
happened at the same time. Also horizontal team members ended up having
issues to attend workgroups, and had nowhere to go for the rest of the
week. Finally, on Monday-Tuesday the rooms that had the most success
were inter-project ones we didn't really anticipate (like the API WG),
while rooms with horizontal project team meetups were a bit
under-attended. While we have a lot of constraints, I think we can
optimize a bit better.

After giving it some thought, my current thinking is that we should
still split the week in two, but should move away from an arbitrary
horizontal/vertical split. My strawman proposal would be to split the
week between inter-project work (+ teams that rely mostly on liaisons in
other teams) on Monday-Tuesday, and team-specific work on Wednesday-Friday:

Example of Monday-Tuesday rooms:
Interop WG, Docs, QA, API WG, Packaging WG, Oslo, Goals helproom,
Infra/RelMgt/support teams helpdesk, TC/SWG room, VM Working group...

Example of Wednesday-Thursday or Wednesday-Friday rooms:
Nova, Cinder, Neutron, Swift, TripleO, Kolla, Infra...

(NB: in this example infra team members end up being available in a
general support team helpdesk room in the first part of the week, and
having a regular team meetup on the second part of the week)

In summary, Monday-Tuesday would be mostly around themes, while
Wednesday-Friday would be mostly around teams. In addition to that,
teams that /prefer/ to run on Monday-Tuesday to avoid conflicting with
another project meetup (like Manila wanting to avoid conflicting with
Cinder, or Storlets wanting to avoid conflicting with Swift) could
*choose* to go for Monday-Tuesday instead of Wednesday-Friday.

It's a bit of a long shot (we'd still want to equilibrate both sides in
terms of room usage, so it's likely that the teams that are late to
decide to participate would be pushed on one side or the other), but I
think it's a good incremental change that could solve some of the issues
reported in the Atlanta week slicing, as well as generally make
inter-project coordination simpler.

If we adopt that format, we need to be pretty flexible in terms of what
is a "workgroup": to me, any inter-project work that would like to have
a one-day or two-day room should be able to get some.
Nova-{Cinder,Neutron,Ironic} discussions would for example happen in the
VM & BM working group room, but we can imagine others just like it.

Let me know what you think. Also feel free to propose alternate creative
ways to slice the space and time we'll have. We need to open
registration very soon (June 1st is the current target), and we'd like
to have a rough idea of the program before we do that (so that people
can report which days they will attend more accurately).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-18 Thread Lance Haig



On 17.05.17 22:18, Zane Bitter wrote:

On 16/05/17 10:32, Lance Haig wrote:

What if instead of a directory per release, we just had a 'deprecated'
directory where we move stuff that is going away (e.g. anything
relying on OS::Glance::Image), and then delete them when they
disappear from any supported release (e.g. LBaaSv1 must be close if
it isn't gone already).


I agree in general this would be good. How would we deal with users who
are running older versions of openstack?
Most of the customers I support have Liberty and newer so I would
perhaps like to have these available as tested.
The challenge for us is that the newer the OpenStack version, the more
features are available, e.g. conditionals.
To support that in a backwards compatible fashion is going to be tough I
think. Unless I am missing something.


'stable' branches could achieve that, and it's the most feasible way 
to actually CI test them against older releases anyway.

Ok this sounds good. We just need to see if it is feasible to implement.



As we've proven, maintaining these templates has been a challenge given the
available resources, so I guess I'm still in favor of not duplicating a bunch
of templates, e.g. perhaps we could focus on a target of CI testing
templates on the current stable release as a first step?


I'd rather do CI against Heat master, I think, but yeah that sounds
like the first step. Note that if we're doing CI on old stuff then
we'd need to do heat-templates stable branches rather than
directory-per-release.

With my suggestion above, we could just not check anything in the
'deprecated' directory maybe?

I agree in part.
If we are using the heat examples to test the functionality of the
master branch then that would be a good idea.
If we want to provide useable templates for users to reference and use
then I would suggest we test against stable.


The downside of that is you can't add a template that uses a new 
feature in Heat until after the next release (which includes that 
feature).


I think the answer here is to have stable heat-templates test against 
stable heat and master against master.

Agree completely.



I am sure we could find a way to do both.
I would suggest that we first get reliable CICD running on the current
templates and fix what we can in there.
Then we can look at what would be a good way forward.

I am just brain dumping so any other ideas would also be good.



As you guys mentioned in our discussions, the Networking example I quoted is
not something you guys can deal with, as the source project affects this.

Unless we can use this exercise to test these and fix them then I am happier.

My vision would be to have a set of templates and examples that are tested
regularly against a running OS deployment so that we can make sure the
combinations still run. I am sure we can agree on a way to do this with CICD
so that we test the feature set.


Agreed, getting the approach to testing agreed seems like the first step -
FYI we do already have automated scenario tests in the main heat tree that
consume templates similar to many of the examples:

https://github.com/openstack/heat/tree/master/heat_integrationtests/scenario 




So, in theory, getting a similar test running on heat_templates should be
fairly simple, but getting all the existing templates working is likely to
be a bigger challenge.


Even if we just ran the 'template validate' command on them to check
that all of the resource types & properties still exist, that would be
pretty helpful. It'd catch a lot of the times when we break backwards
compatibility so we can decide to either fix it or deprecate/remove
the obsolete template. (Note that you still need all of the services
installed, or at least endpoints in the catalog, for the validation to
work.)


So apparently Thomas already implemented this (nice!). The limitation 
is that we're ignoring errors where the template contains a resource 
whose endpoint is not in the service catalog. That's likely to 
mean a lot of these templates aren't _really_ getting tested.


In theory all we have to do to fix that is add endpoints for all of 
them in the catalog (afaik we shouldn't need to actually run any of 
the services).
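
Purely as an illustration, a minimal sketch of the kind of validate-only pass
being discussed, assuming python-heatclient and an already-authenticated
keystoneauth1 session called `sess`; the template path and function name are
placeholders, not anything that exists in the repo today:

    # Minimal sketch of a validate-only check over example templates.
    # Assumes python-heatclient and an existing keystoneauth1 session (sess);
    # the template path below is a placeholder.
    from heatclient import client as heat_client
    from heatclient.common import template_utils
    from heatclient import exc

    def validate_template(heat, path):
        files, template = template_utils.get_template_contents(template_file=path)
        try:
            heat.stacks.validate(template=template, files=files)
        except exc.HTTPBadRequest as e:
            print("INVALID %s: %s" % (path, e))
            return False
        return True

    # heat = heat_client.Client('1', session=sess)
    # validate_template(heat, 'hot/servers_in_new_neutron_net.yaml')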


If we have the ability to have "offline" endpoints to validate against, 
this would make CICD for the templates much easier.
It would be good then to have this available for people who are 
developing on a laptop when offline.


I hope we are able to do this.

Lance

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [doc] Docs team meeting

2017-05-18 Thread Alexandra Settle
Hey everyone,

The docs meeting will continue today in #openstack-meeting as scheduled 
(Thursday at 16:00 UTC). For more details, and the agenda, see the meeting 
page:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

The meeting chair will be me! Hope you can all make the new time ☺

Thanks,

Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread Julien Danjou
On Thu, May 18 2017, Hanxi Liu wrote:

> Ceilometer, Gnocchi, Aodh all use pbr, so the port is 8000 by default.
>
> I guess we also should hardcode Gnocchi's port in the RDO project, together
> with Aodh.
> I proposed patches for Aodh and Gnocchi:
>
> https://review.rdoproject.org/r/#/c/5848/
> https://review.rdoproject.org/r/#/c/5847/
>
> But hguemar suggests not hardcoding the port.
>
> What do you think about this?

Port for HTTP is 80. The rest, unless assigned by IANA, is fantasy. :-)

You can make all your (OpenStack) services run under
http://example.com/openstack, e.g. http://example.com/openstack/metric
for Gnocchi, if you want. So there's no good reason to assign a port to
a service any more than there is to assign a hardcoded URL prefix.

So I think I'd agree with Haikel here. But ultimately, in the RDO case,
the packaging files should leverage a real WSGI Web server like uwsgi if
they want to start the service, rather than defaulting *all* packages to
the same pbr default, which will conflict and is a bad user experience.

Nobody asks what the default port of e.g. phpMyAdmin is. The same should
ultimately go for OpenStack services. Unfortunately, the bad habit has
spread from the early days of Nova and Swift using a port and not
providing a WSGI file.
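
For what it's worth, a minimal sketch of the "provide a WSGI file and bind it
to whatever interface you want" idea for test purposes. build_application()
below is only a placeholder, not the actual Gnocchi entry point; substitute
whatever your Gnocchi version documents as its WSGI application:

    # Minimal sketch: serve a WSGI application on a chosen interface/port for
    # testing. build_application() is a placeholder, NOT a real Gnocchi API --
    # use the WSGI entry point your Gnocchi version documents instead.
    from wsgiref.simple_server import make_server

    def build_application():
        def app(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'placeholder application\n']
        return app

    if __name__ == '__main__':
        host, port = '127.0.0.1', 8041  # bind to a specific interface for tests
        httpd = make_server(host, port, build_application())
        print('serving on http://%s:%d/' % (host, port))
        httpd.serve_forever()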

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread Julien Danjou
On Thu, May 18 2017, aalvarez wrote:

> I thought the API was based on and mounted by Pecan? Isn't there a way to
> pass these options to Pecan?

Pecan is an API framework, not a HTTP server.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread aalvarez
I thought the API was based on and mounted by Pecan? Isn't there a way to
pass these options to Pecan?



--
View this message in context: 
http://openstack.10931.n7.nabble.com/gnocchi-Running-Gnocchi-API-in-specific-interface-tp135004p135012.html
Sent from the Developer mailing list archive at Nabble.com.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread Hanxi Liu
On Thu, May 18, 2017 at 3:06 PM, Julien Danjou  wrote:

> On Wed, May 17 2017, aalvarez wrote:
>
> > I do not need this functionality for production, but for testing. I
> think it
> > would be nice if we can specify the interface for the gnocchi-api even
> for
> > test purposes, just like the port.
>
> Feel free to send a patch. This is provided by pbr so that's where you
> should send the patch:
>
>   https://docs.openstack.org/developer/pbr/
>
Hi jd,

Ceilometer, Gnocchi, Aodh all use pbr, so the port is 8000 by default.

I guess we also should hardcode Gnocchi's port in the RDO project, together
with Aodh.
I proposed patches for Aodh and Gnocchi:

https://review.rdoproject.org/r/#/c/5848/
https://review.rdoproject.org/r/#/c/5847/

But hguemar suggests not hardcoding the port.

What do you think about this?

Cheers,
Hanxi Liu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-18 Thread Bogdan Dobrelya
On 18.05.2017 2:38, Fox, Kevin M wrote:
> I've only used btrfs and devicemapper on el7. btrfs has worked well. 
> devicemapper ate my data on multiple occasions. Is Red Hat supporting overlay 
> in the el7 kernels now?

Please take a look at this fs benchmark results thread and comments [0]
before evaluating btrfs: [tl;dr] btrfs performed very slowly in some cases.

[0] https://news.ycombinator.com/item?id=11749010
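
As an aside, here is a tiny sketch of the kind of sanity check a deployment
could run to see which docker storage driver is actually in use. It assumes a
docker client recent enough to support 'docker info --format'; the exit-code
convention is just an example:

    # Tiny sketch: warn if docker is still on the devicemapper driver, which
    # in its default loopback setup is not recommended for production.
    # Assumes 'docker info --format' (a Go template over the info structure).
    import subprocess
    import sys

    def docker_storage_driver():
        out = subprocess.check_output(
            ['docker', 'info', '--format', '{{.Driver}}'])
        return out.decode().strip()

    if __name__ == '__main__':
        driver = docker_storage_driver()
        print('docker storage driver: %s' % driver)
        if driver == 'devicemapper':
            # Plain 'docker info' would also show "Data loop file" in loopback mode.
            print('warning: devicemapper may be using loopback devices')
            sys.exit(1)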

> 
> Thanks,
> Kevin
> 
> From: Dan Prince [dpri...@redhat.com]
> Sent: Wednesday, May 17, 2017 5:24 PM
> To: openstack-dev
> Subject: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO
> 
> TripleO currently uses the default "loopback" docker storage device.
> This is not recommended for production (see 'docker info').
> 
> We've been poking around with docker storage backends in TripleO for
> almost 2 months now here:
> 
>  https://review.openstack.org/#/c/451916/
> 
> For TripleO there are a couple of considerations:
> 
>  - we intend to support in place upgrades from baremetal to containers
> 
>  - when doing in place upgrades re-partitioning disks is hard, if not
> impossible. This makes using devicemapper hard.
> 
>  - we'd like to to use a docker storage backend that is production
> ready.
> 
>  - our target OS is latest Centos/RHEL 7
> 
> As we approach pike 2 I'm keen to move towards a more production docker
> storage backend. Is there consensus that 'overlay2' is a reasonable
> approach to this? Or is it too early to use that with the combinations
> above?
> 
> Looking around at what is recommended in other projects it seems to be
> a mix as well from devicemapper to btrfs.
> 
> [1] https://docs.openshift.com/container-platform/3.3/install_config/install/host_preparation.html#configuring-docker-storage
> [2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedHat.sh#n30
> 
> 
> Dan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread Julien Danjou
On Wed, May 17 2017, aalvarez wrote:

> I do not need this functionality for production, but for testing. I think it
> would be nice if we can specify the interface for the gnocchi-api even for
> test purposes, just like the port.

Feel free to send a patch. This is provided by pbr so that's where you
should send the patch:

  https://docs.openstack.org/developer/pbr/

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread aalvarez
I do not need this functionality for production, but for testing. I think it
would be nice if we can specify the interface for the gnocchi-api even for
test purposes, just like the port.



--
View this message in context: 
http://openstack.10931.n7.nabble.com/gnocchi-Running-Gnocchi-API-in-specific-interface-tp135004p135008.html
Sent from the Developer mailing list archive at Nabble.com.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-18 Thread Mehdi Abaakouk

Hi,

On Mon, May 15, 2017 at 01:01:57PM -0400, Zane Bitter wrote:

On 15/05/17 12:10, Steven Hardy wrote:

On Mon, May 15, 2017 at 04:46:28PM +0200, Lance Haig wrote:

Hi Steve,

I am happy to assist in any way to be honest.


It was great to meet you in Boston, and thanks very much for 
volunteering to help out.


BTW one issue I'm aware of is that the autoscaling template examples 
we have all use OS::Ceilometer::* resources for alarms. We have a 
global environment thingy that maps those to OS::Aodh::*, so at least 
in theory those templates should continue to work, but there are 
actually no examples that I can find of autoscaling templates doing 
things the way we want everyone to do them.


This is not only an Aodh/Ceilometer alarm issue. I can confirm that
whatever the resource prefix, this works well.

But an alarm description also contains a query to an external API to
retrieve statistics. Aodh alarms are currently able to
query the deprecated Ceilometer-API and the Gnocchi-API. Creating alarms
that query the deprecated Ceilometer-API is obviously deprecated too.

Unfortunately, I have seen that all templates still use the deprecated
Ceilometer-API. Since Ocata, this API doesn't even run by default.

I just proposed an update for one template as an example here:

 https://review.openstack.org/#/c/465817/

I can't really do the others, I don't have enough knowledge in
Mistral/Senlin/Openshift.

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev