Re: [openstack-dev] [tripleo] composable roles team

2016-05-03 Thread Tomas Sedovic

On 05/02/2016 07:47 PM, Brent Eagles wrote:


On 05/01/2016 08:01 PM, Emilien Macchi wrote:




If a feature can't land without disruption, then why not use a
special branch to be merged once the feature is complete?


The problem is that during our work, some people will update the
manifests, and that will affect us, since we're copying the code
somewhere else (into puppet-tripleo). That's why we might need some
outstanding help from the team to converge to the new model.
I know that's a tough ask, but if we want to converge quickly, we
need the adoption to be accepted by everyone. One thing we can do is
ask our reviewer team to track the patches that will need some work,
and the composable team can help in the review process.

The composable roles is a feature that we all wait, having the
help from our contributors will really save us time.


s/wait/want/ I expect.

Well said. I understand the reservations about the -1 for non-composable
role patches. It *does* feel a bit strong, but in the end I think it's
just being honest. A patch on t-h-t is really unlikely to land "as is"
prior to the merge of the related composable role changes. I, for one,
am willing to do what I can to help anyone whose patch was pre-empted
during this period get it refactored/ported once the comp-roles work
has settled down.


There is a precedent for this: back when we were using merge.py to 
generate the TripleO Heat templates, the patch that moved us to pure 
Heat was taking a long time to merge due to all the other t-h-t patches 
coming in and causing conflicts.


So we decided to -2 all the other t-h-t patches until this one got merged.

On the other hand, that only lasted a few days; I'm not sure how long
it will take to get the composable roles landed.


Tomas




Cheers,

Brent




Re: [openstack-dev] [TripleO] Change 247669: Ceph Puppet: implement per-pool parameters:

2016-05-03 Thread Giulio Fidente

On 04/22/2016 09:51 AM, Shinobu Kinjo wrote:

Hi TripleO Team,

If you could take care of ${subject}, it would be nice.

[1] https://review.openstack.org/#/c/247669


ack; I liked revision 6 more than 7, but it might not work, as
interpolation always returns a string and we need a dict there.

I'll post an update as soon as I get it to work
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente



Re: [openstack-dev] [Swift] going forward

2016-05-03 Thread Thierry Carrez

John Dickinson wrote:

At the summit last week, the Swift community spent a lot of time discussing the 
feature/hummingbird branch. (For those who don't know, the feature/hummingbird 
branch contains some parts of Swift which have been reimplemented in Go.)

As a result of that summit discussion, we have a plan and a goal: we will 
integrate a subset of the current hummingbird work into Swift's master branch, 
and the future will contain both Python and Go code. We are starting with the 
object server and replication layer.
[...]


As you move from experimentation in feature branches to a more long-term 
plan, it would be great if you could formally propose the support of Go 
as an officially-supported language in OpenStack, following the 
resolution at:


http://governance.openstack.org/resolutions/20150901-programming-languages.html

That will help in clarifying our stance there and should kick off 
interesting side discussions on infrastructure/QA support.


Regards,

--
Thierry Carrez (ttx)



[openstack-dev] [nova] Austin summit versioned notification

2016-05-03 Thread Balázs Gibizer
Hi, 

Last Friday in Austin we discussed the way forward with the versioned
notification transformation in Nova.

We agreed that when we separate the object model used for notifications
from the nova object model, we will still use NovaObject as a base class
to avoid a change in the wire format and the major version bump it would
cause. However, we won't register the notification objects in the
NovaObjectRegistry. In general, we agreed to move forward with the
transformation according to the spec [1].
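
As a rough illustration of the "NovaObject base, but unregistered" idea,
here is a minimal sketch using plain oslo.versionedobjects (Nova itself
would subclass NovaObject, as described above; the class and field names
below are made up):

from oslo_versionedobjects import base as ovo_base
from oslo_versionedobjects import fields


class InstanceActionPayload(ovo_base.VersionedObject):
    # Normal o.vo versioning rules still apply; staying off the
    # registry keeps this object out of the RPC object registry.
    VERSION = '1.0'

    fields = {
        'instance_uuid': fields.UUIDField(),
        'action': fields.StringField(),
        'phase': fields.StringField(nullable=True),
    }


# A registered object, by contrast, would carry the decorator:
#
# @ovo_base.VersionedObjectRegistry.register
# class SomeRpcObject(ovo_base.VersionedObject):
#     ...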

Regarding the schema generation for the notifications we agreed to
propose a general JSON Schema generation implementation to
oslo.versionedobjects [2] that can be used in Nova later to generate
schemas for the notification object model. 

To have a way to synchronize our effort, I'd like to restart the weekly
subteam meeting [5]. As the majority of the subteam is in the US and EU,
I propose to keep the existing time slot of 17:00 UTC every Tuesday.
I proposed the frequency increase from biweekly to weekly here [3].
This means that we can meet today at 17:00 UTC [4] in #openstack-meeting-4.

Cheers,
Gibi

[1] https://review.openstack.org/#/c/286675/ Versioned notification 
transformation
[2] https://review.openstack.org/#/c/311194/ versionedobjects: add json schema 
generation
[3] https://review.openstack.org/#/c/311948/
[4] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160503T17 
[5] https://wiki.openstack.org/wiki/Meetings/NovaNotification 




Re: [openstack-dev] [octavia] amphora flavour and load balancer topology

2016-05-03 Thread Lingxian Kong
On Tue, May 3, 2016 at 10:30 AM, Michael Johnson  wrote:
> Hi Lingxian,
>
> On #2, we would like to enable neutron flavors to support the
> selection of topology.  For example, "bronze" flavor may be
> standalone, "silver" would be active/standby.  However, that
> capability is not yet implemented.  Feel free to take that on if you
> have the cycles... grin.

:-)

Yeah, using a Neutron flavor can solve the problem to some extent.
According to https://review.openstack.org/#/c/310667/ (maybe you
already discussed this at the Summit), Octavia will become a standalone
service in the future as a long-term roadmap item, so we should support
this capability with or without Neutron. I am willing to do a PoC and
contribute it upstream. Thanks for the suggestion.
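
For the record, a rough sketch of what the "bronze"/"silver" mapping could
look like via python-neutronclient's flavors support; this assumes the
client calls behave as named, and the "topology" metainfo convention is
invented here purely for illustration, since as Michael notes the
capability is not implemented yet:

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# Hypothetical mapping of flavor names to Octavia topologies.
for name, topology in (('bronze', 'SINGLE'), ('silver', 'ACTIVE_STANDBY')):
    flavor = neutron.create_flavor(
        {'flavor': {'name': name,
                    'service_type': 'LOADBALANCERV2'}})['flavor']
    # 'metainfo' is a free-form string in the flavors API; the
    # "topology" key below is made up for this example.
    profile = neutron.create_service_profile(
        {'service_profile':
         {'metainfo': '{"topology": "%s"}' % topology}})['service_profile']
    neutron.associate_flavor(flavor['id'],
                             {'service_profile': {'id': profile['id']}})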




Re: [openstack-dev] [Nova] Could you tell me the best way to develop openstack in Windows?

2016-05-03 Thread Martin Hickey
I agree.  It is a lot easier to just develop in a Linux environment. You
could set up a Linux VM using VirtualBox or VMware tools to do this on
Windows. You would then be aligned with the tools and documentation in
the community and hence better able to get up and running.

Regards,
Martin




From:   Jeremy Stanley 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   02/05/2016 19:45
Subject:Re: [openstack-dev] [Nova] Could you tell me the best way to
develop openstack in Windows?



On 2016-05-02 17:20:13 +0900 (+0900), the original poster wrote:
[...]
> But I'm using PyCharm on Windows 7, and it's very hard to do
> programming and testing when I try to change the code a little bit.
[...]

In my experience, when developing software which runs primarily on
Linux it's a lot more complicated to attempt to test it on non-Linux
systems than it is to learn to use Linux instead (at least in a
virtual machine somewhere). Similarly, I would not expect to be able
to comfortably develop software under Linux when it's intended to
mostly run on Microsoft Windows or Apple Macintosh.

I personally just use Linux as my development platform. I gather
some in our community prefer to use Mac or Win systems but I think
in most cases they end up augmenting their development workflow with
Linux virtual machines (either local or at a remote service
provider).
--
Jeremy Stanley



[openstack-dev] [tricircle] Easy Way to Test Tricircle North-South L3 Networking

2016-05-03 Thread Vega Cai
Hi all,

Just would like to share a way to test Tricircle north-south L3 networking
without requiring a third interface.

The Tricircle README says that you need to add an interface on your host
to the br-ext bridge: one interface to access the host, one for east-west
networking and one for north-south networking, so three interfaces are
required in total.

What if your host only has two interfaces? Here is another deployment
choice.

First, change your external network type to flat. If you are using the
DevStack script provided by Tricircle, make the following changes in the
node2 local.conf, then run DevStack on node2.

(1) change Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS
from (network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
to (network_vlan_ranges=bridge:2001:3000)
(since we are going to use a flat external network, there is no need to
configure a VLAN range for extern)
(2) add PHYSICAL_NETWORK=extern
(3) keep OVS_BRIDGE_MAPPINGS=bridge:br-bridge,extern:br-ext

Second, specify the flat type when creating the external network.

curl -X POST http://127.0.0.1:9696/v2.0/networks \
   -H "Content-Type: application/json" \
   -H "X-Auth-Token: $token" \
   -d '{"network": {"name": "ext-net", "admin_state_up": true,
"router:external": true, "provider:network_type": "flat",
"provider:physical_network": "extern", "availability_zone_hints":
["Pod2"]}}'

Third, configure the IP address of br-ext.

sudo ifconfig br-ext 163.3.124.1 netmask 255.255.255.0

Here 163.3.124.1 is your external network gateway IP; set the netmask
according to your CIDR.

After the above steps, you can access your VM via its floating IP from
node2, and your VM can ping the external gateway.

Would you like your VM to access the Internet? (Of course, node2 itself
should be able to access the Internet.) Two more steps to follow:
(1) Enable packet forwarding on node2

sudo bash
echo 1 >/proc/sys/net/ipv4/ip_forward

(2) Configure SNAT in node2

sudo iptables -t nat -I POSTROUTING -s 163.3.124.0/24 -o eth1 -j SNAT
--to-source 10.250.201.21

163.3.124.0/24 is your external network CIDR, eth1 is the interface
associated with your default route on node2, and 10.250.201.21 is the IP
of eth1.

Hope this information helps.

BR
Zhiyuan


[openstack-dev] [Nova] Live migration meeting today

2016-05-03 Thread Murray, Paul (HP Cloud)
The first live migration meeting after the summit will be today; see:
https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration

Paul


Re: [openstack-dev] [nova] next min libvirt?

2016-05-03 Thread Daniel P. Berrange
On Fri, Apr 29, 2016 at 03:16:56PM -0500, Matt Riedemann wrote:
> On 4/29/2016 10:28 AM, Daniel P. Berrange wrote:
> >On Fri, Apr 29, 2016 at 10:13:42AM -0500, Sean Dague wrote:
> >>We've just landed the libvirt min bump to make 1.2.1 required. It's
> >>probably a good time to consider the appropriate bump for Ocata.
> >>
> >>By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
> >>(1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.
> >
> >By the time Ocata is released, I think it'll be valid to ignore
> >RHEL-7.1, as we'll already be onto 7.3 at that time.
> >
> >>My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
> >>that NUMA support in libvirt (excepting the blacklists) and huge page
> >>support is assumed on x86_64.
> >
> >If we ignore RHEL 7.1, we could go to 1.2.9 which is the min in Jessie.
> 
> Is there a simple reason why ignoring RHEL 7.1 is OK? Honestly I can't
> remember which OpenStack release came out around that time; was it Kilo?

By the time Ocata comes out, we'll be on RHEL-7.3 as the latest update,
so people really shouldn't still be deploying on RHEL-7.1. IOW, I think
we should be aiming to target the $current and $current-1 RHEL-7 update
releases.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] next min libvirt?

2016-05-03 Thread Daniel P. Berrange
On Sat, Apr 30, 2016 at 10:28:23AM -0500, Thomas Bechtold wrote:
> Hi,
> 
> On Fri, Apr 29, 2016 at 10:13:42AM -0500, Sean Dague wrote:
> > We've just landed the libvirt min bump to make 1.2.1 required. It's
> > probably a good time to consider the appropriate bump for Ocata.
> > 
> > By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
> > (1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.
> >
> > My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
> > that NUMA support in libvirt (excepting the blacklists) and huge page
> > support is assumed on x86_64.
> 
> Works also for SUSE which has 1.2.18 already in SLE 12 SP1.

Is there any public site where I can find details of what RPM versions
are present in SLES releases? I was trying to find that last week but
was not able to find any info. If there's no public reference, could
you update the wiki with RPM details for libvirt, kvm and libguestfs:

https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova] Live Migration: Austin summit update

2016-05-03 Thread Daniel P. Berrange
On Fri, Apr 29, 2016 at 10:32:09PM +, Murray, Paul (HP Cloud) wrote:
> The following summarizes status of the main topics relating to live migration
> after the Newton design summit. Please feel free to correct any inaccuracies
> or add additional information.

> Post copy
> 
> The spec to add post copy migration support in the libvirt driver was
> discussed in the live migration session. Post copy guarantees completion
> of a migration in linear time without needing to pause the VM. This can
> be used as an alternative to pausing in live-migration-force-complete.
> Pause or complete could also be invoked automatically under some
> circumstances. The issue slowing these specs is how to decide which
> method to use given they provide a different user experience but we
> don't want to expose virt specific features in the API. Two additional
> specs listed below suggest possible generic ways to address the issue.
> 
> There were no conclusions reached in the session, so the debate will
> continue on the specs. The first link below is the main spec for the feature.
> 
> https://review.openstack.org/#/c/301509 : Adds post-copy live migration 
> support to Nova
> https://review.openstack.org/#/c/305425 : Define instance availability 
> profiles
> https://review.openstack.org/#/c/306561 : Automatic Live Migration Completion

There are currently many options for live migration with QEMU that can
assist in completion:

 - Pause the VM
 - Auto-converge
 - XBZRLE compression
 - Multi-thread compression
 - Post-copy

These are combined with tunables such as max-bandwidth and max-downtime.
It is absolutely clear as mud which of these works best for ensuring
completion, and what kind of impact each has on guest performance.

Given this I've spent the last week creating an automated test harness
for QEMU upstream which triggers migration with an extreme guest CPU
load and measures the performance impact of these features on the guest,
and whether the migration actually completes.

I hope to be able to publish the results of this investigation this week
which should facilitate us in deciding which is best to use for OpenStack.
The spoiler though is that all the options are pretty terrible, except for
post-copy.
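
For reference, here is roughly how those knobs surface in the
libvirt-python API; this is a hand-written sketch, not the test harness,
and it assumes two hosts, a domain named 'guest', and a libvirt new
enough to expose VIR_MIGRATE_POSTCOPY (1.3.3+):

import libvirt

src = libvirt.open('qemu+ssh://src/system')
dom = src.lookupByName('guest')

flags = (libvirt.VIR_MIGRATE_LIVE
         | libvirt.VIR_MIGRATE_PEER2PEER
         | libvirt.VIR_MIGRATE_AUTO_CONVERGE   # throttle guest vCPUs
         | libvirt.VIR_MIGRATE_COMPRESSED      # RAM compression
         | libvirt.VIR_MIGRATE_POSTCOPY)       # allow post-copy switchover

# Bandwidth cap in MiB/s; max-downtime is set separately via
# dom.migrateSetMaxDowntime(milliseconds).
params = {libvirt.VIR_MIGRATE_PARAM_BANDWIDTH: 100}

dom.migrateToURI3('qemu+ssh://dst/system', params, flags)

# Once pre-copy is under way, post-copy can be triggered explicitly
# from another thread with: dom.migrateStartPostCopy()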

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova] Libvirt version requirement

2016-05-03 Thread Daniel P. Berrange
On Mon, May 02, 2016 at 11:27:01AM +0800, ZhiQiang Fan wrote:
> Hi Nova cores,
> 
> There is a spec[1] submitted to Telemetry project for Newton release,
> mentioned that a new feature requires libvirt >= 1.3.4 , I'm not sure if
> this will have bad impact to Nova service, so I open this thread and wait
> for your opinions.
> 
> [1]: https://review.openstack.org/#/c/311655/

Nova's policy is that we pick a minimum required libvirt version that
everyone must have; this is shown in this table:

  
https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Nova_release_min_version

Nova will accept code changes which use features from a newer libvirt,
as long as they don't cause breakage for people with older libvirt.
Generally this means that we'll use newer libvirt features only for
functionality which is new to Nova; we don't change existing Nova code
to use new libvirt features, since that would cause regressions.

IOW, I don't see any problem with you using a newer libvirt version,
provided you gracefully fall back without error when run against the
current minimum libvirt that Nova declares.
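
That graceful fallback is just a version gate; a minimal sketch (the
1.3.4 threshold comes from the Telemetry spec, and the feature bodies
are placeholders):

import libvirt

MIN_FEATURE_VERSION = (1, 3, 4)

def has_min_libvirt(conn, minimum):
    # getLibVersion() returns major * 1000000 + minor * 1000 + micro
    v = conn.getLibVersion()
    return (v // 1000000, (v // 1000) % 1000, v % 1000) >= minimum

conn = libvirt.open('qemu:///system')
if has_min_libvirt(conn, MIN_FEATURE_VERSION):
    pass  # use the newer libvirt API here
else:
    pass  # fall back to behaviour that works on the declared minimum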

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-03 Thread Paul Bourke
Having read through the full thread I'm still in support of separate 
repos. I think the explanations Jeff Peeler and Adam Young have put 
forward summarise my thoughts very well.


One of the main arguments I keep hearing for a single repo is Git
tooling, which I don't think is a good one; we should do what's best for
users and devs, not for tools.


Also, as the guys pointed out, multiple repos are the most common
pattern across OpenStack. I think it will help keep a better separation
of concerns. Otherwise, in my experience, you start to get
cross-contamination of the projects, to the point where it becomes
extremely difficult to pull them apart.


The images, Ansible, and k8s pieces need to be separate. The alternative
is not scalable.


Thanks,
-Paul

On 03/05/16 00:39, Angus Salkeld wrote:

On Mon, May 2, 2016 at 7:07 AM Steven Dake (stdake) wrote:

Ryan had rightly pointed out that when we made the original proposal
at 9am that morning, we asked folks if they wanted to participate in a
separate repository.

I don't think a separate repository is the correct approach, based
upon one-off private conversations with folks at the summit.  Many
people from that list approached me and indicated they would like to
see the work integrated in one repository, as outlined in my vote
proposal email.  The reasons I heard were:

  * Better integration of the community
  * Better integration of the code base
  * Doesn't present an us vs them mentality that one could argue
happened during kolla-mesos
  * A second repository makes k8s a second class citizen deployment
architecture without a voice in the full deployment methodology
  * Two gating methods versus one
  * No going back to a unified repository while preserving git history

In favor of the separate repositories I heard:

  * It presents a unified workspace for kubernetes alone
  * Packaging without ansible is simpler as the ansible directory
need not be deleted

There were other complaints but not many pros.  Unfortunately I
failed to communicate these complaints to the core team prior to the
vote, so now is the time to fix that.

I'll leave it open to the new folks that want to do the work whether
they want to work in an offshoot repository and open us up to the
possible problems above.


+1 to the separate repo

I think the separate repo worked very well for us and would encourage
you to replicate that again. Having one repo doing one thing makes the
goal of the repo obvious and makes the API between the images and
deployment clearer (also the stability of that API and things like
permissions *cough* drop-root).

-Angus


If you are on this list:

  * Ryan Hallisey
  * Britt Houser

  * mark casey

  * Steven Dake (delta-alpha-kilo-echo)

  * Michael Schmidt

  * Marian Schwarz

  * Andrew Battye

  * Kevin Fox (kfox)

  * Sidharth Surana (ssurana)

  * Michal Rostecki (mrostecki)

  * Swapnil Kulkarni (coolsvap)

  * MD NADEEM (mail2nadeem92)

  * Vikram Hosakote (vhosakot)

  * Jeff Peeler (jpeeler)

  * Martin Andre (mandre)

  * Ian Main (Slower)

  * Hui Kang (huikang)

  * Serguei Bezverkhi (sbezverk)

  * Alex Polvi (polvi)

  * Rob Mason

  * Alicja Kwasniewska

  * sean mooney (sean-k-mooney)

  * Keith Byrne (kbyrne)

  * Zdenek Janda (xdeu)

  * Brandon Jozsa (v1k0d3n)

  * Rajath Agasthya (rajathagasthya)
  * Jinay Vora
  * Hui Kang
  * Davanum Srinivas



Please speak up if you are in favor of a separate repository or a
unified repository.

The core reviewers will still take responsibility for determining if
we proceed on the action of implementing kubernetes in general.

Thank you
-steve


Re: [openstack-dev] [nova] Should be instance_dir in all nova compute node same ?

2016-05-03 Thread Matthew Booth
On Fri, Apr 29, 2016 at 2:47 AM, Eli Qiao  wrote:

> hi team,
>
> Is there any requirement that all compute nodes' instance_dir should be the same?
>

Yes. This is assumed in many places, certainly in cold migration/resize.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)


Re: [openstack-dev] [Nova] Libvirt version requirement

2016-05-03 Thread ZhiQiang Fan
Glad to hear the news. Thanks, Daniel and Michael.

On Tue, May 3, 2016 at 5:19 PM, Daniel P. Berrange 
wrote:

> On Mon, May 02, 2016 at 11:27:01AM +0800, ZhiQiang Fan wrote:
> > Hi Nova cores,
> >
> > There is a spec[1] submitted to Telemetry project for Newton release,
> > mentioned that a new feature requires libvirt >= 1.3.4 , I'm not sure if
> > this will have bad impact to Nova service, so I open this thread and wait
> > for your opinions.
> >
> > [1]: https://review.openstack.org/#/c/311655/
>
> Nova's policy is that we pick a minimum required libvirt version that
> everyone must have; this is shown in this table:
>
>
> https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Nova_release_min_version
>
> Nova will accept code changes which use features from a newer libvirt,
> as long as they don't cause breakage for people with older libvirt.
> Generally this means that we'll use newer libvirt features only for
> functionality which is new to Nova; we don't change existing Nova code
> to use new libvirt features, since that would cause regressions.
>
> IOW, I don't see any problem with you using a newer libvirt version,
> provided you gracefully fall back without error when run against the
> current minimum libvirt that Nova declares.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>


Re: [openstack-dev] [tripleo] composable roles team

2016-05-03 Thread Steven Hardy
On Fri, Apr 29, 2016 at 03:27:29PM -0500, Emilien Macchi wrote:
> Hi,
> 
> One of the most urgent tasks we need to achieve in TripleO during
> Newton cycle is the composable roles support.
> So we decided to build a team that would focus on it during the next weeks.

Note that there is some confusion regarding the "composable roles" term -
what we're discussing here is the effort to decompose services within the
existing roles, which is a precursor to fully composable roles.

There are two BPs related to this:

https://review.openstack.org/#/q/topic:bp/refactor-puppet-manifests
https://blueprints.launchpad.net/tripleo/+spec/refactor-puppet-manifests

This is about breaking up the monolithic puppet manifests into per-service
profiles in puppet-tripleo

https://review.openstack.org/#q,topic:bp/composable-services-within-roles,n,z
https://blueprints.launchpad.net/tripleo/+spec/composable-services-within-roles

This is about consuming the per-service profiles via a new per-service
template definition (a new internal template API for service configuration
via heat templates)

Both can be tracked independently, but the composable-services-within-roles
BP depends on the refactor-puppet-manifests work.

Then there is a final step, which is enabling user-defined additional roles
(e.g. groups of nodes not deployed via the fixed Controller/Compute/*Storage
groups). I proposed a possible approach for this in our summit session,
and will raise a BP to track this work (hopefully I will have a prototype
implementation posted soon).

> We started this etherpad:
> https://etherpad.openstack.org/p/tripleo-composable-roles-work
> 
> So anyone can help or check where we are.
> We're pushing / going to push a lot of patches, we would appreciate
> some reviews and feedback.

Thanks, I think the etherpad will be helpful to focus reviewer attention -
please ensure the patches are tagged with one of the BPs above as
appropriate too.

> Also, I would like to propose to -1 every patch that is not
> composable-role-helpful, it will help us to move forward. Our team
> will be available to help in the patches, so we can all converge
> together.

To clarify, I think we should block any new services from landing via the
"old" non-composable interface, but it's probably not reasonable to block
everything that touches the old monolithic manifests (in particular
high-priority bugfixes); we should try to minimise the rebase pain for
composable services, though.

Steve



Re: [openstack-dev] [nova] Should be instance_dir in all nova compute node same ?

2016-05-03 Thread taget

Thanks Matt

That really surprised me.

On 2016年05月03日 17:42, Matthew Booth wrote:
On Fri, Apr 29, 2016 at 2:47 AM, Eli Qiao wrote:


hi team,

Is there any requirement that all compute nodes' instance_dir should
be the same?


Yes. This is assumed in many places, certainly in cold migration/resize.

Matt


--
Best Regards, Eli Qiao (乔立勇)



Re: [openstack-dev] [TripleO] Summit Summary?

2016-05-03 Thread Steven Hardy
On Mon, May 02, 2016 at 07:35:47PM -0600, Jason Rist wrote:
> Hey Everyone - Forgive me if this has already been sent out - I looked
> through all of my emails and didn't see anything yet.  Can someone
> provide a summary of TripleO from the summit, for instance some of the
> output from the design sessions, like the Trove, Magnum and some other
> teams have done?  Those of us unable to attend are curious!

I'm planning to send out a session recap (to this list) later today, and
will also write a more detailed description of the TripleO Newton roadmap
in a blog post later this week.

Apologies for the delay; I was pretty tired over the weekend, but hopefully
the jet-lag has now subsided sufficiently for me to write semi-coherently ;)

Steve



Re: [openstack-dev] [kuryr] Question for Antoni Puimedon about load balancers and overlay networks

2016-05-03 Thread Antoni Segura Puimedon
On Mon, May 2, 2016 at 8:11 PM, Mike Spreitzer  wrote:

> I am looking at
> https://www.openstack.org/videos/video/project-kuryr-docker-delivered-kubernetes-next,
> around 28:00.  You have said that overlay networks are involved, and are
> now talking about load balancers.  Is this Neutron LBaaS?  As far as I
> know, a Neutron LBaaS instance is "one-armed" --- both the VIP and the back
> endpoints have to be on the same Neutron network.  But you seem to have all
> the k8s services on a single subnet.  So I am having some trouble following
> exactly what is going on.  Can you please elaborate?


Hi Mike,

Thanks for reaching out and thanks for going over the video!

For those following at home, we are talking about the explanation I gave
about slide 18 in [1].

This topic was also discussed in the work sessions about the Kubernetes
integration (maybe also in the architecture one, but I don't remember
for sure). The further explanation I gave in the work session is that
the setup we use has a Neutron network with a subnet for all the load
balancer pools, and then each Kubernetes namespace has a different
Neutron net and subnets. To make it work, we have all the subnets
(including the load balancer one) connected to a single `raven` router.
This is not blocked by the API, and I think according to the LB spec it
should work, although I have not tried it with OVS.
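
In plain Neutron calls, the topology described above looks roughly like
the following sketch (names and CIDRs are invented; this is not Raven's
actual code):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# One router joins the LB subnet and all the per-namespace subnets.
router = neutron.create_router({'router': {'name': 'raven-router'}})['router']

def make_net(name, cidr):
    net = neutron.create_network({'network': {'name': name}})['network']
    subnet = neutron.create_subnet(
        {'subnet': {'network_id': net['id'], 'ip_version': 4,
                    'cidr': cidr}})['subnet']
    neutron.add_interface_router(router['id'], {'subnet_id': subnet['id']})
    return net

make_net('lbaas-net', '10.254.0.0/24')         # all LB pool VIPs live here
make_net('k8s-ns-default', '10.0.0.0/24')      # one net per k8s namespace
make_net('k8s-ns-kube-system', '10.0.1.0/24')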

Christophe (CCed) is trying to verify this OVS usage.


[1]
http://www.slideshare.net/celebdor/project-kuryr-returns-docker-delivered-kubernetes-next


>
>
> BTW, there is also some discussion of k8s multi-tenancy in the Kubernetes
> Networking SIG and the Kubernetes OpenStack SIG.
>
> Thanks,
> Mike
>


[openstack-dev] [Smaug]- IRC Meeting today (05/03) - 1400 UTC SM

2016-05-03 Thread Saggi Mizrahi
Hi All,


We will hold our bi-weekly IRC meeting today (Tuesday, 05/03) at 1400 UTC in 
#openstack-meeting


Please review the proposed meeting agenda here: 
https://wiki.openstack.org/wiki/Meetings/smaug


Please feel free to add to the 
agenda any subject you would like to discuss.


This is our first meeting after the Summit.

We hope to see some new faces.


Thanks, Saggi




Re: [openstack-dev] [glance] Interest in contributing to OpenStack

2016-05-03 Thread Djimeli Konrad
Hello Nikhil,

Thanks for the reply. I would really love to make some substantial
contributions to Glance. I will put in some time familiarizing myself
with the codebase, and I have already assigned myself to a bug
(https://bugs.launchpad.net/glance/+bug/1569937) so I can start
working on it, though it was not easy finding an unassigned bug I could
work on. Since I am still busy with school work now, I can put in about
10 hours per week on OpenStack, and I will dedicate more time when I am
done with school in July.

I have been to the Glance IRC channel but have not yet introduced
myself there. I will do so when I next have the chance. My IRC nick
is djkonro.

Thanks
Konrad



Re: [openstack-dev] [nova] Distributed Database

2016-05-03 Thread Miles Gould

On 02/05/16 18:43, Jay Pipes wrote:
> This DB could be an RDBMS or Cassandra, depending on the deployer's
> preferences

AFAICT this would mean introducing and maintaining a layer that
abstracts over RDBMSes and Cassandra. That's a big abstraction over two
quite different systems, and it would be hard to write code that
performs well in both cases. If performance in this layer is critical,
then pick whichever DB architecture handles the expected query load
better and use that.


Miles



[openstack-dev] [puppet] weekly meeting #80

2016-05-03 Thread Emilien Macchi
Hi,

If you have any topic that you would like to discuss, please add it to
the topic list:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160503

If we don't have topics, we'll probably cancel the meeting this week.
I'm currently working on a summary of what happened during the Summit
for Puppet OpenStack project.

Thanks,
-- 
Emilien Macchi



[openstack-dev] [nova] driver review dashboards

2016-05-03 Thread Sean Dague
One of the ideas that came up was a morning review dashboard which would
list each driver and some criteria indicating a change is ready to merge.
This could either be a star by a contact person, or a +1 by some
specific person.

I think it's probably also a good idea to filter by "passes specific 3rd
party CI", though when I did that I found that of VMware, HyperV, and
XenServer... only VMware is currently voting? That seems like kind of a
problem to me when there is a request that more time be spent on
reviewing these drivers.

Given that no one is yet starring these changes, here is a version of
the dashboard with just a +1 by critical people in each subteam (vmware
is also filtered by +1 on its CI).

[dashboard]
title = Nova Drivers Priorities
description = Review Inbox
foreach = project:openstack/nova status:open NOT owner:self NOT
label:Workflow<=-1 label:Verified>=1,jenkins NOT
label:Code-Review>=-2,self NOT label:Code-Review<=-1,nova-core
branch:master is:mergeable

[section "HyperV"]
query = label:Code-Review>=1,cb...@cloudbasesolutions.com file:hyperv

[section "XenServer"]
query = label:Code-Review>=1,bob.b...@citrix.com file:xenserver

[section "VMWare"]
query = label:Code-Review>=1,gkot...@vmware.com file:vmwareapi
label:Verified>=1,9008


The net result is the following: https://goo.gl/Q1p2oZ

Even with only using +1, this list is pretty small, especially as there
is a lot of unmergeable code in the drivers. We could switch to
starredby: if people want to be more explicit here.

Questions:

1) is label:Code-Review>=1 good enough? or should we be explicit with
starredby:

2) should we require +1 on relevant CI? If so, what's the ETA for
XenServer & VMWare getting back in shape

3) are these the right point people for each driver?


I'll figure out a way to get this on a stable URL at some point (so that
owners can be swapped out), but for now I wanted to just get something
out for people to ponder.


-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] - suggested development workflow without ./rejoin-stack.sh ?

2016-05-03 Thread Dean Troyer
On Mon, May 2, 2016 at 5:10 PM, Kevin Benton  wrote:

> Now am I correct in understanding that when this happens there is no way
> to restart the services in a simple manner without blowing away everything
> and starting over? Unless I'm missing some way to run ./stack.sh without
> losing previous state, this seems like a major regression (went from mostly
> working ./rejoin-stack.sh to nothing).
>
> What is the recommended way to use devstack without being a power outage
> away from losing hours of work?
>

rejoin-stack.sh was never part of the recommended workflow for DevStack.
It was a hack from the start: it did not work in some common
configurations, was often not updated when other things changed around it,
and was totally untested.  The commit message in 291453 is almost exactly
why I proposed removal; I should have included the bits about being
untested and unmaintained.

There is an example of how to restart DevStack without reinitializing
everything; it can be found in Grenade.  I think any proposal to
introduce a script to restart services should have a maintainer attached
who will keep it up to date with the rest of DevStack.  It should also
work with all of the major configurations, including Neutron L2, L3, DVR,
etc., and have a test job to exercise it.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [nova] Austin summit versioned notification

2016-05-03 Thread Jay Pipes
cc'ing Travis and Steve directly, since they will likely be very 
interested in this effort from the Project Searchlight perspective. :)


On 05/03/2016 04:10 AM, Balázs Gibizer wrote:

Hi,

Last Friday in Austin we discussed the way forward with the versioned
notification transformation in Nova.

We agreed that when we separate the object model used for notifications
from the nova object model, we will still use NovaObject as a base class
to avoid a change in the wire format and the major version bump it would
cause. However, we won't register the notification objects in the
NovaObjectRegistry. In general, we agreed to move forward with the
transformation according to the spec [1].

Regarding the schema generation for the notifications we agreed to
propose a general JSON Schema generation implementation to
oslo.versionedobjects [2] that can be used in Nova later to generate
schemas for the notification object model.

To have a way to synchronize our effort, I'd like to restart the weekly
subteam meeting [5]. As the majority of the subteam is in the US and EU,
I propose to keep the existing time slot of 17:00 UTC every Tuesday.
I proposed the frequency increase from biweekly to weekly here [3].
This means that we can meet today at 17:00 UTC [4] in #openstack-meeting-4.

Cheers,
Gibi

[1] https://review.openstack.org/#/c/286675/ Versioned notification 
transformation
[2] https://review.openstack.org/#/c/311194/ versionedobjects: add json schema 
generation
[3] https://review.openstack.org/#/c/311948/
[4] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160503T17
[5] https://wiki.openstack.org/wiki/Meetings/NovaNotification




Re: [openstack-dev] [nova] driver review dashboards

2016-05-03 Thread Bob Ball
Hi Sean,

>  [section "XenServer"]
> query = label:Code-Review>=1,bob.b...@citrix.com file:xenserver

I believe this should be file:xenapi?
Can you match multiple files here?  Most files are under the xenapi tree,
but this would miss some files that are in plugins/xenserver and not in
plugins/xenserver/xenapi.

> 1) is label:Code-Review>=1 good enough? or should we be explicit with
> starredby:

I think that Code-Review>=1 should be used here, given that's how we
indicate we think a change is good.  If, on the other hand, we want to
have a specific subset of those changes (i.e. to highlight <10 changes
that are the priority) then starredby would also work.

> 2) should we require +1 on relevant CI? If so, what's the ETA for XenServer & 
> VMWare
> getting back in shape

Yes; I think we have to require +1 from the CI (I wouldn't want any XenServer
changes ever merged without a +1 from the CI... that's the reason we have it
:) ).  The XenServer CI was disabled yesterday as it started failing all
changes, but was fixed this morning and is back to voting.

> 3) are these the right point people for each driver?

For XenServer, yes.

Thanks,

Bob



Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-05-03 Thread Ricardo Rocha
Hi.

On Mon, May 2, 2016 at 7:11 PM, Cammann, Tom  wrote:
> Thanks for the write up Hongbin and thanks to all those who contributed to 
> the design summit. A few comments on the summaries below.
>
> 6. Ironic Integration: 
> https://etherpad.openstack.org/p/newton-magnum-ironic-integration
> - Start the implementation immediately
> - Prefer quick work-around for identified issues (cinder volume attachment, 
> variation of number of ports, etc.)
>
> We need to implement a bay template that can use a flat networking model as 
> this is the only networking model Ironic currently supports. Multi-tenant 
> networking is imminent. This should be done before work on an Ironic template 
> starts.
>
> 7. Magnum adoption challenges: 
> https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
> - The challenges is listed in the etherpad above
>
> Ideally we need to turn this list into a set of actions which we can 
> implement over the cycle, i.e. create a BP to remove requirement for LBaaS.

There's one for floating IPs already:
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips

>
> 9. Magnum Heat template version: 
> https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
> - In each bay driver, version the template and template definition.
> - Bump template version for minor changes, and bump bay driver version for 
> major changes.
>
> We decided only bay driver versioning was required. The template and template 
> driver does not need versioning due to the fact we can get heat to pass back 
> the template which it used to create the bay.

This was also my understanding. We won't use heat template versioning,
just the bays.

> 10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
> - Add support for sending notifications to Ceilometer
> - Revisit bay monitoring and self-healing later
> - Container monitoring should not be done by Magnum, but it can be done by 
> cAdvisor, Heapster, etc.
>
> We split this topic into 3 parts – bay telemetry, bay monitoring, container 
> monitoring.
> Bay telemetry is done around actions such as bay/baymodel CRUD operations. 
> This is implemented using using ceilometer notifications.
> Bay monitoring is around monitoring health of individual nodes in the bay 
> cluster and we decided to postpone work as more investigation is required on 
> what this should look like and what users actually need.
> Container monitoring focuses on what containers are running in the bay and
> general usage of the bay COE. We decided this will be done by Magnum,
> by baking in access to cAdvisor by default and thereby providing access
> to cAdvisor/heapster.

I think we're missing a blueprint for this one too.

Ricardo

>
> - Manually manage bay nodes (instead of being managed by Heat ResourceGroup): 
> It can address the use case of heterogeneity of bay nodes (i.e. different 
> availability zones, flavors), but need to elaborate the details further.
>
> The idea revolves around creating a heat stack for each node in the bay. This 
> idea shows a lot of promise but needs more investigation and isn’t a current 
> priority.
>
> Tom
>
>
> From: Hongbin Lu 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Saturday, 30 April 2016 at 05:05
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: [openstack-dev] [magnum] Notes for Magnum design summit
>
> Hi team,
>
> For reference, below is a summary of the discussions/decisions in Austin 
> design summit. Please feel free to point out if anything is incorrect or 
> incomplete. Thanks.
>
> 1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-driver
> - Refactor existing code into bay drivers
> - Each bay driver will be versioned
> - Individual bay driver can have API extension and magnum CLI could load the 
> extensions dynamically
> - Work incrementally and support same API before and after the driver change
>
> 2. Bay lifecycle operations: 
> https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations
> - Support the following operations: reset the bay, rebuild the bay, rotate 
> TLS certificates in the bay, adjust storage of the bay, scale the bay.
>
> 3. Scalability: https://etherpad.openstack.org/p/newton-magnum-scalability
> - Implement Magnum plugin for Rally
> - Implement the spec to address the scalability of deploying multiple bays 
> concurrently: https://review.openstack.org/#/c/275003/
>
> 4. Container storage: 
> https://etherpad.openstack.org/p/newton-magnum-container-storage
> - Allow choice of storage driver
> - Allow choice of data volume driver
> - Work with Kuryr/Fuxi team to have data volume driver available in COEs 
> upstream
>
> 5. Container network: 
> https://etherpad.openstack.org/p/newton-magnum-container-network
> - Discuss how to scope/pass/store OpenStack credential in bays (needed by 
> Kuryr to communicate with Neutron).
> - Several options were explored. No perfect s

Re: [openstack-dev] [nova] next min libvirt?

2016-05-03 Thread Thomas Bechtold
On Tue, May 03, 2016 at 10:08:06AM +0100, Daniel P. Berrange wrote:
> On Sat, Apr 30, 2016 at 10:28:23AM -0500, Thomas Bechtold wrote:
> > Hi,
> > 
> > On Fri, Apr 29, 2016 at 10:13:42AM -0500, Sean Dague wrote:
> > > We've just landed the libvirt min bump to make 1.2.1 required. It's
> > > probably a good time to consider the appropriate bump for Ocata.
> > > 
> > > By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
> > > (1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.
> > >
> > > My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
> > > that NUMA support in libvirt (excepting the blacklists) and huge page
> > > support is assumed on x86_64.
> > 
> > Works also for SUSE which has 1.2.18 already in SLE 12 SP1.
> 
> Is there any public site where I can find details of what RPM versions
> are present in SLES releases? I was trying to find that last week
> but was not able to find any info.

There is https://www.suse.com/products/server/technical-information/#Package
which links to the package list. For whatever reason it is not there
for SP1 yet.

> If there's no public reference, could
> you update the wiki with RPM details for libvirt, kvm and libguestfs:
> 
> https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions

Done for both, SLES and openSUSE.

Cheers,

Tom



[openstack-dev] [StoryBoard] Meetings (and meetup) info

2016-05-03 Thread Zara Zaimeche

Hi all!

At the summit, a few people asked us about StoryBoard meetings (and the 
next StoryBoard meetup). So here are the details:


IRC Meetings (text shamelessly copied from wiki):
-

StoryBoard holds public weekly meetings in #openstack-meeting, 
Wednesdays at 1500 UTC. Everyone who is interested in development and 
use of StoryBoard is encouraged to attend.


In-Person Meetup:
-

https://wiki.openstack.org/wiki/StoryBoard/Milestone_1_Meetup (date= May 
16th; this was voted on in the StoryBoard meeting shortly after the last 
meetup) Feel free to update the etherpad. :)


Hope to see you there! We're also always available in #storyboard on 
freenode.


Best Wishes,

Zara



Re: [openstack-dev] [devstack] - suggested development workflow without ./rejoin-stack.sh ?

2016-05-03 Thread Kashyap Chamarthy
On Mon, May 02, 2016 at 03:10:56PM -0700, Kevin Benton wrote:
> This patch removed the ./rejoin-stack.sh script:
> https://review.openstack.org/#/c/291453/
> 
> I relied on this heavily in my development VM which sees lots of restarts
> because of various things (VM becomes unresponsive in load testing, my
> laptop has a kernel panic, etc). Normally this was not a big deal because I
> could ./rejoin-stack.sh and pick up where I left off (all db objects,
> virtual interfaces, instance images, etc all intact).
> 
> Now am I correct in understanding that when this happens there is no way to
> restart the services in a simple manner without blowing away everything and
> starting over? Unless I'm missing some way to run ./stack.sh without losing
> previous state, this seems like a major regression (went from mostly
> working ./rejoin-stack.sh to nothing).
> 
> What is the recommended way to use devstack without being a power outage
> away from losing hours of work?

FWIW, whenever I feel I have a working env. in DevStack, I take a qcow2
live internal snapshot:

$ sudo virsh snapshot-create-as devstack \
snap1 "Working setup for bug#123"

If something goes terribly wrong in my env, revert to a known sane state:

$ virsh snapshot-revert devstack snap1

This requires[*] that you use the Qcow2 format.  It also does not exactly
solve your 'sudden power outage' issue, but it comes reasonably close if
you save your work at the points in time you care about.

[*] There are other methods, like 'external disk snapshots' (when you
create a snapshot, the current disk becomes a 'backing file' and a new
qcow2 overlay is created to track all new writes from the point of
taking the snapshot), that allow both Raw and Qcow2 file formats.
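
For the external-snapshot variant, a libvirt-python sketch of the same
idea (assuming a domain named 'devstack'; the flags are standard libvirt
constants):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('devstack')

snap_xml = """
<domainsnapshot>
  <name>snap-external-1</name>
  <description>overlay before a risky change</description>
</domainsnapshot>
"""

# Disk-only external snapshot: the current image becomes the backing
# file and new writes go to a fresh qcow2 overlay. Note that reverting
# external snapshots is a manual process (e.g. via blockcommit).
dom.snapshotCreateXML(
    snap_xml,
    libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY
    | libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_ATOMIC)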

-- 
/kashyap



[openstack-dev] [tricircle]Proposing Shinobu Kinjo for Tricircle core reviewer

2016-05-03 Thread joehuang
Hi,

I would like to propose adding Shinobu Kinjo to the Tricircle core reviewer 
team. 

Shinobu has been a highly valuable reviewer for Tricircle over the past few
months. His contributions cover every patch submitted, documents, and etherpad
discussions, and he always gives valuable, meaningful and helpful comments. His
review data can be found at http://stackalytics.com/ (but unfortunately
something is temporarily wrong in Stackalytics, and Tricircle is missing from
the project lists).

I believe Shinobu will be a great addition to the Tricircle team. 

Please respond with +1/-1. Thank you!

Best Regards
Chaoyi Huang ( joehuang )


[openstack-dev] [stackalytics]many projects missing in the "others" category

2016-05-03 Thread joehuang
Hello,

I'm sad to see that some projects are missing again from the "others" category. 
When I wanted to cite some statistics for the Tricircle core reviewer 
nomination, I couldn't find the data for many projects that are usually listed 
in the "others" category. Is there any new rule in Stackalytics?

Best Regards
Chaoyi Huang ( joehuang )


Re: [openstack-dev] [nova] FYI: Citrix XenServer CI is disabled from voting

2016-05-03 Thread Bob Ball
The failure rate was indeed 100%; some requirements depended on packages not 
installed in our CI environment (libssl-dev, libffi-dev), which caused all the 
failures.

This is now fixed and the CI is back to voting on passing changes.  I have 
re-queued all jobs that failed in less than 6 minutes, which I think covers all 
of the above failures.

If anyone hits this after a recheck, please drop us a line at 
openst...@citrix.com

Thanks,

Bob

-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] 
Sent: 02 May 2016 20:18
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [nova] FYI: Citrix XenServer CI is disabled from voting

The Citrix XenServer CI is failing on most, if not all, changes it's running on 
today. Here is an example failure [1]. Devstack fails to set up due to a bad 
package install, so I'm guessing there is a problem in a mirror being used. I 
don't know if this is 100% failure but it's high enough to disable it from 
voting on new nova patch sets.

If you hit this, simply comment on your patch with 'recheck'.

[1]
http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/40/309440/7/13518//run_tests.log

-- 

Thanks,

Matt Riedemann




[openstack-dev] [api] [docs] [cinder] [swift] [glance] [keystone] [ironic] [trove] [neutron] [heat] [senlin] [manila] [sahara] RST + YAML files ready for pick up from WADL migration

2016-05-03 Thread Anne Gentle
Hi all,
This patch contains all the RST + YAML for projects to bring over to their
repos to begin building API reference information from within your repo.
Get a copy of this patch, and pick up the files for your service in
api-site/api-ref/source/:

https://review.openstack.org/#/c/311596/

There is required cleanup, and you'll need an index.rst, conf.py, and build
jobs. All of these can be patterned after the nova repository api-ref
directory. Read more at
http://docs.openstack.org/contributor-guide/api-guides.html
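
For orientation, the target layout looks roughly like this (a sketch only;
file names beyond index.rst and conf.py are illustrative, and the exact
contents should be patterned on the nova api-ref directory noted above):

    your-project/
      api-ref/
        source/
          conf.py          # Sphinx config, patterned on nova's
          index.rst        # top-level index for the API reference
          parameters.yaml  # parameter definitions brought over
          *.rst            # the migrated reference files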

It's overall in good shape thanks to Karen Bradshaw, Auggy Ragwitz, Andreas
Jaeger, and Sean Dague. Appreciate the help over the finish line during
Summit week, y'all.

The api-site/api-ref files are now frozen and we will not accept patches.
The output at developer.openstack.org/api-ref.html remains frozen until we
can provide redirects to the newly-sourced-and-built files. Please, make
this work a priority in this release. Ideally we can get everyone ready by
Milestone 1 (May 31).

If you would like to use a Swagger/OpenAPI file, pick that file up from
developer.openstack.org/draft/swagger/ and create build jobs from your repo
to publish it on developer.openstack.org.

Let me know if you have questions.
Thanks,
Anne

-- 
Anne Gentle
www.justwriteclick.com


Re: [openstack-dev] [glance] Interest in contributing to OpenStack

2016-05-03 Thread Nikhil Komawar
Sounds good, Djimeli.

Best way to get to know the team might be to join us during one of the
weekly meetings http://eavesdrop.openstack.org/#Glance_Team_Meeting

Alternatively, you can join the Glance Glare update meeting to meet a few of
us, if that time is more convenient for you:
http://eavesdrop.openstack.org/#Glance_Glare_Meeting

On 5/3/16 7:45 AM, Djimeli Konrad wrote:
> Hello Nikhil,
>
> Thanks, for the reply. I would really love to make some substantial
> contribution to Glance. I would put in some time familiarizing myself
> with the codebase and I have already assigned myself to a bug
> (https://bugs.launchpad.net/glance/+bug/1569937), so I can start
> working on it. Though it was not easy finding an unsigned bug I could
> work on. Since I am still busy with school work now, I would be able
> to put in about 10 hours per week to work on OpenStack and then I
> would dedicate more time when I am done with school in July.
>
> I have been to the glance irc channel but I have not yet  introduced
> myself there. I would do so when next I have the chance. My irc nick
> is djkonro.
>
> Thanks
> Konrad
>
>


-- 

Thanks,
Nikhil





[openstack-dev] [puppet] Austin summit sessions recap

2016-05-03 Thread Emilien Macchi
Here's a summary of Puppet OpenStack sessions [1] during Austin summit.

* General feedback is excellent, things are stable, and no major changes
are coming during the next cycle.
* We discussed the work we want to do during the Newton cycle [2]:

Ubuntu 16.04 LTS
Make Puppet OpenStack modules work, and be gated, on Ubuntu 16.04,
starting from Newton.
Keep stable/mitaka and earlier gated on Ubuntu 14.04 LTS.

Release management with trailing cycle
The release model changed to:
http://governance.openstack.org/reference/tags/release_cycle-trailing.html
We'll start producing milestones within a cycle, continue efforts on
tarballs and investigate package builds (rpm, etc).

Move documentation out of the Wiki
See [3].

puppet-pacemaker unification
Mirantis & Red Hat will continue collaborating on merging efforts on the
puppet-pacemaker module: https://review.openstack.org/#/c/296440/
So both Fuel & TripleO will use the same Puppet module to deploy Pacemaker.

CI stabilization
We're supporting releases up to 18 months old, so we will continue all
efforts to stabilize our CI and make it robust so it does not break
every morning.

Containers
Most container deployments share common bits (user/group
management, config file management, etc.).
We decided to add these common bits to our modules, so they
can be used by people deploying OpenStack in containers. See [4].

[1] https://etherpad.openstack.org/p/newton-design-puppet
[2] https://etherpad.openstack.org/p/newton-puppet-project-status
[3] https://etherpad.openstack.org/p/newton-puppet-docs
[4] https://etherpad.openstack.org/p/newton-puppet-multinode-containers


As a retrospective, we noticed that we had a quiet agenda &
sessions this time, without critical items. It is a sign that things are
now very stable for our group, and that we did an excellent job to get to
this point.
Thanks to everyone who attended our sessions; feel free to add more
things that I might have missed, or any questions.
-- 
Emilien Macchi



Re: [openstack-dev] [api] [docs] [cinder] [swift] [glance] [keystone] [ironic] [trove] [neutron] [heat] [senlin] [manila] [sahara] RST + YAML files ready for pick up from WADL migration

2016-05-03 Thread Anne Gentle
On Tue, May 3, 2016 at 8:39 AM, Sheel Rana Insaan 
wrote:

> Hi Anne,
>
> Sure, I will pick for cinder.
>
> Thanks!!
>
>
Thank you! I added your name to the wiki page:

https://wiki.openstack.org/wiki/Documentation/Migrate#API_Reference_Plan

Meant to add that to my email -- if you are interested in working on the
migration for a service, please add your name to the wiki page.

Thanks,
Anne


> Best Regards,
> Sheel Rana
>
> On Tue, May 3, 2016 at 6:59 PM, Anne Gentle  > wrote:
>
>> Hi all,
>> This patch contains all the RST + YAML for projects to bring over to
>> their repos to begin building API reference information from within your
>> repo. Get a copy of this patch, and pick up the files for your service in
>> api-site/api-ref/source/:
>>
>> https://review.openstack.org/#/c/311596/
>>
>> There is required cleanup, and you'll need an index.rst, conf.py, and
>> build jobs. All of these can be patterned after the nova repository api-ref
>> directory. Read more at
>> http://docs.openstack.org/contributor-guide/api-guides.html
>>
>> It's overall in good shape thanks to Karen Bradshaw, Auggy Ragwitz,
>> Andreas Jaeger, and Sean Dague. Appreciate the help over the finish line
>> during Summit week, y'all.
>>
>> The api-site/api-ref files are now frozen and we will not accept patches.
>> The output at developer.openstack.org/api-ref.html remains frozen until
>> we can provide redirects to the newly-sourced-and-built files. Please, make
>> this work a priority in this release. Ideally we can get everyone ready by
>> Milestone 1 (May 31).
>>
>> If you would like to use a Swagger/OpenAPI file, pick that file up from
>> developer.openstack.org/draft/swagger/ and create build jobs from your
>> repo to publish it on developer.openstack.org.
>>
>> Let me know if you have questions.
>> Thanks,
>> Anne
>>
>> --
>> Anne Gentle
>> www.justwriteclick.com
>>


-- 
Anne Gentle
www.justwriteclick.com


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Clint Byrum
Excerpts from Matt Fischer's message of 2016-05-02 16:39:02 -0700:
> On Mon, May 2, 2016 at 5:26 PM, Clint Byrum  wrote:
> 
> > Hello! I enjoyed very much listening in on the default token provider
> > work session last week in Austin, so thanks everyone for participating
> > in that. I did not speak up then, because I wasn't really sure of this
> > idea that has been bouncing around in my head, but now I think it's the
> > case and we should consider this.
> >
> > Right now, Keystones without fernet keys, are issuing UUID tokens. These
> > tokens will be in the database, and valid, for however long the token
> > TTL is.
> >
> > The moment that one changes the configuration, keystone will start
> > rejecting these tokens. This will cause disruption, and I don't think
> > that is fair to the users who will likely be shown new bugs in their
> > code at a very unexpected moment.
> >
> 
> This will reduce the interruption and will also as you said possibly catch
> bugs. We had bugs in some custom python code that didn't get a new token
> when the keystone server returned certain code, but we found all those in
> our dev environment.
> 
> From an operational POV, I can't imagine that any operators will go to work
> one day and find out that they have a new token provider because of a new
> default. Wouldn't the settings in keystone.conf be under some kind of
> config management? I don't know what distros do with new defaults however,
> maybe that would be the surprise?
> 

"Production defaults" is something we used to mention a lot. One would
hope you can run a very nice Keystone with only the required settings
such as database connection details.

Agreed that upgrades will be conscious decisions by operators, no doubt!

However, the operator is not the one who gets the surprise. It is the
user who doesn't expect their tokens to be invalidated until their TTL
is up. The cloud changes when the operator decides it changes. And if
that is in the middle of something important, the operator has just
induced unnecessary complication on the user.



Re: [openstack-dev] [api] [docs] [cinder] [swift] [glance] [keystone] [ironic] [trove] [neutron] [heat] [senlin] [manila] [sahara] RST + YAML files ready for pick up from WADL migration

2016-05-03 Thread Sheel Rana Insaan
Hi Anne,

Sure, I will pick for cinder.

Thanks!!

Best Regards,
Sheel Rana

On Tue, May 3, 2016 at 6:59 PM, Anne Gentle 
wrote:

> Hi all,
> This patch contains all the RST + YAML for projects to bring over to their
> repos to begin building API reference information from within your repo.
> Get a copy of this patch, and pick up the files for your service in
> api-site/api-ref/source/:
>
> https://review.openstack.org/#/c/311596/
>
> There is required cleanup, and you'll need an index.rst, conf.py, and
> build jobs. All of these can be patterned after the nova repository api-ref
> directory. Read more at
> http://docs.openstack.org/contributor-guide/api-guides.html
>
> It's overall in good shape thanks to Karen Bradshaw, Auggy Ragwitz,
> Andreas Jaeger, and Sean Dague. Appreciate the help over the finish line
> during Summit week, y'all.
>
> The api-site/api-ref files are now frozen and we will not accept patches.
> The output at developer.openstack.org/api-ref.html remains frozen until
> we can provide redirects to the newly-sourced-and-built files. Please, make
> this work a priority in this release. Ideally we can get everyone ready by
> Milestone 1 (May 31).
>
> If you would like to use a Swagger/OpenAPI file, pick that file up from
> developer.openstack.org/draft/swagger/ and create build jobs from your
> repo to publish it on developer.openstack.org.
>
> Let me know if you have questions.
> Thanks,
> Anne
>
> --
> Anne Gentle
> www.justwriteclick.com
>


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Brant Knudson
On Mon, May 2, 2016 at 6:26 PM, Clint Byrum  wrote:

> Hello! I enjoyed very much listening in on the default token provider
> work session last week in Austin, so thanks everyone for participating
> in that. I did not speak up then, because I wasn't really sure of this
> idea that has been bouncing around in my head, but now I think it's the
> case and we should consider this.
>
> Right now, Keystones without fernet keys, are issuing UUID tokens. These
> tokens will be in the database, and valid, for however long the token
> TTL is.
>
> The moment that one changes the configuration, keystone will start
> rejecting these tokens. This will cause disruption, and I don't think
> that is fair to the users who will likely be shown new bugs in their
> code at a very unexpected moment.
>
> I wonder if one could merge UUID and Fernet into a provider which
> handles this transition gracefully:
>
> if self._fernet_keys:
>

This would have to check that there's an active fernet key and not just a
staging one. You'll want to push out a staging key to all the nodes first
to enable fernet validation before pushing out the active key to enable
token creation. Maybe there's a trick to getting keystone-manage
fernet_setup to only set up a staging key, or you just copy that key around.
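
For illustration, a manual staged rollout could look something like this (a
sketch only; the host name is made up, and it assumes the conventional key
repository layout in which '0' is the staging key):

    # On one node, create the key repository:
    $ keystone-manage fernet_setup --keystone-user keystone \
        --keystone-group keystone
    # Distribute only the staging key ('0') first, so every node can
    # validate fernet tokens before any node starts issuing them:
    $ scp /etc/keystone/fernet-keys/0 keystone2:/etc/keystone/fernet-keys/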

Also, we could have keystone keep track of whether there are any uuid tokens
left, since there's no need to query the database every time we get an invalid
token just to see an empty table.

- Brant


>   return self._issue_fernet_token()
> else:
>   return self._issue_uuid_token()
>
> And in the validation, do the same, but also with an eye toward keeping
> the UUID tokens alive:
>
> if self._fernet_keys:
>   try:
>     self._validate_fernet_token()
>   except InvalidFernetFormatting:
>     self._validate_uuid_token()
> else:
>   self._validate_uuid_token()
>
> So that while one is rolling out new keystone nodes and syncing fernet
> keys, all tokens issued would validate properly, with minimal extra
> cost to support both (basically just a number of UUID tokens will need
> to be parsed twice, once as Fernet, and once as UUID).
>
> Thoughts? I think doing this would make changing the default fairly
> uncontroversial.
>



-- 
- Brant


Re: [openstack-dev] [nova] Austin summit versioned notification

2016-05-03 Thread Matt Riedemann

On 5/3/2016 3:10 AM, Balázs Gibizer wrote:

Hi,

Last week Friday in Austin we discussed the way forward with the versioned
notification transformation in Nova.

We agreed that when we separate the object model used for notifications from
the nova object model, we still use the NovaObject as a base class to avoid
a change in the wire format and the major version bump it would cause.
However we won't register the notification object into the NovaObjectRegistry.


We also said that since the objects won't be registered, we still want 
to test their hashes in case something changes, so we register the 
notification objects in the test that checks for changes (even though 
they aren't registered globally); this will keep us from slipping.



In general we agreed that we move forward with the transformation according
to the spec [1].

Regarding the schema generation for the notifications we agreed to
propose a general JSON Schema generation implementation to
oslo.versionedobjects [2] that can be used in Nova later to generate
schemas for the notification object model.

To have a way to synchronize our effort I'd like to restart the weekly
subteam meeting [5]. As the majority of the subteam is in the US and EU, I
propose to continue with the currently existing time slot, 17:00 UTC every
Tuesday. I proposed the frequency increase from biweekly to weekly here [3].
This means that we can meet today at 17:00 UTC [4] on #openstack-meeting-4.

Cheers,
Gibi

[1] https://review.openstack.org/#/c/286675/ Versioned notification 
transformation
[2] https://review.openstack.org/#/c/311194/ versionedobjects: add json schema 
generation
[3] https://review.openstack.org/#/c/311948/
[4] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160503T17
[5] https://wiki.openstack.org/wiki/Meetings/NovaNotification






--

Thanks,

Matt Riedemann




Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Clint Byrum
Excerpts from Steve Martinelli's message of 2016-05-02 19:56:15 -0700:
> Comments inline...
> 
> On Mon, May 2, 2016 at 7:39 PM, Matt Fischer  wrote:
> 
> > On Mon, May 2, 2016 at 5:26 PM, Clint Byrum  wrote:
> >
> >> Hello! I enjoyed very much listening in on the default token provider
> >> work session last week in Austin, so thanks everyone for participating
> >> in that. I did not speak up then, because I wasn't really sure of this
> >> idea that has been bouncing around in my head, but now I think it's the
> >> case and we should consider this.
> >>
> >> Right now, Keystones without fernet keys, are issuing UUID tokens. These
> >> tokens will be in the database, and valid, for however long the token
> >> TTL is.
> >>
> >> The moment that one changes the configuration, keystone will start
> >> rejecting these tokens. This will cause disruption, and I don't think
> >> that is fair to the users who will likely be shown new bugs in their
> >> code at a very unexpected moment.
> >>
> >
> > This will reduce the interruption and will also as you said possibly catch
> > bugs. We had bugs in some custom python code that didn't get a new token
> > when the keystone server returned certain code, but we found all those in
> > our dev environment.
> >
> > From an operational POV, I can't imagine that any operators will go to
> > work one day and find out that they have a new token provider because of a
> > new default. Wouldn't the settings in keystone.conf be under some kind of
> > config management? I don't know what distros do with new defaults however,
> > maybe that would be the surprise?
> >
> 
> With respect to upgrades, assuming we default to Fernet tokens in the
> Newton release, it's only an issue if the deployer has no token format
> specified (since it defaulted to UUID pre-Newton), and relied on the
> default after the upgrade (since it switches to Fernet in Newton).
> 

Assume all users are using defaults.

> I'm glad Matt outlines his reasoning above since that is nearly exactly
> what Jesse Keating said at the Fernet token work session we had in Austin.
> The straw man we come up with of a deployer that just upgrades without
> checking the config files is just that, a straw man. Upgrades are well
> planned and thought out before being performed. None of the operators in
> the room saw this as an issue. We opened a bug to prevent keystone from
> starting if fernet setup had not been run, and Fernet is the
> selected/defaulted token provider option:
> https://bugs.launchpad.net/keystone/+bug/1576315
> 


Right, I responded there, but just to be clear, this is not about
_operators_ being inconvenienced, it is about _users_.

> For all new installations, deploying your cloud will now have two extra
> steps, running "keystone-manage fernet_setup" and "keystone-manage
> fernet_rotate". We will update the install guide docs accordingly.
> 
> With all that said, we do intend to default to Fernet tokens for the Newton
> release.
> 

Great! They are supremely efficient and I love that we're moving
forward. However, users really do not care about something that just
makes the operator's life easier if it causes all of their stuff to blow
up in non-deterministic ways (since their new jobs won't have that fail,
it will be a really fun day in the debug chair).

> >
> >
> >>
> >> I wonder if one could merge UUID and Fernet into a provider which
> >> handles this transition gracefully:
> >>
> >> if self._fernet_keys:
> >>   return self._issue_fernet_token()
> >> else:
> >>   return self._issue_uuid_token()
> >>
> >> And in the validation, do the same, but also with an eye toward keeping
> >> the UUID tokens alive:
> >>
> >> if self._fernet_keys:
> >>   try:
> >> self._validate_fernet_token()
> >>   except InvalidFernetFormatting:
> >> self._validate_uuid_token()
> >> else:
> >>   self._validate_uuid_token()
> >>
> >
> This just seems sneaky/wrong to me. I'd rather see a failure here than
> switch token formats on the fly.
> 

You say "on the fly" I say "when the operator has configured things
fully".

Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.

Anyway, the idea could use a few kicks, and I think perhaps a better
way to state what I'm thinking is this:

When the operator has configured a new token format to emit, they should
also be able to allow any previously emitted formats to be validated to
allow users a smooth transition to the new format. We can then make the
default behavior for one release cycle to emit Fernet, and honor both
Fernet and UUID.

Perhaps ignore the other bit that I put in there about switching formats
just because you have fernet keys. Let's say the new pseudo code only
happens in validation:

try:
  self._validate_fernet_token()
except NotAFernetToken:
  self._validate_uuid_token()
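
For what it's worth, a minimal sketch of that fallback wrapped up as a
provider (the names are illustrative, not Keystone's actual provider
interface):

    class NotAFernetToken(Exception):
        """Raised when a token id does not parse as Fernet."""

    class TransitionProvider(object):
        def __init__(self, fernet, uuid):
            self.fernet = fernet  # new-format provider
            self.uuid = uuid      # legacy provider

        def issue_token(self, *args, **kwargs):
            # Only the new format is ever emitted.
            return self.fernet.issue_token(*args, **kwargs)

        def validate_token(self, token_id):
            try:
                return self.fernet.validate_token(token_id)
            except NotAFernetToken:
                # Honor UUID tokens issued before the switch,
                # until their TTL runs out.
                return self.uuid.validate_token(token_id)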

I fight for the users -- Tron


Re: [openstack-dev] [nova] Austin summit cells v2 session recap

2016-05-03 Thread Andrew Laski
Thanks for the summary, this is great. Comments inline.


On Mon, May 2, 2016, at 09:32 PM, Matt Riedemann wrote:
> Andrew Laski led a double session for cells v2 on Wednesday afternoon. 
> The full session etherpad is here [1].
> 
> Andrew started with an overview of what's done and what's in progress. 
> Note that some of the background on cells, what's been completed for 
> cells v2 and what's being worked on is also in a summit video from a 
> user conference talk that Andrew gave [2].
> 
> We agreed to add the MQ switching [3] to get_cell_client and see what, 
> if anything, breaks.
> 
> DB migrations
> -------------
> 
> We had a quick rundown on the database tables slated for migration to 
> the API database. Notable items for the DB table migrations:
> 
> * Aggregates and quotas will be migrated, there are specs up for both of 
> these from Mark Doffman.
> * The nova-network related tables won't be migrated since we've 
> deprecated nova-network.
> * The agent_builds table won't be migrated. We plan on deprecating this 
> API since it's only used by the XenAPI virt driver and it sounds like 
> Rackspace doesn't even use/enable it.
> * We have to figure out what to do about the certificates table. The 
> only thing using it is the os-certificates REST API and nova-cert 
> service, and nothing in tree is using either of those now. The problem 
> is, the ec2api repo on GitHub is using the nova-cert rpc api directly 
> for s3 image download. So we need to figure out if we can move that into 
> the ec2api repo and drop it from Nova or find some other solution.
> * keypairs will be migrated to the API DB. There was a TODO about 
> needing to store the keypair type in the instance. I honestly can't 
> remember exactly what that was for now, I'm hoping Andrew remembers.

The metadata api exposes the keypair type but that information is not
passed down during the boot request. Currently the metadata service is
pulling the keypair from the db on each access, and for cellsv1 making
an RPC request to the parent cell for that data. To avoid requiring the
metadata service to query the nova_api database we should pass down
keypair information and persist it with the instance, perhaps in
instance_extra, so that lookups can be done locally to the cell.


> * We agreed to move instance_groups and instance_group_policy to the API 
> DB, but there is a TODO to sort out if instance_group_members should be 
> in the API DB.
> 
> nova-network
> ------------
> 
> For nova-network we agreed that we'll fail hard if someone tries to add 
> a second cell to a cells v2 deployment and they aren't using Neutron.
> 
> Testing
> -------
> 
> Chuck Carmack is working on some test plans for cells v2. There would be 
> a multi-node/cell job where one node is running the API and cell0 and 
> another is running a regular nova cell. There would also be migration 
> testing as part of grenade.
> 
> Documentation
> -------------
> 
> We discussed what needs to be documented and where it should live.
> 
> Since all deployments will at least be a cell of one, setting that up 
> will be in the install guide in docs.o.o. A multi-cell deployment would 
> be documented in the admin guide.
> 
> Anything related to the call path flow for different requests would live 
> in the nova developer documentation (devref).
> 
> Pagination
> ----------
> 
> This took a significant portion of the second cells v2 session and is 
> one of the more complicated problems to sort out. There are problems 
> with listing all instances across all cells, especially when we support 
> sorting. And we really have a latent bug in the API since we never 
> restricted the list of valid sort keys for listing instances, so you can 
> literally sort on anything in the instances table in the DB.
> 
> There were some ideas about how to handle this:
> 
> 1. Don't support sorting in the API if you have multiple cells. Leave it 
> up to the caller to sort the results on their own. Obviously this isn't 
> a great solution for existing users that rely on this in the API.
> 
> 2. Each cell sorts the results individually, and the API merge sorts the 
> results from the cells (see the sketch after this list). There is still 
> overhead here.
> 
> 3. Don't split the database, or use a distributed database like Redis. 
> Since this wasn't brought up in person in the session, or on Friday, it 
> wasn't discussed. There is another thread about this though [4].
> 
> 4. Use the OpenStack Searchlight project for doing massive queries like 
> this. This would be optional for a cell of one but recommended/required 
> for anyone running multiple cells. The downside to this is it's a new 
> dependency, and requires Elasticsearch (but many deployments are 
> probably already running an ELK stack for monitoring their logs). It's 
> also unclear at an early stage how easy this would be to integrate into 
> Nova. Plus deployers would need to set up Searchlight to listen to 
> notifications emitted from Nova so the indexes are updated i
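
Returning to option 2 above: a minimal sketch of merge-sorting per-cell,
individually sorted result lists (the data is illustrative, and heapq.merge's
key argument needs Python 3.5+):

    import heapq

    # Each cell returns its results already sorted on the requested key.
    cell1 = [{'display_name': 'a1'}, {'display_name': 'c1'}]
    cell2 = [{'display_name': 'b2'}, {'display_name': 'd2'}]

    merged = list(heapq.merge(cell1, cell2,
                              key=lambda inst: inst['display_name']))
    # -> a1, b2, c1, d2, without re-sorting the combined list from scratch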

Re: [openstack-dev] [nova] Austin summit versioned notification

2016-05-03 Thread Ryan Rossiter

> On May 3, 2016, at 8:58 AM, Matt Riedemann  wrote:
> 
> On 5/3/2016 3:10 AM, Balázs Gibizer wrote:
>> Hi,
>> 
>> Last week Friday in Austin we discussed the way forward with the versioned
>> notification transformation in Nova.
>> 
>> We agreed that when we separate the object model used for notifications from
>> the nova object model, we still use the NovaObject as a base class to avoid
>> a change in the wire format and the major version bump it would cause.
>> However we won't register the notification object into the 
>> NovaObjectRegistry.
> 
> We also said that since the objects won't be registered, we still want to 
> test their hashes in case something changes, so register the notification 
> objects in the test that checks for changes (even though they aren't 
> registered globally), this will keep us from slipping.

I found yesterday that we do this for the DeviceBus object here [1]. We'll be 
doing something similar with all objects that inherit from the notification 
base objects, in either test_versions() or in setUp() of TestObjectVersions, 
whichever gives us the most coverage and the least interference with other 
tests.

[1]: 
https://github.com/openstack/nova/blob/master/nova/tests/unit/objects/test_objects.py#L1254-L1260
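
A rough sketch of what such a test-local registration plus hash check can
look like with the oslo.versionedobjects test fixture (the object and its
hash are illustrative, not real nova objects):

    from oslo_versionedobjects import base, fields, fixture

    class MyNotification(base.VersionedObject):
        # Deliberately NOT registered globally.
        VERSION = '1.0'
        fields = {'event': fields.StringField()}

    checker = fixture.ObjectVersionChecker(
        obj_classes={'MyNotification': [MyNotification]})
    # The returned dicts only list mismatches, so a test can simply
    # assert both are empty once the real hash has been pinned:
    expected, actual = checker.test_hashes({'MyNotification': '1.0-<hash>'})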

> 
>> In general we agreed that we move forward with the transformation according
>> to the spec [1].
>> 
>> Regarding the schema generation for the notifications we agreed to
>> propose a general JSON Schema generation implementation to
>> oslo.versionedobjects [2] that can be used in Nova later to generate
>> schemas for the notification object model.
>> 
>> To have a way to synchronize our effort I'd like to restart the weekly
>> subteam meeting [5]. As the majority of the subteam is in US and EU I propose
>> to continue the currently existing time slot UTC 17:00 every Tuesday.
>> I proposed the frequency increase from biweekly to weekly here [3].
>> This means that we can meet today 17:00 UTC [4] on #openstack-meeting-4.
>> 
>> Cheers,
>> Gibi
>> 
>> [1] https://review.openstack.org/#/c/286675/ Versioned notification 
>> transformation
>> [2] https://review.openstack.org/#/c/311194/ versionedobjects: add json 
>> schema generation
>> [3] https://review.openstack.org/#/c/311948/
>> [4] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160503T17
>> [5] https://wiki.openstack.org/wiki/Meetings/NovaNotification
>> 
>> 
> 
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 


-
Thanks,

Ryan Rossiter (rlrossit)




[openstack-dev] [puppet] complete this 1 minute survey if you're Puppet OpenStack user

2016-05-03 Thread Emilien Macchi
http://goo.gl/forms/7VYibKHx1c

We would like to gather feedback on what our users are running, so we
can improve our CI and update the versions of Puppet / Ruby /
Operating Systems that we're gating on.

Thanks a lot for your time,
-- 
Emilien Macchi



Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Adam Young

On 05/03/2016 09:55 AM, Clint Byrum wrote:

Excerpts from Steve Martinelli's message of 2016-05-02 19:56:15 -0700:

Comments inline...

On Mon, May 2, 2016 at 7:39 PM, Matt Fischer  wrote:


On Mon, May 2, 2016 at 5:26 PM, Clint Byrum  wrote:


Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.


This will reduce the interruption and will also as you said possibly catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned certain code, but we found all those in
our dev environment.

 From an operational POV, I can't imagine that any operators will go to
work one day and find out that they have a new token provider because of a
new default. Wouldn't the settings in keystone.conf be under some kind of
config management? I don't know what distros do with new defaults however,
maybe that would be the surprise?


With respect to upgrades, assuming we default to Fernet tokens in the
Newton release, it's only an issue if the deployer has no token format
specified (since it defaulted to UUID pre-Newton), and relied on the
default after the upgrade (since it switches to Fernet in Newton).


Assume all users are using defaults.


I'm glad Matt outlines his reasoning above since that is nearly exactly
what Jesse Keating said at the Fernet token work session we had in Austin.
The straw man we come up with of a deployer that just upgrades without
checking the config files is just that, a straw man. Upgrades are well
planned and thought out before being performed. None of the operators in
the room saw this as an issue. We opened a bug to prevent keystone from
starting if fernet setup had not been run, and Fernet is the
selected/defaulted token provider option:
https://bugs.launchpad.net/keystone/+bug/1576315



Right, I responded there, but just to be clear, this is not about
_operators_ being inconvenienced, it is about _users_.


For all new installations, deploying your cloud will now have two extra
steps, running "keystone-manage fernet_setup" and "keystone-manage
fernet_rotate". We will update the install guide docs accordingly.

With all that said, we do intend to default to Fernet tokens for the Newton
release.


Great! They are supremely efficient and I love that we're moving
forward. However, users really do not care about something that just
makes the operator's life easier if it causes all of their stuff to blow
up in non-deterministic ways (since their new jobs won't have that fail,
it will be a really fun day in the debug chair).




I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

if self._fernet_keys:
   return self._issue_fernet_token()
else:
   return self._issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self._fernet_keys:
   try:
     self._validate_fernet_token()
   except InvalidFernetFormatting:
     self._validate_uuid_token()
else:
   self._validate_uuid_token()


This just seems sneaky/wrong to me. I'd rather see a failure here than
switch token formats on the fly.


You say "on the fly" I say "when the operator has configured things
fully".

Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.

Anyway, the idea could use a few kicks, and I think perhaps a better
way to state what I'm thinking is this:

When the operator has configured a new token format to emit, they should
also be able to allow any previously emitted formats to be validated to
allow users a smooth transition to the new format. We can then make the
default behavior for one release cycle to emit Fernet, and honor both
Fernet and UUID.

Perhaps ignore the other bit that I put in there about switching formats
just because you have fernet keys. Let's say the new pseudo code only
happens in validation:

try:
   self._validate_fernet_token()
except NotAFernetToken:
   self._validate_uuid_token()


I was actually thinking of a different migration strategy, exactly the 
opposite: for a while, run with the uuid tokens, but store the Fernet 
body. After a while, switch from validating the uuid token body to the 
stored Fernet. Finally, switch to validating the Fernet token from the

Re: [openstack-dev] Timeframe for naming the P release?

2016-05-03 Thread Jonathan D. Proulx
On Tue, May 03, 2016 at 12:09:40AM -0400, Adam Young wrote:
:On 05/02/2016 08:07 PM, Rochelle Grober wrote:
:>But, the original spelling of the landing site is Plimoth Rock.  There were
:>still highway signs up in the 70's directing folks to "Plimoth Rock"

There are still signs with both spellings, presumably for slightly
different contexts. Even having lived here basically my whole life, I'm
not sure there's a consistent distinction, except that the legal entity
that is the town is always spelled with a 'y'.

:>
:>--Rocky
:>Who should know about rocks ;-)



:And Providence is, I think, close enough for inclusion as well.  And
:that is just the towns.

:
:
:Plymouth is the only County in Mass with a P name, but Penobscot ME
:used to be part of MA, and should probably be in the running as well.


I'd second Providence and Penobscot as 'close enough'.  I'm actually
partial to Providence...

-Jon



[openstack-dev] [vitrage] [aodh] Error when creating 2 event alarms with the same name

2016-05-03 Thread Weyl, Alexey (Nokia - IL)
Hi,

First of all, I want to thank again all the participants in the fruitful 
Aodh-Vitrage design session in Austin :)

In this email I want to show the problem that we have when creating 2 event 
alarms with the same name.
Here is what I got on the command line:

stack@ubuntu-devstack:/etc/vitrage$ ceilometer alarm-list
+----------+------+-------+----------+---------+------------+-----------------+------------------+
| Alarm ID | Name | State | Severity | Enabled | Continuous | Alarm condition | Time constraints |
+----------+------+-------+----------+---------+------------+-----------------+------------------+
+----------+------+-------+----------+---------+------------+-----------------+------------------+

stack@ubuntu-devstack:/etc/vitrage$ ceilometer alarm-event-create --name 'Event Alarm 2' --state alarm --event-type 'my.event'
+---------------------------+--------------------------------------+
| Property                  | Value                                |
+---------------------------+--------------------------------------+
| alarm_actions             | []                                   |
| alarm_id                  | 96f11384-abd7-4c11-b0a5-678646c11e79 |
| description               | Alarm when my.event event occurred.  |
| enabled                   | True                                 |
| event_type                | my.event                             |
| insufficient_data_actions | []                                   |
| name                      | Event Alarm 2                        |
| ok_actions                | []                                   |
| project_id                | bec13d47c22e45a9948981f5cb1ba45b     |
| query                     | []                                   |
| repeat_actions            | False                                |
| severity                  | low                                  |
| state                     | alarm                                |
| type                      | event                                |
| user_id                   | d8812494489546aca8341af184eddd2c     |
+---------------------------+--------------------------------------+

stack@ubuntu-devstack:/etc/vitrage$ ceilometer alarm-list
+--------------------------------------+---------------+-------+----------+---------+------------+----------------------+------------------+
| Alarm ID                             | Name          | State | Severity | Enabled | Continuous | Alarm condition      | Time constraints |
+--------------------------------------+---------------+-------+----------+---------+------------+----------------------+------------------+
| 96f11384-abd7-4c11-b0a5-678646c11e79 | Event Alarm 2 | alarm | low      | True    | False      | query: []            | None             |
|                                      |               |       |          |         |            | event_type: my.event |                  |
+--------------------------------------+---------------+-------+----------+---------+------------+----------------------+------------------+

stack@ubuntu-devstack:/etc/vitrage$ ceilometer alarm-event-create --name 'Event Alarm 2' --state alarm --event-type 'my.event'
Alarm with name='Event Alarm 2' exists (HTTP 409) (Request-ID: req-b05dd105-fd23-47d3-a0b6-940bde6bcdd8)


Do you think it is possible to drop the uniqueness of the alarm name in Aodh 
(for the Vitrage use cases that we talked about in the design session)?

Best regards,
Alexey Weyl




Re: [openstack-dev] Timeframe for naming the P release?

2016-05-03 Thread Monty Taylor

On 05/03/2016 09:29 AM, Jonathan D. Proulx wrote:

On Tue, May 03, 2016 at 12:09:40AM -0400, Adam Young wrote:
:On 05/02/2016 08:07 PM, Rochelle Grober wrote:
:>But, the original spelling of the landing site is Plimoth Rock.  There were still
:>highway signs up in the 70's directing folks to "Plimoth Rock"

There are still signs with both spellings, presumably for slightly
different contexts. Even having lived here basically my whole life, I'm
not sure there's a consistent distinction, except that the legal entity
that is the town is always spelled with a 'y'.

:>
:>--Rocky
:>Who should know about rocks ;-)



:And Providence is, I think, close enough for inclusion as well.  And
:that is just the towns.


If you think the geographic region should be "New England" instead of 
"Massachusetts", please leave a review comment on the review. I thought 
about suggesting New England since we already had one summit in Boston 
anyway. Input highly welcome.



:
:
:Plymouth is the only County in Mass with a P name, but Penobscot ME
:used to be part of MA, and should probably be in the running as well.


I'd second Providence and Penobscot as 'close enough'.  I'm actually
partial to Providence...





Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-05-03 Thread Dane Leblanc (leblancd)


-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
Sent: Tuesday, May 03, 2016 8:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Hi.

On Mon, May 2, 2016 at 7:11 PM, Cammann, Tom  wrote:
> Thanks for the write-up, Hongbin, and thanks to all those who contributed to 
> the design summit. A few comments on the summaries below.
>
> 6. Ironic Integration: 
> https://etherpad.openstack.org/p/newton-magnum-ironic-integration
> - Start the implementation immediately
> - Prefer quick work-around for identified issues (cinder volume 
> attachment, variation of number of ports, etc.)
>
> We need to implement a bay template that can use a flat networking model as 
> this is the only networking model Ironic currently supports. Multi-tenant 
> networking is imminent. This should be done before work on an Ironic template 
> starts.

I think the work on bay templates for multi-tenant networking can start now if 
we cherry pick this patch from Ironic:
https://review.openstack.org/#/c/256367/
and this patch for Ironic python client:
https://review.openstack.org/#/c/206144/
These should have all the dependency patches for supporting Ironic/Neutron-ML2 
integration. (We'd still need to figure out e.g. how to specify which of N nics 
to use, and how to specify/detect LAG groups.) The Ironic support for Cinder 
volumes doesn’t seem as far along, but in the interim we can probably use 
volume drivers such as rexray (don't know if this works for kube).
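
As an aside, those changes can be pulled into local checkouts with
git-review, whose -x flag cherry-picks a Gerrit change onto the current
branch (a sketch; run each command in the corresponding repo):

    $ git review -x 256367   # in the ironic tree
    $ git review -x 206144   # in the python-ironicclient tree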

>
> 7. Magnum adoption challenges: 
> https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
> - The challenges is listed in the etherpad above
>
> Ideally we need to turn this list into a set of actions which we can 
> implement over the cycle, i.e. create a BP to remove the requirement for LBaaS.

There's one for floating IPs already:
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips

>
> 9. Magnum Heat template version: 
> https://etherpad.openstack.org/p/newton-magnum-heat-template-versionin
> g
> - In each bay driver, version the template and template definition.
> - Bump template version for minor changes, and bump bay driver version for 
> major changes.
>
> We decided only bay driver versioning was required. The template and template 
> driver do not need versioning because we can get heat to pass back 
> the template which it used to create the bay.

This was also my understanding. We won't use heat template versioning, just the 
bays.

> 10. Monitoring: 
> https://etherpad.openstack.org/p/newton-magnum-monitoring
> - Add support for sending notifications to Ceilometer
> - Revisit bay monitoring and self-healing later
> - Container monitoring should not be done by Magnum, but it can be done by 
> cAdvisor, Heapster, etc.
>
> We split this topic into 3 parts – bay telemetry, bay monitoring, container 
> monitoring.
> Bay telemetry is done around actions such as bay/baymodel CRUD operations. 
> This is implemented using ceilometer notifications.
> Bay monitoring is around monitoring health of individual nodes in the bay 
> cluster and we decided to postpone work as more investigation is required on 
> what this should look like and what users actually need.
> Container monitoring focuses on what containers are running in the bay and 
> general usage of the bay COE. We decided this will be completed by 
> Magnum by baking in access to cAdvisor/Heapster by default.

I think we're missing a blueprint for this one too.

Ricardo

>
> - Manually manage bay nodes (instead of being managed by Heat ResourceGroup): 
> It can address the use case of heterogeneous bay nodes (e.g. different 
> availability zones, flavors), but we need to elaborate the details further.
>
> The idea revolves around creating a heat stack for each node in the bay. This 
> idea shows a lot of promise but needs more investigation and isn’t a current 
> priority.
>
> Tom
>
>
> From: Hongbin Lu 
> Reply-To: "OpenStack Development Mailing List (not for usage 
> questions)" 
> Date: Saturday, 30 April 2016 at 05:05
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: [openstack-dev] [magnum] Notes for Magnum design summit
>
> Hi team,
>
> For reference, below is a summary of the discussions/decisions in Austin 
> design summit. Please feel free to point out if anything is incorrect or 
> incomplete. Thanks.
>
> 1. Bay driver: 
> https://etherpad.openstack.org/p/newton-magnum-bay-driver
> - Refactor existing code into bay drivers
> - Each bay driver will be versioned
> - Individual bay driver can have API extension and magnum CLI could 
> load the extensions dynamically
> - Work incrementally and support same API before and after the driver 
> change
>
> 2. Bay lifecycle operations: 
> https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operatio
> ns
> - Support the follo

Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Lance Bragstad
If we were to write a uuid/fernet hybrid provider, it would only be
expected to support something like stable/liberty to stable/mitaka, right?
This is something that we could contribute to stackforge, too.

On Tue, May 3, 2016 at 9:21 AM, Adam Young  wrote:

> On 05/03/2016 09:55 AM, Clint Byrum wrote:
>
>> Excerpts from Steve Martinelli's message of 2016-05-02 19:56:15 -0700:
>>
>>> Comments inline...
>>>
>>> On Mon, May 2, 2016 at 7:39 PM, Matt Fischer 
>>> wrote:
>>>
>>> On Mon, May 2, 2016 at 5:26 PM, Clint Byrum  wrote:

 Hello! I enjoyed very much listening in on the default token provider
> work session last week in Austin, so thanks everyone for participating
> in that. I did not speak up then, because I wasn't really sure of this
> idea that has been bouncing around in my head, but now I think it's the
> case and we should consider this.
>
> Right now, Keystones without fernet keys, are issuing UUID tokens.
> These
> tokens will be in the database, and valid, for however long the token
> TTL is.
>
> The moment that one changes the configuration, keystone will start
> rejecting these tokens. This will cause disruption, and I don't think
> that is fair to the users who will likely be shown new bugs in their
> code at a very unexpected moment.
>
> This will reduce the interruption and will also as you said possibly
 catch
 bugs. We had bugs in some custom python code that didn't get a new token
 when the keystone server returned certain code, but we found all those
 in
 our dev environment.

  From an operational POV, I can't imagine that any operators will go to
 work one day and find out that they have a new token provider because
 of a
 new default. Wouldn't the settings in keystone.conf be under some kind
 of
 config management? I don't know what distros do with new defaults
 however,
 maybe that would be the surprise?

 With respect to upgrades, assuming we default to Fernet tokens in the
>>> Newton release, it's only an issue if the deployer has no token
>>> format
>>> specified (since it defaulted to UUID pre-Newton), and relied on the
>>> default after the upgrade (since it switches to Fernet in Newton).
>>>
>>> Assume all users are using defaults.
>>
>> I'm glad Matt outlines his reasoning above since that is nearly exactly
>>> what Jesse Keating said at the Fernet token work session we had in
>>> Austin.
>>> The straw man we come up with of a deployer that just upgrades without
>>> checking the config files is just that, a straw man. Upgrades are well
>>> planned and thought out before being performed. None of the operators in
>>> the room saw this as an issue. We opened a bug to prevent keystone from
>>> starting if fernet setup had not been run, and Fernet is the
>>> selected/defaulted token provider option:
>>> https://bugs.launchpad.net/keystone/+bug/1576315
>>>
>>>
>> Right, I responded there, but just to be clear, this is not about
>> _operators_ being inconvenienced, it is about _users_.
>>
>> For all new installations, deploying your cloud will now have two extra
>>> steps, running "keystone-manage fernet_setup" and "keystone-manage
>>> fernet_rotate". We will update the install guide docs accordingly.
>>>
>>> With all that said, we do intend to default to Fernet tokens for the
>>> Newton
>>> release.
>>>
>>> Great! They are supremely efficient and I love that we're moving
>> forward. However, users really do not care about something that just
>> makes the operator's life easier if it causes all of their stuff to blow
>> up in non-deterministic ways (since their new jobs won't have that fail,
>> it will be a really fun day in the debug chair).
>>
>>
 I wonder if one could merge UUID and Fernet into a provider which
> handles this transition gracefully:
>
> if self._fernet_keys:
>return self._issue_fernet_token()
> else:
>return self._issue_uuid_token()
>
> And in the validation, do the same, but also with an eye toward keeping
> the UUID tokens alive:
>
> if self._fernet_keys:
>try:
>  self._validate_fernet_token()
>except InvalidFernetFormatting:
>  self._validate_uuid_token()
> else:
>self._validate_uuid_token()
>
> This just seems sneaky/wrong to me. I'd rather see a failure here than
>>> switch token formats on the fly.
>>>
>>> You say "on the fly" I say "when the operator has configured things
>> fully".
>>
>> Perhaps we have different perspectives. How is accepting what we
>> previously emitted and told the user would be valid sneaky or wrong?
>> Sounds like common sense due diligence to me.
>>
>> Anyway, the idea could use a few kicks, and I think perhaps a better
>> way to state what I'm thinking is this:
>>
>> When the operator has configured a new token format to emit, they should
>> also be able to allow any previously emitt

Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Monty Taylor

On 05/03/2016 08:55 AM, Clint Byrum wrote:

Excerpts from Steve Martinelli's message of 2016-05-02 19:56:15 -0700:

Comments inline...

On Mon, May 2, 2016 at 7:39 PM, Matt Fischer  wrote:


On Mon, May 2, 2016 at 5:26 PM, Clint Byrum  wrote:


Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.



This will reduce the interruption and will also as you said possibly catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned certain code, but we found all those in
our dev environment.

 From an operational POV, I can't imagine that any operators will go to
work one day and find out that they have a new token provider because of a
new default. Wouldn't the settings in keystone.conf be under some kind of
config management? I don't know what distros do with new defaults however,
maybe that would be the surprise?



With respect to upgrades, assuming we default to Fernet tokens in the
Newton release, it's only an issue if the deployer has no token format
specified (since it defaulted to UUID pre-Newton), and relied on the
default after the upgrade (since it switches to Fernet in Newton).



Assume all users are using defaults.


I'm glad Matt outlines his reasoning above since that is nearly exactly
what Jesse Keating said at the Fernet token work session we had in Austin.
The straw man we come up with of a deployer that just upgrades without
checking the config files is just that, a straw man. Upgrades are well
planned and thought out before being performed. None of the operators in
the room saw this as an issue. We opened a bug to prevent keystone from
starting if fernet setup had not been run, and Fernet is the
selected/defaulted token provider option:
https://bugs.launchpad.net/keystone/+bug/1576315




Right, I responded there, but just to be clear, this is not about
_operators_ being inconvenienced, it is about _users_.


I have confusion.

token format isn't really a thing users care about, like, ever. A token 
is an opaque blob you get from authenticating, and sometimes it expires 
and you have to reauthenticate. That re-auth must be accounted for in 
all of your user code, or else you'll have random sads (if you use 
keystoneauth it's handled for you; if you don't, it's on you).


If the operator rolls out fernet where it was uuid, the worst thing that 
will happen is that a token will "expire" before it needed to. As much 
as I'm normally a fountain for user indignation and rage ... I'm not 
sure end-users have any issues here.
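
(For anyone wondering what "handled for you" means there, a minimal
keystoneauth sketch follows -- the endpoint URLs and credentials are
placeholders:)

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    # The session caches the token and transparently re-authenticates when
    # the token expires or is rejected, so a uuid->fernet switch shows up
    # as one extra auth round trip rather than an application error.
    resp = sess.get('http://nova.example.com:8774/v2.1/servers')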



For all new installations, deploying your cloud will now have two extra
steps, running "keystone-manage fernet_setup" and "keystone-manage
fernet_rotate". We will update the install guide docs accordingly.

With all that said, we do intend to default to Fernet tokens for the Newton
release.



Great! They are supremely efficient and I love that we're moving
forward. However, users really do not care about something that just
makes the operator's life easier if it causes all of their stuff to blow
up in non-deterministic ways (since their new jobs won't have that fail,
it will be a really fun day in the debug chair).






I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

if self._fernet_keys:
   return self._issue_fernet_token()
else:
   return self._issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self._fernet_keys:
   try:
 self._validate_fernet_token()
   except InvalidFernetFormatting:
 self._validate_uuid_token()
else:
   self._validate_uuid_token()




This just seems sneaky/wrong to me. I'd rather see a failure here than
switch token formats on the fly.



You say "on the fly" I say "when the operator has configured things
fully".

Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.


I agree - I see no reason we can't validate previously emitted tokens. 
But I don't agree strongly, because re-authing on invalid token is a 
thing users do hundreds of times a day. (these aren't oauth API Keys or 
anything)



Anyway, the idea could use a few kicks, and I think perhaps a better
way to state what I'm thinking is this: when the operator has configured
a new token format to emit, they should also be able to allow any
previously emitted tokens to be validated.
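
To make that concrete, a rough sketch of what such a hybrid provider could
look like -- this assumes a simplified provider interface, not keystone's
actual token provider API, and all names are hypothetical:

    class HybridTokenProvider(object):
        """Issue fernet once keys exist; keep validating legacy UUID tokens."""

        def __init__(self, fernet_keys, fernet_provider, uuid_provider):
            self._fernet_keys = fernet_keys
            self._fernet = fernet_provider
            self._uuid = uuid_provider

        def issue_token(self, *args, **kwargs):
            if self._fernet_keys:
                return self._fernet.issue_token(*args, **kwargs)
            return self._uuid.issue_token(*args, **kwargs)

        def validate_token(self, token_id):
            if not self._fernet_keys:
                return self._uuid.validate_token(token_id)
            try:
                return self._fernet.validate_token(token_id)
            except ValueError:
                # Not fernet-formatted; may be a still-valid legacy UUID
                # token issued before the operator switched formats.
                return self._uuid.validate_token(token_id)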

Re: [openstack-dev] [vitrage] [aodh] Error when creating 2 event alarms with the same name

2016-05-03 Thread Julien Danjou
On Tue, May 03 2016, Weyl, Alexey (Nokia - IL) wrote:

> Do you think it is possible to drop the uniqueness of the alarm name in Aodh
> (for the Vitrage use cases that we talked about in the design session)?

Should be doable yeah, it's really just a (badly written) check in the
API.

Shoot us a patch! :)
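
For the record, the check boils down to something like the sketch below
(function and storage-API names here are hypothetical, not aodh's actual
code), so a patch is mostly a matter of deleting or gating this call in
the alarm-creation path:

    def _ensure_name_unique(storage_conn, project_id, name):
        # Reject creation if any alarm in this project already has the name.
        existing = storage_conn.get_alarms(name=name, project_id=project_id)
        if list(existing):
            raise ValueError("Alarm with name=%r exists" % name)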

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] [aodh] Error when creating 2 event alarms with the same name

2016-05-03 Thread ZhiQiang Fan
The alarm name uniqueness constraint is only applied per project. I don't
remember the original rationale, but in our customers' environments alarms
are shown in the portal by name, not UUID, because users are confused by
such random-looking strings; if alarm names could be duplicated, it would
be hard for them to tell alarms apart.

So is there a particular reason why you need to create duplicate names? Can
it be something like event-alarm-{event_type}-{seq_number}?

Anyway, it is not so hard to remove this constraint. I just want to say
that alarm names should be meaningful, otherwise they are no better than
UUIDs: not human friendly.



On Tue, May 3, 2016 at 10:30 PM, Weyl, Alexey (Nokia - IL) <
alexey.w...@nokia.com> wrote:

> Hi,
>
> First of all, I wanted to thank again to all the participants in the
> fruitful Aodh-Vitrage design session in Austin :)
>
> I wanted to show in this email, the problem that we have when creating 2
> event alarms with the same name.
> Here is what I got in the command line:
>
> stack@ubuntu-devstack:/etc/vitrage$ ceilometer alarm-list
> +----------+------+-------+----------+---------+------------+-----------------+------------------+
> | Alarm ID | Name | State | Severity | Enabled | Continuous | Alarm condition | Time constraints |
> +----------+------+-------+----------+---------+------------+-----------------+------------------+
> +----------+------+-------+----------+---------+------------+-----------------+------------------+
>
> stack@ubuntu-devstack:/etc/vitrage$ ceilometer alarm-event-create --name
> 'Event Alarm 2' --state alarm --event-type 'my.event'
> +---------------------------+--------------------------------------+
> | Property                  | Value                                |
> +---------------------------+--------------------------------------+
> | alarm_actions             | []                                   |
> | alarm_id                  | 96f11384-abd7-4c11-b0a5-678646c11e79 |
> | description               | Alarm when my.event event occurred.  |
> | enabled                   | True                                 |
> | event_type                | my.event                             |
> | insufficient_data_actions | []                                   |
> | name                      | Event Alarm 2                        |
> | ok_actions                | []                                   |
> | project_id                | bec13d47c22e45a9948981f5cb1ba45b     |
> | query                     | []                                   |
> | repeat_actions            | False                                |
> | severity                  | low                                  |
> | state                     | alarm                                |
> | type                      | event                                |
> | user_id                   | d8812494489546aca8341af184eddd2c     |
> +---------------------------+--------------------------------------+
>
> stack@ubuntu-devstack:/etc/vitrage$ ceilometer alarm-list
> +--------------------------------------+---------------+-------+----------+---------+------------+----------------------+------------------+
> | Alarm ID                             | Name          | State | Severity | Enabled | Continuous | Alarm condition      | Time constraints |
> +--------------------------------------+---------------+-------+----------+---------+------------+----------------------+------------------+
> | 96f11384-abd7-4c11-b0a5-678646c11e79 | Event Alarm 2 | alarm | low      | True    | False      | query: []            | None             |
> |                                      |               |       |          |         |            | event_type: my.event |                  |
> +--------------------------------------+---------------+-------+----------+---------+------------+----------------------+------------------+
>
> stack@ubuntu-devstack:/etc/vitrage$ ceilometer alarm-event-create --name
> 'Event Alarm 2' --state alarm --event-type 'my.event'
> Alarm with name='Event Alarm 2' exists (HTTP 409) (Request-ID:
> req-b05dd105-fd23-47d3-a0b6-940bde6bcdd8)
>
>
> Do you think it is possible to drop the uniqueness of the alarm name in
> Aodh (for the Vitrage use cases that we talked about in the design session)?
>
> Best regards,
> Alexey Weyl
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Austin summit versioned notification

2016-05-03 Thread Balázs Gibizer
> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: May 03, 2016 15:58
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Austin summit versioned notification
> 
> On 5/3/2016 3:10 AM, Balázs Gibizer wrote:
> > Hi,
> >
> > Last week Friday in Austin we discussed the way forward with the
> versioned
> > notification transformation in Nova.
> >
> > We agreed that when we separate the object model use for notifications
> from
> > the nova object model we still use the NovaObject as a base class to avoid
> > change in the wire format and the major version bump it would cause.
> > However we won't register the notification object into the
> NovaObjectRegistry.
> 
> We also said that since the objects won't be registered, we still want
> to test their hashes in case something changes, so register the
> notification objects in the test that checks for changes (even though
> they aren't registered globally), this will keep us from slipping.

Thanks for pointing this out. The spec is already up to date with these
agreements.

> 
> > In general we agreed that we move forward with the transformation
> according
> > to the spec [1].
> >
> > Regarding the schema generation for the notifications we agreed to
> > propose a general JSON Schema generation implementation to
> > oslo.versionedobjects [2] that can be used in Nova later to generate
> > schemas for the notification object model.
> >
> > To have a way to synchronize our effort I'd like to restart the weekly
> > subteam meeting [5]. As the majority of the subteam is in US and EU I
> propose
> > to continue the currently existing time slot UTC 17:00 every Tuesday.
> > I proposed the frequency increase from biweekly to weekly here [3].
> > This means that we can meet today 17:00 UTC [4] on #openstack-meeting-4.
> >
> > Cheers,
> > Gibi
> >
> > [1] https://review.openstack.org/#/c/286675/ Versioned notification
> transformation
> > [2] https://review.openstack.org/#/c/311194/ versionedobjects: add json
> schema generation
> > [3] https://review.openstack.org/#/c/311948/
> > [4]
> https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160503T17
> > [5] https://wiki.openstack.org/wiki/Meetings/NovaNotification
> >
> >
> >
> __
> 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Austin summit versioned notification

2016-05-03 Thread Balázs Gibizer
> -Original Message-
> From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
> Sent: May 03, 2016 16:10
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Austin summit versioned notification
> 
> 
> > On May 3, 2016, at 8:58 AM, Matt Riedemann
>  wrote:
> >
> > On 5/3/2016 3:10 AM, Balázs Gibizer wrote:
> >> Hi,
> >>
> >> Last week Friday in Austin we discussed the way forward with the
> versioned
> >> notification transformation in Nova.
> >>
> >> We agreed that when we separate the object model use for notifications
> from
> >> the nova object model we still use the NovaObject as a base class to avoid
> >> change in the wire format and the major version bump it would cause.
> >> However we won't register the notification object into the
> NovaObjectRegistry.
> >
> > We also said that since the objects won't be registered, we still want to 
> > test
> their hashes in case something changes, so register the notification objects 
> in
> the test that checks for changes (even though they aren't registered
> globally), this will keep us from slipping.
> 
> I found yesterday that we do this for the DeviceBus object here [1]. We'll be
> doing something similar with all objects that inherit from the notification 
> base
> objects in either the test_versions(), or in setUp() of TestObjectVersions,
> whichever gives us the most coverage and least interference on other tests.

Thanks for the idea. I will fix up the patch [6] based on this code soon.

Cheers,
Gibi

[6] https://review.openstack.org/#/c/309454/
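
For illustration, the pattern looks roughly like this (the payload class
name and import path are hypothetical; see the real DeviceBus example at
[1] below):

    from nova.objects import base
    from nova import test
    from nova.notifications.objects import instance as notification

    class TestObjectVersions(test.NoDBTestCase):
        def setUp(self):
            super(TestObjectVersions, self).setUp()
            # Not in the global NovaObjectRegistry, but registered here so
            # the version-hash check still covers it and catches changes.
            base.NovaObjectRegistry.register(
                notification.InstanceUpdatePayload)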

> 
> [1]:
> https://github.com/openstack/nova/blob/master/nova/tests/unit/objects/test_objects.py#L1254-L1260
> 
> >
> >> In general we agreed that we move forward with the transformation
> according
> >> to the spec [1].
> >>
> >> Regarding the schema generation for the notifications we agreed to
> >> propose a general JSON Schema generation implementation to
> >> oslo.versionedobjects [2] that can be used in Nova later to generate
> >> schemas for the notification object model.
> >>
> >> To have a way to synchronize our effort I'd like to restart the weekly
> >> subteam meeting [5]. As the majority of the subteam is in US and EU I
> propose
> >> to continue the currently existing time slot UTC 17:00 every Tuesday.
> >> I proposed the frequency increase from biweekly to weekly here [3].
> >> This means that we can meet today 17:00 UTC [4] on #openstack-meeting-4.
> >>
> >> Cheers,
> >> Gibi
> >>
> >> [1] https://review.openstack.org/#/c/286675/ Versioned notification
> transformation
> >> [2] https://review.openstack.org/#/c/311194/ versionedobjects: add json
> schema generation
> >> [3] https://review.openstack.org/#/c/311948/
> >> [4]
> https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160503T17
> >> [5] https://wiki.openstack.org/wiki/Meetings/NovaNotification
> >>
> >>
> >>
> __
> 
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> > --
> >
> > Thanks,
> >
> > Matt Riedemann
> >
> >
> >
> __
> 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> -
> Thanks,
> 
> Ryan Rossiter (rlrossit)
> 
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-05-03 Thread Edward Leafe
On May 3, 2016, at 6:45 AM, Miles Gould  wrote:

>> This DB could be an RDBMS or Cassandra, depending on the deployer's 
>> preferences
> AFAICT this would mean introducing and maintaining a layer that abstracts 
> over RDBMSes and Cassandra. That's a big abstraction, over two quite 
> different systems, and it would be hard to write code that performs well in 
> both cases. If performance in this layer is critical, then pick whichever DB 
> architecture handles the expected query load better and use that.

Agreed - you simply can’t structure the data the same way. When I read 
criticisms of Cassandra that include “you can’t do joins” or “you can’t 
aggregate”, it highlights this fact: you have to think about (and store) your 
data completely differently than you would in an RDBMS. You cannot simply 
abstract out the differences.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Timeframe for naming the P release?

2016-05-03 Thread David_Paterson
Providence

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com]
Sent: Tuesday, May 03, 2016 9:38 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Timeframe for naming the P release?

On 05/03/2016 09:29 AM, Jonathan D. Proulx wrote:
> On Tue, May 03, 2016 at 12:09:40AM -0400, Adam Young wrote:
> :On 05/02/2016 08:07 PM, Rochelle Grober wrote:
> :>But, the original spelling of the landing site is Plimoth Rock. There were 
> still highway signs up in the 70's directing folks to "Plimoth Rock"
>
> There are still signs with both spellings, presumably for slightly
> different contexts. Even having lived here basically my whole life, I'm
> not sure if there's a consistent distinction, except that the legal entity
> that is the town is always spelled with a y.
>
> :>
> :>--Rocky
> :>Who should know about rocks ;-)
>
>
>
> :And Providence is, I think, close enough for inclusion as well. And
> :that is just the towns.

If you think the geographic region should be "New England" instead of 
"Massachusetts", please leave a review comment on the review. I thought about 
suggesting New England since we already had one summit in Boston anyway. Input 
highly welcome.

> :
> :
> :Plymouth is the only county in Mass with a P name, but Penobscot ME
> :used to be part of MA, and should probably be in the running as well.
>
>
> I'd second Providence and Penobscot as 'close enough'. I'm actually
> partial to Providence...


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][all] Build unified abstraction for all COEs

2016-05-03 Thread Hongbin Lu
Hi all,

According to the decision at the design summit [1], we are going to narrow the
scope of the Magnum project [2]. In particular, Magnum will focus on COE
deployment and management. The effort of building a unified container
abstraction will potentially go into a new project. My role here is to collect
interest in the new project, help to create a new team (if there is enough
interest), and then pass the responsibility to the new team. An etherpad was
created for this purpose:

https://etherpad.openstack.org/p/container-management-service

If you are interested in contributing to and/or leveraging the new container
service, please state your name and requirements in the etherpad. Your input
will be appreciated.

[1] https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
[2] https://review.openstack.org/#/c/311476/

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-23-16 11:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

Magnum is not a COE installer. It offers multi-tenancy from the ground up, is
well integrated with OpenStack services, and has more COE features
pre-configured than you would get with an ordinary stock deployment. For example, magnum
offers integration with keystone that allows developer self-service to get a 
native container service in a few minutes with the same ease as getting a 
database server from Trove. It allows cloud operators to set up the COE 
templates in a way that they can be used to fit policies of that particular 
cloud.

Keeping a COE working with OpenStack requires expertise that the Magnum team 
has codified across multiple options.

--
Adrian

On Apr 23, 2016, at 2:55 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
I do not necessarily agree with the viewpoint below, but it was the majority
viewpoint when I was trying to sell Magnum. There are people interested in
adopting Magnum, but they ran away after they figured out that what Magnum
actually offers is a COE deployment service. My takeaway is that COE
deployment is not the real pain point, and there are several alternatives
available (Heat, Ansible, Chef, Puppet, Juju, etc.). Limiting Magnum to be a
COE deployment service might prolong the existing adoption problem.

Best regards,
Hongbin

From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: April-20-16 6:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

If Magnum will be focused on installation and management for COE it will be 
unclear how much it is different from Heat and other generic orchestrations.  
It looks like most of the current Magnum functionality is provided by Heat. 
Magnum focus on deployment will potentially lead to another Heat-like  API.
Unless Magnum is really focused on containers its value will be minimal for 
OpenStack users who already use Heat/Orchestration.


On Wed, Apr 20, 2016 at 3:12 PM, Keith Bray <keith.b...@rackspace.com> wrote:
Magnum doesn't have to preclude the tight integration for single COEs you
speak of.  The heavy lifting of tightly integrating a COE into
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in modules to Magnum, not performed by
Magnum itself). The tight integration can be done by leveraging existing
technologies (Heat and/or choose your DevOps tool of choice:
Chef/Ansible/etc). This allows interested community members to focus on
tight integration of whatever COE they want, focusing specifically on the
COE integration part, contributing that integration focus to Magnum via
plug-ins, without having to actually know much about Magnum, but instead
contribute to the COE plug-in using DevOps tools of choice.   Pegging
Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
etc. project for every COE of interest, all with different ways of kicking
off COE management.  Magnum could unify that experience for users and
operators, without picking a winner in the COE space - this is just like
Nova not picking a winner between VM flavors or OS types.  It just
facilitates instantiation and management of things.  Opinion here:  The
value of Magnum is in being a light-weight/thin API, providing modular
choice and plug-ability to COE provisioning and management, thereby
providing operators and users choice of COE instantiation and management
(via the bay concept), where each COE can be as tightly or loosely
integrated as desired by different plug-ins contributed to perform the COE
setup and configurations.  So, Magnum could have two or more swarm plug-in
options contributed to the community.. One overlays generic swarm on VMs.
The other swarm plug-in could instantiate swarm tightly integrated to
neutron, keystone

Re: [openstack-dev] [tc] Leadership training dates - please confirm attendance

2016-05-03 Thread Colette Alexander
On Fri, Apr 22, 2016 at 7:57 AM, Colette Alexander
 wrote:
> On Thu, Apr 21, 2016 at 10:42 AM, Doug Hellmann  wrote:
>>
>> Excerpts from Colette Alexander's message of 2016-04-21 08:07:52 -0700:
>>
>> >
>> > Hi everyone,
>> >
>> > Just checking in on this - if you're a current or past member of the TC and
>> > haven't yet signed up on the etherpad [0] and would like to attend
>> > training, please do so by tomorrow if you can! If you're waiting on travel
>> > approval or something else before you confirm, but want me to hold you a
>> > spot, just ping me on IRC and let me know.
>> >
>> > If you'd like to go to leadership training and you're *not* a past or
>> > current TC member, stay tuned - I'll know about free spots and will send
>> > out information during the summit next week.
>> >
>> > Thank you!
>> >
>> > -colette/gothicmindfood
>> >
>> > [0] https://etherpad.openstack.org/p/Leadershiptraining
>>
>> I've been waiting to have a chance to confer with folks in Austin. Are
>> we under a deadline to get a head-count?

Hey All -

Just checking in on the sign up sheet. I know a few folks mentioned
they were hoping to get travel approval from their managers last week
at the Summit.

Please sign up here as soon as you can (preferably by Friday, May 6th)
and let me know if you're interested but can't confirm travel before
then:  https://etherpad.openstack.org/p/Leadershiptraining

A few folks who are not TC members or TC emeriti have already
contacted me to express interest in attending. If you fall into that
category and think you might want to go to this first, experimental
round of training, and can get travel expenses covered, please feel
free to ping me now to let me know you're interested and/or talk about
logistics. Once I have a final count of TC-related attendance, I'll
make an official announcement on the dev list of extra space
available, and ask folks who are interested to sign up.

It was great to speak with so many of you about this in Austin last week!

Thanks,

-colette/gothicmindfood

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Timeframe for naming the P release?

2016-05-03 Thread Carol Bouchard (caboucha)
Here are some other ideas:

Paul-Revere   https://en.wikipedia.org/wiki/Paul_Revere
Peabody       A town northeast of Boston.  No special reason except that
              saying the name personally makes me chuckle.
Patriots      https://en.wikipedia.org/wiki/New_England_Patriots


-Original Message-
From: Jonathan D. Proulx [mailto:j...@csail.mit.edu] 
Sent: Tuesday, May 03, 2016 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Timeframe for naming the P release?

On Tue, May 03, 2016 at 12:09:40AM -0400, Adam Young wrote:
:On 05/02/2016 08:07 PM, Rochelle Grober wrote:
:>But, the original spelling of the landing site is Plimoth Rock.  There were 
still highway signs up in the 70's directing folks to "Plimoth Rock"

There are still signs with both spellings, presumably for slightly different
contexts. Even having lived here basically my whole life, I'm not sure if
there's a consistent distinction, except that the legal entity that is the town
is always spelled with a y.

:>
:>--Rocky
:>Who should know about rocks ;-)



:And Providence is, I think, close enough for inclusion as well.  And :that is
just the towns.

:
:
:Plymouth is the only county in Mass with a P name, but Penobscot ME :used to
be part of MA, and should probably be in the running as well.


I'd second Providence and Penobscot as 'close enough'.  I'm actually partial
to Providence...

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-03 Thread Steven Dake (stdake)
Paul,

Just to be clear, we are not putting master on pause for 4-6 weeks to
split apart the repos to enable kubernetes development.  The options on the
table at this point are:
A) kolla repo as it exists today and empty repo for k8s
B) kolla repo as it exists today with kubernetes integrated

A pause would essentially kill any kubernetes effort.  Plus there is a
whole bunch of reasons why not to split the main kolla repo.  The fact
that our tools don't work well for this means that developers are less
likely to go through backporting bug fixes, which means our stable
branches may fall into disrepair.

Keep in mind, our stable branches fell into disrepair last time because of
tools.  We were not using launchpad correctly as a team, which we are now
doing.  As a result back-porting is consistently done and done well.  A
bunch of manual backports will result in a lower quality code base and I
have concerns folks wouldn't end up backporting - or worse make errors
since the process would no longer be automated.

Regards
-steve

On 5/3/16, 2:23 AM, "Paul Bourke"  wrote:

>Having read through the full thread I'm still in support of separate
>repos. I think the explanations Jeff Peeler and Adam Young have put
>forward summarise my thoughts very well.
>
>One of the main arguments I seem to be hearing for a single repo is Git
>tooling which I don't think is a good one; we should do what's best for
>users and devs, not for tools.
>
>Also as the guys pointed out, multiple repos are the most common pattern
>across OpenStack. I think it will help keep a better separation of
>concerns. Otherwise in my experience you start to get cross
>contamination of the projects, to the point where it becomes extremely
>difficult to pull them apart.
>
>The images, ansible, and k8s need to be separate. The alternative is not
>scalable.
>
>Thanks,
>-Paul
>
>On 03/05/16 00:39, Angus Salkeld wrote:
>> On Mon, May 2, 2016 at 7:07 AM Steven Dake (stdake) wrote:
>>
>> Ryan had rightly pointed out that when we made the original proposal
>> at 9am that morning, we had asked folks if they wanted to participate in a
>> separate repository.
>>
>> I don't think a separate repository is the correct approach based
>> upon one off private conversations with folks at summit.  Many
>> people from that list approached me and indicated they would like to
>> see the work integrated in one repository as outlined in my vote
>> proposal email.  The reasons I heard were:
>>
>>   * Better integration of the community
>>   * Better integration of the code base
>>   * Doesn't present an us vs them mentality that one could argue
>> happened during kolla-mesos
>>   * A second repository makes k8s a second class citizen deployment
>> architecture without a voice in the full deployment methodology
>>   * Two gating methods versus one
>>   * No going back to a unified repository while preserving git
>>history
>>
>> In favor of the separate repositories I heard:
>>
>>   * It presents a unified workspace for kubernetes alone
>>   * Packaging without ansible is simpler as the ansible directory
>> need not be deleted
>>
>> There were other complaints but not many pros.  Unfortunately I
>> failed to communicate these complaints to the core team prior to the
>> vote, so now is the time for fixing that.
>>
>> I'll leave it open to the new folks that want to do the work if they
>> want to work on an offshoot repository and open us up to the
>> possible problems above.
>>
>>
>> +1 to the separate repo
>>
>> I think the separate repo worked very well for us and would encourage
>> you to replicate that again. Having one repo doing one thing makes the
>> goal of the repo obvious and makes the api between the images and
>> deployment clearer (also the stablity of that
>> api and things like permissions *cough* drop-root).
>>
>> -Angus
>>
>>
>> If you are on this list:
>>
>>   * Ryan Hallisey
>>   * Britt Houser
>>
>>   * mark casey
>>
>>   * Steven Dake (delta-alpha-kilo-echo)
>>
>>   * Michael Schmidt
>>
>>   * Marian Schwarz
>>
>>   * Andrew Battye
>>
>>   * Kevin Fox (kfox)
>>
>>   * Sidharth Surana (ssurana)
>>
>>   *   Michal Rostecki (mrostecki)
>>
>>   *Swapnil Kulkarni (coolsvap)
>>
>>   *MD NADEEM (mail2nadeem92)
>>
>>   *Vikram Hosakote (vhosakot)
>>
>>   *Jeff Peeler (jpeeler)
>>
>>   *Martin Andre (mandre)
>>
>>   *Ian Main (Slower)
>>
>>   * Hui Kang (huikang)
>>
>>   * Serguei Bezverkhi (sbezverk)
>>
>>   * Alex Polvi (polvi)
>>
>>   * Rob Mason
>>
>>   * Alicja Kwasniewska
>>
>>   * sean mooney (sean-k-mooney)
>>
>>   * Keith Byrne (kbyrne)
>>
>>   * Zdenek Janda (xdeu)
>>
>>   * Brandon Jozsa (v1k0d3n)
>>
>>   * Rajath Agasthya (rajathagasthya)
>>   * Jinay Vora
>>   * Hui Kang
>>   * Dav

[openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson
TC,

In reference to 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html and 
Thierry's reply, I'm currently drafting a TC resolution to update 
http://governance.openstack.org/resolutions/20150901-programming-languages.html 
to include Go as a supported language in OpenStack projects.

As a starting point, what would you like to see addressed in the document I'm 
drafting?

--John





signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-05-03 Thread Mike Bayer



On 05/02/2016 01:48 PM, Clint Byrum wrote:




FWIW, I agree with you. If you're going to use SQLAlchemy, use it to
take advantage of the relational model.

However, how is what you describe a win? Whether you use SELECT .. FOR
UPDATE, or a stored procedure, the lock is not distributed, and thus, will
still suffer rollback failures in Galera. For single DB server setups, you
don't have to worry about that, and SELECT .. FOR UPDATE will work fine.


Well it's a "win" vs. the lesser approach considered which also did not 
include a distributed locking system like Zookeeper.   It is also a win 
even with a Zookeeper-like system in place because it allows a SQL query 
to be much smarter about selecting data that involves IP numbers and 
CIDRs, without the need to pull data into memory and process it there. 
This is the most common mistake in SQL programming, not taking advantage 
of SQL's set-based nature and instead pulling data into memory 
unnecessarily.


Also, the "federated MySQL" approach of Cells V2 would still be OK with 
pessimistic locking, since this lock is not "distributed" across the 
entire dataspace.   Only the usual Galera caveats apply, e.g. point to 
only one galera "master" at a time and/or wait for Galera to support 
"SELECT FOR UPDATE" across the cluster.





Furthermore, any logic that happens inside the database server is extra
load on a much much much harder resource to scale, using code that is
much more complicated to update.


So I was careful to use the term "stored function" and not "stored 
procedure".   As ironic as it is for me to defend both the ORM 
business-logic-in-the-application-not-the-database position, *and* the 
let-the-database-do-things-not-the-application at the same time, using 
database functions to allow new kinds of math and comparison operations 
to take place over sets is entirely reasonable, and should not be 
confused with the old-school big-business approach of building an entire 
business logic layer as a huge wall of stored procedures, this is 
nothing like that.


The PostgreSQL database has native INET and CIDR types, which include the
same overlap logic we are implementing here as a MySQL stored function,
so the addition of math functions like these shouldn't be controversial.
  The "load" of this function is completely negligible (however I would 
be glad to assist in load testing it to confirm), especially compared to 
pulling the same data across the wire, processing it in Python, then 
sending just a tiny portion of it back again after we've extracted the 
needle from the haystack.
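
As a sketch of the same idea on the PostgreSQL side, where the types and
operators are native (table and column names are illustrative, and this
reuses Base/session from the sketch above):

    from sqlalchemy import cast
    from sqlalchemy.dialects.postgresql import CIDR

    class Subnet(Base):
        __tablename__ = 'subnets'
        id = Column(Integer, primary_key=True)
        cidr = Column(CIDR)

    # Let the database answer "which subnets overlap 10.0.0.0/24?" with
    # the native && (overlaps) operator, instead of pulling every row into
    # Python and comparing there.
    overlapping = (session.query(Subnet)
                   .filter(Subnet.cidr.op('&&')(cast('10.0.0.0/24', CIDR)))
                   .all())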


In pretty much every kind of load testing scenario we do with Openstack, 
the actual "load" on the database barely pushes anything.   The only 
database "resource" issue we have is Openstack using far more idle 
connections than it should, which is on my end to work on improvements 
to the connection pooling system which does not scale well across 
Openstack's tons-of-processes model.





To be clear, it's not the amount of data, but the size of the failure
domain. We're more worried about what will happen to those 40,000 open
connections from our 4000 servers when we do have to violently move them.


That's a really big number and I will admit I would need to dig into 
this particular problem domain more deeply to understand what exactly 
the rationale of that kind of scale would be here.   But it does seem 
like if you were using SQL databases, and the 4000 server system is in 
fact grouped into hundreds of "silos" that only deal with strict 
segments of the total dataspace, a federated approach would be exactly 
what you'd want to go with.





That particular problem isn't as scary if you have a large
Cassandra/MongoDB/Riak/ROME cluster, as the client libraries are
generally connecting to all or most of the nodes already, and will
simply use a different connection if the initial one fails. However,
these other systems also bring a whole host of new problems which the
simpler SQL approach doesn't have.


Regarding ROME, I only seek to make the point that if you're going to 
switch to NoSQL, you have to switch to NoSQL.   Bolting SQLAlchemy on 
top of Redis without a mature and widely-proven relational layer in 
between, down to the level of replicating the actual tables that were 
built within a relational schema, is a denial of the reality of the 
problem to be solved.






So it's worth doing an actual analysis of the failure handling before
jumping to the conclusion that a pile of cells/sharding code or a rewrite
to use a distributed database would be of benefit.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not f

Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-03 Thread Jeffrey Zhang
I am +1 for splitting out the kolla-k8s repo, too.

Here is the reason:

1. Kolla will be split into several repos in the future: kolla-docker,
   kolla-ansible. So if we use one repo for k8s, we will have to split it
   again later, which will be more painful.

2. Normally, kolla-docker, kolla-ansible, and kolla-k8s have few relations
   between each other, and we need to decouple them. Splitting the repos will
   help with that; different reviewers/committers could then focus on their
   own domains.

On Tue, May 3, 2016 at 11:48 PM, Steven Dake (stdake) 
wrote:

> Paul,
>
> Just to be clear, we are not putting master on pause for 4-6 weeks to
> split apart the repos to enable kubernetes development.  The options on the
> table at this point are:
> A) kolla repo as it exists today and empty repo for k8s
> B) kolla repo as it exists today with kubernetes integrated
>
> A pause would essentially kill any kubernetes effort.  Plus there is a
> whole bunch of reasons why not to split the main kolla repo.  The fact
> that our tools don't work well for this means that developers are less
> likely to go through backporting bug fixes, which means our stable
> branches may fall into disrepair.
>
> Keep in mind, our stable branches fell into disrepair last time because of
> tools.  We were not using launchpad correctly as a team, which we are now
> doing.  As a result back-porting is consistently done and done well.  A
> bunch of manual backports will result in a lower quality code base and I
> have concerns folks wouldn't end up backporting - or worse make errors
> since the process would no longer be automated.
>
> Regards
> -steve
>
> On 5/3/16, 2:23 AM, "Paul Bourke"  wrote:
>
> >Having read through the full thread I'm still in support of separate
> >repos. I think the explanations Jeff Peeler and Adam Young have put
> >forward summarise my thoughts very well.
> >
> >One of the main arguments I seem to be hearing for a single repo is Git
> >tooling which I don't think is a good one; we should do what's best for
> >users and devs, not for tools.
> >
> >Also as the guys pointed out, multiple repos are the most common pattern
> >across OpenStack. I think it will help keep a better separation of
> >concerns. Otherwise in my experience you start to get cross
> >contamination of the projects, to the point where it becomes extremely
> >difficult to pull them apart.
> >
> >The images, ansible, and k8s need to be separate. The alternative is not
> >scalable.
> >
> >Thanks,
> >-Paul
> >
> >On 03/05/16 00:39, Angus Salkeld wrote:
> >> On Mon, May 2, 2016 at 7:07 AM Steven Dake (stdake) wrote:
> >>
> >> Ryan had rightly pointed out that when we made the original proposal
> >> at 9am that morning, we had asked folks if they wanted to participate in a
> >> separate repository.
> >>
> >> I don't think a separate repository is the correct approach based
> >> upon one off private conversations with folks at summit.  Many
> >> people from that list approached me and indicated they would like to
> >> see the work integrated in one repository as outlined in my vote
> >> proposal email.  The reasons I heard were:
> >>
> >>   * Better integration of the community
> >>   * Better integration of the code base
> >>   * Doesn't present an us vs them mentality that one could argue
> >> happened during kolla-mesos
> >>   * A second repository makes k8s a second class citizen deployment
> >> architecture without a voice in the full deployment methodology
> >>   * Two gating methods versus one
> >>   * No going back to a unified repository while preserving git
> >>history
> >>
> >> In favor of the separate repositories I heard:
> >>
> >>   * It presents a unified workspace for kubernetes alone
> >>   * Packaging without ansible is simpler as the ansible directory
> >> need not be deleted
> >>
> >> There were other complaints but not many pros.  Unfortunately I
> >> failed to communicate these complaints to the core team prior to the
> >> vote, so now is the time for fixing that.
> >>
> >> I'll leave it open to the new folks that want to do the work if they
> >> want to work on an offshoot repository and open us up to the
> >> possible problems above.
> >>
> >>
> >> +1 to the separate repo
> >>
> >> I think the separate repo worked very well for us and would encourage
> >> you to replicate that again. Having one repo doing one thing makes the
> >> goal of the repo obvious and makes the api between the images and
> >> deployment clearer (also the stability of that
> >> api and things like permissions *cough* drop-root).
> >>
> >> -Angus
> >>
> >>
> >> If you are on this list:
> >>
> >>   * Ryan Hallisey
> >>   * Britt Houser
> >>
> >>   * mark casey
> >>
> >>   * Steven Dake (delta-alpha-kilo-echo)
> >>
> >>   * Michael Schmidt
> >>
> >>   * Marian Schwarz
> >>
> >>   * Andrew Battye
> >>
> >>   * Kevin Fox (kfox)

[openstack-dev] [neutron] neutron-qos meeting cancelled this week

2016-05-03 Thread Nate Johnston
All,

Because many of us are still recovering from the summit or on vacation, we
are cancelling the neutron-qos meeting for tomorrow.  We will resume with
the next meeting, on May 18th.

Thanks,

--N.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kolla] lock the distro version in the stable branch

2016-05-03 Thread Jeffrey Zhang
Hey guys,

Recently, Ubuntu 16.04 came out, and it broke Kolla when using ubuntu:latest
to build the images.

Even though Kolla supports multiple base tags, it will fail when using any
base tag other than centos:7, ubuntu:14.04, or rhel:7, and it is hard to
support every kind of image tag.

So I propose that Kolla restrict the base tag. The latest tag is mutable and
we should not use it, especially in the stable branch. A release built from a
mutable image is never a *stable* release.

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Tim Bell
John,

How would Oslo-like functionality be included? Would the aim be to produce
equivalent libraries?

Tim




On 03/05/16 17:58, "John Dickinson"  wrote:

>TC,
>
>In reference to 
>http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html and 
>Thierry's reply, I'm currently drafting a TC resolution to update 
>http://governance.openstack.org/resolutions/20150901-programming-languages.html
> to include Go as a supported language in OpenStack projects.
>
>As a starting point, what would you like to see addressed in the document I'm 
>drafting?
>
>--John
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Rayson Ho
I like Go! However, Go does not offer binary compatibility between point
releases. For those who install from source it may not be a big issue, but
for commercial distributions that pre-package & pre-compile everything,
the compiled Go libs won't be compatible with older/newer releases of the
Go compiler that the user may want to install on their systems.

Rayson

==
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/
http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html




On Tue, May 3, 2016 at 11:58 AM, John Dickinson  wrote:

> TC,
>
> In reference to
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html
> and Thierry's reply, I'm currently drafting a TC resolution to update
> http://governance.openstack.org/resolutions/20150901-programming-languages.html
> to include Go as a supported language in OpenStack projects.
>
> As a starting point, what would you like to see addressed in the document
> I'm drafting?
>
> --John
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-05-03 Thread Clint Byrum
Excerpts from Edward Leafe's message of 2016-05-03 08:20:36 -0700:
> On May 3, 2016, at 6:45 AM, Miles Gould  wrote:
> 
> >> This DB could be an RDBMS or Cassandra, depending on the deployer's 
> >> preferences
> > AFAICT this would mean introducing and maintaining a layer that abstracts 
> > over RDBMSes and Cassandra. That's a big abstraction, over two quite 
> > different systems, and it would be hard to write code that performs well in 
> > both cases. If performance in this layer is critical, then pick whichever 
> > DB architecture handles the expected query load better and use that.
> 
> Agreed - you simply can’t structure the data the same way. When I read 
> criticisms of Cassandra that include “you can’t do joins” or “you can’t 
> aggregate”, it highlights this fact: you have to think about (and store) your 
> data completely differently than you would in an RDBMS. You cannot simply 
> abstract out the differences.
> 

Right, once one accepts that fact, Cassandra looks a lot less like a
revolutionary database, and a lot more like a sharding toolkit.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson
That's a good question, and I'll be sure to address it. Thanks.

In the context of "golang code in swift", any discussion around a "goslo" 
library would be up to the oslo team, I think. The proposed functionality that 
would be in golang in swift does not currently depend on any oslo library. In 
general, if the TC supports Go, I'd think it wouldn't be any different than the 
question of "where's the oslo libraries for javascript [which is already an 
approved language]?"

--John




On 3 May 2016, at 9:14, Tim Bell wrote:

> John,
>
> How would Oslo-like functionality be included? Would the aim be to produce
> equivalent libraries?
>
> Tim
>
>
>
>
> On 03/05/16 17:58, "John Dickinson"  wrote:
>
>> TC,
>>
>> In reference to 
>> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html and 
>> Thierry's reply, I'm currently drafting a TC resolution to update 
>> http://governance.openstack.org/resolutions/20150901-programming-languages.html
>>  to include Go as a supported language in OpenStack projects.
>>
>> As a starting point, what would you like to see addressed in the document 
>> I'm drafting?
>>
>> --John
>>
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson
That's an interesting point. I'm not very familiar with Golang itself yet, and 
I haven't yet had to manage any Golang projects in prod. These sorts of 
questions are great!

If a distro is distributing pre-compiled binaries, isn't the compatibility 
issue up to the distros? OpenStack is not distributing binaries (or even distro 
packages!), so while it's an important question, how does it affect the 
question of golang being an ok language in which to write openstack source code?

--John




On 3 May 2016, at 9:16, Rayson Ho wrote:

> I like Go! However, Go does not offer binary compatibility between point
> releases. For those who install from source it may not be a big issue, but
> for commercial distributions that pre-package & pre-compile everything,
> then the compiled Go libs won't be compatible with old/new releases of the
> Go compiler that the user may want to install on their systems.
>
> Rayson
>
> ==
> Open Grid Scheduler - The Official Open Source Grid Engine
> http://gridscheduler.sourceforge.net/
> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
>
>
>
>
> On Tue, May 3, 2016 at 11:58 AM, John Dickinson  wrote:
>
>> TC,
>>
>> In reference to
>> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html
>> and Thierry's reply, I'm currently drafting a TC resolution to update
>> http://governance.openstack.org/resolutions/20150901-programming-languages.html
>> to include Go as a supported language in OpenStack projects.
>>
>> As a starting point, what would you like to see addressed in the document
>> I'm drafting?
>>
>> --John
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] config: deduce related options for config generator?

2016-05-03 Thread Markus Zoeller
While working on [1] I came across a config option ("pybasedir")
which gets used as a base for many other options, for example
"state_path". The option "state_path" then shows a default value of
"state_path = $pybasedir".
My question here is: is it possible/reasonable to enhance oslo.config
so that "pybasedir" carries the information that it is used as a base
for other config options?
My concern is that one could change "pybasedir" and expect that only
this one single value changes, but actually one changes multiple other
config options as well. Making it explicit that "pybasedir" gets used
multiple times as a base could prevent confusion.
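
For illustration, here is the interpolation behavior I mean, as a
self-contained snippet (the paths are made up):

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts([
        cfg.StrOpt('pybasedir',
                   default='/usr/lib/python2.7/site-packages/nova'),
        cfg.StrOpt('state_path', default='$pybasedir'),
    ])

    print(conf.state_path)   # /usr/lib/python2.7/site-packages/nova
    conf.set_override('pybasedir', '/opt/nova')
    print(conf.state_path)   # /opt/nova -- changed without touching state_path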

References:
[1] https://review.openstack.org/#/c/299236/7/nova/conf/paths.py

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][nova][SR-IOV] SR-IOV meeting May 3 2016 - update

2016-05-03 Thread Moshe Levi
Hi,



I just wanted to give a short update regarding the SR-IOV/PCI Passthrough/NFV
meeting.

* We decided to change the meeting frequency to every week, until
  PCI/SR-IOV/NUMA is more stable - see [1]
* Improving SR-IOV/PCI Passthrough/NFV testing
  - With the help of wznoinsk we are working to move the Mellanox CI to
    containers (owner: lennyb)
  - Multi-node CI for SR-IOV/PCI Passthrough/NFV (needs an owner)
  - CI for PF passthrough (needs an owner)
* Documentation
  - Improve PCI Passthrough/SR-IOV documentation (owners: lbeliveau, moshele)
  - Improve NUMA and CPU pinning documentation (owner: sfinucan)
* I updated the etherpad [2] with the agenda for next week (May 10 2016) and
  added the following sections:
  - Patches ready for core reviews
  - Patches for sub-team review - please try to review them before our next
    meeting
  - Patches that need an owner - feel free to add your IRC name to the
    patches you think you can continue

[1] - https://review.openstack.org/#/c/312107/

[2] - https://etherpad.openstack.org/p/sriov_meeting_agenda



Thanks,

 Moshe Levi.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Austin summit - session recap/summary

2016-05-03 Thread Steven Hardy
Hi all,

Some folks have requested a summary of our summit sessions, as has been
provided for some other projects.

I'll probably go into more detail on some of these topics either via
subsequent more focussed threads and/or some blog posts, but what follows is
an overview of our summit sessions[1] with notable actions or decisions
highlighted.  I'm including some of my own thoughts and conclusions, folks
are welcome/encouraged to follow up with their own clarifications or
different perspectives :)

TripleO had a total of 5 sessions in Austin I'll cover them one-by-one:

-
Upgrades - current status and roadmap
-

In this session we discussed the current state of upgrades - initial
support for full major version upgrades has been implemented, but the
implementation is monolithic, highly coupled to pacemaker, and inflexible
with regard to third-party extraconfig changes.

The main outcomes were that we will add support for more granular
definition of the upgrade lifecycle to the new composable services format,
and that we will explore moving towards the proposed lightweight HA
architecture to reduce the need for so much pacemaker specific logic.

We also agreed that investigating use of mistral to drive upgrade workflows
was a good idea - currently we have a mixture of scripts combined with Heat
to drive the upgrade process, and some refactoring into discrete mistral
workflows may provide a more maintainable solution.  Potential for using
the existing SoftwareDeployment approach directly via mistral (outside of
the heat templates) was also discussed as something to be further
investigated and prototyped.

We also touched on the CI implications of upgrades - we've got an upgrades
job now, but we need to ensure coverage of full release-to-release upgrades
(not just commit to commit).

---
Containerization status/roadmap
---

In this session we discussed the current status of containers in TripleO
(which is to say, the container based compute node which deploys containers
via Heat onto an an Atomic host node that is also deployed via Heat), and
what strategy is most appropriate to achieve a fully containerized TripleO
deployment.

Several folks from Kolla participated in the session, and there was
significant focus on where work may happen such that further collaboration
between communities is possible.  To some extent this discussion on where
(as opposed to how) proved a distraction and prevented much discussion on
supportable architectural implementation for TripleO, thus what follows is
mostly my perspective on the issues that exist:

Significant uncertainty exists wrt integration between Kolla and TripleO -
there's largely consensus that we want to consume the container images
defined by the Kolla community, but much less agreement that we can
feasibly switch to the ansible-orchestrated deployment/config flow
supported by Kolla without breaking many of our primary operator interfaces
in a fundamentally unacceptable way, for example:

- The Mistral based API is being implemented on the expectation that the
  primary interface to TripleO deployments is a parameters schema exposed
  by a series of Heat templates - this is no longer true in a "split stack"
  model where we have to hand off to an alternate service orchestration tool.

- The tripleo-ui (based on the Mistral based API) consumes heat parameter
  schema to build its UI, and Ansible doesn't support the necessary
  parameter schema definition (such as types and descriptions) to enable
  this pattern to be replicated.  Ansible also doesn't provide a HTTP API,
  so we'd still have to maintain and API surface for the (non python) UI to
  consume.

We also discussed ideas around integration with kubernetes (a hot topic on
the Kolla track this summit), but again this proved inconclusive beyond
agreement that, yes, someone should try developing a PoC to stimulate further
discussion.  Again, significant challenges exist:

- We still need to maintain the Heat parameter interfaces for the API/UI,
  and there is also a strong preference to maintain puppet as a tool for
  generating service configuration (so that existing operator integrations
  via puppet continue to function) - this is a barrier to directly
  consuming the kolla-kubernetes effort directly.

- A COE layer like kubernetes is a poor fit for deployments where operators
  require strict control of service placement (e.g. exactly which nodes a service
  runs on, IP address assignments to specific nodes, etc.) - this is already
  a strong requirement for TripleO users and we need to figure out if/how
  it's possible to control container placement per node/namespace.

- There are several uncertainties regarding the HA architecture, such as
  how we achieve fencing for nodes (which is currently provided via
  pacemaker), and in particular the HA model for real production deployments
  via kube

Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Rayson Ho
On Tue, May 3, 2016 at 12:24 PM, John Dickinson  wrote:

> That's an interesting point. I'm not very familiar with Golang itself yet,
> and I haven't yet had to manage any Golang projects in prod. These sorts of
> questions are great!
>
>
See: https://golang.org/doc/go1compat



> If a distro is distributing pre-compiled binaries, isn't the compatibility
> issue up to the distros? OpenStack is not distributing binaries (or even
> distro packages!), so while it's an important question, how does it affect
> the question of golang being an ok language in which to write openstack
> source code?
>


I mean a commercial OpenStack distro...

OpenStack does not distribute binaries today (because Python is an
interpreted language), but Go is a compiled language. So maybe I should
simplify my question -- in what form should a commercial OpenStack distro
distribute OpenStack components written in Go?

Rayson

==
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/
http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html





>
> --John
>
>
>
>
> On 3 May 2016, at 9:16, Rayson Ho wrote:
>
> > I like Go! However, Go does not offer binary compatibility between point
> > releases. For those who install from source it may not be a big issue,
> > but for commercial distributions that pre-package & pre-compile everything,
> > the compiled Go libs won't be compatible with old/new releases of the
> > Go compiler that the user may want to install on their systems.
> >
> > Rayson
> >
> > ==
> > Open Grid Scheduler - The Official Open Source Grid Engine
> > http://gridscheduler.sourceforge.net/
> > http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
> >
> >
> >
> >
> > On Tue, May 3, 2016 at 11:58 AM, John Dickinson  wrote:
> >
> >> TC,
> >>
> >> In reference to
> >> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html
> >> and Thierry's reply, I'm currently drafting a TC resolution to update
> >> http://governance.openstack.org/resolutions/20150901-programming-languages.html
> >> to include Go as a supported language in OpenStack projects.
> >>
> >> As a starting point, what would you like to see addressed in the document
> >> I'm drafting?
> >>
> >> --John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Clint Byrum
Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:
> On 05/03/2016 08:55 AM, Clint Byrum wrote:
> >
> > Perhaps we have different perspectives. How is accepting what we
> > previously emitted and told the user would be valid sneaky or wrong?
> > Sounds like common sense due diligence to me.
> 
> I agree - I see no reason we can't validate previously emitted tokens. 
> But I don't agree strongly, because re-authing on invalid token is a 
> thing users do hundreds of times a day. (these aren't oauth API Keys or 
> anything)
> 

Sure, one should definitely not be expecting everything to always work
without errors. On this we agree for sure. However, when we decide to
intentionally induce errors in a way we have not before, we
should weigh the cost of avoiding that against the cost of having it
happen. Consider this strawman:

- User gets token, it says "expires_at Now+4 hours"
- User starts a brief set of automation tasks in their system
  that does not use python and has not failed with invalid tokens thus
  far.
- Keystone nodes are all updated at one time (AMAZING cloud ops team)
- User's automation jobs fail at next OpenStack REST call
- User begins debugging, wasting hours of time figuring out that
  their tokens, which they stored and show should still be valid, were
  rejected.

And now they have to refactor their app, because this may happen again,
and they have to make sure that invalid token errors can bubble up to the
layer that has the username/password, or accept rolling back and
retrying the whole thing.

I'm not saying anybody has this system, I'm suggesting we're putting
undue burden on users with an unknown consequence. Falling back to UUID
for a while has a known cost of a little bit of code and checking junk
tokens twice.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][kolla-k8s] Core team

2016-05-03 Thread Michał Jastrzębski
Hello,

Since it seems that we have voted for separation of the kolla-k8s repos
(yay!), I would like to table another discussion (but let's wait till
it's official).

Core Team.

We need to build up a new core team that will guard the gates on our
brand new repo (when it arrives). One of the ideas Steven pointed out is
to add people from the etherpad to the core team, but I'd like to throw a
different idea into the mix, to keep things interesting.

The idea is: let's start with the current kolla core team and, for the time
being, add new cores to kolla-k8s by invitation from an existing core
member. For example, I'm a kolla core working with k8s, and if I see
someone doing a great job and investing time into it, I would propose him
for core, and instead of the normal voting, he would get his +2 powers
immediately. This would allow a quick core team buildout rather than starting
with a bunch of people who don't necessarily want to contribute or even
know each other.

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] config: deduce related options for config generator?

2016-05-03 Thread Doug Hellmann
Excerpts from Markus Zoeller's message of 2016-05-03 18:26:50 +0200:
> While working on [1] I came across a config option ("pybasedir")
> which gets used as a base for many other options, for example
> "state_path". The option "state_path" then shows a default value of
> "state_path = $pybasedir".
> My question here is: is it possible/reasonable to enhance oslo.config
> to add information to "pybasedir" noting that it is used as a base for
> other config options?
> My concern is that one could change "pybasedir" and expect that only
> this one single value changes, but actually one changes multiple other
> config options as well. Making it explicit that "pybasedir" gets used
> multiple times as a base could prevent confusion.
> 
> References:
> [1] https://review.openstack.org/#/c/299236/7/nova/conf/paths.py
> 
> Regards, Markus Zoeller (markus_z)
> 

(Sorry if this is a dupe, I'm having mail client issues.)

We can detect interpolated values in defaults, but those can also appear
in user-provided values. There are also plenty of options that are
related to each other without using interpolation.

Given that we have to handle the explicit cases anyway, and that
interpolation isn't used all that often, I think it likely makes more
sense to start with the explicit implementation and see how far that
takes us before adding any automation.
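
To make that concrete, here is a minimal sketch of the interpolation
behaviour Markus describes (option names come from his nova example; the
paths and help strings are illustrative):

    from oslo_config import cfg

    opts = [
        cfg.StrOpt('pybasedir', default='/usr/lib/python/nova',
                   help='Directory several other options build on.'),
        cfg.StrOpt('state_path', default='$pybasedir',
                   help='Interpolated from pybasedir when read.'),
    ]

    conf = cfg.ConfigOpts()
    conf.register_opts(opts)
    conf([])  # parse an (empty) command line so values can be read

    print(conf.state_path)   # /usr/lib/python/nova
    conf.set_override('pybasedir', '/var/lib/nova')
    print(conf.state_path)   # /var/lib/nova -- changed implicitly

That implicit second change is exactly the confusion Markus is worried
about.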

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Clint Byrum
Excerpts from Lance Bragstad's message of 2016-05-03 07:42:43 -0700:
> If we were to write a uuid/fernet hybrid provider, it would only be
> expected to support something like stable/liberty to stable/mitaka, right?
> This is something that we could contribute to stackforge, too.
> 

If done the way Adam Young described, with Fernet content as UUIDs,
one could in theory update from any UUID-aware provider, since the
Fernet-emitting nodes would just be writing their Fernet tokens into
the database that the UUID nodes read from, allowing the UUID-only nodes
to validate the new tokens. However, we never support jumping more than
one release at a time, so that is somewhat moot.

Also, stackforge isn't a thing, but I see what you're saying. It could
live out of tree, but let's not abandon all hope that we can collaborate
on something that works for users who desire to not have a window of mass
token invalidation on update.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Clint Byrum
Excerpts from Adam Young's message of 2016-05-03 07:21:52 -0700:
> On 05/03/2016 09:55 AM, Clint Byrum wrote:
> > When the operator has configured a new token format to emit, they should
> > also be able to allow any previously emitted formats to be validated to
> > allow users a smooth transition to the new format. We can then make the
> > default behavior for one release cycle to emit Fernet, and honor both
> > Fernet and UUID.
> >
> > Perhaps ignore the other bit that I put in there about switching formats
> > just because you have fernet keys. Let's say the new pseudo code only
> > happens in validation:
> >
> > try:
> >     self._validate_fernet_token()
> > except NotAFernetToken:
> >     self._validate_uuid_token()
> 
> I was actually thinking of a different migration strategy, exactly the 
> opposite:  for a while, run with the uuid tokens, but store the Fernet 
> body.  After a while, switch from validating the uuid token body to the 
> stored Fernet.  Finally, switch to validating the Fernet token from the 
> request.  That way, we always have only one token provider, and the 
> migration can happen step by step.
> 
> It will not help someone that migrates from Icehouse to Ocata. Then 
> again, the dual plan you laid out above will not either;  at some point, 
> people will have to dump the token table to make major migrations.
> 

Your plan has a nice aspect that it allows validating Fernet tokens on
UUID-configured nodes too, which means operators don't have to be careful
to update all nodes at one time. So I think what you describe above is
an even better plan.

Either way, the point is to avoid an immediate mass token invalidation
event on change of provider.
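
For the archive, here is roughly what that fallback could look like
fleshed out (an illustrative sketch only; NotAFernetToken and the
provider names are made up and do not match keystone's real provider
API):

    class NotAFernetToken(Exception):
        """Raised when a token id cannot be parsed as Fernet."""


    class HybridTokenProvider(object):
        """Transitional provider: try Fernet first, then fall back
        to UUID for tokens issued before the switch."""

        def __init__(self, fernet_provider, uuid_provider):
            self.fernet = fernet_provider
            self.uuid = uuid_provider

        def validate_token(self, token_id):
            try:
                return self.fernet.validate_token(token_id)
            except NotAFernetToken:
                # Pre-switch tokens are still in the token table;
                # honor them until they expire naturally.
                return self.uuid.validate_token(token_id)

Once the last pre-switch token has expired, the except branch (and the
UUID provider with it) can simply be deleted.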

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] lock the distro version in the stable branch

2016-05-03 Thread Steven Dake (stdake)


From: Jeffrey Zhang <zhang.lei@gmail.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Tuesday, May 3, 2016 at 9:12 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Kolla] lock the distro version in the stable branch

Hey guys,

Recently, ubuntu 16.04 came out and it broke kolla when using ubuntu:latest
to build the images.

Even though kolla supports multiple base tags, the build will fail when using
any base tag other than centos:7, ubuntu:14.04, or rhel:7.
And it is also hard to support every kind of image tag.

So I propose that kolla restrict the base tag. The latest tag is mutable and
we should not use it, especially in the stable branch. When using a mutable
image, it is never a *stable* release.

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

Totally agree.  File bug - fix :)

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Newton mid-cycle meetup RSVP

2016-05-03 Thread Matt Riedemann
We're doing the Nova mid-cycle meetup for Newton at the Intel campus in 
Hillsboro, OR on July 19-21.


I have an RSVP form here: http://goo.gl/forms/MxrriHsABq

If you plan on attending, or think you might be able to (or are trying 
to), please fill that out.


I'd like to have RSVPs completed by Tuesday 5/10 so I can get this 
information to the event planners at Intel.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Monty Taylor

On 05/03/2016 11:47 AM, Clint Byrum wrote:

Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:

On 05/03/2016 08:55 AM, Clint Byrum wrote:


Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.


I agree - I see no reason we can't validate previously emitted tokens.
But I don't agree strongly, because re-authing on invalid token is a
thing users do hundreds of times a day. (these aren't oauth API Keys or
anything)



Sure, one should definitely not be expecting everything to always work
without errors. On this we agree for sure. However, when we do decide to
intentionally induce errors for reasons we have not done so before, we
should weigh the cost of avoiding that with the cost of having it
happen. Consider this strawman:

- User gets token, it says "expires_at Now+4 hours"
- User starts a brief set of automation tasks in their system
   that does not use python and has not failed with invalid tokens thus
   far.
- Keystone nodes are all updated at one time (AMAZING cloud ops team)
- User's automation jobs fail at next OpenStack REST call
- User begins debugging, wasting hours of time figuring out that
   their tokens, which they stored and show should still be valid, were
   rejected.


Ah - I guess this is where we're missing each other, which is good and 
helpful.


I would argue that any user that is _storing_ tokens is doing way too 
much work. If they are doing short tasks, they should just treat them as 
ephemeral. If they are doing longer tasks, they need to deal with 
timeouts. So, this:



- User gets token, it says "expires_at Now+4 hours"
- User starts a brief set of automation tasks in their system
   that does not use python and has not failed with invalid tokens thus
   far.

should be:

- User starts a brief set of automation tasks in their system
that does not use python and has not failed with invalid tokens thus
far.

"Get a token" should never be an activity that anyone ever consciously 
performs.



And now they have to refactor their app, because this may happen again,
and they have to make sure that invalid token errors can bubble up to the
layer that has the username/password, or accept rolling back and
retrying the whole thing.

I'm not saying anybody has this system, I'm suggesting we're putting
undue burden on users with an unknown consequence. Falling back to UUID
for a while has a known cost of a little bit of code and checking junk
tokens twice.


Totally. I have no problem with the suggestion that keystone handle 
this. But I also think that users should quite honestly stop thinking 
about tokens at all. Tokens are an implementation detail; if any 
user thinks about them while writing their app, they're setting themselves up 
to be screwed - so we should make sure we're not talking about them in a 
primary way such as to suggest that people focus a lot of energy on them.


(I also frequently see users who are using python libraries even get 
everything horribly wrong and screw themselves because they think they 
need to think about tokens)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] lock the distro version in the stable branch

2016-05-03 Thread Hui Kang
This commit fixes the tag:
https://github.com/openstack/kolla/commit/e2fa75fce6f90de8b2766070bb65d0b80bcad8c8

But I think fixing the tag in the Dockerfile of the base container image is better.

- Hui

On Tue, May 3, 2016 at 12:12 PM, Jeffrey Zhang  wrote:
> Hey guys,
>
> Recently, the ubuntu 16.04 is out and it crashed kolla when using
> ubuntu:lastest to
> build the images.
>
> even though kolla support multi base-tag, the kolla will failed when using
> other
> base-tag except for centos:7, ubuntu:14.04, rhel:7.
> And it is also hard to support all kind of the image tag.
>
> So I support that kolla should restrict the base-tag. the lastest tag is
> mutable and
> we should not use it, especially in the stable branch. When using a mutable
> image,
> it is never a *stable* release.
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [docs] [cinder] [swift] [glance] [keystone] [ironic] [trove] [neutron] [heat] [senlin] [manila] [sahara] RST + YAML files ready for pick up from WADL migration

2016-05-03 Thread Jim Rollenhagen
On Tue, May 03, 2016 at 08:29:16AM -0500, Anne Gentle wrote:
> Hi all,
> This patch contains all the RST + YAML for projects to bring over to their
> repos to begin building API reference information from within your repo.
> Get a copy of this patch, and pick up the files for your service in
> api-site/api-ref/source/:
> 
> https://review.openstack.org/#/c/311596/
> 
> There is required cleanup, and you'll need an index.rst, conf.py, and build
> jobs. All of these can be patterned after the nova repository api-ref
> directory. Read more at
> http://docs.openstack.org/contributor-guide/api-guides.html
> 
> It's overall in good shape thanks to Karen Bradshaw, Auggy Ragwitz, Andreas
> Jaeger, and Sean Dague. Appreciate the help over the finish line during
> Summit week, y'all.
> 
> The api-site/api-ref files are now frozen and we will not accept patches.
> The output at developer.openstack.org/api-ref.html remains frozen until we
> can provide redirects to the newly-sourced-and-built files. Please, make
> this work a priority in this release. Ideally we can get everyone ready by
> Milestone 1 (May 31).
> 
> If you would like to use a Swagger/OpenAPI file, pick that file up from
> developer.openstack.org/draft/swagger/ and create build jobs from your repo
> to publish it on developer.openstack.org.
> 
> Let me know if you have questions.
> Thanks,
> Anne

Thanks for doing this, Anne!

Don't forget to add an api-ref tox target, a la
https://github.com/openstack/nova/blob/1555736e3c1e0a66a99d0291934887250cd2e0cc/tox.ini#L106

This is necessary for the jobs to do anything.
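
For anyone wiring this up from scratch, a minimal target looks roughly
like the following (a sketch modeled on nova's; see the link above for
the canonical version):

    [testenv:api-ref]
    # Build the in-repo RST sources into the published API reference.
    whitelist_externals = rm
    commands =
      rm -rf api-ref/build
      sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html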

// jim

> 
> -- 
> Anne Gentle
> www.justwriteclick.com

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][neutron] VMWare NSX CI - voting on devstack changes long after plugin decomposition

2016-05-03 Thread Sean M. Collins
When the VMWare plugin was decomposed from the main Neutron tree 
(https://review.openstack.org/#/c/160463/) it appears that the CI system was 
left turned on.

http://208.91.1.172/logs/neutron/168438/48/423669-large-ops/logs/q-svc.log.2016-05-03-085740

2016-05-03 09:21:00.577 21706 ERROR neutron plugin_class = self.load_class_for_provider(namespace, plugin_provider)
2016-05-03 09:21:00.577 21706 ERROR neutron   File "/opt/stack/neutron/neutron/manager.py", line 145, in load_class_for_provider
2016-05-03 09:21:00.577 21706 ERROR neutron raise ImportError(_("Plugin '%s' not found.") % plugin_provider)
2016-05-03 09:21:00.577 21706 ERROR neutron ImportError: Plugin 'neutron.plugins.vmware.plugin.NsxPlugin' not found.


I don't know the criteria for when this specific CI job is run; I appear
to be the only one triggering it, for a rather long time:

http://paste.openstack.org/show/495994/

So, it's still voting on DevStack changes but I think we probably should
revoke that.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Austin summit sessions recap

2016-05-03 Thread Andrew Woodward
On Tue, May 3, 2016 at 6:38 AM Emilien Macchi  wrote:

> Here's a summary of Puppet OpenStack sessions [1] during Austin summit.
>
> * General feedback is excellent, things are stable, no major changes
> are coming during the next cycle.
> * We discussed about the work we want to do during Newton cycle [2]:
>
> Ubuntu 16.04 LTS
> Make Puppet OpenStack modules work and be gated on Ubuntu 16.04,
> starting from Newton.
> Keep stable/mitaka and before gated on Ubuntu 14.04 LTS.
>
> Release management with trailing cycle
> The release model changed to:
> http://governance.openstack.org/reference/tags/release_cycle-trailing.html
> We'll start producing milestones within a cycle, continue efforts on
> tarballs and investigate package builds (rpm, etc).
>

We spoke some about marking point releases more often. On stable it sounded
like everyone was in favor of this being automated. I didn't catch how we
wanted to handle dev milestones.


>
> Move documentation out from Wiki
> See [3].
>
> puppet-pacemaker unification
> Mirantis & Red Hat to continue collaboration on merging efforts on
> puppet-pacemaker module: https://review.openstack.org/#/c/296440/)
> So both Fuel & TripleO will use the same Puppet module to deploy Pacemaker.
>
> CI stabilization
> We're supporting 18-month-old releases, so we will continue all
> efforts to stabilize our CI and make it robust so it does not break
> every morning.


Does this make it the last 3 releases + dev = 4 or the last 2 + dev? Since
-3 technically falls off when the next dev cycle starts

>
>
> Containers
> Most container deployments have common bits (user/group
> management, config files management, etc).
> We decided that we would add the common bits in our modules, so they
> can be used by people deploying OpenStack in containers. See [4].
>
> [1] https://etherpad.openstack.org/p/newton-design-puppet
> [2] https://etherpad.openstack.org/p/newton-puppet-project-status
> [3] https://etherpad.openstack.org/p/newton-puppet-docs
> [4] https://etherpad.openstack.org/p/newton-puppet-multinode-containers
>
>
> As a retrospective, we've noticed that we had a quiet agenda &
> sessions this time, without critical things. It is a sign for our
> group that things are now very stable and we did an excellent job to be at
> this point.
> Thanks for everyone who attended our sessions, feel free to add more
> things that I might have missed, or any questions.
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
--
Andrew Woodward
Mirantis
Fuel Community Ambassador
Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] weekly meeting canceled on 5/3

2016-05-03 Thread Steve Martinelli
sorry for the late notice - there are no items on the agenda and I think
most are still decompressing from the summit
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron client and plan to transition to OpenStack client

2016-05-03 Thread Richard Theis
Steve Martinelli  wrote on 04/22/2016 05:49:32 PM:

> From: Steve Martinelli 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 04/22/2016 05:52 PM
> Subject: Re: [openstack-dev] [Neutron] Neutron client and plan to 
> transition to OpenStack client
> 
> thanks to richard, tangchen, reepid and others for picking this up 
> and running with it; and thanks to armando for embracing OSC and 
> putting it in neutron's plan.
> 
> On Fri, Apr 22, 2016 at 6:33 PM, reedip banerjee  wrote:
> Hi Richard, 
> Thanks for the information :)
> 
> Was waiting for it.
> 
> On Sat, Apr 23, 2016 at 3:27 AM, Armando M.  wrote:
> 
> On 22 April 2016 at 13:58, Richard Theis  wrote:
> FYI: I've pushed a series of WIP patch sets [1], [2] and [3] to 
> enable python-neutronclient OSC plugins. I've used "openstack 
> network agent list" as the initial OSC plugin command example.  
> Hopefully these will help during the discussions at the summit. 
> 
> [1] https://review.openstack.org/#/c/309515/ 
> [2] https://review.openstack.org/#/c/309530/ 
> [3] https://review.openstack.org/#/c/309587/ 
> 
> Super! Thanks for your help Richard!
> 
> Cheers,
> Armando
>  
> 
> "Armando M."  wrote on 04/22/2016 12:19:45 PM:
> 
> > From: "Armando M."  
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> >  
> > Date: 04/22/2016 12:22 PM 
> > Subject: [openstack-dev] [Neutron] Neutron client and plan to 
> > transition to OpenStack client 
> > 
> > Hi Neutrinos, 
> > 
> > During the Mitaka release the team sat together to figure out a plan
> > to embrace the OpenStack client and supplant the neutron CLI tool. 
> > 
> > Please note that this does not mean we will get rid of the 
> > openstack-neutronclient repo. In fact we still keep python client 
> > bindings and keep the development for features that cannot easily go
> > in the OSC client (like the high level services). 
> > 
> > We did put together a transition plan in place [1], but we're 
> > revising it slightly and we'll continue the discussion at the summit. 
> > 
> > If you are interested in this topic, are willing to help with the 
> > transition or have patches currently targeting the client and are 
> > unclear on what to do, please stay tuned. We'll report back after 
> the summit.

Hi,

Is there an update available from the summit session? I didn't see a
resolution documented in [3].

Thanks,
Richard

[3] https://etherpad.openstack.org/p/newton-neutron-future-neutron-client

> > 
> > Armando 
> > 
> > [1] http://docs.openstack.org/developer/python-neutronclient/devref/transition_to_osc.html
> > [2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9096
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Morgan Fainberg
On Tue, May 3, 2016 at 10:28 AM, Monty Taylor  wrote:

> On 05/03/2016 11:47 AM, Clint Byrum wrote:
>
>> Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:
>>
>>> On 05/03/2016 08:55 AM, Clint Byrum wrote:
>>>

 Perhaps we have different perspectives. How is accepting what we
 previously emitted and told the user would be valid sneaky or wrong?
 Sounds like common sense due diligence to me.

>>>
>>> I agree - I see no reason we can't validate previously emitted tokens.
>>> But I don't agree strongly, because re-authing on invalid token is a
>>> thing users do hundreds of times a day. (these aren't oauth API Keys or
>>> anything)
>>>
>>>
>> Sure, one should definitely not be expecting everything to always work
>> without errors. On this we agree for sure. However, when we do decide to
>> intentionally induce errors for reasons we have not done so before, we
>> should weigh the cost of avoiding that with the cost of having it
>> happen. Consider this strawman:
>>
>> - User gets token, it says "expires_at Now+4 hours"
>> - User starts a brief set of automation tasks in their system
>>that does not use python and has not failed with invalid tokens thus
>>far.
>> - Keystone nodes are all updated at one time (AMAZING cloud ops team)
>> - User's automation jobs fail at next OpenStack REST call
>> - User begins debugging, wasting hours of time figuring out that
>>their tokens, which they stored and show should still be valid, were
>>rejected.
>>
>
> Ah - I guess this is where we're missing each other, which is good and
> helpful.
>
> I would argue that any user that is _storing_ tokens is doing way too much
> work. If they are doing short tasks, they should just treat them as
> ephemeral. If they are doing longer tasks, they need to deal with timeouts.
> SO, this:
>
>
> - User gets token, it says "expires_at Now+4 hours"
> - User starts a brief set of automation tasks in their system
>that does not use python and has not failed with invalid tokens thus
>far.
>
> should be:
>
> - User starts a brief set of automation tasks in their system
> that does not use python and has not failed with invalid tokens thus
> far.
>
> "Get a token" should never be an activity that anyone ever consciously
> performs.
>
>
This is my view. Never, ever, ever assume your token is good until
expiration. Assume the token might be broken at any request and know how to
re-auth.


> And now they have to refactor their app, because this may happen again,
>> and they have to make sure that invalid token errors can bubble up to the
>> layer that has the username/password, or accept rolling back and
>> retrying the whole thing.
>>
>> I'm not saying anybody has this system, I'm suggesting we're putting
>> undue burden on users with an unknown consequence. Falling back to UUID
>> for a while has a known cost of a little bit of code and checking junk
>> tokens twice.
>>
>
Please do not advocate "falling back" to UUID. I am actually against making
fernet the default (very, very strongly), if we have to have this
"fallback" code. It is the wrong kind of approach, we already have serious
issues with complex code paths that produce subtly different results. If
the options are:

1) Make Fernet Default and have "fallback" code

or

2) Leave UUID default and highly recommend fernet (plus gate on fernet
primarily, default in devstack)

I will jump on my soapbox and be very loudly in favor of the 2nd option. If
we communicate this is a change that will happen (hey, maybe throw an
error/make the config option "none" so it has to be explicit) in Newton,
and then move to a Fernet default in O - I'd be ok with that.


>
> Totally. I have no problem with the suggestion that keystone handle this.
> But I also think that users should quite honestly stop thinking about
> tokens at all. Tokens are an implementation detail that if any user thinks
> about while writing their app they're setting themselves up to be screwed -
> so we should make sure we're not talking about them in a primary way such
> as to suggest that people focus a lot of energy on them.
>
> (I also frequently see users who are using python libraries even get
> everything horribly wrong and screw themselves because they think they need
> to think about tokens)
>

Better communication that tokens are ephemeral and should not be assumed to
always work (even until their expiry) should be the messaging we use. It's
simple: plan to reauth as needed and handle failures.
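
In client terms that advice boils down to something like the sketch
below (get_token stands in for whatever re-auth routine holds the
credentials; it is a hypothetical helper, not a real library API):

    import requests

    def call_with_reauth(method, url, get_token, token=None, **kwargs):
        # Treat the token as ephemeral: on a 401, re-auth once, retry.
        token = token or get_token()
        for attempt in (1, 2):
            resp = requests.request(
                method, url, headers={'X-Auth-Token': token}, **kwargs)
            if resp.status_code != 401 or attempt == 2:
                return resp
            token = get_token()  # token rejected: fetch a fresh one

The invalid-token error stays contained in one helper instead of
bubbling up to whatever layer happens to hold the username/password.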

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-03 Thread Sean McGinnis
Hey everyone,

I would like to nominate Michał Dulko to the Cinder core team. Michał's
contributions with both code reviews [0] and code contributions [1] have
been significant for some time now.

His persistence with versioned objects has been instrumental in getting
support in the Mitaka release for rolling upgrades.

If there are no objections from current cores by next week, I will add
Michał to the core group.

[0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
[1]
https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged

Thanks!

Sean McGinnis (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-03 Thread Sheel Rana Insaan
Even though I am not a core member, I would like to vote for Michal, as
he truly deserves it...
Huge acceptance from my side..
+1

Best Regards,
Sheel Rana

On Tue, May 3, 2016 at 11:46 PM, Sean McGinnis 
wrote:

> Hey everyone,
>
> I would like to nominate Michał Dulko to the Cinder core team. Michał's
> contributions with both code reviews [0] and code contributions [1] have
> been significant for some time now.
>
> His persistence with versioned objects has been instrumental in getting
> support in the Mitaka release for rolling upgrades.
>
> If there are no objections from current cores by next week, I will add
> Michał to the core group.
>
> [0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
> [1]
>
> https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged
>
> Thanks!
>
> Sean McGinnis (smcginnis)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Austin recap

2016-05-03 Thread Tim Hinrichs
Hi all,

Here’s a quick summary of the Congress activities in Austin.  Everyone
should feel free to chime in with corrections and things I missed.

1. Talks

Masahito gave a talk on applying Congress for fault recovery in the context
of NFV.

https://www.openstack.org/summit/austin-2016/summit-schedule/events/7199

Fabio gave a talk on applying Congress + Monasca to enforce
application-level SLAs.

https://www.openstack.org/summit/austin-2016/summit-schedule/events/7363

2. Integrations

We had discussions, both within the Congress Integrations fishbowl session,
and outside of that session on potential integrations with other OpenStack
projects.  Here's a quick overview.

- Monasca (fabiog). The proposed integration: Monasca pushes data to
Congress using the push driver to let Congress know about the alarms
Monasca configured.  Can use multiple alarms using a single table.
Eventually we talked about having Congress analyze the policy to configure
the alarms that Monasca uses, completing the loop.

- Watcher (acabot). Watcher aims to optimize the placement of VMs by
pulling data from Ceilometer/Monasca and Nova (including
affinity/anti-affinity info), computing necessary migrations for whichever
strategy is configured, and migrates the VMs.  Want to use Congress as a
source of policies that they take into account when computing the necessary
migrations.

- Nova scheduler.  There’s interest in policy-enabling the Nova scheduler,
and then integrating that with Congress in the context of delegation, both
to give Congress the ability to pull in the scheduling policy and to push
the scheduling policy.

- Mistral.  The use case for this integration is to help people create an
HA solution for VMs.  So have Congress monitor VMs, identify when they have
failed, and kick off a Mistral workflow to resurrect them.

- Vitrage.  Vitrage does root-cause analysis.  It provides a graph-based
model for the structure of the datacenter (switches attached to
hypervisors, servers attached to hypervisors, etc.) and a templating
language for defining how to create new alarms from existing alarms.  The
action item that we left is that the Vitrage team will initiate a mailing
list thread where we discuss which Vitrage data might be valuable for
Congress policies.

3. Working sessions

- The new distributed architecture is nearing completion.  There seems to
be 1 blocker for having the basic functionality ready to test: at boot,
Congress doesn’t properly spin up datasources that have already been
configured in the database.  As an experiment to see how close we were to
completion, we started up the Congress server with just the API and policy
engine and saw the basics actually working!  When we added the datasources,
we found a bug where the API was assuming the datasources could be
referenced by UUID, when in fact they can only be referenced by Name on the
message-bus.   So while there’s still quite a bit to do, we’re getting
close to having all the basics working.

- We made progress on the high-availability and high-throughput design.
This is still very much open to design and discussion, so continuing the
design on the mailing list would be great.  Here are the highlights.

   o  Policy engine: split into (i) active-active for queries to deal with
high-throughput (ii) active-passive for action-execution (requiring
leader-election, etc.).  Policy CRUD modifies DB; undecided whether API
also informs all policy-engines, or whether they all sync from the DB.

   o  Pull datasources: no obvious need for replication, since they restart
really fast and will just re-pull the latest data anyhow

   o  Push datasources: Need HA for ensuring the pusher can always push,
e.g. the pusher drops the message onto oslo-messaging.  Still up for debate
is whether we also need HA for storing the data since there is no way to
ask for it after a restart; one suggestion is that every datasource must
allow us to ask for the state.  HT does not require replication, since
syncing the state between several instances would be required and would be
less performant than a single instance.

   o  API (didn’t really discuss this, so here’s my take).  No obvious need
for replication for HT, since if the API is a bottleneck, the backend will
be an even bigger bottleneck.  For HA, could do active-active since the API
is just a front-end to the message bus + database, though we would need to
look into locking now that there is no GIL.

It was great seeing everyone in Austin!
Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] [fuel] [javascript] Supporting ES6

2016-05-03 Thread Michael Krotscheck
TL;DR: Should we support ECMAScript 6?

Discussions I've had on the topic:

Vancouver:
- Browser support is not yet broad enough, so no, we shouldn't support ES6.
- TypeScript is too closely tied to Corporations (tm), not really an open
standard. Do not support TypeScript.

Austin:
- Fuel is using ES6, and is now an official project (?).
- We have non-browser projects that could use it, assuming that we have a
recent version of Node.js that we can test on.
- We now have Node4 LTS on our infra build nodes, which supports _most_
ECMAScript 6 features.
- ECMAScript continues to be a moving target (and will likely always be a
moving target).
- Xenial contains Node 4 LTS. Ubuntu does _not_ have an upgrade exception
for node (akin to Firefox).
- Node 6 LTS was released during the summit.

Body of work required:
- Discuss and enable linting rules for ES6 in eslint-config-openstack.
- Smoke-test fuel's unit and functional testing for ES6 components.

Personal Assessment:

Frankly, we already have ES6 in our infra, so that train has left the
building. What we need to do is make sure it has the same level of support
as other languages, which, I believe, isn't going to be too difficult. I
also have some commitments of mutual assistance from Vitaly (Fuel) to keep
things sane and keep communication open. As for the upcoming Node4 vs.
Node6 question, I recommend that we _not_ upgrade to Node 6 LTS in the
Newton cycle, but strongly consider it for the Ocata cycle.

Am I missing anything? Does anyone have opinions?

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] stadium evolution - report

2016-05-03 Thread Armando M.
Hi Neutrinos,

For those who could not attend or be in Austin for session [2], we've had
some recent discussions [1] and past ones in [3]. I am trying to reach
closure on this topic, and I followed up with a spec proposal in [4]. I am
open to suggestions on how to improve the proposal and how to achieve
consensus.

I would strongly encourage you to take the opportunity to review and
provide feedback on [4].

Many thanks,
Armando

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/093561.html
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9097
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-December/080865.html
[4] https://review.openstack.org/#/c/312199/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Michael Krotscheck
On Tue, May 3, 2016 at 9:03 AM John Dickinson  wrote:

>
> As a starting point, what would you like to see addressed in the document
> I'm drafting?
>

I'm going through this project with JavaScript right now. Here are some of
the things I've had to address:

- Common language formatting rules (ensure that a pep8-like thing exists).
- Mirroring dependencies?
- Building Documentation
- Common tool choices for testing, coverage, etc.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-03 Thread Truman, Travis
Major has made an incredible number of contributions of code and reviews to the 
OpenStack-Ansible community. Given his role as the primary author of the 
openstack-ansible-security project, I can think of no better addition to the 
core reviewer team.

Travis Truman


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-03 Thread Matthew Thode
On 05/03/2016 01:47 PM, Truman, Travis wrote:
> Major has made an incredible number of contributions of code and reviews
> to the OpenStack-Ansible community. Given his role as the primary author
> of the openstack-ansible-security project, I can think of no better
> addition to the core reviewer team.
> 
> Travis Truman
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
+1 because it still means something to me

-- 
-- Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

