[openstack-dev] [ceilometer] When do we import aodh?

2015-06-16 Thread Julien Danjou
Hi there,

The alarm code split blueprint¹ has been approved, and I finished the
split of the code base. It's available online at:

  https://github.com/jd/aodh

tox -e pep8,py27,docs passes.

To me the next steps are:
1. someone reviews what I've done in the repository
2. import the code into openstack/aodh
3. enable gate jobs (unit tests at least)
4. enable and fix devstack gating (probably writing a devstack plugin
   for aodh)

WDYT?

¹  
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/split-ceilometer-alarming.html

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [neutron] Re: [Openstack-operators] How do your end users use networking?

2015-06-16 Thread Jay Pipes
Adding -dev because of the reference to the Neutron "Get me a network"
spec. Also adding [nova] and [neutron] subject markers.


Comments inline, Kris.

On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:

During the OpenStack summit this week I got to talk to a number of other
operators of large OpenStack deployments about how they do networking.
I was happy, surprised even, to find that a number of us are using a
similar networking strategy: we have similar challenges around networking
and are solving them in our own, but very similar, ways.
It is always nice to see that other people are doing the same things as
you, or seeing the same issues, and that you are not crazy.
So in that vein, I wanted to reach out to the rest of the Ops community
and ask one pretty simple question.

Would it be accurate to say that most of your end users want almost
nothing to do with the network?


That was my experience at ATT, yes. The vast majority of end users 
could not care less about networking, as long as the connectivity was 
reliable, performed well, and they could connect to the Internet (and 
have others connect from the Internet to their VMs) when needed.



In my experience, what the majority of them (both internal and external)
want is to consume a compute resource from OpenStack, a property of
which is that the resource has an IP address. They care, at most, about
which network they are on, where a "network" is usually an arbitrary
definition wrapped around a set of real networks, constrained to a
location, to which the company has attached some sort of policy. For
example: I want to be in the production network vs. the xyz lab
network, vs. the backup network, vs. the corp network. I would say that
for GoDaddy, 99% of our use cases would be defined as: I want a compute
resource in the production network zone, or I want a compute resource in
this other network zone. The end user only cares that the IP the VM
receives works in that zone; beyond that they don't care about any other
property of that IP. They do not care what subnet it is in, what VLAN
it is on, what switch it is attached to, what router it's attached to, or
how data flows in/out of that network. It just needs to work. We have
also found that by giving the users a floating IP address that can be
moved between VMs (but is still constrained within a network zone) we
can satisfy almost all of our users' asks. Typically, the internal need
for a floating IP is when a compute resource needs to talk to another
protected internal or external resource, where it is painful (read:
slow) to have the ACLs on that protected resource updated. The external
need is from our hosting customers who have a domain name (or many) tied
to an IP address, and changing IPs/DNS is particularly painful.


This is precisely my experience as well.


Since the vast majority of our end users don't care about any of the
technical network stuff, we spend a large amount of time/effort in
abstracting or hiding the technical stuff from the users view. Which has
lead to a number of patches that we carry on both nova and neutron (and
are available on our public github).


You may be interested to learn about the "Get Me a Network"
specification that was discussed in a session at the summit. I had
requested some time at the summit to discuss this exact use case, where
users of Nova actually don't care much at all about network constructs
and just want Nova to exhibit behaviour similar to nova-network's: the
admin sets up a bunch of unassigned networks, and the first time a
tenant launches a VM, she just gets an available network and everything
is done for her.


The spec is here:

https://review.openstack.org/#/c/184857/

At the same time we also have a *very* small subset of (internal) users
who are at the exact opposite end of the scale. They care very much
about the network details, possibly all the way down to wanting to boot
a VM on a specific HV, with a specific IP address, on a specific network
segment. The difference, however, is that these users are completely
aware of the topology of the network, know which HVs map to which
network segments, and are essentially trying to make a very specific
scheduling ask.


Agreed, at Mirantis (and occasionally at ATT), we do get some customers 
(mostly telcos, of course) that would like total control over all things 
networking.


Nothing wrong with this, of course. But the point of the above spec is 
to allow normal users to not have to think or know about all the 
advanced networking stuff if they don't need it. The Neutron API should 
be able to handle both sets of users equally well.


Best,
-jay



Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Jay Pipes

On 06/16/2015 08:00 AM, Dmitry Tantsur wrote:


On June 16, 2015 at 13:52, Jay Pipes jaypi...@gmail.com wrote:
 
  On 06/16/2015 04:36 AM, Alex Xu wrote:
 
  So if our min_version is 2.1 and the max_version is 2.50, that means
  alternative implementations need to implement all 50 versions of the
  API... that sounds painful...
 
 
  Yes, it's painful, but it's no different from someone who is following
the Amazon EC2 API, which cuts releases at a regular (sometimes every
2-3 weeks) clip.
 
  In Amazon-land, the releases are date-based, instead of
microversion/incrementing version-based, but the idea is essentially the
same.
 
  There is GREAT value to having an API mean ONE thing and ONE thing
only. It means that developers can code against something that isn't
like quicksand -- constantly changing meanings.

Being one such developer, I only see this value for breaking changes.


Sorry, Dmitry, I'm not quite following you. Could you elaborate on what 
you mean by the above?
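For what it's worth, the whole naming debate in this thread comes down to which string appears in the request header a client sends to pin a microversion. A minimal sketch of what such a client-side header builder might look like (the helper name is illustrative, and neither header convention is asserted as the settled one):

```python
# Illustrative only: build the headers a client might send to pin an
# API microversion. "service" is whatever string wins the naming
# debate (project name like "Nova" vs. API name like "Compute").

def microversion_headers(service, version):
    """Return request headers pinning an API microversion."""
    return {
        "Accept": "application/json",
        "X-OpenStack-%s-API-Version" % service: version,
    }

# Under the project-name convention:
#   {'Accept': 'application/json', 'X-OpenStack-Nova-API-Version': '2.1'}
# Under the API-name convention:
#   {'Accept': 'application/json', 'X-OpenStack-Compute-API-Version': '2.1'}
headers = microversion_headers("Compute", "2.1")
```

Either way, every request carries exactly one version string; the only question is the header name.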


Thanks,
-jay



Re: [openstack-dev] [fuel] Fuel API settings reference

2015-06-16 Thread Oleg Gelbukh
Andrew,

I've also noticed that incompatible changes are being introduced in the JSON
schemas for different objects in almost every release. I hope that an explicit
reference that lists and explains all parameters will discourage such
modifications, or at least increase their visibility and make it possible
to understand the justifications for them.

--
Best regards,
Oleg Gelbukh

On Mon, Jun 15, 2015 at 4:21 PM, Andrew Woodward awoodw...@mirantis.com
wrote:

 I think there is some desire to see more documentation around here as
 there are some odd interactions with parts of the data payload, and perhaps
 documenting these may improve some of them.

 I think the gaps in order of most used are:
 * node object create / update
 * environment networks (the fact that metadata can't be updated kills me)
 * environment settings (the separate API for hidden and non-hidden settings kills me)
 * release update
 * role add/update

 After these are updated I think we can move on to common but less used
 * node interface assignment
 * node disk assignment



 On Mon, Jun 15, 2015 at 8:09 AM Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Good day, fellow fuelers

 The Fuel API is a powerful tool that allows for very fine tuning of deployment
 settings and parameters, and we all know that the UI exposes only a fraction of
 the full range of attributes a client can pass to the Fuel installer.

 However, there is very little documentation explaining which settings
 are accepted by Fuel objects, what their meanings are, and what their
 syntax is. There is a main reference document for the API [1], but it gives
 almost no insight into the payload of parameters that each entity accepts.
 What they are and what they are for seems to be mostly scattered tribal
 knowledge.

 I would like to understand whether there is a need for such a document among
 developers and deployers who consume the Fuel API. Or is there perhaps
 already such a document, or an effort to create one going on?

 --
 Best regards,
 Oleg Gelbukh

 --
 --
 Andrew Woodward
 Mirantis
 Fuel Community Ambassador
 Ceph Community





Re: [openstack-dev] [puppet] drop monolithic plugins in neutron module

2015-06-16 Thread Emilien Macchi
The patch has been merged: https://review.openstack.org/#/c/190395

These plugins are now deleted from puppet-neutron:
* monolithic OVS (removed in favor of ML2 with the openvswitch mechanism
driver)
* monolithic Linux Bridge (removed in favor of ML2 with the linuxbridge
mechanism driver)

On 06/12/2015 06:56 AM, Clayton O'Neill wrote:
 Makes sense to me to drop them.
 
 On Wed, Jun 10, 2015 at 7:20 PM, Emilien Macchi emil...@redhat.com
 mailto:emil...@redhat.com wrote:
 
 Hi,
 
 Monolithic plugins have been dropped from the Neutron tree since Juno; I think
 it's time to drop the code from puppet-neutron (I guess everyone is
 using ML2, at least I hope so).
 
 If anyone is running master against OpenStack Icehouse, please raise
 your voice here and please vote in this patch:
 https://review.openstack.org/#/c/190395
 
 Thanks,
 --
 Emilien Macchi
 
 
 
 
 
 
 

-- 
Emilien Macchi





Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-16 Thread Jim Rollenhagen
On Tue, Jun 16, 2015 at 08:56:37AM +0200, Dmitry Tantsur wrote:
 On 06/04/2015 08:58 AM, Xu, Hejie wrote:
 Hi, guys,
 I'm working on adding Microversions into the API-WG's guideline, which
 makes sure we have consistent Microversion behavior in the API for users.
 Nova and Ironic already have Microversion implementations, and as I
 know, Magnum _https://review.openstack.org/#/c/184975/_ is going to
 implement Microversions also.
 I hope all the projects which support (or plan to support) Microversions
 can join the review of the guideline.
 The Microversion specification (mostly copied from nova-specs):
 _https://review.openstack.org/#/c/187112_
 And another guideline for when we should bump the Microversion:
 _https://review.openstack.org/#/c/187896/_
 As I know, there is already a small difference between Nova's and
 Ironic's implementations. Ironic returns the min/max versions in HTTP
 headers when the requested version isn't supported by the server. There
 is no such thing in Nova, but that is something we need for version
 negotiation in Nova also.
 Sean has pointed out that we should use the response body instead of HTTP
 headers; the body can include an error message. I really hope the ironic
 team can take a look at whether you have a compelling reason for using
 HTTP headers.
 And if we decide to return the body instead of HTTP headers, we probably
 need to think about backward compatibility as well, because the
 Microversion mechanism itself isn't versioned.
 So I think we should keep those headers for a while. Does that make sense?
 I hope we have a good guideline for Microversions, because we can only
 change the Microversion mechanism itself in a backward-compatible way.
 Thanks
 Alex Xu
 
 Hi all!
 
 I'd like to try to put in some feedback based on living with microversions
 in the Kilo release of Ironic.

And here's my take, based on my experiences. Keep in mind I'm a core
reviewer, a developer, and an operator of Ironic.

From an ops perspective, our team has built our fair share of tooling to
help us run Ironic. Some of it uses the REST API via python or node.js,
and of course we all use the CLI client often.

We also continuously deploy Ironic, for full transparency. My experience
is not with how this works every 6 months, but in the day-to-day.

 
 First of all, after talking to folks off-list, I realized that we all, and
 the spec itself, confuse 3 aspects of microversion usage:
 
 1. protecting from breaking changes.
 This is clearly a big win from the user's point of view, and it allowed us
 to conduct a painful change: renaming an important node state in our state
 machine. It will allow an even bigger change this cycle: changing the
 default state.
 

+1. Good stuff. My tooling doesn't break when I upgrade. Yay.

 2. API discoverability.
 While I believe there may be a better implementation of this idea, I
 think I get it now. People want services to report the API versions they
 support. People want to be able to request a specific version, and fail
 early if it is not present. Also +1 from me.
 

I don't tend to personally do this. I usually am aware of what version
of Ironic I'm running against. However I see how this could be useful
for other folks.

I do, however, use the versions to say, "Oh, I can now request 1.5, which
has logical names! That's useful; let's set those to the names in our
CMDB." Now my tooling that interacts with the CMDB and Ironic can look
at the version and decide to use node.name instead of the old hack we
used to use.

 3. hiding new features from older clients
 This is not directly stated by the spec, but many people imply it, and Nova
 and Ironic did it in Kilo. I want us to be clear: it is not the same as #2.
 You can report versions, but still allow new features to be used.
 

This is still totally useful. If you know what version you are running
against, you know exactly what features are available.

I think the disconnect here is that we don't expect users (whether those
are people or computers) to explicitly request a version. We need to
message better that if you are using Ironic or building a tool against
Ironic's API, you should be pinning the version. We also need to take
this comment block[0] and put it in our docs, so users know what each
version does.

Knowing that I get feature X when I upgrade to version Y is useful.
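The "pin the version" advice above can be sketched as a small client-side check against the min/max versions the server reports (as Ironic does in its headers). All names here are illustrative, not the actual ironicclient API:

```python
# Hedged sketch of client-side microversion pinning. A tool pins one
# version; before doing anything else it checks that version against
# the [min, max] range the deployed server advertises, failing early
# if the pin cannot be honored.

def check_pinned_version(pinned, server_min, server_max):
    """Return the pinned version, or raise if the server can't serve it."""
    def parse(v):
        # "1.10" must sort after "1.9", so compare as integer tuples,
        # not as strings.
        major, minor = v.split(".")
        return int(major), int(minor)

    if not parse(server_min) <= parse(pinned) <= parse(server_max):
        raise RuntimeError(
            "Pinned version %s outside server range %s..%s"
            % (pinned, server_min, server_max))
    return pinned

# Tooling pins 1.5 (say, the version that introduced logical node
# names); a server supporting 1.1 through 1.9 honors the pin.
check_pinned_version("1.5", "1.1", "1.9")
```

A tool that pins this way keeps working across server upgrades (point 1 above) while still discovering, via the advertised max, when a new feature like node.name becomes available (point 2).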

 It is this particular thing that gets a -2 from me, after I've seen how it
 worked in practice, and here's why.
 
 First of all, I don't believe anyone needs it. Seriously, I can't imagine a
 user asking "please prevent me from using non-breaking changes". And the
 attempt to implement it was IMO a big failure, for the following reasons:

 a) It's hard to do. Even we, the core team, got confused, and for non-core
 people it took several iterations to get right. It's a big load on both
 developers and reviewers.
 

I do agree with this. It's been painful. However, I think we're mostly
past that pain at this point. Does this patch[1] look like developer
pain?

 b) It's incomplete (at least for Ironic). We have several API-exposed things
 that are just impossible 

Re: [openstack-dev] [Openstack-operators] Device {UUID}c not defined on plugin

2015-06-16 Thread Alvise Dorigo

Hi,
I forgot to attach some relevant config files:

/etc/neutron/plugins/ml2/ml2_conf.ini :

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group = True
[ovs]
local_ip = 192.168.61.106
tunnel_type = gre
enable_tunneling = True

/etc/neutron/neutron.conf :

[DEFAULT]
nova_ca_certificates_file = /etc/grid-security/certificates/INFN-CA-2006.pem
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_hosts = 192.168.60.105:5672,192.168.60.106:5672
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = https://cloud-areapd.pd.infn.it:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 1b2caeedb3e2497b935723dc6e142ec9
nova_admin_password = X
nova_admin_auth_url = https://cloud-areapd.pd.infn.it:35357/v2.0
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
verbose = True
debug = False
rabbit_ha_queues = True
dhcp_agents_per_network = 2
[quotas]
[agent]
[keystone_authtoken]
auth_uri = https://cloud-areapd.pd.infn.it:35357/v2.0
auth_url = https://cloud-areapd.pd.infn.it:35357/v2.0
auth_host = cloud-areapd.pd.infn.it
auth_protocol = https
auth_port = 35357
admin_tenant_name = services
admin_user = neutron
admin_password = X
cafile = /etc/grid-security/certificates/INFN-CA-2006.pem
[database]
connection = mysql://neutron_prod:XX@192.168.60.10/neutron_prod
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default



And here

http://pastebin.com/P977162t

the output of ovs-vsctl show.
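Given output like that, a throwaway script can flag ports stuck on the dead VLAN tag 4095 described in this thread. This is only a sketch: the sample output below is invented, and real output would come from running `ovs-vsctl show` via subprocess.

```python
# Hedged sketch: scan `ovs-vsctl show` output for ports whose VLAN tag
# is 4095 (the tag the OVS agent assigns to "dead" ports). The sample
# text is made up for illustration.
import re

SAMPLE = """\
    Port "qvo1234"
        tag: 4095
        Interface "qvo1234"
    Port "qvo5678"
        tag: 3
        Interface "qvo5678"
"""

def dead_ports(ovs_output):
    """Return port names that carry tag 4095 in ovs-vsctl show output."""
    dead = []
    port = None
    for line in ovs_output.splitlines():
        m = re.match(r'\s*Port "?([^"]+)"?', line)
        if m:
            port = m.group(1)
        elif re.match(r"\s*tag: 4095\b", line) and port:
            dead.append(port)
    return dead

print(dead_ports(SAMPLE))  # ['qvo1234']
```

Running something like this against the pastebin output would at least confirm which ports the agent has wired to the dead VLAN before digging into the agent logs.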

Alvise

On 16/06/2015 15:30, Alvise Dorigo wrote:

Hi
after a migration from Havana to IceHouse (with controller and network 
services/agents on the same physical node, and using OVS/GRE) we 
started facing some network-related problems (the internal tag of the 
element shown by "ovs-vsctl show" was set to 4095, which is wrong 
AFAIK). At the beginning the problems could be solved by just 
restarting the openvswitch-related agents (and openvswitch itself), or 
changing the tag by hand; but now the networking has definitely stopped 
working.


When we add a new router interface connected to a tenant LAN, it is 
created in the DOWN state. Then in openvswitch-agent.log we see this 
error message:


2015-06-16 15:07:43.275 40708 WARNING 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device 
ba295e45-9a73-48c1-8864-a59edd5855dc not defined on plugin


and nothing more.

Any suggestion ?

thanks,

Alvise


___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





Re: [openstack-dev] [all][release] summit session summary: Release Versioning for Server Applications

2015-06-16 Thread Doug Hellmann
Excerpts from Timur Nurlygayanov's message of 2015-06-16 12:49:02 +0300:
 Hi Doug,
 
 I suggest using some version for the neutron-*aas plugins, probably 1.0.0,
 just to have one pattern for all components. If we don't use numbers for
 the first releases (or release candidates) it will be hard to understand
 what the version is. What do you think?

We actually *don't* want everything to look the same, because we
don't expect it to stay the same over time and so we don't want to
introduce confusion twice (once now, when we change the versions,
and again later, when the versions diverge).

So, for the neutron plugins, we want versions that reflect their
history. If that happens to mean they are all 1.0, that's OK, but
we don't want to choose the same versions just because we want them
to be the same.

Doug

 
 Thank you!
 
 On Tue, Jun 16, 2015 at 12:31 PM, Sergey Lukjanov slukja...@mirantis.com
 wrote:
 
  I have a question regarding the proposed release versions: are we starting
  to count releases from zero? And 2015.1 (Kilo) is missing for all projects
  in the etherpad. So, if we're starting from 0.0.0 then the proposed
  versions are correct, but if we want to start from 1.0.0 (IMO it's better),
  we should increment all the proposed versions.

  Re the Sahara version, I think it would be more logical to count the 0.X
  releases as a single version 0.X and start from 1.0.0 for Icehouse, 2.0.0
  for Juno and 3.0.0 for Kilo; so for Sahara I think it would be better to
  start versioning from 4.0.0.
 
  Thanks.
 
  On Tue, Jun 16, 2015 at 12:04 PM, Ihar Hrachyshka ihrac...@redhat.com
  wrote:
 
  On 06/16/2015 09:44 AM, Thierry Carrez wrote:
   Doug Hellmann wrote:
   [...] I still need to chat with Kyle about some of the neutron
   spin-out projects, since their repositories have the old neutron
   tags but I don't think it's appropriate to use the same version
   number as neutron for newer projects. [...] neutron 8.0.0
   neutron-fwaas neutron-lbaas neutron-vpnaas
  
   In Kilo (where they were introduced) those were released as a
   single deliverable made of 4 source code tarballs. They would do
   release candidates together, release together. My understanding is
   that they are not versioned separately, they are supposed to be
   used at the same version together.
  
   If that assumption is correct, I think we should still consider it
   a single deliverable with a single version scheme, and have the
   neutron-*aas all starting at 8.0.0.
  
 
  Yes, please don't assume they are independent, at least for now while
  *aas don't have their own API endpoints.
 
  Ihar
 
 
 
 
 
  --
  Sincerely yours,
  Sergey Lukjanov
  Sahara Technical Lead
  (OpenStack Data Processing)
  Principal Software Engineer
  Mirantis Inc.
 
 
 
 



[openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-16 Thread Sean Dague
FYI,

One of the things that came out of the summit for Devstack plans going
forward is to trim it back to something more opinionated and remove a
bunch of low-use optionality in the process.

One of those branches to be trimmed is all the support for things beyond
RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
community, that's what the development environment should focus on.

The patch to remove all of this is here -
https://review.openstack.org/#/c/192154/. Expect this to merge by the
end of the month. If people are interested in non RabbitMQ external
plugins, now is the time to start writing them. The oslo.messaging team
already moved their functional test installation for alternative
platforms off of devstack, so this should impact a very small number of
people.
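For reference, an out-of-tree plugin would be enabled from local.conf along these lines; the plugin name and repository URL here are hypothetical:

```
# local.conf sketch: devstack clones the given repo and runs its
# devstack/plugin.sh hooks. The name and URL below are made up.
[[local|localrc]]
enable_plugin messaging-alt https://example.org/openstack/devstack-plugin-messaging-alt
```

The plugin repo then carries whatever rpc-layer setup is being removed from the devstack tree itself.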

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-16 Thread Davanum Srinivas
+1 Sean.

-- dims

On Tue, Jun 16, 2015 at 9:22 AM, Sean Dague s...@dague.net wrote:
 FYI,

 One of the things that came out of the summit for Devstack plans going
 forward is to trim it back to something more opinionated and remove a
 bunch of low use optionality in the process.

 One of those branches to be trimmed is all the support for things beyond
 RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
 community, that's what the development environment should focus on.

 The patch to remove all of this is here -
 https://review.openstack.org/#/c/192154/. Expect this to merge by the
 end of the month. If people are interested in non RabbitMQ external
 plugins, now is the time to start writing them. The oslo.messaging team
 already moved their functional test installation for alternative
 platforms off of devstack, so this should impact a very small number of
 people.

 -Sean

 --
 Sean Dague
 http://dague.net




-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-16 Thread Dmitry Tantsur

On 06/16/2015 03:47 PM, Jim Rollenhagen wrote:

On Tue, Jun 16, 2015 at 08:56:37AM +0200, Dmitry Tantsur wrote:

On 06/04/2015 08:58 AM, Xu, Hejie wrote:

Hi, guys,
I'm working on adding Microversions into the API-WG's guideline, which
makes sure we have consistent Microversion behavior in the API for users.
Nova and Ironic already have Microversion implementations, and as I
know, Magnum _https://review.openstack.org/#/c/184975/_ is going to
implement Microversions also.
I hope all the projects which support (or plan to support) Microversions
can join the review of the guideline.
The Microversion specification (mostly copied from nova-specs):
_https://review.openstack.org/#/c/187112_
And another guideline for when we should bump the Microversion:
_https://review.openstack.org/#/c/187896/_
As I know, there is already a small difference between Nova's and
Ironic's implementations. Ironic returns the min/max versions in HTTP
headers when the requested version isn't supported by the server. There
is no such thing in Nova, but that is something we need for version
negotiation in Nova also.
Sean has pointed out that we should use the response body instead of HTTP
headers; the body can include an error message. I really hope the ironic
team can take a look at whether you have a compelling reason for using
HTTP headers.
And if we decide to return the body instead of HTTP headers, we probably
need to think about backward compatibility as well, because the
Microversion mechanism itself isn't versioned.
So I think we should keep those headers for a while. Does that make sense?
I hope we have a good guideline for Microversions, because we can only
change the Microversion mechanism itself in a backward-compatible way.
Thanks
Alex Xu


Hi all!

I'd like to try to put in some feedback based on living with microversions
in the Kilo release of Ironic.


And here's my take, based on my experiences. Keep in mind I'm a core
reviewer, a developer, and an operator of Ironic.


Thanks Jim, much appreciated!



From an ops perspective, our team has built our fair share of tooling to
help us run Ironic. Some of it uses the REST API via python or node.js,
and of course we all use the CLI client often.

We also continuously deploy Ironic, for full transparency. My experience
is not with how this works every 6 months, but in the day-to-day.



First of all, after talking to folks off-list, I realized that we all, and
the spec itself, confuse 3 aspects of microversion usage:

1. protecting from breaking changes.
This is clearly a big win from the user's point of view, and it allowed us
to conduct a painful change: renaming an important node state in our state
machine. It will allow an even bigger change this cycle: changing the
default state.



+1. Good stuff. My tooling doesn't break when I upgrade. Yay.


2. API discoverability.
While I believe there may be a better implementation of this idea, I
think I get it now. People want services to report the API versions they
support. People want to be able to request a specific version, and fail
early if it is not present. Also +1 from me.



I don't tend to personally do this. I usually am aware of what version
of Ironic I'm running against. However I see how this could be useful
for other folks.

I do, however, use the versions to say, "Oh, I can now request 1.5, which
has logical names! That's useful; let's set those to the names in our
CMDB." Now my tooling that interacts with the CMDB and Ironic can look
at the version and decide to use node.name instead of the old hack we
used to use.


3. hiding new features from older clients
This is not directly stated by the spec, but many people imply it, and Nova
and Ironic did it in Kilo. I want us to be clear: it is not the same as #2.
You can report versions, but still allow new features to be used.



This is still totally useful. If you know what version you are running
against, you know exactly what features are available.


"You know" is about #2; that's where the confusion is :)
So if you know that moving to the inspection state is disallowed for your
tooling (but not for the whole system!), what does that give you?




I think the disconnect here is that we don't expect users (whether those
are people or computers) to explicitly request a version. We need to
message better that if you are using Ironic or building a tool against
Ironic's API, you should be pinning the version. We also need to take
this comment block[0] and put it in our docs, so users know what each
version does.

Knowing that I get feature X when I upgrade to version Y is useful.


It is this particular thing that gets a -2 from me, after I've seen how it
worked in practice, and here's why.

First of all, I don't believe anyone needs it. Seriously, I can't imagine a
user asking "please prevent me from using non-breaking changes". And the
attempt to implement it was IMO a big failure, for the following reasons:

a) It's hard to do. Even we, the core team, got confused, and for non-core
people it took several iterations to get right. It's a big load on both
developers and reviewers.



I do agree with this. It's 

Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Jay Pipes

On 06/16/2015 04:12 AM, Ken'ichi Ohmichi wrote:

2015-06-16 2:07 GMT+09:00 Jay Pipes jaypi...@gmail.com:

It has come to my attention in [1] that the microversion spec for Nova [2]
and Ironic [3] have used the project name -- i.e. Nova and Ironic -- instead
of the name of the API -- i.e. OpenStack Compute and OpenStack Bare
Metal -- in the HTTP header that a client passes to indicate a preference
for or knowledge of a particular API microversion.

The original spec said that the HTTP header should contain the name of the
service type returned by the Keystone service catalog (which is also the
official name of the REST API). I don't understand why the spec was changed
retroactively and why Nova has been changed to return
X-OpenStack-Nova-API-Version instead of X-OpenStack-Compute-API-Version HTTP
headers [4].
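
To make the distinction concrete, here is a toy sketch of the two naming
schemes being argued over (the helper functions are purely illustrative,
not real client code):

```python
# Two ways of deriving the microversion header name: from the service
# type in the Keystone catalog (what the original spec said), versus
# from the project name (what was actually implemented).
def header_from_service_type(service_type):
    # e.g. "compute" -> "X-OpenStack-Compute-API-Version"
    return "X-OpenStack-%s-API-Version" % service_type.title()

def header_from_project(project):
    # e.g. "nova" -> "X-OpenStack-Nova-API-Version"
    return "X-OpenStack-%s-API-Version" % project.title()
```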

To be blunt, Nova is the *implementation* of the OpenStack Compute API.
Ironic is the *implementation* of the OpenStack BareMetal API.

The HTTP headers should never have been changed like this, IMHO, and I'm
disappointed that they were. In fact, it looks like a very select group of
individuals pushed through this change [5] with little to no input from the
mailing list or community.


Yeah, that is my regret now. Sorry about that.
It would have been better to have more of the conversation on the ML or some
other public place.


I apologize for making you feel bad about it, that wasn't my intent, 
Ken'ichi. :(



but I have the same question as Dmitry.
If we use service names in the header, how do we define these names beforehand?
The current big-tent situation can create duplications between projects, like
X-OpenStack-Container-API-Version or something.
Project names are unique, even if they are just implementations.


Well, I actually like Kevin's suggestion of just removing the 
project/service-type altogether and using OpenStack-API-Version, but to 
answer your question above, I'd just say that having a single API for 
OpenStack Containers has value. See my previous responses about why 
having the API mean a single thing allows developers to better use our APIs.



Since no support for these headers has yet to land in the client packages,
can we please reconsider this?


IMO, I am fine with changing them if we build a consensus about that.
My main concern is just consistency between projects.


Understood.


In addition, Tempest also doesn't support/test microversions at all yet,
so it seems like a good time to reconsider this now.


Good point,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Lucas Alvares Gomes
Hi

 So if our min_version is 2.1 and the max_version is 2.50, that means
 alternative implementations need to implement all 50 versions of the
 api... that sounds painful...


 Yes, it's pain, but it's no different than someone who is following the
 Amazon EC2 API, which cuts releases at a regular (sometimes every 2-3 weeks)
 clip.

 In Amazon-land, the releases are date-based, instead of
 microversion/incrementing version-based, but the idea is essentially the
 same.


Sorry, I might be missing something. I don't think one thing justifies
the other; plus, the problem seems to be the source of truth. I thought
that the idea of the big tent in OpenStack was to not have the TC pick
winners. E.g., if someone wants to have an alternative implementation
of the Baremetal service, will they always have to follow Ironic's API?
That's unfair, because they will always be behind and most likely
won't weigh much on the decisions about the API.

As I mentioned in the other reply, I find it difficult to talk about
alternative implementations while we do not decouple the API
definition level from the implementation level. If we want alternative
implementations to be real competitors, we need to have a sort of
program in OpenStack that will be responsible for delivering a
reference API for each type of project (Baremetal, Compute, Identity,
and so on...).

 There is GREAT value to having an API mean ONE thing and ONE thing only. It
 means that developers can code against something that isn't like quicksand
 -- constantly changing meanings.

+1, sure.

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-16 Thread Dmitry Tantsur

On 06/16/2015 08:56 AM, Dmitry Tantsur wrote:

On 06/04/2015 08:58 AM, Xu, Hejie wrote:

Hi, guys,
I'm working on adding Microversion to the API-WG's guideline, which
makes sure we have consistent Microversion behavior in the API for users.
Nova and Ironic already have Microversion implementations, and as far as I
know Magnum (https://review.openstack.org/#/c/184975/) is going to
implement Microversion also.
I hope all the projects which support (or plan to support) Microversion can
join the review of the guideline.
The Microversion specification (this is almost a copy from nova-specs):
https://review.openstack.org/#/c/187112
And another guideline for when we should bump the Microversion:
https://review.openstack.org/#/c/187896/
As far as I know, there is already a little difference between Nova's and
Ironic's implementations. Ironic returns the min/max version in http headers
when the requested version isn't supported by the server. There isn't such a
thing in Nova, but that is something we need for version negotiation in Nova
also.
Sean has pointed out we should use the response body instead of http
headers, since the body can include an error message. I really hope the
Ironic team can take a look at whether you guys have a compelling reason
for using http headers.
And if we decide to return the body instead of http headers, we probably
need to think about backwards compatibility also, because Microversion
itself isn't versioned.
So I think we should keep those headers for a while; does that make sense?
I hope we have a good guideline for Microversion, because we can only change
Microversion itself in a backwards-compatible way.
Thanks
Alex Xu


Hi all!

I'd like to try put in feedback based on living with microversions in
Kilo release of Ironic.

First of all, after talking to folks off-list, I realized that we all,
and the spec itself, confuse 3 aspects of microversion usage:

1. protecting from breaking changes.
This is clearly a big win from the user's point of view, and it allowed us
to conduct a painful change: renaming an important node state in our
state machine. It will allow us to make an even worse change this cycle:
changing the default state.

2. API discoverability.
While I believe that there may be a better implementation of this idea,
I think I got it now.
support. People want to be able to request a specific version, and fail
early if it is not present. Also +1 from me.

3. hiding new features from older clients
This is not directly stated by the spec, but many people imply it, and
Nova and Ironic did it in Kilo. I want us to be clear: it is not the
same as #2. You can report versions, but still allow new features to be
used.

It is this particular thing that gets -2 from me, after I've seen how it
worked in practice, and that's why.

First of all, I don't believe anyone needs it. Seriously, I can't
imagine a user asking "please prevent me from using non-breaking
changes". And the attempt to implement it was IMO a big failure for the
following reasons:

a) It's hard to do. Even we, the core team, got confused, and for
non-core people it took several iterations to do it right. It's a big
load on both developers and reviewers.

b) It's incomplete (at least for Ironic). We have several API-exposed
things that are just impossible to hide. A good example is node states:
if a node is in a new state, we can't help but expose it to older tooling.
Our free-form JSON fields (properties, instance_info, driver_info and
driver_internal_info) are examples as well. It's useless to speak about
an API contract while we have those.

c) It gives additional push back to making (required) breaking changes.
We already got suggestions to have ONE MORE feature gating for breaking
changes. Reason: people will need to increase microversions to get
features, and your breaking change will prevent it.

d) It requires a hard compromise on the CLI tool. You either default it
to 1.0 forever, and force all the people to get used to figuring out
version numbers and using `ironic --ironic-api-version x.y` every time
(terrible user experience), or you default it to some known good
version, bumping it from time to time. This, in turn, has 2 more serious
problems:

d.1) you start to break people \o/ that's not a theoretical concern: our
downstream tooling did get broken by updating to newer ironicclient from
git

d.2) you require complex version negotiations on the client side.
Otherwise, imagine a CLI tool defaulting to 1.6 issuing `node-create` to
an Ironic supporting only 1.5. Guess what? It will fail despite node-create
being a very old feature. Again, that's not something theoretical: that's
how we broke TripleO CI.
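
The d.2 failure mode is exactly a missing negotiation step: the CLI should
compare its default version against the server's advertised range before
picking one. A sketch of that logic (an illustration of the negotiation
described above, not the actual python-ironicclient algorithm):

```python
def negotiate_version(requested, server_min, server_max):
    """Client-side version negotiation sketch.

    Versions are (major, minor) tuples. If the server supports the
    requested version, use it; if the client's default is newer than
    the server's maximum, degrade to that maximum; otherwise fail early.
    """
    if server_min <= requested <= server_max:
        return requested
    if requested > server_max:
        # Client is newer than the server: fall back to the server's max,
        # so old operations like node-create keep working.
        return server_max
    raise ValueError("version %r not supported (server supports %r - %r)"
                     % (requested, server_min, server_max))
```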

e) Every microversion should be fully tested separately. Which ended up
in Ironic having 4 versions 1.2-1.5 that were never ever gate tested.
Even worse, initially, our gate tested only the oldest version 1.1, but
we solved it (though it took time to realize). The only good thing here
is that these versions 1.2-1.5 were probably never used by anyone.


To sum this long post up, I'm seeing that 

Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Salvatore Orlando
On 16 June 2015 at 14:38, Lucas Alvares Gomes lucasago...@gmail.com wrote:

 Hi

  So if our min_version is 2.1 and the max_version is 2.50, that means
  alternative implementations need to implement all 50 versions of the
  api... that sounds painful...
 
 
  Yes, it's pain, but it's no different than someone who is following the
  Amazon EC2 API, which cuts releases at a regular (sometimes every 2-3
 weeks)
  clip.
 
  In Amazon-land, the releases are date-based, instead of
  microversion/incrementing version-based, but the idea is essentially the
  same.
 

 Sorry, I might be missing something. I don't think one thing justifies
 the other; plus, the problem seems to be the source of truth. I thought
 that the idea of the big tent in OpenStack was to not have the TC pick
 winners. E.g., if someone wants to have an alternative implementation
 of the Baremetal service, will they always have to follow Ironic's API?
 That's unfair, because they will always be behind and most likely
 won't weigh much on the decisions about the API.


I agree and at the same I disagree with this statement.

A competing project in the Baremetal (or networking, or pop-corn-as-a-service)
area can move in two directions:
1) Providing a different implementation of the same API that the
incumbent (Ironic in this case) provides.
2) Supplying different paradigms, including a different user API, thus
presenting itself as a new way of doing Baremetal (and this is exactly
what Quantum did to nova-network).

Both cases are valid, I believe.
In the first case, the advantage is that operators could switch between the
various implementations without affecting their users (this does not mean
that the switch won't be painful for them of course). Also, users shouldn't
have to worry about what's implementing the service, as they always
interact with the same API.
However, it creates a problem regarding control of said API... the team
from the incumbent project, the new team, both teams, the API-WG, or
no-one?
The second case is super-painful for both operators and users (do you need
a refresh on the nova-network vs neutron saga? We're at the 5th series now,
and the end is not even in sight). However, it completely avoids the
governance problem arising from having APIs which are implemented by
multiple projects.

So, even though I understand where Jay is coming from, and ideally I'd love
to have APIs associated with app catalog elements rather than projects, I
think there is not yet a model that would allow us to achieve this when
multiple API implementations are present. So I also understand why the
headers have been implemented in the current way.




 As I mentioned in the other reply, I find it difficult to talk about
 alternative implementations while we do not decouple the API
 definition level from the implementation level. If we want alternative
 implementations to be real competitors, we need to have a sort of
 program in OpenStack that will be responsible for delivering a
 reference API for each type of project (Baremetal, Compute, Identity,
 and so on...).


Indeed. If I understood what you wrote correctly, this is in line with what
I stated in the previous paragraph.
Nevertheless, since afaict we do not have any competing APIs at the moment
(the nova-network API is part of the Nova API, so we might be talking about
overlap there rather than competition), how crazy does it sound if we say
that for OpenStack, Nova is the compute API and Ironic the Bare Metal API,
and so on? Would that be an unacceptable power grab?



  There is GREAT value to having an API mean ONE thing and ONE thing only.
 It
  means that developers can code against something that isn't like
 quicksand
  -- constantly changing meanings.

 +1, sure.

 Cheers,
 Lucas

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Online Migrations.

2015-06-16 Thread Mike Bayer



On 6/15/15 8:34 PM, Philip Schwartz wrote:

I discussed this a bit earlier with John, and we came up with a thought that I
was going to present after getting a little more documentation and a spec
around it. Without going into too much detail, here are the basics of the idea.

Add a new column to all data models that allows us to record, on insert/update
of rows, the version of the Nova object they belong to. Then we can add logic
that prevents the contract from being run until a condition is met for a
specific period of time after an object version has been deprecated. Once the
deprecation window passes, it would be safe to remove the column from the
model and contract the DB. This fits with our current thinking and the ability
for the conductor to downcast objects to older object versions; best of all,
it is easy for us to maintain and access, as each row creation has access to
the nova object and the version set in the object class.

If we set the criteria for breaking backwards compatibility and object
downgrading with a new major version `VERSION = ‘2.0’`, we know at that point
it is safe to remove columns from the model that became deprecated prior to
‘2.0’, and to allow the contract to run as long as all rows of data have a
version of ‘2.0’ in them.

This does not have to be a major version and could really just be an arbitrary
object version + N that we decide on as a community.
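
A sketch of the gating check described above (the version format, boundary
and function name are illustrative assumptions, not Nova code):

```python
def safe_to_contract(row_versions, boundary="2.0"):
    """Only allow the contract migration to run once every row was last
    written by an object at or beyond the version boundary.

    `row_versions` is the set of object-version strings found in the
    proposed per-row version column.
    """
    def as_tuple(version):
        # "2.10" must sort after "2.9", so compare numerically.
        return tuple(int(part) for part in version.split("."))
    return all(as_tuple(v) >= as_tuple(boundary) for v in row_versions)
```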


How much of a 1-1 relationship is there from database table to Nova
object? To what extent does this change enforce that 1-1 vs. remaining
agnostic of it? I ask because one of the issues some of us see with the
objects approach is that it can be taxing on performance and flexibility
if it exposes an API that is too fine-grained and molded to the structure
of tables.







-Ph


On Jun 15, 2015, at 8:06 PM, Mike Bayer mba...@redhat.com wrote:



On 6/15/15 6:37 PM, Mike Bayer wrote:


On 6/15/15 4:21 PM, Andrew Laski wrote:

If I had to visualize what an approach looks like that does this somewhat cleanly, other 
than just putting off contract until the API has naturally moved beyond it, it would 
involve a fixed and structured source of truth about the specific changes we care about, 
such as a versioning table or other data table indicating specific remove() 
directives we're checking for, and the application would be organized such that it can 
always get to this information from an in-memory-cached source before it makes decisions 
about queries. The information would need to support being pushed in from the outside 
such as via a message queue. This would still not protect against operations currently in 
progress failing but at least would prevent future operations from failing a first time.


Or, what I was thinking earlier before I focused too deeply on this whole 
thing, you basically get all running applications to no longer talk to the 
to-be-removed structures at all first, *then* do the contract.

That is, you're on version L. You've done your expand, you're running the
multi-schema version of the model. All your data is migrated. Now some config
flag or something else changes somewhere (still need to work out this part),
which says "we're done with all the removed() columns". All the apps
ultimately get restarted with this new flag in place - the whole thing is now
running without including removed() columns in the model (they're still there
in the source code, but as I illustrated earlier, some conditional logic has
prevented them from actually being part of the model on this new run).

*Then* you run the contract. Then you don't have to worry about runtime
failures or tracking specific columns or any of that. There's just some kind
of state that indicates "ready for L contract". It's still something of a
version, but it is local to a single version of the software; instead of
waiting for a full upgrade from version L to M, you have this internal state
that can somehow move from L(m) to L(c). That is a lot more doable and sane
than trying to guess at startup / runtime what columns are being yanked.
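
The conditional-model idea can be sketched in miniature (pure illustration;
real models would be SQLAlchemy classes, and the flag would come from
deployment configuration):

```python
def effective_columns(columns, removed, include_removed):
    """Build the column set for this run of the service.

    Columns scheduled for removal stay in the source code, but a
    deployment-wide flag decides whether they are part of the model.
    Once every service runs with the flag off, the contract migration
    can drop the columns without causing runtime failures.
    """
    cols = dict(columns)
    if include_removed:
        cols.update(removed)
    return cols
```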








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release] summit session summary: Release Versioning for Server Applications

2015-06-16 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-06-16 11:45:51 +0200:
 Doug Hellmann wrote:
  [...]
  I put together a little script [1] to try to count the previous
  releases for projects, to use that as the basis for their first
  SemVer-based version number. I pasted the output into an etherpad
  [2] and started making notes about proposed release numbers at the
  top. For now, I'm only working with the projects that have been
  managed by the release team (have the release:managed tag in the
  governance repository), but it should be easy enough for other projects
  to use the same idea to pick a version number.
 
 Your script missed 2015.1 tags for some reason...

They didn't match the pattern because they had a trailing digit, and I
didn't notice.

 I still think we should count the number of integrated releases
 instead of the number of releases (basically considering pre-integration
 releases as 0.x releases). That would give:

Hmm, OK, I didn't really consider them as pre-1.0 but I see your point.
I can go along with counting only integrated releases.

 ceilometer 5.0.0
 cinder 7.0.0
 glance 11.0.0
 heat 5.0.0
 horizon 8.0.0
 ironic 2.0.0
 keystone 8.0.0
 neutron* 7.0.0
 nova 12.0.0
 sahara 3.0.0
 trove 4.0.0
 
 We also traditionally managed the previously-incubated projects. That
 would add the following to the mix:
 
 barbican 1.0.0
 designate 1.0.0
 manila 1.0.0
 zaqar 1.0.0
 

Those didn't have the release:managed tag, so didn't show up in the
output of the script. But I agree, if we're counting only from the
integrated releases then starting from 1.0 makes sense for those and any
other new projects that have joined the big tent during kilo.
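
Under that convention the mapping is mechanical; a sketch (assuming the
first SemVer release takes major = integrated-release count + 1, so e.g.
nova's 11 integrated releases yield 12.0.0 - this mirrors the numbers
above, not the release team's actual tooling):

```python
def first_semver(integrated_release_count):
    """Derive a project's first SemVer version from how many integrated
    releases it has shipped, treating pre-integration releases as 0.x."""
    return "%d.0.0" % (integrated_release_count + 1)
```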

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Device {UUID}c not defined on plugin

2015-06-16 Thread Alvise Dorigo

Hi
after migrating from Havana to IceHouse (with the controller and network
services/agents on the same physical node, and using OVS/GRE) we started
facing some network-related problems (the internal tag of the element
shown by ovs-vsctl show was set to 4095, which is wrong AFAIK). At the
beginning the problems could be solved by just restarting the
openvswitch-related agents (and openvswitch itself), or by changing the tag
by hand; but now the networking has definitely stopped working.


When we add a new router interface connected to a tenant LAN, it is
created in the DOWN state. Then in the openvswitch-agent.log we see this
error message:


2015-06-16 15:07:43.275 40708 WARNING 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device 
ba295e45-9a73-48c1-8864-a59edd5855dc not defined on plugin


and nothing more.

Any suggestion ?

thanks,

Alvise


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cinder] [Tempest] Regarding deleting snapshot when instance is OFF

2015-06-16 Thread Jordan Pittier
On Thu, Apr 9, 2015 at 6:10 PM, Eric Blake ebl...@redhat.com wrote:

 On 04/08/2015 11:22 PM, Deepak Shetty wrote:
  + [Cinder] and [Tempest] in the $subject since this affects them too
 
  On Thu, Apr 9, 2015 at 4:22 AM, Eric Blake ebl...@redhat.com wrote:
 
  On 04/08/2015 12:01 PM, Deepak Shetty wrote:
 
  Questions:
 
  1) Is this a valid scenario being tested ? Some say yes, I am not sure,
  since the test makes sure that instance is OFF before snap is deleted
 and
  this doesn't work for fs-backed drivers as they use hyp assisted snap
  which
  needs domain to be active.
 
  Logically, it should be possible to delete snapshots when a domain is
  off (qemu-img can do it, but libvirt has not yet been taught how to
  manage it, in part because qemu-img is not as friendly as qemu in having
  a re-connectible Unix socket monitor for tracking long-running
 progress).
 
 
  Is there a bug/feature already opened for this ?

 Libvirt has this bug: https://bugzilla.redhat.com/show_bug.cgi?id=987719
 which tracks generic ability of libvirt to delete snapshots; ideally,
 the code to manage snapshots will work for both online and persistent
 offline guests, but it may result in splitting the work into multiple bugs.


I can't access this bug report, it seems private, I need to authenticate.


  I didn't understand much of what you mean by a re-connectible unix
  socket :)... are you hinting that qemu-img doesn't have the ability to
  attach to a qemu / VM process for a long time over a unix socket?

 For online guest control, libvirt normally creates a Unix socket, then
 starts qemu with its -qmp monitor pointing to that socket.  That way, if
 libvirtd goes away and then restarts, it can reconnect as a client to
 the existing socket file, and qemu never has to know that the person on
 the other end changed.  With that QMP monitor, libvirt can query qemu's
 current state at will, get event notifications when long-running jobs
 have finished, and issue commands to terminate long-running jobs early,
 even if it is a different libvirtd issuing a later command than the one
 that started the command.

 qemu-img, on the other hand, only has the -p option or SIGUSR1 signal
 for outputting progress to stderr on a long-running operation (not the
 most machine-parseable), but is not otherwise controllable.  It does not
 have a management connection through a Unix socket.  I guess in thinking
 about it a bit more, a Unix socket is not essential; as long as the old
 libvirtd starts qemu-img in a manner that tracks its pid and collects
 stderr reliably, then restarting libvirtd can send SIGUSR1 to the pid
 and track the changes to stderr to estimate how far along things are.

 Also, the idea has been proposed that qemu-img is not necessary; libvirt
 could use qemu -M none to create a dummy machine with no CPUs and JUST
 disk images, and then use the qemu QMP monitor as usual to perform block
 operations on those disks by reusing the code it already has working for
 online guests.  But even this approach needs coding into libvirt.

 --
 Eric Blake   eblake redhat com+1-919-301-3266
 Libvirt virtualization library http://libvirt.org


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi,
I'd like to progress on this issue, so I will spend some time on it.

Let's recap. The issue is that deleting a Cinder snapshot that was created
during a Nova instance snapshot (of an instance booted from a Cinder volume)
doesn't work when the original Nova instance is stopped. This bug only arises
when a Cinder driver uses the feature called QEMU Assisted
Snapshots/live-snapshot (currently only GlusterFS, but soon generic NFS when
https://blueprints.launchpad.net/cinder/+spec/nfs-snapshots gets in).

This issue is triggered by the Tempest scenario test_volume_boot_pattern.
This scenario:
[does some stuff]
1) Creates a Cinder volume from a Cirros image
2) Boots a Nova instance on the volume
3) Makes a snapshot of this instance (which creates a Cinder snapshot
because the instance was booted from a volume), using the QEMU Assisted
Snapshots feature
[does some other stuff]
4) Stops the instance created in step 2, then deletes the snapshot created in
step 3.

The deletion of the snapshot created in step 3 fails because Nova wants
libvirt to do a blockRebase (see
https://github.com/openstack/nova/blob/68f6f080b2cddd3d4e97dc25a98e0c84c4979b8a/nova/virt/libvirt/driver.py#L1920
)

For reference, there's a bug targeting Cinder for this :
https://bugs.launchpad.net/cinder/+bug/1444806

What I'd like to do (but I am asking your advice first) is: just before the
call to virt_dom.blockRebase(), check whether the domain is running, and if
not, call qemu-img rebase -b $rebase_base $rebase_disk
(this idea was brought up by Eric Blake in the previous reply).
(this idea was brought up by Eric Blake in the previous reply).


Re: [openstack-dev] Device {UUID}c not defined on plugin

2015-06-16 Thread Alvise Dorigo

Hi,
I forgot to attach some relevant config files:

/etc/neutron/plugins/ml2/ml2_conf.ini :

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = 192.168.61.106
tunnel_type = gre
enable_tunneling = True

/etc/neutron/neutron.conf :

[DEFAULT]
nova_ca_certificates_file = /etc/grid-security/certificates/INFN-CA-2006.pem
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_hosts = 192.168.60.105:5672,192.168.60.106:5672
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = https://cloud-areapd.pd.infn.it:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 1b2caeedb3e2497b935723dc6e142ec9
nova_admin_password = X
nova_admin_auth_url = https://cloud-areapd.pd.infn.it:35357/v2.0
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
verbose = True
debug = False
rabbit_ha_queues = True
dhcp_agents_per_network = 2
[quotas]
[agent]
[keystone_authtoken]
auth_uri = https://cloud-areapd.pd.infn.it:35357/v2.0
auth_url = https://cloud-areapd.pd.infn.it:35357/v2.0
auth_host = cloud-areapd.pd.infn.it
auth_protocol = https
auth_port = 35357
admin_tenant_name = services
admin_user = neutron
admin_password = X
cafile = /etc/grid-security/certificates/INFN-CA-2006.pem
[database]
connection = mysql://neutron_prod:XX@192.168.60.10/neutron_prod
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default



And here is the output of "ovs-vsctl show":

http://pastebin.com/P977162t

Alvise



On 16/06/2015 15:30, Alvise Dorigo wrote:

Hi
after a migration of Havana to IceHouse (using controller and network 
services/agents on the same physical node, and using OVS/GRE) we 
started facing some network-related problems (the internal tag of the 
element shown by ovs-vsctl show was set to 4095, which is wrong 
AFAIK). At the beginning the problems could be solved by just 
restarting the openvswitch related agents (and openvswitch itself), or 
changing the tag by hand; but now the networking definitely stopped 
working.


When we add a new router interface connected to a tenant LAN, it is
created in the DOWN state. Then in the openvswitch-agent.log we see this
error message:


2015-06-16 15:07:43.275 40708 WARNING 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device 
ba295e45-9a73-48c1-8864-a59edd5855dc not defined on plugin


and nothing more.

Any suggestion ?

thanks,

Alvise


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] [openstack-qa] [tempest] UUIDs and names in tempest.conf file

2015-06-16 Thread Matthew Treinish
So I need to point out that the openstack-qa list isn't used anymore. We only
keep it around so we have a place to send periodic test results to. In the
future you should just send things to the openstack-dev ML with a [QA] tag
in the subject.

On Tue, Jun 16, 2015 at 05:25:30AM +, Tikkanen, Viktor (Nokia - FI/Espoo) 
wrote:
 Hi!
 
 I have a question regarding usage of UUIDs and names in the tempest.conf 
 file. Are there some common ideas/reasons (except unambiguousness and making 
 test cases simpler) why some parameters (e.g. public_network_id, flavor_ref, 
 image_ref, ...) are designed so that they require entity UUIDs but others 
 (e.g. fixed_network_name, floating_network_name, ...) require entity names?

So this is mostly a historical artifact from before I even started working on
the project; my guess is this was done because not all resources require unique
names, but that's just a guess. Config options that tell tempest which resources
to use, which were added more recently, use a name because it's hard for people
to deal with UUIDs. That being said, there is a spec still under review to
rationalize how we specify resources in tempest and make things a bit simpler and
more consistent: https://review.openstack.org/173334 Once the details are ironed out
in the spec review and implementation begins, we'll deprecate most of the
existing options in favor of the new format for specifying resources.
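As a concrete illustration of the mix Viktor is asking about (a sketch only: the option names come from the thread and tempest of that era, but the section layout and all values are placeholder assumptions):

```ini
# Illustrative tempest.conf fragment -- NOT a complete or authoritative config.

[compute]
# UUID/ID-based options: must be updated for every new environment
image_ref = 01010101-0101-0101-0101-010101010101
flavor_ref = 42

[network]
# UUID-based
public_network_id = aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
# name-based options, added later because names are easier for humans
fixed_network_name = private
floating_network_name = public
```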

 
 Currently I use shell scripts for creating images and flavors with some 
 predefined ID (like 01010101-0101-0101-0101-010101010101) just to avoid 
 updating the related parameters in the configuration file every time some 
 new environment is taken into use. The problem here is that e.g. an image 
 ID cannot be reused after deleting the image (unless the data is removed 
 directly from the database) and flavors cannot be updated without changing 
 their ID.
 

So there are several scripts out there that do something similar (create a
couple of images and flavors to use for testing) to handle some of those common
setup steps. There used to be an in-tree bash script in tempest to do the same
thing too. It was removed because it was basically unmaintained and had
bit-rotted to the point it didn't really work. There is a patch up for review
right now to provide a new in-tree tool for automating some of these initial
configuration steps here: https://review.openstack.org/#/c/133245/

-Matt Treinish




Re: [openstack-dev] [murano] python versions

2015-06-16 Thread Serg Melikyan
Stan, +100500

On Fri, Jun 12, 2015 at 3:13 PM, Stan Lagun sla...@mirantis.com wrote:

 I'd rather go with Heat approach (job first) because it makes easier to track 
 what is left to port to Py34 and track progress in this area

 Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis


 On Mon, Jun 8, 2015 at 2:46 PM, Kirill Zaitsev kzait...@mirantis.com wrote:

 I’ve looked into several OS projects, and they seem to first implement py3 
 support and create a job later. (Except for heat: they already have a 
 non-voting py34 job, which seems to fail every time =))

 I suggest we do the same: first make murano work on py34, then make a py34 
 job. I’ll file a blueprint shortly.

 --
 Kirill Zaitsev
 Murano team
 Software Engineer
 Mirantis, Inc

 On 2 Jun 2015 at 15:58:17, Serg Melikyan (smelik...@mirantis.com) wrote:

 Hi Kirill,

 I agree with Alexander that we should not remove support for python
 2.6 in python-muranoclient.

 Regarding adding python-3 jobs - great idea! But we need to migrate
 python-muranoclient to yaql 1.0 first and then add python-3 jobs,
 because previous versions of yaql are not compatible with python-3.

 On Tue, Jun 2, 2015 at 3:33 PM, Alexander Tivelkov
 ativel...@mirantis.com wrote:
  Hi Kirill,
 
  Client libraries usually have a wider range of python requirements, as they
  may be run on various kinds of legacy environments, including ones with
  python 2.6 only.
  Murano is definitely not the only project in Openstack which still 
  maintains
  py26 compatibility for its client: nova, glance, cinder and other 
  integrated
  ones do this.
 
  So, I would not drop py26 support for client code without any good reasons
  to. Are there any significant reasons to do it?
  Regarding py3.4 - this is definitely a good idea.
 
 
  --
  Regards,
  Alexander Tivelkov
 
  On Tue, Jun 2, 2015 at 3:04 PM, Kirill Zaitsev kzait...@mirantis.com
  wrote:
 
  It seems that python-muranoclient is the last project from the murano-official
  group that still supports python 2.6. Other projects do not have a 2.6
  testing job (correct me if I’m wrong).
 
  Personally I think it’s time to drop support for 2.6 completely, and add
  (at least non-voting) jobs with python3.4 support tests.
  It seems to fit the whole process of moving OS projects towards python 3:
  https://etherpad.openstack.org/p/liberty-cross-project-python3
 
  What do you think? Does anyone have any objections?
 
  --
  Kirill Zaitsev
  Murano team
  Software Engineer
  Mirantis, Inc
 
 



 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com






-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Dmitry Tantsur
On 16 June 2015 at 13:52, Jay Pipes jaypi...@gmail.com wrote:

 On 06/16/2015 04:36 AM, Alex Xu wrote:

 So if our min_version is 2.1 and the max_version is 2.50, that means
 alternative implementations need to implement all 50 versions of the
 API... that sounds painful...


 Yes, it's pain, but it's no different than someone who is following the
Amazon EC2 API, which cuts releases at a regular (sometimes every 2-3
weeks) clip.

 In Amazon-land, the releases are date-based, instead of
microversion/incrementing version-based, but the idea is essentially the
same.

 There is GREAT value to having an API mean ONE thing and ONE thing only.
It means that developers can code against something that isn't like
quicksand -- constantly changing meanings.

Being one of those developers, I only see this value for breaking changes.


 Best,
 -jay




Re: [openstack-dev] [Horizon] [tests] [dsvm] Tests failed because of timeout during the images upload

2015-06-16 Thread Timur Sufiev
Timur,

If the old jQuery code (not AngularJS) is still used for processing the 'Create
Image' form, then the spinner is shown just before the form contents are
submitted [1] and hidden right after the request completes [2], either when the
form is redrawn or when the whole page is redrawn in case of a redirect -
which for an Image means that it was successfully created. In your scenario it
should be a redirect. So either the request to the server takes too long, or
the redirect doesn't redraw the page.

[1]
https://github.com/openstack/horizon/blob/2f7a2dd891396f848278dc1bc2216e5720b602f6/horizon/static/horizon/js/horizon.modals.js#L229
[2]
https://github.com/openstack/horizon/blob/2f7a2dd891396f848278dc1bc2216e5720b602f6/horizon/static/horizon/js/horizon.modals.js#L232
[3]
https://github.com/openstack/horizon/blob/2f7a2dd891396f848278dc1bc2216e5720b602f6/horizon/static/horizon/js/horizon.modals.js#L245

On Tue, Jun 16, 2015 at 12:31 PM Matthias Runge mru...@redhat.com wrote:

 On 16/06/15 11:20, Timur Nurlygayanov wrote:

  In this method the integration tests try to upload an image from the
  following link [4]:
  http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-uec.tar.gz
 

 Imho it would be better to host this somewhere internal in infra rather
 than getting it from the net.

 Wouldn't that be an option for improvement? It would even make tests
 more reliable.

 Matthias




Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-16 Thread Fox, Kevin M
Out of the box, VMs usually can contact the controllers through the router's NAT, 
but not vice versa. So it's preferable for guest agents to make the connection, 
rather than the controller connecting to the guest agents. No floating IPs, security 
group rules or special networks are needed then.

Thanks,
Kevin


From: Clint Byrum
Sent: Monday, June 15, 2015 6:10:27 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Excerpts from Fox, Kevin M's message of 2015-06-15 15:59:18 -0700:
 No, I was confused by your statement:
 When we create a bay, we have an ssh keypair that we use to inject the ssh 
 public key onto the nova instances we create.

 It sounded like you were using that keypair to inject a public key. I just 
 misunderstood.

 It does raise the question though: are you using ssh between the controller 
 and the instance anywhere? If so, we will still run into issues when we go to 
 try and test it at our site. Sahara does currently, and we're forced to put a 
 floating IP on every instance. It's less than ideal...


Why not just give each instance a port on a network which can route
directly to the controller's network? Is there some reason you feel
forced to use a floating IP?



Re: [openstack-dev] [all][requirements] Proposing a slight change in requirements.txt syncing output.

2015-06-16 Thread Doug Hellmann
Excerpts from Robert Collins's message of 2015-06-16 11:18:55 +1200:
 At the moment we copy the global-requirements lines verbatim.
 
 So if we have two lines in global-requirements.txt:
 oslotest>=1.5.1  # Apache-2.0
 PyECLib>=1.0.7  # BSD
 with very different layouts

Most of the inline comments for packages are license indicators that we
started collecting a while back at someone's request. Are we actually
using those? If not, maybe we should clean up that file and reserve
inline comments for something we do actually care about?

Doug



Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Sean Dague
On 06/16/2015 07:38 AM, Alex Xu wrote:
 
 
 2015-06-16 18:57 GMT+08:00 Sean Dague s...@dague.net:
 
 On 06/15/2015 03:45 PM, Kevin L. Mitchell wrote:
  On Mon, 2015-06-15 at 13:07 -0400, Jay Pipes wrote:
  The original spec said that the HTTP header should contain the name of
  the service type returned by the Keystone service catalog (which is 
 also
  the official name of the REST API). I don't understand why the spec was
  changed retroactively and why Nova has been changed to return
  X-OpenStack-Nova-API-Version instead of X-OpenStack-Compute-API-Version
  HTTP headers [4].
 
  Given the disagreement evinced by the responses to this thread, let me
  ask a question: Would there be any particular problem with using
  X-OpenStack-API-Version?
 
 So, here is my concern with not having the project namespacing at all:
 
 Our expectation is that services are going to move towards real wsgi on
 their API instead of eventlet. Which is, hopefully, naturally going to
 give you things like this:
 
 GET api.server/compute/servers
 GET api.server/baremetal/chassis
 
 In such a world it will end up possibly confusing that
 OpenStack-API-Version 2.500 is returned from api.server/compute/servers,
 but OpenStack-API-Version 1.200 is returned from
 api.server/baremetal/chassis.
 
 
 Clients should get those URLs from the keystone service catalog; that means
 the client should know what it is requesting.

Sure, there is a lot of should in there though. But by removing a level
of explicitness in this we potentially introduce more confusion. The
goal of a good interface is not just to make it easy to use, but make it
hard to misuse. Being explicit about the service in the return header
will eliminate a class of errors where the client code got confused
about which service they were talking to (because to set up a VM with a
network in a neutron case you have to jump back and forth between Nova /
Neutron quite a bit).

This would provide an additional bit of signaling on that fact, which
will prevent a class of mistakes by API consumers.
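Sean's point can be made concrete with a small client-side sketch. The header names follow this thread; the validation helper, its behavior and the baremetal header spelling are illustrative assumptions, not an existing API:

```python
# Sketch: why a service-scoped microversion header lets a client detect
# that it is talking to the service it thinks it is. Header names follow
# the thread; everything else here is hypothetical illustration.

EXPECTED_HEADER = {
    "compute": "X-OpenStack-Nova-API-Version",
    "baremetal": "X-OpenStack-Ironic-API-Version",
}

def check_service_version(service_type, response_headers):
    """Return the microversion string from a response, or raise if the
    service-specific header is missing (e.g. we hit the wrong endpoint)."""
    header = EXPECTED_HEADER[service_type]
    # HTTP header names are case-insensitive; normalize before lookup.
    normalized = {k.lower(): v for k, v in response_headers.items()}
    try:
        return normalized[header.lower()]
    except KeyError:
        raise RuntimeError(
            "Response does not carry %s -- are we really talking to the "
            "%s service?" % (header, service_type))
```

With a single generic X-OpenStack-API-Version header there would be nothing service-specific for such a check to latch onto, so the Nova-vs-Neutron mix-up described above would pass silently.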

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Sean Dague
On 06/16/2015 08:38 AM, Lucas Alvares Gomes wrote:
 Hi
 
 So if our min_version is 2.1 and the max_version is 2.50, that means
 alternative implementations need to implement all 50 versions of the
 API... that sounds painful...


 Yes, it's pain, but it's no different than someone who is following the
 Amazon EC2 API, which cuts releases at a regular (sometimes every 2-3 weeks)
 clip.

 In Amazon-land, the releases are date-based, instead of
 microversion/incrementing version-based, but the idea is essentially the
 same.

 
 Sorry, I might be missing something. I don't think one thing justifies
 the other; plus, the problem seems to be the source of truth. I thought
 that the idea of the big tent in OpenStack was for the TC not to pick
 winners. E.g., if someone wants to have an alternative implementation
 of the Baremetal service, will they always have to follow Ironic's API?
 That's unfair, because they will always be behind and most likely
 won't weigh much on the decisions of the API.
 
 As I mentioned in the other reply, I find it difficult to talk about
 alternative implementations while we do not decouple the API
 definition level from the implementation level. If we want alternative
 implementations to be real competitors we need to have a sort of
 program in OpenStack that will be responsible for delivering a
 reference API for each type of project (Baremetal, Compute, Identity,
 and so on...).

I kind of feel like we've sprung up a completely unrelated conversation
about what big tent means under a pretty narrow question about 'what
should this header be called, and if/when should we change it now that
it's in the field'. I've probably contributed to it drifting off topic
as well.

However, I think it would be good to try to focus on the topic at hand
which is header naming, what the implications are, and if/when changes
should happen.

The goal of Microversions was crisping up the API contract to the user
across multiple deploys, at different points in time, of the *same*
upstream codebase. That's the narrow problem we are trying to fix. It's
not a grandiose API abstraction. It might let us get to one down the
road, now that we can evolve the API one bit at a time. But that's a
down the road thing.

So in that context we have a current header which references a service
by code name.

The plus side of that is we've already got a central registry for what
that should be, openstack/{name}.

Also the problem with expanding to generic names is with Neutron you get
OpenStack-Network-API-Version but there are multiple network
implementations still. Or even worse, what if Congress and/or GBP
implement microversions? OpenStack-Policy-API-Version? What about
projects that start off outside of openstack/ and implement this kind of
mechanism, so either don't have a moniker, or land grab one that we're
not comfortable with them having inside of OpenStack.

So I don't think it's clear that in the general case the generic moniker
is better than the code-name one. And it's a change to a thing in the
field, so before deciding on that kind of change we need to make sure it
will really be beneficial to our stakeholders: API consumers, operators,
developers, integrators.

On a change like this I'd much rather not pre-optimize for out-of-tree
re-implementations, which I think we've said pretty strongly at a TC
level that we don't want, and instead keep the status quo until there
is a strong reason that benefits one of our stakeholders.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [release] Release announcements convergence

2015-06-16 Thread Thierry Carrez
Hi everyone,

Release announcements in OpenStack come in various forms and shapes. So
far we had:

- Integrated release service components being announced on
openstack-announce and openstack general lists.

- Other service components sometimes being announced on openstack-dev

- Oslo libraries being announced on openstack-dev

- Other libraries sometimes being announced on openstack-announce,
sometimes on openstack-dev, sometimes not at all

With the move out of the integrated release we'd like to streamline
release announcements and make them *all* converge to openstack-announce.

The release management team proposes to push all announcements
(services, libraries that they release, etc) to openstack-announce, with
reply-to: set to openstack-dev (in case the announce generates a thread,
it will happen on openstack-dev and not on the moderated announce list).

Teams with deliverables that are not released by the release management
team are encouraged to publish their release announcements on
openstack-announce (their email there will be moderated through as long
as it's a release of an openstack project).

In summary, if you're not yet subscribed to -announce and would like to
be the first to know when something is released in the OpenStack world,
now would be a good time to do so. openstack-announce is very low
traffic, you should expect less than 12 emails per week on average.

Comments ?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-16 Thread Andre Martin

On Jun 15, 2015, at 02:48, Steven Dake (stdake) 
std...@cisco.com wrote:

I am proposing Harm Waites for the Kolla core team.

+1
Harm did excellent work on the designate container and does very thorough 
reviews, the cinder container review being just one example among many.

Martin



Re: [openstack-dev] [ceilometer] When do we import aodh?

2015-06-16 Thread Chris Dent

On Tue, 16 Jun 2015, Julien Danjou wrote:


To me the next step is to:
1. Someone cares and review what I've done in the repository
2. import the code into openstack/aodh


Assuming that we'll do whatever is required to finish things after
moving it under openstack/ then whatever you've done in step one
doesn't matter all that much, it's just a stepping stone in the
process.

My cursory look just now says yeah, let's do it assuming the
additional steps below (which we need to clarify) don't disappear.


3. enable gate jobs (unit tests at least)


yah


4. enable and fix devstack gating (probably writing a devstack plugin
  for aodh)


yah

and:

5. anything in tempest to worry about?
6. what's that stuff in the ceilometer dir?
   6.1. Looks like migration artifacts, what about migration in
general?
7. removing all the rest of the cruft (whatever it might be)
8. awareness of and attention to downstream packaging concerns
9. the inevitable several steps we've forgotten

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [taskflow] Returning information from reverted flow

2015-06-16 Thread Joshua Harlow

Dulko, Michal wrote:

-Original Message-
From: Joshua Harlow [mailto:harlo...@outlook.com]
Sent: Friday, June 12, 2015 5:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [taskflow] Returning information from reverted
flow

Dulko, Michal wrote:

Hi,

In Cinder we had merged a complicated piece of code[1] to be able to
return something from a flow that was reverted. Basically, outside the
flow we needed to know whether the volume was rescheduled or not. Right
now this is done by injecting the information into the exception thrown
from the flow. Another idea was to use the notifications mechanism of
TaskFlow. Both ways are workarounds rather than real solutions.

Unsure about notifications being a workaround (basically you are notifying
other entities that rescheduling happened, which seems like exactly what it
was made for) but I get the point ;)


Please take a look at this review - https://review.openstack.org/#/c/185545/. 
Notifications cannot help if some further revert decision needs to be based on 
something that happened earlier.


That sounds like conditional reverting, which seems like it should be 
handled differently anyway, or am I misunderstanding something?



I wonder if TaskFlow couldn't provide a mechanism to mark stored element
to not be removed when revert occurs. Or maybe another way of returning
something from reverted flow?

Any thoughts/ideas?

I have a couple, I'll make some paste(s) and see what people think,

How would this look (as pseudo-code or otherwise) to you, and what would be
your ideal? Maybe we can work from there (maybe you could do some paste(s)
too and we can prototype it). Just storing information that is returned
from revert() somewhere? Or something else? There has been talk about
task 'local storage' (or something along those lines) that
could also be used for this same purpose.


I think that the easiest idea from the perspective of an end user would be to 
save items returned from revert into the flow engine's storage *and* not remove 
them from storage when the whole flow gets reverted. This is completely backward 
compatible, because currently revert doesn't return anything. And if revert has 
to record some information for further processing - this will also work.



Ok, let me see what this looks like and maybe I can have a POC in the 
next few days. I don't think it's impossible to do (obviously) and 
hopefully it will be useful for this.



[1] https://review.openstack.org/#/c/154920/
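The semantics Michal proposes - revert() may return a value, and the engine keeps it in storage even after the whole flow is reverted - can be sketched in plain Python. This is a toy mock-up of the proposed behavior, not the real TaskFlow API:

```python
# Toy illustration: an "engine" runs tasks, reverts completed ones on
# failure, and persists whatever revert() returns so callers can inspect
# it afterwards. NOT the TaskFlow API, just the proposed semantics.

class Task:
    def execute(self):
        raise NotImplementedError

    def revert(self):
        return None  # by default revert records nothing

class Engine:
    def __init__(self, tasks):
        self.tasks = tasks
        self.storage = {}  # survives a full flow revert

    def run(self):
        done = []
        for task in self.tasks:
            try:
                task.execute()
                done.append(task)
            except Exception:
                # Revert completed tasks in reverse order, keeping any
                # values they return instead of discarding them.
                for t in reversed(done):
                    result = t.revert()
                    if result is not None:
                        self.storage[type(t).__name__] = result
                raise

class CreateVolume(Task):
    def execute(self):
        pass

    def revert(self):
        # e.g. tell the caller the volume was rescheduled
        return {"rescheduled": True}

class AttachVolume(Task):
    def execute(self):
        raise RuntimeError("boom")
```

After `Engine([CreateVolume(), AttachVolume()]).run()` fails, `engine.storage` still reports `{"rescheduled": True}` without smuggling it through the exception.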





[openstack-dev] [kolla] Add your name and TZ to wiki

2015-06-16 Thread Paul Bourke

Hi all,

Steve suggested adding a new table to the Kolla wiki to help us keep 
track of who's actively working on Kolla along with relevant info such 
as timezones and IRC names.


I'm missing lots of names and timezones so if you'd like to be on this 
please feel free to update it at 
https://wiki.openstack.org/wiki/Kolla#Active_Contributors


Cheers,
-Paul



Re: [openstack-dev] How does instance's tap device macaddress generate?

2015-06-16 Thread Tapio Tallgren

On 11.06.2015 18:52, Andreas Scheuring wrote:

Maybe this helps (taken from [1])

Actually there is one way that the MAC address of the tap device
affects
proper operation of guest networking - if you happen to set the tap
device's MAC identical to the MAC used by the guest, you will get errors
from the kernel similar to this:


   kernel: vnet9: received packet with own address as source address



[1] http://www.redhat.com/archives/libvir-list/2012-July/msg00984.html
I was wondering the same question myself one day and found this 
explanation from the same mail list:


vnet0 is the backend of the guest NIC, and its MAC addr
is more or less irrelevant to the functioning of the guest
itself, since traffic does not originate on this NIC.
The only important thing is that this TAP device must
have a high-value MAC address, to avoid the bridge
device using the TAP device's MAC as its own. Hence,
when creating the TAP device, libvirt takes the guest
MAC addr and simply sets the top byte to 0xFE

http://www.redhat.com/archives/libvir-list/2012-June/msg01330.html
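The rule quoted above (tap MAC = guest MAC with the top byte forced to 0xfe) is easy to express in a few lines of Python; the example MAC below is arbitrary:

```python
def tap_mac_from_guest_mac(guest_mac):
    """Derive the tap-device MAC the way the quoted libvirt behavior
    describes: copy the guest MAC and force the top byte to 0xfe, so a
    bridge never adopts the tap device's MAC as its own."""
    octets = guest_mac.split(":")
    octets[0] = "fe"
    return ":".join(octets)

# tap_mac_from_guest_mac("52:54:00:12:34:56") -> "fe:54:00:12:34:56"
```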





Re: [openstack-dev] [Murano] Cloud Foundry Service Broker Api in Murano

2015-06-16 Thread Nikolay Starodubtsev
Here is a draft spec for this: https://review.openstack.org/#/c/192250/



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-06-16 13:11 GMT+03:00 Nikolay Starodubtsev nstarodubt...@mirantis.com
:

 Hi all,
 I've started work on the blueprint:
 https://blueprints.launchpad.net/murano/+spec/cloudfoundry-api-support
 I plan to publish a spec in a day or two. If anyone is interested in
 cooperating, please drop me a message here or on IRC: Nikolay_St



 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1



Re: [openstack-dev] [all][release] summit session summary: Release Versioning for Server Applications

2015-06-16 Thread Thierry Carrez
Doug Hellmann wrote:
 Excerpts from Thierry Carrez's message of 2015-06-16 11:45:51 +0200:
 We also traditionally managed the previously-incubated projects. That
 would add the following to the mix:

 barbican 1.0.0
 designate 1.0.0
 manila 1.0.0
 zaqar 1.0.0

 
 Those didn't have the release:managed tag, so didn't show up in the
 output of the script. [...]

Proposed as https://review.openstack.org/192193

-- 
Thierry Carrez (ttx)



[openstack-dev] [Glance] [all] Proposal for Glance Artifacts Sub-Team meeting.

2015-06-16 Thread Nikhil Komawar
Hi all,

We have planned fast-track development for Glance Artifacts; it will also
be our v3 API. To balance pace, knowledge sharing, and synchronous
discussion of ideas and opinions, as well as to see this great feature
through in Liberty:

We hereby propose a non-mandatory, open-to-all sub-team meeting for
Glance Artifacts.

Please vote on the time and date:
https://review.openstack.org/#/c/192270/ (Note: run the tests for your
vote to ensure we are considering feasible and non-conflicting times.) We
will start the meeting next week unless there are strong conflicts.

-- 

Thanks,
Nikhil




[openstack-dev] [devstack] apache wsgi application support

2015-06-16 Thread Sean Dague
I was just looking at the patches that put Nova under apache wsgi for
the API, and there are a few things that I think are going in the wrong
direction. Largely I think because they were copied from the
lib/keystone code, which we've learned is kind of the wrong direction.

The first is the fact that a big reason for putting {SERVICES} under
apache wsgi is we aren't running on a ton of weird unregistered ports.
We're running on 80 and 443 (when appropriate). In order to do this we
really need to namespace the API urls. Which means that service catalog
needs to be updated appropriately.

I'd expect nova to be running on http://localhost/compute not
http://localhost:8774 when running under wsgi. That's going to probably
interestingly break a lot of weird assumptions by different projects,
but that's part of the reason for doing this exercise. Things should be
using the service catalog, and when they aren't, we need to figure it out.

(Exceptions can be made for third party APIs that don't work this way,
like the metadata server).

I also think this -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
is completely wrong.

The Apache configs should instead specify access rules such that the
installed console entry point of nova-api can be used in place as the
WSGIScript.

This should also make lines like -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
L274 unneeded. (The WSGI script will be in a known place.) It will also
make upgrades much more friendly.
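A minimal sketch of what such an Apache config could look like, assuming mod_wsgi on Apache 2.4 (the `/compute` alias, process settings and script path are illustrative assumptions, not the actual devstack change):

```apache
# Hypothetical sketch: serve the Nova API namespaced under /compute on the
# standard HTTP port, pointing WSGIScriptAlias at an installed entry-point
# script instead of a copied-in .wsgi file. All paths are illustrative.
WSGIDaemonProcess nova-api processes=2 threads=4 user=stack
WSGIScriptAlias /compute /usr/local/bin/nova-api

<Location /compute>
    WSGIProcessGroup nova-api
    Require all granted
</Location>
```

The service catalog entry for compute would then point at http://localhost/compute rather than a port-8774 endpoint.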

I think that we need to get these things sorted before any further
progression here. Volunteers welcomed to help get us there.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack] Javascript Linting

2015-06-16 Thread Michael Krotscheck
Just for the sake of clarity- did the Horizon team discuss the tool
selection for JSCS with the greater community? I can't find anything on the
dev list. Furthermore, there've been situations before (karma) where a
patch was landed without appropriate upstream notifications and/or
discussion, which then resulted in a lot of unnecessary work.

Horizon isn't the only UI project anymore. While it's certainly the
elephant in the room, that doesn't mean its decisions shouldn't be subject
to scrutiny.

Michael

On Tue, Jun 16, 2015 at 12:44 AM Rob Cresswell (rcresswe) 
rcres...@cisco.com wrote:

  So my view here is that I don’t particularly mind which plugin/ set of
 plugins Horizon uses, but the biggest deterrent is the workload. We’re
 already cleaning everything up quite productively, so I’m reluctant to
 swap. That said, the cleanup from JSCS/ JSHint should be largely relevant
 to ESLint. Michael, do you have any ideas on the numbers/ workload behind a
 possible swap?

  With regards to licensing, does this mean we must stop using JSHint, or
 that we’re still okay to use it as a dev tool? Seems that if the former is
 the case, then the decision is made for us.

  Rob



   From: Michael Krotscheck krotsch...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, 16 June 2015 00:36
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [javascript] [horizon] [merlin] [refstack]
 Javascript Linting

   I'm restarting this thread with a different subject line to get a
 broader audience. Here's the original thread:
 http://lists.openstack.org/pipermail/openstack-dev/2015-June/066040.html

 The question at hand is: what will be OpenStack's JavaScript equivalent
 of flake8? I'm going to consider the need for common formatting rules to
 be self-evident. Here's the lay of the land so far:

- Horizon currently uses JSCS.
- Refstack uses Eslint.
- Merlin doesn't use anything.
- StoryBoard (deprecated) uses eslint.
- Nobody agrees on rules.

  *JSCS*
  JSCS Stands for JavaScript CodeStyle. Its mission is to enforce a
 style guide, yet it does not check for potential bugs, variable overrides,
 etc. For those tests, the team usually defers to (preferred) JSHint, or
 ESLint.

  *JSHint*
Ever since JSCS was extracted from JSHint, it has actively removed rules
that enforce code style, and focused on findbug-style tests instead. JSHint
still contains the "Do no evil" license, and is therefore not an option for
OpenStack; it has been disqualified.

  *ESLint*
 ESLint's original mission was to be an OSI compliant replacement for
 JSHint, before the JSCS split. It wants to be a one-tool solution.

  My personal opinion/recommendation: Based on the above, I recommend we
 use ESLint. My reasoning: It's one tool, it's extensible, it does both
 codestyle things and bug finding things, and it has a good license. JSHint
 is disqualified because of the license. JSCS is disqualified because it is
 too focused, and only partially useful on its own.

  I understand that this will mean some work by the Horizon team to bring
 their code in line with a new parser, however I personally consider this to
 be a good thing. If the code is good to begin with, it shouldn't be that
 difficult.

  This thread is not there to argue about which rules to enforce. Right
 now I just want to nail down a tool, so that we can (afterwards) have a
 discussion about which rules to activate.

  Michael
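As an illustration of the one-tool idea, a minimal .eslintrc could combine style and bug-finding rules in one file (the specific rule choices below are hypothetical, not a proposal):

```json
{
  "env": {"browser": true},
  "globals": {"angular": false},
  "rules": {
    "camelcase": 2,
    "quotes": [2, "single"],
    "eqeqeq": 2,
    "no-undef": 2,
    "no-unused-vars": 2
  }
}
```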



[openstack-dev] [Fuel] Improvement of the blueprint specs template

2015-06-16 Thread Roman Prykhodchenko
Hi folks!

I was reviewing one of the specs for Fuel 7.0 and realized the information there is 
messed up and it’s pretty hard to put it all together. The reason for that is 
basically that Fuel is a multi-component project but the template does not 
consider that: there is a Proposed change section which is used to define all 
the changes in the entire project; then there are the API and Data impact 
sections that are specific only to certain components but still have to be 
filled in.

Since most new features involve changes to several components, I propose to 
stick to the following structure. It eliminates the need to create several 
specs to describe one feature and allows organizing everything in one document 
without messing it up:

-- Title
-- Excerpt (short version of the Problem description, proposed solution and final results)
-- Problem description
-- Proposed changes
   -- Web UI
   -- Nailgun
      -- General
      -- REST API
      -- Data model
   -- Astute
      -- General
      -- RPC protocol
   -- Fuel Client
   -- Plugins
-- Impact
   -- End-user
   -- QA
   -- Developer
   -- Infrastructure (operations)
   -- Upgrade
   -- Performance
-- Implementation
   -- Assignee
   -- Work items
      -- Web UI
      -- Nailgun
      -- Astute
      -- Fuel Client
      -- Plugins
-- Documentation
-- References


- romcheg







Re: [openstack-dev] [puppet] weekly meeting #38

2015-06-16 Thread Emilien Macchi


On 06/15/2015 08:06 PM, Emilien Macchi wrote:
 Hi everyone,
 
 Here's an initial agenda for our weekly meeting tomorrow at 1500 UTC in
 #openstack-meeting-4:
 
 https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150616
 
 Please add additional items you'd like to discuss.

The meeting was short but productive, you can read the notes here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-06-16-15.00.html

Have a nice day,

 
 
 

-- 
Emilien Macchi





[openstack-dev] [Ironic] ironic-lib library

2015-06-16 Thread Ruby Loo
Hi,

I haven't paid any attention to ironic-lib; I just knew that we wanted to
have a library of common code so that we didn't cut/paste. I just took a
look[1] and there are files there from 2 months ago. So far, everything is
under ironic_lib (ie, no subdirectories to group things). Going forward,
are there guidelines as to where/what goes into this library?


I think it would be good to note down the process wrt using this library.
I'm guessing that having this library will most certainly delay things wrt
development. Changes will need to be made to the library first, then need
to wait until a new version is released, then possibly update the min
version in global-requirements, then use (and profit) in ironic-related
projects.


With the code in ironic, we were able to do things like change the
arguments to methods etc. With the library -- do we need to worry about
backwards compatibility?


How frequently were we thinking of releasing a new version? (Depends on
whether anything was changed there that is needed really soon?)


Anything else that we should keep in mind when making changes to the
library?

--ruby

[1] https://github.com/openstack/ironic-lib


[openstack-dev] [neutron][db] online schema upgrades

2015-06-16 Thread Ihar Hrachyshka

Hi neutron folks,

I'd like to discuss a plan on getting support for online db schema
upgrades in neutron.

*What is it even about?*

Currently, any major version upgrade, or master-to-master upgrade,
requires neutron-server shutdown. After shutdown, operators apply db
migration rules to their database (if any), and when it's complete,
are able to start their neutron-server service(s).

It has several drawbacks:
- while db is upgraded, API endpoints are not available (user-visible
out-of-service period);
- db upgrade may take a significant time, and the out-of-service
period can become quite long.

For rolling master-based environments, it's especially painful, since
you get the scheduled offline time more often than once per 6 months.
(Though even once per 6 months is not ideal.)

*Proposal*

Make neutron-server resilient to under-the-hood db schema changes.

How can we achieve this? There are multiple things to touch both code-
and culture-wise:
- if we want old neutron-server to continue working with db that is
potentially upgraded to a newer schema, it means that we should stop
applying non-additive changes to schema in migration rules. (Note that
we still have a way to collect fossils once they are unused, e.g.
during the next cycle).
- we should stop applying live data changes to database as part of
migration rules. The only changes that should be allowed should touch
schema but not insert/update/delete actual records. (I know neutron is
especially guilty of it in the past, but I believe we can stop doing it.)
- instead of migrating data with alembic rules, migrate it in runtime.
There should be an abstraction layer that will make sure that data is
migrated into new schema fields and objects, while preserving data
originally stored in 'old' schema elements.
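The runtime-migration idea can be sketched in miniature; this is purely illustrative (the class, field names, and fallback logic are invented for the example, not actual neutron code or the oslo.versionedobjects API):

```python
# Illustrative only: a runtime data-migration shim, not neutron code.
class RouterRecord:
    """Simulates a row that carries both an old and a new schema field.

    The 'old' schema stored gateway info in a flat column (gw_info);
    the 'new' schema adds a structured column (gw_port_id). The shim
    reads the new field when populated, falls back to the old one, and
    writes through to both so old servers still find the data where
    they expect it.
    """

    def __init__(self, gw_info=None, gw_port_id=None):
        self.gw_info = gw_info        # 'old' schema element
        self.gw_port_id = gw_port_id  # 'new' schema element

    @property
    def gateway(self):
        # Prefer the new field; fall back to data not yet migrated.
        if self.gw_port_id is not None:
            return self.gw_port_id
        return self.gw_info

    @gateway.setter
    def gateway(self, value):
        # Write through to both fields: new servers migrate data
        # gradually while old servers keep working.
        self.gw_port_id = value
        self.gw_info = value


# A record created before the upgrade only has the old field set.
legacy = RouterRecord(gw_info="port-1234")
assert legacy.gateway == "port-1234"

# Touching it through the shim migrates the data into the new field.
legacy.gateway = legacy.gateway
assert legacy.gw_port_id == "port-1234"
```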

That would allow old neutron-server code to run against new schema (it
will just ignore new additions); and new neutron-server code to
gradually migrate data into new columns/fields/tables while serving
users.

Note that all neutron-server instances are still expected to restart
at the same time. There should be no neutron-servers of different
versions running, otherwise older instances will undo migration work
applied by new ones, and it may result in data loss, db conflicts, and
general hell-raising. We may think of how to support iterative controller
restart without any downtime, but that's out of scope of this proposal.

*Isn't it too crazy?*

Not really. Other projects achieved this already. Specifically, Nova
does it since Liberty. Heat, Cinder are considering it now.

Nova needed to stop doing data migrations or non-additive changes to
schema in Kilo already. It suggests that the nearest possible time we
get actual online migration in neutron is M; that's assuming we adopt
stricter rules for migrations *now*, before anything incompatible is
merged in Liberty.

Also note that I haven't checked *aas migration rules yet: if there
are incompatible changes there, it means that for setups that rely on
those services, online migrations will become reality in Nausea only.

Since neutron joins the game late, we are in a better position than nova
was, since a lot of tooling and practices are already implemented.
Specifically, I mean oslo.versionedobjects that would serve as an
abstraction object middleware in between db and the rest of neutron.

*The plan for Liberty*

We can't technically achieve online migrations in Liberty, for reasons
stated above. It does not mean that we have nothing to do this cycle
though.

We should prepare ourselves doing the following:
- adopt stricter rules for migrations;
- adopt oslo.versionedobjects to represent neutron resources. (It will
buy us more benefits, like an object interface instead of passing dicts
around; clear versioning on the RPC side of things; potentially, assuming
we apply corresponding practices, transparent remote calls to
controller from agent side using the same objects defined on
neutron-server side).

===

So, keeping in mind that there can be concerns or conflicts with
existing efforts (f.e. plugin decomp part 2) that I don't fully
realize, or maybe some architectural issues that would not allow us to
start on the road just now, I'd like to hear from others on whether
the strict rules even make sense in context of neutron.

Of course, I especially look forward to hear from our db gods: Henry,
Ann, and others.

Ihar


Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-16 Thread Steven Dake (stdake)
It's unanimous!  Welcome to the core reviewer team Harm!

Regards
-steve


From: Steven Dake std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Sunday, June 14, 2015 at 10:48 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

Hey folks,

I am proposing Harm Waites for the Kolla core team.  He did a fantastic job 
implementing Designate in a container[1] which I’m sure was incredibly 
difficult and never gave up even though there were 13 separate patch reviews :) 
 Beyond Harm’s code contributions, he is responsible for 32% of the 
“independent” reviews[1] where independents compose 20% of our total reviewer 
output.  I think we should judge core reviewers on more then output, and I knew 
Harm was core reviewer material with his fantastic review of the cinder 
container where he picked out 26 specific things that could be broken that 
other core reviewers may have missed ;) [3].  His other reviews are also as 
thorough as this particular review was.  Harm is active in IRC and in our 
meetings for which his TZ fits.  Finally Harm has agreed to contribute to the 
ansible-multi implementation that we will finish in the liberty-2 cycle.

Consider my proposal to count as one +1 vote.

Any Kolla core is free to vote +1, abstain, or vote –1.  A –1 vote is a veto 
for the candidate, so if you are on the fence, best to abstain :)  Since our 
core team has grown a bit, I’d like 3 core reviewer +1 votes this time around 
(vs Sam’s 2 core reviewer votes).  I will leave the voting open until June 21 
 UTC.  If the vote is unanimous prior to that time or a veto vote is 
received, I’ll close voting and make appropriate adjustments to the gerrit 
groups.

Regards
-steve

[1] https://review.openstack.org/#/c/182799/
[2] 
http://stackalytics.com/?project_type=allmodule=kollacompany=%2aindependent
[3] https://review.openstack.org/#/c/170965/


Re: [openstack-dev] [ceilometer] When do we import aodh?

2015-06-16 Thread Julien Danjou
On Tue, Jun 16 2015, Chris Dent wrote:

 5. anything in tempest to worry about?

Yes, we need to adapt and reenable tempest after.

 6. what's that stuff in the ceilometer dir?
6.1. Looks like migration artifacts, what about migration in
 general?

That's a leftover from one of the many rebases I've made during these
last weeks; I just fixed it.

I removed all the migrations, as we should start fresh on Alembic.

 7. removing all the rest of the cruft (whatever it might be)

In Ceilometer you mean?

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




[openstack-dev] [puppet] (officially) deprecate stable/{grizzly, havana} branches

2015-06-16 Thread Emilien Macchi
Hi,

Some of our modules have stable/grizzly and stable/havana branches. Some
of them have the CI broken due to rspec issues that would require some
investigation and time if we wanted to fix it.

We would like to know: who plans to backport patches to these branches?

If nobody plans to do that, we will leave the branches as they are now
but won't officially support them.

By "support" I mean keeping the CI jobs green (rspec, syntax, etc.),
fixing bugs and adding new features.

Any feedback is welcome!

Regards,
-- 
Emilien Macchi





[openstack-dev] [oslo][all] Request from Oslo team for Liberty Cycle

2015-06-16 Thread Davanum Srinivas
Hello fellow stackers,

The Oslo team came up with a handful of requests to the projects that
use Oslo-*. Here they are:

0. Check if your project has a Oslo Liaison

Please see https://wiki.openstack.org/wiki/CrossProjectLiaisons#Oslo
and volunteer for your project. We meet once a week to go over specs,
issues with releases, etc. If you can't attend the meetings, review
the logs and send questions/feedback to the -dev mailing list or hop
onto #openstack-oslo channel.

If you filter the -dev mailing list, include the [oslo] topic in
your whitelist to ensure you see team announcements.

1. Update files from oslo-incubator

Check what files you have listed in [my_project]/openstack-common.conf
and under [my_project]/openstack/common/* tree. You can run the
update.py script in oslo-incubator
(https://github.com/openstack/oslo-incubator/blob/master/update.py) to
refresh the files in your project. You may see that some of the files
have already graduated into a library, in which case you will need to
switch to the library.

2. Use oslo.context with oslo.log

Several projects still have a custom RequestContext. For oslo.log to
log the details stored in the RequestContext, you will need to extend
your custom RequestContext from the one in oslo.context. See example
in Nova - https://github.com/openstack/nova/blob/master/nova/context.py
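The subclassing pattern can be sketched as follows; note that BaseContext below is a stand-in for oslo_context.context.RequestContext (which real projects would import), and the field names are illustrative rather than the exact oslo.context API:

```python
# BaseContext stands in for oslo_context.context.RequestContext.
class BaseContext:
    def __init__(self, user=None, tenant=None, request_id=None):
        self.user = user
        self.tenant = tenant
        self.request_id = request_id or "req-generated"

    def to_dict(self):
        # These are the fields a logging formatter would pull out.
        return {"user": self.user,
                "tenant": self.tenant,
                "request_id": self.request_id}


class MyServiceContext(BaseContext):
    """Custom context that keeps project-specific fields while still
    exposing the base fields the log formatter knows about."""

    def __init__(self, user=None, tenant=None, request_id=None,
                 is_admin=False, **kwargs):
        super().__init__(user=user, tenant=tenant, request_id=request_id)
        self.is_admin = is_admin

    def to_dict(self):
        d = super().to_dict()   # base fields stay intact for logging
        d["is_admin"] = self.is_admin
        return d


ctxt = MyServiceContext(user="alice", tenant="demo", is_admin=True)
assert ctxt.to_dict()["user"] == "alice"
assert ctxt.to_dict()["is_admin"] is True
```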

3. Switch to oslo-config-generator

The discovery mechanism in the old style generator.py is fragile and
hence we have replaced it with a better (at least in our eyes!)
solution. Please see
http://specs.openstack.org/openstack/oslo-specs/specs/juno/oslo-config-generator.html.
This will help generate configuration files for different services
with different content/options as well.
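For reference, the generator is driven by a small input file along these lines (the service name and namespace list here are placeholders, not any particular project's file):

```ini
# etc/oslo-config-generator/myservice.conf (hypothetical service name)
[DEFAULT]
output_file = etc/myservice/myservice.conf.sample
wrap_width = 79
namespace = myservice
namespace = oslo.log
namespace = oslo.messaging
```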

4. Review new libraries to be added in Liberty and older ones from Kilo

Please see the specs we have for Liberty -
http://specs.openstack.org/openstack/oslo-specs/ We have a handful of
new libraries from existing oslo-incubator code as well as some brand
new ones like futurist and automaton that are not oslo specific and
very useful (Don't forget Debtcollector, tooz, taskflow from Kilo).
Projects like oslo.versionedobjects are getting a lot of traction as
well. So please review what's useful to your project and let us know
if you need more information.

Thanks,
The Oslo Team

-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [neutron][db] online schema upgrades

2015-06-16 Thread Mike Bayer



On 6/16/15 11:41 AM, Ihar Hrachyshka wrote:


- instead of migrating data with alembic rules, migrate it in runtime.
There should be an abstraction layer that will make sure that data is
migrated into new schema fields and objects, while preserving data
originally stored in 'old' schema elements.

That would allow old neutron-server code to run against new schema (it
will just ignore new additions); and new neutron-server code to
gradually migrate data into new columns/fields/tables while serving
users.

Hi Ihar -

I was in the middle of writing a spec for neutron online schema 
migrations, which maintains expand / contract workflow but also 
maintains Alembic migration scripts.   As I've stated many times in the 
past, there is no reason to abandon migration scripts, while there are 
many issues related to abandoning the notion of the database in a 
specific versioned state as well as the ability to script any migrations 
whatsoever.   The spec amends Nova's approach and includes upstream 
changes to Alembic such that both approaches can be supported using the 
same codebase.


- mike





Re: [openstack-dev] [Ironic] Adopting ironic-lib in Ironic

2015-06-16 Thread Ruby Loo
On 16 June 2015 at 03:12, Dmitry Tantsur dtant...@redhat.com wrote:

 On 06/16/2015 08:58 AM, Ramakrishnan G wrote:


 Hi All,

 Some time back we created a new repository[1] to move all the reusable
 code components of Ironic to a separate library.  The branched out code
 has changed and there has been a review out to sync it [2].  But
 unfortunately, it has got stale again as some more changes have gone in
 to the branched out code.  To avoid repeated efforts of such syncing, I
 suggest we sync the latest code from Ironic to ironic-lib (in
 appropriate files) and immediately change Ironic to start using it.

 I suggest we can do the following:
 1) Decide on a timeline for the change (1 or 2 days)


 Now is a good time, IMO, I don't think we're in pressing need to change
 this code.

  2) Stop +Aing changes in Ironic to the files/code being moved to
 ironic-lib
 3) Sync the latest code in ironic-lib and merge it
 4) Make a new release of ironic-lib
 5) Make changes in Ironic to use ironic-lib and make sure gate is back
 up and running again (I can't think of anything that will break gate on
 switching to ironic-lib as it's just a pip install)


 Note that this will need adding ironic-lib to global-requirements, which
 will take time, unless you grab a couple of g-r cores to do it asap.

  6) Make new reviews in ironic-lib for any pending reviews in Ironic

 If we come to an agreement on #1 and #2 above, Syed Ismail Faizan
 Barmawer can continue to work on #3 - #5

 Let me know if it will work out or if there are any better plans (or I
 am missing something)


 Otherwise plan LGTM


 Thanks.

 [1] https://github.com/openstack/ironic-lib
 [2] https://review.openstack.org/#/c/162162/

 Regards,
 Ramesh


If we haven't yet released a version of ironic-lib, I suggest taking a more
conservative (but more work) approach:
0.1. sync the latest code in ironic-lib (this is optional)
0.2. make a first release of ironic-lib
0.3. add ironic-lib to global-requirements

Then the steps you suggested Ramesh. (Changes need to be made to IPA too?
Not sure what code is being copied.)

Hopefully that will get any kinks out of the process, and will give us an
idea of how long that process might take. (Eg, there are only certain
people that can do releases, and if we can get things set up in
global-requirements sooner rather than later, that is one less thing to
do).

--ruby


Re: [openstack-dev] [all][requirements] Proposing a slight change in requirements.txt syncing output.

2015-06-16 Thread Joshua Harlow

Doug Hellmann wrote:

Excerpts from Robert Collins's message of 2015-06-16 11:18:55 +1200:

At the moment we copy the global-requirements lines verbatim.

So if we have two lines in global-requirements.txt:
oslotest>=1.5.1  # Apache-2.0
PyECLib>=1.0.7  # BSD
with very different layouts


Most of the inline comments for packages are license indicators that we
started collecting a while back at someone's request. Are we actually
using those? If not, maybe we should clean up that file and reserve
inline comments for something we do actually care about?


Or if it really matters run 
https://github.com/openstack/requirements/blob/master/detail.py (which I 
submitted a while ago) on the requirements file/s and write out all the 
detailed information to a json file (stdout from a run of this @ 
http://paste.openstack.org/show/295537/ with large output detailing 
author information, license information ... @ 
http://paste.openstack.org/show/295538/). Keeping all the license + 
detailed info out of the main file seems to make sense to me...
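For what it's worth, splitting the inline license comment off a requirement line is close to a one-liner; this sketch is illustrative and not the actual detail.py logic:

```python
def split_requirement(line):
    """Split a global-requirements line into (specifier, license comment).

    Returns the bare specifier and the comment text, or None when the
    line carries no inline comment.
    """
    spec, sep, comment = line.partition('#')
    return spec.strip(), (comment.strip() if sep else None)


assert split_requirement("oslotest>=1.5.1  # Apache-2.0") == \
    ("oslotest>=1.5.1", "Apache-2.0")
assert split_requirement("PyECLib>=1.0.7  # BSD") == \
    ("PyECLib>=1.0.7", "BSD")
assert split_requirement("requests>=2.5.2") == ("requests>=2.5.2", None)
```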




Doug



Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Randall Burt
+1 Murali. AFAIK, there is no precedent for what Keith proposes, but that 
doesn't mean it's a bad thing.

On Jun 16, 2015, at 12:21 AM, Murali Allada murali.all...@rackspace.com wrote:

 I agree, users should have a mechanism to keep logs around.
 
 I implemented the logs deletion feature after we got a bunch of requests from 
 users to delete logs once they delete an app, so they don't get charged for 
 storage once the app is deleted.
 
 My implementation deletes the logs by default and I think that is the right 
 behavior. Based on user requests, that is exactly what they were asking for. 
 I'm planning to add a --keep-logs flag in a follow-up patch. The command will 
 look as follows:
 
 Solum delete app MyApp --keep-logs
 
 -Murali
 
 
 
 
 
 On Jun 15, 2015, at 11:19 PM, Keith Bray keith.b...@rackspace.com wrote:
 
 Regardless of what the API defaults to, could we have the CLI prompt/warn so 
 that the user easily knows that both options exist?  Is there a precedent 
 within OpenStack for a similar situation?
 
 E.g. 
  solum app delete MyApp
  Do you want to also delete your logs? (default is Yes):  [YES/no]
   NOTE, if you choose No, application logs will remain on your 
 account. Depending on your service provider, you may incur on-going storage 
 charges.  
 
 Thanks,
 -Keith
 
 From: Devdatta Kulkarni devdatta.kulka...@rackspace.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, June 15, 2015 9:56 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete 
 an app?
 
 Yes, the log deletion should be optional.
 
 
 The question is what should be the default behavior. Should the default be 
 to delete the logs and provide a flag to keep them, or keep the logs by 
 default and provide a override flag to delete them?
 
 Delete-by-default is consistent with the view that when an app is deleted, 
 all its artifacts are deleted (the app's meta data, the deployment units 
 (DUs), and the logs). This behavior is also useful in our current state when 
 the app resource and the CLI are in flux. For now, without a way to specify 
 a flag, either to delete the logs or to keep them, delete-by-default 
 behavior helps us clean all the log files from the application's cloud files 
 container when an app is deleted.
 
 This is very useful for our CI jobs. Without this, we end up with lots of 
 log files in the application's container, and have to resort to separate 
 scripts to clean them up after an app is deleted.
 
 
 Once the app resource and CLI stabilize it should be straightforward to 
 change the default behavior if required.
 
 - Devdatta
 
 From: Adrian Otto adrian.o...@rackspace.com
 Sent: Friday, June 12, 2015 6:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Should logs be deleted when we delete an 
 app?
  
 Team,
 
 We currently delete logs for an app when we delete the app[1]. 
 
 https://bugs.launchpad.net/solum/+bug/1463986
 
 Perhaps there should be an optional setting at the tenant level that 
 determines whether your logs are deleted or not by default (set to off 
 initially), and an optional parameter to our DELETE calls that allows for 
 the opposite action from the default to be specified if the user wants to 
 override it at the time of the deletion. Thoughts?
 
 Thanks,
 
 Adrian




Re: [openstack-dev] [Ironic] ironic-lib library

2015-06-16 Thread Lucas Alvares Gomes
Hi,

 I haven't paid any attention to ironic-lib; I just knew that we wanted to
 have a library of common code so that we didn't cut/paste. I just took a
 look[1] and there are files there from 2 months ago. So far, everything is
 under ironic_lib (ie, no subdirectories to group things). Going forward, are
 there guidelines as to where/what goes into this library?

I don't think we have guidelines for the structure of the project; we
should of course try to organize it well.

About what goes into this library, AFAICT, this is the place where code
which is used in more than one project under the Ironic umbrella
should go. For example, both Ironic and IPA (ironic-python-agent)
deal with disk partitioning, so we should create a module for disk
partitioning in the ironic-lib repository which both Ironic and IPA
will import and use.


 I think it would be good to note down the process wrt using this library.
 I'm guessing that having this library will most certainly delay things wrt
 development. Changes will need to be made to the library first, then need to
 wait until a new version is released, then possibly update the min version
 in global-requirements, then use (and profit) in ironic-related projects.


 With the code in ironic, we were able to do things like change the arguments
 to methods etc. With the library -- do we need to worry about backwards
 compatibility?

I would say so; those are things that we have to take into account when
creating a shared library. But it also brings benefits:

1. Code sharing
2. Bugs are fixed in one place only
3. Flexibility; I believe that more projects using the same code will
require it to be more flexible

 How frequently were we thinking of releasing a new version? (Depends on
 whether anything was changed there that is needed really soon?)

Yes, just like the python-ironicclient a release can be cut when needed.

Thanks for starting this thread; it would be good for the community
to evaluate whether we should go forward with ironic-lib or not.

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-16 Thread Paul Belanger

On 06/16/2015 12:41 PM, Allison Randal wrote:

On 06/15/2015 01:43 PM, Paul Belanger wrote:

While I agree those points are valid, and going to be helpful, moving
under OpenStack (even Stackforge) does also offer the chance to get more
test integration upstream (not saying this was the original scope).
However, this could also be achieved by 3rd party integration too.


Nod, 3rd party integration is worth exploring.


I'm still driving forward with some -infra specific packaging for Debian
/ Fedora ATM (zuul packaging). Mostly because of -infra needs for
packages. Not saying that is a reason to reconsider, but there is the
need for -infra to consume packages from upstream.


I suspect that, at least initially, the needs of -infra specific
packaging will be quite different than the needs of general-purpose
packaging in Debian/Fedora distros. Trying to tightly couple the two
will just bog you down in trying to solve far too many problems for far
too many people. But, I also suspect that -infra packaging will be quite
minimal and intended for the services to be configured by puppet, so
there's a very good chance that if you sprint ahead and just do it, your
style of packaging will end up feeding back into future packaging in the
distros.

My thoughts exactly. I believe by the next summit, we should have a base 
in -infra for producing packages (unsure about consuming ATM). 
Interesting times ahead.



Allison



Re: [openstack-dev] CLI problem

2015-06-16 Thread Steve Martinelli
What was the command you used? What was the output? Can you try running it 
with --debug? More information is needed here.

It would also probably be quicker to jump on IRC and ask around.

Thanks,

Steve Martinelli
OpenStack Keystone Core

Ali Reza Zamani alireza.zam...@cs.rutgers.edu wrote on 06/16/2015 
12:46:16 PM:

 From: Ali Reza Zamani alireza.zam...@cs.rutgers.edu
 To: openstack-dev@lists.openstack.org
 Date: 06/16/2015 12:47 PM
 Subject: [openstack-dev] CLI problem
 
 Hi all,
 
 I have a problem in creating the instances. When I create the instances
 using GUI web interface everything is fine. But when I do it using CLI
 after spawning it says Error.
 And the error is: ne
 
 


Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian

2015-06-16 Thread Pete Zaitcev
On Thu, 11 Jun 2015 11:08:55 +0300
Duncan Thomas duncan.tho...@gmail.com wrote:

 There's only one cinder driver using it (Nimble Storage), and it seems to
 be using only very basic features. There are half a dozen suds forks on
 PyPI, or there's pysimplesoap, which the Debian maintainer recommends. None
 of the above are currently packaged for Ubuntu that I can see, so can
 anybody in-the-know make a reasoned recommendation as to what to move to?

In instances I had to deal with (talking to VMware), it was easier and
better to roll-your-own with python-xml and libhttp.

-- P



Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-06-16 Thread Terry Wilson
 Right now I'm leaning toward parent always does nothing + PluginWorker.
 Everything is forked, no special case for workers==0, and explicit
 designation of the only one case. Of course, it's still early in the day
 and I haven't had any coffee.

I have updated the patch (https://review.openstack.org/#/c/189391/) to 
implement the above. I have it marked WIP because it doesn't have any tests and 
it modifies ServicePluginBase to have a call to get_processes(), but almost no 
service plugins actually inherit from it even though they implement its 
interface. The get_processes stuff in general could be fleshed out a bit as 
well. I just wanted to get something up for the purposes of discussion, so 
anyone interested in this particular problem should take a look and discuss. :)

Terry



[openstack-dev] CLI problem

2015-06-16 Thread Ali Reza Zamani
Hi all,

I have a problem in creating the instances. When I create the instances
using GUI web interface everything is fine. But when I do it using CLI
after spawning it says Error.
And the error is: ne



Re: [openstack-dev] [puppet] (officially) deprecate stable/{grizzly, havana} branches

2015-06-16 Thread Matt Fischer
+1 from me for deprecation.

I'd also like to know or have an official policy for future deprecations,
such as when will we deprecate Icehouse?

On Tue, Jun 16, 2015 at 9:50 AM, Emilien Macchi emil...@redhat.com wrote:

 Hi,

 Some of our modules have stable/grizzly and stable/havana branches. Some
 of them have the CI broken due to rspec issues that would require some
 investigation and time if we wanted to fix it.

 We would like to know: who plans to backport patches to these branches?

 If nobody plans to do that, we will leave the branches as they are now but
 won't officially support them.

 By support I mean maintaining the CI jobs green (rspec, syntax, etc),
 fixing bugs and adding new features.

 Any feedback is welcome!

 Regards,
 --
 Emilien Macchi






Re: [openstack-dev] [Glance] [all] Proposal for Weekly Glance Drivers meeting.

2015-06-16 Thread Nikhil Komawar
FYI, We will be closing the vote on Friday, June 19 at 1700 UTC.

On 6/15/15 7:41 PM, Nikhil Komawar wrote:
 Hi,

 As per the discussion during the last weekly Glance meeting (14:51:42 at
 http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-06-11-14.00.log.html
 ), we will begin a short drivers' meeting where anyone can come and get
 more feedback.

 The purpose is to enable those who need multiple drivers in the same
 place; easily co-ordinate, schedule & collaborate on the specs, get
 core-reviewers assigned to their specs etc. This will also enable more
 synchronous style feedback, help with more collaboration as well as with
 dedicated time for giving quality input on the specs. All are welcome to
 attend and attendance from drivers is not mandatory but encouraged.
 Initially it would be a 30-minute meeting, and if the need persists we will
 extend the period.

 Please vote on the proposed time and date:
 https://review.openstack.org/#/c/192008/ (Note: Run the tests for your
 vote to ensure we are considering feasible & non-conflicting times.) We
 will start the meeting next week unless there are strong conflicts.


-- 

Thanks,
Nikhil




Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-06-16 Thread Dmitri Zimine
+1 great write-up Winson,

I propose we move the discussion to an etherpad, and flesh out details there so 
it won’t get lost in a long thread. 
Winson would you care to create one and post here? 

Re: ‘error state’: I think it’s not absolutely necessary: pause/resume can be 
done without enabling the ‘error’ → ‘running’ transition, 
by making the default task policy `on-error: pause`, so that if the user chooses, the 
workflow goes into paused state on errors.
But it may be convenient, so no strong opinion on this yet. 


Re: checkpoints and roll-backs - yes! I see this and pause-resume as complementary. 
To be precise on terminology, workflows don't “roll back” - that is more of a 
transactional term; they “compensate”, by running a ‘compensation workflow’ 
that gets the system back to a checkpoint state. 
At the end of the compensation process the system goes into a “paused” state where it 
can be resumed once the ‘cause of failure’ is fixed. 

DZ. 

On Jun 15, 2015, at 10:25 PM, BORTMAN, Limor (Limor) 
limor.bort...@alcatel-lucent.com wrote:

 +1,
 I just have one question: do we want to be able to resume a WF in error state?
 I mean, that isn't a real resume; it should be more of a rerun, don't you think?
 So in an error state we would create a new execution and just re-run it.
 Thanks Limor
 
 
 
 -Original Message-
 From: Lingxian Kong [mailto:anlin.k...@gmail.com] 
 Sent: Tuesday, June 16, 2015 5:47 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Mistral] Proposal for the Resume Feature
 
 Thanks Winson for the write-up, very detailed information. (The format was 
 good.)
 
 I'm totally in favor of your idea; actually, I really think your proposal is 
 complementary to my proposal in 
 https://etherpad.openstack.org/p/vancouver-2015-design-summit-mistral,
 please see the 'Workflow rollback/recovery' section.
 
 What I want to do is configure some 'checkpoints' throughout the workflow; 
 if some task failed, we could roll back the execution to some checkpoint, and 
 resume the whole workflow after we have fixed the problem, as if the 
 execution had never failed.
 
 It's just an initial idea; I'm waiting for our discussion to see if it really 
 makes sense to users and to get feedback; then we can talk about the 
 implementation and cooperation.
 
 On Tue, Jun 16, 2015 at 7:51 AM, W Chan m4d.co...@gmail.com wrote:
 Resending to see if this fixes the formatting for outlines below.
 
 
 I want to continue the discussion on the workflow resume feature.
 
 
 Resuming from our last conversation @
 http://lists.openstack.org/pipermail/openstack-dev/2015-March/060265.html.
 I don't think we should limit how users resume. There may be 
 different possible scenarios. User can fix the environment or 
 condition that led to the failure of the current task and the user 
 wants to just re-run the failed task.  Or user can actually fix the 
 environment/condition which include fixing what the task was doing, 
 then just want to continue the next set of task(s).
 
 
 The following is a list of proposed changes.
 
 
 1. A new CLI operation to resume WF (i.e. mistral workflow-resume).
 
A. If no additional info is provided, assume this WF is manually 
 paused and there are no task/action execution errors. The WF state is 
 updated to RUNNING. Update using the put method @ 
 ExecutionsController. The put method checks that there's no task/action 
 execution errors.
 
B. If WF is in an error state
 
i. To resume from failed task, the workflow-resume command 
 requires the WF execution ID, task name, and/or task input.
 
ii. To resume from failed with-items task
 
a. Re-run the entire task (re-run all items) requires WF
 execution ID, task name and/or task input.
 
b. Re-run a single item requires WF execution ID, task 
 name, with-items index, and/or task input for the item.
 
c. Re-run selected items requires WF execution ID, task 
 name, with-items indices, and/or task input for each items.
 
- To resume from the next task(s), the workflow-resume 
 command requires the WF execution ID, failed task name, output for the 
 failed task, and a flag to skip the failed task.
 
 
 2. Make ERROR - RUNNING as valid state transition @ 
 is_valid_transition function.
 
 
 3. Add a comments field to Execution model. Add a note that indicates 
 the execution is launched by workflow-resume. Auto-populated in this case.
 
 
 4. Resume from failed task.
 
   A. Re-run task with the same task inputs - POST a new action 
 execution for the task execution @ ActionExecutionsController
 
   B. Re-run task with different task inputs - POST a new action 
 execution for the task execution, allowing different input @ 
 ActionExecutionsController
 
 
 5. Resume from next task(s).
 
A. Inject a noop task execution or noop action execution 
 (undecided yet) for the failed task with appropriate output.  The spec 
 is an adhoc spec that 
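The state-machine change proposed in item 2 — allowing ERROR → RUNNING — amounts to adding one edge to a transition table. A minimal sketch of the idea; the table below is illustrative, not Mistral's actual state machine:

```python
# Illustrative transition table; only the ERROR -> RUNNING edge
# discussed above is the point of the sketch.
VALID_TRANSITIONS = {
    'RUNNING': {'PAUSED', 'SUCCESS', 'ERROR'},
    'PAUSED': {'RUNNING'},
    'ERROR': {'RUNNING'},   # the new edge that enables workflow-resume
    'SUCCESS': set(),       # terminal
}

def is_valid_transition(from_state, to_state):
    if from_state == to_state:
        return True  # no-op transitions are harmless
    return to_state in VALID_TRANSITIONS.get(from_state, set())

print(is_valid_transition('ERROR', 'RUNNING'))  # the resume case
```

Resuming from an error then becomes an ordinary state update, with the controller verifying the transition is allowed before accepting it.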

Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-06-16 Thread W Chan
Here's the etherpad link.  I replied to the comments/feedbacks there.
Please feel free to continue the conversation there.
https://etherpad.openstack.org/p/mistral-resume


Re: [openstack-dev] [Manila] Network path between admin network and shares

2015-06-16 Thread Sturdevant, Mark

Yes.  I think this is possible with the HP 3PAR.  I'd have to test more to be 
sure, but if I understand the plan correctly, it'll work.  However, there are 
limited resources for doing this, so it'll only work if resources allow.  I'm 
thinking that the administrator config+startup/setup code would set up admin 
network access and hold those resources to make sure that migration is possible.

I could see a scenario where a backend is usable for shares, but can't spare 
the extra resources to allow migration.  That could be a problem.  I'm not sure 
how/if we'd support that.




From: Rodrigo Barbieri [rodrigo.barbieri2...@gmail.com]
Sent: Thursday, June 11, 2015 1:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Manila] Network path between admin network and shares

Hello all,

There has been a lot of discussion around Share Migration lately. This feature 
has two main code paths:

- Driver Migration: optimized migration of shares from backend A to backend B 
where both backends belong to the same driver vendor. The driver is responsible 
for migrating and just returns a model update dictionary with necessary changes 
to DB entry.

- Generic Migration: This is the universal fallback for migrating a share from 
backend A to backend B, from any vendor to any vendor. In order to do this we 
have the approach where a machine in the admin network mounts both shares 
(source and destination) and copies the files. The problem is that it has been 
unusual so far in Manila's design for a machine in the admin network to access 
shares which are served inside the cloud; a network path must exist for this to 
happen.

I was able to code this change for the generic driver in the Share Migration 
prototype (https://review.openstack.org/#/c/179791/).

We are not sure if all driver vendors are able to accomplish this. We would 
like to ask you to reply to this email if you are not able (or even not sure) 
to create a network path from your backend to the admin network so we can 
better assess the feasibility of this feature.

More information in blueprint: 
https://blueprints.launchpad.net/manila/+spec/share-migration


Regards,
--
Rodrigo Barbieri
Computer Scientist
Federal University of São Carlos
+55 (11) 96889 3412



Re: [openstack-dev] CLI problem

2015-06-16 Thread Ali Reza Zamani
It is weird. I deleted my devstack and redid everything. I am using the 
same command and everything is fine.


Thanks,
Regards,

On 06/16/2015 01:03 PM, Steve Martinelli wrote:
What was the command you used? What was the output? Can you try 
running it with --debug? More information is needed here.


It would also probably be quicker to jump on IRC and ask around.

Thanks,

Steve Martinelli
OpenStack Keystone Core

Ali Reza Zamani alireza.zam...@cs.rutgers.edu wrote on 06/16/2015 
12:46:16 PM:


 From: Ali Reza Zamani alireza.zam...@cs.rutgers.edu
 To: openstack-dev@lists.openstack.org
 Date: 06/16/2015 12:47 PM
 Subject: [openstack-dev] CLI problem

 Hi all,

 I have a problem in creating the instances. When I create the instances
 using GUI web interface everything is fine. But when I do it using CLI
 after spawning it says Error.
 And the error is: ne

 





Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-16 Thread Harm Weites

Thanks guys, both for all the nice words and the acceptance!

harmw

Op 16-06-15 om 16:32 schreef Steven Dake (stdake):

Its unanimous!  Welcome to the core reviewer team Harm!

Regards
-steve


From: Steven Dake std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org

Date: Sunday, June 14, 2015 at 10:48 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] Proposal for new core-reviewer Harm 
Waites


Hey folks,

I am proposing Harm Waites for the Kolla core team.  He did a
fantastic job implementing Designate in a container[1] which I’m
sure was incredibly difficult and never gave up even though there
were 13 separate patch reviews :)  Beyond Harm’s code
contributions, he is responsible for 32% of the “independent”
reviews[2], where independents compose 20% of our total reviewer
output.  I think we should judge core reviewers on more than
output, and I knew Harm was core reviewer material with his
fantastic review of the cinder container where he picked out 26
specific things that could be broken that other core reviewers may
have missed ;) [3].  His other reviews are also as thorough as
this particular review was.  Harm is active in IRC and in our
meetings for which his TZ fits.  Finally Harm has agreed to
contribute to the ansible-multi implementation that we will finish
in the liberty-2 cycle.

Consider my proposal to count as one +1 vote.

Any Kolla core is free to vote +1, abstain, or vote –1.  A –1 vote
is a veto for the candidate, so if you are on the fence, best to
abstain :)  Since our core team has grown a bit, I’d like 3 core
reviewer +1 votes this time around (vs Sam’s 2 core reviewer
votes).  I will leave the voting open until June 21  UTC.  If
the vote is unanimous prior to that time or a veto vote is
received, I’ll close voting and make appropriate adjustments to
the gerrit groups.

Regards
-steve

[1] https://review.openstack.org/#/c/182799/
[2]

http://stackalytics.com/?project_type=all&module=kolla&company=%2aindependent
[3] https://review.openstack.org/#/c/170965/







Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-16 Thread Morgan Fainberg
Long term we want to see Keystone move to http://host/identity. However the 
reason for choosing 5000/35357 for ports was compatibility and avoiding 
breaking horizon. At the time we did the initial change over, sharing the root 
80/443 ports with horizon was more than challenging since horizon needed to 
be based at /. 

If that issue/assumption for horizon is no longer present, moving keystone to 
be on port 80/443 would be doable. The last factor is that keystone's location 
was a priori knowledge for discovering other services. As long as we update docs 
(possibly serve a 302 from the alternate ports in devstack for a cycle) I think we're 
good to make the change. 

--Morgan

Sent via mobile

 On Jun 16, 2015, at 09:25, Sean Dague s...@dague.net wrote:
 
 I was just looking at the patches that put Nova under apache wsgi for
 the API, and there are a few things that I think are going in the wrong
 direction. Largely I think because they were copied from the
 lib/keystone code, which we've learned is kind of the wrong direction.
 
 The first is the fact that a big reason for putting {SERVICES} under
 apache wsgi is we aren't running on a ton of weird unregistered ports.
 We're running on 80 and 443 (when appropriate). In order to do this we
 really need to namespace the API urls. Which means that service catalog
 needs to be updated appropriately.
 
 I'd expect nova to be running on http://localhost/compute not
 http://localhost:8774 when running under wsgi. That's going to probably
 interestingly break a lot of weird assumptions by different projects,
 but that's part of the reason for doing this exercise. Things should be
 using the service catalog, and when they aren't, we need to figure it out.
 
 (Exceptions can be made for third party APIs that don't work this way,
 like the metadata server).
 
 I also think this -
 https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
 is completely wrong.
 
 The Apache configs should instead specify access rules such that the
 installed console entry point of nova-api can be used in place as the
 WSGIScript.
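A sketch of what such an Apache config could look like — the alias, the daemon-process settings, and the entry-point path are illustrative assumptions, not the actual devstack change:

```apache
# Illustrative only: serve the nova API under a namespaced URL,
# pointing WSGIScriptAlias at the installed console entry point
# (/usr/local/bin/nova-api is an assumed install location).
WSGIDaemonProcess nova-api processes=2 threads=4
WSGIScriptAlias /compute /usr/local/bin/nova-api

<Location /compute>
    WSGIProcessGroup nova-api
    Require all granted
</Location>
```

With this shape there is nothing to copy into a hard-coded wsgi directory, which is what makes upgrades friendlier.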
 
 This should also make lines like -
 https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
 L274 unneeded. (The WSGI script will be in a known place.) It will also
 make upgrades much more friendly.
 
 I think that we need to get these things sorted before any further
 progression here. Volunteers welcomed to help get us there.
 
-Sean
 
 -- 
 Sean Dague
 http://dague.net
 



Re: [openstack-dev] [puppet] (officially) deprecate stable/{grizzly, havana} branches

2015-06-16 Thread Richard Raseley

Matt Fischer wrote:

+1 from me for deprecation.

I'd also like to know or have an official policy for future
deprecations, such as when will we deprecate Icehouse?

On Tue, Jun 16, 2015 at 9:50 AM, Emilien Macchi emil...@redhat.com wrote:

Hi,

Some of our modules have stable/grizzly and stable/havana branches. Some
of them have the CI broken due to rspec issues that would require some
investigation and time if we wanted to fix it.

We would like to know: who plans to backport patches to these
branches?

If nobody plans to do that, we will leave the branches as they are now but
won't officially support them.

By support I mean maintaining the CI jobs green (rspec, syntax, etc),
fixing bugs and adding new features.

Any feedback is welcome!

Regards,
--
Emilien Macchi



I echo your +1.

Perhaps the most current supported stable version, plus the previous (-1) stable version?

In that example, once the Liberty release of modules (or a particular 
module) is cut we would support Liberty and Kilo. When the same happens 
for M, we would deprecate Kilo and support M and Liberty.


Stable -2 also seems sane - I don't have a good sense of how far behind 
people generally are.




Re: [openstack-dev] [Security] Nominating Travis McPeak for Security CoreSec

2015-06-16 Thread michael mccune

On 06/16/2015 05:28 AM, Clark, Robert Graham wrote:

I’d like to nominate Travis for a CoreSec position as part of the
Security project. - CoreSec team members support the VMT with extended
consultation on externally reported vulnerabilities.

Travis has been an active member of the Security project for a couple of
years he’s a part of the bandit subproject and has been very active in
discussions over this time. He’s also found multiple vulnerabilities and
has experience of the VMT process.


+1

i'm not a core member, but Travis is very knowledgeable about the 
security domain and has been welcoming and helpful. he would make a 
great addition.


mike



Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-06-16 Thread Jeremy Stanley
On 2015-06-16 12:58:18 -0400 (-0400), Sean Dague wrote:
[...]
 I think the only complexity here is the fact that grenade.sh
 implicitly drives stack.sh. Which means one of:
 
 1) devstack-gate could build the worker first, then run grenade.sh
 
 2) we make it so grenade.sh can execute in parts more easily, so
 it can hand something else running stack.sh for it.'
 
 3) we make grenade understand the subnode for partial upgrade, so
 it will run the stack phase on the subnode itself (given
 credentials).
[...]

As a point of reference, have a look at Clark's change which
introduced Ansible for driving commands on arbitrary systems in a
devstack-gate based job:

https://review.openstack.org/172614

The idea is that you wrap all relevant commands in calls to ansible,
and then the only additional logic you need to abstract out is the
decision of which node(s) you want running those commands. It
generalizes fine to a single-node solution so that you don't need to
maintain separate multi-node-vs-single-node frameworks.
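The wrapping pattern can be sketched like this; `run_cmd` is a hypothetical helper, and the `ansible` function below is a stub standing in for the real CLI so the sketch is self-contained (a real job would invoke ansible with an inventory of subnodes):

```shell
# Illustrative sketch of the pattern, not the actual devstack-gate code:
# wrap each command in an ansible call so which node(s) run it is just
# a parameter.  "ansible" is stubbed here so the sketch is runnable.
ansible() { echo "ansible $*"; }

run_cmd() {
    target=$1; shift
    # -m shell runs the given command on the target host group;
    # target could be "primary", "subnodes", or "all"
    ansible "$target" -m shell -a "$*"
}

run_cmd all ./stack.sh
```

The single-node case then degenerates to targeting a group with one host, so no separate framework is needed.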
-- 
Jeremy Stanley



Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Carl Baldwin
On Tue, Jun 16, 2015 at 12:33 AM, Kevin Benton blak...@gmail.com wrote:
Do these kinds of test even make sense? And are they feasible at all? I
 doubt we have any framework for injecting anything in neutron code under
 test.

 I was thinking about this in the context of a lot of the fixes we have for
 other concurrency issues with the database. There are several exception
 handlers that aren't exercised in normal functional, tempest, and API tests
 because they require a very specific order of events between workers.

 I wonder if we could write a small shim DB driver that wraps the python one
 for use in tests that just makes a desired set of queries take a long time
 or fail in particular ways? That wouldn't require changes to the neutron
 code, but it might not give us the right granularity of control.

Might be worth a look.

Finally, please note I am using DB-level locks rather than non-locking
 algorithms for making reservations.

 I thought these were effectively broken in Galera clusters. Is that not
 correct?

As I understand it, if two writes to two different masters end up
violating some db-level constraint, then the operation will fail
regardless of whether there is a lock.

Basically, on Galera, instead of waiting for the lock, each will
proceed with the transaction.  Finally, on commit, a write
certification will double-check constraints with the rest of the
cluster.  It is at this point that Galera will fail one of them
as a deadlock for violating the constraint.  Hence the need to
retry.  To me, non-locking just means
that you embrace the fact that the lock won't work and you don't
bother to apply it in the first place.

If my understanding is incorrect, please set me straight.

 If you do go that route, I think you will have to contend with DBDeadlock
 errors when we switch to the new SQL driver anyway. From what I've observed,
 it seems that if someone is holding a lock on a table and you try to grab
 it, pymysql immediately throws a deadlock exception.

I'm not familiar enough with pymysql to know whether this is true.  But
I'm sure that it is possible not to detect the lock at all on Galera.
Someone else will have to chime in to set me straight on the details.

Carl



Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-16 Thread Sean Dague
On 06/16/2015 12:49 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2015-06-16 06:22:23 -0700:
 FYI,

 One of the things that came out of the summit for Devstack plans going
 forward is to trim it back to something more opinionated and remove a
 bunch of low use optionality in the process.

 One of those branches to be trimmed is all the support for things beyond
 RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
 community, that's what the development environment should focus on.

 The patch to remove all of this is here -
 https://review.openstack.org/#/c/192154/. Expect this to merge by the
 end of the month. If people are interested in non RabbitMQ external
 plugins, now is the time to start writing them. The oslo.messaging team
 already moved their functional test installation for alternative
 platforms off of devstack, so this should impact a very small number of
 people.

 
 The recent spec we added to define a policy for oslo.messaging drivers is
 intended as a way to encourage that 5% who feels a different messaging
 layer is critical to participate upstream by adding devstack-gate jobs
 and committing developers to keep them stable. This change basically
 slams the door in their face and says good luck, we don't actually care
 about accommodating you. This will drive them more into the shadows,
 and push their forks even further away from the core of the project. If
 that's your intention, then we need to have a longer conversation where
 you explain to me why you feel that's a good thing.

I believe it is not the responsibility of the devstack team to support
every possible backend one could imagine and carry that technical debt
in tree, confusing new users in the process that any of these things
might actually work. I believe that if you feel that your spec assumed
that was going to be the case, you made a large incorrect externalities
assumption.

 Also, I take issue with the value assigned to dropping it. If that 95%
 is calculated as orgs_running_on_rabbit/orgs then it's telling a really
 lop-sided story. I'd rather see compute_nodes_on_rabbit/compute_nodes.
 
 I'd like to propose that we leave all of this in tree to match what is
 in oslo.messaging. I think devstack should follow oslo.messaging and
 deprecate the ones that oslo.messaging deprecates. Otherwise I feel like
 we're Vizzini cutting the rope just as The Dread Pirate 0mq is about to
 climb the last 10 meters to the top of the cliffs of insanity and battle
 RabbitMQ left handed. I know, inconceivable right?

We have an external plugin mechanism for devstack. That's a viable
option here. People will have to own and do that work, instead of
expecting the small devstack team to do that for them. I believe I left
enough of a hook in place that it's possible.

That would also let them control the code relevant to their plugin,
because there is no way that devstack was going to gate against other
backends here, so we'd end up breaking them pretty often, and it taking
a while to fix them in tree.
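As a rough sketch of how an out-of-tree backend could be wired in through that hook (the plugin name and repository URL here are made up for illustration):

```ini
# local.conf sketch -- enable_plugin takes a plugin name, a git URL, and
# an optional branch; devstack-plugin-foo is a hypothetical repository
[[local|localrc]]
enable_plugin devstack-plugin-foo https://example.org/devstack-plugin-foo master
```

The plugin repo then owns its own install/configure hooks, independent of the devstack tree.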

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] (officially) deprecate stable/{grizzly, havana} branches

2015-06-16 Thread David Moreau Simard
+1 for deprecation

-- 
David Moreau Simard

On 2015-06-16 11:54 AM, Emilien Macchi wrote:
 Hi,

 Some of our modules have stable/grizzly and stable/havana branches. Some
 of them have the CI broken due to rspec issues that would require some
 investigation and time if we wanted to fix it.

 We would like to know who plan to backport some patches in these branches?

 If nobody plans to do that, we will let the branches as they are now but
 won't officially support them.

 By support I mean maintaining the CI jobs green (rspec, syntax, etc),
 fixing bugs and adding new features.

 Any feedback is welcome!

 Regards,


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][OSC] Keystone v3 user create --project $projid does not add user to project?

2015-06-16 Thread Rich Megginson
Using admin token credentials with the Keystone v2.0 API and the 
openstackclient, doing this:


# openstack project create bar --enable
# openstack user create foo --project bar --enable ...

The user will be added to the project.

Using admin token credentials with the Keystone v3 API and the 
openstackclient, using the v3 policy file with is_admin:1 added just 
about everywhere, doing this:


# openstack project create bar --domain Default --enable
# openstack user create foo --domain Default --enable --project 
$project_id_of_bar ...


The user will NOT be added to the project.

Is this intentional?  Am I missing some sort of policy to allow user 
create to add the user to the given project?
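For what it's worth, with the v3 API user creation and project membership are separate operations, so an explicit role assignment along these lines is typically needed after the create (the role name varies by deployment, so treat this as an assumption):

```shell
# assumption: v3 requires an explicit role grant to make the user a
# project member; the role name (_member_ vs Member) is deployment-specific
openstack role add --user foo --project $project_id_of_bar _member_
```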



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Carl Baldwin
On Thu, Jun 11, 2015 at 2:45 PM, Salvatore Orlando sorla...@nicira.com wrote:
 I have been then following a different approach. And a set of patches,
 including a devref one [2], if up for review [3]. This hardly completes the
 job: more work is required on the testing side, both as unit and functional
 tests.

 As for the spec, since I honestly would like to spare myself the hassle of
 rewriting it, I would kindly ask our glorious drivers team if they're ok
 with me submitting a spec in the shorter format approved for Liberty without
 going through the RFE process, as the spec is however in the Kilo backlog.

It took me a second read through to realize that you're talking to me
among the drivers team.  Personally, I'm okay with this and our
currently documented policy seems to allow for this until Liberty-1.

I just hope that this isn't an indication that we're requiring too
much in this new RFE process and scaring potential filers away.  I'm
trying to learn how to write good RFEs, so let me give it a shot:

  Summary:  Need robust quota enforcement in Neutron.

  Further Information:  Neutron can allow exceeding the quota in
certain cases.  Some investigation revealed that quotas in Neutron are
subject to a race where parallel requests can each check quota and
find there is just enough left to fulfill its individual request.
Each request proceeds to fulfillment with no more regard to the quota.
When all of the requests are eventually fulfilled, we find that they
have exceeded the quota.

Given my current knowledge of the RFE process, that is what I would
file as a bug in launchpad and tag it with 'rfe.'
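The race described above can be sketched in a few lines -- this is an illustration of the check-then-act pattern and its fix, not Neutron code:

```python
import threading

class RacyQuota:
    """Illustration of the race: check and update are separate steps."""
    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def allocate(self):
        if self.used < self.limit:   # two requests can both pass this check...
            self.used += 1           # ...and both proceed, exceeding the limit
            return True
        return False

class AtomicQuota:
    """Check and update done as one atomic step; the limit holds."""
    def __init__(self, limit):
        self.limit = limit
        self.used = 0
        self._lock = threading.Lock()

    def allocate(self):
        with self._lock:             # check-and-act is now indivisible
            if self.used < self.limit:
                self.used += 1
                return True
            return False
```

In Neutron the "lock" has to live in the database rather than in process memory, since requests span multiple API workers and hosts, but the shape of the problem is the same.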

 For testing I wonder what strategy do you advice for implementing functional
 tests. I could do some black-box testing and verifying quota limits are
 correctly enforced. However, I would also like to go a bit white-box and
 also verify that reservation entries are created and removed as appropriate
 when a reservation is committed or cancelled.
 Finally it would be awesome if I was able to run in the gate functional
 tests on multi-worker servers, and inject delays or faults to verify the
 systems behaves correctly when it comes to quota enforcement.

Full black box testing would be impossible to achieve without multiple
workers, right?  We've proposed adding multiple worker processes to
the gate a couple of times if I recall, including a recent one to .
Fixing the failures has not yet been seen as a priority.

I agree that some whitebox testing should be added.  It may sound a
bit double-entry to some but I don't mind, especially given the
challenges around black box testing.  Maybe Assaf can chime in here
and set us straight.

 Do these kinds of test even make sense? And are they feasible at all? I
 doubt we have any framework for injecting anything in neutron code under
 test.

Dunno.

 Finally, please note I am using DB-level locks rather than non-locking
 algorithms for making reservations. I can move to a non-locking algorithm,
 Jay proposed one for nova for Kilo, and I can just implement that one, but
 first I would like to be convinced with a decent proof (or sort of) that the
 extra cost deriving from collision among workers is overshadowed by the cost
 for having to handle a write-set certification failure and retry the
 operation.

Do you have a reference describing the algorithm Jay proposed?
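(For context, the non-locking flavour usually reduces to a conditional UPDATE used as a compare-and-swap, retried on conflict -- a rough sketch, with the schema and names invented for illustration:)

```python
import sqlite3

def reserve(conn, tenant, amount, attempts=3):
    # A conditional UPDATE acts as a compare-and-swap: it only succeeds if
    # the quota still has headroom when the statement runs, so no row lock
    # is held across the whole check-then-reserve sequence.
    for _ in range(attempts):
        cur = conn.execute(
            "UPDATE quota SET used = used + ? "
            "WHERE tenant = ? AND used + ? <= hard_limit",
            (amount, tenant, amount))
        conn.commit()
        if cur.rowcount == 1:
            return True   # reservation recorded
        # rowcount == 0: over quota, or lost a race -- retry or give up
    return False
```

The trade-off Salvatore raises is exactly here: under contention the retry loop spins, and whether that costs more or less than blocking on a DB lock is an empirical question.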

 Please advise.

 Regards,
 Salvatore

 [1]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo-backlog/better-quotas.html
 [2] https://review.openstack.org/#/c/190798/
 [3]
 https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/better-quotas,n,z

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-16 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-06-16 06:22:23 -0700:
 FYI,
 
 One of the things that came out of the summit for Devstack plans going
 forward is to trim it back to something more opinionated and remove a
 bunch of low use optionality in the process.
 
 One of those branches to be trimmed is all the support for things beyond
 RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
 community, that's what the development environment should focus on.
 
 The patch to remove all of this is here -
 https://review.openstack.org/#/c/192154/. Expect this to merge by the
 end of the month. If people are interested in non RabbitMQ external
 plugins, now is the time to start writing them. The oslo.messaging team
 already moved their functional test installation for alternative
 platforms off of devstack, so this should impact a very small number of
 people.
 

The recent spec we added to define a policy for oslo.messaging drivers is
intended as a way to encourage that 5% who feels a different messaging
layer is critical to participate upstream by adding devstack-gate jobs
and committing developers to keep them stable. This change basically
slams the door in their face and says good luck, we don't actually care
about accommodating you. This will drive them more into the shadows,
and push their forks even further away from the core of the project. If
that's your intention, then we need to have a longer conversation where
you explain to me why you feel that's a good thing.

Also, I take issue with the value assigned to dropping it. If that 95%
is calculated as orgs_running_on_rabbit/orgs then it's telling a really
lop-sided story. I'd rather see compute_nodes_on_rabbit/compute_nodes.

I'd like to propose that we leave all of this in tree to match what is
in oslo.messaging. I think devstack should follow oslo.messaging and
deprecate the ones that oslo.messaging deprecates. Otherwise I feel like
we're Vizzini cutting the rope just as The Dread Pirate 0mq is about to
climb the last 10 meters to the top of the cliffs of insanity and battle
RabbitMQ left handed. I know, inconceivable right?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Adrian Otto

On Jun 15, 2015, at 9:10 PM, Keith Bray 
keith.b...@rackspace.com wrote:

Regardless of what the API defaults to, could we have the CLI prompt/warn so 
that the user easily knows that both options exist?  Is there a precedent 
within OpenStack for a similar situation?

E.g.
 solum app delete MyApp
 Do you want to also delete your logs? (default is Yes):  [YES/no]
  NOTE, if you choose No, application logs will remain on your 
account. Depending on your service provider, you may incur on-going storage 
charges.

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs. Today the help text is:

solum app delete NAME|UUID
Delete an application and all related artifacts.

Initial alternative:

solum app delete NAME|UUID
Delete an application and all related artifacts, including logs.

We could add the --keep-logs option Murali mentioned and say this instead:

solum app delete [--keep-logs] NAME|UUID
Delete an application and all related artifacts. Logs are kept if 
--keep-logs is used.

This should conform to the principle of least surprise, allow for keeping logs 
around for those who want them, and not interfere with those wanting to script 
with the CLI.
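A sketch of what the flag could look like from the CLI side (argparse-style, with the command layout simplified and all names assumed):

```python
import argparse

def build_parser():
    # hypothetical, flattened version of the solum CLI for illustration
    parser = argparse.ArgumentParser(prog='solum')
    sub = parser.add_subparsers(dest='command')
    delete = sub.add_parser(
        'delete',
        help='Delete an application and all related artifacts, including logs.')
    delete.add_argument('app', metavar='NAME|UUID')
    delete.add_argument(
        '--keep-logs', action='store_true',
        help='Keep application logs instead of deleting them with the app.')
    return parser
```

Because the flag defaults to False, scripted use keeps the delete-everything behavior with no prompt, which is the point of avoiding interactive confirmation.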

Cheers,

Adrian

Thanks,
-Keith

From: Devdatta Kulkarni 
devdatta.kulka...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, June 15, 2015 9:56 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Yes, the log deletion should be optional.

The question is what should be the default behavior. Should the default be to 
delete the logs and provide a flag to keep them, or keep the logs by default 
and provide a override flag to delete them?

Delete-by-default is consistent with the view that when an app is deleted, all 
its artifacts are deleted (the app's meta data, the deployment units (DUs), and 
the logs). This behavior is also useful in our current state when the app 
resource and the CLI are in flux. For now, without a way to specify a flag, 
either to delete the logs or to keep them, delete-by-default behavior helps us 
clean all the log files from the application's cloud files container when an 
app is deleted.
This is very useful for our CI jobs. Without this, we end up with lots of log 
files in the application's container,
and have to resort to separate scripts to delete them up after an app is 
deleted.

Once the app resource and CLI stabilize it should be straightforward to change 
the default behavior if required.

- Devdatta


From: Adrian Otto adrian.o...@rackspace.com
Sent: Friday, June 12, 2015 6:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

Team,

We currently delete logs for an app when we delete the app[1].

https://bugs.launchpad.net/solum/+bug/1463986

Perhaps there should be an optional setting at the tenant level that determines 
whether your logs are deleted or not by default (set to off initially), and an 
optional parameter to our DELETE calls that allows for the opposite action from 
the default to be specified if the user wants to override it at the time of the 
deletion. Thoughts?

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-16 Thread Georgy Okrokvertskhov
In Murano project we do see a positive impact of BigTent model. Since
Murano was accepted as a part of BigTent community we had a lot of
conversations with potential users. They were driven exactly by the fact
that Murano is now officially recognized in the OpenStack community. It might
be a wrong perception, but this is a perception they have.
Most of the guys we met  are enterprises for whom catalog functionality is
interesting. The problem with enterprises is that their thinking periods
are often more than 6-9 months. They are not individuals who can start
contributing over a night. They need some time to create proper org
structure changes to organize the development process. The benefit of that is
more stable and predictable development over time as soon as they start
contributing.

Thanks
Gosha



On Tue, Jun 16, 2015 at 4:44 AM, Jay Pipes jaypi...@gmail.com wrote:

 You may also find my explanation about the Big Tent helpful in this
 interview with Niki Acosta and Jeff Dickey:

 http://blogs.cisco.com/cloud/ospod-29-jay-pipes

 Best,
 -jay


 On 06/16/2015 06:09 AM, Flavio Percoco wrote:

 On 16/06/15 04:39 -0400, gordon chung wrote:

 i won't speak to whether this confirms/refutes the usefulness of the
 big tent.
 that said, probably as a by-product of being in non-stop meetings with
 sales/
 marketing/managers for last few days, i think there needs to be better
 definitions (or better publicised definitions) of what the goals of
 the big
 tent are. from my experience, they've heard of the big tent and they
 are, to
 varying degrees, critical of it. one common point is that they see it as
 greater fragmentation to a process that is already too slow.


 Not saying this is the final answer to all the questions but at least
 it's a good place to start from:


 https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/the-big-tent-a-look-at-the-new-openstack-projects-governance



 That said, this is great feedback and we may indeed need to do a
 better job to explain the big tent. That presentation, I believe, was
 an attempt to do so.

 Flavio


 just giving my fly-on-the-wall view from the other side.

 On 15/06/2015 6:20 AM, Joe Gordon wrote:

One of the stated problems the 'big tent' is supposed to solve is:

'The binary nature of the integrated release results in projects
 outside
the integrated release failing to get the recognition they deserve.
Non-official projects are second- or third-class citizens which
 can't get
development resources. Alternative solutions can't emerge in the
 shadow of
the blessed approach. Becoming part of the integrated release,
 which was
originally designed to be a technical decision, quickly became a
life-or-death question for new projects, and a political/community
minefield.' [0]

Meaning projects should see an uptick in development once they drop
 their
second-class citizenship and join OpenStack. Now that we have been
 living
in the world of the big tent for several months now, we can see if
 this
claim is true.

Below is a list of the first few few projects to join OpenStack
 after the
big tent, All of which have now been part of OpenStack for at least
 two
months.[1]

* Magnum -  Tue Mar 24 20:17:36 2015
* Murano - Tue Mar 24 20:48:25 2015
* Congress - Tue Mar 31 20:24:04 2015
* Rally - Tue Apr 7 21:25:53 2015

When looking at stackalytics [2] for each project, we don't see any
noticeably change in number of reviews, contributors, or number of
 commits
from before and after each project joined OpenStack.

So what does this mean? At least in the short term moving from
 Stackforge
to OpenStack does not result in an increase in development
 resources (too
early to know about the long term).  One of the three reasons for
 the big
tent appears to be unfounded, but the other two reasons hold.  The
 only
thing I think this information changes is what peoples expectations
 should
be when applying to join OpenStack.

[0] https://github.com/openstack/governance/blob/master/resolutions/
20141202-project-structure-reform-spec.rst
[1] Ignoring OpenStackClient since the repos were always in
 OpenStack it
just didn't have a formal home in the governance repo.
[2] http://stackalytics.com/?module=magnum-group&metric=commits




 __

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 gord



 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-16 Thread Gordon Sim

On 06/12/2015 09:41 PM, Alec Hothan (ahothan) wrote:

One long standing issue I can see is the fact that the oslo messaging API
documentation is sorely lacking details on critical areas such as API
behavior during fault conditions, load conditions and scale conditions.


I very much agree, particularly on the contract/expectations in the face 
of different failure conditions. Even for those who are critical of the 
pluggability of oslo.messaging, greater clarity here would be of benefit.


As I understand it, the intention is that RPC calls are invoked on a 
server at-most-once, meaning that in the event of any failure, the call 
will only be retried by the olso.messaging layer if it believes it can 
ensure the invocation is not made twice.


If that is correct, stating so explicitly and prominently would be 
worthwhile. The expectation for services using the API would then be to 
decide on any retry themselves. An idempotent call could retry for a 
configured number of attempts perhaps. A non-idempotent call might be 
able to check the result via some other call and decide based on that 
whether to retry. Giving up would then be a last resort. This would help 
increase robustness of the system overall.
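If the at-most-once assumption holds, the caller-side retry policy described above might look roughly like this -- the exception name and wrapper are illustrative stand-ins, not the oslo.messaging API:

```python
class MessagingTimeout(Exception):
    """Stand-in for a transport-level timeout/failure."""

def call_with_retry(call, idempotent=False, attempts=3):
    # The transport guarantees at-most-once invocation, so retrying is only
    # safe when the caller declares the operation idempotent; otherwise the
    # failure is surfaced and the service decides what to do (for example,
    # verify the result via another call before retrying).
    tries = attempts if idempotent else 1
    last_exc = None
    for _ in range(tries):
        try:
            return call()
        except MessagingTimeout as exc:
            last_exc = exc
    raise last_exc
```

The key design point is that retry policy lives with the caller, who knows the semantics of the operation, rather than in the messaging layer, which does not.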


Again if the assumption of at-most-once is correct, and explicitly 
stated, the design of the code can be reviewed to ensure it logically 
meets that guarantee and of course it can also be explicitly tested for 
in stress tests at the oslo.messaging level, ensuring there are no 
unintended duplicate invocations. An explicit contract also allows 
different approaches to be assessed and compared.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Mid-cycle sprint

2015-06-16 Thread Tim Hinrichs
Hi all,

In the last couple of IRCs we've been talking about running a mid-cycle
sprint focused on enabling our message bus to span multiple processes and
multiple hosts.  The message bus is what allows the Congress policy engine
to communicate with the Congress wrappers around external services like
Nova, Neutron.  This cross-process, cross-host message bus is the platform
we'll use to build version 2.0 of our distributed architecture.

If you're interested in participating, drop me a note.  Once we know who's
interested we'll work out date/time/location details.

Thanks!
Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-16 Thread Matt Riedemann
The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all 
very similar.


I want to extract a common base class that abstracts some of the common 
code and then let the sub-classes provide overrides where necessary.


As part of this, I'm wondering if we could just have a single 
'mount_point_base' config option rather than one per backend like we 
have today:


nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per 
compute host right?  So it seems to make sense that we could have one 
option used for all 4 different driver implementations and reduce some 
of the config option noise.
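The shape of the refactor might look something like this (class names, the protocol attribute, and the hashing scheme are illustrative only):

```python
import hashlib
import os

class BaseFileSystemVolumeDriver:
    """Hypothetical common base for the FS-style libvirt volume drivers."""
    PROTOCOL = None  # subclasses say which backend they mount

    def __init__(self, mount_point_base):
        # a single mount_point_base option instead of one option per backend
        self.mount_point_base = mount_point_base

    def _get_mount_point(self, share):
        # shared behavior: derive a stable per-share mount directory
        digest = hashlib.sha256(share.encode('utf-8')).hexdigest()
        return os.path.join(self.mount_point_base, self.PROTOCOL, digest)

class NfsVolumeDriver(BaseFileSystemVolumeDriver):
    PROTOCOL = 'nfs'

class GlusterfsVolumeDriver(BaseFileSystemVolumeDriver):
    PROTOCOL = 'glusterfs'
```

Subclasses would override only the mount command construction and any backend-specific quirks, which is where the four drivers actually differ today.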


I checked the os-brick change [1] proposed to nova to see if there would 
be any conflicts there and so far that's not touching any of these 
classes so seems like they could be worked in parallel.


Are there any concerns with this?

Is a blueprint needed for this refactor?

[1] https://review.openstack.org/#/c/175569/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-16 Thread Chris Dent

On Tue, 16 Jun 2015, Sean Dague wrote:


I was just looking at the patches that put Nova under apache wsgi for
the API, and there are a few things that I think are going in the wrong
direction. Largely I think because they were copied from the
lib/keystone code, which we've learned is kind of the wrong direction.


Yes, that's certainly what I've done the few times I've done it.
devstack is deeply encouraging of cargo culting for reasons that are
not entirely clear.


The first is the fact that a big reason for putting {SERVICES} under
apache wsgi is we aren't running on a ton of weird unregistered ports.
We're running on 80 and 443 (when appropriate). In order to do this we
really need to namespace the API urls. Which means that service catalog
needs to be updated appropriately.


So:

a) I'm very glad to hear of this. I've been bristling about the weird
   ports thing for the last year.

b) You make it sound like there's been a plan in place to not use
   those ports for quite some time and we'd get to that when we all
   had some spare time. Where do I go to keep abreast of such plans?


I also think this -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
is completely wrong.

The Apache configs should instead specify access rules such that the
installed console entry point of nova-api can be used in place as the
WSGIScript.


I'm not able to parse this paragraph in any actionable way. The lines
you reference are one of several ways of telling mod wsgi where the
virtualenv is, which has to happen in some fashion if you are using
a virtualenv.

This doesn't appear to have anything to do with locating the module
that contains the WSGI app, so I'm missing the connection. Can you
explain please?

(Basically I'm keen on getting gnocchi and ceilometer wsgi servers
in devstack aligned with whatever the end game is, so knowing the plan
makes it a bit easier.)


This should also make lines like -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
L274 unneeded. (The WSGI Script will be in a known place). It will also
make upgrades much more friendly.


It sounds like maybe you are saying that the api console script and
the module containing the wsgi 'application' variable ought to be the
same thing. I don't reckon that's a great idea as the api console
scripts will want to import a bunch of stuff that the wsgi application
will not.

Or I may be completely misreading you. It's been a long day, etc.


I think that we need to get these things sorted before any further
progression here. Volunteers welcomed to help get us there.


Find me, happy to help. The sooner we can kill wacky port weirdness
the better.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-16 Thread Thomas Goirand
On 06/16/2015 12:06 PM, Thierry Carrez wrote:
 It also removes the stupid encouragement to use all components from the
 same date. With everything tagged at the same date, you kinda send the
 message that those various things should be used together. With
everything tagged separately, you send the message that you can mix and
 match components from stable/* as you see fit. I mean, it's totally
 valid to use stable branch components from various points in time
 together, since they are all supposed to work.

 Though there's now zero guidance at what should be the speed of
 releasing server packages to our users.
 
 I really think it should be a distribution decision. You could release
 all commits, release every 2 months, release after each CVE, release
 as-needed when a bug in Debian BTS is fixed. I don't see what guidance
 upstream should give, apart from enabling all models. Currently we make
 most models more difficult than they should be, to promote an arbitrary
 time-based model. With plan D, we enable all models.

Let me put this in another way: with the plan D, I'll be lost, and wont
ever know when to release a new stable version in Debian. I don't know
better than anyone else. If we had each upstream project saying
individually: Ok, now we gathered enough bugfixes so that it's
important to get it in downstream distributions, I'd happily follow
this kind of guidance. But the plan is to just commit bugfixes, and hope
that downstream distros (ie: me in this case) just catch when a new
release is worth the effort.

 As pointed elsewhere, plan D assumes we move to generating release notes
 for each commit. So you won't lose track of what is fixed in each
 version. If anything, that will give you proper release notes for
 CVE-fix commits, something you didn't have before, since we wouldn't cut
 a proper point release after a CVE fix but on a pre-determined
 time-based schedule.
 
 Overall, I think even your process stands to benefit from the proposed
 evolution.

I just hope so. If any core / PTL is reading me in this thread, I would
strongly encourage you guys to get in touch and ping me when you think
some commits in the stable release should be uploaded to Debian. A quick
message on IRC can be enough.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-16 Thread Allison Randal
On 06/15/2015 01:43 PM, Paul Belanger wrote:
 While I agree those points are valid, and going to be helpful, moving
 under OpenStack (even Stackforge) does also offer the chance to get more
 test integration upstream (not saying this was the original scope).
 However, this could also be achieved by 3rd party integration too.

Nod, 3rd party integration is worth exploring.

 I'm still driving forward with some -infra specific packaging for Debian
 / Fedora ATM (zuul packaging). Mostly because of -infra needs for
 packages. Not saying that is a reason to reconsider, but there is the
 need for -infra to consume packages from upstream.

I suspect that, at least initially, the needs of -infra specific
packaging will be quite different than the needs of general-purpose
packaging in Debian/Fedora distros. Trying to tightly couple the two
will just bog you down in trying to solve far too many problems for far
too many people. But, I also suspect that -infra packaging will be quite
minimal and intended for the services to be configured by puppet, so
there's a very good chance that if you sprint ahead and just do it, your
style of packaging will end up feeding back into future packaging in the
distros.

Allison

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [grenade] future direction on partial upgrade support

2015-06-16 Thread Sean Dague
Back when Nova first wanted to test partial upgrade, we did a bunch of
slightly odd conditionals inside of grenade and devstack to make it so
that if you were very careful, you could just not stop some of the old
services on a single node, upgrade everything else, and as long as the
old services didn't stop, they'd be running cached code in memory, and
it would look a bit like a 2 node worker not upgraded model. It worked,
but it was weird.

There has been some interest by the Nova team to expand what's not being
touched, as well as the Neutron team to add partial upgrade testing
support. Both are great initiatives, but I think going about it the old
way is going to add a lot of complexity in weird places, and not be as
good of a test as we really want.

Nodepool now supports allocating multiple nodes. We have a multinode job
in Nova regularly testing live migration using this.

If we slice this problem differently, I think we get a better
architecture, a much easier way to add new configs, and a much more
realistic end test.

Conceptually, use devstack-gate multinode support to set up 2 nodes, an
all in one, and a worker. Let grenade upgrade the all in one, leave the
worker alone.

I think the only complexity here is the fact that grenade.sh implicitly
drives stack.sh. Which means one of:

1) devstack-gate could build the worker first, then run grenade.sh

2) we make it so grenade.sh can execute in parts more easily, so it can
hand off running stack.sh to something else.

3) we make grenade understand the subnode for partial upgrade, so it
will run the stack phase on the subnode itself (given credentials).

This kind of approach means deciding which services you don't want to
upgrade doesn't require devstack changes; it's just a change of the
services on the worker.

We need a volunteer to take this on, but I think all the follow-on
partial upgrade support will be much, much easier to do after we have
this kind of mechanism in place.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack] Javascript Linting

2015-06-16 Thread Tripp, Travis S
I’m copying and pasting from the other thread some info below.

I think agreeing on rules is the bigger problem here and I don’t think all
the projects should have to agree on rules. We’ve spent a good portion of
liberty 1 getting the code base cleaned up to meet the already adopted
horizon rules and it is still in progress.

My preference would be to see if we can use eslint to accomplish all of
our currently adopted horizon rules [3][4] AND to also add in the angular
specific plugin [1][2]. But we can’t do this at the expense of the entire
liberty release.

― My previous email below:

We've adopted the John Papa style guide for Angular in horizon [0]. On
cursory inspection ESLint seems to have an angular specific plugin [1]
that could be very useful to us, but we'd need to evaluate it in depth. It
looks like there was some discussion on the style guide on this not too
long ago [2]. The jscs rules we have [3] are very generic code formatting
type rules that are helpful, but don't really provide any angular specific
help. Here are the jshint rules [4]. It would be quite nice to put all
this goodness across tools into a single tool configuration if possible.

[0] http://docs.openstack.org/developer/horizon/contributing.html#john-papa-style-guide
[1] https://www.npmjs.com/package/eslint-plugin-angular
[2] https://github.com/johnpapa/angular-styleguide/issues/194
[3] https://github.com/openstack/horizon/blob/master/.jscsrc
[4] https://github.com/openstack/horizon/blob/master/.jshintrc


From:  Rob Cresswell   (rcresswe) rcres...@cisco.com
Reply-To:  OpenStack List openstack-dev@lists.openstack.org
Date:  Tuesday, June 16, 2015 at 1:40 AM
To:  OpenStack List openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack]
Javascript Linting


So my view here is that I don’t particularly mind which plugin/set of
plugins Horizon uses, but the biggest deterrent is the workload. We’re
already cleaning everything up quite productively, so I’m reluctant to
swap. That said, the cleanup from JSCS/JSHint should be largely relevant
to ESLint. Michael, do you have any ideas on the numbers/workload behind
a possible swap?

With regards to licensing, does this mean we must stop using JSHint, or
that we’re still okay to use it as a dev tool? Seems that if the former is
the case, then the decision is made for us.

Rob



From: Michael Krotscheck krotsch...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Tuesday, 16 June 2015 00:36
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [javascript] [horizon] [merlin] [refstack]
Javascript Linting


I'm restarting this thread with a different subject line to get a broader
audience. Here's the original thread:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/066040.html


The question at hand is: what will be OpenStack's javascript equivalent of
flake8? I'm going to consider the need for common formatting rules to be
self-evident. Here's the lay of the land so far:

* Horizon currently uses JSCS.
* Refstack uses ESLint.
* Merlin doesn't use anything.
* StoryBoard (deprecated) uses ESLint.
* Nobody agrees on rules.


JSCS

JSCS stands for JavaScript CodeStyle. Its mission is to enforce a style
guide, yet it does not check for potential bugs, variable overrides, etc.
For those tests, the team usually defers to (preferred) JSHint, or ESLint.

JSHint
Ever since JSCS was extracted from JSHint, JSHint has actively removed
rules that enforce code style and focused on find-bug style checks
instead. JSHint still contains the "Do no evil" license clause, and is
therefore not an option for OpenStack; it has been disqualified.

ESLint
ESLint's original mission was to be an OSI-compliant replacement for
JSHint, before the JSCS split. It wants to be a one-tool solution.

My personal opinion/recommendation: based on the above, I recommend we use
ESLint. My reasoning: it's one tool, it's extensible, it does both
code style things and bug finding things, and it has a good license.
JSHint is disqualified because of its license. JSCS is disqualified
because it is too focused, and only partially useful on its own.

I understand that this will mean some work by the Horizon team to bring
their code in line with a new parser, however I personally consider this
to be a good thing. If the code is good to begin with, it shouldn't be
that difficult.

This thread is not there to argue about which rules to enforce. Right now
I just want to nail down a tool, so that we can (afterwards) have a
discussion about which rules to activate.

Michael



Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-16 Thread Matt Riedemann



On 6/16/2015 4:21 PM, Matt Riedemann wrote:

The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all
very similar.

I want to extract a common base class that abstracts some of the common
code and then let the sub-classes provide overrides where necessary.

As part of this, I'm wondering if we could just have a single
'mount_point_base' config option rather than one per backend like we
have today:

nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per
compute host, right?  So it seems to make sense that we could have one
option used for all 4 different driver implementations and reduce some
of the config option noise.
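For illustration, a consolidated hierarchy with one shared mount point
setting might look roughly like this. This is a hypothetical sketch with
made-up class names, plain class attributes standing in for nova's real
oslo.config options, and none of the actual driver logic:

```python
# Hypothetical sketch (not actual nova code) of consolidating the four
# FS-style libvirt volume drivers under one base class, with a single
# shared 'mount_point_base' value replacing nfs_mount_point_base,
# glusterfs_mount_point_base, smbfs_mount_point_base and
# quobyte_mount_point_base.
import os


class BaseFileSystemVolumeDriver:
    """Common logic shared by the FS-style libvirt volume drivers."""

    # One shared setting instead of four per-backend options.
    mount_point_base = "/var/lib/nova/mnt"

    # Subclasses override only what differs per backend.
    mount_type = None

    def get_mount_path(self, export):
        # Deterministic per-export directory under the shared base.
        return os.path.join(self.mount_point_base, self.mount_type,
                            export.replace("/", "_"))


class NfsVolumeDriver(BaseFileSystemVolumeDriver):
    mount_type = "nfs"


class GlusterfsVolumeDriver(BaseFileSystemVolumeDriver):
    mount_type = "glusterfs"


print(NfsVolumeDriver().get_mount_path("server:/export"))
# -> /var/lib/nova/mnt/nfs/server:_export
```

The point of the sketch is that both subclasses read the same
`mount_point_base`, so a deployer configures one value per compute host.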

I checked the os-brick change [1] proposed to nova to see if there would
be any conflicts there and so far that's not touching any of these
classes so seems like they could be worked in parallel.

Are there any concerns with this?

Is a blueprint needed for this refactor?

[1] https://review.openstack.org/#/c/175569/



I threw together a quick blueprint [1] just for tracking.

I'm assuming I don't need a spec for this.

[1] 
https://blueprints.launchpad.net/nova/+spec/consolidate-libvirt-fs-volume-drivers


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Randall Burt
While I agree with what you're saying, the way the OpenStack clients are 
traditionally written/designed, the CLI *is* the SDK for those users who want 
to do scripting in a shell rather than in Python. If we go with your 
suggestion, we'd probably also want to have the ability to suppress those 
prompts for folks that want to shell script.

On Jun 16, 2015, at 4:42 PM, Keith Bray keith.b...@rackspace.com
 wrote:

 Isn't that what the SDK is for?   To chip in with a Product Management type 
 hat on, I'd think the CLI should be primarily focused on user experience 
 interaction, and the SDK should be primarily targeted for developer 
 automation needs around programmatically interacting with the service.   So, 
 I would argue that the target market for the CLI should not be the developer 
 who wants to script.
 
 -Keith
 
 From: Adrian Otto adrian.o...@rackspace.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 16, 2015 12:24 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
 app?
 
 Interactive choices like that one can make it more confusing for developers 
 who want to script with the CLI. My preference would be to label the app 
 delete help text to clearly indicate that it deletes logs




Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Keith Bray
Isn't that what the SDK is for?   To chip in with a Product Management type hat 
on, I'd think the CLI should be primarily focused on user experience 
interaction, and the SDK should be primarily targeted for developer automation 
needs around programmatically interacting with the service.   So, I would argue 
that the target market for the CLI should not be the developer who wants to 
script.

-Keith

From: Adrian Otto adrian.o...@rackspace.commailto:adrian.o...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, June 16, 2015 12:24 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Keith Bray
That makes sense, Randall... a sort of Novice mode vs. Expert mode.
I definitely want to see OpenStack get easier to use and lower the
barrier to entry. If projects only cater to developers, progress will be
slower than it could be.

-Keith

On 6/16/15 4:52 PM, Randall Burt randall.b...@rackspace.com wrote:

While I agree with what you're saying, the way the OpenStack clients are
traditionally written/designed, the CLI *is* the SDK for those users who
want to do scripting in a shell rather than in Python. If we go with your
suggestion, we'd probably also want to have the ability to suppress those
prompts for folks that want to shell script.

On Jun 16, 2015, at 4:42 PM, Keith Bray keith.b...@rackspace.com
 wrote:

 Isn't that what the SDK is for?   To chip in with a Product Management
type hat on, I'd think the CLI should be primarily focused on user
experience interaction, and the SDK should be primarily targeted for
developer automation needs around programmatically interacting with the
service.   So, I would argue that the target market for the CLI should
not be the developer who wants to script.
 
 -Keith
 
 From: Adrian Otto adrian.o...@rackspace.com
 Reply-To: OpenStack Development Mailing List (not for usage
questions) openstack-dev@lists.openstack.org
 Date: Tuesday, June 16, 2015 12:24 PM
 To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we
delete an app?
 
 Interactive choices like that one can make it more confusing for
developers who want to script with the CLI. My preference would be to
label the app delete help text to clearly indicate that it deletes logs
 






Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-16 Thread Adam Young

On 06/16/2015 12:25 PM, Sean Dague wrote:

I was just looking at the patches that put Nova under apache wsgi for
the API, and there are a few things that I think are going in the wrong
direction. Largely I think because they were copied from the
lib/keystone code, which we've learned is kind of the wrong direction.

The first is the fact that a big reason for putting {SERVICES} under
apache wsgi is we aren't running on a ton of weird unregistered ports.
We're running on 80 and 443 (when appropriate). In order to do this we
really need to namespace the API urls. Which means that service catalog
needs to be updated appropriately.

I'd expect nova to be running on http://localhost/compute not

YES!

I had written this up for just this reason:

https://wiki.openstack.org/URLs

Let's make that the canonical list.

Keystone suffers from the fact that the AUTH_URL is composed in lots of 
places, and people hard-coded port 5000 in... I would like that to die.

http://localhost:8774 when running under wsgi. That's going to probably
interestingly break a lot of weird assumptions by different projects,
but that's part of the reason for doing this exercise. Things should be
using the service catalog, and when they aren't, we need to figure it out.

Amen!


(Exceptions can be made for third party APIs that don't work this way,
like the metadata server).

I also think this -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
is completely wrong.

The Apache configs should instead specify access rules such that the
installed console entry point of nova-api can be used in place as the
WSGIScript.

This should also make lines like -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
L274 uneeded. (The WSGI Script will be in a known place). It will also
make upgrades much more friendly.

I think that we need to get these things sorted before any further
progression here. Volunteers welcomed to help get us there.

-Sean






Re: [openstack-dev] [all][release] summit session summary: Release Versioning for Server Applications

2015-06-16 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-06-16 11:45:51 +0200:
 Doug Hellmann wrote:
  [...]
  I put together a little script [1] to try to count the previous
  releases for projects, to use that as the basis for their first
  SemVer-based version number. I pasted the output into an etherpad
  [2] and started making notes about proposed release numbers at the
  top. For now, I'm only working with the projects that have been
  managed by the release team (have the release:managed tag in the
  governance repository), but it should be easy enough for other projects
  to use the same idea to pick a version number.
 
 Your script missed 2015.1 tags for some reason...
 
 I still think we should count the number of integrated releases
 instead of the number of releases (basically considering pre-integration
 releases as 0.x releases). That would give:
 
 ceilometer 5.0.0
 cinder 7.0.0
 glance 11.0.0
 heat 5.0.0
 horizon 8.0.0
 ironic 2.0.0
 keystone 8.0.0
 neutron* 7.0.0
 nova 12.0.0
 sahara 3.0.0
 trove 4.0.0
 
 We also traditionally managed the previously-incubated projects. That
 would add the following to the mix:
 
 barbican 1.0.0
 designate 1.0.0
 manila 1.0.0
 zaqar 1.0.0
 

I have submitted patches to update all of these projects to the versions
listed here.

See https://review.openstack.org/#/q/topic:semver-releases,n,z

Doug
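The numbering rule discussed in this thread (count only the integrated
releases, treating pre-integration releases as 0.x, and use that count as
the first SemVer major version) amounts to the following. The tag list
below is illustrative only, not the real release history of any project:

```python
# Hedged sketch of the proposed numbering rule: the count of a project's
# integrated releases becomes the initial SemVer major version. The tag
# list here is made up for illustration.

def first_semver(integrated_release_tags):
    """Derive the initial SemVer version from a list of integrated tags."""
    return "%d.0.0" % len(integrated_release_tags)


# A made-up example with five integrated releases:
tags = ["2013.1", "2013.2", "2014.1", "2014.2", "2015.1"]
print(first_semver(tags))  # -> 5.0.0
```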



Re: [openstack-dev] [kolla] Proposal for changing 1600UTC meeting to 1700 UTC

2015-06-16 Thread Harm Weites

I'm ok with moving to 16:30 UTC instead of staying at 16:00.

I actually prefer it in my evening schedule :) Moving to 16:30 would 
already be a great improvement over the current schedule and should at 
least allow me to not miss everything.


- harmw

Op 12-06-15 om 15:44 schreef Steven Dake (stdake):

Even though 7am is not ideal for the west coast, I'd be willing to go back
that far.  That would put the meeting at the morning school rush for the
west coast folks though (although we are in summer break in the US and we
could renegotiate a time in 3 months when school starts up again if it's a
problem) - so creating a different set of problems for a different set of
people :)

This would be a 1400 UTC meeting.

While I wake up prior to 7am (usually around 5:30), I am not going to put
people through the torture of a 6am meeting in any timezone if I can help
it, so 1400 is the earliest we can go :)

Regards
-steve


On 6/12/15, 4:37 AM, Paul Bourke paul.bou...@oracle.com wrote:


I'm fairly easy on this, but if the issue is that the meeting is running
into people's evening schedules (in EMEA), would it not make sense to
push it back an hour or two into office hours, rather than forward?

On 10/06/15 18:20, Ryan Hallisey wrote:

After some upstream discussion, moving the meeting from 1600 to 1700
UTC does not seem very popular.
It was brought up that changing the time to 16:30 UTC could accommodate
more people.

For the people that attend the 1600 UTC meeting time slot can you post
further feedback to address this?

Thanks,
Ryan

- Original Message -
From: Jeff Peeler jpee...@redhat.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Tuesday, June 9, 2015 2:19:00 PM
Subject: Re: [openstack-dev] [kolla] Proposal for changing 1600UTC
meeting to 1700 UTC

On Mon, Jun 08, 2015 at 05:15:54PM +, Steven Dake (stdake) wrote:

Folks,

Several people have messaged me from EMEA timezones that 1600UTC fits
right into the middle of their family life (ferrying kids from school
and what-not) and 1700UTC while not perfect, would be a better fit
time-wise.

For all people that intend to attend the 1600 UTC, could I get your
feedback on this thread if a change of the 1600UTC timeslot to 1700UTC
would be acceptable?  If it wouldn¹t be acceptable, please chime in as
well.

Both 1600 and 1700 UTC are fine for me.

Jeff













Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-16 Thread Adam Young

On 06/16/2015 12:48 PM, Morgan Fainberg wrote:

Long term we want to see Keystone move to http://host/identity. However the reason for choosing 
5000/35357 for ports was compatibility and avoiding breaking horizon. At the time we did the initial 
change over, sharing the root 80/443 ports with horizon was more than challenging since 
horizon needed to be based at /.

If that issue/assumption for horizon is no longer present, moving keystone to 
be on port 80/443 would be doable. The last factor is that keystone was a 
priori knowledge for discovering other services. As long as we update docs 
(possibly with a 302 from the alternate ports for a cycle in devstack) I think 
we're good to make the change.


The change to do this made its way into Horizon (courtesy of Matt Runge) 
and is in devstack as well, I think.  You need to specify WEBROOT for 
the Horizon install.




--Morgan

Sent via mobile


On Jun 16, 2015, at 09:25, Sean Dague s...@dague.net wrote:

I was just looking at the patches that put Nova under apache wsgi for
the API, and there are a few things that I think are going in the wrong
direction. Largely I think because they were copied from the
lib/keystone code, which we've learned is kind of the wrong direction.

The first is the fact that a big reason for putting {SERVICES} under
apache wsgi is we aren't running on a ton of weird unregistered ports.
We're running on 80 and 443 (when appropriate). In order to do this we
really need to namespace the API urls. Which means that service catalog
needs to be updated appropriately.

I'd expect nova to be running on http://localhost/compute not
http://localhost:8774 when running under wsgi. That's going to probably
interestingly break a lot of weird assumptions by different projects,
but that's part of the reason for doing this exercise. Things should be
using the service catalog, and when they aren't, we need to figure it out.

(Exceptions can be made for third party APIs that don't work this way,
like the metadata server).

I also think this -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
is completely wrong.

The Apache configs should instead specify access rules such that the
installed console entry point of nova-api can be used in place as the
WSGIScript.

This should also make lines like -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
L274 uneeded. (The WSGI Script will be in a known place). It will also
make upgrades much more friendly.

I think that we need to get these things sorted before any further
progression here. Volunteers welcomed to help get us there.

-Sean

--
Sean Dague
http://dague.net







Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack] Javascript Linting

2015-06-16 Thread Michael Krotscheck
On Tue, Jun 16, 2015 at 10:22 AM Tripp, Travis S travis.tr...@hp.com
wrote:

 I think agreeing on rules is the bigger problem here and I don’t think all
 the projects should have to agree on rules.


I believe we agree there, mostly. I personally feel there is some benefit
to setting some rules, likely published as an OpenStack linting plugin,
which enforce things like "do not use fuzzy versions in your package.json"
and other things that make things unstable. That should be a very
carefully reserved list of rules though.

I've created an eslint configuration file that includes every single rule,
its high-level purpose, and a link to the details on it, and provided it
in a patch against horizon. The intent is that it's a good starting point
from which to activate and deactivate rules that make sense for horizon.

https://review.openstack.org/#/c/192327/


 We’ve spent a good portion of liberty 1 getting the code base cleaned up
 to meet the already adopted horizon rules and it is still in progress.


As a side note, the non-voting horizon linting job for javascript things is
waiting for review here: https://review.openstack.org/#/c/16/

My preference would be to see if we can use eslint to accomplish all of
 our currently adopted horizon rules [3][4] AND to also add in the angular
 specific plugin [1][2]. But we can’t do this at the expense of the entire
 liberty release.


Again, I agree. The patch I've provided above sets up the horizon eslint
build, and adds about... 10K additional style violations. Since neither of
the builds pass, it's difficult to see the difference, yet either way you
should probably tweak the rules to match horizon's personal preferences.

Michael


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Salvatore Orlando
Some more comments inline.

Salvatore

On 16 June 2015 at 19:00, Carl Baldwin c...@ecbaldwin.net wrote:

 On Tue, Jun 16, 2015 at 12:33 AM, Kevin Benton blak...@gmail.com wrote:
 Do these kinds of test even make sense? And are they feasible at all? I
  doubt we have any framework for injecting anything in neutron code under
  test.
 
  I was thinking about this in the context of a lot of the fixes we have
  for other concurrency issues with the database. There are several
  exception handlers that aren't exercised in normal functional, tempest,
  and API tests because they require a very specific order of events
  between workers.
 
  I wonder if we could write a small shim DB driver that wraps the Python
  one for use in tests that just makes a desired set of queries take a
  long time or fail in particular ways? That wouldn't require changes to
  the neutron code, but it might not give us the right granularity of
  control.

 Might be worth a look.


It's a solution for pretty much mocking out the DB interactions. This
would work for fault injection in most neutron-server scenarios, both for
the RESTful and RPC interfaces, but we'll need something else to mock the
interactions with the data plane that are performed by agents. I think we
already have a mock for the AMQP bus on which we could just install hooks
for injecting faults.
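The shim-driver idea could be prototyped roughly as follows. This is a
hypothetical sketch: the class names are made up, and a real shim would
wrap the actual DB-API driver rather than the fake one used here:

```python
# Hypothetical sketch of the "shim DB driver" idea: wrap an existing
# DB-API-like object so that selected calls are slowed down or made to
# fail in a controlled way, letting tests exercise rarely-hit exception
# handlers. Not actual Neutron or oslo.db code.
import time


class FaultInjectingShim:
    def __init__(self, real_driver, delays=None, failures=None):
        self._real = real_driver
        self._delays = delays or {}      # method name -> seconds to sleep
        self._failures = failures or {}  # method name -> exception to raise

    def __getattr__(self, name):
        real_attr = getattr(self._real, name)

        def wrapper(*args, **kwargs):
            if name in self._failures:
                raise self._failures[name]
            time.sleep(self._delays.get(name, 0))
            return real_attr(*args, **kwargs)

        return wrapper


class FakeDriver:
    def query(self, sql):
        return "rows for " + sql


shim = FaultInjectingShim(FakeDriver(),
                          failures={"query": RuntimeError("deadlock")})
try:
    shim.query("SELECT 1")
except RuntimeError as exc:
    print("injected: %s" % exc)  # -> injected: deadlock
```

As noted in the thread, the catch is granularity: a per-method hook like
this cannot easily fail only the Nth query inside one transaction.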


 Finally, please note I am using DB-level locks rather than non-locking
  algorithms for making reservations.
 
  I thought these were effectively broken in Galera clusters. Is that not
  correct?

 As I understand it, if two writes to two different masters end up
 violating some db-level constraint then the operation will cause a
 failure regardless if there is a lock.



 Basically, on Galera, instead of waiting for the lock, each will
 proceed with the transaction.  Finally, on commit, a write
 certification will double check constraints with the rest of the
 cluster (with a write certification).  It is at this point where
 Galera will fail one of them as a deadlock for violating the
 constraint.  Hence the need to retry.  To me, non-locking just means
 that you embrace the fact that the lock won't work and you don't
 bother to apply it in the first place.


This is correct.

DB-level locks are broken in Galera. As Carl says, write sets are sent
out for certification after a transaction is committed, so the write
intent lock, or even primary key constraint violations, cannot be
verified before committing the transaction. As a result you incur a
write set certification failure, which is notably more expensive than an
instance-level rollback, and manifests as a DBDeadlock exception to the
OpenStack service.

Retrying a transaction is also a way of embracing this behaviour... you
just accept the idea of having to go through write set certification
again. Non-locking approaches instead aim at avoiding write set
certification failures altogether. The downside is that, especially in
high concurrency scenarios, the operation is retried many times, and this
might become even more expensive than dealing with the write set
certification failure.
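The retry-on-certification-failure behaviour can be sketched with a
generic decorator. This is a simplified illustration, not the actual
oslo.db or neutron retry code; the exception class and function names
are made up:

```python
# Simplified sketch of retrying a transaction when Galera write set
# certification fails and surfaces as a deadlock error. Generic
# illustration only, not the real oslo.db/neutron implementation.
import functools


class DBDeadlock(Exception):
    """Stand-in for the deadlock error raised on certification failure."""


def retry_on_deadlock(max_retries=5):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries - 1:
                        raise
                    # Real code would back off (often with jitter) before
                    # retrying the whole transaction from the start.
        return wrapper
    return decorator


calls = {"n": 0}


@retry_on_deadlock()
def reserve_quota():
    calls["n"] += 1
    if calls["n"] < 3:  # fail certification twice, then succeed
        raise DBDeadlock()
    return "reserved"


print(reserve_quota(), "after", calls["n"], "attempts")
# -> reserved after 3 attempts
```

Under high concurrency this loop is exactly the cost being discussed:
each failed attempt re-runs the whole transaction.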

But zzzeek (Mike Bayer) is coming to our aid; as part of his DBFacade
work, we should be able to treat an active/active cluster as
active/passive for writes and active/active for reads. This means the
write set certification issue just won't show up, and the benefits of
active/active clusters will still be attained for most operations (I
don't think there's any doubt that SELECT operations represent the
majority of all DB statements).


 If my understanding is incorrect, please set me straight.


You're already straight enough ;)



  If you do go that route, I think you will have to contend with DBDeadlock
  errors when we switch to the new SQL driver anyway. From what I've
 observed,
  it seems that if someone is holding a lock on a table and you try to grab
  it, pymysql immediately throws a deadlock exception.


 I'm not familiar with pymysql to know if this is true or not.  But,
 I'm sure that it is possible not to detect the lock at all on galera.
 Someone else will have to chime in to set me straight on the details.


DBDeadlocks without multiple workers also suggest we should look closely
at what eventlet is doing before placing the blame on pymysql. I don't
think the switch to pymysql changes the behaviour of the database
interface; I think it changes the way in which neutron interacts with the
database, thus unveiling concurrency issues that we did not spot before
because we were relying on a sort of implicit locking triggered by the
fact that some parts of MySQL-Python were implemented in C.



 Carl



Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-16 Thread Alec Hothan (ahothan)
Gordon,

These are all great points for RPC messages (also called CALL in oslo
messaging). There are similar ambiguous contracts for the other types of
messages (CAST and FANOUT).
I am worried about the general lack of interest from the community in
fixing this, as it looks like most people assume that oslo messaging is
good enough (with RabbitMQ) and hence there is no need to invest any time
in an alternative transport (not to mention that people generally prefer
to work on newer, trending areas of OpenStack than contribute to a
lower-level messaging layer).
I saw Sean Dague mention in another email that RabbitMQ is used by 95% of
OpenStack users - so does it make sense to invest in ZMQ? (A legitimate
question.) RabbitMQ has had a lot of issues, but there have been several
commits fixing some of them, so it would make sense IMHO to do another
status update and reevaluate the situation.

For OpenStack to be really production grade at scale, there is a need for
a very strong messaging layer, and this cannot be achieved with such loose
API definitions (regardless of what transport is used). This is what will
distinguish a great cloud OS platform from a so-so one.
There is also a need to define the roadmap for oslo messaging more clearly,
because the work is far from over. I see a need for clarifying the
following areas:
- validation at scale and HA
- security and encryption on the control plane

  Alec



On 6/16/15, 11:25 AM, Gordon Sim g...@redhat.com wrote:

On 06/12/2015 09:41 PM, Alec Hothan (ahothan) wrote:
 One long standing issue I can see is the fact that the oslo messaging
API
 documentation is sorely lacking details on critical areas such as API
 behavior during fault conditions, load conditions and scale conditions.

I very much agree, particularly on the contract/expectations in the face
of different failure conditions. Even for those who are critical of the
pluggability of oslo.messaging, greater clarity here would be of benefit.

As I understand it, the intention is that RPC calls are invoked on a
server at-most-once, meaning that in the event of any failure, the call
will only be retried by the oslo.messaging layer if it believes it can
ensure the invocation is not made twice.

If that is correct, stating so explicitly and prominently would be
worthwhile. The expectation for services using the API would then be to
decide on any retry themselves. An idempotent call could retry for a
configured number of attempts perhaps. A non-idempotent call might be
able to check the result via some other call and decide based on that
whether to retry. Giving up would then be a last resort. This would help
increase robustness of the system overall.

Again if the assumption of at-most-once is correct, and explicitly
stated, the design of the code can be reviewed to ensure it logically
meets that guarantee and of course it can also be explicitly tested for
in stress tests at the oslo.messaging level, ensuring there are no
unintended duplicate invocations. An explicit contract also allows
different approaches to be assessed and compared.
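To make the contract concrete, here is a small sketch of the caller-side policy described above, under the assumption that delivery really is at-most-once. Every name here is hypothetical - none of this is oslo.messaging API - the point is only that, because the server never sees a duplicate, the caller can safely check an out-of-band predicate before retrying a non-idempotent call:

```python
class RPCTimeout(Exception):
    """Stand-in for a transport-level failure reported by the messaging layer."""


def call_with_retry(invoke, is_done, max_attempts=3):
    # Under at-most-once delivery the server never sees a duplicate, so
    # after a failure the caller consults an out-of-band check before
    # deciding whether a retry is needed at all.
    for _ in range(max_attempts):
        try:
            return invoke()
        except RPCTimeout:
            if is_done():
                return "already-applied"  # reply was lost, but work happened
    raise RPCTimeout("gave up after %d attempts" % max_attempts)


# Simulate a call whose first attempt fails before the server acts.
state = {"applied": False, "calls": 0}

def flaky_invoke():
    state["calls"] += 1
    if state["calls"] == 1:
        raise RPCTimeout()
    state["applied"] = True
    return "applied"

result = call_with_retry(flaky_invoke, is_done=lambda: state["applied"])
```

If the guarantee were instead at-least-once, this whole policy would be wrong, which is exactly why stating the contract explicitly matters.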

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-16 Thread Jay Pipes

On 06/15/2015 10:55 AM, James Page wrote:

We understand and have communicated from the start of this
conversation that we will need to be able to maintain deltas between
Debian and Ubuntu - for both technical reasons, in the way the
distributions work (think Ubuntu main vs universe), as well as
objectives that each distribution has in terms of the way packaging
should work.


Hi James,

For the benefit of the TC members (such as myself) who do not have a
great background in packaging internals, would you mind describing one
or two of the deltas you mention above? I'm really wondering what these
things look like and how big the difference is from the Debian packaging
recipes (is that the right word, even?).


All the best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Salvatore Orlando
On 16 June 2015 at 18:49, Carl Baldwin c...@ecbaldwin.net wrote:

 On Thu, Jun 11, 2015 at 2:45 PM, Salvatore Orlando sorla...@nicira.com
 wrote:
  I have been following a different approach, and a set of patches,
  including a devref one [2], is up for review [3]. This hardly completes
  the job: more work is required on the testing side, both as unit and
  functional tests.
 
  As for the spec, since I honestly would like to spare myself the hassle
 of
  rewriting it, I would kindly ask our glorious drivers team if they're ok
  with me submitting a spec in the shorter format approved for Liberty
 without
  going through the RFE process, as the spec is however in the Kilo
 backlog.

 It took me a second read through to realize that you're talking to me
 among the drivers team.  Personally, I'm okay with this and our
 currently documented policy seems to allow for this until Liberty-1.


Great!



 I just hope that this isn't an indication that we're requiring too
 much in this new RFE process and scaring potential filers away.  I'm
 trying to learn how to write good RFEs, so let me give it a shot:

   Summary:  Need robust quota enforcement in Neutron.

   Further Information:  Neutron can allow exceeding the quota in
 certain cases.  Some investigation revealed that quotas in Neutron are
 subject to a race where parallel requests can each check quota and
 find there is just enough left to fulfill its individual request.
 Each request proceeds to fulfillment with no more regard to the quota.
 When all of the requests are eventually fulfilled, we find that they
 have exceeded the quota.

 Given my current knowledge of the RFE process, that is what I would
 file as a bug in launchpad and tag it with 'rfe.'
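The race in that summary is the classic check-then-act problem. A minimal in-memory illustration of the reservation-style fix - nothing here is Neutron code, the point is only that the quota check and the commit share one critical section, so parallel requests can no longer each "see" enough headroom:

```python
import threading

QUOTA = 5


class Reservations:
    """Atomic check-and-reserve: closes the window between checking the
    quota and fulfilling the request that the racy pattern leaves open."""

    def __init__(self, limit):
        self._limit = limit
        self._used = 0
        self._lock = threading.Lock()

    def reserve(self, amount=1):
        with self._lock:
            if self._used + amount > self._limit:
                return False      # over quota: rejected atomically
            self._used += amount  # check and commit in one critical section
            return True

    @property
    def used(self):
        return self._used


res = Reservations(QUOTA)
results = []

def worker():
    results.append(res.reserve())

threads = [threading.Thread(target=worker) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With 20 concurrent requests against a quota of 5, exactly 5 reservations succeed and the quota is never exceeded; the check-then-act version has no such guarantee.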


The RFE process is fine and relatively simple. I was just luring somebody
into giving me the exact text to put in it!
Jokes apart, I was suggesting this because, since it was a backlog spec,
it was already assumed to be something we wanted for Neutron, and it could
thus skip the RFE approval step.


  For testing, I wonder what strategy you would advise for implementing
  functional tests. I could do some black-box testing and verify quota
  limits are correctly enforced. However, I would also like to go a bit
  white-box and also verify that reservation entries are created and
  removed as appropriate when a reservation is committed or cancelled.
  Finally, it would be awesome if I were able to run functional tests in
  the gate on multi-worker servers, and inject delays or faults to verify
  the system behaves correctly when it comes to quota enforcement.

 Full black-box testing would be impossible to achieve without multiple
 workers, right?  We've proposed adding multiple worker processes to
 the gate a couple of times, if I recall, including a recent one to .


Yeah but Neutron was not as stable with multiple workers, and we had to
revert it (I think I did the revert)


 Fixing the failures has not yet been seen as a priority.


I wonder if this is because developers are too busy bikeshedding or chasing
unicorns, or because the issues we saw are mostly due to the way we run
tests in the gate and are not found by operators in real deployments
(another option is that operators are too afraid of Neutron's
unpredictability and do not even try turning on multiple workers).


 I agree that some whitebox testing should be added.  It may sound a
 bit double-entry to some but I don't mind, especially given the
 challenges around black-box testing.  Maybe Assaf can chime in here
 and set us straight.


I want white-box testing. I think it's important. Unit tests do this to an
extent, but they don't test the whole functionality. On the other hand,
black-box testing tests the functionality, but it does not tell you whether
the system is actually behaving as you expect. If it's not, it means you
have a fault. And that fault will eventually emerge as a failure. So we
need this kind of testing. However, I need hooks in Neutron in order to
achieve this. Like a sqlalchemy event listener that informs me of completed
transactions, for instance. Or hooks to perform fault injection - like
adding a delay, or altering the return value of a function. It would be
good for me to know whether this is in the testing roadmap for Liberty.
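For the fault-injection half of this, something as small as a context manager that temporarily wraps a method would do. A sketch, purely illustrative - Neutron ships no such helper today, and every name below is made up:

```python
import contextlib
import time


@contextlib.contextmanager
def inject_fault(cls, name, delay=0.0, override=None, exc=None):
    """Temporarily wrap cls.name to inject a delay, an exception, or a
    fixed return value - the kind of white-box hook discussed above."""
    original = getattr(cls, name)

    def wrapper(*args, **kwargs):
        if delay:
            time.sleep(delay)        # simulate a slow backend
        if exc is not None:
            raise exc                # simulate a failure
        if override is not None:
            return override          # alter the return value
        return original(*args, **kwargs)

    setattr(cls, name, wrapper)
    try:
        yield
    finally:
        setattr(cls, name, original)  # always restore the real method


class QuotaDriver:
    """Hypothetical class standing in for the code under test."""
    def count(self):
        return 3


driver = QuotaDriver()
with inject_fault(QuotaDriver, "count", override=99):
    faked = driver.count()
restored = driver.count()
```

A functional test could use something like this to force the delays and failures that only show up under load in the gate.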



  Do these kinds of test even make sense? And are they feasible at all? I
  doubt we have any framework for injecting anything in neutron code under
  test.

 Dunno.


  Finally, please note I am using DB-level locks rather than non-locking
  algorithms for making reservations. I can move to a non-locking
  algorithm (Jay proposed one for Nova in Kilo, and I could just implement
  that one), but first I would like to be convinced, with a decent proof
  (or sort of), that the extra cost deriving from collisions among workers
  is overshadowed by the cost of having to handle a write-set certification
  failure and retry the operation.
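For reference, the non-locking variant boils down to doing the quota check and the usage update in a single conditional UPDATE, so no explicit lock is held across the check. A sketch against SQLite (schema and names are illustrative, not Neutron's; on Galera you would additionally need a retry loop around the commit to handle write-set certification failures):

```python
import sqlite3

# Quota check and usage update fused into one atomic statement.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quota_usage (tenant TEXT PRIMARY KEY,"
             " used INTEGER, hard_limit INTEGER)")
conn.execute("INSERT INTO quota_usage VALUES ('demo', 0, 2)")


def reserve(conn, tenant, amount=1):
    cur = conn.execute(
        "UPDATE quota_usage SET used = used + ? "
        "WHERE tenant = ? AND used + ? <= hard_limit",
        (amount, tenant, amount))
    conn.commit()
    return cur.rowcount == 1  # 0 rows updated means the quota was exceeded


# With a limit of 2, the third reservation is rejected.
outcomes = [reserve(conn, "demo") for _ in range(3)]
```

Comparing this against the locking approach under contention is exactly the measurement being asked for above.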

 Do you have a reference describing the 

Re: [openstack-dev] [Cinder] Volume creation fails in Horizon

2015-06-16 Thread Mike Perez
On 13:00 Jun 15, Jayanthi, Swaroop wrote:
 Hi All,
 
 I am trying to create a volume for VMFS with a volume type (the selected 
 volume type has extra_specs). I am receiving a Volume creation failed 
 error if the volume type has extra specs.
 
 Does Cinder not support volume creation when the volume type has extra 
 specs? Is this expected behavior? Please let me know your thoughts.
 
 If not, how do I work around this issue from the Horizon UI when the 
 volume type has extra specs?
 
 Thanks and Regards,

Cinder does support volume creation if the volume type has extra specs. Volume
types with extra specs is information the Cinder scheduler uses in picking
a Cinder volume host. Can you please provide the Cinder scheduler log, as well
as information on the volume type's extra specs?

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Targeting icehouse-eol?

2015-06-16 Thread Alan Pevec
 let's release this one last point release (codename: Farewell?). I
 can do it next week after we finish pending reviews.

Remaining stable/icehouse reviews[1] have -2 or -1 except
https://review.openstack.org/176019 which I've asked
neutron-stable-maint to review.
Matt, anything else before we can tag 2014.1.5 and icehouse-eol ?

Cheers,
Alan

[1]
https://review.openstack.org/#/q/status:open+AND+branch:stable/icehouse+AND+%28project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer+OR+project:openstack/trove%29,n,z

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-16 Thread Michael Still
I don't think you need a spec for this (it's a refactor). That said,
I'd be interested in exploring how you deprecate the old flags. Can
you have more than one deprecated name for a single flag?
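(For what it's worth, oslo.config's deprecated_opts mechanism appears to accept a list, i.e. several old names mapping to one option, though I'd double-check before relying on it. The lookup logic would be roughly the following - this is an illustrative stand-alone sketch, not oslo.config itself:)

```python
# Hypothetical alias table: one canonical option, four deprecated names,
# matching the per-backend options Matt wants to consolidate.
DEPRECATED_ALIASES = {
    "mount_point_base": [
        "nfs_mount_point_base",
        "glusterfs_mount_point_base",
        "smbfs_mount_point_base",
        "quobyte_mount_point_base",
    ],
}


def resolve(conf, name):
    """Return conf[name], falling back to any deprecated alias that is set."""
    if name in conf:
        return conf[name]
    for alias in DEPRECATED_ALIASES.get(name, []):
        if alias in conf:
            # A real implementation would log a deprecation warning here.
            return conf[alias]
    return None


# An operator's old-style config keeps working through the alias.
old_style = {"glusterfs_mount_point_base": "/var/lib/nova/mnt"}
value = resolve(old_style, "mount_point_base")
```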

Michael

On Wed, Jun 17, 2015 at 7:29 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 6/16/2015 4:21 PM, Matt Riedemann wrote:

 The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all
 very similar.

 I want to extract a common base class that abstracts some of the common
 code and then let the sub-classes provide overrides where necessary.

 As part of this, I'm wondering if we could just have a single
 'mount_point_base' config option rather than one per backend like we
 have today:

 nfs_mount_point_base
 glusterfs_mount_point_base
 smbfs_mount_point_base
 quobyte_mount_point_base

  With libvirt you can only have one of these drivers configured per
  compute host, right?  So it seems to make sense that we could have one
  option used for all 4 different driver implementations and reduce some
  of the config option noise.

  I checked the os-brick change [1] proposed to nova to see if there would
  be any conflicts there, and so far it's not touching any of these
  classes, so it seems they could be worked on in parallel.

 Are there any concerns with this?

 Is a blueprint needed for this refactor?

 [1] https://review.openstack.org/#/c/175569/


 I threw together a quick blueprint [1] just for tracking.

 I'm assuming I don't need a spec for this.

 [1]
 https://blueprints.launchpad.net/nova/+spec/consolidate-libvirt-fs-volume-drivers


 --

 Thanks,

 Matt Riedemann


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

