Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-03 Thread Tom Fifield
On 02/05/14 22:09, Mark McClain wrote:
 
 On May 2, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:
 
 A non-insignificant number of devstack changes related to neutron
 seem to be neutron plugins having to do all kinds of manipulation of
 extra config files. The grenade upgrade issue in neutron was because of
 some placement change of config files. Neutron seems to have *a ton* of
 config files and is extremely sensitive to their locations/naming, which
 also seem to end up in flux.
 
 We have grown in the number of configuration files, and I do think some of the 
 design decisions made several years ago should probably be revisited.  One of 
 the drivers of multiple configuration files is the way that Neutron is 
 currently packaged [1][2].  We're packaged significantly differently from the 
 other projects, so the thinking in the early years was that each 
 plugin/service, since it was packaged separately, needed its own config file.  
 This causes problems because changing the plugin often means changing the init 
 script invocation rather than only the contents of the init script.  I'd like 
 to see Neutron become a single package, similar to the way Cinder is packaged, 
 with the default config being ML2.
 

 Is there an overview somewhere to explain this design point?
 
 Sadly no.  It’s a historical convention that needs to be reconsidered.
 

 All the other services have a single config file designated at
 startup, but neutron services seem to need a bunch of config files
 specified correctly on the CLI to function (see this process list from a recent
 grenade run - http://paste.openstack.org/show/78430/ - note you will have
 to scroll horizontally for some of the neutron services).

 Mostly it would be good to understand this design point, and if it could
 be evolved back to the OpenStack norm of a single config file for the
 services.

 
 +1 to evolving into a more limited set of files.  The trick is how we 
 consolidate the agent, server, plugin and/or driver options, or maybe we don't 
 consolidate and use config-dir more.  In some cases the files share a set of 
 common options, and in other cases there are divergent options [3][4].  
 Outside of testing, the agents are not installed on the same system as the 
 server, so we need to ensure that the agent configuration files can stand 
 alone.  
 
 To throw something out, what if we moved to using config-dir for optional 
 configs, since it would still support plugin-scoped configuration files?
 
 Neutron Servers/Network Nodes
 /etc/neutron.d
   neutron.conf  (Common Options)
   server.d (all plugin/service config files )
   service.d (all service config files)
 
 
 Hypervisor Agents
 /etc/neutron
   neutron.conf
   agent.d (Individual agent config files)
 
 
 The invocations would then be static:
 
 neutron-server --config-file /etc/neutron/neutron.conf --config-dir 
 /etc/neutron/server.d
 
 Service Agents:
 neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-dir 
 /etc/neutron/service.d
 
 Hypervisors (assuming the consolidated L2 agent is finished this cycle):
 neutron-l2-agent --config-file /etc/neutron/neutron.conf --config-dir 
 /etc/neutron/agent.d
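 
 For reference, a minimal sketch (not Neutron code; it assumes standard
 oslo.config behaviour, and the registered option is just one real option used
 for illustration) of how a service would consume the two flags above. Files
 found in the --config-dir are parsed in sorted order after the --config-file:
 
     import sys
 
     from oslo.config import cfg   # packaged as oslo_config in later releases
 
     CONF = cfg.CONF
     # Normally Neutron registers its own options; 'core_plugin' is only an
     # example here.
     CONF.register_opts([cfg.StrOpt('core_plugin')])
 
     CONF(sys.argv[1:], project='neutron')
     print(CONF.core_plugin)
 
     # Invoked as:
     #   python config_sketch.py --config-file /etc/neutron/neutron.conf \
     #                           --config-dir /etc/neutron/server.d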
 
 Thoughts?

What do operators want?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Fulfilling Operator Requirements: Driver / Management API

2014-05-03 Thread Adam Harwell
My comments in red (sorry again).

From: Eugene Nikanorov enikano...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, May 2, 2014 5:08 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Fulfilling Operator Requirements: 
Driver / Management API

Hi Adam,

My comments inline:


On Fri, May 2, 2014 at 1:33 AM, Adam Harwell 
adam.harw...@rackspace.com wrote:
I am sending this now to gauge interest and get feedback on what I see as an 
impending necessity — updating the existing haproxy driver, replacing it, or 
both.
I agree with Stephen's first point here.
For the HAProxy driver to support advanced use cases like routed mode, its agent 
would need to change significantly and take on some capabilities of the L3 agent.
In fact, I'd suggest making an additional driver, not for haproxy in VMs, but 
for... dedicated haproxy nodes.
A dedicated haproxy node is a host (similar to a compute node) with an L2 agent 
and an lbaas agent (not necessarily the existing one) on it.

In fact, it's essentially the same model as used right now, but I think it has 
its advantages over haproxy-in-VM, at least:
- the plugin driver doesn't need to manage the VM life cycle (no orchestration)
- immediate, natural multitenant support with isolated networks
- instead of adding haproxy in a VM, you add a process (which is both faster and 
more efficient)
Further scaling is achieved by adding physical haproxy nodes; existing agent 
health reporting will make them available for load balancer scheduling 
automatically.

I think that driver sounds like a good idea — I think we agree in essence, that 
there will need to be drivers to provide a variety of different approaches. I 
guess the question becomes, is there a smart way to accomplish this?

HAProxy: This references two things currently, and I feel this is a source of 
some misunderstanding. When I refer to HAProxy (capitalized), I will be 
referring to the official software package (found here: http://haproxy.1wt.eu/ 
), and when I refer to "haproxy" (lowercase, and in quotes) I will be referring 
to the neutron-lbaas driver (found here: 
https://github.com/openstack/neutron/tree/master/neutron/services/loadbalancer/drivers/haproxy
 ). The fact that the neutron-lbaas driver is named directly after the software 
package seems very unfortunate, and while it is not directly in the scope of 
what I'd like to discuss here, I would love to see it changed to more 
accurately reflect what it is --  one specific driver implementation that 
coincidentally uses HAProxy as a backend. More on this later.
We have also been referring to the existing driver as haproxy-on-host.
Ok, I will use that term from now on (I just hadn't seen it anywhere, and you 
can understand how it is confusing to just see "haproxy" as the driver name).


Operator Requirements: The requirements that can be found on the wiki page 
here:  
https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements#Operator_Requirements
 and focusing on (but not limited to) the following list:
* Scalability
* DDoS Mitigation
* Diagnostics
* Logging and Alerting
* Recoverability
* High Availability (this is in the User Requirements section, but will be 
largely up to the operator to handle, so I would include it when discussing 
Operator Requirements)
Those requirements are of very different kinds and they are going to be 
addressed by quite different components of lbaas, not solely by the driver.

Management API: A restricted API containing resources that Cloud Operators 
could access, including most of the list of Operator Requirements (above).
Work is being done on this front: we're designing a way for plugin drivers 
to expose their own API, which is specifically needed for an operator API that 
might not be common between providers.
Ok, this sounds like what some other people mentioned, and does sound like 
essentially what we'd need to do for this to work in any real capacity. The 
question I have then is, do we still need to talk about this at all, or just 
agree to make sure this method works, and then go our own ways implementing our 
Management APIs?


Load Balancer (LB): I use this term very generically: essentially a logical 
entity that represents one use case. As used in the sentences: "I have a Load 
Balancer in front of my website." or "The Load Balancer I set up to offload SSL 
decryption is lowering my CPU load nicely."

--
 Overview
--
What we've all been discussing for the past month or two (the API, Object 
Model, etc) is being directly driven by the User and Operator Requirements that 
have somewhat recently been enumerated (many thanks to everyone who has 
contributed to that discussion!). With that in mind, 

Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-03 Thread Adam Harwell
Sounds about right to me. I guess I agree with your agreement. :)
Does anyone actually oppose this arrangement?

--Adam

From: Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, May 2, 2014 7:53 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

Hi guys,

Yep, so what I'm hearing is that we should be able to assume that all 
members in a single pool are either adjacent (i.e. layer-2 connected) or routable 
from that subnet.

Adam-- I could see it going either way with regard to how to communicate with 
members:  If the particular device that the provider uses lives outside tenant 
private networks, the driver for said devices would need to make sure that VIFs 
(or some logical equivalent) are added such that the devices can talk to the 
members. This is also the case for virtual load balancers (or other devices) 
which are assigned to the tenant but live on an external network. (In this 
topology, VIP subnet and pool subnet could differ, and the driver needs to make 
sure that the load balancer has a virtual interface/neutron port + IP address 
on the pool subnet.)

There's also the option that if the device being used for load balancing 
exists as a virtual appliance that can be deployed on an internal network, one 
can make it publicly accessible by adding a neutron floating IP (i.e. a static 
NAT rule) that forwards any traffic destined for a public external IP to the 
load balancer's internal IP address.  (In this topology, VIP subnet and pool 
subnet would be the same thing.) The nifty thing about this topology is that 
load balancers that don't have this static NAT rule added are implicitly 
private to the tenant internal subnet.
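
A minimal sketch of that floating-IP step (assuming python-neutronclient's v2.0
API; credentials and UUIDs below are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # Associate a floating IP from the external network with the neutron
    # port of the load balancer's internal VIP; omit this step to keep the
    # VIP private to the tenant network.
    fip = neutron.create_floatingip({
        'floatingip': {
            'floating_network_id': 'EXTERNAL-NET-UUID',
            'port_id': 'VIP-PORT-UUID',
        }
    })['floatingip']
    print(fip['floating_ip_address'])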

Having seen what our customers do with their topologies, my gut reaction is to 
say that the 99.9% use case is that all the members of a pool will be in the 
same subnet, or routable from the pool subnet. And I agree that if someone has 
a really strange topology in use that doesn't work with this assumption, it's 
not the job of LBaaS to try and solve this for them.

Anyway, I'm hearing general agreement that subnet_id should be an attribute of 
the pool.
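
For illustration only (this is just the shape a create-pool request body might
take with subnet_id on the pool, not a spec), members would then be assumed
reachable from that subnet:

    pool_request = {
        "pool": {
            "name": "web-pool",
            "protocol": "HTTP",
            "lb_method": "ROUND_ROBIN",
            # subnet_id lives on the pool; members are either on this subnet
            # or routable from it, per the discussion above.
            "subnet_id": "POOL-SUBNET-UUID",
        }
    }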


On Fri, May 2, 2014 at 5:24 AM, Eugene Nikanorov 
enikano...@mirantis.com wrote:
Agree with Sam here.
Moreover, I think it makes sense to leave subnet an attribute of the pool,
which would mean that members reside in that subnet or are available (routable) 
from that subnet, and the LB should have a port on that subnet.

Thanks,
Eugene.


On Fri, May 2, 2014 at 3:51 PM, Samuel Bercovici 
samu...@radware.com wrote:
I think that associating a VIP subnet and a list of member subnets is a good 
choice.
This declaratively states where the configuration expects layer 2 proximity.
The minimal case would be the VIP subnet only, which in essence means the VIP 
and members are expected on the same subnet.

Any member outside the specified subnets is supposedly accessible via routing.

It might be an option to state the static route to use to access such member(s).
In many cases the needed static routes could also be computed automatically.

Regards,
   -Sam.

On 2 May 2014, at 03:50, Stephen Balukoff 
sbaluk...@bluebox.net wrote:

Hi Trevor,

I was the one who wrote that use case based on discussion that came out of the 
question I wrote to the list last week about SSL re-encryption: someone had 
stated that sometimes pool members are local, and sometimes they are hosts 
across the internet, accessible either through the usual default route, or via 
a VPN tunnel.

The point of this use case is to make the distinction that if we associate a 
neutron_subnet with the pool (rather than with the member), then some members 
of the pool that don't exist in that neutron_subnet might not be accessible 
from that neutron_subnet.  However, if the behavior of the system is such that 
attempting to reach a host through the subnet's default route still works 
(whether that leads to communication over a VPN or the usual internet routes), 
then this might not be a problem.

The other option is to associate the neutron_subnet with a pool member. But in 
this case there might be problems too. Namely:

  *   The device or software that does the load balancing may need to have an 
interface on each of the member subnets, and presumably an IP address from 
which to originate requests.
  *   How does one resolve cases where subnets have overlapping IP ranges?

In the end, it may be simpler not to associate neutron_subnet with a pool at 
all. Maybe it only makes sense to do this for a VIP, and then the assumption 
would 

[openstack-dev] nova compute error

2014-05-03 Thread abhishek jain
Hi all

I want to boot a VM from the controller node onto the compute node using
devstack. All my services, such as nova, q-agt, nova-compute, neutron, etc., are
running properly on the compute node as well as on the controller node. I'm
also able to boot VMs on the controller node from the OpenStack dashboard.
However, the issue occurs when I'm booting a VM from the controller node onto
the compute node.
Following is the error in the nova-compute logs when I'm trying to boot a VM
on the compute node from the controller node:


2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp if
ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',
dom=self)
2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
libvirtError: Error while building firewall: Some rules could not be
created for interface tap74d2ff08-7f: Failure to execute command '$EBT -t
nat -A libvirt-J-tap74d2ff08-7f  -j J-tap74d2ff08-7f-mac' : 'Unable to
update the kernel. Two possible causes:
2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp 1.
Multiple ebtables programs were executing simultaneously. The ebtables
2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
userspace tool doesn't by default support multiple ebtables programs
running
2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
concurrently. The ebtables option --concurrent or a tool like flock can be
2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp used
to support concurrent scripts that update the ebtables kernel tables.
2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp 2. The
kernel doesn't support a certain ebtables extension, consider
2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
recompiling your kernel or insmod the extension.
2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp .'.
2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
2014-04-30 05:10:29.066 17049 DEBUG nova.openstack.common.rpc.amqp [-]
Making synchronous call on conductor ... multicall /opt/stack/nova/nova/o

From the logs it appears that the command '$EBT -t nat -A
libvirt-J-tap74d2ff08-7f -j J-tap74d2ff08-7f-mac' is not able to update the
kernel with the ebtables rules. I have also enabled the ebtables modules(*)
in my kernel.
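
For reference, the flock-style serialization that the error message suggests
would look something like this minimal Python sketch (assuming Linux; it only
illustrates the workaround, not a fix for the underlying race):

    import fcntl
    import subprocess

    def run_ebtables(args, lock_path='/var/lock/ebtables.example.lock'):
        # Take an exclusive lock so only one ebtables invocation touches the
        # kernel tables at a time; the lock is released when the file closes.
        with open(lock_path, 'w') as lock:
            fcntl.flock(lock, fcntl.LOCK_EX)
            return subprocess.call(['ebtables'] + list(args))

    # e.g. run_ebtables(['-t', 'nat', '-L'])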

Please help me regarding this.
Also, is there any other way of booting the VM without updating the rules in
the kernel?

Thanks
Abhishek Jain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-03 Thread Kenichi Oomichi
Hi David,

 -Original Message-
 From: David Kranz [mailto:dkr...@redhat.com]
 Sent: Friday, May 02, 2014 2:30 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft
 
  The verify_tempest_config tool was an attempt at a compromise between being
  explicit and also using auto discovery, by using the APIs to help create a
  config file that reflects the current configuration state of the services.
  It's still a WIP though, and it's really just meant to be a user tool. I don't
  ever see it being included in our gate workflow.

 I think we have to accept that there are two legitimate use cases for
 tempest configuration:
 
 1. The entity configuring tempest is the same as the entity that
 deployed. This is the gate case.
 2. Tempest is to be pointed at an existing cloud but was not part of a
 deployment process. We want to run the tests for the supported
 services/extensions.

Thanks for clarifying. I have heard some requests for use case 2 above,
and autodiscovery would be nice for it.

 We should modularize the code around discovery so that the discovery
 functions return the changes to conf that would have to be made. The
 callers can then decide how that information is to be used. This would
 support both use cases. I have some changes to the verify_tempest_config
 code that does this which I will push up if the concept is agreed.

Great.

BTW, the current API extension lists for Nova (api_extensions / api_v3_extensions
in tempest.conf) don't work at all, because tests decorated with requires_ext()
don't exist in the Nova API tests. I will add requires_ext() to the Nova API
tests; that will be worthwhile even if autodiscovery is not implemented.
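
A sketch of the kind of decoration I mean (it assumes Tempest's test.requires_ext
decorator keeps its extension/service keyword form; the test body is only a
placeholder):

    from tempest.api.compute import base
    from tempest import test

    class HypervisorExtensionTest(base.BaseV2ComputeAdminTest):

        @test.requires_ext(extension='os-hypervisors', service='compute')
        def test_needs_hypervisors_extension(self):
            # Skipped automatically when 'os-hypervisors' is not in the
            # configured compute extension list in tempest.conf.
            pass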


Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-03 Thread Kenichi Oomichi

Hi Matthew,

 -Original Message-
 From: Matthew Treinish [mailto:mtrein...@kortar.org]
 Sent: Friday, May 02, 2014 12:36 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft
 
   When adding new API parameters to the existing APIs, these parameters 
   should
   be API extensions according to the above guidelines. So we have three 
   options
   for handling API extensions in Tempest:
  
   1. Consider them as optional, and cannot block the incompatible
   changes of them. (Current)
   2. Consider them as required based on tempest.conf, and can block the
   incompatible changes.
   3. Consider them as required automatically with microversioning, and
   can block the incompatible changes.
 
   I investigated the above option 3, and have one question
   about the current Tempest implementation.
  
   Now the verify_tempest_config tool gets the API extension list from each
   service, including Nova, and verifies the API extension config of tempest.conf
   based on the list.
   Can we use the list for selecting which extension tests run, instead of
   only for verification?
   As you said in the previous IRC meeting, current API tests will be
   skipped if a test is decorated with requires_ext() and the
   extension is not specified in tempest.conf. I feel it would be nice
   if Tempest got the API extension list and selected API tests automatically
   based on the list.
 
 So we used to do this type of autodiscovery in tempest, but we stopped because
 it let bugs slip through the gate. This topic has come up several times in the
 past, most recently in discussing reorganizing the config file. [1] This is 
 why
 we put [2] in the tempest README. I agree autodiscovery would be simpler, but
 the problem is that, because we use tempest as the gate, if there was a bug that
 caused autodiscovery to be different from what was expected, the tests would just
 silently skip. This would often go unnoticed because of the sheer volume of
 tempest tests. (I think we're currently at ~2300.) I also feel that explicitly
 defining what is expected to be enabled is a key requirement for branchless
 tempest for the same reason.

Thanks for the explanation, I understand the purpose of static config for
the gate. As you said, we could miss some unexpected skips due to the sheer
test volume. But autodiscovery still seems attractive to me, because it would
make it easy to run Tempest against production environments. So how about
implementing autodiscovery as an option which is disabled by default in
tempest.conf?
For example, the current config for Nova v3 API extensions is

 api_v3_extensions=all

and we would be able to specify auto instead of all if autodiscovery
is wanted:

 api_v3_extensions=auto

It could be marked as experimental on the gate, and we could periodically check
the number of test skips by comparing against the regular gate.
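
A rough sketch of the idea (the helper name is illustrative, not existing
Tempest code): keep the static list as the default, and only query the service
when the operator explicitly opted in with auto:

    def effective_extensions(configured, discover):
        """configured: the extension list from tempest.conf;
        discover: a callable that hits the service's extensions API and
        returns the enabled extension aliases."""
        if configured == ['auto']:
            return sorted(discover())
        # 'all' or an explicit list: keep today's behaviour unchanged.
        return configured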

 The verify_tempest_config tool was an attempt at a compromise between being
 explicit and also using auto discovery, by using the APIs to help create a
 config file that reflects the current configuration state of the services.
 It's still a WIP though, and it's really just meant to be a user tool. I don't
 ever see it being included in our gate workflow.

I see, thanks.

  In addition, the methods which are decorated with requires_ext() are
  test methods now, but I think it would be better to decorate client
  methods (get_hypervisor_list, etc.) because each extension's loading
  condition affects the available APIs.
 
 So my concern with decorating the client methods directly is that it might 
 raise
 the skip too late and we'll end up leaking resources. But, I haven't tried it 
 so
 it might work fine without leaking anything. I agree that it would make 
 skipping
 based on extensions easier because it's really the client methods that depend 
 on
 the extensions. So give it a shot and let's see if it works. The only other
 complication is the scenario and CLI tests, because they don't use the tempest
 clients. But, we can just handle that by decorating the test methods like we 
 do
 now.

Thanks again. I see that the current implementation is nice for avoiding
unnecessary operations against the environment.


Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-03 Thread Kashyap Chamarthy
On Fri, May 02, 2014 at 08:18:18AM -0500, Kyle Mestery wrote:
 On Fri, May 2, 2014 at 6:39 AM, Sean Dague s...@dague.net wrote:
  A non-insignificant number of devstack changes related to neutron
  seem to be neutron plugins having to do all kinds of manipulation of
  extra config files. The grenade upgrade issue in neutron was because of
  some placement change of config files. Neutron seems to have *a ton* of
  config files and is extremely sensitive to their locations/naming, which
  also seem to end up in flux.
 
  Is there an overview somewhere to explain this design point?
 
  All the other services have a single config file designated at
  startup, but neutron services seem to need a bunch of config files
  specified correctly on the CLI to function (see this process list from a recent
  grenade run - http://paste.openstack.org/show/78430/ - note you will have
  to scroll horizontally for some of the neutron services).
 
  Mostly it would be good to understand this design point, and if it could
  be evolved back to the OpenStack norm of a single config file for the
  services.
 
 I think this is entirely possible. Each plugin has its own
 configuration, and this is usually done in its own section. In
 reality, it's not necessary to have more than a single config file, as
 long as the sections in the configuration file are unique.

FWIW, I think this would definitely be useful - to have a single Neutron
config file with unique sections. I frequently test two-node
OpenStack setups with Neutron, where I need to keep track of at least
these config files (correct me if I'm doing something inefficiently) for
Neutron alone.

Controller node
---

  1. /etc/neutron/neutron.conf
  2. /etc/neutron/plugin.ini
  3. /etc/neutron/dhcp_agent.ini
  4. /etc/neutron/l3_agent.ini
   - I notice the dhcp_agent and l3_agent configs are similar most of the
 time
  5. /etc/neutron/dnsmasq.conf 
   - I use this so that I can log dnsmasq details to a file instead
 of journalctl
  6. /etc/neutron/metadata_agent.ini
  7. /etc/neutron/api-paste.ini 
 - This is auto-generated I believe


Compute node


  1. /etc/neutron/neutron.conf
  2. /etc/neutron/plugin.ini
  3. /etc/neutron/metadata_agent.ini

Not considering iptables/firewalld configs.
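
To illustrate the unique-sections idea in code form, a minimal oslo.config
sketch (the group and option names are only examples, not Neutron's actual
registrations): each component reads the same single file but only sees its
own group.

    from oslo.config import cfg   # packaged as oslo_config in later releases

    CONF = cfg.CONF
    # Example registrations; real Neutron components register their own opts.
    CONF.register_opts([cfg.StrOpt('interface_driver')], group='l3_agent')
    CONF.register_opts([cfg.StrOpt('dhcp_driver')], group='dhcp_agent')

    CONF(['--config-file', '/etc/neutron/neutron.conf'], project='neutron')
    # Each agent reads only its own section of the shared file:
    print(CONF.l3_agent.interface_driver, CONF.dhcp_agent.dhcp_driver)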


-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-03 Thread gustavo panizzo gfa
On 05/02/2014 11:09 AM, Mark McClain wrote:
 
 To throw something out, what if we moved to using config-dir for optional 
 configs, since it would still support plugin-scoped configuration files?
 
 Neutron Servers/Network Nodes
 /etc/neutron.d
   neutron.conf  (Common Options)
   server.d (all plugin/service config files )
   service.d (all service config files)
 
 
 Hypervisor Agents
 /etc/neutron
   neutron.conf
   agent.d (Individual agent config files)
 
 
 The invocations would then be static:
 
 neutron-server --config-file /etc/neutron/neutron.conf --config-dir 
 /etc/neutron/server.d
 
 Service Agents:
 neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-dir 
 /etc/neutron/service.d
 
 Hypervisors (assuming the consolidated L2 agent is finished this cycle):
 neutron-l2-agent --config-file /etc/neutron/neutron.conf --config-dir 
 /etc/neutron/agent.d
 
 Thoughts?

I like this idea. It makes it easy to use a configuration manager to set up
neutron, and it also fits real life, where sometimes you need
more than one l3 agent running on the same box.



-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] sphinxcontrib-docbookrestapi broken with latest pip

2014-05-03 Thread Jeremy Stanley
As of TODAY's pip 1.5.5 release, which vendors in a newer setuptools
(3.4.4), sphinxcontrib-docbookrestapi is uninstallable pending
approval of https://review.openstack.org/#/c/84132 and we have
gate jobs on OpenStack projects which cannot pass as a result.

URL: 
http://logs.openstack.org/15/91815/4/gate/gate-requirements-integration-dsvm/44d6cdf/
 

I would open a bug, but can't figure out where the bug tracker for
stackforge/sphinxcontrib-docbookrestapi is supposed to reside. As
Cyril Roelandt is the sole core reviewer/approver on the project,
I'm hoping he is reading the list (or that someone in contact with
him can bring this issue to his attention). Thanks for your timely
assistance in this matter?
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] sphinxcontrib-docbookrestapi broken with latest pip

2014-05-03 Thread Anne Gentle
You could log a bug against openstack-manuals and tag it with doc-builds.
That's where we track other doc tooling bugs.

It would be great to get that renamed, since it's a wadlrestapi converter.


On Sat, May 3, 2014 at 5:48 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 As of TODAY's pip 1.5.5 release, which vendors in a newer setuptools
 (3.4.4), sphinxcontrib-docbookrestapi is uninstallable pending
 approval of https://review.openstack.org/#/c/84132 and we have
 gate jobs on OpenStack projects which cannot pass as a result.

 URL:
 http://logs.openstack.org/15/91815/4/gate/gate-requirements-integration-dsvm/44d6cdf/

 I would open a bug, but can't figure out where the bug tracker for
 stackforge/sphinxcontrib-docbookrestapi is supposed to reside. As
 Cyril Roelandt is the sole core reviewer/approver on the project,
 I'm hoping he is reading the list (or that someone in contact with
 him can bring this issue to his attention). Thanks for your timely
 assistance in this matter?
 --
 Jeremy Stanley


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] sphinxcontrib-docbookrestapi broken with latest pip

2014-05-03 Thread Jeremy Stanley
On 2014-05-03 18:41:05 -0500 (-0500), Anne Gentle wrote:
 You could log a bug against openstack-manuals and tag it with
 doc-builds. That's where we track other doc tooling bugs.

Thanks Anne! https://launchpad.net/bugs/1315768

 It would be great to get that renamed, since it's a wadlrestapi
 converter.

Should it still remain in StackForge, or is it under the purview of
the Docs Team now?
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] sphinxcontrib-docbookrestapi broken with latest pip

2014-05-03 Thread Anne Gentle
On Sat, May 3, 2014 at 7:44 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-05-03 18:41:05 -0500 (-0500), Anne Gentle wrote:
  You could log a bug against openstack-manuals and tag it with
  doc-builds. That's where we track other doc tooling bugs.

 Thanks Anne! https://launchpad.net/bugs/1315768

  It would be great to get that renamed, since it's a wadlrestapi
  converter.

 Should it still remain in StackForge, or is it under the purview of
 the Docs Team now?
 --


StackForge is fine -- not all the projects are using it.


  Jeremy Stanley


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-05-03 Thread Qiming Teng
On Thu, May 01, 2014 at 08:49:11PM +0800, Jay Lau wrote:
 Jay Pipes and all, I'm planning to merge this topic to
 http://junodesignsummit.sched.org/event/77801877aa42b595f14ae8b020cd1999 after
 some discussion in this week's Gantt IRC meeting; hope it is OK.
 
 Thanks!

The link above didn't work.  How about telling us the name of the topic?


 2014-05-01 19:56 GMT+08:00 Day, Phil philip@hp.com:
 
   
In the original API there was a way to remove members from the group.
This didn't make it into the code that was submitted.
  
    Well, it didn't make it in because it was broken. If you add an instance
    to a group after it's running, a migration may need to take place in order
    to keep the semantics of the group. That means that for a while the policy
    will be violated, and if we can't migrate the instance somewhere to satisfy
    the policy then we need to either drop it back out, or be in violation.
    Either some additional states (such as being queued for inclusion in a
    group, etc.) may be required, or some additional footnotes on what it means
    to be in a group might have to be made.
   
    It was for the above reasons, IIRC, that we decided to leave that bit out,
    since the semantics and consequences clearly hadn't been fully thought out.
    Obviously they can be addressed, but I fear the result will be ... ugly.
    I think there's a definite possibility that leaving out those dynamic
    functions will look more desirable than an actual implementation.
   
  If we look at a server group as a general container of servers, that may
  have an attribute that expresses scheduling policy, then it doesn't seem too
  ugly to restrict the conditions on which an add is allowed to only those
  that don't break the (optional) policy.  Wouldn't even have to go to the
  scheduler to work this out.
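
For what it's worth, the check Phil describes could be as small as this sketch
(the policy names follow the existing affinity/anti-affinity conventions; the
data shapes are illustrative, not Nova's internal ones):

    def add_allowed(policy, member_hosts, candidate_host):
        """Return True if adding an instance on candidate_host to a group
        whose members run on member_hosts keeps the policy satisfied."""
        if policy == 'anti-affinity':
            return candidate_host not in member_hosts
        if policy == 'affinity':
            return not member_hosts or candidate_host in member_hosts
        return True  # no policy: the group is just a container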
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-03 Thread Bohai (ricky)
+1  
I like this idea.

Best regards to you.
Ricky


 -Original Message-
 From: Alexandre Viau [mailto:alexandre.v...@savoirfairelinux.com]
 Sent: Friday, May 02, 2014 5:17 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] Monitoring as a Service
 
 Hello Everyone!
 
 My name is Alexandre Viau from Savoir-Faire Linux.
 
 We have submited a Monitoring as a Service blueprint and need feedback.
 
 Problem to solve: Ceilometer's purpose is to track and *measure/meter* usage
 information collected from OpenStack components (originally for billing). 
 While
 Ceilometer is usefull for the cloud operators and infrastructure metering, it 
 is
 not a *monitoring* solution for the tenants and their services/applications
 running in the cloud because it does not allow for service/application-level
 monitoring and it ignores detailed and precise guest system metrics.
 
 Proposed solution: We would like to add Monitoring as a Service to Openstack
 
  Just like Rackspace's Cloud Monitoring, the new monitoring service - let's 
  call it
  OpenStackMonitor for now - would let users/tenants keep track of their
  resources on the cloud and receive instant notifications when they require
 attention.
 
 This RESTful API would enable users to create multiple monitors with
  predefined checks, such as PING, CPU usage, HTTPS and SMTP, or custom
 checks performed by a Monitoring Agent on the instance they want to monitor.
 
 Predefined checks such as CPU and disk usage could be polled from Ceilometer.
 Other predefined checks would be performed by the new monitoring service
 itself. Checks such as PING could be flagged to be performed from multiple
 sites.
 
 Custom checks would be performed by an optional Monitoring Agent. Their
 results would be polled by the monitoring service and stored in Ceilometer.
 
 If you wish to collaborate, feel free to contact me at
 alexandre.v...@savoirfairelinux.com
 The blueprint is available here:
 https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service
 
 Thanks!
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev