Re: [Openstack] [Keystone] Keystone performance work

2013-12-16 Thread Peter Feiner
On Mon, Dec 16, 2013 at 2:25 AM, Neependra Khare nkh...@redhat.com wrote:
 Any pointers on configuring multi-process Keystone would be helpful. I see a
 method mentioned in the "Run N keystone Processes" section of the following:
 http://blog.gridcentric.com/bid/318277/Boosting-OpenStack-s-Parallel-Performance


Hi Neependra,

Here's an up-to-date version of my keystone workers patch:
https://github.com/peterfeiner/keystone/commit/fdf2b2e4e8ca77133bd855144409c92ee9ccc512.
Apply the patch and set workers=N in /etc/keystone/keystone.conf.
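
For example, the keystone.conf change would look roughly like this (a
sketch: I'm assuming the option lands in the [DEFAULT] section, and the
worker count of 8 is just an illustration; size it to your controller's
cores):

    # /etc/keystone/keystone.conf
    [DEFAULT]
    # roughly one keystone process per core, e.g. on an 8-core controller
    workers = 8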

Unfortunately, I haven't had time to work on my abandoned review for
this patch (https://review.openstack.org/#/c/42967/). However, I did
fix the race conditions that were causing tempest test failures
(https://github.com/openstack/keystone/commit/8f685962a1d761107653f3a55757b588d0a3a67e),
so all that's left to do is add some unit tests.

Peter



Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly

2013-12-10 Thread Peter Feiner
On Tue, Dec 10, 2013 at 7:48 AM, Nathani, Sreedhar (APS)
sreedhar.nath...@hp.com wrote:
 My setup has 17 L2 agents (16 compute nodes, one network node). Setting
 minimize_polling helped to reduce the CPU
 utilization by the L2 agents, but it did not help instances get their IP
 during first boot.

 With minimize_polling enabled, fewer instances could get an
 IP than without the minimize_polling fix.

 Once we reach a certain number of ports (in my case 120), during
 subsequent concurrent instance deployments (30 instances),
 updating the port details in the dnsmasq hosts file takes a long time, which
 delays the instances getting their IP addresses.

To figure out what the next problem is, I recommend that you determine
precisely which part of "updating the port details in the dnsmasq host"
is taking a long time. Is the DHCPDISCOVER packet from the VM arriving
before the dnsmasq process's hostsfile is updated and dnsmasq is
SIGHUP'd? Is the VM sending the DHCPDISCOVER request before its tap
device is wired to the dnsmasq process (i.e., determine the status of
the chain of bridges at the time the guest sends the DHCPDISCOVER
packet)? Perhaps the DHCPDISCOVER packet is being dropped because the
iptables rules for the VM's port haven't been instantiated when the
DHCPDISCOVER packet is sent. Or perhaps something else, such as the
replies being dropped. These are my only theories at the moment.
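
To narrow it down, a rough way to watch both ends at once (just a
sketch; the network ID, tap device name, and VM MAC below are
placeholders to substitute from your environment, and the hostsfile
path assumes the default neutron state_path):

    # network node: DHCP traffic as seen inside the dnsmasq namespace
    sudo ip netns exec qdhcp-<network-id> tcpdump -n -i any port 67 or port 68

    # compute node: DHCPDISCOVERs leaving the VM's tap device
    sudo tcpdump -n -i tap<port-id> port 67 or port 68

    # check when the VM's MAC/IP actually landed in the dnsmasq hostsfile
    grep -i <vm-mac> /var/lib/neutron/dhcp/<network-id>/host

Comparing the timestamps of the DHCPDISCOVERs against the hostsfile
update (and dnsmasq's SIGHUP in its log) should tell you which of the
scenarios above you're hitting.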

Anyhow, once you determine where the DHCP packets are being lost,
you'll have a much better idea of what needs to be fixed.

One suggestion I have to make your debugging less onerous is to
reconfigure your guest image's networking init script to retry DHCP
requests indefinitely. That way, you'll see the guests' DHCP traffic
when neutron eventually gets everything in order. On CirrOS, add the
following line to the eth0 stanza in /etc/network/interfaces to retry
DHCP requests 100 times every 3 seconds:

udhcpc_opts -t 100 -T 3
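
In context, the stanza would look something like this (a sketch; CirrOS
images vary, so treat the surrounding lines as illustrative):

    # /etc/network/interfaces in the guest
    auto eth0
    iface eth0 inet dhcp
        # retry DHCP 100 times, 3 seconds apart
        udhcpc_opts -t 100 -T 3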

 When I deployed only 5 instances concurrently (already had 211 instances 
 active) instead of 30, all the instances are able to get the IP.
 But when I deployed 10 instances concurrently (already had 216 instances 
 active) instead of 30, none of the instances could able to get the IP

This is reminiscent of yet another problem I saw at scale. If you're
using the security group rule "VMs in this group can talk to everybody
else in this group", which is one of the defaults in devstack, you get
O(N^2) iptables rules for N VMs running on a particular host. When more
VMs are running, the openvswitch agent, which is responsible for
instantiating the iptables rules and does so somewhat laboriously with
respect to the number of rules, can take too long to configure ports
before the VMs' DHCP clients time out. However, considering that you're
seeing low CPU utilization by the openvswitch agent, I don't think
you're having this problem; since you're distributing your VMs across
numerous compute hosts, N is quite small in your case. I only saw
problems when N was > 100.
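
If you want to check whether the rule count is the issue, a quick
sanity check on a compute host is just to count the rules (a sketch;
the neutron-openvswi chain prefix is what I'd expect from the OVS
agent's hybrid firewall, but it may differ by release):

    # total iptables rules on the compute host
    sudo iptables-save | wc -l

    # rules belonging to neutron's security group chains
    sudo iptables-save | grep -c neutron-openvswi

If the second number is in the tens of thousands, the O(N^2) blowup
above is probably what's slowing the agent down.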



Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly

2013-12-05 Thread Peter Feiner
On Thu, Dec 5, 2013 at 8:23 AM, Nathani, Sreedhar (APS)
sreedhar.nath...@hp.com wrote:
 Hello Marun,



 Please find the details about my setup and the tests which I have done so far



 Setup

   - One Physical Box with 16c, 256G memory. 2 VMs created on this Box - One
 for Controller and One for Network Node

   - 16x compute nodes (each has 16c, 256G memory)

   - All the systems are installed with Ubuntu Precise + Havana Bits from
 Ubuntu Cloud Archive



 Steps to simulate the issue

   1) Concurrently create 30 Instances (m1.small) using REST API with
 mincount=30

   2) sleep for 20min and repeat the step (1)





 Issue 1

 In Havana, once we cross 150 instances (5 batches x 30), during the 6th batch
 some instances go into the ERROR state

 because their network ports could not be created, and some instances get
 duplicate IP addresses



 Per Maru Newby, this issue might be related to this bug

 https://bugs.launchpad.net/bugs/1192381



 I did something similar with Grizzly on the same environment 2 months back,
 where I was able to deploy close to 240 instances without any errors

 Initially on Grizzly I also saw the same behavior, but with these tunings
 based on this bug

 https://bugs.launchpad.net/neutron/+bug/1160442, I never had issues (tested
 more than 10 times)

sqlalchemy_pool_size = 60

sqlalchemy_max_overflow = 120

sqlalchemy_pool_timeout = 2

agent_down_time = 60

report_interval = 20



 In Havana, I have tuned the same tunables but I could never get past 150+
 instances. Without the tunables I could not get past

 100 instances. We are getting many timeout errors from the DHCP agent and
 neutron clients



 NOTE: After tuning the agent_down_time to 60 and report_interval to 20, we
 are no longer getting these error messages

2013-12-02 11:44:43.421 28201 WARNING
 neutron.scheduler.dhcp_agent_scheduler [-] No more DHCP agents

2013-12-02 11:44:43.439 28201 WARNING
 neutron.scheduler.dhcp_agent_scheduler [-] No more DHCP agents

2013-12-02 11:44:43.452 28201 WARNING
 neutron.scheduler.dhcp_agent_scheduler [-] No more DHCP agents





 In the compute node openvswitch agent logs, we see these errors repeating
 continuously



 2013-12-04 06:46:02.081 3546 TRACE
 neutron.plugins.openvswitch.agent.ovs_neutron_agent Timeout: Timeout while
 waiting on RPC response - topic: q-plugin, RPC method:
 security_group_rules_for_devices info: unknown

 and WARNING neutron.openstack.common.rpc.amqp [-] No calling threads waiting
 for msg_id



 DHCP agent has below errors



 2013-12-02 15:35:19.557 22125 ERROR neutron.agent.dhcp_agent [-] Unable to
 reload_allocations dhcp.

 2013-12-02 15:35:19.557 22125 TRACE neutron.agent.dhcp_agent Timeout:
 Timeout while waiting on RPC response - topic: q-plugin, RPC method:
 get_dhcp_port info: unknown



 2013-12-02 15:35:34.266 22125 ERROR neutron.agent.dhcp_agent [-] Unable to
 sync network state.

 2013-12-02 15:35:34.266 22125 TRACE neutron.agent.dhcp_agent Timeout:
 Timeout while waiting on RPC response - topic: q-plugin, RPC method:
 get_active_networks_info info: unknown





 In Havana, I have merged the code from this patch and set api_workers to 8
 (My Controller VM has 8 cores/16 hyperthreads)

 https://review.openstack.org/#/c/37131/



 After this patch and starting 8 neutron-server worker threads, during the
 batch creation of 240 instances with 30 concurrent requests during each
 batch,

 238 instances became active and 2 instances went into error. Interestingly,
 the 2 instances which went into the error state are from the same compute
 node.



 Unlike earlier, this time the errors are due to 'Too many connections' to
 the MySQL database.

 2013-12-04 17:07:59.877 21286 AUDIT nova.compute.manager
 [req-26d64693-d1ef-40f3-8350-659e34d5b1d7 c4d609870d4447c684858216da2f8041
 9b073211dd5c4988993341cc955e200b] [instance:
 c14596fd-13d5-482b-85af-e87077d4ed9b] Terminating instance

 2013-12-04 17:08:00.578 21286 ERROR nova.compute.manager
 [req-26d64693-d1ef-40f3-8350-659e34d5b1d7 c4d609870d4447c684858216da2f8041
 9b073211dd5c4988993341cc955e200b] [instance:
 c14596fd-13d5-482b-85af-e87077d4ed9b] Error: Remote error: OperationalError
 (OperationalError) (1040, 'Too many connections') None None



 We need to backport the patch 'https://review.openstack.org/#/c/37131/' to
 address the Neutron scaling issues in Havana.

 Carl is already backporting this patch into Havana
 (https://review.openstack.org/#/c/60082/), which is good.



 Issue 2

 Grizzly :

 During concurrent instance creation in Grizzly, once we crossed 210
 instances, during the subsequent 30-instance creation some of

 the instances could not get their IP address during the first boot within the
 first few minutes. The instance MAC and IP address details

 were updated in the dnsmasq hosts file, but with a delay. The instances were
 eventually able to get their IP address.



 If we rebooted the instance using 'nova reboot', the instance would get an IP
 address.

 * 

Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-04 Thread Peter Feiner
On Wed, Dec 4, 2013 at 11:16 AM, Nikola Đipanov ndipa...@redhat.com wrote:
 1) Figure out what is the deal with mox3 and decide if owning it will
 really be less trouble than porting nova. To be honest - I was unable to
 even find the code repo for it, only [3]. If anyone has more info -
 please weigh in. We'll also need volunteers

 [3] https://pypi.python.org/pypi/mox3/0.7.0

That's all I was able to find.



Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-11-19 Thread Peter Feiner
A substantive reason for switching from mox to mock is the derelict
state of mox releases. There hasn't been a release of mox in three
years: the latest, mox-0.5.3, was released in 2010 [1, 2]. Moreover,
in the past 3 years, substantial bugs have been fixed in upstream mox.
For example, with the year-old fix to
https://code.google.com/p/pymox/issues/detail?id=16, a very nasty bug
in nova would have been caught by an existing test [3].

Alternatively, a copy of the upstream mox code could be added in-tree.

[1] mox releases: https://code.google.com/p/pymox/downloads/list
[2] mox on pypi: https://pypi.python.org/pypi/mox
[3] see comments 5 and 6 in https://bugs.launchpad.net/nova/+bug/1251792

On Wed, Nov 13, 2013 at 2:24 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 11/12/2013 5:04 PM, Chuck Short wrote:




 On Tue, Nov 12, 2013 at 4:49 PM, Mark McLoughlin mar...@redhat.com wrote:

  On Tue, 2013-11-12 at 16:42 -0500, Chuck Short wrote:

   Hi

   On Tue, Nov 12, 2013 at 4:24 PM, Mark McLoughlin mar...@redhat.com wrote:

    On Tue, 2013-11-12 at 13:11 -0800, Shawn Hartsock wrote:

     Maybe we should have some 60% rule... that is: If you change more than
     half of a test... you should *probably* rewrite the test in Mock.

    A rule needs a reasoning attached to it :)

    Why do we want people to use mock?

    Is it really for Python3? If so, I assume that means we've ruled out the
    python3 port of mox? (Ok by me, but would be good to hear why) And, if
    that's the case, then we should encourage whoever wants to port mox
    based tests to mock.

   The upstream maintainer is not going to port mox to python3 so we have
   a fork of mox called mox3. Ideally, we would drop the usage of mox in
   favour of mock so we don't have to carry a forked mox.

  Isn't that the opposite conclusion you came to here:

  http://lists.openstack.org/pipermail/openstack-dev/2013-July/012474.html

  i.e. using mox3 results in less code churn?

  Mark.

 Yes, that was my original position, but I thought we agreed in the thread
 (further on) that we would use mox3 and then migrate to mock further on.

 Regards
 chuck




 So it sounds like we're good with using mox for new tests again? Given Chuck
 got it into global-requirements here:

 https://github.com/openstack/requirements/commit/998dda263d7c7881070e3f16e4523ddcd23fc36d

 We can stave off the need to transition everything from mox to mock?

 I can't seem to find the nova blueprint to convert everything from mox to
 mock, maybe it was obsoleted already.

 Anyway, if mox(3) is OK and we don't need to use mock, it seems like we
 could add something to the developer guide here because I think this
 question comes up frequently:

 http://docs.openstack.org/developer/nova/devref/unit_tests.html

 Does anyone disagree?

 BTW, I care about this because I've been keeping in mind the mox/mock
 transition when doing code reviews and giving a -1 when new tests are using
 mox (since I thought that was a no-no now).
 --

 Thanks,

 Matt Riedemann






Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-11-19 Thread Peter Feiner
On Tue, Nov 19, 2013 at 11:19 AM, Chuck Short chuck.sh...@canonical.com wrote:
 Hi


 On Tue, Nov 19, 2013 at 10:43 AM, Peter Feiner pe...@gridcentric.ca wrote:

 A substantive reason for switching from mox to mock is the derelict
 state of mox releases. There hasn't been a release of mox in three
 years: the latest, mox-0.5.3, was released in 2010 [1, 2]. Moreover,
 in the past 3 years, substantial bugs have been fixed in upstream mox.
 For example, with the year-old fix to
 https://code.google.com/p/pymox/issues/detail?id=16, a very nasty bug
 in nova would have been caught by an existing test [3].

 Alternatively, a copy of the upstream mox code could be added in-tree.

 Please no, I think we are in an agreement with mox3 and mock.

That's cool. As long as the mox* is phased out, the false-positive
test results will be fixed.

Of course, there's _another_ alternative, which is to retrofit mox3
with the upstream mox fixes (e.g., the bug I cited above exists in
mox3). However, the delta between mox3 and upstream mox is pretty huge
(I just checked), so effort is probably better spent switching to
mock. To that end, I plan on changing the tests I cited above.
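
For what it's worth, these conversions are usually mechanical. Here's a
rough before/after of the common pattern (illustrative only: the module,
call, and fixture names are made up rather than taken from the tests I
cited, and the mox half assumes the usual self.mox fixture from the test
base class):

    # mox style
    self.mox.StubOutWithMock(utils, 'execute')
    utils.execute('qemu-img', 'info', path).AndReturn(fake_output)
    self.mox.ReplayAll()
    code_under_test()
    self.mox.VerifyAll()

    # mock equivalent (with the standalone mock library imported)
    with mock.patch.object(utils, 'execute',
                           return_value=fake_output) as mock_execute:
        code_under_test()
        mock_execute.assert_called_once_with('qemu-img', 'info', path)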



Re: [openstack-dev] Keystone Concurrency Races in SQL Assignment Backend

2013-10-31 Thread Peter Feiner
Okay Brant, sounds good. I'll start working on the SQL code today.

Please bear in mind that I'm going to be on vacation next week, so a
patch won't be ready until some time after November 11.

On Thu, Oct 31, 2013 at 9:46 AM, Brant Knudson b...@acm.org wrote:
 Peter -

 We discussed better use of transactions in irc, but I don't think anyone has
 had a chance to look at it. This would be a very useful thing to have
 someone look at. I'm fine with holding off on the oslo.db sessions work
 until we're sure the code is correct w/r/t multi-processing so that tempest
 is going to pass consistently.

 - Brant

 On Wed, Oct 30, 2013 at 5:08 PM, Peter Feiner pe...@gridcentric.ca wrote:

 Hi Brant,

 In addition to the race you've fixed in
 https://review.openstack.org/#/c/50767/, it looks like there are quite
 a few more races in the SQL backend of keystone.assignment. I filed a
 bug to this effect: https://bugs.launchpad.net/keystone/+bug/1246489.
 The general problem is that transactions are used somewhat
 indiscriminately. The fix (i.e., using transactions judiciously) is
 straightforward and should be mostly independent of your ongoing
 oslo.db sessions port in https://review.openstack.org/#/c/49460/. So,
 unless you already have something in the works, I'll get started on
 that tomorrow.

 I'm eager to fix these races so
 https://review.openstack.org/#/c/42967/ can reliably pass tempest :-)

 Peter







[openstack-dev] Keystone Concurrency Races in SQL Assignment Backend

2013-10-30 Thread Peter Feiner
Hi Brant,

In addition to the race you've fixed in
https://review.openstack.org/#/c/50767/, it looks like there are quite
a few more races in the SQL backend of keystone.assignment. I filed a
bug to this effect: https://bugs.launchpad.net/keystone/+bug/1246489.
The general problem is that transactions are used somewhat
indiscriminately. The fix (i.e., using transactions judiciously) is
straightforward and should be mostly independent of your ongoing
oslo.db sessions port in https://review.openstack.org/#/c/49460/. So,
unless you already have something in the works, I'll get started on
that tomorrow.

I'm eager to fix these races so
https://review.openstack.org/#/c/42967/ can reliably pass tempest :-)

Peter



[Openstack] OpenStack Performance Tuning Webinar

2013-09-08 Thread Peter Feiner
Hello OpenStackers,

I'm giving a webinar on OpenStack performance tuning on Wednesday
September 11 at 1PM EST. The webinar will focus on configuration
considerations geared toward efficiency and concurrency. If you're
interested, please register now!

https://attendee.gotowebinar.com/register/8597035589878749440

For a detailed discussion of the performance work I've been doing on
OpenStack, see

http://blog.gridcentric.com/bid/318277/Boosting-OpenStack-s-Parallel-Performance

Thanks!

Peter Feiner



Re: [openstack-dev] Article on Improving Openstack's Parallel Performance

2013-07-30 Thread Peter Feiner
On Mon, Jul 29, 2013 at 7:06 PM, Aaron Rosen aro...@nicira.com wrote:
 Hi Peter,

 Great article. Any interest in pushing your N-worker quantum-server patch
 upstream?

Thanks Aaron!

I'm certainly interested in pushing the patch upstream :-) When I
submit a review, we'll see if the community reciprocates.

As one comment points out, you can achieve the same thing with Apache.
However, I'd still like to have a simple option in the HTTP servers
for neutron et al.



[openstack-dev] Article on Improving Openstack's Parallel Performance

2013-07-22 Thread Peter Feiner
Hello,

I've written an article about my ongoing work on improving OpenStack's
parallel performance:

http://blog.gridcentric.com/bid/318277/Boosting-OpenStack-s-Parallel-Performance

The article discusses host configuration changes and patches (upstream
and in progress) that give a 74% speedup in a parallel macro benchmark
(boot 40 instances, ping them, ssh into them, delete them).

This article is a follow up to my presentation at the OpenStack summit
in Portland.

Peter



Re: [openstack-dev] Moving task flow to conductor - concern about scale

2013-07-19 Thread Peter Feiner
On Fri, Jul 19, 2013 at 11:06 AM, Dan Smith d...@danplanet.com wrote:
 FWIW, I don't think anyone is suggesting a single conductor, and
 especially not a single database proxy.

This is a critical detail that I missed. Re-reading Phil's original email,
I see you're debating the ratio of nova-conductor DB proxies to
nova-conductor task flow managers.

I had assumed that some of the task management state would exist
in memory. Is it all going to exist in the database?

 Since these queries are made frequently (i.e., easily 100 times
 during instance creation) and while other global locks are held
 (e.g., in the case of nova-compute's ResourceTracker), most of what
 nova-compute does becomes serialized.

 I think your numbers are a bit off. When I measured it just before
 grizzly, an instance create was something like 20-30 database calls.
 Unless that's changed (a lot) lately ... :)

Ah, perhaps... at least I had the order of magnitude right :-) Even
with 20-30 calls, when a bunch of instances are being booted in
parallel and all of the database calls are serialized, minutes are
added to instance creation time.
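
To put rough, purely illustrative numbers on it: 40 concurrent boots at
20-30 calls each is on the order of a thousand database calls funnelled
through a single conductor; if each blocked call takes even 100 ms once
the queue builds up, that's ~100 seconds of pure serialization, and if
individual calls stretch to several seconds under load, that's where
the minutes come from.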



Re: [openstack-dev] Moving task flow to conductor - concern about scale

2013-07-19 Thread Peter Feiner
On Fri, Jul 19, 2013 at 10:15 AM, Dan Smith d...@danplanet.com wrote:

   So rather than asking what doesn't work / might not work in the
   future, I think the question should be: aside from them both being
   things that could be described as a "conductor", what's the
   architectural reason for wanting to have these two separate groups of
   functionality in the same service?

 IMHO, the architectural reason is lack of proliferation of services and
 the added complexity that comes with it. If one expects the
 proxy workload to always overshadow the task workload, then making
 these two things a single service makes things a lot simpler.

I'd like to point a low-level detail that makes scaling nova-conductor
at the process level extremely compelling: the database driver
blocking the eventlet thread serializes nova's database access.

Since the database connection driver is typically implemented in a
library beyond the purview of eventlet's monkeypatching (i.e., a
native python extension like _mysql.so), blocking database calls will
block all eventlet coroutines. Since most of what nova-conductor does
is access the database, a nova-conductor process's handling of
requests is effectively serial.

Nova-conductor is the gateway to the database for nova-compute
processes.  So permitting a single nova-conductor process would
effectively serialize all database queries during instance creation,
deletion, periodic instance refreshes, etc. Since these queries are
made frequently (i.e., easily 100 times during instance creation) and
while other global locks are held (e.g., in the case of nova-compute's
ResourceTracker), most of what nova-compute does becomes serialized.

In parallel performance experiments I've done, I have found that
running multiple nova-conductor processes is the best way to mitigate
the serialization of blocking database calls. Say I am booting N
instances in parallel (usually up to N=40). If I have a single
nova-conductor process, the duration of each nova-conductor RPC
increases linearly with N, which can add _minutes_ to instance
creation time (i.e., dozens of RPCs, some taking several seconds).
However, if I run N nova-conductor processes in parallel, then the
duration of the nova-conductor RPCs do not increase with N; since each
RPC is most likely handled by a different nova-conductor, serial
execution of each process is moot.

Note that there are alternative methods for preventing the eventlet
thread from blocking during database calls. However, none of these
alternatives performed as well as multiple nova-conductor processes:

Instead of using the native database driver like _mysql.so, you can
use a pure-python driver, like pymysql by setting
sql_connection=mysql+pymysql://... in the [DEFAULT] section of
/etc/nova/nova.conf, which eventlet will monkeypatch to avoid
blocking. The problem with this approach is the vastly greater CPU
demand of the pure-python driver compared to the native driver. Since
the pure-python driver is so much more CPU intensive, the eventlet
thread spends most of its time talking to the database, which is
effectively the problem we had before!

Instead of making database calls from eventlet's thread, you can
submit them to eventlet's pool of worker threads and wait for the
results. Try this by setting dbapi_use_tpool=True in the [DEFAULT]
section of /etc/nova/nova.conf. The problem I found with this approach
was the overhead of synchronizing with the worker threads. In
particular, the time elapsed between the worker thread finishing and
the waiting coroutine being resumed was typically several times
greater than the duration of the database call itself.
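
For reference, the two alternatives above map to nova.conf settings
roughly like this (a sketch of the Havana-era options described above;
the connection string is a placeholder):

    [DEFAULT]
    # alternative 1: pure-python MySQL driver that eventlet can monkeypatch
    sql_connection = mysql+pymysql://nova:<password>@<db-host>/nova

    # alternative 2: run database calls in eventlet's thread pool
    dbapi_use_tpool = True

Neither performed as well for me as simply running multiple
nova-conductor processes.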
