On 10/09/2015 03:36 PM, Ian Wells wrote:
> On 9 October 2015 at 12:50, Chris Friesen <chris.frie...@windriver.com> wrote:
>>> Has anybody looked at why 1 instance is too slow and what it would take
>>> to make 1 scheduler instance work fast enough? This does not preclude
>>> the use of concurrency for finer grain tasks in the background.
>> Currently we pull data on all (!) of the compute nodes out of the database
>> via a series of RPC calls, then evaluate the various filters in python code.
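The pull-and-filter flow described above can be sketched roughly as follows. This is a hypothetical illustration of the pattern, not Nova's actual classes; the names (HostState, ram_filter, etc.) are invented for the example.

```python
# Illustrative sketch of filter-based scheduling: host state is gathered
# for every compute node, then each Python filter is applied in turn to
# whittle down the candidate list. Names here are hypothetical.

class HostState:
    def __init__(self, name, free_ram_mb, free_vcpus):
        self.name = name
        self.free_ram_mb = free_ram_mb
        self.free_vcpus = free_vcpus

def ram_filter(host, request):
    return host.free_ram_mb >= request["ram_mb"]

def vcpu_filter(host, request):
    return host.free_vcpus >= request["vcpus"]

def filter_hosts(hosts, request, filters=(ram_filter, vcpu_filter)):
    # A host remains a candidate only if every filter passes.
    return [h for h in hosts if all(f(h, request) for f in filters)]

hosts = [HostState("node1", 4096, 4), HostState("node2", 512, 8)]
candidates = filter_hosts(hosts, {"ram_mb": 2048, "vcpus": 2})
```

The cost Chris points at is that `hosts` has to be materialized for every node in the deployment before any filtering can start.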
> I'll say again: the database seems to me to be the problem here. Not to
> mention, you've just explained that they are in practice holding all the
> data in memory in order to do the work, so the benefit we're getting here
> is really an N-to-1-to-M pattern with a DB in the middle (the store-to-DB
> is rather secondary, in fact), and that without incremental updates to
> the receivers.
I don't see any reason why you couldn't have an in-memory scheduler.
Currently the database serves as the persistent storage for the resource usage,
so if we take it out of the picture I imagine you'd want to have some way of
querying the compute nodes for their current state when the scheduler first
starts up.
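A minimal sketch of that startup behaviour, assuming the in-memory design suggested here: the scheduler polls each compute node once for its current resource state, caches it, and then applies incremental updates as nodes report changes. The ComputeNode stub stands in for what would be an RPC round-trip; all names are hypothetical.

```python
# Hypothetical in-memory scheduler: prime a state cache at startup by
# querying the compute nodes, then keep it current via incremental
# updates instead of re-reading a database.

class ComputeNode:
    def __init__(self, name, free_ram_mb):
        self.name = name
        self.free_ram_mb = free_ram_mb

    def get_state(self):
        # In a real deployment this would be an RPC call to the node.
        return {"free_ram_mb": self.free_ram_mb}

class InMemoryScheduler:
    def __init__(self, nodes):
        # One full sync at startup replaces the persistent DB copy.
        self.state = {n.name: n.get_state() for n in nodes}

    def apply_update(self, name, state):
        # Incremental updates keep the cache current without a resync.
        self.state[name] = state

sched = InMemoryScheduler([ComputeNode("node1", 4096), ComputeNode("node2", 512)])
sched.apply_update("node2", {"free_ram_mb": 256})
```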
I think the current code uses the fact that objects are remotable via the
conductor, so changing that to do explicit posts to a known scheduler topic
would take some work.
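The "explicit posts to a known scheduler topic" idea could look something like the sketch below. The Bus class is only a stand-in for a messaging layer such as oslo.messaging; in real code a compute node would cast its state to a well-known topic that the scheduler subscribes to, rather than round-tripping remotable objects through the conductor.

```python
# Hypothetical topic-based posting: compute nodes cast state updates to
# a known topic; the scheduler subscribes and receives them directly.
# Bus is an invented stand-in for a real message bus.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def cast(self, topic, message):
        # Fire-and-forget, like an RPC cast: no reply is expected.
        for cb in self.subscribers[topic]:
            cb(message)

received = []
bus = Bus()
bus.subscribe("scheduler", received.append)
bus.cast("scheduler", {"host": "node1", "free_ram_mb": 4096})
```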
Chris
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev