On 19/01/2015 20:25, Ed Leafe wrote:
> Hi all,
>
> I want to make sure that everyone is present and prepared to discuss
> the one outstanding spec for Kilo:
> https://review.openstack.org/#/c/138444/
>
> In the words of Jay Pipes, we are at an impasse: Jay and I prefer an
> approach in which the scheduler loads up the information about the
> compute nodes when it starts up, and then relies on the compute nodes
> to update their status whenever an instance is
> created/destroyed/resized. Sylvain prefers instead to have the hosts
> query that information for every call to _get_all_host_states(),
> adding the instance information to the Host object as an InstanceList
> attribute. I might be a little off in my summary of the two positions,
> but they largely reflect the two preferred approaches to solving this
> issue.
It sounds like my opinion has been misunderstood.

It's unfortunate that, even though we had a Google Hangout, we have to
discuss again what we already agreed on. But OK, let's go over what we
said, and let me try once more to give a quick explanation of my view
here...
So, as I said during the Hangout, the scheduler does not create a
HostState manager once when the scheduler service starts; instead, the
HostState objects are created each time a query comes in. That means
that if you want to persist a piece of information, it needs to be
updated in the compute_nodes DB table so that the HostState objects the
filters consume are instantiated accordingly.
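
To make that concrete, here is a heavily simplified sketch of that
per-query flow. This is not the actual nova.scheduler.host_manager
code; the field names and the fake rows are only illustrative:

class HostState(object):
    """Per-query view of one compute node; nothing on it persists."""
    def __init__(self, host):
        self.host = host
        self.free_ram_mb = 0

    def update_from_compute_node(self, compute_node_row):
        # Only what was written to the compute_nodes table is visible
        # here.
        self.free_ram_mb = compute_node_row['free_ram_mb']

def _get_all_host_states(compute_node_rows):
    # Runs for every scheduling request; the HostState objects are
    # thrown away afterwards, so any extra detail has to come from the
    # DB rows.
    states = []
    for row in compute_node_rows:
        state = HostState(row['host'])
        state.update_from_compute_node(row)
        states.append(state)
    return states

# Illustrative rows, shaped like compute_nodes table entries:
rows = [{'host': 'node1', 'free_ram_mb': 2048},
        {'host': 'node2', 'free_ram_mb': 512}]
print([(s.host, s.free_ram_mb) for s in _get_all_host_states(rows)])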
I think we all agree that querying instance status should be done by
looking at HostState rather than by querying the DB directly; that's a
good point.
So, having said that, the discussion is about how to instantiate
HostState and how to deal with the potential race conditions that an
asynchronous call would introduce.
When I mentioned a call in _get_all_host_states(), I was just pointing
out that this is currently the only way to add extra details to
HostState.
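
In other words, with the current design the only available hook looks
like this (reusing the HostState sketch above; the instances attribute
mirrors the InstanceList proposed in the spec and does not exist
today):

def _get_all_host_states(compute_node_rows, instances_by_host):
    # Same per-query rebuild as above, but the instance information is
    # fetched alongside and attached to each HostState, so the filters
    # can read state.instances instead of querying the DB themselves.
    states = []
    for row in compute_node_rows:
        state = HostState(row['host'])
        state.update_from_compute_node(row)
        state.instances = instances_by_host.get(row['host'], [])
        states.append(state)
    return states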
A scheduler service persisting HostState totally has my +1. But are we
sure it should be done in that spec? I'm not sure at all.
> IMO, the former approach is a lot closer to the ideal end result for
> an independent scheduler service, whereas the latter is closer to the
> current design, and would be less disruptive code-wise. The former
> *may* increase the probability of race conditions in which two
> schedulers simultaneously try to consume resources on the same host,
> but there are several possible ways we can reduce that probability.
As I said, the former approach requires a persistent HostState manager
that we don't have now. That sounds interesting and it has my vote, but
it should not be handled in the spec you are mentioning and requesting
a Spec Freeze exception for.
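
Just so we are picturing the same thing, the former approach implies
something like the purely hypothetical sketch below (again reusing the
HostState class from above): a long-lived manager fed by compute-node
updates, with the consume step guarded against the two-scheduler race
you mention:

import threading

class PersistentHostStateManager(object):
    # Hypothetical: nothing like this exists in the scheduler today.
    def __init__(self):
        self._lock = threading.Lock()
        self._states = {}  # host name -> HostState, kept across queries

    def load_initial_states(self, compute_node_rows):
        # Done once at scheduler start-up instead of on every query.
        with self._lock:
            for row in compute_node_rows:
                state = HostState(row['host'])
                state.update_from_compute_node(row)
                self._states[row['host']] = state

    def handle_compute_update(self, host, free_ram_mb):
        # Compute nodes would push this whenever an instance is
        # created/destroyed/resized.
        with self._lock:
            self._states[host].free_ram_mb = free_ram_mb

    def try_consume(self, host, ram_mb):
        # Check-and-decrement under a lock is enough within a single
        # scheduler process; with several schedulers the equivalent
        # (e.g. a compare-and-swap on a DB row) would be needed to
        # reduce the race.
        with self._lock:
            state = self._states[host]
            if state.free_ram_mb < ram_mb:
                return False
            state.free_ram_mb -= ram_mb
            return True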
Honestly, are we talking about code for Kilo? If yes, I don't think the
former approach is doable for Kilo, in particular as no code has been
written yet.
If we're talking about what the scheduler should look like in the
future, then yes, I'm 100% with you.
> So please read up on that spec, and come to the meeting tomorrow
> prepared to discuss it.
>
> BTW, the latter approach is very similar to an earlier version of the
> spec: https://review.openstack.org/#/c/138444/8/ . We seem to be going
> in circles!
Are you sure that the patchset you are quoting is the proposal I'm
mentioning?
Keep in mind that I'm trying to find a common approach for the same
paradigm that was approved here:
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/isolate-scheduler-db-aggregates.html
-Sylvain
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev