Ceilometer is a great project for taking metrics available in Nova and other 
systems and making them available for use by Operations, Billing, Monitoring, 
etc. - and clearly we should try to avoid having multiple collectors of the 
same data.

But making the Nova scheduler dependent on Ceilometer seems to be the wrong way 
round to me - scheduling is such a fundamental operation that I want Nova to be 
self-sufficient in this regard. In particular, I don't want the availability 
of my core compute platform to be constrained by the availability of my (still 
evolving) monitoring system.

If Ceilometer can be fed from the data used by the Nova scheduler, then that's 
a good plus - but not the other way round.

Phil

> -----Original Message-----
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 18 July 2013 12:05
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Nova] New DB column or new DB table?
> 
> On 07/17/2013 10:54 PM, Lu, Lianhao wrote:
> > Hi fellows,
> >
> > Currently we're implementing the BP
> > https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
> > The main idea is to have an extensible plugin framework on nova-compute
> > where every plugin can get different metrics (e.g. CPU utilization, memory
> > cache utilization, network bandwidth, etc.) to store in the DB, and the
> > nova-scheduler will use that data from the DB for its scheduling decisions.
> >
> > Currently we add a new table to store all the metric data and have the
> > nova-scheduler join-load the new table with the compute_nodes table to get
> > all the data (https://review.openstack.org/35759). Someone is concerned
> > about the performance penalty of the join-load operation when there is a
> > lot of metric data stored in the DB for every single compute node. Don
> > suggested adding a new column to the current compute_nodes table, putting
> > all the metric data into a dictionary in key/value format, and storing the
> > JSON-encoded string of that dictionary in the new column.
> >
> > I'm just wondering which way has less performance impact: a join load
> > against a new table with quite a lot of rows, or JSON encoding/decoding of
> > a dictionary with a lot of key/value pairs?
> >
> > Thanks,
> > -Lianhao
> 
> I'm really confused. Why are we talking about collecting host metrics in nova
> when we've got a whole project to do that in ceilometer? I think
> utilization-based scheduling would be a great thing, but it really ought to be
> interfacing with ceilometer to get that data. Storing it again in nova (or
> even worse, collecting it a second time in nova) seems like the wrong
> direction.
> 
> I think there was an equivalent patch series at the end of Grizzly that was
> pushed out for the same reasons.
> 
> If there is a reason ceilometer can't be used in this case, we should have
> that discussion here on the list. Because my initial reading of this blueprint
> and the code patches is that it partially duplicates ceilometer functionality,
> which we definitely don't want to do. I would be happy to be proved wrong on
> that.
> 
>       -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
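
For anyone less familiar with the blueprint, here is a very rough sketch of
the kind of plugin framework Lianhao describes above. It is purely
illustrative and not the actual blueprint code; the names MetricPlugin,
CpuUtilizationPlugin and collect_metrics are made up for the example.

import time


class MetricPlugin(object):
    """Base class a metric-collector plugin might implement."""

    def get_metrics(self):
        """Return a dict mapping metric name to a numeric value."""
        raise NotImplementedError


class CpuUtilizationPlugin(MetricPlugin):
    def get_metrics(self):
        # A real plugin would read /proc/stat or query the hypervisor;
        # a constant keeps the sketch self-contained.
        return {'cpu.percent': 12.5}


def collect_metrics(plugins):
    """Run every plugin and build one timestamped sample for this host,
    ready to be persisted wherever compute-node state lives."""
    samples = {}
    for plugin in plugins:
        samples.update(plugin.get_metrics())
    return {'timestamp': time.time(), 'metrics': samples}


if __name__ == '__main__':
    print(collect_metrics([CpuUtilizationPlugin()]))

Something like collect_metrics() would run periodically on each compute node,
and the question quoted above is simply where its output should be stored.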
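
Likewise, a rough sketch of the two storage options being debated, written as
SQLAlchemy models. The table and column names are illustrative only, not the
real Nova schema; the review linked above contains the actual implementation.

import json

from sqlalchemy import Column, Float, ForeignKey, Integer, String, Text
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


# Option 1: a separate metrics table, join-loaded with compute_nodes.
class ComputeNode(Base):
    __tablename__ = 'compute_nodes'
    id = Column(Integer, primary_key=True)
    hypervisor_hostname = Column(String(255))
    metrics = relationship('ComputeNodeMetric')  # one row per metric


class ComputeNodeMetric(Base):
    __tablename__ = 'compute_node_metrics'
    id = Column(Integer, primary_key=True)
    compute_node_id = Column(Integer, ForeignKey('compute_nodes.id'))
    name = Column(String(255))    # e.g. 'cpu.percent'
    value = Column(Float)


# Option 2: one JSON-encoded text column on the compute_nodes table itself.
class ComputeNodeWithJson(Base):
    __tablename__ = 'compute_nodes_json'
    id = Column(Integer, primary_key=True)
    hypervisor_hostname = Column(String(255))
    metrics_json = Column(Text)   # json.dumps({'cpu.percent': 12.5, ...})

    @property
    def metrics(self):
        return json.loads(self.metrics_json or '{}')

The trade-off is roughly: option 1 hands the scheduler N extra joined rows per
compute node, while option 2 hands it a single text column plus a json.loads()
call per node.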

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev