Hello,

Thanks both for your responses, definitely looking at collectd + graphite; I 
just wanted to see what the overheads were like. I'm far from a situation that 
would choke the cluster, but wanted to check first.

Thanks,
Ashley

-----Original Message-----
From: Christian Balzer [mailto:ch...@gol.com] 
Sent: 24 October 2016 11:04
To: ceph-users@lists.ceph.com
Cc: John Spray <jsp...@redhat.com>; Ashley Merrick <ash...@amerrick.co.uk>
Subject: Re: [ceph-users] Monitoring Overhead


Hello,

On Mon, 24 Oct 2016 10:46:31 +0100 John Spray wrote:

> On Mon, Oct 24, 2016 at 4:21 AM, Ashley Merrick <ash...@amerrick.co.uk> wrote:
> > Hello,
> >
> >
> >
> > This may come across as a simple question, but I just wanted to check.
> >
> >
> >
> > I am looking at importing live data from my cluster (via ceph -s 
> > etc.) into a graphical graph interface so I can monitor performance 
> > / IOPS / etc. over time.
> >
> >
> >
> > I am looking to pull this data from one or more monitor nodes. When 
> > the ceph -s output is retrieved, is this information the monitor 
> > already has locally, or is there an overhead applied to the whole 
> > cluster every time the command is executed?
> 
> It's all from the local state on the mons; the OSDs aren't involved at 
> all in responding to the status command.
>
That said, as mentioned before on this ML, the output of "ceph -s" is a sample 
from a window and only approaches reality if sampled repeatedly and averaged 
over a long period.
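
For example, a rough sketch along these lines (the "pgmap" field names are 
assumptions from memory and may be absent on an idle cluster, so check the 
JSON output of your release):

#!/usr/bin/env python
# Rough sketch: average the instantaneous "ceph -s" numbers over a
# window instead of trusting a single sample. The "pgmap" field names
# (read_bytes_sec, write_bytes_sec) are assumptions and may be absent
# on an idle cluster.
import json
import subprocess
import time

SAMPLES = 12   # 12 samples, 5 seconds apart: a one-minute average
INTERVAL = 5

def pgmap():
    out = subprocess.check_output(["ceph", "-s", "--format", "json"])
    return json.loads(out.decode()).get("pgmap", {})

totals = {"read_bytes_sec": 0, "write_bytes_sec": 0}
for _ in range(SAMPLES):
    sample = pgmap()
    for key in totals:
        totals[key] += sample.get(key, 0)  # field missing when idle
    time.sleep(INTERVAL)

for key in totals:
    print("%s avg over %ds: %.1f"
          % (key, SAMPLES * INTERVAL, totals[key] / float(SAMPLES)))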

If you need something that answers "what happened on OSD x at time y", 
collectd and graphite (or derivatives of them) are your friends, but they do 
cost you a CPU cycle or two.
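
If you want to see roughly what such a setup does under the hood, here is a 
minimal sketch (run it on the OSD host; the Graphite hostname, OSD id and 
counter names below are placeholders, so inspect "ceph daemon osd.0 perf dump" 
on your version for real ones):

#!/usr/bin/env python
# Hand-rolled version of the collectd/graphite idea: read one OSD's
# perf counters via its local admin socket and push a few of them to
# Graphite's plaintext listener (TCP 2003).
import json
import socket
import subprocess
import time

OSD_ID = 0                                  # hypothetical OSD to sample
GRAPHITE = ("graphite.example.com", 2003)   # hypothetical Graphite host
COUNTERS = ["op_r", "op_w"]                 # assumed names in the "osd" section

perf = json.loads(subprocess.check_output(
    ["ceph", "daemon", "osd.%d" % OSD_ID, "perf", "dump"]).decode())

now = int(time.time())
lines = ["ceph.osd.%d.%s %s %d" % (OSD_ID, name, perf["osd"][name], now)
         for name in COUNTERS if name in perf.get("osd", {})]

if lines:
    sock = socket.create_connection(GRAPHITE)
    sock.sendall(("\n".join(lines) + "\n").encode())
    sock.close()

A collectd/graphite deployment does essentially this on a schedule, which is 
where the cycle or two goes.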
OTOH, if your OSDs or MONs were to choke from that kind of monitoring, you're 
walking on very thin ice already.

Christian

> Cheers,
> John
> 
> >
> >
> >
> > The reason I ask is that I want to make sure I am not applying 
> > unnecessary overhead and load onto all OSD nodes to retrieve this 
> > data at a near-live view. I fully understand it will apply a small 
> > amount of load / CPU on the local MON to process the command; I am 
> > more interested in the overall cluster.
> >
> >
> >
> > Thanks,
> >
> > Ashley
> >
> >


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
