Very intensive I/O under mon process

2013-01-02 Thread Andrey Korolyov
I have just observed that the ceph-mon process, at least the bobtail one, has
an extremely high write rate - several times the _overall_ cluster
write rate as measured by the qemu driver (and those measurements are very
close to accurate). For example, a test cluster of 32 osds shows 7.5 MByte/s of
writes on each mon node while the overall client write rate is about 1.5 MByte/s,
and a dev cluster with only three osds shows about 1 MByte/s per mon, with an
accumulated real write bandwidth of only tens of kilobytes per second.

I'm afraid that if this is normal, I may hit the limits of spinning storage
when scaling the test cluster up, say, twenty times in the number of osds and
the related ``idle'' write bandwidth.
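
(For reference, a sketch of how such per-process write rates can be sampled
on a Linux mon node; this assumes the sysstat package is installed and that
only one ceph-mon runs on the node:)

    # sample the monitor's per-process disk writes every 5 seconds
    pidstat -d -p $(pidof ceph-mon) 5
    # or read the kernel's cumulative per-process counters directly
    grep write_bytes /proc/$(pidof ceph-mon)/io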


Re: Very intensive I/O under mon process

2013-01-02 Thread Joao Eduardo Luis

On 01/02/2013 03:40 PM, Andrey Korolyov wrote:

I have just observed that the ceph-mon process, at least the bobtail one, has
an extremely high write rate - several times the _overall_ cluster
write rate as measured by the qemu driver (and those measurements are very
close to accurate). For example, a test cluster of 32 osds shows 7.5 MByte/s of
writes on each mon node while the overall client write rate is about 1.5 MByte/s,
and a dev cluster with only three osds shows about 1 MByte/s per mon, with an
accumulated real write bandwidth of only tens of kilobytes per second.

I'm afraid that if this is normal, I may hit the limits of spinning storage
when scaling the test cluster up, say, twenty times in the number of osds and
the related ``idle'' write bandwidth.


High debugging levels (especially 'debug ms', 'debug mon' or 'debug paxos')
can significantly increase IO on the monitors. Might that be the case?
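
For instance, a ceph.conf fragment along these lines (just a sketch of the
relevant options) keeps those subsystems quiet:

    [mon]
        debug ms = 0/0
        debug mon = 0/0
        debug paxos = 0/0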


  -Joao



Re: Very intensive I/O under mon process

2013-01-02 Thread Andrey Korolyov
On Wed, Jan 2, 2013 at 8:00 PM, Joao Eduardo Luis joao.l...@inktank.com wrote:
 On 01/02/2013 03:40 PM, Andrey Korolyov wrote:

 I have just observed that the ceph-mon process, at least the bobtail one, has
 an extremely high write rate - several times the _overall_ cluster
 write rate as measured by the qemu driver (and those measurements are very
 close to accurate). For example, a test cluster of 32 osds shows 7.5 MByte/s of
 writes on each mon node while the overall client write rate is about 1.5 MByte/s,
 and a dev cluster with only three osds shows about 1 MByte/s per mon, with an
 accumulated real write bandwidth of only tens of kilobytes per second.

 I'm afraid that if this is normal, I may hit the limits of spinning storage
 when scaling the test cluster up, say, twenty times in the number of osds and
 the related ``idle'' write bandwidth.


 High debugging levels (especially 'debug ms', 'debug mon' or 'debug paxos')
 can significantly increase IO on the monitors. Might that be the case?

Nope, all debug levels, including the mons', are set to 0/0. I also see that
the ``no-client'' cluster shows a very small amount of such writes
under the mon, 10-20 kByte/s, and one idle client (writing a couple of bytes
without O_SYNC) raises this value to ~200 kB/s and so on, so
maybe I was wrong before and the writes correlate with the number of clients
too (six clients plus three control nodes accessing via the API, in the
context of the previous message, for both environments).
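
(For completeness, the running values can be confirmed through the monitor
admin socket; the socket path and mon id below are the usual defaults and may
differ on your setup:)

    ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show | grep debug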


Re: Very intensive I/O under mon process

2013-01-02 Thread Sage Weil
On Wed, 2 Jan 2013, Andrey Korolyov wrote:
 On Wed, Jan 2, 2013 at 8:00 PM, Joao Eduardo Luis joao.l...@inktank.com wrote:
  On 01/02/2013 03:40 PM, Andrey Korolyov wrote:
 
  I have just observed that the ceph-mon process, at least the bobtail one, has
  an extremely high write rate - several times the _overall_ cluster
  write rate as measured by the qemu driver (and those measurements are very
  close to accurate). For example, a test cluster of 32 osds shows 7.5 MByte/s of
  writes on each mon node while the overall client write rate is about 1.5 MByte/s,
  and a dev cluster with only three osds shows about 1 MByte/s per mon, with an
  accumulated real write bandwidth of only tens of kilobytes per second.
 
  I'm afraid that if this is normal, I may hit the limits of spinning storage
  when scaling the test cluster up, say, twenty times in the number of osds and
  the related ``idle'' write bandwidth.
 
 
  High debugging levels (especially 'debug ms', 'debug mon' or 'debug paxos')
  can significantly increase IO on the monitors. Might that be the case?
 
 Nope, all debug levels, including the mons', are set to 0/0. I also see that
 the ``no-client'' cluster shows a very small amount of such writes
 under the mon, 10-20 kByte/s, and one idle client (writing a couple of bytes
 without O_SYNC) raises this value to ~200 kB/s and so on, so
 maybe I was wrong before and the writes correlate with the number of clients
 too (six clients plus three control nodes accessing via the API, in the
 context of the previous message, for both environments).

The mon is writing out state updates at regular intervals, mostly PG stat 
(object and byte counts) updates--what you see from 'ceph pg dump'.  You 
should see IO go up to some ceiling as more IO is applied to the cluster 
and then level off.  The frequency of these writes, and thus the total IO 
rate, can be controlled via the 'paxos propose interval' knob (default is 
1 or 2 seconds).
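
For example, something like this in ceph.conf (the value here is purely
illustrative) lowers the proposal frequency, and hence the mon write rate, at
the cost of less frequent PG stat updates:

    [mon]
        paxos propose interval = 5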

sage