Re: [ceph-users] Different filesystems on OSD hosts at the same cluster

2015-08-07 Thread Udo Lembke
Hi,
some time ago I switched all OSDs from XFS to ext4 (step by step).
I had no issues while the cluster ran with mixed OSD formats (the process took some weeks).
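
For reference, a minimal ceph.conf sketch of how per-filesystem settings can
coexist while the formats are mixed; the option names are the standard
FileStore ones, and the mount options are just the ones quoted below:

  [osd]
  # new OSDs get created with ext4; existing XFS OSDs keep their filesystem
  osd mkfs type = ext4
  osd mount options ext4 = noatime,data=ordered
  osd mount options xfs = noatime,attr2,inode64,allocsize=4096k,noquota
  # historically recommended for ext4 because of its limited xattr space
  filestore xattr use omap = true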

And yes, for me ext4 also performs better (especially the latencies).

Udo

On 07.08.2015 13:31, Межов Игорь Александрович wrote:
 Hi!
 
 We ran some performance tests on our small Hammer install:
  - Debian Jessie;
  - Ceph Hammer 0.94.2, self-built from sources (tcmalloc);
  - 1x E5-2670 + 128 GB RAM;
  - 2 nodes, shared with mons; system and mon DB on a separate SAS mirror;
  - 16 OSDs on each node, SAS 10k;
  - 2x Intel DC S3700 200 GB SSDs for journaling;
  - 10 Gbit interconnect, shared public and cluster network, MTU 9100;
  - 10 Gbit client host, fio 2.2.7 compiled with the RBD engine.
 
 We benchmarked 4k random-read performance on a 500 GB RBD volume with
 fio-rbd and got different results. With XFS
 (noatime,attr2,inode64,allocsize=4096k,noquota) on the OSD disks we get
 ~7k sustained IOPS. After recreating the same OSDs with ext4
 (noatime,data=ordered) we achieve ~9.5k IOPS in the same benchmark.
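
 A minimal fio job sketch for this kind of test, assuming the rbd ioengine
 and placeholder pool, image and client names (not our exact job file):

   [global]
   ; the rbd engine talks to the cluster directly, no local mount needed
   ioengine=rbd
   clientname=admin
   pool=rbd
   rbdname=test500g
   rw=randread
   bs=4k
   direct=1
   time_based=1
   runtime=300

   [rbd-4k-randread]
   iodepth=32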
 
 So we have some questions for the community:
  1. Does ext4 really perform better under a typical RBD load (we use Ceph
 to host VM images)?
  2. Is it safe to mix OSDs with different backing filesystems in one
 cluster (we use ceph-deploy to create and manage OSDs)?
  3. Is it safe to move our production cluster (Firefly 0.80.7) from XFS to
 ext4 by removing the XFS OSDs one by one and later adding the same disk
 drives back as ext4 OSDs, roughly as in the sketch below? (Of course, we
 know about the huge data movement that will take place during this
 process.)
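
 A rough per-OSD replacement sketch for question 3, assuming a hypothetical
 osd.<N> on disk sdX of <host> and ceph-deploy's --fs-type flag; the
 commands are illustrative, not a tested procedure:

   # take the XFS OSD out and let the cluster rebalance first
   ceph osd out <N>
   # ... wait until ceph -s reports a healthy state again ...
   sudo service ceph stop osd.<N>     # run on <host>
   ceph osd crush remove osd.<N>
   ceph auth del osd.<N>
   ceph osd rm <N>
   # wipe the disk and recreate it as an ext4 OSD with its SSD journal
   ceph-deploy disk zap <host>:sdX
   ceph-deploy osd create --fs-type ext4 <host>:sdX:<journal-partition>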
 
 Thanks!
 
 Megov Igor
 CIO, Yuterra
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

