I have 3 pools:

0 rbd, 1 cephfs_data, 2 cephfs_metadata

cephfs_data has pg_num 1024; the total PG count across all pools is 2113.
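
(For reference, the pool list and pg_num can be confirmed with the standard CLI:

$ ceph osd lspools
$ ceph osd pool get cephfs_data pg_num)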

POOL_NAME         USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS    RD WR_OPS     WR
cephfs_data      4000M    1000      0   2000                  0       0        0      2     0  27443 44472M
cephfs_metadata 11505k      24      0     48                  0       0        0     38 8456k   7384 14719k
rbd                  0       0      0      0                  0       0        0      0     0      0      0

total_objects    1024
total_used       30575M
total_avail      55857G
total_space      55887G
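
(The per-pool stats above look like rados df output; they can be regenerated
any time with:

$ rados df

and overall cluster usage with $ ceph df.)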

From: David Turner [mailto:drakonst...@gmail.com] 
Sent: Tuesday, July 18, 2017 2:31 AM
To: Gencer Genç <gen...@gencgiyen.com>; Patrick Donnelly <pdonn...@redhat.com>
Cc: Ceph Users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Yet another performance tuning for CephFS

What are your pool settings? That can affect your read/write speeds as much as 
anything in the ceph.conf file.
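
A quick way to dump the settings in question (replica size, pg_num, crush
rule, etc.) for every pool:

$ ceph osd pool ls detail

or per pool, e.g. $ ceph osd pool get cephfs_data size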

On Mon, Jul 17, 2017, 4:55 PM Gencer Genç <gen...@gencgiyen.com> wrote:

I don't think so.

I tried something a few minutes ago: I opened 4 SSH sessions and ran an
rsync in each, copying the big file to different targets in CephFS at the
same time. The network graphs then showed throughput of up to 1.09 GB/s.
So why can't a single copy/rsync exceed 200 MB/s? I really wonder what
limits it. (A condensed version of the test is below.)
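
Roughly what I ran, condensed into one line (the four target paths are
illustrative):

$ for i in 1 2 3 4; do rsync ./bigfile /mnt/cephfs/target$i --progress & done; wait

That starts four concurrent rsync writers, each to its own file in CephFS.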

Gencer.


-----Original Message-----
From: Patrick Donnelly [mailto:pdonn...@redhat.com]
Sent: Monday, July 17, 2017 11:21 PM
To: gen...@gencgiyen.com
Cc: Ceph Users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Yet another performance tuning for CephFS

On Mon, Jul 17, 2017 at 1:08 PM,  <gen...@gencgiyen.com> wrote:
> But let's try another. Let's say I have a 5GB file on my server. If I
> do this:
>
> $ rsync ./bigfile /mnt/cephfs/targetfile --progress
>
> then I see max. 200 MB/s. I think that is still slow :/ Is this expected?

Perhaps that is the bandwidth limit of your local device rsync is reading from?
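
(One quick way to check, assuming ./bigfile is the source file: read it
with dd and watch the reported rate.

$ dd if=./bigfile of=/dev/null bs=1M status=progress

status=progress needs GNU dd, and a repeated run will read from the page
cache, so the first run is the honest one. If it also tops out near
200 MB/s, the bottleneck is the local read, not CephFS.)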

--
Patrick Donnelly

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
