Re: [ceph-users] FW: RGW performance issue

2015-11-13 Thread Pavan Rallabhandi
No documentation that I am aware of. The idea is to avoid having multiple RGW 
instances when a single RGW instance can drive the available cluster bandwidth 
on its own.

-----Original Message-----
From: Jens Rosenboom [mailto:j.rosenb...@x-ion.de] 
Sent: Friday, November 13, 2015 8:58 PM
To: Pavan Rallabhandi
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] FW: RGW performance issue

2015-11-13 5:47 GMT+01:00 Pavan Rallabhandi <pavan.rallabha...@sandisk.com>:
> If you are on >=hammer builds, you might want to consider the option 
> of using 'rgw_num_rados_handles', which opens up more handles to the 
> cluster from RGW. This would help in scenarios where you have enough 
> OSDs to drive the cluster bandwidth, which I guess is the case for 
> you.

Is there any documentation on this option other than the source itself? My 
Google-fu failed to come up with anything except pull requests.

In particular it would be interesting to know what a useful target value would 
be and how to define "enough" OSDs.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] FW: RGW performance issue

2015-11-13 Thread Jens Rosenboom
2015-11-13 5:47 GMT+01:00 Pavan Rallabhandi :
> If you are on >=hammer builds, you might want to consider the option of
> using 'rgw_num_rados_handles', which opens up more handles to the cluster
> from RGW. This would help in scenarios where you have enough OSDs to drive
> the cluster bandwidth, which I guess is the case for you.

Is there any documentation on this option other than the source
itself? My Google-fu failed to come up with anything except pull
requests.

In particular it would be interesting to know what a useful target
value would be and how to define "enough" OSDs.


[ceph-users] FW: RGW performance issue

2015-11-12 Thread Максим Головков
Hello,

 

We are building a cluster for archive storage. We plan to use Object Storage
(RGW) only, no Block Devices or File System. We don't require high speed, so
we are using old, weak servers (4 cores, 3 GB RAM) with new, huge but slow
HDDs (8 TB, 5900 rpm). We currently have 3 storage nodes with 24 OSDs in
total and 3 RGW nodes based on the default Civetweb engine.

 

We get about 50 MB/s of "raw" write speed with librados-level benchmarks
(measured by rados bench and rados put), which is quite enough for us.
However, RGW performance is dramatically lower: no more than 5 MB/s for file
uploads via s3cmd and the Swift client. That is too slow for our tasks, and
abnormally slow compared with the librados write speed, in my opinion.
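For reference, a librados-level write benchmark of the kind mentioned above is
typically run like this (the pool name and parameters are illustrative, and the
commands need a running cluster):

```shell
# Write benchmark: 60 seconds, 4 MB objects, 16 concurrent operations;
# keep the objects around so they can be read back afterwards
rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup

# Sequential read of the objects written above
rados bench -p testpool 60 seq -t 16

# Remove the benchmark objects when done
rados -p testpool cleanup
```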

 

Write speed is the most important factor for us right now; our first goal is
to download about 50 TB of archive data from a public cloud to our
on-premises storage. We need no less than 20 MB/s of write speed.
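To put that requirement in perspective, the transfer-time arithmetic (assuming
decimal TB and MB) works out as follows:

```python
def transfer_days(total_bytes: float, rate_bytes_per_s: float) -> float:
    """Days needed to move total_bytes at a sustained rate."""
    return total_bytes / rate_bytes_per_s / 86400  # 86400 seconds per day

TOTAL = 50e12  # 50 TB of archive data

# At the required 20 MB/s the transfer takes about a month
print(round(transfer_days(TOTAL, 20e6), 1))  # -> 28.9 days

# At the observed 5 MB/s through RGW it takes roughly four months
print(round(transfer_days(TOTAL, 5e6), 1))   # -> 115.7 days
```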

 

Can anybody help me with RGW performance? For those who use RGW: what
performance penalty does it impose, and where should I look for the cause of
the problem? I have checked all the performance counters I know of and
haven't found any critical values.

 

Thanks.



Re: [ceph-users] FW: RGW performance issue

2015-11-12 Thread Pavan Rallabhandi
If you are on >=hammer builds, you might want to consider using the 
'rgw_num_rados_handles' option, which opens up more handles to the cluster 
from RGW. This would help in scenarios where you have enough OSDs to drive the 
cluster bandwidth, which I guess is the case for you.
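As an illustrative sketch (the section name and value below are assumptions, 
not recommendations), the option would be set in ceph.conf on the RGW node:

```ini
; ceph.conf on the RGW node; the section name depends on your instance name
[client.rgw.gateway1]
    rgw num rados handles = 8
```

Restart the RGW daemon after changing it, and benchmark before and after: the 
useful value depends on how much cluster bandwidth the OSDs can actually 
deliver.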

Thanks,
-Pavan.

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Максим Головков

Sent: Thursday, November 12, 2015 1:51 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] FW: RGW performance issue

Hello,

We are building a cluster for archive storage. We plan to use Object Storage 
(RGW) only, no Block Devices or File System. We don't require high speed, so 
we are using old, weak servers (4 cores, 3 GB RAM) with new, huge but slow 
HDDs (8 TB, 5900 rpm). We currently have 3 storage nodes with 24 OSDs in 
total and 3 RGW nodes based on the default Civetweb engine.

We get about 50 MB/s of "raw" write speed with librados-level benchmarks 
(measured by rados bench and rados put), which is quite enough for us. 
However, RGW performance is dramatically lower: no more than 5 MB/s for file 
uploads via s3cmd and the Swift client. That is too slow for our tasks, and 
abnormally slow compared with the librados write speed, in my opinion.

Write speed is the most important factor for us right now; our first goal is 
to download about 50 TB of archive data from a public cloud to our 
on-premises storage. We need no less than 20 MB/s of write speed.

Can anybody help me with RGW performance? For those who use RGW: what 
performance penalty does it impose, and where should I look for the cause of 
the problem? I have checked all the performance counters I know of and 
haven't found any critical values.

Thanks.