On 11 January 2014 10:40, Cedric Lemarchand ced...@yipikai.org wrote:
On 10/01/2014 17:16, Bradley Kite wrote:
This might explain why the performance is not so good - on each
connection it can only do one transaction at a time:
1) Submit write
2) Wait...
3) Receive ACK
Then repeat.
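The serialized submit/wait/ACK loop above means that at queue depth 1 the
achievable IOPS are bounded by round-trip latency. A quick Little's-law
sketch (the 2 ms latency figure is an assumption for illustration, not a
measurement from this thread):

```python
def max_iops(outstanding_ops, round_trip_latency_s):
    """Little's law: throughput = concurrency / latency."""
    return outstanding_ops / round_trip_latency_s

# Assume 2 ms per write round trip (network + commit ACK):
print(max_iops(1, 0.002))    # one op in flight: hard ceiling of 500 IOPS
print(max_iops(256, 0.002))  # IO depth 256, perfectly pipelined
```

So if the client really does hold only one transaction in flight per
connection, no amount of spindles behind the OSDs can lift the per-connection
write rate above 1/latency.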
On 9 January 2014 16:57, Mark Nelson mark.nel...@inktank.com wrote:
On 01/09/2014 10:43 AM, Bradley Kite wrote:
On 9 January 2014 15:44, Christian Kauhaus k...@gocept.com wrote:
On 09.01.2014 10:25, Bradley Kite wrote:
3 servers (quad-core CPU, 16GB RAM) [...]
Hi
Ceph uses thin provisioning, so it will not allocate the full block device
when you create it with qemu-img; it allocates the underlying storage only
as you write data.
However, you can enable TRIM/DISCARD in the VM as per the documentation
here: http://ceph.com/docs/next/rbd/qemu-rbd/

It would surely provide better performance if there were more
OSDs, as my tests are using an IO depth of 256.
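For reference, enabling DISCARD passthrough when attaching an RBD image to
qemu looks roughly like the following (pool and image names here are
hypothetical; check the linked documentation for the exact flags your qemu
version supports):

```shell
# Attach an RBD image with TRIM/DISCARD passthrough enabled
# ("rbd" pool and "vm-disk" image are made-up example names).
qemu-system-x86_64 \
    -drive format=raw,file=rbd:rbd/vm-disk,id=drive1,if=none \
    -device driver=ide-hd,drive=drive1,discard_granularity=512
```

With discard enabled, an fstrim inside the guest lets Ceph reclaim the
thin-provisioned space again.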
Regards
--
Brad.
On 10 January 2014 14:32, Mark Nelson mark.nel...@inktank.com wrote:
On 01/10/2014 03:08 AM, Bradley Kite wrote:
On 9 January 2014 16:57, Mark Nelson mark.nel...@inktank.com wrote:
Hi there,
I am new to Ceph and still learning its performance capabilities, but I
would like to share my performance results in the hope that they are useful
to others, and also to see if there is room for improvement in my setup.
Firstly, a little about my setup:
3 servers (quad-core CPU, 16GB RAM), each with 4 SATA 7.2K RPM disks (4TB)
plus a 160GB SSD.
On 9 January 2014 15:44, Christian Kauhaus k...@gocept.com wrote:
On 09.01.2014 10:25, Bradley Kite wrote:
3 servers (quad-core CPU, 16GB RAM), each with 4 SATA 7.2K RPM disks
(4TB)
plus a 160GB SSD.
[...]
By comparison, a 12-disk RAID5 iSCSI SAN is doing ~4000 read iops and
~2000
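For context, a back-of-envelope spindle estimate (assuming a 7.2K RPM SATA
disk sustains roughly 75-100 random IOPS; the numbers below are illustrative
assumptions, not measurements from this thread):

```python
# Hypothetical estimate of what 12 bare 7.2K RPM spindles could do.
per_disk_iops = 90                     # assumed midpoint for 7.2K SATA
disks = 12
raw_read_iops = per_disk_iops * disks  # every spindle can serve reads
# Classic RAID5 small-write penalty: read data + read parity,
# then write data + write parity = 4 disk I/Os per logical write.
raw_write_iops = raw_read_iops / 4
print(raw_read_iops, raw_write_iops)
```

That the SAN reports ~4000 read iops, well above what 12 raw spindles could
deliver, suggests its controller cache and read-ahead are doing a lot of the
work, which is worth keeping in mind when comparing against Ceph.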