Re: [ceph-users] slow read-performance inside the vm

2015-01-27 Thread Patrik Plank
Hello again,



Thank you all for the very helpful advice.



Now I have reinstalled my Ceph cluster.

Three nodes with Ceph version 0.80.7 and one OSD for every single disk. The
journals are stored on an SSD.

 
My ceph.conf

  

[global]
fsid = bceade34-3c54-4a35-a759-7af631a19df7
mon_initial_members = ceph01
mon_host = 10.0.0.20,10.0.0.21,10.0.0.22
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = 10.0.0.0/24
cluster_network = 10.0.1.0/24
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 4096
osd_pool_default_pgp_num = 4096
filestore_max_sync_interval = 30
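For reference, a commonly cited rule of thumb for the PG count is
(number of OSDs x 100) / replica count, rounded up to the next power of two.
With the 24 OSDs listed in the osd tree below and osd_pool_default_size = 2,
that gives:

(24 x 100) / 2 = 1200  ->  next power of two = 2048

so the 4096 configured above is one power of two higher than that rule of
thumb would suggest; treat it as a rough sanity check rather than a hard rule.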

 


ceph osd tree

# id    weight  type name       up/down reweight
-1      6.76    root default
-2      2.44            host ceph01
0       0.55                    osd.0   up      1
3       0.27                    osd.3   up      1
4       0.27                    osd.4   up      1
5       0.27                    osd.5   up      1
6       0.27                    osd.6   up      1
7       0.27                    osd.7   up      1
1       0.27                    osd.1   up      1
2       0.27                    osd.2   up      1
-3      2.16            host ceph02
9       0.27                    osd.9   up      1
11      0.27                    osd.11  up      1
12      0.27                    osd.12  up      1
13      0.27                    osd.13  up      1
14      0.27                    osd.14  up      1
15      0.27                    osd.15  up      1
8       0.27                    osd.8   up      1
10      0.27                    osd.10  up      1
-4      2.16            host ceph03
17      0.27                    osd.17  up      1
18      0.27                    osd.18  up      1
19      0.27                    osd.19  up      1
20      0.27                    osd.20  up      1
21      0.27                    osd.21  up      1
22      0.27                    osd.22  up      1
23      0.27                    osd.23  up      1
16      0.27                    osd.16  up      1

 


rados bench -p kvm 50 write --no-cleanup

Total time run:         50.494855
Total writes made:      1180
Write size:             4194304
Bandwidth (MB/sec):     93.475
Stddev Bandwidth:       16.3955
Max bandwidth (MB/sec): 112
Min bandwidth (MB/sec): 0
Average Latency:        0.684571
Stddev Latency:         0.216088
Max latency:            1.86831
Min latency:            0.234673



rados bench -p kvm 50 seq

Total time run:        15.009855
Total reads made:      1180
Read size:             4194304
Bandwidth (MB/sec):    314.460
Average Latency:       0.20296
Max latency:           1.06341
Min latency:           0.02983
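Since the write bench above was run with --no-cleanup, a random-read pass can
be run against the same objects, and the benchmark objects removed afterwards
(a sketch, reusing the kvm pool; the cleanup subcommand may need a --prefix
argument on older releases):

rados bench -p kvm 50 rand
rados -p kvm cleanup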

 


I am really happy; these values above are enough for my small number of VMs.
Inside the VMs I now get 80 MB/s write and 130 MB/s read, with write cache
enabled.

But there is one little problem: for small files (4 kB to 50 kB) the cluster
is very slow. Are there some tuning parameters for small files?
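One way to quantify the small-file behaviour (a sketch, assuming a Linux guest
with fio installed; the file name, size and run time are arbitrary):

fio --name=small-rw --filename=/root/fio-test --size=1G \
    --bs=4k --rw=randrw --rwmixread=70 \
    --ioengine=libaio --direct=1 --iodepth=16 \
    --runtime=60 --time_based --group_reporting

Comparing the 4k numbers with a --bs=1M run should show how much of the
slowness is per-request latency rather than raw bandwidth.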



thank you

best regards



 
-Original message-
From: Lindsay Mathieson lindsay.mathie...@gmail.com
Sent: Friday 9th January 2015 0:59
To: ceph-users@lists.ceph.com
Cc: Patrik Plank pat...@plank.me
Subject: Re: [ceph-users] slow read-performance inside the vm


On Thu, 8 Jan 2015 05:36:43 PM Patrik Plank wrote:

Hi Patrik, just a beginner myself, but I have been through a similar process 
recently :)

> With these values above, I get a write performance of 90 MB/s and read
> performance of 29 MB/s inside the VM (Windows 2008 R2 with virtio driver
> and writeback cache enabled). Are these values normal with my configuration
> and hardware?

They do seem *very* odd. Your write performance is pretty good, but your read 
performance is abysmal - with a similar setup whose 3 OSDs were slower than 
yours, I was getting 200 MB/s reads. 

Maybe your network setup is dodgy? Jumbo frames can be tricky. Have you run 
iperf between the nodes?

What are you using for benchmark testing on the windows guest?

Also, it is probably more useful to turn writeback caching off for 
benchmarking; the cache will totally obscure the real performance.

How is the VM mounted? rbd driver?

> The read performance seems slow. Would the read performance be better
> if I ran an OSD for every single disk?

I think so - in general, the more OSDs the better. Also, having 8 HDs in RAID 0 
is a recipe for disaster: you'll lose the entire OSD if one of those disks 
fails.

I'd be creating an OSD for each HD (8 per node), with a 5-10GB SSD partition 
per OSD for journal. Tedious, but should make a big difference to reads and 
writes.

Might be worthwhile trying
[global]
  filestore max sync interval = 30

as well.

-- 
Lindsay





Re: [ceph-users] slow read-performance inside the vm

2015-01-27 Thread Udo Lembke
Hi Patrik,

On 27.01.2015 14:06, Patrik Plank wrote:
> ...
> I am really happy; these values above are enough for my small number of
> VMs. Inside the VMs I now get 80 MB/s write and 130 MB/s read, with
> write cache enabled.
>
> But there is one little problem: for small files (4 kB to 50 kB) the
> cluster is very slow.
>
> Are there some tuning parameters for small files?

Do you use a higher read-ahead inside the VM? Like:
echo 4096 > /sys/block/vda/queue/read_ahead_kb
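If that helps, a sketch for making it persistent inside the VM (assuming a
udev-based guest; the device name vda and the rules file name are placeholders):

# /etc/udev/rules.d/99-read-ahead.rules
SUBSYSTEM=="block", KERNEL=="vda", ACTION=="add|change", ATTR{queue/read_ahead_kb}="4096"

Alternatively, the echo line can simply be added to /etc/rc.local.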

Udo

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] slow read-performance inside the vm

2015-01-11 Thread Alexandre DERUMIER
Hi,

Also check your CPU usage; Dell PowerEdge 2900s are quite old (6-8 years old),
and the more IOPS you need, the more CPU you need.

I don't remember what the default block size of rados bench is.
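For what it is worth, the "Write size: 4194304" in the outputs above suggests
the default is 4 MB; a smaller block size can be forced with -b, e.g. (a
sketch, reusing the kvm pool from the thread):

rados bench -p kvm 50 write -b 4096 --no-cleanup

which would exercise the cluster with 4 kB writes instead.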


----- Original mail -----
From: Patrik Plank pat...@plank.me
To: ceph-users ceph-users@lists.ceph.com
Sent: Thursday, 8 January 2015 17:36:43
Subject: [ceph-users] slow read-performance inside the vm






Hi, 

first of all, I am a "ceph-beginner", so I am sorry for the trivial questions 
:). 

I have built a three-node Ceph cluster for virtualization. 



Hardware: 

Dell PowerEdge 2900 
8 x 300 GB SAS 15k7 with Dell PERC 6/i in RAID 0 
2 x 120 GB SSD in RAID 1 with Fujitsu RAID controller for journal + OS 
16 GB RAM 
2 x Intel Xeon E5410 2.3 GHz 
2 x dual 1 Gb NIC 

Configuration 

Ceph 0.90 
2x network bonding with 2 x 1 Gb network (public + cluster network), MTU 9000 
read_ahead_kb = 2048 
/dev/sda1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,attr2,inode64,noquota) 





ceph.conf: 

[global] 
fsid = 1afaa484-1e18-4498-8fab-a31c0be230dd 
mon_initial_members = ceph01 
mon_host = 10.0.0.20,10.0.0.21,10.0.0.22 
auth_cluster_required = cephx 
auth_service_required = cephx 
auth_client_required = cephx 
filestore_xattr_use_omap = true 
public_network = 10.0.0.0/24 
cluster_network = 10.0.1.0/24 
osd_pool_default_size = 3 
osd_pool_default_min_size = 1 
osd_pool_default_pg_num = 128 
osd_pool_default_pgp_num = 128 
filestore_flusher = false 

[client] 
rbd_cache = true 
rbd_readahead_trigger_requests = 50 
rbd_readahead_max_bytes = 4096 
rbd_readahead_disable_after_bytes = 0 







rados bench -p kvm 200 write --no-cleanup 

Total time run:         201.139795 
Total writes made:      3403 
Write size:             4194304 
Bandwidth (MB/sec):     67.674 
Stddev Bandwidth:       66.7865 
Max bandwidth (MB/sec): 212 
Min bandwidth (MB/sec): 0 
Average Latency:        0.945577 
Stddev Latency:         1.65121 
Max latency:            13.6154 
Min latency:            0.085628 

rados bench -p kvm 200 seq 

Total time run:        63.755990 
Total reads made:      3403 
Read size:             4194304 
Bandwidth (MB/sec):    213.502 
Average Latency:       0.299648 
Max latency:           1.00783 
Min latency:           0.057656 




So here are my questions: 

With these values above, I get a write performance of 90 MB/s and read 
performance of 29 MB/s inside the VM (Windows 2008 R2 with virtio driver and 
writeback cache enabled). 

Are these values normal with my configuration and hardware? The read 
performance seems slow. 

Would the read performance be better if I ran an OSD for every single disk? 




Best regards 

Patrik 





[ceph-users] slow read-performance inside the vm

2015-01-08 Thread Patrik Plank
Hi,

first of all, I am a "ceph-beginner", so I am sorry for the trivial questions :).

I have built a three-node Ceph cluster for virtualization.

  

Hardware:

Dell PowerEdge 2900
8 x 300 GB SAS 15k7 with Dell PERC 6/i in RAID 0
2 x 120 GB SSD in RAID 1 with Fujitsu RAID controller for journal + OS
16 GB RAM
2 x Intel Xeon E5410 2.3 GHz
2 x dual 1 Gb NIC

Configuration

Ceph 0.90
2x network bonding with 2 x 1 Gb network (public + cluster network), MTU 9000
read_ahead_kb = 2048
/dev/sda1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,attr2,inode64,noquota)
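For the record, read_ahead_kb here is presumably the per-device sysfs setting
on the OSD hosts; a sketch of how it is usually applied (device name sda taken
from the mount line above):

echo 2048 > /sys/block/sda/queue/read_ahead_kb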


  

ceph.conf:

[global]
fsid = 1afaa484-1e18-4498-8fab-a31c0be230dd
mon_initial_members = ceph01
mon_host = 10.0.0.20,10.0.0.21,10.0.0.22
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = 10.0.0.0/24
cluster_network = 10.0.1.0/24
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128
filestore_flusher = false

[client]
rbd_cache = true
rbd_readahead_trigger_requests = 50
rbd_readahead_max_bytes = 4096
rbd_readahead_disable_after_bytes = 0





rados bench -p kvm 200 write --no-cleanup

Total time run:         201.139795
Total writes made:      3403
Write size:             4194304
Bandwidth (MB/sec):     67.674
Stddev Bandwidth:       66.7865
Max bandwidth (MB/sec): 212
Min bandwidth (MB/sec): 0
Average Latency:        0.945577
Stddev Latency:         1.65121
Max latency:            13.6154
Min latency:            0.085628

rados bench -p kvm 200 seq

Total time run:        63.755990
Total reads made:      3403
Read size:             4194304
Bandwidth (MB/sec):    213.502
Average Latency:       0.299648
Max latency:           1.00783
Min latency:           0.057656



So here are my questions:

With these values above, I get a write performance of 90 MB/s and read 
performance of 29 MB/s inside the VM (Windows 2008 R2 with virtio driver and 
writeback cache enabled).

Are these values normal with my configuration and hardware? The read 
performance seems slow.

Would the read performance be better if I ran an OSD for every single disk?

  

Best regards

Patrik

 


Re: [ceph-users] slow read-performance inside the vm

2015-01-08 Thread Lindsay Mathieson
On Thu, 8 Jan 2015 05:36:43 PM Patrik Plank wrote:

Hi Patrik, just a beginner myself, but I have been through a similar process 
recently :)

> With these values above, I get a write performance of 90 MB/s and read
> performance of 29 MB/s inside the VM (Windows 2008 R2 with virtio driver
> and writeback cache enabled). Are these values normal with my configuration
> and hardware?

They do seem *very* odd. Your write performance is pretty good, but your read 
performance is abysmal - with a similar setup whose 3 OSDs were slower than 
yours, I was getting 200 MB/s reads. 

Maybe your network setup is dodgy? Jumbo frames can be tricky. Have you run 
iperf between the nodes?
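A quick way to check both (a sketch; the node address is taken from the
ceph.conf earlier in the thread):

# on one node
iperf -s

# on another node
iperf -c 10.0.0.21

# verify jumbo frames end-to-end (8972 = 9000 minus 28 bytes of IP/ICMP header)
ping -M do -s 8972 10.0.0.21

If the ping fails with "message too long", the MTU 9000 path is broken
somewhere.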

What are you using for benchmark testing on the windows guest?

Also, it is probably more useful to turn writeback caching off for 
benchmarking; the cache will totally obscure the real performance.


How is the VM mounted? rbd driver?
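For reference, an RBD-backed disk usually shows up on the qemu command line
roughly like this (a sketch; the pool/image name vm-disk-1 and the auth id are
made up):

-drive format=raw,file=rbd:kvm/vm-disk-1:id=admin:conf=/etc/ceph/ceph.conf,cache=writeback,if=virtio

If the image is instead mapped with the kernel rbd client and passed in as a
plain block device, the rbd_cache settings in ceph.conf do not apply.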


> The read performance seems slow. Would the read performance be better
> if I ran an OSD for every single disk?

I think so - in general, the more OSDs the better. Also, having 8 HDs in RAID 0 
is a recipe for disaster: you'll lose the entire OSD if one of those disks 
fails.

I'd be creating an OSD for each HD (8 per node), with a 5-10GB SSD partition 
per OSD for journal. Tedious, but should make a big difference to reads and 
writes.
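With ceph-deploy that would look roughly like this (a sketch; the data disk
sdb and the SSD journal partition /dev/sda5 are placeholders):

ceph-deploy disk zap ceph01:sdb
ceph-deploy osd create ceph01:sdb:/dev/sda5

i.e. one data disk plus one SSD journal partition per OSD, repeated for each
of the 8 disks on every node.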

Might be worthwhile trying
[global]
  filestore max sync interval = 30

as well.

-- 
Lindsay



Re: [ceph-users] slow read-performance inside the vm

2015-01-08 Thread German Anders
That depends... with which block size do you get those numbers? Ceph is 
really good with block sizes > 256 kB: 1 MB, 4 MB, ...




German Anders

--- Original message ---
Subject: [ceph-users] slow read-performance inside the vm
From: Patrik Plank pat...@plank.me
To: ceph-users@lists.ceph.com ceph-users@lists.ceph.com
Date: Thursday, 08/01/2015 20:21

Hi,
first of all, I am a "ceph-beginner", so I am sorry for the trivial 
questions :).

I have built a three-node Ceph cluster for virtualization.

Hardware:

Dell Poweredge 2900
8 x 300GB SAS 15k7 with Dell Perc 6/i in Raid 0
2 x 120GB SSD in Raid 1 with Fujitsu Raid Controller for Journal + OS
16GB RAM
2 x Intel Xeon E5410 2.3 GHz
2 x Dual 1Gb Nic

Configuration


Ceph 0.90
2x Network bonding with 2 x 1 Gb Network (public + cluster Network) 
with mtu 9000

read_ahead_kb = 2048
/dev/sda1 on /var/lib/ceph/osd/ceph-0 type xfs 
(rw,noatime,attr2,inode64,noquota)




ceph.conf:


[global]

fsid = 1afaa484-1e18-4498-8fab-a31c0be230dd
mon_initial_members = ceph01
mon_host = 10.0.0.20,10.0.0.21,10.0.0.22
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = 10.0.0.0/24
cluster_network = 10.0.1.0/24
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128
filestore_flusher = false


[client]
rbd_cache = true
rbd_readahead_trigger_requests = 50
rbd_readahead_max_bytes = 4096
rbd_readahead_disable_after_bytes = 0




rados bench -p kvm 200 write --no-cleanup

Total time run: 201.139795
Total writes made: 3403
Write size: 4194304
Bandwidth (MB/sec): 67.674
Stddev Bandwidth: 66.7865
Max bandwidth (MB/sec): 212
Min bandwidth (MB/sec): 0
Average Latency: 0.945577
Stddev Latency: 1.65121
Max latency: 13.6154
Min latency: 0.085628

rados bench -p kvm 200 seq

Total time run: 63.755990
Total reads made: 3403
Read size: 4194304
Bandwidth (MB/sec): 213.502
Average Latency: 0.299648
Max latency: 1.00783
Min latency: 0.057656


So here my questions:

With these values above, I get a write performance of 90 MB/s and read 
performance of 29 MB/s inside the VM (Windows 2008 R2 with virtio 
driver and writeback cache enabled).
Are these values normal with my configuration and hardware? The 
read performance seems slow.
Would the read performance be better if I ran an OSD for every single 
disk?



Best regards
Patrik




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com