Re: [ceph-users] Re: testing ceph performance issue

2013-11-27 Thread Kyle Bader
 How much can performance be improved if we use SSDs to store the journals?

You will see roughly twice the throughput unless you are using btrfs
(still improved but not as dramatic). You will also see lower latency
because the disk head doesn't have to seek back and forth between
journal and data partitions.
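If you do move journals to SSD, this is roughly what the per-OSD configuration looks like. A minimal ceph.conf sketch, assuming one dedicated SSD partition per OSD; the device paths and OSD id are placeholders, not from your cluster:

```ini
[osd.0]
    ; point the journal at a raw SSD partition instead of a file
    ; on the data disk (placeholder device path)
    osd journal = /dev/sdb1
    ; journal size in MB
    osd journal size = 10240
```

With the journal on a separate SSD, the data disk no longer seeks between journal writes and data writes, which is where the latency win comes from.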

   Kernel RBD driver, what is this?

There are several RBD implementations, one is the kernel RBD driver in
upstream Linux, another is built into Qemu/KVM.
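To make the distinction concrete: the kernel driver exposes an RBD image as a regular block device on the host, with no hypervisor involved. A hedged sketch of the usual workflow, assuming a running cluster and the rbd kernel module (mainline since Linux 2.6.37); pool/image names and mount point are placeholders:

```
# create a 10 GB image in the default pool (placeholder names)
rbd create --size 10240 rbd/test-image

# map it through the in-kernel driver; it appears as /dev/rbd0
rbd map rbd/test-image

# from here it is an ordinary block device
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/test-image
```

Qemu/KVM, by contrast, links against librbd and talks to the cluster directly from userspace, so no block device ever appears on the host.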

 And we want to know whether RBD supports Xen virtualization?

It is possible, but not nearly as well tested or as prevalent as RBD
via Qemu/KVM. This might be a starting point if you're interested in
testing Xen/RBD integration:

http://wiki.xenproject.org/wiki/Ceph_and_libvirt_technology_preview
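For orientation, attaching an RBD image to a guest through libvirt is done with a network-type disk element. A hedged sketch only, of the general shape such a definition takes; the pool/image name, monitor host, and target device are placeholders, and Xen support for this path is exactly the preview status described above:

```xml
<!-- placeholder names throughout; requires librbd-enabled qemu -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/test-image'>
    <host name='mon-host.example.com' port='6789'/>
  </source>
  <target dev='xvda' bus='xen'/>
</disk>
```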

Hope that helps!

-- 

Kyle
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Re: testing ceph performance issue

2013-11-26 Thread haiquan517
 Hi Mark,

 Thanks for your reply!
  Yes, for the 33T OSDs we use 1 disk per OSD.
- Do you use SSDs for journals? Answer: we don't use SSDs for journals. How
much can performance be improved if we use SSDs to store the journals?
- What CPU do you have in each node? Answer: CPU: Intel(R) Xeon(R) CPU
E5-2420 0 @ 1.90GHz
- What OS/Kernel? Answer: Kernel: 3.11.6-1.el6xen.x86_64

- Are you using QEMU/KVM or Kernel RBD Driver?
  We don't use QEMU/KVM; for now we copy the virtual machine images directly
into a directory on the block storage, and then use Samba to share that
directory with other hosts.
  Kernel RBD driver, what is this?
And we want to know whether RBD supports Xen virtualization.
Thanks a lot!



- Original Message -
From: Mark Nelson mark.nel...@inktank.com
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] testing ceph performance issue
Date: 2013-11-26 23:23
