Not much to suggest from my side (I have never run Ceph on an all-HDD cluster), other
than probably running more OSDs/HDDs. More OSDs should help, especially if you
can spread them across many nodes.
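For example, you can verify how the OSDs end up spread across hosts with the standard CLI:

    ceph osd tree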
I would say try fio-rbd (the librbd ioengine) first, with rbd_cache = false, as it may give
you some boost over kernel rbd, since the TCP_NODELAY patch is probably not in
krbd yet. But I doubt it is very significant in the HDD world.
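As a minimal sketch (assuming fio was built with rbd support; the pool name "rbd" and image name "fio_test" are placeholders for a pre-created test image):

    # client-side /etc/ceph/ceph.conf: make sure the librbd cache is off
    [client]
    rbd cache = false

    # 4k random reads through librbd against image "fio_test" in pool "rbd"
    fio --name=librbd-randread --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=fio_test --rw=randread --bs=4k \
        --iodepth=32 --runtime=60 --time_based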

Thanks & Regards
Somnath


From: Le Quang Long [mailto:longlq.openst...@gmail.com]
Sent: Sunday, August 30, 2015 5:19 PM
To: Somnath Roy
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Fwd: [Ceph-community] Improve Read Performance


Yes, I will use Ceph RBD as shared storage for an Oracle Database cluster, so I
need high random read/write I/O. With 3 nodes and 24 x 1TB 15K SAS drives, what is
the most optimized configuration to achieve that?
On Aug 31, 2015 2:01 AM, "Somnath Roy" <somnath....@sandisk.com> wrote:
And what kind of performance are you looking for?
I assume your workload will be small-block random read/write?
Btw, without an SSD journal, write performance will be very bad, especially when your
cluster is small: FileStore commits every write to the journal first and then again to
the data disk, so a colocated journal roughly halves your effective write throughput.
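If you do add SSDs later, the usual FileStore approach is a journal partition on the SSD; e.g. with ceph-disk (device names below are placeholders: /dev/sdb an HDD for data, /dev/sdc an SSD for the journal):

    # in ceph.conf, size the journal before preparing the OSD (value in MB)
    [osd]
    osd journal size = 10240

    # prepare one OSD with data on the HDD and its journal on the SSD
    ceph-disk prepare /dev/sdb /dev/sdc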

Sent from my iPhone

On Aug 30, 2015, at 4:33 AM, Le Quang Long <longlq.openst...@gmail.com> wrote:

Thanks for your reply.

I intend to use Ceph RBD as shared storage for Oracle Database RAC.
My Ceph deployment has 3 nodes with 8 x 1TB 15K SAS drives per node. I do not have
SSDs at the moment, so in my design each SAS disk holds both its OSD data and its journal.
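For reference, a sketch of how I would create that colocated layout with ceph-deploy (hostnames and device names are placeholders; omitting the journal argument makes ceph-disk put the journal on the same disk):

    # one OSD per SAS disk, journal colocated on the data disk
    ceph-deploy osd create node1:sdb node1:sdc node1:sdd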

Can you suggest a way to get the highest performance for the Oracle cluster with
this deployment?

Many thanks.
