Hi Christian,
for my setup "b" takes too long - too much data movement and stress to all
nodes.
I have simply (with replica 3) "set noout", reinstall one node (with new
filesystem on the OSDs, but leave them in the
crushmap) and start all OSDs (at friday night) - takes app. less than one day
for
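A minimal sketch of that noout/reinstall workflow, assuming the standard ceph CLI (how you start the OSD daemons depends on your init system):

    ceph osd set noout      # keep CRUSH from marking the down OSDs out and rebalancing
    # ... reinstall the node and recreate the OSD filesystems, leaving the OSD ids in the crushmap ...
    # start the OSDs again, e.g. "service ceph start osd" (sysvinit) or "systemctl start ceph-osd.target" (systemd)
    ceph osd unset noout    # let recovery settle back to normal behaviour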
>>True, true. But I personally think that Ceph doesn't perform well on
>>small <10 node clusters.
Hi, I can reach 60 iops 4k read with 3 nodes (6 SSDs each).
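For reference, 4k random read numbers like that are usually taken with fio's rbd engine; a rough sketch, assuming fio was built with rbd support and that the pool/image names ("rbd"/"testimg") are placeholders:

    fio --name=4k-randread --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=testimg --rw=randread --bs=4k \
        --iodepth=32 --numjobs=4 --direct=1 --runtime=60 --time_based --group_reporting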
- Original Message -
From: "Lindsay Mathieson"
To: "Tony Nelson"
Cc: "ceph-users"
Sent: Monday, 31 August 2015 03:10:14
Subject: Re: [
Hello,
I'm about to add another storage node to the small firefly cluster here and
refurbish 2 existing nodes (more RAM, different OSD disks).
Insert rant about not going to start using ceph-deploy, as I would have to
set the cluster to noin since "prepare" also activates things due to the
udev magic
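If you do end up using ceph-deploy for the new node, that workaround would look roughly like this (host and device names are placeholders, syntax as of firefly/hammer-era ceph-deploy):

    ceph osd set noin                          # newly activated OSDs come up but stay "out"
    ceph-deploy osd prepare newnode:/dev/sdb   # udev may activate the OSD right after prepare
    # ... repeat for the remaining disks, then, when you are ready to take data:
    ceph osd unset noin
    ceph osd in <osd-id>                       # mark each new OSD in on your own schedule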
On 29 August 2015 at 00:53, Tony Nelson wrote:
> I recently built a 3 node Proxmox cluster for my office. I’d like to get
> HA set up, and the Proxmox book recommends Ceph. I’ve been reading the
> documentation and watching videos, and I think I have a grasp on the
> basics, but I don’t need any
Yes, I will use Ceph RBD as shared storage for an Oracle Database cluster, so
I need high random read/write I/O. With 3 nodes and 24 x 1TB 15K SAS drives,
what is the most optimized way to get it?
On Aug 31, 2015 2:01 AM, "Somnath Roy" wrote:
And what kind of performance are you looking for?
I assume your workload will be small block random read/write?
In case someone else runs into the same issue in the future:
I got past this issue by installing epel-release before installing
ceph-deploy. If the order of installation is ceph-deploy followed by
epel-release, the issue is hit.
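In other words, on RHEL 7 the working order is simply (a sketch, assuming yum and the ceph-deploy repo are already set up):

    yum install -y epel-release   # EPEL first, so ceph-deploy's Python dependencies resolve correctly
    yum install -y ceph-deploy    # installing this before epel-release reproduced the problem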
Thanks,
Pavana
On Sat, Aug 29, 2015 at 10:02 AM, pavana bhat
Hi,
I am trying ceph-deploy with Hammer on RHEL 7. While activating the OSD
using ceph-deploy on the admin node, the step below hangs. I tried to run it
manually on the OSD node and traced it using "python -m trace --trace".
It looks like it is stuck in some threading.py code. Can someone help?
And what kind of performance are you looking for?
I assume your workload will be small block random read/write?
Btw, without an SSD journal, write performance will be very bad, especially
when your cluster is small.
Sent from my iPhone
On Aug 30, 2015, at 4:33 AM, Le Quang Long
How heavy a transaction load are you expecting?
Shinobu
On Sun, Aug 30, 2015 at 8:33 PM, Le Quang Long
wrote:
> Thanks for your reply.
>
> I intend to use Ceph RBD as shared storage for Oracle Database RAC.
> My Ceph deployment has 3 nodes with 8 x 1TB 15k SAS drives per node; I do
> not have SSDs at the moment,
Thanks for your reply.
I intend to use Ceph RBD as shared storage for Oracle Database RAC.
My Ceph deployment has 3 nodes with 8 x 1TB 15k SAS drives per node. I do not
have SSDs at the moment, so my design puts both the journal and the OSD data
on each SAS drive.
Can you suggest a way to get the highest performance for the Oracle cluster?
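With no separate journal device, ceph-deploy simply co-locates the journal in a partition on the same disk; a sketch of that layout for one node (hostname and device names are placeholders):

    # prepare each 15k SAS drive as an OSD; omitting a journal device puts the
    # journal in a partition on the same disk
    ceph-deploy osd prepare node1:/dev/sdb node1:/dev/sdc node1:/dev/sdd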