[ceph-users] How do you replace an OSD?

2013-08-13 Thread Dmitry Postrigan

I just got my small Ceph cluster running. I run 6 OSDs on the same server to 
basically replace mdraid.

I have tried to simulate a hard drive (OSD) failure: I removed the OSD
(out + stop), zapped it, and then prepared and activated it. That worked, but
I ended up with one extra OSD, and the old one still shows up in the ceph -w
output. I guess this is not how I am supposed to do it?
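
For reference, this is roughly the sequence I ran (the hostname and device
names are placeholders; say the failed OSD was osd.5 on /dev/sdf):

    ceph osd out 5                          # mark the OSD out so data re-replicates
    service ceph stop osd.5                 # stop the daemon (sysvinit; upstart differs)
    ceph-deploy disk zap backup1:sdf        # wipe the old disk
    ceph-deploy osd prepare backup1:sdf     # create a fresh OSD on it
    ceph-deploy osd activate backup1:sdf1   # sdf1 = data partition made by prepare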

The documentation recommends manually editing the configuration; however,
there are no osd entries in my /etc/ceph/ceph.conf.

So what would be the best way to replace a failed OSD?

Dmitry



Re: [ceph-users] Ceph instead of RAID

2013-08-13 Thread Dmitry Postrigan
>> This will be a single-server configuration; the goal is to replace mdraid,
>> hence I tried to use localhost (nothing more will be added to the cluster).
>> Are you saying it will be less fault tolerant than a RAID-10?
>
> Ceph is a distributed object store. If you stay within a single machine,
> keep using a local RAID solution (hardware or software).
>
> Why would you want to make this switch?

I do not think RAID-10 on six 3TB disks is going to be reliable at all. I have
simulated several failures, and a rebuild looks like it will take a very long
time. Funnily enough, during one of these experiments another drive failed,
and I lost the entire array. Good luck recovering from that...

I feel that Ceph is better than mdraid because:
1) When the Ceph cluster is far from full, 'rebuilding' will be much faster
than with mdraid.
2) You can easily change the number of replicas (see the one-liner after this
list).
3) When multiple disks have bad sectors, I suspect it will be much easier to
recover data from Ceph than from mdraid, which will simply never finish
rebuilding.
4) If we need to migrate data to a different server with no downtime, we just
add more OSDs, wait, and then remove the old ones :-)
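
For example, for point 2, my understanding is that changing the replica count
is a one-liner per pool (using the default 'data' pool here):

    ceph osd pool set data size 3    # keep 3 copies of every object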

These are just my initial observations, though, so please correct me if I am wrong.

Dmitry



[ceph-users] Ceph instead of RAID

2013-08-12 Thread Dmitry Postrigan
Hello community,

I am currently installing some backup servers with 6x3TB drives in them. I 
played with RAID-10 but I was not
impressed at all with how it performs during a recovery.

Anyway, I thought: what if I use Ceph instead of RAID-10? All 6 disks will be
local, so I could simply create 6 local OSDs plus a monitor, right? Is there
anything I need to watch out for in such a configuration?
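
In other words, something like this sketch ('backup1' and the device names
are placeholders, and I may well be missing steps):

    ceph-deploy new backup1
    ceph-deploy mon create backup1
    ceph-deploy gatherkeys backup1
    ceph-deploy osd create backup1:sdb backup1:sdc backup1:sdd
    ceph-deploy osd create backup1:sde backup1:sdf backup1:sdg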

Another thing. I am using ceph-deploy and I have noticed that when I do this:

ceph-deploy --verbose new localhost

the ceph.conf file is created in the current folder instead of /etc/ceph. Is
this normal?

Also, in the ceph.conf there's a line:
mon host = ::1
Is this normal, or do I need to change it to point to localhost?
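
As far as I can tell, ::1 is just the IPv6 loopback (i.e. localhost). If it
should instead point at the server's real address, I assume the line would
look something like this (192.168.0.10 is a placeholder):

    mon host = 192.168.0.10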

Thanks for any feedback on this.

Dmitry
