If you're on a single machine, keep the host name the same in the config for every OSD, but the dev name has to be different for each OSD process on that machine.
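For example, with your four partitions, something like the following should work. This is only a sketch: the host name and the device paths /dev/sda5 through /dev/sda8 are placeholders, so substitute your actual node name and partitions.

    # All four OSDs live on the same host; only the device differs.
    [osd.0]
        host = ceph-node-1
        btrfs devs = /dev/sda5

    [osd.1]
        host = ceph-node-1
        btrfs devs = /dev/sda6

    [osd.2]
        host = ceph-node-1
        btrfs devs = /dev/sda7

    [osd.3]
        host = ceph-node-1
        btrfs devs = /dev/sda8

As Stefan mentioned, a recent mkcephfs should then detect that these OSDs share a node and generate the crushmap accordingly.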
On Wed, Apr 11, 2012 at 2:42 PM, Madhusudhana U <madhusudhana.u.acha...@gmail.com> wrote:
> Stefan Kleijkers <stefan <at> unilogicnetworks.net> writes:
>>
>> Hello,
>>
>> Yes, that's no problem. I've been using that configuration for some time now.
>> Just generate a config with multiple OSD clauses with the same node/host.
>>
>> With the newer Ceph versions, mkcephfs is smart enough to detect the OSDs
>> on the same node and will generate a crushmap whereby the objects get
>> replicated to different nodes.
>>
>> I didn't see any impact on performance (provided you have enough
>> processing power, because you will need more of it).
>>
>> I wanted to use just a few OSDs per node with mdraid, so I could use
>> RAID6. That way I could swap a faulty disk without bringing the node
>> down. But I couldn't get it stable with mdraid.
>>
> This is what the OSD part of my ceph.conf looks like:
>
> [osd.0]
>     host = ceph-node-1
>     btrfs devs = /dev/sda6
>
> [osd.1]
>     host = ceph-node-2
>     btrfs devs = /dev/sda6
>
> [osd.2]
>     host = ceph-node-3
>     btrfs devs = /dev/sda6
>
> [osd.3]
>     host = ceph-node-4
>     btrfs devs = /dev/sda6
>
> Can you please help me with adding multiple OSDs on the same machine,
> given that I have 4 partitions created for OSDs?
>
> These are powerful machines: six quad-core Intel Xeons with 48G of RAM.

-- 
Tomasz Paszkowski
SS7, Asterisk, SAN, Datacenter, Cloud Computing
+48500166299