Hi,

We did the opposite here: we added some SSDs to free slots after having a
normal cluster running on SATA.

We just created a new pool for them and kept the two disk types
separate. I used this as a template:
http://ceph.com/docs/master/rados/operations/crush-map/?highlight=ssd#placing-different-pools-on-different-osds
and left out the part about placing the primary copy on each SSD.
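For reference, the rough workflow from that doc page looks something
like the sketch below. These commands need a live cluster, and the pool
name and ruleset id are made up for illustration:

```shell
# Dump and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt: add a separate root bucket for the SSDs and a rule
# that selects from it (see the linked doc for the full syntax), then
# recompile and inject it:
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# Create a pool for the SSDs and point it at the new rule
# ("ssd-pool" and ruleset 4 are hypothetical)
ceph osd pool create ssd-pool 128 128
ceph osd pool set ssd-pool crush_ruleset 4
```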

I had to create the pool (root), rack and host buckets in the CRUSH map
by hand for the first server (it wouldn't let me do it from the command
line using 'ceph osd crush set ...'); after that I could add
servers/OSDs to it as normal.
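There is also a 'ceph osd crush add-bucket' command that may let you
create those buckets from the command line instead of editing the map.
A sketch, with all bucket and OSD names hypothetical:

```shell
# Create a separate root for the SSDs, plus a rack and host under it
ceph osd crush add-bucket ssd root
ceph osd crush add-bucket rack-ssd-1 rack
ceph osd crush move rack-ssd-1 root=ssd
ceph osd crush add-bucket node1-ssd host
ceph osd crush move node1-ssd rack=rack-ssd-1

# Place an OSD under the new host with weight 1.0
ceph osd crush set osd.12 1.0 host=node1-ssd
```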

Unless you really need two separate clusters, I'd go with just having
different pools; with two clusters you'd need a copy of every service
(mons, storage nodes, etc.).

More info on running multiple clusters here:
http://ceph.com/docs/master/rados/configuration/ceph-conf/#running-multiple-clusters
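If you do go the multi-cluster route, that page boils down to giving
each cluster its own name and conf file. Roughly (the cluster name
"backup" below is made up):

```shell
# Each cluster gets its own conf file, named after the cluster:
#   /etc/ceph/ceph.conf    -> default cluster "ceph"
#   /etc/ceph/backup.conf  -> second cluster named "backup" (hypothetical)

# Talk to the default cluster
ceph -s

# Talk to the second cluster by name
ceph --cluster backup -s
```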

Cheers,
Martin

On Tue, Mar 5, 2013 at 9:48 PM, Stefan Priebe <s.pri...@profihost.ag> wrote:
> Hi,
>
> right now I have a bunch of OSD hosts (servers) which have just 4 disks
> each; all of them are SSDs.
>
> So I have a lot of free hard disk slots in the chassis. My idea was to
> create a second Ceph cluster using these free slots. Is this possible? Or
> should I just use the first one with different rules? Any hints?
>
> Greets,
> Stefan
>
> _______________________________________________
> ceph-users mailing list
> ceph-us...@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
