I would say the easiest way would be to leverage all of Ceph's
self-healing: add the new nodes to the old cluster, let (or force) all the
data migrate between nodes, and then remove the old ones.
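
If it helps, here's a rough sketch of the drain/remove sequence for one old
OSD once the new nodes are in and the cluster is healthy; the OSD id
(osd.0) and the init command are placeholders, adjust for your setup:

    # force the data off the old OSD and watch the rebalance
    ceph osd crush reweight osd.0 0
    ceph -w    # wait until all PGs are active+clean again

    # once clean, take the OSD out and remove it for good
    ceph osd out 0
    service ceph stop osd.0        # run on the old node itself
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0

Repeat per OSD, one at a time, then pull the emptied hosts out of the
CRUSH map with "ceph osd crush remove <hostname>".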

Well, to be fair, you could probably just install radosgw on another node
and use it as your gateway, without even needing to create a new OSD node...
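
Roughly, on whatever node you pick (the names and paths below are just the
stock examples from the docs, nothing special):

    # create a key for the gateway, from a node with the admin keyring
    ceph auth get-or-create client.radosgw.gateway \
        mon 'allow rw' osd 'allow rwx' \
        -o /etc/ceph/ceph.client.radosgw.keyring

    # then in /etc/ceph/ceph.conf on that node
    [client.radosgw.gateway]
    host = gateway-host
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw frontends = "civetweb port=7480"
    log file = /var/log/ceph/client.radosgw.gateway.log

The civetweb frontend saves you the whole Apache/FastCGI dance, assuming
your radosgw build has it.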

Or was there a reason to create a new cluster? I can tell you that one of
the clusters I run has been around since bobtail, and now it's on hammer...
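
Upgrading in place is usually just packages plus restarts in order (mons
first, then OSDs, then radosgw), waiting for health to settle between
nodes. A sketch for CentOS, assuming you've pointed yum at the hammer repo:

    # on each mon node, one at a time
    yum update -y ceph
    service ceph restart mon
    ceph -s    # make sure quorum is back before the next one

    # then on each OSD node, one at a time
    yum update -y ceph
    service ceph restart osd
    ceph -s    # wait for active+clean before moving on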

On Wed, Aug 26, 2015 at 2:50 PM, Chang, Fangzhe (Fangzhe) <
fangzhe.ch...@alcatel-lucent.com> wrote:

> Hi,
>
> We have been running Ceph/Radosgw version 0.80.7 (Firefly) and have
> stored a fair amount of data in it. We are only using Ceph as an object
> store via radosgw. Last week the ceph-radosgw daemon suddenly refused to
> start (the logs show only an “initialization timeout” error on CentOS 7).
> This prompted me to install a newer instance --- Ceph/Radosgw version
> 0.94.2 (Hammer). The new instance has a different set of keyrings by
> default. The next step is to migrate all the data. Does anyone know how
> to get the existing data out of the old Ceph cluster (Firefly) and into
> the new instance (Hammer)? Please note that in the old three-node cluster
> the ceph-osd daemons are still running but radosgw is not. Any suggestion
> will be greatly appreciated.
>
> Thanks.
>
> Regards,
>
> Fangzhe Chang
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
