Dear Cephers,

This is my first time writing to the list; I hope my problem is described clearly.

I have a cluster of 4 physical servers: 3 of them each run one mon and 4 OSDs, and the fourth one runs as the RGW client (radosgw). I have just upgraded the 3 mon/OSD servers from Hammer (H) to Jewel (J); the RGW server has not been upgraded yet.

After creating a new RGW service on one of the mon nodes, I found that the new RGW service cannot access my old RGW pools with the existing user name and password. When I create a new user and password on the new RGW node, it can only access the new default RGW pools (default.rgw.*).
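
For reference, the new user on the Jewel node was created with something like the following (the uid and display name here are just placeholders, and the Swift subuser line is only because we use Swift-style auth):

[root@ceph2 ~]# radosgw-admin user create --uid=testuser --display-name="Test User"
[root@ceph2 ~]# radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full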


Is there any way to make the new RGW service access the old RGW pools?
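
My guess is that the Jewel radosgw created its own default zone pointing at the default.rgw.* pools, and that I would need to repoint that zone at the old .rgw.* pools. Would something along these lines be the right direction? This is an untested sketch on my side, with the pool names taken from the listing below:

[root@ceph2 ~]# radosgw-admin zone get --rgw-zone=default > zone.json
(then edit zone.json so that domain_root, control_pool, gc_pool, user_uid_pool,
 user_keys_pool, user_swift_pool, user_email_pool and the placement
 index_pool/data_pool point at .rgw, .rgw.control, .rgw.gc, .users.uid,
 .users, .users.swift, .users.email, .rgw.buckets.index and .rgw.buckets)
[root@ceph2 ~]# radosgw-admin zone set --rgw-zone=default --infile zone.json
[root@ceph2 ~]# radosgw-admin period update --commit   (only if a realm/period is configured)

and then restart the radosgw daemon. I have not tried this yet, so I would appreciate confirmation or a pointer to the proper migration procedure.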

My old ceph version:
[root@rgw0 ~]# ceph --version
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)

And the new version:
[root@ceph2 ~]# ceph --version
ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)

Here is the cluster info:
[root@ceph2 home]# ceph -s
    cluster 3fcc77ef-9fda-4f83-8b9f-efc9c769c857
     health HEALTH_OK
     monmap e5: 3 mons at {ceph0=172.17.0.170:6789/0,ceph1=172.17.0.171:6789/0,ceph2=172.17.0.172:6789/0}
            election epoch 346, quorum 0,1,2 ceph0,ceph1,ceph2
      fsmap e353: 5/5/5 up {4:0=ceph2-mds1=up:active,4:1=ceph1-mds0=up:active,4:2=ceph2-mds0=up:active,4:3=ceph1-mds1=up:active,4:4=ceph0-mds0=up:active}
     osdmap e7540: 12 osds: 12 up, 12 in
      pgmap v5839842: 1960 pgs, 31 pools, 185 GB data, 96103 objects
            557 GB used, 44132 GB / 44690 GB avail
                1960 active+clean
  client io 18291 B/s rd, 0 B/s wr, 17 op/s rd, 11 op/s wr

[root@ceph2 home]# rados lspools
rbd
.rgw.root
.rgw.control
.rgw
.rgw.gc
glance
.users.uid
.users
.users.swift
.rgw.buckets.index
.rgw.buckets
.users.email
LUNs
.rgw.buckets.hospitalA
.rgw.buckets.hospitalB
.rgw.buckets.hospitalC
.rgw.buckets.extra
ceph2_mds0_data
ceph2_mds0_metadata
nfs_disk
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
default.rgw.users.keys
default.rgw.users.swift
default.rgw.meta
default.rgw.buckets.index
default.rgw.buckets.data
tmp_pool


Any help will be greatly appreciated!

Many thanks,

Sambar
