}
}
],
"metadata_heap": "",
"realm_id": ""
}
>> radosgw-admin user info
>> --uid="0611e8fdb62b4b2892b62c7e7bf3767f$0611e8fdb62b4b2892b62c7e7bf3767f"
>> --debug-ms=1 --debug-rgw=20 --
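The uid here is in the tenant-scoped form used for Keystone-created users (tenant$user, in this case with the same id for both parts). As a rough sketch, assuming the same ids as in the quoted command, the lookup can also be written with the tenant passed separately through the --tenant option radosgw-admin provides for multitenant users:

# Sketch: equivalent lookup with the tenant given separately (ids copied from above)
radosgw-admin user info \
    --tenant="0611e8fdb62b4b2892b62c7e7bf3767f" \
    --uid="0611e8fdb62b4b2892b62c7e7bf3767f"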
rgw_keystone_implicit_tenants = true
rgw_swift_account_in_url = true
rgw_s3_auth_use_keystone = true
rgw_keystone_verify_ssl = false
mon_pg_warn_max_object_skew = 1000
rgw enable usage log = true
rgw usage log tick interval = 30
rgw usage log flush threshold = 1024
rgw usag
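With the usage log enabled as above, the collected usage can be read back per user with radosgw-admin. A minimal sketch, assuming the tenant-scoped uid from the earlier command and placeholder dates:

# Summarised usage for one user; drop --show-log-entries=false to see raw entries
radosgw-admin usage show \
    --uid="0611e8fdb62b4b2892b62c7e7bf3767f$0611e8fdb62b4b2892b62c7e7bf3767f" \
    --start-date=2017-01-01 --end-date=2017-12-31 \
    --show-log-entries=false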
/ Kind Regards
Dilip Renkila
Linux / Unix SysAdmin
Linserv AB
Direct: +46 8 473 60 64
Mobile: +46 705080243
dilip.renk...@linserv.se
www.linserv.se
___
Hi all,
We have a Ceph Kraken cluster. Last week we lost an OSD server, and we then added one more OSD server with the same configuration and let the cluster recover, but I don't think that happened. Most PGs are still stuck in a remapped and degraded state. When I restart all the OSD daemons, it
Hi all,
We recently had an OSD breakdown. After that I manually added OSDs, thinking that Ceph would repair itself.
I am running Ceph version 11:
root@node16:~# ceph -v
ceph version 11.2.1 (e0354f9d3b1eea1d75a7dd487ba8098311be38a7)
root@node16:~# ceph -s
cluster
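For a cluster stuck in this state, the usual next step is to list exactly which PGs are not clean and where they map, before deciding whether recovery is actually progressing. A sketch of the standard commands, nothing cluster-specific assumed:

# Detailed health output naming the degraded/remapped PGs
ceph health detail

# PGs stuck unclean or degraded, with their acting OSD sets
ceph pg dump_stuck unclean
ceph pg dump_stuck degraded

# Confirm the replacement OSDs are up/in and placed correctly in the CRUSH tree
ceph osd tree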