Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
I've upgraded our test cluster from 9.2.1 to 10.2.1 and I still had these issues. As before, the script did fix the issue and the cluster is now working. Is the correct fix in 10.2.1, or is running the fix script still expected? If it makes a difference, I'm running trusty; the cluster was created on hammer, upgraded to infernalis and now jewel...

On Tue, Apr 26, 2016 at 1:06 AM, Yehuda Sadeh-Weinraub wrote:
> I managed to reproduce the issue, and there seem to be multiple
> problems. Specifically we have an issue when upgrading a default
> cluster that hasn't had a zone (and region) explicitly configured
> before. There is another bug that I found
> (http://tracker.ceph.com/issues/15597) that makes things even a bit
> more complicated.
>
> I created the following script that might be able to fix things for you:
> https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone
>
> For future reference, this script shouldn't be used if there are any
> zones configured other than the default one. It also makes some ninja
> patching to the zone config because of a bug that exists currently,
> but will probably not apply to any next versions.
>
> Please let me know if you have any issues, or if this actually does its magic.
>
> Thanks,
> Yehuda
>
> On Mon, Apr 25, 2016 at 4:10 PM, Richard Chan wrote:
>>
>>> > How do you actually do that?
>>>
>>> What does 'radosgw-admin zone get' return?
>>>
>>> Yehuda
>>
>>
>>
>> [root@node1 ceph]# radosgw-admin zone get
>> unable to initialize zone: (2) No such file or directory
>>
>> (I don't have any rgw configuration in /etc/ceph/ceph.conf; this is from a
>> clean
>>
>> ceph-deploy rgw create node1
>>
>> ## user created under Hammer
>> [root@node1 ceph]# radosgw-admin user info --uid=testuser
>> 2016-04-26 07:07:06.159497 7f410c33ca40 0 RGWZoneParams::create(): error
>> creating default zone params: (17) File exists
>> could not fetch user info: no user info saved
>>
>> "rgw_max_chunk_size": "524288",
>> "rgw_max_put_size": "5368709120",
>> "rgw_override_bucket_index_max_shards": "0",
>> "rgw_bucket_index_max_aio": "8",
>> "rgw_enable_quota_threads": "true",
>> "rgw_enable_gc_threads": "true",
>> "rgw_data": "\/var\/lib\/ceph\/radosgw\/ceph-rgw.node1",
>> "rgw_enable_apis": "s3, s3website, swift, swift_auth, admin",
>> "rgw_cache_enabled": "true",
>> "rgw_cache_lru_size": "1",
>> "rgw_socket_path": "",
>> "rgw_host": "",
>> "rgw_port": "",
>> "rgw_dns_name": "",
>> "rgw_dns_s3website_name": "",
>> "rgw_content_length_compat": "false",
>> "rgw_script_uri": "",
>> "rgw_request_uri": "",
>> "rgw_swift_url": "",
>> "rgw_swift_url_prefix": "swift",
>> "rgw_swift_auth_url": "",
>> "rgw_swift_auth_entry": "auth",
>> "rgw_swift_tenant_name": "",
>> "rgw_swift_account_in_url": "false",
>> "rgw_swift_enforce_content_length": "false",
>> "rgw_keystone_url": "",
>> "rgw_keystone_admin_token": "",
>> "rgw_keystone_admin_user": "",
>> "rgw_keystone_admin_password": "",
>> "rgw_keystone_admin_tenant": "",
>> "rgw_keystone_admin_project": "",
>> "rgw_keystone_admin_domain": "",
>> "rgw_keystone_api_version": "2",
>> "rgw_keystone_accepted_roles": "Member, admin",
>> "rgw_keystone_token_cache_size": "1",
>> "rgw_keystone_revocation_interval": "900",
>> "rgw_keystone_verify_ssl": "true",
>> "rgw_keystone_implicit_tenants": "false",
>> "rgw_s3_auth_use_rados": "true",
>> "rgw_s3_auth_use_keystone": "false",
>> "rgw_ldap_uri": "ldaps:\/\/",
>> "rgw_ldap_binddn": "uid=admin,cn=users,dc=example,dc=com",
>> "rgw_ldap_searchdn": "cn=users,cn=accounts,dc=example,dc=com",
>> "rgw_ldap_dnattr": "uid",
>> "rgw_ldap_secret": "\/etc\/openldap\/secret",
>> "rgw_s3_auth_use_ldap": "false",
>> "rgw_admin_entry": "admin",
>> "rgw_enforce_swift_acls": "true",
>> "rgw_swift_token_expiration": "86400",
>> "rgw_print_continue": "true",
>> "rgw_remote_addr_param": "REMOTE_ADDR",
>> "rgw_op_thread_timeout": "600",
>> "rgw_op_thread_suicide_timeout": "0",
>> "rgw_thread_pool_size": "100",
>> "rgw_num_control_oids": "8",
>> "rgw_num_rados_handles": "1",
>> "rgw_nfs_lru_lanes": "5",
>> "rgw_nfs_lru_lane_hiwat": "911",
>> "rgw_nfs_fhcache_partitions": "3",
>> "rgw_nfs_fhcache_size": "2017",
>> "rgw_zone": "",
>> "rgw_zone_root_pool": ".rgw.root",
>> "rgw_default_zone_info_oid": "default.zone",
>> "rgw_region": "",
>> "rgw_default_region_info_oid": "default.region",
>> "rgw_zonegroup": "",
>> "rgw_zonegroup_root_pool": ".rgw.root",
>> "rgw_default_zonegroup_info_oid": "default.zonegroup",
>> "rgw_realm": "",
>> "rgw_realm_root_pool": ".rgw.root",
>> "rgw_default_realm_info_oid": "default.realm",
>> "rgw_period_root_pool": ".rgw.root",
>> "rgw_period_latest_epoch_info_oid": ".latest_epoch",
>> "rgw_l
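After the upgrade (or after running the fix script), the failing `radosgw-admin zone get` above is the quickest thing to re-check. A minimal sanity-check sketch, assuming the single "default" zone/zonegroup the script sets up; the commands are Jewel's radosgw-admin, but the exact output depends on your cluster:

```shell
# Re-run the checks that failed under the broken config (a sketch; assumes the
# fix created the "default" zone and zonegroup):
radosgw-admin zone get --rgw-zone=default        # should print zone JSON, not ENOENT
radosgw-admin zonegroup get --rgw-zonegroup=default
radosgw-admin period get                         # Jewel-only; shows the current period
```

If `zone get` still returns "(2) No such file or directory", the default zone objects in .rgw.root are still missing.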
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
Hi, my pools are named a bit differently from the default ones:

.dev-qa.rgw.gc
.rgw.control
.dev-qa.users.uid
.dev-qa.users.swift
.dev.rgw.root
.dev-qa.usage
.dev-qa.log
.dev-qa.rgw.buckets
.dev-qa.rgw.buckets.index
.dev-qa.rgw.root
.dev-qa.users.email
.dev-qa.intent-log
.dev-qa.rgw.buckets.extra
.dev-qa.rgw.control
.dev-qa.domain.rgw
.dev-qa.users
.rgw.root

So can you give me some more information on how I need to change the script so that the pool names are fixed?

Thanks,
Ansgar

2016-04-26 18:00 GMT+02:00 Richard Chan:
> Summary of Yehuda's script on Hammer -> Jewel upgrade:
>
> 1. It works: users, buckets, objects now accessible: the zonegroup and zone
> have been set to "default" (previously zone = "" and region = "")
>
> 2. s3cmd needed to be upgraded to 1.6 to work
>
> Thanks.
>
> On Tue, Apr 26, 2016 at 8:06 AM, Yehuda Sadeh-Weinraub wrote:
> [snip]
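One way to adapt the fix for prefixed pool names like the ".dev-qa" ones above is to dump the zone JSON, rewrite the pool fields, and re-inject it. A sketch under stated assumptions: it uses GNU sed, the zone JSON below is an abbreviated stand-in for a real `radosgw-admin zone get` dump, and the pattern must be adjusted to your actual naming (e.g. the odd `.dev.rgw.root` pool):

```shell
# Rewrite default pool names to the ".dev-qa" prefix before re-injecting the
# zone. Against a real cluster the file would come from:
#   radosgw-admin zone get --rgw-zone=default > zone.json
# and go back with:
#   radosgw-admin zone set --rgw-zone=default --infile zone.json
# Here zone.json is an abbreviated stand-in for illustration:
cat > zone.json <<'EOF'
{ "domain_root": ".rgw",
  "control_pool": ".rgw.control",
  "gc_pool": ".rgw.gc",
  "usage_log_pool": ".usage",
  "user_uid_pool": ".users.uid" }
EOF

# Prefix the default pool names (GNU sed; adjust the alternation to your pools):
sed -i 's/"\.\(rgw\|usage\|users\|log\|intent-log\)/".dev-qa.\1/g' zone.json
cat zone.json
```

Only values starting with a quoted dot are touched, so the JSON field names themselves are left alone.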
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
Summary of Yehuda's script on Hammer -> Jewel upgrade:

1. It works: users, buckets, objects now accessible: the zonegroup and zone have been set to "default" (previously zone = "" and region = "")

2. s3cmd needed to be upgraded to 1.6 to work

Thanks.

On Tue, Apr 26, 2016 at 8:06 AM, Yehuda Sadeh-Weinraub wrote:
> [snip]
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
My bad: the s3cmd errors were unrelated to the Jewel upgrade and Yehuda's script: they required an upgrade of s3cmd from 1.5 to 1.6 - sorry for the noise. Will try to replicate the upgrade.

On Tue, Apr 26, 2016 at 9:27 PM, Richard Chan wrote:
> Also s3cmd is unable to create new buckets:
>
> # s3cmd -c jewel.cfg mb s3://test.3
> ERROR: S3 error: None
>
> On Tue, Apr 26, 2016 at 8:06 AM, Yehuda Sadeh-Weinraub wrote:
> [snip]
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
Also s3cmd is unable to create new buckets:

# s3cmd -c jewel.cfg mb s3://test.3
ERROR: S3 error: None

On Tue, Apr 26, 2016 at 8:06 AM, Yehuda Sadeh-Weinraub wrote:
> [snip]
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
Result:

1. user and buckets recognised;
2. radosgw-admin bucket list --bucket test.1 shows objects but
3. s3cmd cannot list contents of buckets

# s3cmd -c jewel.cfg ls
2016-04-25 15:57 s3://test.1
2016-04-25 15:58 s3://test.2

# s3cmd -c jewel.cfg ls s3://test.1/
ERROR: S3 error: None

# s3cmd -c jewel.cfg ls s3://test.2/
ERROR: S3 error: None

ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9), process radosgw, pid 18737
starting handler: civetweb
starting handler: fastcgi
-- 192.168.122.111:0/1100370813 submit_message mon_subscribe({osdmap=57}) v2 remote, 192.168.122.
monclient: hunting for new mon
ERROR: no socket server point defined, cannot start fcgi frontend
rgw period pusher: The new period does not contain my zonegroup!
== starting new request req=0x7f6e12f6f690 =
== req done req=0x7f6e12f6f690 op status=0 http_status=403 ==
civetweb: 0x7f6e5c010ba0: 192.168.122.110 - - [26/Apr/2016:21:24:23 +0800] "GET /test.2/ HTTP/1.1

On Tue, Apr 26, 2016 at 10:20 AM, Richard Chan wrote:
> Quick questions:
>
> 1. Should this script be run on a pre-Jewel setup (e.g. revert test VMs) or
> *after* Jewel attempted to read the no-zone/no-region Hammer setup and created
> the default.* pools?
>
> 2. Should the radosgw daemon be running when executing the script?
>
> Thanks!
>
> On Tue, Apr 26, 2016 at 8:06 AM, Yehuda Sadeh-Weinraub wrote:
> [snip]
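When s3cmd reports the unhelpful `ERROR: S3 error: None` (as above, against the 403 in the radosgw log), the raw HTTP exchange usually shows the real status. A debugging sketch: `--debug` and `--signature-v2` are standard s3cmd flags, but the signature workaround is an assumption on my part, not something confirmed in this thread (the thread's actual fix was upgrading s3cmd from 1.5 to 1.6):

```shell
# Check which s3cmd is in use first; 1.5 is the known-problematic version here.
s3cmd --version

# Dump the full request/response; the http_status=403 from the radosgw log
# should show up here together with its error XML body:
s3cmd -c jewel.cfg --debug ls s3://test.1/

# Hypothetical workaround if upgrading s3cmd is not an option: force v2 signatures.
s3cmd -c jewel.cfg --signature-v2 ls s3://test.1/
```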
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
Quick questions:

1. Should this script be run on a pre-Jewel setup (e.g. revert test VMs), or *after* Jewel attempted to read the no-zone/no-region Hammer setup and created the default.* pools?

2. Should the radosgw daemon be running when executing the script?

Thanks!

On Tue, Apr 26, 2016 at 8:06 AM, Yehuda Sadeh-Weinraub wrote:
> [snip]
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
I managed to reproduce the issue, and there seem to be multiple problems.
Specifically we have an issue when upgrading a default cluster that hasn't
had a zone (and region) explicitly configured before. There is another bug
that I found (http://tracker.ceph.com/issues/15597) that makes things even
a bit more complicated.

I created the following script that might be able to fix things for you:
https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone

For future reference, this script shouldn't be used if there are any zones
configured other than the default one. It also makes some ninja patching to
the zone config because of a bug that exists currently, but will probably
not apply to any future versions.

Please let me know if you have any issues, or if this actually does its magic.

Thanks,
Yehuda

On Mon, Apr 25, 2016 at 4:10 PM, Richard Chan wrote:
>>> How do you actually do that?
>>
>> What does 'radosgw-admin zone get' return?
>>
>> Yehuda
>
> [root@node1 ceph]# radosgw-admin zone get
> unable to initialize zone: (2) No such file or directory
>
> (I don't have any rgw configuration in /etc/ceph/ceph.conf; this is from
> a clean
>
>     ceph-deploy rgw create node1
>
> ## user created under Hammer
> [root@node1 ceph]# radosgw-admin user info --uid=testuser
> 2016-04-26 07:07:06.159497 7f410c33ca40  0 RGWZoneParams::create(): error
> creating default zone params: (17) File exists
> could not fetch user info: no user info saved
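Yehuda's caveat above — the fix script is only safe when nothing but the
default zone exists — can be checked mechanically before running it. A
minimal sketch, assuming `radosgw-admin zone list` emits JSON with a
top-level "zones" array (that field name is an assumption about the Jewel
output format; verify against your own cluster):

```python
import json

def only_default_zone(zone_list_json: str) -> bool:
    """Return True when no zones besides 'default' are configured.

    Expects the JSON printed by `radosgw-admin zone list`; the "zones"
    field name is an assumed detail of the Jewel output format.
    """
    names = json.loads(zone_list_json).get("zones", [])
    return all(n == "default" for n in names)

# Hypothetical outputs, shaped like the Jewel zone listing:
safe = '{"default_info": "", "zones": ["default"]}'
unsafe = '{"default_info": "", "zones": ["default", "us-east"]}'

print(only_default_zone(safe))    # True: the fix script may be applied
print(only_default_zone(unsafe))  # False: do not run the script
```

The same check could be repeated with `radosgw-admin zonegroup list` for
the zonegroup side.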
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
>> How do you actually do that?
>
> What does 'radosgw-admin zone get' return?
>
> Yehuda

[root@node1 ceph]# radosgw-admin zone get
unable to initialize zone: (2) No such file or directory

(I don't have any rgw configuration in /etc/ceph/ceph.conf; this is from a
clean

    ceph-deploy rgw create node1

## user created under Hammer
[root@node1 ceph]# radosgw-admin user info --uid=testuser
2016-04-26 00:41:50.713069 7f410c33ca40  0 RGWZoneParams::create(): error
creating default zone params: (17) File exists
could not fetch user info: no user info saved

    "rgw_max_chunk_size": "524288",
    "rgw_max_put_size": "5368709120",
    "rgw_override_bucket_index_max_shards": "0",
    "rgw_bucket_index_max_aio": "8",
    "rgw_enable_quota_threads": "true",
    "rgw_enable_gc_threads": "true",
    "rgw_data": "\/var\/lib\/ceph\/radosgw\/ceph-rgw.node1",
    "rgw_enable_apis": "s3, s3website, swift, swift_auth, admin",
    "rgw_cache_enabled": "true",
    "rgw_cache_lru_size": "1",
    "rgw_socket_path": "",
    "rgw_host": "",
    "rgw_port": "",
    "rgw_dns_name": "",
    "rgw_dns_s3website_name": "",
    "rgw_content_length_compat": "false",
    "rgw_script_uri": "",
    "rgw_request_uri": "",
    "rgw_swift_url": "",
    "rgw_swift_url_prefix": "swift",
    "rgw_swift_auth_url": "",
    "rgw_swift_auth_entry": "auth",
    "rgw_swift_tenant_name": "",
    "rgw_swift_account_in_url": "false",
    "rgw_swift_enforce_content_length": "false",
    "rgw_keystone_url": "",
    "rgw_keystone_admin_token": "",
    "rgw_keystone_admin_user": "",
    "rgw_keystone_admin_password": "",
    "rgw_keystone_admin_tenant": "",
    "rgw_keystone_admin_project": "",
    "rgw_keystone_admin_domain": "",
    "rgw_keystone_api_version": "2",
    "rgw_keystone_accepted_roles": "Member, admin",
    "rgw_keystone_token_cache_size": "1",
    "rgw_keystone_revocation_interval": "900",
    "rgw_keystone_verify_ssl": "true",
    "rgw_keystone_implicit_tenants": "false",
    "rgw_s3_auth_use_rados": "true",
    "rgw_s3_auth_use_keystone": "false",
    "rgw_ldap_uri": "ldaps:\/\/",
    "rgw_ldap_binddn": "uid=admin,cn=users,dc=example,dc=com",
    "rgw_ldap_searchdn": "cn=users,cn=accounts,dc=example,dc=com",
    "rgw_ldap_dnattr": "uid",
    "rgw_ldap_secret": "\/etc\/openldap\/secret",
    "rgw_s3_auth_use_ldap": "false",
    "rgw_admin_entry": "admin",
    "rgw_enforce_swift_acls": "true",
    "rgw_swift_token_expiration": "86400",
    "rgw_print_continue": "true",
    "rgw_remote_addr_param": "REMOTE_ADDR",
    "rgw_op_thread_timeout": "600",
    "rgw_op_thread_suicide_timeout": "0",
    "rgw_thread_pool_size": "100",
    "rgw_num_control_oids": "8",
    "rgw_num_rados_handles": "1",
    "rgw_nfs_lru_lanes": "5",
    "rgw_nfs_lru_lane_hiwat": "911",
    "rgw_nfs_fhcache_partitions": "3",
    "rgw_nfs_fhcache_size": "2017",
    "rgw_zone": "",
    "rgw_zone_root_pool": ".rgw.root",
    "rgw_default_zone_info_oid": "default.zone",
    "rgw_region": "",
    "rgw_default_region_info_oid": "default.region",
    "rgw_zonegroup": "",
    "rgw_zonegroup_root_pool": ".rgw.root",
    "rgw_default_zonegroup_info_oid": "default.zonegroup",
    "rgw_realm": "",
    "rgw_realm_root_pool": ".rgw.root",
    "rgw_default_realm_info_oid": "default.realm",
    "rgw_period_root_pool": ".rgw.root",
    "rgw_period_latest_epoch_info_oid": ".latest_epoch",
    "rgw_log_nonexistent_bucket": "false",
    "rgw_log_object_name": "%Y-%m-%d-%H-%i-%n",
    "rgw_log_object_name_utc": "false",
    "rgw_usage_max_shards": "32",
    "rgw_usage_max_user_shards": "1",
    "rgw_enable_ops_log": "false",
    "rgw_enable_usage_log": "false",
    "rgw_ops_log_rados": "true",
    "rgw_ops_log_socket_path": "",
    "rgw_ops_log_data_backlog": "5242880",
    "rgw_usage_log_flush_threshold": "1024",
    "rgw_usage_log_tick_interval": "30",
    "rgw_intent_log_object_name": "%Y-%m-%d-%i-%n",
    "rgw_intent_log_object_name_utc": "false",
    "rgw_init_timeout": "300",
    "rgw_mime_types_file": "\/etc\/mime.types",
    "rgw_gc_max_objs": "32",
    "rgw_gc_obj_min_wait": "7200",
    "rgw_gc_processor_max_time": "3600",
    "rgw_gc_processor_period": "3600",
    "rgw_s3_success_create_obj_status": "0",
    "rgw_resolve_cname": "false",
    "rgw_obj_stripe_size": "4194304",
    "rgw_extended_http_attrs": "",
    "rgw_exit_timeout_secs": "120",
    "rgw_get_obj_window_size": "16777216",
    "rgw_get_obj_max_req_size": "4194304",
    "rgw_relaxed_s3_bucket_names": "false",
    "rgw_defer_to_bucket_acls": "",
    "rgw_list_buckets_max_chunk": "1000",
    "rgw_md_log_max_shards": "64",
    "rgw_num_zone_opstate_shards": "128",
    "rgw_opstate_ratelimit_sec": "30",
    "rgw_curl_wait_timeout_ms": "1000",
    "rgw_copy_obj_progress": "true",
    "rgw_copy_obj_progress_every_bytes": "1048576",
    "rgw_data_log_window": "30",
    "rgw_data_log_changes_size": "1000",
    "rgw_data_log_num_shards": "128",
    "rgw_data_log_obj_prefix": "data_log",
    "rgw_replica_log_obj_prefix":
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
(sorry for resubmission, adding ceph-users)

On Mon, Apr 25, 2016 at 9:47 AM, Richard Chan wrote:
> Hi Yehuda
>
> I created a test 3xVM setup with Hammer and one radosgw on the (separate)
> admin node; creating one user and buckets.
>
> I upgraded the VMs to jewel and created a new radosgw on one of the nodes.
>
> The object store didn't seem to survive the upgrade
>
> # radosgw-admin user info --uid=testuser
> 2016-04-26 00:41:50.713069 7fcdcc6fca40  0 RGWZoneParams::create(): error
> creating default zone params: (17) File exists
> could not fetch user info: no user info saved
>
> rados lspools
> rbd
> .rgw.root
> .rgw.control
> .rgw
> .rgw.gc
> .users.uid
> .users
> .rgw.buckets.index
> .rgw.buckets
> default.rgw.control
> default.rgw.data.root
> default.rgw.gc
> default.rgw.log
> default.rgw.users.uid
> default.rgw.users.keys
>
> Do I have to configure radosgw to use the pools with default.*?

No. Need to get it to play along nicely with the old pools.

> How do you actually do that?

What does 'radosgw-admin zone get' return?

Yehuda
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
Hi Yehuda

I created a test 3xVM setup with Hammer and one radosgw on the (separate)
admin node; creating one user and buckets.

I upgraded the VMs to jewel and created a new radosgw on one of the nodes.

The object store didn't seem to survive the upgrade:

# radosgw-admin user info --uid=testuser
2016-04-26 00:41:50.713069 7fcdcc6fca40  0 RGWZoneParams::create(): error
creating default zone params: (17) File exists
could not fetch user info: no user info saved

rados lspools
rbd
.rgw.root
.rgw.control
.rgw
.rgw.gc
.users.uid
.users
.rgw.buckets.index
.rgw.buckets
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
default.rgw.users.keys

Do I have to configure radosgw to use the pools with default.*? How do you
actually do that?
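The pool listing above mixes the two generations: the Hammer-era RGW pools
are the bare dotted names, while the pools Jewel auto-created carry the
zone name as a `default.rgw.` prefix. A small sketch that sorts the output
above by that convention:

```python
# Sort the `rados lspools` output from the message above into the
# legacy Hammer RGW pools (bare dotted names) and the per-zone pools
# a fresh Jewel gateway creates under the "default" zone name.
pools = """rbd
.rgw.root
.rgw.control
.rgw
.rgw.gc
.users.uid
.users
.rgw.buckets.index
.rgw.buckets
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
default.rgw.users.keys""".splitlines()

hammer = [p for p in pools if p.startswith(".")]
jewel = [p for p in pools if p.startswith("default.rgw.")]

print(len(hammer), "legacy pools;", len(jewel), "new default-zone pools")
# -> 8 legacy pools; 6 new default-zone pools
```

Note that `.rgw.root` lands in the legacy bucket here by naming convention
only; it is also where Jewel keeps its zone/zonegroup metadata.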
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
On 23-04-16 18:17, Yehuda Sadeh-Weinraub wrote:
> On Sat, Apr 23, 2016 at 6:22 AM, Richard Chan wrote:
>> Hi Cephers,
>>
>> I upgraded to Jewel and noted there is a massive radosgw multisite rework
>> in the release notes.
>>
>> Can Jewel radosgw be configured to present existing Hammer buckets?
>> On a test system, Jewel didn't recognise my Hammer buckets;
>>
>> Hammer used pools .rgw.*
>> Jewel created by default: .rgw.root and default.rgw*
>>
> Yes, jewel should be able to read hammer buckets. If it detects that
> there's an old config, it should migrate the existing setup into the
> new config. It seems that something didn't work as expected here. One
> way to fix it would be to create a new zone and set its pools to point
> at the old config's pools. We'll need to figure out what went wrong
> though.

Hi,

I'm also wondering about the correct upgrade procedure for the radosgw's,
especially in a multi-gateway setup in a federated config.

If you say an existing setup should migrate, is it OK then to have Hammer
and Jewel radosgw's co-exist (for a short time)? We have, for example,
multiple radosgw instances behind an haproxy. Can they be upgraded one at
a time, or do they all need to be stopped before starting the first Jewel
radosgw?

Does the ceph.conf file need to be adapted to the Jewel config, e.g.
changing "rgw region root pool" into "rgw zonegroup root pool"? Before or
after the upgrade?

Concerning data replication: I understand the radosgw-agent is deprecated
in Jewel and the replication is done by the radosgw's themselves. Is this
also automatically enabled, or does it need to be started / configured
somehow?

thanks in advance,
Sam
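The rename Sam asks about reflects Jewel's terminology change from
"region" to "zonegroup". A hedged sketch of what the ceph.conf change
might look like (the option dump earlier in the thread lists both
region-named and zonegroup-named settings, which suggests a compatibility
window, but verify against your release before relying on the old
spelling):

```ini
[client.rgw.node1]
# Hammer-era spelling (region terminology):
#rgw region root pool = .rgw.root

# Jewel spelling (regions became zonegroups); the value is unchanged:
rgw zonegroup root pool = .rgw.root
```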
Re: [ceph-users] Can Jewel read Hammer radosgw buckets?
On Sat, Apr 23, 2016 at 6:22 AM, Richard Chan wrote:
> Hi Cephers,
>
> I upgraded to Jewel and noted there is a massive radosgw multisite rework
> in the release notes.
>
> Can Jewel radosgw be configured to present existing Hammer buckets?
> On a test system, Jewel didn't recognise my Hammer buckets;
>
> Hammer used pools .rgw.*
> Jewel created by default: .rgw.root and default.rgw*
>

Yes, jewel should be able to read hammer buckets. If it detects that
there's an old config, it should migrate the existing setup into the new
config. It seems that something didn't work as expected here. One way to
fix it would be to create a new zone and set its pools to point at the old
config's pools. We'll need to figure out what went wrong though.

Yehuda
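Yehuda's suggested manual fix — point a zone's pools at the old config's
pools — amounts to editing the JSON from `radosgw-admin zone get` and
feeding it back with `radosgw-admin zone set`. A minimal sketch of the
pool rewrite, with an abridged, hypothetical zone document and an assumed
mapping from the `default.rgw.*` names to the Hammer pools listed earlier
in the thread (the real `zone get` output has more fields, all of which
should be preserved; check the mapping against your own pools):

```python
import json

# Hypothetical excerpt of `radosgw-admin zone get` on a fresh Jewel
# gateway; only a few pool fields are shown for illustration.
jewel_zone = {
    "id": "default",
    "name": "default",
    "domain_root": "default.rgw.data.root",
    "control_pool": "default.rgw.control",
    "gc_pool": "default.rgw.gc",
    "user_uid_pool": "default.rgw.users.uid",
    "user_keys_pool": "default.rgw.users.keys",
}

# Assumed mapping from each Jewel-created pool back to the legacy
# Hammer pool it replaces (targets taken from the thread's lspools list).
legacy = {
    "default.rgw.data.root": ".rgw",
    "default.rgw.control": ".rgw.control",
    "default.rgw.gc": ".rgw.gc",
    "default.rgw.users.uid": ".users.uid",
    "default.rgw.users.keys": ".users",
}

# Rewrite pool-valued fields; anything not in the map passes through.
fixed = {k: legacy.get(v, v) for k, v in jewel_zone.items()}

# Feed the edited JSON back with:  radosgw-admin zone set < zone.json
print(json.dumps(fixed, indent=2))
```

Yehuda's fix-zone script linked above is the authoritative version of this
procedure; the sketch only illustrates the shape of the edit.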