Re: [ceph-users] RGW Multisite metadata sync init

2017-09-07 Thread David Turner
I sent the output of all of the files including the logs to you. Thank you for your help so far. On Thu, Sep 7, 2017 at 4:48 PM Yehuda Sadeh-Weinraub wrote: > On Thu, Sep 7, 2017 at 11:37 PM, David Turner > wrote: > > I'm pretty sure I'm using the

Re: [ceph-users] RGW Multisite metadata sync init

2017-09-07 Thread Yehuda Sadeh-Weinraub
On Thu, Sep 7, 2017 at 11:37 PM, David Turner wrote: > I'm pretty sure I'm using the cluster admin user/keyring. Is there any > output that would be helpful? Period, zonegroup get, etc? - radosgw-admin period get - radosgw-admin zone list - radosgw-admin zonegroup
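
The diagnostic output requested here can be gathered with a few radosgw-admin calls. A minimal sketch (the snippet is cut off after "zonegroup", so `zonegroup get` is an assumption, and `sync status` is added only for context):

    radosgw-admin period get        # current period, including the zonegroup/zone map
    radosgw-admin zone list
    radosgw-admin zonegroup get
    radosgw-admin sync status       # metadata/data sync state as seen by the local zone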

Re: [ceph-users] RGW Multisite metadata sync init

2017-09-07 Thread David Turner
I'm pretty sure I'm using the cluster admin user/keyring. Is there any output that would be helpful? Period, zonegroup get, etc? On Thu, Sep 7, 2017 at 4:27 PM Yehuda Sadeh-Weinraub wrote: > On Thu, Sep 7, 2017 at 11:02 PM, David Turner > wrote: > >

Re: [ceph-users] RGW Multisite metadata sync init

2017-09-07 Thread Yehuda Sadeh-Weinraub
On Thu, Sep 7, 2017 at 11:02 PM, David Turner wrote: > I created a test user named 'ice' and then used it to create a bucket named > ice. The bucket ice can be found in the second dc, but not the user. > `mdlog list` showed ice for the bucket, but not for the user. I

Re: [ceph-users] RGW Multisite metadata sync init

2017-09-07 Thread David Turner
I created a test user named 'ice' and then used it to create a bucket named ice. The bucket ice can be found in the second dc, but not the user. `mdlog list` showed ice for the bucket, but not for the user. I performed the same test in the internal realm and it showed the user and bucket both
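
A sketch of how the 'ice' test can be checked from the command line, assuming the standard radosgw-admin metadata commands:

    # On the master zone: is the user present, and was its metadata change logged?
    radosgw-admin metadata list user
    radosgw-admin metadata get user:ice
    radosgw-admin mdlog list | grep ice
    # On the secondary zone: how far does metadata sync think it has gotten?
    radosgw-admin metadata sync status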

Re: [ceph-users] RGW Multisite metadata sync init

2017-09-07 Thread Yehuda Sadeh-Weinraub
On Thu, Sep 7, 2017 at 10:04 PM, David Turner wrote: > One realm is called public with a zonegroup called public-zg with a zone for > each datacenter. The second realm is called internal with a zonegroup > called internal-zg with a zone for each datacenter. They each have

Re: [ceph-users] RGW Multisite metadata sync init

2017-09-07 Thread David Turner
One realm is called public with a zonegroup called public-zg with a zone for each datacenter. The second realm is called internal with a zonegroup called internal-zg with a zone for each datacenter. They each have their own RGWs and load balancers. The needs of our public-facing RGWs and load
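
For reference, a realm/zonegroup/zone layout like the one described is typically created along these lines on the master zone; the zone name and endpoint below are placeholders, not values from the thread:

    radosgw-admin realm create --rgw-realm=public --default
    radosgw-admin zonegroup create --rgw-realm=public --rgw-zonegroup=public-zg \
        --endpoints=http://rgw-dc1.example.com:80 --master --default
    radosgw-admin zone create --rgw-zonegroup=public-zg --rgw-zone=dc1 \
        --endpoints=http://rgw-dc1.example.com:80 --master --default
    radosgw-admin period update --commit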

Re: [ceph-users] RGW Multisite metadata sync init

2017-09-07 Thread Yehuda Sadeh-Weinraub
On Thu, Sep 7, 2017 at 7:44 PM, David Turner wrote: > Ok, I've been testing, investigating, researching, etc for the last week and > I don't have any problems with data syncing. The clients on one side are > creating multipart objects while the multisite sync is creating

Re: [ceph-users] RGW Multisite metadata sync init

2017-09-07 Thread David Turner
Ok, I've been testing, investigating, researching, etc. for the last week, and I don't have any problems with data syncing. The clients on one side are creating multipart objects while the multisite sync is creating them as whole objects, and one of the datacenters is slower at cleaning up the
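
One way to confirm the multipart-versus-whole-object difference is to compare an object's manifest on both zones; a sketch, with the bucket and object names as placeholders:

    # Run on both zones and diff the output; the manifest section shows whether
    # the object is stored as multipart pieces or as a single object.
    radosgw-admin object stat --bucket=<bucket> --object=<object>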

Re: [ceph-users] RGW Multisite metadata sync init

2017-08-31 Thread David Turner
All of the messages from sync error list are listed below. The number on the left is how many times the error message is found. 1811 "message": "failed to sync bucket instance: (16) Device or resource busy" 7 "message": "failed to sync bucket
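
Counts like the ones above can be produced by flattening the JSON from the error list; a minimal sketch (the grep/uniq pipeline is an assumption, not from the thread):

    radosgw-admin sync error list | grep '"message"' | sort | uniq -c | sort -rn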

Re: [ceph-users] RGW Multisite metadata sync init

2017-08-29 Thread Orit Wasserman
Hi David, On Mon, Aug 28, 2017 at 8:33 PM, David Turner wrote: > The vast majority of the sync error list is "failed to sync bucket > instance: (16) Device or resource busy". I can't find anything on Google > about this error message in relation to Ceph. Does anyone

Re: [ceph-users] RGW Multisite metadata sync init

2017-08-28 Thread David Turner
The vast majority of the sync error list is "failed to sync bucket instance: (16) Device or resource busy". I can't find anything on Google about this error message in relation to Ceph. Does anyone have any idea what this means and/or how to fix it? On Fri, Aug 25, 2017 at 2:48 PM Casey Bodley

Re: [ceph-users] RGW Multisite metadata sync init

2017-08-25 Thread Casey Bodley
Hi David, The 'data sync init' command won't touch any actual object data, no. Resetting the data sync status will just cause a zone to restart a full sync of the --source-zone's data changes log. This log only lists which buckets/shards have changes in them, which causes radosgw to consider
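
Based on Casey's description, the reset would look roughly like this on the zone whose data sync status is being reinitialized (the source zone name is a placeholder, and the systemd unit assumes a standard packaged install):

    radosgw-admin data sync init --source-zone=<other-zone>
    # Restart the local radosgw daemons so they pick up the reset sync status
    systemctl restart ceph-radosgw.target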

Re: [ceph-users] RGW Multisite metadata sync init

2017-08-25 Thread Casey Bodley
Hi David, The 'radosgw-admin sync error list' command may be useful in debugging sync failures for specific entries. For users, we've seen some sync failures caused by conflicting user metadata that was only present on the secondary site. For example, a user that had the same access key or
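
To look for the kind of conflicting user metadata Casey mentions, the error list can be combined with a dump of the same user entry on both sites; a sketch with the uid as a placeholder:

    radosgw-admin sync error list
    # Run on the master and the secondary zone and compare, paying particular
    # attention to the access keys and email fields.
    radosgw-admin metadata get user:<uid>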

Re: [ceph-users] RGW Multisite metadata sync init

2017-08-24 Thread David Turner
Apparently the data shards that are behind go in both directions, but only one zone is aware of the problem. Each cluster has objects in their data pool that the other doesn't have. I'm thinking about initiating a `data sync init` on both sides (one at a time) to get them back on the same page.

[ceph-users] RGW Multisite metadata sync init

2017-08-24 Thread David Turner
I have an RGW Multisite 10.2.7 set up for bi-directional syncing. This has been operational for 5 months and working fine. I recently created a new user on the master zone, used that user to create a bucket, and put a public-acl object in there. The bucket was created on the second site, but the
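
A sketch of how this situation is usually inspected, and how the metadata sync state can be reinitialized on the secondary zone (per the 'metadata sync init' in the subject line):

    radosgw-admin metadata sync status
    radosgw-admin metadata sync init
    radosgw-admin metadata sync run    # or restart the radosgw daemons instead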

Re: [ceph-users] RGW Multisite metadata sync init

2017-08-24 Thread David Turner
After restarting the 2 RGW daemons on the second site again, everything caught up on the metadata sync. Is there something about having 2 RGW daemons on each side of the multisite that might be causing an issue with the sync getting stale? I have another realm set up the same way that is having
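
A quick check after such a restart, to confirm whether the secondary has truly caught up or the sync has gone stale again, assuming the standard status commands:

    radosgw-admin sync status
    radosgw-admin metadata sync status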