Re: [ceph-users] Federated gateways

2015-12-23 Thread ghislain.chevalier
03:01 To: cle...@centraldesktop.com Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Federated gateways Hi Craig, I am testing a federated gateway with 1 region and 2 zones, and I found that only the metadata is replicated; the data is NOT. According to your checklist, I am sure all things are

Re: [ceph-users] Federated gateways

2015-11-08 Thread WD_Hwang
to][DEBUG ] Host: node1-west.ceph.com
2015-11-06 17:18:01,205 4558 [boto][DEBUG ] Port: 80
2015-11-06 17:18:01,205 4558 [boto][DEBUG ] Params: {}
2015-11-06 17:18:01,206 4558 [boto][DEBUG ] establishing HTTP connection: kwargs={'port': 80, 'timeout': 70}
2015-11-06 17:18:01,206 4558 [boto][DEBUG ] Token: None
2015-11-06 17:18:01,206 4558

Re: [ceph-users] Federated gateways

2015-11-05 Thread WD_Hwang
Hi Craig, I am testing a federated gateway with 1 region and 2 zones, and I found that only the metadata is replicated; the data is NOT. According to your checklist, I am sure all things are checked. Could you review my configuration scripts? The configuration files are similar to http://docs.ceph.com/

Re: [ceph-users] Federated gateways

2014-11-14 Thread Craig Lewis
I have identical regionmaps in both clusters. I only created each zone's pools in that zone's cluster. I didn't delete the default .rgw.* pools, so those exist in both zones. Both users need to be system users on both ends, and they need identical access keys and secrets. If they're not, this is likely your problem.
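Something along these lines, with placeholder uid, keys, and gateway instance names (not my real values); the point is that the same key pair is created with --system against both zones:
  # on the master zone's gateway
  radosgw-admin user create --uid=repluser --display-name="Replication User" \
      --access-key=ACCESSKEYEXAMPLE --secret=SECRETKEYEXAMPLE --system --name client.radosgw.us-east-1
  # repeat on the secondary zone's gateway with the SAME access key and secret
  radosgw-admin user create --uid=repluser --display-name="Replication User" \
      --access-key=ACCESSKEYEXAMPLE --secret=SECRETKEYEXAMPLE --system --name client.radosgw.us-west-1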

Re: [ceph-users] Federated gateways

2014-11-14 Thread Aaron Bassett
Well, I upgraded both clusters to Giant this morning just to see if that would help, and it didn't. I have a couple of questions though. I have the same regionmap on both clusters, with both zones in it, but then I only have the buckets and zone info for one zone in each cluster. Is this right? Or d
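For reference, the shared region map was pushed on each side with something like the following (a sketch of the usual commands, instance name is a placeholder, not a paste from my shell):
  radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
  radosgw-admin regionmap update --name client.radosgw.us-east-1
  radosgw-admin regionmap get --name client.radosgw.us-east-1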

Re: [ceph-users] Federated gateways

2014-11-12 Thread Craig Lewis
http://tracker.ceph.com/issues/9206 My post to the ML: http://www.spinics.net/lists/ceph-users/msg12665.html IIRC, the system users didn't see the other user's buckets in a bucket listing, but they could read and write the objects fine. On Wed, Nov 12, 2014 at 11:16 AM, Aaron Bassett wrote: >

Re: [ceph-users] Federated gateways

2014-11-12 Thread Aaron Bassett
In playing around with this a bit more, I noticed that the two users on the secondary node can't see each other's buckets. Is this a problem? > On Nov 11, 2014, at 6:56 PM, Craig Lewis wrote: > >> I see you're running 0.80.5. Are you using Apache 2.4? There is a known >> issue with Apache 2.4 o
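For what it's worth, this is roughly how I'm listing each user's buckets on the secondary from the admin side (uids and instance name below are placeholders):
  radosgw-admin bucket list --uid=zone-a-user --name client.radosgw.us-secondary
  radosgw-admin bucket list --uid=zone-b-user --name client.radosgw.us-secondary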

Re: [ceph-users] Federated gateways

2014-11-11 Thread Craig Lewis
> > I see you're running 0.80.5. Are you using Apache 2.4? There is a known > issue with Apache 2.4 on the primary and replication. It's fixed, just > waiting for the next Firefly release. Although, that causes 40x errors > with Apache 2.4, not 500 errors. > > It is Apache 2.4, but I'm actually

Re: [ceph-users] Federated gateways

2014-11-11 Thread Aaron Bassett
> On Nov 11, 2014, at 4:21 PM, Craig Lewis wrote: > > Is that radosgw log from the primary or the secondary zone? Nothing in that > log jumps out at me. This is the log from the secondary zone. That HTTP 500 response code coming back is the only problem I can find. There are a bunch of 404s f

Re: [ceph-users] Federated gateways

2014-11-11 Thread Craig Lewis
Is that radosgw log from the primary or the secondary zone? Nothing in that log jumps out at me. I see you're running 0.80.5. Are you using Apache 2.4? There is a known issue with Apache 2.4 on the primary and replication. It's fixed, just waiting for the next Firefly release. Although, that

Re: [ceph-users] Federated gateways

2014-11-11 Thread Aaron Bassett
OK, I believe I've made some progress here. I have everything syncing *except* data. The data is getting 500s when it tries to sync to the backup zone. I have a log from the radosgw with debug cranked up to 20:
2014-11-11 14:37:06.688331 7f54447f0700 1 == starting new request req=0x7f546800
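(For reference, "debug cranked up to 20" means roughly this in the gateway's ceph.conf section; the section name here is a placeholder for whatever your instance is called:)
  [client.radosgw.us-backup]
      debug rgw = 20
      debug ms = 1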

Re: [ceph-users] Federated gateways

2014-11-05 Thread Aaron Bassett
Ah so I need both users in both clusters? I think I missed that bit, let me see if that does the trick. Aaron > On Nov 5, 2014, at 2:59 PM, Craig Lewis wrote: > > One region two zones is the standard setup, so that should be fine. > > Is metadata (users and buckets) being replicated, but not

Re: [ceph-users] Federated gateways

2014-11-05 Thread Craig Lewis
One region, two zones is the standard setup, so that should be fine. Is metadata (users and buckets) being replicated, but not data (objects)? Let's go through a quick checklist:
- Verify that you enabled log_meta and log_data in the region.json for the master zone (see the sketch below)
- Verify that RadosGW
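A region json with logging enabled looks roughly like this (zone names and endpoints here are made up):
  "zones": [
      { "name": "us-east", "endpoints": ["http://us-east.example.com:80/"],
        "log_meta": "true", "log_data": "true" },
      { "name": "us-west", "endpoints": ["http://us-west.example.com:80/"],
        "log_meta": "true", "log_data": "true" }
  ],
and it gets re-injected on the master with radosgw-admin region set --infile us.json followed by a regionmap update.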

Re: [ceph-users] Federated gateways (our planning use case)

2014-10-08 Thread David Barker
I've had some luck putting a load balancer in front of multiple zones to get around the multiple-URL issue. You can get the LB to send POST/DELETE and friends to the primary zone, while GET requests can be distributed across zones. The only issue is the replication delay; your data may not be available
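If it helps, the method split can be expressed along these lines (assuming HAProxy as the LB; hostnames and backend names are invented for the sketch):
  frontend rgw
      bind *:80
      mode http
      acl is_read method GET HEAD
      use_backend rgw_any_zone if is_read
      default_backend rgw_primary_zone

  backend rgw_primary_zone
      mode http
      server east rgw-east.example.com:80 check

  backend rgw_any_zone
      mode http
      balance roundrobin
      server east rgw-east.example.com:80 check
      server west rgw-west.example.com:80 check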

Re: [ceph-users] Federated gateways (our planning use case)

2014-10-06 Thread Craig Lewis
This sounds doable, with a few caveats. Currently, replication only goes in one direction: you can only write to the primary zone, and you can read from the primary or secondary zones. A cluster can have many zones on it. I'm thinking your setup would be a star topology. Each telescope will be a p
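(Concretely, on the central archive cluster that likely means one gateway instance per replicated zone, i.e. several ceph.conf sections of roughly this shape; region/zone names here are invented, and the exact region layout depends on how you carve it up:)
  [client.radosgw.telescope-a-backup]
      rgw region = telescope-a
      rgw zone = telescope-a-backup
  [client.radosgw.telescope-b-backup]
      rgw region = telescope-b
      rgw zone = telescope-b-backup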

Re: [ceph-users] Federated gateways

2014-04-15 Thread Brian Andrus
Those backslashes as output by radosgw-admin are escape characters preceding the forward slash. They should be removed when you are connecting with most clients. AFAIK, s3cmd would work fine with your original key, had you stripped out the escape chars. You could also just regenerate or specify a k
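In other words, either turn the \/ back into / before handing the secret to the client, or just mint a fresh key pair, e.g. (uid below is a placeholder):
  radosgw-admin key create --uid=gatewayuser --key-type=s3 --gen-access-key --gen-secret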

Re: [ceph-users] Federated gateways

2014-04-15 Thread Craig Lewis
Also good to know that s3cmd does not handle those escapes correctly. Thanks! Craig Lewis, Senior Systems Engineer, Central Desktop

Re: [ceph-users] Federated gateways

2014-04-15 Thread Peter
Fixed! Thank you for the reply. It was the backslashes in the secret that were the issue. I generated a new gateway user with:
  radosgw-admin user create --uid=test2 --display-name=test2 --access-key={key} --secret={secret_without_slashes} --name client.radosgw.gateway
and that worked. On 04/

Re: [ceph-users] Federated gateways

2014-04-14 Thread Craig Lewis
2014-04-14 12:39:20.556085 7f133f7ee700 10 auth_hdr: GET x-amz-date:Mon, 14 Apr 2014 11:39:01 + /
2014-04-14 12:39:20.556125 7f133f7ee700 15 *calculated digest=TQ5LP8ZeufSqKLumak6Aez4o+Pg=*
2014-04-14 12:39:20.556127 7f133f7ee700 15 *auth_sign=hx94rY3BJn7HQKA6ERaksNMQPRs=*
2014-04-14 1
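(Side note: that digest/auth_sign mismatch is just the two ends HMAC-SHA1-signing the same string-to-sign with different secrets; you can redo the gateway's side of the calculation by hand, roughly like this, plugging in your own secret and the x-amz-date from the request:)
  # string-to-sign: VERB, empty Content-MD5/Content-Type/Date, the x-amz-date header, then the resource
  printf 'GET\n\n\n\nx-amz-date:%s\n/' "$AMZ_DATE" | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64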

Re: [ceph-users] Federated gateways

2014-04-14 Thread Peter
Here is log output for request to gateway:
2014-04-14 12:39:20.547012 7f1377aa97c0 20 enqueued request req=0x8ca280
2014-04-14 12:39:20.547036 7f1377aa97c0 20 RGWWQ:
2014-04-14 12:39:20.547038 7f1377aa97c0 20 req: 0x8ca280
2014-04-14 12:39:20.547044 7f1377aa97c0 10 allocated request req=0x8a6d30

Re: [ceph-users] Federated gateways

2014-04-14 Thread Peter Tiernan
I have the following in ceph.conf:
  [client.radosgw.gateway]
      host = cephgw
      keyring = /etc/ceph/keyring.radosgw.gateway
      rgw print continue = false
      rgw region = us
      rgw region root pool = .us.rgw.root
      rgw zone = us-master
      rgw zone root pool = .us-master.rgw.root
      rgw dns name = cephgw
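(For completeness, a couple of commands that should show whether the instance is actually picking this region/zone up; a sketch, not a paste from my session:)
  radosgw-admin region get --rgw-region=us --name client.radosgw.gateway
  radosgw-admin zone get --rgw-zone=us-master --name client.radosgw.gateway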