[ceph-users] active+remapped+backfilling with objects misplaced

2019-08-26 Thread Arash Shams
Hi everybody, I'm new to Ceph and I have a question about active+remapped+backfilling and misplaced objects. I recently copied more than 10 million objects to a new cluster with 3 nodes and 6 OSDs. During this migration one of my OSDs got full and the health check became ERR. I don't know why, but c

[ceph-users] Re: active+remapped+backfilling with objects misplaced

2019-08-30 Thread Arash Shams
Thanks David, I will dig into pg-upmap.

From: David Casier
Sent: Tuesday, August 27, 2019 12:26 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: active+remapped+backfilling with objects misplaced

Hi, first, do not panic :) Secondly, verify that the number of

[ceph-users] RadosGW cant list objects when there are too many of them

2019-10-17 Thread Arash Shams
Dear All, I have a bucket with 5 million objects and I can't list the objects with radosgw-admin bucket list --bucket=bucket | jq .[].name, or list them using boto3: s3 = boto3.client('s3', endpoint_url=credentials['endpoint_url'], aws_access_key_id=cre
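For very large buckets, listings have to be driven page by page with a continuation token. Below is a minimal sketch of that loop, not code from the thread: `list_page` stands in for a `list_objects_v2`-style call (e.g. `s3.list_objects_v2` from the poster's boto3 client), and the bucket name is a placeholder.

```python
def list_all_keys(list_page, bucket):
    """Yield every key in `bucket`, driving a list_objects_v2-style call
    one page at a time via ContinuationToken until IsTruncated is False."""
    token = None
    while True:
        kwargs = {"Bucket": bucket}
        if token:
            kwargs["ContinuationToken"] = token
        page = list_page(**kwargs)
        for obj in page.get("Contents", []):
            yield obj["Key"]
        if not page.get("IsTruncated"):
            break
        token = page["NextContinuationToken"]

# With a real client (endpoint/credentials assumed):
#   s3 = boto3.client("s3", endpoint_url=..., aws_access_key_id=..., ...)
#   for key in list_all_keys(s3.list_objects_v2, "bucket"):
#       print(key)
```

Each page returns at most 1000 keys, which is why a 5-million-object bucket cannot be fetched in one call.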

[ceph-users] Re: RadosGW cant list objects when there are too many of them

2019-10-21 Thread Arash Shams
Thanks Paul. Yes, listing v2 is not supported yet. I checked the metadata OSDs and all of them are 600 GB 10k HDDs; I don't think that was the issue. I will test --allow-unordered. Regards

From: Paul Emmerich
Sent: Thursday, October 17, 2019 10:00 AM
To: Arash Shams

[ceph-users] custom x-amz-request-id

2019-11-13 Thread Arash Shams
Hi everybody, I'm using Nginx in front of radosgw and I generate the request ID header in Nginx. Can I pass the same value to radosgw and tell it to use this header instead of generating a new one? Nginx sample: more_set_input_headers "x-amz-request-id: $txid" Thanks
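For context, here is a sketch of the Nginx side of this setup, assuming the headers-more module (which provides `more_set_input_headers`); `$txid` is the poster's own variable for the generated ID, and `radosgw_upstream` is a placeholder. Whether radosgw will honor the incoming header rather than generate its own x-amz-request-id is exactly the open question in the thread, so the config also logs the edge-side ID for correlation either way.

```nginx
# Requires the headers-more module (more_set_input_headers).
# $txid is assumed to hold the generated request ID; nginx's built-in
# $request_id variable could serve the same purpose.
server {
    listen 80;

    location / {
        # Forward the edge-generated ID toward radosgw.
        more_set_input_headers "x-amz-request-id: $txid";
        proxy_pass http://radosgw_upstream;

        # Also return the edge ID to the client, so requests can be
        # correlated even if radosgw substitutes its own ID.
        add_header X-Edge-Request-Id $txid always;
    }
}
```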

[ceph-users] RGW listing millions of objects takes too much time

2019-12-09 Thread Arash Shams
Dear All, I have almost 30 million objects and I want to list them and index them somewhere else. I'm using boto3 with a continuation marker, but it takes almost 9 hours. Can I run it in multiple threads to make it faster? What solution do you suggest to speed up this process? Thanks
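A sequential continuation-marker walk cannot be parallelized directly, because each page's token depends on the previous page. One common workaround, not from the thread itself, is to shard the listing by key prefix and list the shards concurrently; this sketch assumes the bucket's keys are spread across a known set of prefixes, and `list_page` again stands in for a `list_objects_v2`-style call.

```python
from concurrent.futures import ThreadPoolExecutor

def list_prefix(list_page, bucket, prefix):
    """Collect all keys under one prefix, paging with ContinuationToken."""
    keys, token = [], None
    while True:
        kwargs = {"Bucket": bucket, "Prefix": prefix}
        if token:
            kwargs["ContinuationToken"] = token
        page = list_page(**kwargs)
        keys.extend(o["Key"] for o in page.get("Contents", []))
        if not page.get("IsTruncated"):
            return keys
        token = page["NextContinuationToken"]

def list_sharded(list_page, bucket, prefixes, workers=8):
    """List disjoint prefixes in parallel threads and merge the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda p: list_prefix(list_page, bucket, p), prefixes)
    return [k for part in parts for k in part]

# With a real client, sharding on a leading hex character (assuming keys
# start with one) might look like:
#   keys = list_sharded(s3.list_objects_v2, "bucket", list("0123456789abcdef"))
```

The speedup depends on how evenly the keys split across the chosen prefixes; a skewed shard still lists sequentially within itself.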