[ceph-users] Re: bug ceph auth

2021-07-14 Thread Wesley Dillingham
Do you get the same error if you just do "ceph auth get
client.bootstrap-osd", i.e. does client.bootstrap-osd exist as a user?

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 


On Wed, Jul 14, 2021 at 1:56 PM Wesley Dillingham 
wrote:

> Does /var/lib/ceph/bootstrap-osd/ exist, and is it writable?
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn 
>
>
> On Wed, Jul 14, 2021 at 8:35 AM Marc  wrote:
>
>>
>>
>>
>> [@t01 ~]# ceph auth get client.bootstrap-osd -o
>> ​/var/lib/ceph/bootstrap-osd/ceph.keyring
>> Traceback (most recent call last):
>>   File "/usr/bin/ceph", line 1272, in <module>
>> retval = main()
>>   File "/usr/bin/ceph", line 1120, in main
>> print('Can\'t open output file {0}:
>> {1}'.format(parsed_args.output_file, e), file=sys.stderr)
>>   File "/usr/lib64/python2.7/codecs.py", line 351, in write
>> data, consumed = self.encode(object, self.errors)
>> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 23:
>> ordinal not in range(128)
>>
>>
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bug ceph auth

2021-07-14 Thread Wesley Dillingham
Does /var/lib/ceph/bootstrap-osd/ exist, and is it writable?
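
A quick way to check both, as a sketch (the .writetest filename below is just a
throwaway example, not anything Ceph expects):

# does the directory exist, and who can write to it?
ls -ld /var/lib/ceph/bootstrap-osd/
# try an actual write as the user running the ceph command
touch /var/lib/ceph/bootstrap-osd/.writetest && rm /var/lib/ceph/bootstrap-osd/.writetest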

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 


On Wed, Jul 14, 2021 at 8:35 AM Marc  wrote:

>
>
>
> [@t01 ~]# ceph auth get client.bootstrap-osd -o
> ​/var/lib/ceph/bootstrap-osd/ceph.keyring
> Traceback (most recent call last):
>   File "/usr/bin/ceph", line 1272, in <module>
> retval = main()
>   File "/usr/bin/ceph", line 1120, in main
> print('Can\'t open output file {0}:
> {1}'.format(parsed_args.output_file, e), file=sys.stderr)
>   File "/usr/lib64/python2.7/codecs.py", line 351, in write
> data, consumed = self.encode(object, self.errors)
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 23:
> ordinal not in range(128)
>
>
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RocksDB resharding does not work

2021-07-14 Thread Neha Ojha
This seems to be a real issue; I created
https://tracker.ceph.com/issues/51676 to track it.
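
In the meantime, a read-only way to see what state the DB was left in (a sketch,
assuming the same containerized OSD as in the quoted procedure below; show-sharding
and fsck only inspect, they don't modify anything):

# stop the OSD and enter its container, as in the original procedure
ceph orch daemon stop osd.13
cephadm shell --name osd.13
# print the sharding definition currently recorded in the OSD's RocksDB
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-13 show-sharding
# consistency check of the BlueStore metadata
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-13 fsck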

Thanks,
Neha

On Thu, Jul 8, 2021 at 8:21 AM Robert Sander
 wrote:
>
> Hi,
>
> I am trying to apply the resharding to a containerized OSD (16.2.4) as 
> described here:
>
> https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#rocksdb-sharding
>
> # ceph osd set noout
> # ceph orch daemon stop osd.13
> # cephadm shell --name osd.13
> # ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-13 --sharding="m(3) 
> p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard
> 2021-07-08T15:12:03.392+ 7f2e2173b3c0 -1 rocksdb: prepare_for_reshard 
> failure parsing column options: block_cache={type=binned_lru}
> error resharding: (22) Invalid argument
> # exit
> # ceph orch daemon start osd.13
> # ceph osd unset noout
>
> The OSD cannot start any more and has these errors in its log:
>
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  0 osd.13:7.OSDShard using op scheduler 
> ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  1 bluestore(/var/lib/ceph/osd/ceph-13) _mount path 
> /var/lib/ceph/osd/ceph-13
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  0 bluestore(/var/lib/ceph/osd/ceph-13) _open_db_and_around 
> read-only:0 repair:0
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  1 bdev(0x564271210800 /var/lib/ceph/osd/ceph-13/block) open 
> path /var/lib/ceph/osd/ceph-13/block
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  1 bdev(0x564271210800 /var/lib/ceph/osd/ceph-13/block) open 
> size 107369988096 (0x18ffc0, 100 GiB) block_size 4096 (4 KiB) 
> non-rotational discard supported
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  1 bluestore(/var/lib/ceph/osd/ceph-13) _set_cache_sizes 
> cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  1 bdev(0x564271210c00 /var/lib/ceph/osd/ceph-13/block) open 
> path /var/lib/ceph/osd/ceph-13/block
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  1 bdev(0x564271210c00 /var/lib/ceph/osd/ceph-13/block) open 
> size 107369988096 (0x18ffc0, 100 GiB) block_size 4096 (4 KiB) 
> non-rotational discard supported
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  1 bluefs add_block_device bdev 1 path 
> /var/lib/ceph/osd/ceph-13/block size 100 GiB
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  1 bluefs mount
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.293+ 
> 7efc32db4080  1 bluefs _init_alloc shared, id 1, capacity 0x18ffc0, block 
> size 0x1
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.325+ 
> 7efc32db4080  1 bluefs mount shared_bdev_used = 0
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.325+ 
> 7efc32db4080 -1 rocksdb: verify_sharding extra columns in rocksdb. rocksdb 
> columns = [default,m-0,m-1,m-2,p-0,p-1,p-2] target columns = 
> [reshardingXcommencingXlocked]
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.325+ 
> 7efc32db4080 -1 bluestore(/var/lib/ceph/osd/ceph-13) _open_db erroring 
> opening db:
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.325+ 
> 7efc32db4080  1 bluefs umount
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.325+ 
> 7efc32db4080  1 bdev(0x564271210c00 /var/lib/ceph/osd/ceph-13/block) close
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.577+ 
> 7efc32db4080  1 bdev(0x564271210800 /var/lib/ceph/osd/ceph-13/block) close
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.825+ 
> 7efc32db4080 -1 osd.13 0 OSD:init: unable to mount object store
> Jul 08 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.825+ 
> 7efc32db4080 -1  ** ERROR: osd init failed: (5) Input/output error
>
> How do I correct the issue?
>
> Regards
> --
> Robert Sander
> Heinlein Consulting GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> http://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Mandatory disclosures per §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> Managing director: Peer Heinlein -- Registered office: Berlin
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm stuck in deleting state

2021-07-14 Thread Dimitri Savineau
Hi,

That's probably related to https://tracker.ceph.com/issues/51571

Regards,

Dimitri

On Wed, Jul 14, 2021 at 8:17 AM Eugen Block  wrote:

> Hi,
>
> do you see the daemon on that iscsi host(s) with 'cephadm ls'? If the
> answer is yes, you could remove it with cephadm, too:
>
> cephadm rm-daemon --name iscsi.iscsi
>
> Does that help?
>
>
> Quoting Fyodor Ustinov:
>
> > Hi!
> >
> > I have a freshly installed Pacific
> >
> > root@s-26-9-19-mon-m1:~# ceph version
> > ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a)
> > pacific (stable)
> >
> > I managed to bring it to this state:
> >
> > root@s-26-9-19-mon-m1:~# ceph health detail
> > HEALTH_ERR Module 'cephadm' has failed: dashboard iscsi-gateway-rm
> > failed: iSCSI gateway 'iscsi-gw-1' does not exist retval: -2
> > [ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: dashboard
> > iscsi-gateway-rm failed: iSCSI gateway 'iscsi-gw-1' does not exist
> > retval: -2
> > Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed:
> > iSCSI gateway 'iscsi-gw-1' does not exist retval: -2
> >
> >
> > root@s-26-9-19-mon-m1:~# ceph orch ls
> > NAME   PORTSRUNNING  REFRESHED   AGE  PLACEMENT
> > alertmanager   ?:9093,9094  1/1  14m ago 9d   count:1;label:mon
> > crash 12/12  14m ago 11d  *
> > grafana?:3000   1/1  14m ago 9d   count:1;label:mon
> > iscsi.iscsi 0/07h
>  iscsi-gw-1;iscsi-gw-2
> > mgr 2/2  14m ago 9d   count:2;label:mon
> > mon 3/3  14m ago 5d   count:3
> > node-exporter  ?:9100 12/12  14m ago 11d  *
> > osd   54/54  14m ago -
> > prometheus ?:9095   1/1  14m ago 5d   count:1;label:mon
> >
> > root@s-26-9-19-mon-m1:~# ceph orch host ls
> > HOST  ADDR  LABELS  STATUS
> > s-26-9-17 10.5.107.104  _admin
> > s-26-9-18 10.5.107.105  _admin
> > s-26-9-19-mon-m1  10.5.107.101  mon _admin
> > s-26-9-20 10.5.107.106  _admin
> > s-26-9-21 10.5.107.107  _admin
> > s-26-9-22 10.5.107.110  _admin
> > s-26-9-23 10.5.107.108  _admin
> > s-26-9-24-mon-m2  10.5.107.102  _admin mon
> > s-26-9-25 10.5.107.111  _admin
> > s-26-9-26 10.5.107.109  _admin
> > s-26-9-27 10.5.107.112  _admin
> > s-26-9-28-mon-m3  10.5.107.103  _admin mon
> >
> >
> > How can we get the cluster out of this state now?
> >
> > WBR,
> > Fyodor.
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] bug ceph auth

2021-07-14 Thread Marc



[@t01 ~]# ceph auth get client.bootstrap-osd -o 
​/var/lib/ceph/bootstrap-osd/ceph.keyring
Traceback (most recent call last):
  File "/usr/bin/ceph", line 1272, in 
retval = main()
  File "/usr/bin/ceph", line 1120, in main
print('Can\'t open output file {0}: {1}'.format(parsed_args.output_file, 
e), file=sys.stderr)
  File "/usr/lib64/python2.7/codecs.py", line 351, in write
data, consumed = self.encode(object, self.errors)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 23: 
ordinal not in range(128)
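
One thing worth ruling out here: in the message being formatted, "Can't open
output file " is exactly 23 characters long, so the offending 0xe2 byte sits at
the very start of the output path. That pattern is what you would see if an
invisible non-ASCII character (for example a zero-width space, whose UTF-8
encoding begins with 0xe2) was pasted in front of the path, and the Python 2
ceph CLI then crashed while trying to print its own "can't open output file"
error. This is an assumption, not something confirmed in this thread, but it is
easy to check (sketch):

# dump the raw bytes of the path as pasted; stray "e2 xx xx" sequences betray
# invisible characters (paste the exact text between the quotes)
printf '%s' '/var/lib/ceph/bootstrap-osd/ceph.keyring' | hexdump -C
# if anything unexpected shows up, retype the command by hand and retry
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring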



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm stuck in deleting state

2021-07-14 Thread Eugen Block

Hi,

do you see the daemon on that iscsi host(s) with 'cephadm ls'? If the  
answer is yes, you could remove it with cephadm, too:


cephadm rm-daemon --name iscsi.iscsi

Does that help?
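
If the daemon itself is already gone and only the failed-module state remains,
something along these lines might also be needed (a sketch, not verified against
this exact situation):

# remove the leftover iscsi service spec so cephadm stops reconciling it
ceph orch rm iscsi.iscsi
# restart the active mgr so the cephadm module's failure state is cleared
ceph mgr fail
# then re-check
ceph health detail
ceph orch ls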


Quoting Fyodor Ustinov:


Hi!

I have a freshly installed Pacific

root@s-26-9-19-mon-m1:~# ceph version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a)  
pacific (stable)


I managed to bring it to this state:

root@s-26-9-19-mon-m1:~# ceph health detail
HEALTH_ERR Module 'cephadm' has failed: dashboard iscsi-gateway-rm  
failed: iSCSI gateway 'iscsi-gw-1' does not exist retval: -2
[ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: dashboard  
iscsi-gateway-rm failed: iSCSI gateway 'iscsi-gw-1' does not exist  
retval: -2
Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed:  
iSCSI gateway 'iscsi-gw-1' does not exist retval: -2



root@s-26-9-19-mon-m1:~# ceph orch ls
NAME   PORTSRUNNING  REFRESHED   AGE  PLACEMENT
alertmanager   ?:9093,9094  1/1  14m ago 9d   count:1;label:mon
crash 12/12  14m ago 11d  *
grafana?:3000   1/1  14m ago 9d   count:1;label:mon
iscsi.iscsi 0/07h   iscsi-gw-1;iscsi-gw-2
mgr 2/2  14m ago 9d   count:2;label:mon
mon 3/3  14m ago 5d   count:3
node-exporter  ?:9100 12/12  14m ago 11d  *
osd   54/54  14m ago -
prometheus ?:9095   1/1  14m ago 5d   count:1;label:mon

root@s-26-9-19-mon-m1:~# ceph orch host ls
HOST  ADDR  LABELS  STATUS
s-26-9-17 10.5.107.104  _admin
s-26-9-18 10.5.107.105  _admin
s-26-9-19-mon-m1  10.5.107.101  mon _admin
s-26-9-20 10.5.107.106  _admin
s-26-9-21 10.5.107.107  _admin
s-26-9-22 10.5.107.110  _admin
s-26-9-23 10.5.107.108  _admin
s-26-9-24-mon-m2  10.5.107.102  _admin mon
s-26-9-25 10.5.107.111  _admin
s-26-9-26 10.5.107.109  _admin
s-26-9-27 10.5.107.112  _admin
s-26-9-28-mon-m3  10.5.107.103  _admin mon


How can we get the cluster out of this state now?

WBR,
Fyodor.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io




___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] pool removed_snaps

2021-07-14 Thread Seena Fallah
Hi,

In ceph osd dump I see many removed_snaps, on the order of 500k.
I sometimes see a snap trimming event in ceph status, but when I dump
removed_snaps again afterwards, the list does not get any smaller!

How can I get rid of these removed_snaps?
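
A rough way to quantify them from the dump (a sketch that simply counts the
comma-separated intervals printed on the removed_snaps lines):

ceph osd dump | grep removed_snaps | tr ',' '\n' | wc -l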

Thanks.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] "missing required protocol features" when map rbd

2021-07-14 Thread Szabo, Istvan (Agoda)
Hi,

We have updated our cluster from Luminous 12.2.8 to Nautilus 14.2.22, and users
with this kernel:

Linux servername  4.9.241-37.el7.x86_64 #1 SMP Mon Nov 2 13:55:04 UTC 2020 
x86_64 x86_64 x86_64 GNU/Linux

They experience this issue when they try to map the image (unmap works):


[15085691.865199] libceph: mon2 10.10.10.10:6789 feature set mismatch, my 
240106b84a842a52 < server's 260106b84aa42a52, missing 220
[15085691.890027] libceph: mon2 10.10.10.10:6789 missing required protocol 
features

This kernel version works:
Linux servername  3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 
2021 x86_64 x86_64 x86_64 GNU/Linux

I've read about setting the crush tunables to hammer and so on; at the moment those
are on jewel, and I don't think it's a good idea to go back to an even older profile.
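
For reference, a few read-only commands that show what the cluster currently
requires versus what the connected clients advertise (a sketch; none of these
change anything):

# minimum client release the cluster insists on
ceph osd get-require-min-compat-client
# feature bitmasks reported by connected clients and daemons
ceph features
# current crush tunables profile and values
ceph osd crush show-tunables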

Any idea how to fix it?

Thank you


This message is confidential and is for the sole use of the intended 
recipient(s). It may also be privileged or otherwise protected by copyright or 
other legal rules. If you have received it by mistake please let us know by 
reply email and delete it from your system. It is prohibited to copy this 
message or disclose its content to anyone. Any confidentiality or privilege is 
not waived or lost by any mistaken delivery or unauthorized disclosure of the 
message. All messages sent to and from Agoda may be monitored to ensure 
compliance with company policies, to protect the company's interests and to 
remove potential malware. Electronic messages may be intercepted, amended, lost 
or deleted, or contain viruses.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] cephadm stuck in deleting state

2021-07-14 Thread Fyodor Ustinov
Hi!

I have a freshly installed Pacific

root@s-26-9-19-mon-m1:~# ceph version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

I managed to bring it to this state:

root@s-26-9-19-mon-m1:~# ceph health detail
HEALTH_ERR Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: 
iSCSI gateway 'iscsi-gw-1' does not exist retval: -2
[ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: dashboard iscsi-gateway-rm 
failed: iSCSI gateway 'iscsi-gw-1' does not exist retval: -2
Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: iSCSI 
gateway 'iscsi-gw-1' does not exist retval: -2


root@s-26-9-19-mon-m1:~# ceph orch ls
NAME   PORTSRUNNING  REFRESHED   AGE  PLACEMENT  
alertmanager   ?:9093,9094  1/1  14m ago 9d   count:1;label:mon  
crash 12/12  14m ago 11d  *  
grafana?:3000   1/1  14m ago 9d   count:1;label:mon  
iscsi.iscsi 0/07h   iscsi-gw-1;iscsi-gw-2  
mgr 2/2  14m ago 9d   count:2;label:mon  
mon 3/3  14m ago 5d   count:3
node-exporter  ?:9100 12/12  14m ago 11d  *  
osd   54/54  14m ago -
prometheus ?:9095   1/1  14m ago 5d   count:1;label:mon  

root@s-26-9-19-mon-m1:~# ceph orch host ls
HOST  ADDR  LABELS  STATUS  
s-26-9-17 10.5.107.104  _admin  
s-26-9-18 10.5.107.105  _admin  
s-26-9-19-mon-m1  10.5.107.101  mon _admin  
s-26-9-20 10.5.107.106  _admin  
s-26-9-21 10.5.107.107  _admin  
s-26-9-22 10.5.107.110  _admin  
s-26-9-23 10.5.107.108  _admin  
s-26-9-24-mon-m2  10.5.107.102  _admin mon  
s-26-9-25 10.5.107.111  _admin  
s-26-9-26 10.5.107.109  _admin  
s-26-9-27 10.5.107.112  _admin  
s-26-9-28-mon-m3  10.5.107.103  _admin mon  


How can we get the cluster out of this state now?

WBR,
Fyodor.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] "ceph fs perf stats" and "cephfs-top" don't work

2021-07-14 Thread Erwin Bogaard
Hi,

I just upgraded our cluster to pacific 16.2.5.
As I'm curious what insights cephfs-top could provide, I followed the
steps in the documentation.
After enabling the mgr module "stats":

# ceph mgr module ls
...
"enabled_modules": [
"dashboard",
"iostat",
"restful",
"stats",
"zabbix"
...

I tried the following command:
# ceph fs perf stats
{"version": 1, "global_counters": ["cap_hit", "read_latency",
"write_latency", "metadata_latency", "dentry_lease", "opened_files",
"pinned_icaps", "opened_inodes"], "counters": [], "client_metadata": {},
"global_metrics": {}, "metrics": {"delayed_ranks": []}}

As you can see, this returns no info whatsoever. The same with:

# cephfs-top
cluster ceph does not exist

The actual cluster name is "ceph".

So I don't understand why "ceph fs perf stats" isn't showing any
information.
Maybe another indicator that something isn't right:

# ceph fs status
cephfs - 0 clients
==
RANK  STATE  MDSACTIVITY DNSINOS   DIRS   CAPS
...

I see "0 clients". When I take a look in the mgr dashboard, I can actually
see all clients. Which are RHEL 7 & 8 cephfs kernel clients.
There is only 1 mds active, and 1 in standby-replay.
I have multiple pools active, but only 1 fs.

Does anyone have a suggestion where I can take a look to enable gathering the
stats?
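
Two things that might be worth checking, as a sketch (per the cephfs-top
documentation it connects as client.fstop by default; whether a missing user
produces exactly the "cluster ceph does not exist" message is an assumption, and
whether the RHEL 7/8 kernel clients are new enough to send metrics to the MDS is
also worth verifying):

# create the auth user cephfs-top uses by default
ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r'
# confirm the stats module is loaded, then retry
ceph mgr module ls | grep stats
cephfs-top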
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: resharding and s3cmd empty listing

2021-07-14 Thread Konstantin Shalygin
Hi,

What is your Ceph version?


k

Sent from my iPhone

> On 12 Jul 2021, at 20:41, Jean-Sebastien Landry 
>  wrote:
> 
> Hi everyone, something strange here with bucket resharding vs. bucket 
> listing.
> 
> I have a bucket with about 1M objects in it. I increased the bucket quota 
> from 1M to 2M and manually resharded from 11 to 23 shards (dynamic resharding is 
> disabled).
> Since then, the user can't list objects in some paths. The objects are there, 
> but the client can't list them.
> 
> Using this example: s3://bucket/dir1/dir2/dir3/dir4
> 
> s3cmd can't list the objects in dir2 and dir4 but rclone works and list all 
> objects.
> s3cmd don't give any errors, just list the path with no object in it.
> 
> I reshard to 1, everything is ok, s3cmd can list all objects in all paths.
> I reshard to 11, s3cmd works with dir2 but can't list the objects in dir4.
> I reshard to 13, s3cmd can't list dir2 and dir4.
> I reshard to 7, s3cmd works with all the paths.
> 
> s3cmd always works with dir1 and dir3, regardless of the shard number, the 
> problem is just with dir2 and dir4.
> s3cmd, s3browser and "aws s3 ls" are problematic, "aws s3api list-objects" 
> and rclone always work.
> 
> I did a "bucket check --fix --check-objects", scrub/deep-scrub of the index 
> pgs, "bi list" looks good to me, charset & etags looks good too, s3cmd in 
> debug mode doesn't report any error, no xml error, no http-4xx, everything is 
> http-200. I can't find anything suspicious in the haproxy/beast syslog. The 
> resharding process didn't give any error, and everything is HEALTH_OK.
> 
> Maybe the next step is to look for a s3cmd/python bug, but I'm curious if 
> someone here have ever experienced something like this.
> Any thoughts are welcome :-)
> Thanks!
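
One way to narrow this down might be to compare a delimiter-based listing (the
style s3cmd and "aws s3 ls" use) against a flat listing of the same prefix
(closer to what rclone and "aws s3api list-objects" without --delimiter do).
Bucket and prefix names below are placeholders taken from the example above:

# delimiter-based listing: returns CommonPrefixes; the style that fails here
aws s3api list-objects --bucket bucket --prefix dir1/dir2/ --delimiter /
# flat listing of the same prefix: the style that reportedly works
aws s3api list-objects --bucket bucket --prefix dir1/dir2/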
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io