Well, it didn't work at first; I found that I had created the user without
'--system'. After I modified the user with '--system', the dashboard
connected to the RGW. I'm not sure whether I did anything else outside the
docs.
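For reference, a sketch of applying the flag to an existing user and then re-pointing the dashboard at its keys (the uid 'dashboard' and the key placeholders are hypothetical; depending on your release, the dashboard commands may instead read the key from a file via '-i'):

```shell
# Flag an existing RGW user as a system user (uid is a placeholder).
radosgw-admin user modify --uid=dashboard --system

# Tell the dashboard which credentials to use for the RGW API
# (substitute the user's actual access/secret keys).
ceph dashboard set-rgw-api-access-key <access_key>
ceph dashboard set-rgw-api-secret-key <secret_key>
```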
sathvik vutukuri <7vik.sath...@gmail.com> wrote on Mon, Jul 27, 2020 at 11:36:
I have done the same, but this is the issue in the dashboard:
Information
key system is not in dict {u'attrs': [], u'display_name': u'User for
Connector', u'default_storage_class': u'', u'keys': [{u'access_key':
u'Q2RQU16YETCDGQ0C483Q', u'secret_key':
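The "key system is not in dict" message suggests the dashboard looks for a 'system' field in the user-info dict returned by radosgw-admin, and that field is only present for users flagged with '--system'. A minimal sketch of that failing lookup (field values are hypothetical placeholders, not the real dashboard code):

```python
# Hypothetical user-info dict as returned for a non-system user;
# note there is no "system" key because the user was created
# without --system.
user_info = {
    "attrs": [],
    "display_name": "User for Connector",
    "default_storage_class": "",
    "keys": [{"access_key": "EXAMPLEACCESSKEY", "secret_key": "<redacted>"}],
}

def is_system_user(info):
    """Sketch of the dashboard-style check: raise if the flag is absent."""
    if "system" not in info:
        raise KeyError("key system is not in dict %r" % info)
    return info["system"] == "true"
```

Once the user is modified with '--system', the dict gains a "system": "true" entry and the check passes.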
Hi all,
I have a cluster providing object storage.
The cluster worked well until someone started saving Flink checkpoints in
the 'flink' bucket. I checked its behavior and found that Flink saves the
current checkpoint data and frequently deletes the former ones. I
suppose that it makes the bucket
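The original message is cut off here, but for a bucket with a heavy write-then-delete churn pattern like checkpointing, a hedged diagnostic sketch would be to check the bucket's index/object counts and the pending garbage collection (deleted RGW objects are reclaimed asynchronously by the GC):

```shell
# Inspect object counts and index shard usage for the suspect bucket.
radosgw-admin bucket stats --bucket=flink

# List objects still awaiting garbage collection, including not-yet-due ones.
radosgw-admin gc list --include-all
```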
The user provided to the dashboard must be created with '--system' in
radosgw-admin; otherwise it won't work.
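A sketch of creating such a user from scratch (uid and display name are placeholders); the command prints the generated access and secret keys, which are then handed to the dashboard:

```shell
# Create the dashboard's RGW user with the system flag from the start.
radosgw-admin user create --uid=dashboard \
    --display-name="Ceph Dashboard" --system
```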
sathvik vutukuri <7vik.sath...@gmail.com> wrote on Sun, Jul 26, 2020 at 09:54:
> I have enabled it using the same doc, but somehow it's not working.
>
> On Sun, 26 Jul 2020, 06:55 Oliver Freyermuth,
Dear fellow Cephers,
I observe a weird problem on our Mimic 13.2.8 cluster. We have an EC RBD pool
backed by HDDs. These disks are not in any other pool. I noticed that the total
capacity (= USED + MAX AVAIL) reported by "ceph df detail" has recently shrunk
from 300 TiB to 200 TiB. Part but by no
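One thing worth noting when chasing this: MAX AVAIL in "ceph df" is not a simple sum of free space; it is derived from the fullest OSD reachable by the pool's CRUSH rule (scaled by the full ratio), so growing imbalance across OSDs can shrink the reported capacity without any disk being removed. A diagnostic sketch:

```shell
# Per-pool usage and the MAX AVAIL figure in question.
ceph df detail

# Per-OSD utilization laid out by CRUSH hierarchy; look for outlier OSDs,
# since the fullest one caps the pool's MAX AVAIL.
ceph osd df tree
```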