Hi,
When you list the roles, the Condition element of the trust policy in the
role doesn't seem quite right:
"Condition": {
>"StringEquals": {
>"localhost:8080/auth/realms/demo:myclient
On Wed, Mar 16, 2022 at 10:49:15 AM, Frank Schilder wrote:
> Returning to this thread, I finally managed to capture the problem I'm
> facing in a log. The time service to the outside world is blocked by
> our organisation's firewall and I'm restricted to use internal time
> servers.
Hello,
It seems like Pritha is the Ceph RGW expert in this forum. I am currently
trying to integrate Ceph RGW object storage with Keycloak as the OIDC
provider. I am running Ceph version 16.2.7 (Pacific, stable).
At this point, I am just trying to get a POC working with the Python
scripts
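For anyone following along, a minimal sketch of what such a POC script assembles for the STS call is below. All names (endpoint, role ARN, session name) are hypothetical placeholders, not values from this thread; the actual boto3 call is shown in comments since it needs a live RGW endpoint:

```python
import json

# Hypothetical placeholders -- substitute your own RGW endpoint and role.
RGW_ENDPOINT = "http://rgw.example.com:8000"
ROLE_ARN = "arn:aws:iam:::role/s3access"

def build_assume_role_request(web_identity_token: str) -> dict:
    """Parameters for an sts.assume_role_with_web_identity() call (boto3)."""
    return {
        "RoleArn": ROLE_ARN,
        "RoleSessionName": "poc-session",   # hypothetical session name
        "WebIdentityToken": web_identity_token,
        "DurationSeconds": 3600,
    }

# With boto3 installed, the call against RGW's STS API would look like:
#   sts = boto3.client("sts", endpoint_url=RGW_ENDPOINT,
#                      aws_access_key_id="", aws_secret_access_key="")
#   resp = sts.assume_role_with_web_identity(**build_assume_role_request(token))
#   creds = resp["Credentials"]  # temporary S3 credentials

# The token itself comes from Keycloak's token endpoint for the realm.
params = build_assume_role_request("<keycloak-access-token>")
print(json.dumps(params, indent=2))
```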
Hi,
we've got a user whose name looks like : (please don't ask me why,
I have no clue), and it shows some strange behavior in the syncing process.
In the master zonegroup the user looks like this:
root@s3db1:~# radosgw-admin user info --uid
Hi all,
There is one week left until the Ceph User Survey closes. Please
consider taking it or sharing it with others that use Ceph.
On Fri, Feb 11, 2022 at 3:59 PM Mike Perez wrote:
>
> Hi everyone!
>
> Be sure to make your voice heard by taking the Ceph User Survey before
> March 25, 2022.
Hi,
Following up on this: is it normal for them to take a while? I
notice that once I have restarted an OSD, the 'meta' value drops right down
to empty and slowly builds back up. The restarted OSDs start with just 1 GB
or so of metadata and increase over time to 160-170 GB of metadata.
So
Hello,
If I understand my issue correctly, it is in fact unrelated to CephFS itself;
rather, the problem happens at a lower level (in Ceph itself). IOW, it affects
all kinds of snapshots, not just CephFS ones. I believe my FS is healthy
otherwise. In any case, here is the output of the command you
Hi everyone
On March 24 at 17:00 UTC, hear Kamoltat (Junior) Sirivadhna give a
Ceph Tech Talk on how Teuthology, Ceph's integration test framework,
works!
https://ceph.io/en/community/tech-talks/
Also, if you would like to present and share with the community what
you're doing with Ceph or
I tried it on a mini-cluster (4 Raspberries) with 16.2.7.
Same procedure, same effect. I just can’t get rid of these objects.
Is there any method that would allow me to delete these objects without
damaging RGW?
Ciao, Uli
> On 17.03.2022, at 15:30, Soumya Koduri wrote:
>
> On 3/17/22
Hello,
I have an SSD pool that was initially created years ago with 128 PGs. This
seems suboptimal to me. The pool spans 32 OSDs of 1.6 TiB each: 8 servers
with 4 OSDs each.
ceph osd pool autoscale-status recommends 2048 PGs.
Is it safe to enable the autoscale mode? Is the pool still accessible
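For reference, enabling the autoscaler per pool is a small sketch like the following (pool name hypothetical). The pool stays online while PGs are split; the autoscaler adjusts gradually, though the splits do generate some backfill I/O:

```shell
# Review what the autoscaler would change before enabling it.
ceph osd pool autoscale-status

# Enable autoscaling for just this pool (hypothetical pool name).
ceph osd pool set ssd-pool pg_autoscale_mode on
```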
On 3/10/2022 6:10 PM, Sasa Glumac wrote:
> In this respect could you please try to switch bluestore and bluefs
> allocators to bitmap and run some smoke benchmarking again.
Can I change this on a live server (is there a possibility of losing data,
etc.)? Can you please share the correct procedure.
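If it helps, the allocators are ordinary config options, so a sketch of the switch (assuming centralized config, as with cephadm) would be: set the options, then restart OSDs one at a time, waiting for HEALTH_OK between restarts. The allocator is chosen at OSD start-up and does not rewrite on-disk data, but trying it on a single OSD first is prudent:

```shell
ceph config set osd bluestore_allocator bitmap
ceph config set osd bluefs_allocator bitmap

# Restart OSDs one at a time; the new allocator takes effect at start-up.
systemctl restart ceph-osd@<id>
```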