[ceph-users] Re: RBD Images with namespace and K8s

2022-11-22 Thread Marcus Müller
On Mon, Nov 21, 2022 at 5:48 PM Marcus Müller wrote:
>> Hi all,
>> we created an RBD image for use in a K8s cluster. We use our own user and namespace for that RBD image.
>> If we want to use this RBD image as a volume in K8s, it won’t work

[ceph-users] RBD Images with namespace and K8s

2022-11-21 Thread Marcus Müller
Hi all, we created an RBD image for use in a K8s cluster. We use our own user and namespace for that RBD image. If we want to use this RBD image as a volume in K8s, it won’t work, as K8s can’t find the image; without a namespace for the RBD image it works. Do we have to set something special here?
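A likely cause: the CSI driver must be told about the RBD namespace explicitly, or it looks the image up in the pool's default namespace. A minimal sketch of the setup, with assumed names throughout (pool "k8s-pool", namespace "tenant-a", user "k8s" are illustrative, not from the post):

```shell
# Create the namespace and an image inside it (names are placeholders):
rbd namespace create k8s-pool/tenant-a
rbd create --size 10G k8s-pool/tenant-a/my-image

# Scope the user's caps to that namespace only:
ceph auth get-or-create client.k8s \
  mon 'profile rbd' \
  osd 'profile rbd pool=k8s-pool namespace=tenant-a'

# ceph-csi must also know the namespace: its RBD StorageClass accepts a
# radosNamespace parameter, e.g.
#   parameters:
#     pool: k8s-pool
#     radosNamespace: tenant-a
# Without that parameter the provisioner/node plugin opens the image in the
# pool's default namespace and will not find it.
```

Whether this matches the poster's setup depends on which CSI driver and version they run; `radosNamespace` is the ceph-csi name for this knob.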

[ceph-users] failed to decode CephXAuthenticate / handle_auth_bad_method server allowed_methods [2] but i only support [2]

2022-11-17 Thread Marcus Müller
Hi all, I am trying to install a new rgw node. After executing this command: /usr/bin/radosgw -f --cluster ceph --name client.rgw.s3-001 --setuser ceph --setgroup ceph --keyring=/etc/ceph/ceph.client.admin.keyring --conf /etc/ceph/ceph.conf -m 10.0.111.13 I get: 2022-11-16T15:37:39.291+01
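One common cause of this class of CephX failure is visible in the command itself: `--name client.rgw.s3-001` makes the daemon authenticate as that entity, but the keyring passed in (`ceph.client.admin.keyring`) only holds client.admin's key. A hedged sketch of creating and using a dedicated rgw keyring (the keyring path is an assumption, not from the post):

```shell
# Create a key for the rgw entity and write it to its own keyring file:
ceph auth get-or-create client.rgw.s3-001 \
  mon 'allow rw' osd 'allow rwx' \
  -o /var/lib/ceph/radosgw/ceph-rgw.s3-001/keyring

# Start the daemon with a keyring that actually contains that entity's key:
/usr/bin/radosgw -f --cluster ceph --name client.rgw.s3-001 \
  --setuser ceph --setgroup ceph \
  --keyring=/var/lib/ceph/radosgw/ceph-rgw.s3-001/keyring \
  --conf /etc/ceph/ceph.conf
```

If the entity exists but the key on disk differs from the one the monitors hold, the same decode/auth errors appear, so `ceph auth get client.rgw.s3-001` is worth comparing against the keyring file.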

[ceph-users] Re: radosgw API issues

2022-07-18 Thread Marcus Müller
ley:
> are you running quincy? it looks like this '/admin/info' API was new to that release
> https://docs.ceph.com/en/quincy/radosgw/adminops/#info
> On Fri, Jul 15, 2022 at 7:04 AM Marcus Müller wrote:
>> Hi all,

[ceph-users] radosgw API issues

2022-07-15 Thread Marcus Müller
Hi all, I’ve created a test user on our radosgw to work with the API. I’ve done the following: ~# radosgw-admin user create --uid=testuser --display-name="testuser" ~# radosgw-admin caps add --uid=testuser --caps={caps} "caps": [ { "type": "amz-cache", "perm": "
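For reference, the admin-ops caps are granted per API section. A minimal sketch of a typical setup (the exact caps string you need depends on which adminops endpoints you call; the set below is illustrative):

```shell
# Create the user and grant read/write caps on the common admin API sections:
radosgw-admin user create --uid=testuser --display-name="testuser"
radosgw-admin caps add --uid=testuser \
  --caps="users=read,write;buckets=read,write;metadata=read;usage=read"

# Verify the resulting "caps" array on the user:
radosgw-admin user info --uid=testuser
```

Admin API requests must then be sent to the admin entry point (by default `/admin`) signed with that user's S3 keys.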

[ceph-users] Re: scrubbing+deep+repair PGs since Upgrade

2022-06-27 Thread Marcus Müller
$ ceph daemon mon.ceph4 config get osd_scrub_auto_repair { "osd_scrub_auto_repair": "true" } What does this tell me now? The setting can be changed to false of course, but as list-inconsistent-obj shows something, I would like to find the reason for that first. Regards Marcus
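The usual investigation loop before turning the setting off looks like this (pool name and PG id below are placeholders):

```shell
# List PGs with recorded inconsistencies in a pool:
rados list-inconsistent-pg <pool>

# Inspect one of them: which objects, which shards, and what kind of error
# (read_error, omap_digest_mismatch, size_mismatch, ...):
rados list-inconsistent-obj 2.1f --format=json-pretty

# Once the root cause is understood, auto-repair can be disabled
# cluster-wide while investigating:
ceph config set osd osd_scrub_auto_repair false
```

The error type reported per shard usually points at the culprit (a flaky disk on one OSD vs. a systematic digest mismatch after an upgrade).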

[ceph-users] scrubbing+deep+repair PGs since Upgrade

2022-06-26 Thread Marcus Müller
Hi all, we recently upgraded from Ceph Luminous (12.x) to Ceph Octopus (15.x) (of course with Mimic and Nautilus in between). Since this upgrade we see a constant number of active+clean+scrubbing+deep+repair PGs. We never had this in the past; now every time (like 10 or 20 PGs at the same time

[ceph-users] Re: Ceph MON on ZFS filesystem - good idea?

2022-03-03 Thread Marcus Müller
snapshots, compression, etc?
> You might want to consider recordsize / blocksize for the dataset where it would live:
> https://www.reddit.com/r/zfs/comments/8l20f5/zfs_record_size_is_smaller_really_better/
>> On Mar 2, 2022, at 10:59 AM, Marcus Mül

[ceph-users] Ceph MON on ZFS filesystem - good idea?

2022-03-02 Thread Marcus Müller
Hi all, are there any recommendations for suitable filesystems for ceph monitors? In the past we always deployed them on ext4, but would ZFS be possible as well? Regards, Marcus
___
ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
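To make the recordsize advice from the reply concrete: monitors keep their state in a RocksDB under /var/lib/ceph/mon, so a dedicated dataset tuned for small, fsync-heavy writes is the knob to look at. A hedged sketch, assuming a pool named "tank" and default mon paths (both are assumptions, not from the thread):

```shell
# Dedicated dataset for the mon store; a small recordsize better matches
# RocksDB's write pattern than the 128K ZFS default:
zfs create -o recordsize=16K -o compression=lz4 \
  -o mountpoint=/var/lib/ceph/mon tank/ceph-mon
```

Whether this performs acceptably depends on the underlying vdevs and sync-write latency; ext4/xfs remain the commonly deployed choices for mon stores.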