[ceph-users] Re: Consequence of maintaining hundreds of clones of a single RBD image snapshot

2023-04-19 Thread Eugen Block
Hi, the closest thing to your request that I see in a customer cluster is 186 rbd children of a single image, and nobody has complained yet. The pools are all-flash with 60 SSD OSDs across 5 nodes and are used for OpenStack. Regarding the consistency during flattening, I haven't done that
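For reference, the clone relationships in question can be inspected and broken with the rbd CLI; the pool, image and snapshot names below are placeholders:
  rbd children rbd/golden-image@base   # list all clones of the snapshot
  rbd flatten rbd/clone-042            # copy the parent's data into one clone so it no longer depends on the snapshot
Flattening runs in the background and the clone stays usable while it proceeds.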

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-19 Thread Venky Shankar
Hi Reto, On Wed, Apr 19, 2023 at 9:34 PM Ilya Dryomov wrote: > > On Wed, Apr 19, 2023 at 5:57 PM Reto Gysi wrote: > > > > > > Hi, > > > > On Wed, 19 Apr 2023 at 11:02, Ilya Dryomov wrote: > >> > >> On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote: > >> > > >> > yes, I used the
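For context, a minimal sketch of the setup under discussion, using the pool and image names from the thread: an RBD image whose data lives in an erasure-coded pool, which is then snapshotted.
  ceph osd pool set ecpool_hdd allow_ec_overwrites true        # required before RBD can store data in an EC pool
  rbd create --size 10G --data-pool ecpool_hdd rbd/ceph-dev    # metadata in the replicated 'rbd' pool, data in the EC pool
  rbd snap create rbd/ceph-dev@backup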

[ceph-users] quincy user metadata constantly changing versions on multisite slave with radosgw roles

2023-04-19 Thread Christopher Durham
Hi, I am using 17.2.6 on Rocky Linux for both the master and the slave site. I noticed that radosgw-admin sync status often shows that the metadata sync is behind by a minute or two on the slave. This didn't make sense, as the metadata isn't changing as far as I know. radosgw-admin mdlog list (on
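For anyone following along, the commands involved, plus a way to inspect a single user's metadata version, look roughly like this (the uid is a placeholder):
  radosgw-admin sync status                 # shows whether metadata/data sync is caught up on the secondary
  radosgw-admin mdlog list | head -n 40     # recent metadata log entries, i.e. what keeps being re-synced
  radosgw-admin metadata get user:<uid>     # the "ver" field is the object version that keeps incrementing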

[ceph-users] Ceph rgw ldap user acl and quotas

2023-04-19 Thread Ignazio Cassano
Hello everyone, we are going to install Ceph object storage with LDAP authentication. We would like to know whether ACLs and quotas on objects and buckets work correctly with LDAP users. Thanks, Ignazio
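In general, rgw quotas are applied per user or per bucket with radosgw-admin regardless of the authentication backend; a sketch with placeholder values:
  radosgw-admin quota set --uid=<ldap-user> --quota-scope=user --max-size=10737418240 --max-objects=100000   # 10 GiB / 100k objects
  radosgw-admin quota enable --uid=<ldap-user> --quota-scope=user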

[ceph-users] Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

2023-04-19 Thread Anthony D'Atri
Actually there was a firmware bug around that a while back. The HBA and storcli claimed not to touch the drive cache, but were actually enabling it and lying. > On Apr 19, 2023, at 1:41 PM, Marco Gaiarin wrote: > > Hi! Mario Giammarco wrote: > >> The disk cache is:
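One way to cross-check what a controller reports is to ask the drive itself via smartctl; a sketch (the device path and megaraid slot number are placeholders):
  smartctl -g wcache /dev/sda                  # what the drive reports through the OS
  smartctl -g wcache -d megaraid,0 /dev/sda    # address a physical drive behind a MegaRAID/PERC directly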

[ceph-users] Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

2023-04-19 Thread Marco Gaiarin
Hi! Mario Giammarco wrote: > The disk cache is: If the controller does not lie, the disk cache is disabled, see my previous messages. > The controller cache: The manual says that for Non-RAID/Automatic RAID0 disks, «The only supported cache policy for non-RAID disks is

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-19 Thread Reto Gysi
Hi Ilya, Ok, I've migrated the ceph-dev image to a separate ecpool for rbd and now the backup works fine again.
root@zephir:~# umount /opt/ceph-dev
root@zephir:~# rbd unmap ceph-dev
root@zephir:~# rbd migration prepare --data-pool rbd_ecpool ceph-dev
root@zephir:~# rbd migration execute ceph-dev
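For completeness, a migration started with prepare/execute is normally finalized with commit before the image is mapped and mounted again; a sketch following the names above (the udev device path and mount point are assumptions based on the thread):
root@zephir:~# rbd migration commit ceph-dev
root@zephir:~# rbd map ceph-dev
root@zephir:~# mount /dev/rbd/rbd/ceph-dev /opt/ceph-dev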

[ceph-users] Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

2023-04-19 Thread Marco Gaiarin
Hi! Konstantin Shalygin wrote: > Current controller mode is RAID. You can switch to HBA mode and disable cache > in controller settings at the BIOS No, it's a bit more complex than that. The controller does not have an 'HBA mode', only an 'AutoRAID0' mode. -- Documentation is

[ceph-users] Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

2023-04-19 Thread Marco Gaiarin
Hi! Matthias Ferdinand wrote: > In the first linked mail, Dan van der Ster points to this page: > https://docs.ceph.com/en/latest/start/hardware-recommendations/#write-caches root@pppve1:~# for d in a b c d e f; do smartctl -g wcache /dev/sd$d | grep ^Write; done
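The write cache that loop reports can be switched off with the same tools; a sketch with placeholder device names (the setting may not survive a power cycle on every drive):
  for d in a b c d e f; do smartctl -s wcache,off /dev/sd$d; done
  hdparm -W 0 /dev/sdX    # alternative for SATA drives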

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-19 Thread Ilya Dryomov
On Wed, Apr 19, 2023 at 5:57 PM Reto Gysi wrote: > > > Hi, > > On Wed, 19 Apr 2023 at 11:02, Ilya Dryomov wrote: >> >> On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote: >> > >> > yes, I used the same ecpool_hdd also for cephfs file systems. The new pool >> > ecpool_test I've created for

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-19 Thread Reto Gysi
Hi, On Wed, 19 Apr 2023 at 11:02, Ilya Dryomov wrote: > On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote: > > > > yes, I used the same ecpool_hdd also for cephfs file systems. The new > pool ecpool_test I've created for a test, I've also created it with > application profile 'cephfs',

[ceph-users] Unprivileged Ceph containers

2023-04-19 Thread Stephen Smith6
Hey folks – just thought I’d ask here – today cephadm creates unit files for Ceph containers with --privileged in the “podman run” call. I saw https://github.com/ceph/ceph-container/blob/main/src/daemon/README.md but I’m not sure how this relates to Ceph when deployed with cephadm? Thanks,
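The flag in question shows up in the run script cephadm writes for each daemon; a sketch of where to look, with the fsid and daemon name as placeholders:
  grep -- --privileged /var/lib/ceph/<fsid>/<daemon-type>.<id>/unit.run
  systemctl cat ceph-<fsid>@<daemon-type>.<id>.service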

[ceph-users] Re: upgrading from el7 / nautilus

2023-04-19 Thread Michel Jouvin
Hi Marc, I can share what we did a few months ago. As a remark, I am not sure Nautilus is available on EL8, but maybe I missed it. In our case we followed this path: - Nautilus to Octopus on EL7, traditionally managed - Conversion of the cluster to a cephadm cluster, as it makes every
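For reference, the conversion step mentioned above is done per daemon with cephadm adopt; a minimal sketch with placeholder daemon names:
  cephadm adopt --style legacy --name mon.$(hostname -s)
  cephadm adopt --style legacy --name mgr.$(hostname -s)
  cephadm adopt --style legacy --name osd.0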

[ceph-users] upgrading from el7 / nautilus

2023-04-19 Thread Marc
Sorry for addressing this again, but I think there are quite a few still on Nautilus that are planning such an upgrade.
Nautilus is currently available for el7, el8
Octopus is currently available for el7, el8
Pacific is currently available for el8, el9
Quincy is currently available for el8,

[ceph-users] Re: HBA or RAID-0 + BBU

2023-04-19 Thread Anthony D'Atri
LSI 9266/9271 are also in an affected range unless ECO’d. > On Apr 19, 2023, at 3:13 PM, Sebastian wrote: > > I want add one thing to what other says, we discussed this between > Cephalocon sessions, avoid HP controllers p210/420, or upgrade firmware to > latest. > These controllers has

[ceph-users] Re: HBA or RAID-0 + BBU

2023-04-19 Thread Sebastian
I want to add one thing to what others have said; we discussed this between Cephalocon sessions: avoid the HP p210/420 controllers, or upgrade their firmware to the latest version. These controllers have a strange bug: under high workload they restart themselves. BR, Sebastian > On 19 Apr 2023, at 08:39, Janne Johansson wrote:

[ceph-users] Re: pacific el7 rpms

2023-04-19 Thread Marc
It would be better to remove such folders, because they give the impression that something is still coming. > > On EL7 only Nautilus was present. Pacific was from EL8 > > > k > > > > On 17 Apr 2023, at 11:29, Marc wrote: > > > Is there ever going to be rpms in > >

[ceph-users] Re: Rados gateway data-pool replacement.

2023-04-19 Thread Casey Bodley
On Wed, Apr 19, 2023 at 5:13 AM Gaël THEROND wrote: > > Hi everyone, quick question regarding radosgw zone data-pool. > > I’m currently planning to migrate an old data-pool that was created with > inappropriate failure-domain to a newly created pool with appropriate > failure-domain. > > If I’m

[ceph-users] Ceph stretch mode / POOL_BACKFILLFULL

2023-04-19 Thread Kilian Ries
Hi, we run a ceph cluster in stretch mode with one pool. We know about this bug: https://tracker.ceph.com/issues/56650 https://github.com/ceph/ceph/pull/47189 Can anyone tell me what happens when a pool gets to 100% full? At the moment raw OSD usage is about 54% but ceph throws me a
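To see how close the pool itself is to its limits, independent of raw OSD usage, something like the following helps (output fields vary slightly between releases):
  ceph df detail                     # per-pool STORED / %USED versus MAX AVAIL
  ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'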

[ceph-users] Rados gateway data-pool replacement.

2023-04-19 Thread Gaël THEROND
Hi everyone, quick question regarding radosgw zone data-pool. I’m currently planning to migrate an old data-pool that was created with inappropriate failure-domain to a newly created pool with appropriate failure-domain. If I’m doing something like: radosgw-admin zone modify --rgw-zone default
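One way to point a zone's placement target at a new data pool is zone placement modify; a sketch with placeholder placement id and pool name (note that existing objects are not moved automatically):
  radosgw-admin zone placement modify --rgw-zone default --placement-id default-placement --data-pool <new-data-pool>
  radosgw-admin period update --commit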

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-19 Thread Ilya Dryomov
On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote: > > yes, I used the same ecpool_hdd also for cephfs file systems. The new pool > ecpool_test I've created for a test, I've also created it with application > profile 'cephfs', but there aren't any cephfs filesystem attached to it. This is not
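Which applications a pool is actually tagged with can be checked directly; a sketch using the pool names from the thread:
  ceph osd pool application get ecpool_hdd    # e.g. shows 'cephfs' even if no filesystem currently uses the pool
  ceph osd pool application get ecpool_test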

[ceph-users] Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

2023-04-19 Thread Mario Giammarco
On Sat, 15 Apr 2023 at 11:10, Marco Gaiarin < g...@lilliput.linux.it> wrote: > > Sorry, I'm a bit puzzled here. > > Matthias suggests enabling the write cache, you suggest disabling it... or am I > cache-confused?! ;-) > > > > There is a cache in each disk, and a cache in the

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-19 Thread Reto Gysi
Yes, I used the same ecpool_hdd also for cephfs file systems. The new pool ecpool_test that I created for a test was also created with application profile 'cephfs', but there aren't any cephfs filesystems attached to it.
root@zephir:~# ceph fs status
backups - 2 clients
===
RANK STATE

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-19 Thread Ilya Dryomov
On Tue, Apr 18, 2023 at 11:34 PM Reto Gysi wrote: > > Ah, yes indeed I had disabled log-to-stderr in cluster wide config. > root@zephir:~# rbd -p rbd snap create ceph-dev@backup --id admin --debug-ms 1 > --debug-rbd 20 --log-to-stderr=true >/home/rgysi/log.txt 2>&1 Hi Reto, So "rbd snap

[ceph-users] Re: HBA or RAID-0 + BBU

2023-04-19 Thread Janne Johansson
On Wed, 19 Apr 2023 at 00:55, Murilo Morais wrote: > Good evening everyone! > Guys, about the P420 RAID controller, I have a question about the operation > mode: What would be better: HBA or RAID-0 with BBU (active write cache)? As already said, always give ceph (and zfs and btrfs..) the raw

[ceph-users] Re: HBA or RAID-0 + BBU

2023-04-19 Thread Anthony D'Atri
Are you baiting me? ;) HBA. Always. RAID HBAs are the devil. > On Apr 19, 2023, at 12:56 AM, Murilo Morais wrote: > > Good evening everyone! > > Guys, about the P420 RAID controller, I have a question about the operation > mode: What would be better: HBA or RAID-0 with BBU (active write