[ceph-users] HBA or RAID-0 + BBU

2023-04-18 Thread Murilo Morais
Good evening everyone! Guys, about the P420 RAID controller, I have a question about its operation mode: which would be better for Ceph OSDs, HBA mode or RAID-0 with BBU (write cache enabled)? Thanks in advance!
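For reference, a minimal sketch of inspecting an HP Smart Array controller with ssacli before deciding; the slot number is an assumption, and HBA-mode availability depends on the controller firmware:

    # controller details, including cache and battery/BBU status (slot=0 is an assumption)
    ssacli ctrl slot=0 show detail

    # how the disks are currently exposed: logical drives vs. pass-through
    ssacli ctrl slot=0 ld all show
    ssacli ctrl slot=0 pd all show

The trade-off commonly discussed on this list is that HBA/pass-through keeps the data path simple, while per-disk RAID-0 with a battery-backed write cache adds an extra firmware layer in front of the OSDs.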

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-18 Thread Reto Gysi
Ah, yes, indeed I had disabled log-to-stderr in the cluster-wide config.

root@zephir:~# rbd -p rbd snap create ceph-dev@backup --id admin --debug-ms 1 --debug-rbd 20 --log-to-stderr=true >/home/rgysi/log.txt 2>&1
root@zephir:~#

Here's the log.txt. On Tue, 18 Apr 2023 at 18:36, Ilya

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-18 Thread Reto Gysi
Hi Eugen

Yes, I used the default setting of rbd_default_pool='rbd'. I don't have anything set for rbd_default_data_pool.

root@zephir:~# ceph config show-with-defaults mon.zephir | grep -E "default(_data)*_pool"
osd_default_data_pool_replay_window 45

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-18 Thread Eugen Block
You don't seem to specify a pool name in the snap create command. Does your rbd_default_pool match the desired pool? And does rbd_default_data_pool match what you expect (if those values are even set)? I've never used custom values for those configs, but if you don't specify a pool
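A minimal sketch of checking those defaults and of naming the pool explicitly; the pool/image names follow this thread, and the output of `ceph config get` may simply be the built-in default when nothing was set:

    # effective client-side defaults
    ceph config get client rbd_default_pool
    ceph config get client rbd_default_data_pool

    # which data pool an existing image actually uses
    rbd info rbd/ceph-dev

    # create the snapshot with the pool spelled out instead of relying on defaults
    rbd snap create rbd/ceph-dev@backup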

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-18 Thread Ilya Dryomov
On Tue, Apr 18, 2023 at 5:45 PM Reto Gysi wrote:
>
> Hi Ilya
>
> Sure.
>
> root@zephir:~# rbd snap create ceph-dev@backup --id admin --debug-ms 1 --debug-rbd 20 >/home/rgysi/log.txt 2>&1

You probably have custom log settings in the cluster-wide config. Please append "--log-to-stderr true"

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-18 Thread Reto Gysi
Hi Ilya

Sure.

root@zephir:~# rbd snap create ceph-dev@backup --id admin --debug-ms 1 --debug-rbd 20 >/home/rgysi/log.txt 2>&1
root@zephir:~#

On Tue, 18 Apr 2023 at 16:19, Ilya Dryomov wrote:
> On Tue, Apr 18, 2023 at 3:21 PM Reto Gysi wrote:
> >
> > Hi,
> >
> > Yes both snap create

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-18 Thread Ilya Dryomov
On Tue, Apr 18, 2023 at 3:21 PM Reto Gysi wrote:
>
> Hi,
>
> Yes both snap create commands were executed as user admin:
> client.admin
>    caps: [mds] allow *
>    caps: [mgr] allow *
>    caps: [mon] allow *
>    caps: [osd] allow *
>
> deep scrubbing+repair of ecpool_hdd is

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-18 Thread Reto Gysi
Hi,

Yes, both snap create commands were executed as user admin:

client.admin
    caps: [mds] allow *
    caps: [mgr] allow *
    caps: [mon] allow *
    caps: [osd] allow *

Deep scrubbing+repair of ecpool_hdd is still ongoing, but so far the problem still exists. On Tue, 18 Apr 2023
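For reference, the caps listed above can be printed with a single command (assuming access to an admin keyring):

    # show the key and capabilities of the admin user
    ceph auth get client.admin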

[ceph-users] Re: ceph pg stuck - missing on 1 osd how to proceed

2023-04-18 Thread David Orman
You may want to consider disabling deep scrubs and scrubs while attempting to complete a backfill operation.

On Tue, Apr 18, 2023, at 01:46, Eugen Block wrote:
> I didn't mean you should split your PGs now, that won't help because
> there is already backfilling going on. I would revert the
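A minimal sketch of that suggestion; these are cluster-wide flags, so remember to unset them once the backfill has finished:

    # pause scrubbing and deep scrubbing while backfill catches up
    ceph osd set noscrub
    ceph osd set nodeep-scrub

    # after backfill completes, re-enable them
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub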

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-18 Thread Eugen Block
Hi,

In the meantime I did some further testing. I've created a new erasure-coded data pool 'ecpool_test', and if I create a new rbd image with this data pool I can create snapshots, but I can't create snapshots on either new or existing images with the existing data pool 'ecpool_hdd'.

Just one thought,
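A minimal sketch of that kind of test; the pool names follow this thread, while the image name and the use of the default EC profile are assumptions:

    # new erasure-coded data pool; RBD data pools need overwrites enabled
    ceph osd pool create ecpool_test erasure
    ceph osd pool set ecpool_test allow_ec_overwrites true
    ceph osd pool application enable ecpool_test rbd

    # image metadata stays in the replicated 'rbd' pool, data goes to the EC pool
    rbd create --size 10G --data-pool ecpool_test rbd/ectest-img
    rbd snap create rbd/ectest-img@backup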

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-18 Thread Lokendra Rathour
Yes, thanks, Robert. After installing ceph-common, the mount is working fine.

On Tue, Apr 18, 2023 at 2:10 PM Robert Sander wrote:
> On 18.04.23 06:12, Lokendra Rathour wrote:
>
> > but if I try mounting from a normal Linux machine with connectivity
> > to the Ceph mon nodes, it gives

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-18 Thread Robert Sander
On 18.04.23 06:12, Lokendra Rathour wrote:
> but if I try mounting from a normal Linux machine with connectivity
> to the Ceph mon nodes, it gives the error as stated before.

Have you installed ceph-common on the "normal Linux machine"?

Regards
--
Robert Sander
Heinlein Support GmbH
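A minimal sketch of what such a client needs, assuming a Debian/Ubuntu-style machine and a hypothetical mon FQDN, user name, and secret file:

    # install the Ceph client tools (provides mount.ceph, rbd and ceph CLIs)
    apt install ceph-common

    # with /etc/ceph/ceph.conf and a keyring copied from the cluster,
    # mount CephFS via a mon FQDN (hostname and paths are placeholders)
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret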

[ceph-users] Consequence of maintaining hundreds of clones of a single RBD image snapshot

2023-04-18 Thread Eyal Barlev
Hello,

My use case involves creating hundreds of clones (~1,000) of a single RBD image snapshot. I assume watchers exist for each clone, due to the copy-on-write nature of clones. Should I expect a penalty for maintaining such a large number of clones (CPU, memory, performance)? If such a penalty
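For context, a sketch of the pattern being described, with hypothetical pool, image and snapshot names:

    # one golden image, one protected snapshot, many copy-on-write clones
    rbd snap create rbd/golden@base
    rbd snap protect rbd/golden@base
    for i in $(seq 1 1000); do
        rbd clone rbd/golden@base rbd/clone-$i
    done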

[ceph-users] Re: ceph pg stuck - missing on 1 osd how to proceed

2023-04-18 Thread Eugen Block
I didn't mean you should split your PGs now; that won't help because there is already backfilling going on. I would revert the pg_num changes (since nothing has actually happened yet, there's no big risk) and wait for the backfill to finish. You don't seem to have inactive PGs, so it shouldn't
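A minimal sketch of reverting such a change; the pool name and previous value are placeholders:

    # show current pg_num (and any pending target) per pool
    ceph osd pool ls detail

    # set pg_num back to its previous value
    ceph osd pool set <pool> pg_num <previous_value>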