Hi,
the closest thing to your request that I see in a customer cluster is 186
RBD children of a single image, and nobody has complained yet. The
pools are all-flash with 60 SSD OSDs across 5 nodes and are used for
OpenStack. Regarding consistency during flattening, I haven't done
that
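For reference, checking and flattening a clone would look roughly like this (pool and image names below are just placeholders):
rbd children rbd/base-image@snap   # list the clones still referencing the parent snapshot
rbd flatten rbd/clone-image        # copy the parent's data into the clone so it no longer depends on the parent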
Hi Reto,
On Wed, Apr 19, 2023 at 9:34 PM Ilya Dryomov wrote:
>
> On Wed, Apr 19, 2023 at 5:57 PM Reto Gysi wrote:
> >
> >
> > Hi,
> >
> > On Wed, Apr 19, 2023 at 11:02 AM Ilya Dryomov wrote:
> >>
> >> On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote:
> >> >
> >> > yes, I used the
Hi,
I am using 17.2.6 on Rocky Linux for both the master and the slave site.
I noticed that:
radosgw-admin sync status
often shows that the metadata sync is a minute or two behind on the slave. This
didn't make sense, as the metadata isn't changing, as far as I know.
radosgw-admin mdlog list
(on
Hello Everyone,
We are going to install Ceph object storage with LDAP authentication.
We would like to know whether ACLs and quotas on objects and buckets work
correctly with LDAP users.
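For reference, the setup we have in mind is roughly the following (the URI and DNs are placeholders for our directory):
ceph config set client.rgw rgw_s3_auth_use_ldap true
ceph config set client.rgw rgw_ldap_uri ldaps://ldap.example.com
ceph config set client.rgw rgw_ldap_binddn "cn=rgw,ou=services,dc=example,dc=com"
ceph config set client.rgw rgw_ldap_secret /etc/ceph/ldap.secret
ceph config set client.rgw rgw_ldap_searchdn "ou=users,dc=example,dc=com"
ceph config set client.rgw rgw_ldap_dnattr uid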
Thanks
Ignazio
Actually, there was a firmware bug around that a while back. The HBA and
storcli claimed not to touch the drive cache, but were actually enabling it and
lying.
> On Apr 19, 2023, at 1:41 PM, Marco Gaiarin wrote:
>
> Hello, Mario Giammarco!
> On that day, you wrote...
>
>> The disk cache is:
Hello, Mario Giammarco!
On that day, you wrote...
> The disk cache is:
If the controller is not lying, the disk cache is disabled; see my previous
messages.
> The controller cache:
The manual says that for Non-RAID/Automatic RAID0 disks, «The only supported
cache policy for non-RAID disks is
Hi Ilya,
Ok, I've migrated the ceph-dev image to a separate ecpool for rbd and now
the backup works fine again.
root@zephir:~# umount /opt/ceph-dev
root@zephir:~# rbd unmap ceph-dev
root@zephir:~# rbd migration prepare --data-pool rbd_ecpool ceph-dev
root@zephir:~# rbd migration execute ceph-dev
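Once everything looks good, the only remaining step should be committing the migration (and, if I read the docs correctly, 'rbd migration abort' is the way back out before that point):
root@zephir:~# rbd migration commit ceph-dev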
Hello, Konstantin Shalygin!
On that day, you wrote...
> Current controller mode is RAID. You can switch to HBA mode and disable cache
> in controller settings at the BIOS
No, it is a bit more complex than that.
The controller does not have an 'HBA mode', but an 'AutoRAID0' mode.
--
Documentation is
Hello, Matthias Ferdinand!
On that day, you wrote...
> In the first linked mail, Dan van der Ster points to this page:
> https://docs.ceph.com/en/latest/start/hardware-recommendations/#write-caches
root@pppve1:~# for d in a b c d e f; do smartctl -g wcache /dev/sd$d | grep ^Write; done
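If any of them turn out to be enabled, turning the write cache off per disk should be something along these lines (sdX is a placeholder; the setting may not survive a power cycle on every drive):
root@pppve1:~# smartctl -s wcache,off /dev/sdX
root@pppve1:~# hdparm -W 0 /dev/sdX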
On Wed, Apr 19, 2023 at 5:57 PM Reto Gysi wrote:
>
>
> Hi,
>
> On Wed, Apr 19, 2023 at 11:02 AM Ilya Dryomov wrote:
>>
>> On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote:
>> >
>> > yes, I used the same ecpool_hdd also for cephfs file systems. The new pool
>> > ecpool_test I've created for
Hi,
On Wed, Apr 19, 2023 at 11:02 AM Ilya Dryomov wrote:
> On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote:
> >
> > yes, I used the same ecpool_hdd also for cephfs file systems. The new
> pool ecpool_test I've created for a test, I've also created it with
> application profile 'cephfs',
Hey folks – just thought I’d ask here – today cephadm creates unit files for
Ceph containers with --privileged in the “podman run” call. I saw
https://github.com/ceph/ceph-container/blob/main/src/daemon/README.md but I’m
not sure how this relates to Ceph when deployed with cephadm?
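For context, I was looking at the unit files that cephadm generates on the host, something like:
grep -e '--privileged' /var/lib/ceph/*/*/unit.run
(the /var/lib/ceph/<fsid>/<daemon>/unit.run path is what cephadm lays down here, at least on our hosts).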
Thanks,
Hi Marc,
I can share what we did a few months ago. As a remark, I am not sure
Nautilus is available on EL8, but maybe I missed it. In our case we took
the following path:
- Pacific to Octopus on EL7, traditionally managed
- Conversion of the cluster to a cephadm cluster, as it makes every
Sorry for bringing this up again, but I think there are quite a few still on
Nautilus that are planning such an upgrade.
Nautilus is currently available for el7, el8
Octopus is currently available for el7, el8
Pacific is currently available for el8, el9
Quincy is currently available for el8,
The LSI 9266/9271 are in an affected range as well, unless ECO'd.
> On Apr 19, 2023, at 3:13 PM, Sebastian wrote:
>
> I want add one thing to what other says, we discussed this between
> Cephalocon sessions, avoid HP controllers p210/420, or upgrade firmware to
> latest.
> These controllers has
I want to add one thing to what others have said; we discussed this between
Cephalocon sessions: avoid the HP P210/420 controllers, or upgrade their
firmware to the latest.
These controllers have a strange bug: under high workload they restart themselves.
BR,
Sebastian
> On 19 Apr 2023, at 08:39, Janne Johansson wrote:
It would be better to remove such folders, because they give the impression
that something is due
>
> On EL7 only Nautilus was present. Pacific was from EL8
>
>
> k
>
>
>
> On 17 Apr 2023, at 11:29, Marc wrote:
>
>
> Is there ever going to be rpms in
>
>
On Wed, Apr 19, 2023 at 5:13 AM Gaël THEROND wrote:
>
> Hi everyone, quick question regarding radosgw zone data-pool.
>
> I’m currently planning to migrate an old data-pool that was created with
> inappropriate failure-domain to a newly created pool with appropriate
> failure-domain.
>
> If I’m
Hi,
we run a ceph cluster in stretch mode with one pool. We know about this bug:
https://tracker.ceph.com/issues/56650
https://github.com/ceph/ceph/pull/47189
Can anyone tell me what happens when a pool gets to 100% full? At the moment
raw OSD usage is about 54% but ceph throws me a
Hi everyone, quick question regarding radosgw zone data-pool.
I’m currently planning to migrate an old data-pool that was created with
inappropriate failure-domain to a newly created pool with appropriate
failure-domain.
If I’m doing something like:
radosgw-admin zone modify --rgw-zone default
On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote:
>
> yes, I used the same ecpool_hdd also for cephfs file systems. The new pool
> ecpool_test I've created for a test, I've also created it with application
> profile 'cephfs', but there aren't any cephfs filesystem attached to it.
This is not
On Sat, Apr 15, 2023 at 11:10 AM Marco Gaiarin <g...@lilliput.linux.it> wrote:
>
> Sorry, I'm a bit puzzled here.
>
> Matthias suggests enabling the write cache, you suggest disabling it... or am I
> cache-confused?! ;-)
>
>
>
> There is a cache in each disk, and a cache in the
Yes, I used the same ecpool_hdd for CephFS file systems as well. The new pool
ecpool_test I created for a test; I also created it with the application
profile 'cephfs', but no CephFS filesystem is attached to it.
root@zephir:~# ceph fs status
backups - 2 clients
===
RANK STATE
On Tue, Apr 18, 2023 at 11:34 PM Reto Gysi wrote:
>
> Ah, yes, indeed I had disabled log-to-stderr in the cluster-wide config.
> root@zephir:~# rbd -p rbd snap create ceph-dev@backup --id admin --debug-ms 1 --debug-rbd 20 --log-to-stderr=true >/home/rgysi/log.txt 2>&1
Hi Reto,
So "rbd snap
On Wed, Apr 19, 2023 at 00:55 Murilo Morais wrote:
> Good evening everyone!
> Guys, about the P420 RAID controller, I have a question about the operation
> mode: What would be better: HBA or RAID-0 with BBU (active write cache)?
As already said, always give Ceph (and ZFS and Btrfs...) the raw
Are you baiting me? ;) HBA. Always. RAID HBAs are the devil.
> On Apr 19, 2023, at 12:56 AM, Murilo Morais wrote:
>
> Good evening everyone!
>
> Guys, about the P420 RAID controller, I have a question about the operation
> mode: What would be better: HBA or RAID-0 with BBU (active write