Hello, Frank Schilder!
The other day you wrote...
>> iops: min=2, max= 40, avg=21.13, stdev= 6.10, samples=929
>> iops: min=2, max= 42, avg=21.52, stdev= 6.56, samples=926
> That looks horrible.
Exactly, horrible.
The strange thing is that we came from a homegro
Hi,
Current controller mode is RAID. You can switch to HBA mode and disable the
cache in the controller settings in the BIOS.
k
Sent from my iPhone
> On 15 Apr 2023, at 12:11, Marco Gaiarin wrote:
>
> Hello, Frank Schilder!
> The other day you wrote...
>
>>> iops: min=2, max= 40, avg
Hi J-P Methot,
perhaps my response is a bit late, but this reminds me to some degree of an
issue we were facing just yesterday.
First of all, you might want to set debug-osd to 20 for this specific OSD
and see if the log becomes more helpful. Please share it if possible.
Secondly, I'm curious if the las
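A runtime way to apply that debug setting, sketched as shell; the OSD id 12 and the 20/20 log levels below are assumptions, not from the thread:

```shell
# Hypothetical OSD id -- substitute the one that is misbehaving.
OSD_ID=12

# Raise the OSD's log verbosity at runtime via 'ceph tell' (20/20 means
# file log level 20 and in-memory level 20 -- very verbose), then restore
# a sane default once the problem has reoccurred in the log.
RAISE_CMD="ceph tell osd.${OSD_ID} config set debug_osd 20/20"
RESTORE_CMD="ceph tell osd.${OSD_ID} config set debug_osd 1/5"

echo "$RAISE_CMD"
echo "$RESTORE_CMD"
```

'ceph tell' applies the change without restarting the daemon; remember to restore the default afterwards, since level-20 logs grow quickly.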
With the LSI HBAs I’ve used, HBA cache seemed to only be used for VDs, not
for passthrough drives, and even then with various nasty bugs. Be careful not
to conflate HBA cache with the cache on the HDD itself.
> On Apr 15, 2023, at 11:51 AM, Konstantin Shalygin wrote:
>
> Hi,
>
> Current controller
Hello Guillaume,
If the container build still works, I'd be very thankful for a hotfix
version.
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/UF3BSLMA7M3I2KK7BWJGZUJWKPGPZRSK/
Thank you!
On Tue, Jan 31, 2023 at 5:08 PM Jonas Nemeikšis wrote:
> Cool, thank you!
>
> On Tue, Jan
On Fri, 14 Apr 2023 at 18:04, Will Nilges wrote:
>
> Hello!
> I'm trying to install the ceph-common package on a Rocky Linux 9 box so
> that I can connect to our ceph cluster and mount user directories. I've
> added the ceph repo to yum.repos.d, but when I run `dnf install
> ceph-common`, I get th
Hi,
I observed duplicate object names in the result of an admin bucket list on a
15.2.12 cluster. I used the following command, and some of the object names
in the result list appeared more than once. There is no versioning config
for the bucket.
radosgw-admin bucket list --allow-unordered --max-en
After a critical node failure on my lab cluster (the node won't come back up
and is still down), the RBD objects are still being watched / mapped
according to Ceph. I can't shell into the node to rbd unmap them as the node
is down. I am absolutely certain that nothing is using these images and the
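One way out of that situation, sketched under assumptions (the pool/image names and the watcher address below are made up), is to find the stale watcher and blocklist its client address so the watch expires:

```shell
# Hypothetical pool and image names -- substitute your own.
POOL=rbd
IMAGE=vm-disk-1

# 'rbd status <pool>/<image>' prints the remaining watchers, including the
# client address of the dead node.
STATUS_CMD="rbd status ${POOL}/${IMAGE}"

# Blocklisting that address evicts the stale watch; on releases before
# Octopus the subcommand is 'blacklist' instead of 'blocklist'.
WATCHER_ADDR="192.168.0.10:0/123456"   # made-up address taken from 'rbd status'
EVICT_CMD="ceph osd blocklist add ${WATCHER_ADDR}"

echo "$STATUS_CMD"
echo "$EVICT_CMD"
```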
On Sat, 15 Apr 2023 at 15:47, mahnoosh shahidi wrote:
>
> Hi,
>
> I observed duplicate object names in the result of an admin bucket list on a
> 15.2.12 cluster. I used the following command, and some of the object names
> in the result list appeared more than once. There is no versioning config
> f
Hi,
to verify whether the object names are really duplicated, please try running
the following command:
'radosgw-admin bucket list --bucket <bucket> --allow-unordered | jq -r
".[].name" | sort | uniq -c | sort -h | tail'
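The counting part of that pipeline can be sanity-checked without a cluster; a self-contained sketch with made-up object names standing in for the listing output:

```shell
# Made-up object names standing in for the jq output of the bucket listing.
NAMES="obj-a
obj-b
obj-b
obj-c"

# Same tail of the pipeline: count occurrences per name; a count above 1
# means the name appeared more than once in the listing.
printf '%s\n' "$NAMES" | sort | uniq -c | sort -h | tail

# 'uniq -d' keeps only the duplicated names.
DUPES=$(printf '%s\n' "$NAMES" | sort | uniq -d)
echo "$DUPES"   # prints: obj-b
```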
On Sat, Apr 15, 2023 at 7:20 PM Janne Johansson wrote:
> On Sat, 15 Apr 2023 at 15:47, mahnoosh
Hi
I think the issue you are experiencing may be related to a bug that has
been reported in the Ceph project. Specifically, the issue is documented in
https://tracker.ceph.com/issues/58156, and a pull request has been
submitted and merged in https://github.com/ceph/ceph/pull/44090.
On Fri, Apr 1
Hi
You can use the script available at
https://github.com/TheJJ/ceph-balancer/blob/master/placementoptimizer.py to
check the status of backfilling and PG state, and also to cancel
backfilling using upmap. To view the movement status of all PGs in the
backfilling state, you can execute the command
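As a sketch of the intended invocations (the subcommand names and the flag below are assumptions; check ./placementoptimizer.py --help for the exact interface of the version you download):

```shell
# Hypothetical subcommands -- verify against --help before running.
SHOW_CMD="./placementoptimizer.py showremapped"                  # movement status of remapped/backfilling PGs
BALANCE_CMD="./placementoptimizer.py balance --max-pg-moves 10"  # propose a limited set of upmap moves

echo "$SHOW_CMD"
echo "$BALANCE_CMD"
```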
Hi,
Thank you for your answer, yes this seems to be exactly my issue. The pull
request related to the issue is this one:
https://github.com/ceph/ceph/pull/49199 and it is not (yet?) merged into the
Quincy release. Hopefully this will happen before the next major release,
because I cannot run a
It's funny, I had the EPEL repos added yesterday, and no joy. Left it alone
for a day, and voila, packages exist. Must have forgotten to update or
clear my cache or something?
Thanks for the help!
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
Hi,
Any input will be of great help.
Thanks once again.
Lokendra
On Fri, 14 Apr 2023, 3:47 pm Lokendra Rathour wrote:
> Hi Team,
> there is one additional observation.
> Mounting as the client works fine from one of the Ceph nodes.
> Command: sudo mount -t ceph :/ /mnt/imgs -o
> name=foo
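For comparison, a fully spelled-out variant of that mount command; the monitor address and secret-file path below are made up, not from the thread:

```shell
# Hypothetical monitor endpoint and keyring secret file -- substitute yours.
MON=192.168.1.10:6789
MOUNT_CMD="sudo mount -t ceph ${MON}:/ /mnt/imgs -o name=foo,secretfile=/etc/ceph/foo.secret"

echo "$MOUNT_CMD"
```

When the monitor address is omitted (the bare ':/' form used above in the thread), mount.ceph falls back to the cluster configuration in /etc/ceph/ceph.conf, which is why the mount works from a Ceph node but not necessarily from an external client.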