ou tell:
> 1 : device size 0x6fc7c0 : own
> 0x[4~4e0,12f7~2252d,23b06~21a23,4583e~20f89,6b163~2,35a78f~478a2]
> = 0xccc5b : using 0xa60ed0000(42 GiB) : bluestore has 0x62e79f(396
> GiB)
> available
> wal_total:0,
mitigates spillover. Previously a, say, 29GB DB
> device/partition
> would be like 85% unused.
> With recent releases one can also turn on DB compression, which should have a
> similar benefit.
>> On Nov 12, 2024, at 11:25 AM, Frédéric Nass
>> wrote:
>> Hi Anthony,
> https://www.ibm.com/docs/en/storage-ceph/7.1?topic=bluestore-resharding-rocksdb-database
>> On Nov 12, 2024, at 8:02 AM, Alexander Patrakov wrote:
>> Yes, that is correct.
>> On Tue, Nov
> use_some_extra), it no longer wastes the extra capacity of the DB
> device.
>
> On Tue, Nov 12, 2024 at 5:52 PM Frédéric Nass
> wrote:
>>
>>
>>
>> - On 12 Nov 24, at 8:51, Roland Giesler rol...@giesler.za.net wrote:
>>
>> > On 2024/11
- On 12 Nov 24, at 8:51, Roland Giesler rol...@giesler.za.net wrote:
> On 2024/11/12 04:54, Alwin Antreich wrote:
>> Hi Roland,
>>
>> On Mon, Nov 11, 2024, 20:16 Roland Giesler wrote:
>>
>>> I have ceph 17.2.6 on a proxmox cluster and want to replace some SSDs
>>> that are end of life. I
to change rocksdb options for META data reduction?
> On Fri, Nov 8, 2024 at 12:53 AM, Frédéric Nass
> <frederic.n...@univ-lorraine.fr> wrote:
>> Hi,
>> You could give rocksdb compression a try. It's safe to use si
/cephadm/serve.py#L475C4-L475C41
- On 7 Nov 24, at 16:28, Frédéric Nass frederic.n...@univ-lorraine.fr wrote:
> Hi,
>
> We're encountering this unexpected behavior as well. This tracker [1] was
> created 4 months ago.
>
> Regards,
> Frédéric.
>
> [1] https
for Managed OSDs. It's dynamically
> created when the system comes up and will simply re-create itself. Which
> is why it's easier to purge the artefacts of a legacy OSD.
>
> Tim
>
>
> On 11/7/24 10:28, Frédéric Nass wrote:
>> Hi,
>>
>> We
don't need to do
> the extra steps, as well as probably allowing service actions against the
> "osd" service (even though it's just a placeholder in reality) but none of
> that exists currently.
>
> On Wed, Nov 6, 2024 at 11:50 AM Tim Holloway wrote:
>
>&
Hi,
We're encountering this unexpected behavior as well. This tracker [1] was
created 4 months ago.
Regards,
Frédéric.
[1] https://tracker.ceph.com/issues/67018
- On 6 Dec 22, at 8:41, Holger Naundorf naund...@rz.uni-kiel.de wrote:
> Hello,
> a mgr failover did not change the situation
Hi,
You could give rocksdb compression a try. It's safe to use since Pacific and
it's now enabled by default in Squid:
$ ceph config set osd bluestore_rocksdb_options_annex
'compression=kLZ4Compression'
Restart all OSDs and compact them twice. You can check db_used_bytes before and
after enab
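For example (a hedged sketch; osd.0 is an arbitrary OSD id, and the daemon-socket
form assumes it is run on that OSD's host or via 'cephadm shell'):
$ ceph daemon osd.0 perf dump bluefs | grep -E 'db_total_bytes|db_used_bytes'   # before
# restart the OSD, then compact it twice and check again
$ ceph tell osd.0 compact
$ ceph tell osd.0 compact
$ ceph daemon osd.0 perf dump bluefs | grep -E 'db_total_bytes|db_used_bytes'   # after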
- On 6 Nov 24, at 12:29, Eugen Block ebl...@nde.ag wrote:
> Dave,
>
> I noticed that the advanced osd spec docs are missing a link to
> placement-by-pattern-matching docs (thanks to Zac and Adam for picking
> that up):
>
> https://docs.ceph.com/en/latest/cephadm/services/#placement-by-pa
- On 1 Nov 24, at 19:28, Dave Hall kdh...@binghamton.edu wrote:
> Tim,
>
> Actually, the links that Eugen shared earlier were sufficient. I ended up
> with
>
> service_type: osd
> service_name: osd
> placement:
>   host_pattern: 'ceph01'
> spec:
>   data_devices:
>     rotational: 1
>   db_d
Hi Niklas,
To explain the 33% misplaced objects after you move a host to another DC, one
would have to check the current crush rule (ceph osd getcrushmap | crushtool -d
-) and which OSDs the PGs are mapped to before and after the move operation
(ceph pg dump).
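Something along these lines (a rough sketch; pgs.before/pgs.after are just
example file names):
$ ceph osd getcrushmap | crushtool -d -        # decompiled crush map, including the rules
$ ceph osd crush rule dump                     # same rules in JSON form
$ ceph pg dump pgs_brief > pgs.before          # PG-to-OSD mappings before the move
# ... move the host to the other DC ...
$ ceph pg dump pgs_brief > pgs.after
$ diff pgs.before pgs.after | less             # shows which PGs got remapped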
Regarding the replicated crush rul
Hi Istvan,
Is your upgraded cluster using the wpq or mclock scheduler? (ceph tell osd.X config
show | grep osd_op_queue)
Maybe your OSDs set their osd_mclock_max_capacity_iops_* capacity too low on
start (ceph config dump | grep osd_mclock_max_capacity_iops) limiting their
performance.
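For example (a hedged sketch; osd.0 and the _ssd suffix are examples, adjust to
your OSD ids and device class):
$ ceph tell osd.0 config show | grep osd_op_queue          # wpq or mclock_scheduler?
$ ceph config dump | grep osd_mclock_max_capacity_iops     # any stored per-OSD capacities?
# if a stored capacity looks far too low, removing it lets the OSD re-measure on restart
$ ceph config rm osd.0 osd_mclock_max_capacity_iops_ssd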
You might w
t 12:46 PM Sake Ceph wrote:
>
> > I hope someone from the development team can shed some light on this. I will
> > search the tracker to see if someone else made a request about this.
> >
> > > Op 29-10-2024 16:02 CET schreef Frédéric Nass <
> > frederic.n...@univ-lorra
Hi,
I'm not aware of any service settings that would allow that.
You'll have to monitor each MDS state and restart any non-local active MDSs to
reverse roles.
Regards,
Frédéric.
- Le 29 Oct 24, à 14:06, Sake Ceph c...@paulusma.eu a écrit :
> Hi all
> We successfully deployed a stretched c
>
>
>>
>> /maged
>>
>> On 23/10/2024 06:54, Vigneshwar S wrote:
>> > Hi Frédéric,
>> >
>> > The 5a section states that the divergent events would be
>> tracked and
>> > deleted. Bu
- On 25 Oct 24, at 18:21, Frédéric Nass frederic.n...@univ-lorraine.fr wrote:
> - On 25 Oct 24, at 16:31, Bob Gibson r...@oicr.on.ca wrote:
>
>> Hi Frédéric,
>>
>>> I think this message shows up because this very specific post-adoption 'osd'
>&
- On 25 Oct 24, at 16:31, Bob Gibson r...@oicr.on.ca wrote:
> Hi Frédéric,
>
>> I think this message shows up because this very specific post-adoption 'osd'
>> service
>> has already been marked as 'deleted'. Maybe when you ran the command for the
>> first time.
>> The only reason it still sh
- On 23 Oct 24, at 20:14, Bob Gibson r...@oicr.on.ca wrote:
> Sorry to resurrect this thread, but while I was able to get the cluster
> healthy
> again by manually creating the osd, I'm still unable to manage osds using the
> orchestrator.
>
> The orchestrator is generally working, but I
Hi Marc,
Make sure you have a look at CrowdSec [1] for distributed protection. It's well
worth the time.
Regards,
Frédéric.
[1] https://github.com/crowdsecurity/crowdsec
From: Marc
Sent: Thursday, October 24, 2024 22:52
To: Ken Dreyer
Cc: ceph-users
Subject: [cep
Hi Edouard,
For each subvolume listed by 'ceph fs subvolume ls cephfs csi', you could get
its PV name with 'rados listomapvals' and then check if this PV still exists or
not in K8s:
$ ceph fs subvolume ls cephfs csi
[
{
"name": "csi-vol-fab753bf-c4c0-42d0-98d4-8dd1caf5055f"
}
]
$ rados
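A hedged sketch of how the rados check might continue (this assumes cephfs-csi
defaults: omap objects named csi.volume.<uuid> in the 'csi' namespace of the
filesystem's metadata pool; pool name and namespace may differ in your deployment):
$ rados -p cephfs_metadata --namespace csi listomapvals \
    csi.volume.fab753bf-c4c0-42d0-98d4-8dd1caf5055f
# the csi.volname key holds the PV name; check whether it still exists in K8s
$ kubectl get pv | grep <csi.volname value>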
t peers with C and looks at
> the
> history and thinks they’ve no divergent events and keep the object right?
> Regards,
> Vigneshwar
> On Wed, 23 Oct 2024 at 9:09 AM, Frédéric Nass <frederic.n...@univ-lorraine.fr>
> wr
Hi Vigneshwar,
You might want to check the '5a' section of the peering process documentation
[1].
Regards,
Frédéric.
[1]
https://docs.ceph.com/en/reef/dev/peering/#description-of-the-peering-process
From: Vigneshwar S
Sent: Tuesday, October 22, 2024 11:05
To:
Hi Dave,
After removing the per-host osd_memory_target settings with the command Alex just
shared, I would advise you to disable swap and reboot these OSD nodes (the
reboot step is important). In the past we've had issues with swap interfering
badly with OSD memory calculation, ending up with OSDs swapp
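A hedged sketch of disabling swap persistently on such a node (assumes a
standard /etc/fstab swap entry):
$ swapoff -a                                     # disable swap immediately
$ sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab      # keep it disabled across reboots (backup in fstab.bak)
$ systemctl reboot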
very_ops.
>
> > I guess they got lost while upgrading? I don't know, but this seems
> to be the solution.
>
>
> ceph orch apply ceph-exporter
>
> ceph orch redeploy prometheus
>
>
> Best
>
> inDane
>
> On
ve any additional suggestions for troubleshooting this issue further?
> Thanks again for your help!
> Best regards,
> Sanjay Mohan
> Software Defined Storage Engineer
> sanjaymo...@am.amrita.edu
> From: Frédéric Nass
> Sent: 21 October 2024 1:22 PM
> To: Sanjay Mohan
>
Hi Sanjay,
I've just checked the dashboard of a v19.2.0 cluster, and the recovery
throughput is displayed correctly, as shown in the screenshot here [1]. You
might want to consider redeploying the dashboard.
Regards,
Frédéric.
[1] https://docs.ceph.com/en/latest/mgr/dashboard/
- On 19 Oct
Hi Malte,
Check this solution posted here [1] by Alex.
Cheers,
Frédéric.
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/PEJC7ANB6EHXWE2W4NIGN2VGBGIX4SD4/
From: Malte Stroem
Sent: Thursday, October 17, 2024 20:24
To: Eugen Block; ceph-users@ce
Hi Harry,
Do you have a 'cluster_network' set to the same subnet as the 'public_network'
like in the issue [1]? It doesn't make much sense to set up a cluster_network when
it's not different from the public_network.
Maybe that's what triggers the OSD_UNREACHABLE check recently added here [2] (even
thoug
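A quick hedged way to check what is actually configured:
$ ceph config dump | grep -E 'cluster_network|public_network'
$ grep -E 'cluster.network|public.network' /etc/ceph/ceph.conf   # if set in the conf file (matches both '_' and ' ' spellings)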
On 09.10.24 15:48, Frédéric Nass wrote:
>> There's this --yes-i-really-mean-it option you could try but only after
>> making
>> sure that all OSDs are actually running Pacific.
>>
>> What does a 'ceph versions' says? Did you restart all OSDs after the upgrad
root@helper:~# ceph osd require-osd-release mimic
> Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_MIMIC feature
>
> On 09.10.24 15:18, Frédéric Nass wrote:
>> Here's an example of what a Pacific cluster upgraded from Hammer shows:
>>
>> $ ceph osd dump | head -13
luminous
> stretch_mode_enabled false
>
> You are right, I didn't run such commands. This is because I have two
> other clusters that I have gradually upgraded to Quincy from Luminous,
> but following the Proxmox instructions, and I don't see any such commands there.
>
> On 09.10
a7d43a51b03) pacific
> (stable)
> 3.
> Yes, now the MON and the OSDs have started, but the OSDs cannot connect to the MON. At the
> same time, the MON journal has this message:
> disallowing boot of octopus+ OSD osd.xx
> And I tried rebuilding the MON with this Ceph (Pacific) version and it is runn
- On 8 Oct 24, at 15:24, Alex Rydzewski rydzewski...@gmail.com wrote:
> Hello, dear community!
>
> I kindly ask for your help in resolving my issue.
>
> I have a server with a single-node CEPH setup with 5 OSDs. This server
> has been powered off for about two years, and when I needed the
Hey Eugen,
Check this one here: https://github.com/ceph/ceph/pull/55534
It's fixed in 18.2.4 and should be in upcoming 17.2.8.
Cheers,
Frédéric.
From: Eugen Block
Sent: Thursday, October 3, 2024 23:21
To: ceph-users@ceph.io
Subject: [ceph-users] Re: cephadm crush_d
- On 2 Oct 24, at 16:21, Victor Rodriguez wrote:
>> Hi,
>> What makes this cluster a non-local cluster?
> It's hosted in OVH's 3AZ, with each host in a different DC, each around
> 30-60 km away from the others, hence the relatively high latency.
Yeah, 0.6 to 1 ms are expected latencies
BTW, running the first Squid stable release (v19.2.0) in production seems a bit
audacious at this time. :-)
Frédéric.
- On 2 Oct 24, at 9:03, Frédéric Nass frederic.n...@univ-lorraine.fr wrote:
> Hi,
>
> Probably. This one [1] was posted 2 months ago. No investigations yet.
>
Hi,
Probably. This one [1] was posted 2 months ago. No investigations yet.
Maybe it's EL9.4 and/or podman version related.
Regards,
Frédéric.
[1] https://tracker.ceph.com/issues/67517
- On 1 Oct 24, at 16:43, Sascha Frey s...@techfak.net wrote:
> Hi,
>
> after upgrading our Ceph cluster fro
Hi,
What makes this cluster a non-local cluster?
0.6 and 1 millisecond RTT latencies seem too high for all-flash clusters and
intense 4K write workloads.
The upmap-read or read balancer modes may help with reads but not writes where
1.2ms+ latency will still be observed.
Regards,
Frédéric.
---
Hi Alex,
Maybe it's this one [1], which leads to osd/mon asserts. Have a look at Laura's post
here [2] for more information.
Updating clients to Reef+ (not sure which kernel added the upmap read feature)
or removing any pg_upmap_primaries entries may help in your situation.
Regards,
Frédéric.
[1]
Hi George,
Looks like you hit this one [1]. I can't find the fix [2] in the Reef release notes
[3]. You'll have to cherry-pick it and build from source, or wait for it to land in
the next release.
Regards,
Frédéric.
[1] https://tracker.ceph.com/issues/58878
[2] https://github.com/ceph/ceph/pull/55265
[3] https
Hi Burkhard,
This is a known issue. We ran into it a few months back using VS Code containers
working on CephFS under Kubernetes.
Tweaking the settings.json file as suggested by Dietmar here [1] did the
trick for us.
Regards,
Frédéric.
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ce
error on a bucket
> with 0 shards. This bucket does have 0 shards when I check the stats.
> TBH I'm pretty sure there are tons and tons of leftover rados objects in our
> cluster. The radosgw service has crashed so many times since the inception of
> this cluster (50x a day fo
- On 19 Sep 24, at 16:34, Reid Guyett wrote:
> Hi,
> I didn't notice any changes in the counts after running the check --fix |
> check
> --check-objects --fix. Also the bucket isn't versioned.
> I will take a look at the index vs the radoslist. Which side w
orage.domain ] s3api abort-multipart-upload --bucket
>> mimir-prod --key "network/01H9CFRA45MJWBHQRCHRR4JHV4/index" --upload-id
>> "sJRTCoqiZvlge2cjz6gLU7DwuLI468zo.2"
>> An error occurred (NoSuchUpload) when calling the AbortMultipartUpload
>> operati
RR4JHV4/index" --upload-id
>> "sJRTCoqiZvlge2cjz6gLU7DwuLI468zo.2"
>> An error occurred (NoSuchUpload) when calling the AbortMultipartUpload
>> operation: Unknown
> I seem to have many buckets with this type of state. I'm hoping to be able to
> fix them.
>
Hi Laszlo,
I think it depends on the type of cluster you're trying to build.
If made of HDD/SSD OSDs, then 2 cores per OSD is probably still valid. I
believe the 5-6 cores per OSD recommendation you mentioned relates to all
flash (NVMe) clusters where CPUs and especially memory bandwidth can't
Hi Reid,
The bucket check --fix will not clean up aborted multipart uploads. An S3
client will.
You need to either set a Lifecycle policy on buckets to have these cleaned up
automatically after some time
~/ cat /home/lifecycle.xml
3
Enabled
~/ s3cmd se
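For reference, a minimal sketch of such a lifecycle policy and how it might be
applied with s3cmd (the rule ID, the 3-day delay, and the bucket name are only
examples):
$ cat /home/lifecycle.xml
<LifecycleConfiguration>
  <Rule>
    <ID>abort-incomplete-mpu</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <AbortIncompleteMultipartUpload>
      <DaysAfterInitiation>3</DaysAfterInitiation>
    </AbortIncompleteMultipartUpload>
  </Rule>
</LifecycleConfiguration>
$ s3cmd setlifecycle /home/lifecycle.xml s3://mybucket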
Hey,
Yes, you can use either of these commands, depending on whether or not you are
using containers, to get a live OSD's BlueStore fragmentation:
ceph daemon osd.0 bluestore allocator score block
or
cephadm shell ceph daemon osd.0 bluestore allocator score block
...
{
"fragmentation_rating":
As a reminder, there's this one waiting ;-)
https://tracker.ceph.com/issues/66641
Frédéric.
PS: For the record, Andre's problem was related to the 'caps'
(https://www.reddit.com/r/ceph/comments/1ffzfjc/ceph_rbd_werasure_coding/)
- On 15 Sep 24, at 18:02, Anthony D'Atri anthony.da...@gmail.c
c.
- On 9 Sep 24, at 17:15, Frédéric Nass frederic.n...@univ-lorraine.fr wrote:
> Hi Istvan,
>
> This can only ease when adding new storage capacity to the cluster (and maybe
> when data migration is involved like when changing cluster's topology or crush
> rules?).
>
Hi Istvan,
This can only ease when adding new storage capacity to the cluster (and maybe
when data migration is involved like when changing cluster's topology or crush
rules?).
When adding new nodes, PGs will be remapped to make use of the new OSDs, which
will trigger some data migration. The
it be that the LV
> is
> not correctly mapped?
> Basically here the question is: is there a way to recover the data of an OSD
> in
> an LV, if it was ceph osd purge before the cluster had a chance to replicate
> it
> (after ceph osd out )?
> Thanks for your time!
> fm
Hi Marco,
Have you checked the output of:
dd if=/dev/ceph-xxx/osd-block-x of=/tmp/foo bs=4K count=2
hexdump -C /tmp/foo
and:
/usr/bin/ceph-bluestore-tool show-label --log-level=30 --dev /dev/nvmexxx -l
/var/log/ceph/ceph-volume.log
to see if it's aligned with OSD's metadata.
You
hich is already
> present in 'ceph orch ps --daemon-type' command. You could either
> drain a specific daemon-type or drain the entire host (can be the
> default with the same behaviour as it currently works). That would
> allow more control about non-osd daemons.
>
> Zitat
- On 19 Aug 24, at 15:45, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote:
> On Sat, Aug 17, 2024 at 9:12 AM Anthony D'Atri wrote:
>>
>> > It's going to wreak havoc on search engines that can't tell when
>> > someone's looking up Ceph versus the long-established Squid Proxy.
>>
>> Search engin
when a
> label
> is removed from the host the services eventually drain.
>
>
>
> -Original Message-
> From: Frédéric Nass
> Sent: Thursday, August 29, 2024 11:30 AM
> To: Eugen Block
> Cc: ceph-users ; dev
> Subject: [ceph-users] Re: ceph orch host drain
Hello Eugen,
A month back, while playing with a lab cluster, I drained a multi-service host
(OSDs, MGR, MON, etc.) in order to recreate all of its OSDs. During this
operation, all cephadm containers were removed as expected, including the MGR.
As a result, I got into a situation where the orche
Hi Nicola,
You might want to post in the ceph-dev list about this or discuss it with devs
in the ceph-devel slack channel for quicker help.
Bests,
Frédéric.
From: Nicola Mori
Sent: Wednesday, August 21, 2024 15:52
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Pu
Hi Dario,
A workaround may be to downgrade the client's kernel or ceph-fuse to a
version lower than those listed in Enrico's comment #22, I believe.
I can't say for sure though, since I couldn't verify it myself.
Cheers,
Frédéric.
From: Dario Graña
Sent: Fri
sure you don't run out of disk space.
Best regards,
Frédéric.
From: Best Regards
Sent: Thursday, August 8, 2024 11:32
To: Frédéric Nass
Cc: ceph-users
Subject: Re: Re: Re: [ceph-users] Re: Please guide us in identifying
the cause of the data miss in EC pool
Hi,Fr
crashed.
Your thoughts?
Frédéric.
From: Best Regards
Sent: Thursday, August 8, 2024 09:16
To: Frédéric Nass
Cc: ceph-users
Subject: Re: [ceph-users] Re: Please guide us in identifying the cause of the data
miss in EC pool
Hi, Frédéric Nass
Yes. I checked the host running
déric.
From: Best Regards
Sent: Thursday, August 8, 2024 08:10
To: Frédéric Nass
Cc: ceph-users
Subject: Re: Re: Re: Re: Re: Re: Re: Re: [ceph-users] Please guide us in identifying
the cause of the data miss in EC pool
Hi, Frédéric Nass
Thank you for your continued attention and guidance. Let's a
Hi,
You're right. The object reindex subcommand backport was rejected for P and is
still pending for Q and R. [1]
Use rgw-restore-bucket-index script instead.
Regards,
Frédéric.
[1] https://tracker.ceph.com/issues/61405
From: vuphun...@gmail.com
Sent: Wed
Hi,
First thing that comes to mind when it comes to data unavailability or
inconsistencies after a power outage is that some dirty data may have been lost
along the IO path before reaching persistent storage. This can happen with non
enterprise grade SSDs using non-persistent cache or with HDDs
Hello,
Not sure this exactly matches your case but you could try to reindex those
orphan objects with 'radosgw-admin object reindex --bucket {bucket_name}'. See
[1] for command arguments, like realm, zonegroup, zone, etc.
This command scans the data pool for objects that belong to a given bucket
Hi Huy,
The sync result you posted earlier appears to be from the master zone. Have you
checked the secondary zone with 'radosgw-admin sync status --rgw-zone=hn2'?
Can you check that:
- sync user exists in the realm with 'radosgw-admin user list
--rgw-realm=multi-region'
- sync user's access_key an
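For instance (a rough sketch; the zone and realm names are taken from this
thread and may differ in your setup):
$ radosgw-admin sync status --rgw-zone=hn2
$ radosgw-admin user list --rgw-realm=multi-region
$ radosgw-admin zone get --rgw-zone=hn2 | grep -A2 system_key    # access_key/secret_key the zone syncs with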
Hi Josh,
Thank you for sharing this information.
Can I ask what symptoms made you interested in tombstones?
For the past few months, we've been observing successive waves of a large
number of OSDs overspilling. When the phenomenon occurs, we automatically
compact the OSDs (on the fly, one a
ces, came back later and
succeeded.
Maybe that explains it.
Cheers,
Frédéric.
- On 17 Jul 24, at 16:22, Frédéric Nass frederic.n...@univ-lorraine.fr wrote:
> - On 17 Jul 24, at 15:53, Albert Shih albert.s...@obspm.fr wrote:
>
>> On 17/07/2024 at 09:40:59+0200,
kend.
>>
>> But v2 is absent on the public OSD and MDS network
>>
>> The specific point is that the public network has been changed.
>>
>> At first, I thought it was the order of declaration of my_host (v1 before v2)
>> but apparently, that's
Hi David,
Redeploying 2 out of 3 MONs a few weeks back (to have them using RocksDB to be
ready for Quincy) prevented some clients from connecting to the cluster and
mounting cephfs volumes.
Before the redeploy, these clients were using port 6789 (v1) explicitly as
connections wouldn't work wit
Hi Rudenko,
There's been this bug [1] in the past preventing the BlueFS spillover alert from
popping up in 'ceph -s' due to some code refactoring. You might just be facing
overspilling without noticing it.
I'm saying this because you're running v16.2.13 and this bug was fixed in
v16.2.14 (by [3], based on Pacifi
--
> Agoda Services Co., Ltd.
> e: istvan.sz...@agoda.com
> -------
> From: Frédéric Nass
> Sent: Friday, July 12, 2024 6:52 PM
> To: Richard Bade ; Szabo, Istvan (Agoda)
>
> Cc: Cas
- On 11 Jul 24, at 20:50, Dave Hall kdh...@binghamton.edu wrote:
> Hello.
>
> I would like to use mirroring to facilitate migrating from an existing
> Nautilus cluster to a new cluster running Reef. Right now I'm looking at
> RBD mirroring. I have studied the RBD Mirroring section of th
- On 11 Jul 24, at 0:23, Richard Bade hitr...@gmail.com wrote:
> Hi Casey,
> Thanks for that info on the bilog. I'm in a similar situation with
> large omap objects and we have also had to reshard buckets on
> multisite losing the index on the secondary.
> We also now have a lot of bucket
/ceph-${osd}/block --dev-target
/var/lib/ceph/osd/ceph-${osd}/block.db
3/ ceph orch daemon start osd.${osd}
4/ ceph tell osd.${osd} compact
Regards,
Frédéric.
- On 8 Jul 24, at 17:39, Frédéric Nass frederic.n...@univ-lorraine.fr wrote:
> Hello,
>
> I just wanted to share
Hello,
I just wanted to share that the following command also helped us move slow used
bytes back to the fast device (without using bluefs-bdev-expand), when several
compactions couldn't:
$ cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool
bluefs-bdev-migrate --path /var/lib/c
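Fully spelled out, the sequence might look like this (a hedged sketch; adjust
$cid and ${osd} to your cluster, and stop the OSD first):
$ ceph orch daemon stop osd.${osd}
$ cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-${osd} \
      --devs-source /var/lib/ceph/osd/ceph-${osd}/block \
      --dev-target /var/lib/ceph/osd/ceph-${osd}/block.db
$ ceph orch daemon start osd.${osd}
$ ceph tell osd.${osd} compact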
Hi,
Another way to prevent data movement at OSD creation time (apart from using
norebalance and nobackfill cluster flags) is to pre-create the host buckets in
another root, named for example "closet", let the orchestrator create the OSDs
and move these host buckets to their final bucket locati
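A rough sketch of that approach (the 'closet' root and 'node01' host bucket are
only examples):
$ ceph osd crush add-bucket closet root        # temporary root, not referenced by any crush rule
$ ceph osd crush add-bucket node01 host
$ ceph osd crush move node01 root=closet       # pre-create the host bucket outside the data root
# ... let the orchestrator create the OSDs on node01 ...
$ ceph osd crush move node01 root=default      # finally move the host (and its OSDs) into place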
We came to the same conclusions as Alexander when we studied replacing Ceph's
iSCSI implementation with Ceph's NFS-Ganesha implementation: HA was not working.
During failovers, vmkernel would fail with messages like this:
2023-01-14T09:39:27.200Z Wa(180) vmkwarning: cpu18:2098740)WARNING: NFS41:
- On 28 Jun 24, at 15:27, Anthony D'Atri anthony.da...@gmail.com wrote:
>>>
>>> But this in a spec doesn't match it:
>>>
>>> size: '7000G:'
>>>
>>> This does:
>>>
>>> size: '6950G:'
>
> There definitely is some rounding within Ceph, and base 2 vs base 10
> shenanigans.
>
>>
>> $ ce
- On 26 Jun 24, at 10:50, Torkil Svensgaard tor...@drcmr.dk wrote:
> On 26/06/2024 08:48, Torkil Svensgaard wrote:
>> Hi
>>
>> We have a bunch of HDD OSD hosts with DB/WAL on PCI NVMe, either 2 x
>> 3.2TB or 1 x 6.4TB. We used to have 4 SSDs pr node for journals before
>> bluestore and t
com/show_bug.cgi?id=2219373
[2] https://github.com/ceph/ceph/pull/53803
- On 28 Jun 24, at 10:34, Torkil Svensgaard tor...@drcmr.dk wrote:
> On 27-06-2024 10:56, Frédéric Nass wrote:
>> Hi Torkil, Ruben,
>
> Hi Frédéric
>
>> I see two theoretical ways to do this wi
Hi Torkil, Ruben,
I see two theoretical ways to do this without an additional OSD service. One that
probably doesn't work :-) and another one that could work, depending on how the
orchestrator prioritizes its actions based on service criteria.
The one that probably doesn't work is by specifying mul
Hello Wesley,
I couldn't find any tracker related to this and since min_size=1 has been
involved in many critical situations with data loss, I created this one:
https://tracker.ceph.com/issues/66641
Regards,
Frédéric.
- On 17 Jun 24, at 19:14, Wesley Dillingham w...@wesdillingham.com wrote:
Hello,
'ceph osd deep-scrub 5' deep-scrubs all PGs for which osd.5 is primary (and
only those).
You can check that from ceph-osd.5.log by running:
for pg in $(grep 'deep-scrub starts' /var/log/ceph/*/ceph-osd.5.log | awk
'{print $8}') ; do echo "pg: $pg, primary osd is osd.$(ceph pg $pg query -
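Alternatively, a shorter hedged check (osd.5 as in the example above):
$ ceph pg ls-by-primary osd.5      # lists the PGs whose primary is osd.5, i.e. the ones that get deep-scrubbed
$ ceph pg deep-scrub <pgid>        # deep-scrub a single PG instead of the whole OSD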
Hello Petr,
- On 4 Jun 24, at 12:13, Petr Bena petr@bena.rocks wrote:
> Hello,
>
> I wanted to try out (lab ceph setup) what exactly is going to happen
> when parts of the data on an OSD disk get corrupted. I created a simple test
> where I was going through the block device data until I found
Hi Joshua,
These messages actually deserve more attention than you think, I believe. You
may hit this one [1] that Mark (comment #4) also hit with 16.2.10 (RHCS 5).
PR's here: https://github.com/ceph/ceph/pull/51669
Could you try raising osd_max_scrubs to 2 or 3 (now defaults to 3 in quincy and
Hello Robert,
You could try:
ceph config set mgr mgr/cephadm/container_image_nvmeof
"quay.io/ceph/nvmeof:1.2.13" or whatever image tag you need (1.2.13 is current
latest).
Another way to run the image is by editing the unit.run file of the service or
by directly running the container with pod
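For the first approach, a hedged sketch (the nvmeof service name is an example;
check 'ceph orch ls' for yours):
$ ceph config set mgr mgr/cephadm/container_image_nvmeof quay.io/ceph/nvmeof:1.2.13
$ ceph orch ls | grep nvmeof                   # find the exact service name
$ ceph orch redeploy nvmeof.mypool.mygroup     # redeploy so the new image is picked up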
containers or distribution packages?
> * Do you run bare-metal or virtualized?
>
> Best,
> Sebastian
>
> On 24.05.24 at 12:28, Frédéric Nass wrote:
>> Hello everyone,
>>
>> Nice talk yesterday. :-)
>>
>> Regarding containers vs RPMs and orchestrat
Hello everyone,
Nice talk yesterday. :-)
Regarding containers vs RPMs and orchestration, and the related discussion from
yesterday, I wanted to share a few things (which I wasn't able to share
yesterday on the call due to a headset/bluetooth stack issue) to explain why we
use cephadm and ceph
d/or group_vars/*.yaml files.
You can also try adding multiple - on the ansible-playbook command and see
if you get something useful.
Regards,
Frédéric.
From: vladimir franciz blando
Sent: Tuesday, May 14, 2024 21:23
To: Frédéric Nass
Cc: Eugen Block; ceph-us
't work either.
> Regards,
> [ https://about.me/vblando | Vlad Blando ]
> On Tue, May 14, 2024 at 4:10 PM Frédéric Nass <frederic.n...@univ-lorraine.fr>
> wrote:
>> Hello Vlad,
>> We've seen this before a
Hello Vlad,
We've seen this before, a while back. I can't quite recall how we got around it,
but you might want to try setting 'ip_version: ipv4' in your all.yaml file,
since this seems to be a condition for the facts setting.
- name: Set_fact _monitor_addresses - ipv4
ansible.builtin.set_fact:
Hello,
'almost all diagnostic ceph subcommands hang!' -> this rang a bell. We've
had a similar issue with many ceph commands hanging due to a missing L3 ACL
between the MGRs and a new MDS machine that we added to the cluster.
I second Eugen's analysis: network issue, whatever the OSI layer.
Re
s all too easy to forget to
>> reduce them later, or think that it's okay to run all the time with
>> reduced headroom.
>>
>> Until a host blows up and you don't have enough space to recover into.
>>
>>> On Apr 12, 2024, at 05:01, Frédéric Nass
>>
Hello Michael,
You can try this:
1/ check that the host shows up on ceph orch ls with the right label 'osds'
2/ check that the host is OK with ceph cephadm check-host <host>. It should
look like:
(None) ok
podman (/usr/bin/podman) version 4.6.1 is present
systemctl is present
lvcreate is present
Unit
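Condensed, those checks might look like this (hedged; 'myhost' is a placeholder):
$ ceph orch host ls                    # the host should appear with the 'osds' label
$ ceph cephadm check-host myhost
$ ceph orch ls osd --export            # the osd service spec, its placement and label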
.
Regards,
Frédéric.
- On 23 Apr 24, at 13:04, Janne Johansson icepic...@gmail.com wrote:
> On Tue, 23 Apr 2024 at 11:32, Frédéric Nass
> wrote:
>> Ceph is strongly consistent. Either you read/write objects/blocks/files with
>> an
>> assured strong consistency OR yo
Hello,
My turn ;-)
Ceph is strongly consistent. Either you read/write objects/blocks/files with an
assured strong consistency OR you don't. The worst thing you can expect from Ceph,
as long as it's been properly designed, configured and operated, is a temporary
loss of access to the data.
There are