Don't know if this will help you,
but we do all our scrubbing manually with cron tasks,
always picking the oldest non-scrubbed PG.
And to check on scrubbing we use this - it reports the currently
active scrubbing processes:
ceph pg ls scrubbing | sort -k18 -k19 | head -n 20
For us a scrub takes 5 minutes, give or take.
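If it helps, here is a minimal sketch of that kind of cron job - not our actual script; it assumes jq is installed, and the field names come from the JSON output of "ceph pg dump pgs", which can differ slightly between releases:

#!/bin/bash
# Sketch: kick off a deep scrub on the PG with the oldest last_deep_scrub_stamp.
pg=$(ceph pg dump pgs --format=json 2>/dev/null |
     jq -r '(.pg_stats? // .) | sort_by(.last_deep_scrub_stamp) | .[0].pgid')
# only act if we actually got a PG id back
[ -n "$pg" ] && [ "$pg" != "null" ] && ceph pg deep-scrub "$pg"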
Hi
Just testing as I have not received a message from the list in a couple of days
Thanks Joe
That's correct - we use the kernel target, not tcmu-runner.
>>> Xiubo Li 12/13/2022 6:02 PM >>>
On 14/12/2022 06:54, Joe Comeau wrote:
> I am curious about what is happening with your iscsi configuration
> Is this a new iscsi config or something that has just cropped up?
I am curious about what is happening with your iscsi configuration.
Is this a new iscsi config or something that has just cropped up?
We are using / have been using vmware for 5+ years with iscsi.
We are using the kernel iscsi target rather than tcmu.
We are running ALUA and all datastores are set up as RR (round robin).
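For anyone setting this up fresh, the ESXi side of "ALUA with RR datastores" looks roughly like the following - the naa device ID is a placeholder and the exact esxcli options can vary by ESXi release:

# make round robin the default for ALUA-claimed devices (affects newly claimed devices)
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR
# or switch an existing LUN explicitly, then verify (naa ID is made up)
esxcli storage nmp device set --device naa.60014051234567890abcdef --psp VMW_PSP_RR
esxcli storage nmp device list --device naa.60014051234567890abcdef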
I will tell you of our experience.
Dell PERC controllers with HDDs and separate Intel NVMe for journals etc.
At first, with the disks behind the controller with caching enabled, each set up as a
single-disk RAID0, and the OSDs encrypted, everything was good.
When we upgraded to LVM, still encrypted, and
-remapped.py
Matthias Grandl
Head of UX
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
On Fri, May 7, 2021, 02:16 Joe Comeau wrote:
>
> Nautilus cluster is not unmapping
>
Nautilus cluster is not unmapping
ceph 14.2.16
ceph report |grep "osdmap_.*_committed"
report 1175349142
"osdmap_first_committed": 285562,
"osdmap_last_committed": 304247,
we've set osd_map_cache_size = 2
but it is slowly growing toward that difference as well
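A quick way to watch that gap, assuming the two fields are at the top level of the report JSON (which is what the grep above suggests):

# difference between newest and oldest osdmap epoch the mons still hold;
# if this number keeps growing, old maps are not being trimmed
ceph report 2>/dev/null | jq '.osdmap_last_committed - .osdmap_first_committed'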
just issue the commands
ceph pg deep-scrub 17.1c
this will deep scrub this pg
ceph pg repair 17.7ff
this repairs the pg
>>> Richard Bade 1/26/2021 3:40 PM >>>
Hi Everyone,
I have also seen this - a PG flagged inconsistent but with empty output when you do
list-inconsistent-obj.
$ sudo ceph health detail
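For reference, the query being talked about looks roughly like this - the PG ID is just borrowed from the repair example above, not from Richard's message:

rados list-inconsistent-obj 17.7ff --format=json-pretty
# an empty "inconsistents" array while the PG is still flagged inconsistent is
# exactly the situation described; a fresh deep scrub usually repopulates it
ceph pg deep-scrub 17.7ff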
s from here:
https://documentation.suse.com/ses/6/html/ses-all/ceph-rbd.html
--
Salsa
‐‐‐ Original Message ‐‐‐
On Thursday, September 3, 2020 12:58 PM, Joe Comeau
wrote:
> Here is a link for the iSCSI/RBD implementation guide from SUSE for this
> year for vmware (Hyper-V should be similar)
Here is a link for the iSCSI/RBD implementation guide from SUSE for this year for
vmware (Hyper-V should be similar):
https://www.suse.com/media/guide/suse-enterprise-storage-implementation-guide-for-vmware-esxi-guide.pdf
We've been running rbd/iscsi for 4 years
Thanks Joe
>>> Salsa 9/2/2020
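To give a flavour of what the kernel-target (LIO) side of that guide boils down to, a hand-wavy sketch - the image name, IQN and paths are invented, and the SUSE tooling wraps all of this up for you:

rbd map rbd/vmware-lun0        # exposes the image as /dev/rbd0
targetcli /backstores/block create name=vmware-lun0 dev=/dev/rbd0
targetcli /iscsi create iqn.2020-09.io.example:ceph-gw1
targetcli /iscsi/iqn.2020-09.io.example:ceph-gw1/tpg1/luns create /backstores/block/vmware-lun0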
A while ago - before the ceph balancer - probably on Jewel,
we had a bunch of disks with different re-weights to help control PG distribution.
We upgraded to Luminous.
All our disks are the same, so we set them all back to 1.0 and then let them fill
accordingly.
Then we ran the balancer about 4-5 times, letting each run finish (rough commands below).
Try from the admin node:
ceph osd df
ceph osd status
thanks Joe
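In case it helps, the sequence described above maps roughly onto these commands - the OSD id and plan name are made up, and upmap mode needs luminous-or-newer clients:

ceph osd reweight 12 1.0            # repeat for each previously down-weighted OSD
ceph osd set-require-min-compat-client luminous   # required once for upmap mode
ceph balancer mode upmap
ceph balancer optimize plan1        # build a plan
ceph balancer eval plan1            # check the expected improvement
ceph balancer execute plan1         # apply it, wait for data movement to finish, then repeat
ceph balancer status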
>>> 2/10/2020 10:44 AM >>>
Hello MJ,
Perhaps your PGs are unbalanced?
ceph osd df tree
Greetz
Mehmet
On 10 February 2020 at 14:58:25 CET, lists wrote:
>Hi,
>
>We would like to replace the current seagate ST4000NM0034 HDDs in