[ceph-users] Re: Heads up: New Ceph images require x86-64-v2 and possibly a qemu config change for virtual servers

2024-07-18 Thread Bailey Allison
+1 to this, also ran into this in our lab testing. Thanks for sharing this information! Regards, Bailey > -Original Message- > From: Eugen Block > Sent: July 18, 2024 3:55 AM > To: ceph-users@ceph.io > Subject: [ceph-users] Re: Heads up: New Ceph images require x86-64-v2 and > possibly
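(For anyone hitting the same thing on virtual mons/OSDs: the usual fix is to stop using the default qemu64 CPU model and expose the host CPU to the guest instead. This is only a sketch, e.g. in a libvirt domain XML, and the exact syntax depends on your hypervisor stack:)

    <cpu mode='host-passthrough'/>
    # or on a plain qemu command line
    -cpu host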

[ceph-users] Re: Question regarding bluestore labels

2024-06-10 Thread Bailey Allison
Igor, it was your post on here mentioning this a few weeks ago that actually let me know to even check this stuff. Regards, Bailey > -Original Message- > From: Igor Fedotov > Sent: June 10, 2024 7:08 AM > To: Bailey Allison ; 'ceph-users' us...@ceph.io> > Subjec

[ceph-users] Question regarding bluestore labels

2024-06-07 Thread Bailey Allison
I have a question regarding bluestore labels, specifically for a block.db partition. To make a long story short, we are currently in a position where we checked the label of a block.db partition and it appears to be corrupted. I have seen another thread on here suggesting to copy the label from a
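(For anyone wanting to check the same thing: a quick way to inspect a bluestore label is ceph-bluestore-tool; the device path below is only an example.)

    ceph-bluestore-tool show-label --dev /dev/ceph-db-vg/db-lv
    # prints the on-disk label as JSON if it is readable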

[ceph-users] Re: Write issues on CephFS mounted with root_squash

2024-05-15 Thread Bailey Allison
Hey Nicola, Try mounting CephFS with fuse instead of the kernel client; we have seen before that sometimes the kernel mount does not properly support that option but the fuse mount does. Regards, Bailey > -Original Message- > From: Nicola Mori > Sent: May 15, 2024 7:55 AM > To: ceph-users >
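(A minimal example of the two mount types, assuming a client named client.cephfs and a mount point of /mnt/cephfs; adjust to your own names:)

    # kernel client
    mount -t ceph :/ /mnt/cephfs -o name=cephfs
    # FUSE client, same path
    ceph-fuse --id cephfs /mnt/cephfs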

[ceph-users] Re: Reconstructing an OSD server when the boot OS is corrupted

2024-04-24 Thread Bailey Allison
Hey Peter, A simple ceph-volume lvm activate should get all of the OSDs back up and running once you install the proper packages/restore the ceph config file/etc. If the node was also a mon/mgr you can simply re-add those services. Regards, Bailey > -Original Message- > From: Peter
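(Roughly, once the packages and the /etc/ceph config and keyrings are back in place; hedging here since the exact steps depend on how the OSDs were deployed:)

    # scan LVM for OSD volumes and start everything it finds
    ceph-volume lvm activate --all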

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-03-28 Thread Bailey Allison
Hey, We make use of the ctdb_mutex_ceph_rados_helper so the lock file just gets stored within the CephFS metadata pool rather than as a file on a shared CephFS mount. We don't recommend storing it directly on CephFS because if the mount hosting the lock file goes down we have seen the mds mark as
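(For reference, this is roughly what the CTDB recovery lock line looks like when using the RADOS helper; treat the helper path, pool and object names below as placeholders for your own setup:)

    # /etc/ctdb/ctdb.conf
    [cluster]
        recovery lock = !/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.ctdb cephfs_metadata ctdb_reclock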

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-03-21 Thread Bailey Allison
I think this is fantastic. Looking forward to the sambaxp talk too! CephFS + SMB is something we make very heavy use of, and have had a lot of success working with. It is nice to see it getting some more integration. Regards, Bailey > -Original Message- > From: John Mulligan >

[ceph-users] Re: CephFS space usage

2024-03-14 Thread Bailey Allison
Hey All, It might be easier to check the CephFS dir stats using getfattr, e.g. getfattr -n ceph.dir.rentries /path/to/dir Regards, Bailey > -Original Message- > From: Igor Fedotov > Sent: March 14, 2024 1:37 PM > To: Thorne Lawler ; ceph-users@ceph.io; > etienne.men...@ubisoft.com;
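(A couple of related recursive attributes that can help when chasing space usage, assuming a kernel or FUSE CephFS mount at /mnt/cephfs:)

    getfattr -n ceph.dir.rentries /mnt/cephfs/some/dir   # recursive file + dir count
    getfattr -n ceph.dir.rfiles   /mnt/cephfs/some/dir   # recursive file count
    getfattr -n ceph.dir.rbytes   /mnt/cephfs/some/dir   # recursive logical bytes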

[ceph-users] Re: Ceph & iSCSI

2024-02-27 Thread Bailey Allison
+1 on this. If you need iSCSI, Maged & team have built a great Ceph iSCSI solution with PetaSAN, especially if you are integrating directly into VMware. Regards, Bailey > -Original Message- > From: Maged Mokhtar > Sent: February 27, 2024 5:40 AM > To: ceph-users@ceph.io > Subject: [ceph-users]

[ceph-users] Re: PSA: Long Standing Debian/Ubuntu build performance issue (fixed, backports in progress)

2024-02-08 Thread Bailey Allison
Holy! I have no questions, just wanted to say thanks for emailing this. As much as it does suck to know that's been an issue, I really appreciate you sharing the information about this on here. We've got a fair share of Ubuntu clusters, so if there's a way to validate I would love to know, but it

[ceph-users] Re: Performance impact of Heterogeneous environment

2024-01-17 Thread Bailey Allison
+1 to this, great article and great research. Something we've been keeping a very close eye on ourselves. Overall we've mostly settled on the old keep-it-simple-stupid methodology with good results, especially as the gains have gotten smaller the more recent your Ceph version, and

[ceph-users] Re: ceph df reports incorrect stats

2023-12-06 Thread Bailey Allison
Hey Frank, +1 to this, we've seen it a few times now. I've attached an output of ceph df from an internal cluster we have with the same issue.
[root@Cluster1 ~]# ceph df
--- RAW STORAGE ---
CLASS      SIZE     AVAIL    USED    RAW USED  %RAW USED
fast_nvme  596 GiB  595 GiB  50 MiB  1.0 GiB

[ceph-users] Re: Upgrading nautilus / centos7 to octopus / ubuntu 20.04. - Suggestions and hints?

2023-08-01 Thread Bailey Allison
Hi Götz, We’ve done a similar process, starting at CentOS 7 Nautilus and upgrading to Rocky 8/Ubuntu 20.04 Octopus+. What we do is start on CentOS 7 Nautilus and upgrade to Octopus on CentOS 7 (we’ve built python packages and have them on our repo to satisfy some

[ceph-users] Re: 1 Large omap object found

2023-07-31 Thread Bailey Allison
Hi, It appears you have quite a low PG count on your cluster (approx. 20 PGs per OSD). Usually it is recommended to have about 100-150 per OSD. With a lower PG count you can have issues balancing data and run into errors such as large OMAP objects. Might not be the fix in this case
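(The rough rule of thumb for sizing, as an illustration only; round to the nearest power of two and double-check with the PG calculator for your own pool layout:)

    target PGs per pool ~= (number of OSDs x 100) / replica count
    e.g. 60 OSDs, 3x replication: 60 x 100 / 3 = 2000 -> 2048

    ceph osd pool set <pool> pg_num 2048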

[ceph-users] Re: OSD stuck on booting state after upgrade (v15.2.17 -> v17.2.6)

2023-07-27 Thread Bailey Allison
Hi, Did you restart all of the ceph services just on node 1 so far? Or did you restart mons on each node first, then managers on each node, etc.? I have seen a similar issue occur during ceph upgrades if services are restarted out of order (e.g. restarting all ceph services on a single node at once).
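(The usual non-cephadm order, sketched with systemd targets; unit names may differ depending on how the cluster was deployed:)

    # on each mon node, one at a time
    systemctl restart ceph-mon.target
    # then on each mgr node
    systemctl restart ceph-mgr.target
    # then OSDs, node by node
    systemctl restart ceph-osd.target
    # verify between steps
    ceph versions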

[ceph-users] Re: architecture help (iscsi, rbd, backups?)

2023-05-02 Thread Bailey Allison
> Sent: April 29, 2023 11:21 PM >To: Bailey Allison ; ceph-users@ceph.io >Subject: [ceph-users] Re: architecture help (iscsi, rbd, backups?) > >Bailey, > >Thanks for your extensive reply, you got me down the wormhole of CephFS and >SMB (and looking at a lot of 45drives vide

[ceph-users] Re: architecture help (iscsi, rbd, backups?)

2023-04-27 Thread Bailey Allison
Hey Angelo, Just to make sure I'm understanding correctly, the main idea for the use case is to be able to present Ceph storage to windows clients as SMB? If so, you can absolutely use CephFS to get that done. This is something we do all the time with our cluster configurations, if we're

[ceph-users] Re: Interruption of rebalancing

2023-03-02 Thread Bailey Allison
Hey Jeff, As long as you set the maintenance flags (noout/norebalance) you should be good to take the node down with a reboot. Regards, Bailey >From: Jeffrey Turmelle >Sent: March 1, 2023 2:47 PM >To: ceph-users@ceph.io >Subject: [ceph-users] Interruption of rebalancing > >I
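(For completeness, the flag commands themselves; set them before the reboot and unset once the node and its OSDs are back up:)

    ceph osd set noout
    ceph osd set norebalance
    # ... reboot the node ...
    ceph osd unset norebalance
    ceph osd unset noout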

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Bailey Allison
Hi, That is most likely possible, but the difference in performance between CephFS + Samba and RBD + Ceph iSCSI + Windows SMB would probably be extremely noticeable in a not very good way. As Wyll mentioned, the recommended way is to just share out SMB on top of an existing CephFS mount
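(In practice that can be as simple as exporting a directory of an already-mounted CephFS through smb.conf; the share name and path below are only placeholders:)

    [cephfs-share]
        path = /mnt/cephfs/share
        read only = no
        browseable = yes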

[ceph-users] Re: Balancer Distribution Help

2022-09-22 Thread Bailey Allison
Hi Reed, Just taking a quick glance at the Pastebin provided I have to say your cluster balance is already pretty damn good all things considered. We've seen the upmap balancer at its best in practice provide a deviation of about 10-20% across OSDs, which seems to be matching up on
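(If anyone wants to check the same thing on their own cluster, the usual commands are along these lines:)

    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status
    ceph osd df        # the %USE spread and STDDEV at the bottom show how even the distribution is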