[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-18 Thread Venky Shankar
On Tue, Oct 17, 2023 at 12:23 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63219#note-2 > Release Notes - TBD > > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be fixed for this releas

[ceph-users] Re: How to deal with increasing HDD sizes ? 1 OSD for 2 LVM-packed HDDs ?

2023-10-18 Thread Renaud Jean Christophe Miel
Thank you for your feedback. We have a failure domain of "node". The question here is a rather simple one: when you add to an existing Ceph cluster a new node whose disks (12TB) are twice the size of the existing disks (6TB), how do you let Ceph evenly distribute the data across all disks? You m

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-18 Thread Yuri Weinstein
Ok I merged all PRs known to me. If I hear no objections I will start the build (Casey FYI -> and will in parallel run quincy-p2p) On Wed, Oct 18, 2023 at 11:44 AM Yuri Weinstein wrote: > > Per our chat with Casey, we will remove s3tests and include > https://github.com/ceph/ceph/pull/54078

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-18 Thread Laura Flores
The upgrade-clients/client-upgrade-quincy-reef suite passed with Prashant’s POOL_APP_NOT_ENABLED PR. Approved! On Wed, Oct 18, 2023 at 1:45 PM Yuri Weinstein wrote: > Per our chat with Casey, we will remove s3tests and include > https://github.com/ceph/ceph/pull/54078 into 17.2.7 > > On Wed, Oc

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-18 Thread Yuri Weinstein
Per our chat with Casey, we will remove s3tests and include https://github.com/ceph/ceph/pull/54078 into 17.2.7 On Wed, Oct 18, 2023 at 9:30 AM Casey Bodley wrote: > > On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote: > > > > Details of this release are summarized here: > > > > https://track

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-18 Thread Casey Bodley
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63219#note-2 > Release Notes - TBD > > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be fixed for this release

[ceph-users] Re: How to deal with increasing HDD sizes ? 1 OSD for 2 LVM-packed HDDs ?

2023-10-18 Thread Anthony D'Atri
This is one of many reasons for not using HDDs ;) One nuance that is easily overlooked is the CRUSH weight of failure domains. If, say, you have a failure domain of "rack" with size=3 replicated pools and 3x CRUSH racks, and you add the new, larger OSDs to only one rack, you will not increase the
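A quick way to see that nuance in practice (a sketch, not from the original mail; bucket names depend on your CRUSH map) is to compare the aggregate CRUSH weight per failure domain:

    # Per-rack CRUSH weights and utilisation; with size=3 replicated pools
    # across 3 racks, the rack with the smallest total weight caps usable
    # capacity no matter how large the other racks grow.
    ceph osd tree
    ceph osd df tree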

[ceph-users] Join us for the User + Dev Meeting, happening tomorrow!

2023-10-18 Thread Laura Flores
Hi Ceph users and developers, You are invited to join us at the User + Dev meeting tomorrow at 10:00 AM EST! See below for more meeting details. We have two guest speakers joining us tomorrow: 1. "CRUSH Changes at Scale" by Joshua Baergen, Digital Ocean In this talk, Joshua Baergen will discuss

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-18 Thread Laura Flores
@Prashant Dhange raised PR https://github.com/ceph/ceph/pull/54065 to help with POOL_APP_NOT_ENABLED warnings in the smoke, rados, perf-basic, and upgrade-clients/client-upgrade-quincy-reef suites. The tracker has been updated with reruns including Prashant's PR. *Smoke, rados, and perf-basic are
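For reference, the warning itself is cleared on a live cluster by tagging the pool with an application (pool name below is a placeholder):

    # associate the pool with an application (rbd, rgw or cephfs) to clear
    # the POOL_APP_NOT_ENABLED health warning
    ceph osd pool application enable testpool rbd
    # for throwaway test pools the warning can also be muted cluster-wide
    ceph config set global mon_warn_on_pool_no_app false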

[ceph-users] Re: How to trigger scrubbing in Ceph on-demand ?

2023-10-18 Thread Reto Gysi
Hi, I haven't updated to reef yet. I've tried this on quincy.
# create a testfile on cephfs.rgysi.data pool
root@zephir:/home/rgysi/misc# echo cephtest123 > cephtest.txt
# list inode of new file
root@zephir:/home/rgysi/misc# ls -i cephtest.txt
1099518867574 cephtest.txt
# convert inode value to hex
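A sketch of how the rest of that workflow might look (same pool and inode as in the example above; the PG id comes out of the ceph osd map output):

    # convert the inode number to hex
    printf '%x\n' 1099518867574        # -> 100006e7876
    # CephFS data objects are named <hex-inode>.<stripe>; map the first
    # stripe to its PG and acting OSDs
    ceph osd map cephfs.rgysi.data 100006e7876.00000000
    # then ask that PG to deep-scrub
    ceph pg deep-scrub <pgid reported by the previous command>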

[ceph-users] Re: How to trigger scrubbing in Ceph on-demand ?

2023-10-18 Thread Jayjeet Chakraborty
Hi all, Just checking if someone had a chance to go through the scrub trigger issue above. Thanks. Best Regards, *Jayjeet Chakraborty* Ph.D. Student Department of Computer Science and Engineering University of California, Santa Cruz *Email: jayje...@ucsc.edu * On Mon, Oct 16, 2023 at 9:01 PM Ja

[ceph-users] How to confirm cache hit rate in ceph osd.

2023-10-18 Thread mitsu
Hi, I'd like to know the cache hit rate of the Ceph OSDs. I installed Prometheus and Grafana, but there is no cache hit rate on the Grafana dashboards... Does Ceph have a cache hit rate counter? I'd like to know the impact on READ performance of the Ceph cluster. Regards, -- Mitsumasa KONDO
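One place to look (a sketch; counter names differ somewhat between releases) is the OSD's internal perf counters, which include BlueStore onode and buffer cache hits/misses:

    # dump an OSD's perf counters (run on the host where the OSD lives) and
    # filter for the BlueStore cache statistics
    ceph daemon osd.0 perf dump | grep -iE 'onode_(hits|misses)|buffer_(hit|miss)'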

[ceph-users] Re: How to deal with increasing HDD sizes ? 1 OSD for 2 LVM-packed HDDs ?

2023-10-18 Thread Peter Grandi
> * Ceph cluster with old nodes having 6TB HDDs > * Add new node with new 12TB HDDs Halving IOPS-per-TB? https://www.sabi.co.uk/blog/17-one.html?170610#170610 https://www.sabi.co.uk/blog/15-one.html?150329#150329 > Is it supported/recommended to pack 2 6TB HDDs handled by 2 > old OSDs into 1 12T
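To put rough numbers on the "halving IOPS-per-TB" point (assuming a typical 7200 rpm HDD delivers on the order of ~120 random IOPS regardless of capacity): a 6TB drive offers roughly 120/6 ≈ 20 IOPS per TB stored, while a 12TB drive offers roughly 120/12 ≈ 10 IOPS per TB, i.e. about half the random-access capability per TB of data.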

[ceph-users] Re: Ceph 16.2.x mon compactions, disk writes

2023-10-18 Thread Zakhar Kirpichenko
Frank, The only changes in ceph.conf are the compression settings; most of the cluster configuration is in the monitor database, thus my ceph.conf is rather short:
---
[global]
fsid = xxx
mon_host = [list of mons]

[mon.yyy]
public network = a.b.c.d/e
mon_rocksdb_options = "wr

[ceph-users] Re: Nautilus - Octopus upgrade - more questions

2023-10-18 Thread Tim Holloway
I started with Octopus. It had one very serious flaw that I only fixed by having Ceph self-upgrade to Pacific. Octopus required perfect health to alter daemons and often the health problems were themselves issues with daemons. Pacific can overlook most of those problems, so it's a lot easier to rep

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-18 Thread Guillaume Abrioux
Hi Yuri, ceph-volume approved https://jenkins.ceph.com/job/ceph-volume-test/566/ Regards, -- Guillaume Abrioux Software Engineer From: Yuri Weinstein Date: Monday, 16 October 2023 at 20:53 To: dev , ceph-users Subject: [EXTERNAL] [ceph-users] quincy v17.2.7 QE Validation status Details of th

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-18 Thread Patrick Begou
Hi all, I'm trying to catch the faulty commit. I'm able to build Ceph from the git repo in a fresh podman container, but at this time the lsblk command returns nothing in my container. In the Ceph containers lsblk works. So something is wrong with launching my podman container (or different from l
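A guess at the usual cause (purely a sketch; the image name is a placeholder): lsblk needs the host's device nodes and udev metadata inside the container, which the Ceph containers get via bind mounts:

    # run the build container with access to the host's /dev and udev data so
    # lsblk can enumerate the host disks
    podman run --rm -it --privileged -v /dev:/dev -v /run/udev:/run/udev \
        my-build-image lsblk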

[ceph-users] Re: Remove empty orphaned PGs not mapped to a pool

2023-10-18 Thread Eugen Block
Hi, So now we need to empty these OSDs. The device class was SSD. I changed it to HDD and moved the OSDs inside the Crush tree to the other HDD OSDs of the host. I need to move the PGs away from the OSDs to other OSDs but I do not know how to do it. your crush rule doesn't specify a devic
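For the "move the PGs away" part, a common approach (the OSD id below is a placeholder) is to drain the OSDs via their weights:

    # drop the OSD's CRUSH weight to 0 so all of its PGs are remapped elsewhere
    ceph osd crush reweight osd.12 0
    # alternatively, mark it out and let the cluster rebalance
    ceph osd out 12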

[ceph-users] Re: Remove empty orphaned PGs not mapped to a pool

2023-10-18 Thread Malte Stroem
Hello, well yes, I think I have to edit the Crush rule and modify: item_name or to be clear: I need to modify this in the decompiled crush map:
root bmeta {
        id -4           # do not change unnecessarily
        id -254 class hdd       # do not change unnecessarily
        id -
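The usual round trip for editing the decompiled map looks like this (file names are arbitrary):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt, then recompile and inject it
    crushtool -c crushmap.txt -o crushmap.new.bin
    ceph osd setcrushmap -i crushmap.new.bin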

[ceph-users] Re: stuck MDS warning: Client HOST failing to respond to cache pressure

2023-10-18 Thread Loïc Tortay
On 18/10/2023 10:02, Frank Schilder wrote: Hi Loïc, thanks for the pointer. It's kind of the opposite extreme to dropping just everything. I need to know the file name that is in cache. I'm looking for a middle way, say, "drop_caches -u USER" that drops all caches of files owned by user USER. T

[ceph-users] Re: Remove empty orphaned PGs not mapped to a pool

2023-10-18 Thread Malte Stroem
Hello Eugen, I was wrong. I am sorry. The PGs are not empty and orphaned. Most of the PGs are empty but a few are indeed used. And the pool for these PGs is still there. It is the metadata pool of the erasure coded pool for RBDs. The cache tier pool was removed successfully. So now we need

[ceph-users] Re: Time Estimation for cephfs-data-scan scan_links

2023-10-18 Thread Peter Grandi
[...] > What is being done is a serial tree walk and copy in 3 > replicas of all objects in the CephFS metadata pool, so it > depends on both the read and write IOPS rate for the metadata > pools, but mostly in the write IOPS. [...] Wild guess: > metadata is on 10x 3.84TB SSDs without persistent ca

[ceph-users] traffic by IP address / bucket / user

2023-10-18 Thread Boris Behrens
Hi, does someone have a solution ready to monitor traffic by IP address? Cheers Boris
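If this is about RGW traffic, one option (a sketch; the socket path is arbitrary and the RGWs need a restart afterwards) is the RGW ops log, which records the remote address, user, bucket and bytes per request:

    # enable the ops log and point it at a unix socket
    ceph config set global rgw_enable_ops_log true
    ceph config set global rgw_ops_log_socket_path /var/run/ceph/rgw-ops-log.sock
    # read the JSON records from the socket, e.g. with a netcat that supports
    # unix sockets
    nc -U /var/run/ceph/rgw-ops-log.sock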

[ceph-users] Re: Ceph 16.2.x mon compactions, disk writes

2023-10-18 Thread Frank Schilder
Hi Zakhar, since it's a bit beyond the scope of the basics, could you please post the complete ceph.conf config section for these changes for reference? Thanks! = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Zakhar Kirpichenko

[ceph-users] Re: How to deal with increasing HDD sizes ? 1 OSD for 2 LVM-packed HDDs ?

2023-10-18 Thread Robert Sander
On 10/18/23 09:25, Renaud Jean Christophe Miel wrote: Hi, Use case: * Ceph cluster with old nodes having 6TB HDDs * Add new node with new 12TB HDDs Is it supported/recommended to pack 2 6TB HDDs handled by 2 old OSDs into 1 12TB LVM disk handled by 1 new OSD ? The 12 TB HDD will get double th

[ceph-users] Re: stuck MDS warning: Client HOST failing to respond to cache pressure

2023-10-18 Thread Frank Schilder
Hi Loïc, thanks for the pointer. It's kind of the opposite extreme to dropping just everything. I need to know the file name that is in cache. I'm looking for a middle way, say, "drop_caches -u USER" that drops all caches of files owned by user USER. This way I could try dropping caches for a bu
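A rough approximation of the hypothetical "drop_caches -u USER" (a sketch; path and user are placeholders, and note it only advises dropping the local page cache, it does not force the CephFS client to release caps back to the MDS):

    # advise the kernel to drop cached pages for every file owned by USER
    find /mnt/cephfs -user USER -type f -print0 | \
        xargs -0 -I{} dd if={} iflag=nocache count=0 status=none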

[ceph-users] Re: Nautilus - Octopus upgrade - more questions

2023-10-18 Thread Marc
> > I have a Nautilus cluster built using Ceph packages from Debian 10 > Backports, deployed with Ceph-Ansible. > > I see that Debian does not offer Ceph 15/Octopus packages. However, > download.ceph.com does offer such packages. > > Question: Is it a safe upgrade to install the download.ceph.

[ceph-users] How to deal with increasing HDD sizes ? 1 OSD for 2 LVM-packed HDDs ?

2023-10-18 Thread Renaud Jean Christophe Miel
Hi, Use case: * Ceph cluster with old nodes having 6TB HDDs * Add new node with new 12TB HDDs Is it supported/recommended to pack 2 6TB HDDs handled by 2 old OSDs into 1 12TB LVM disk handled by 1 new OSD ? Regards, Renaud Miel