On Tue, Oct 17, 2023 at 12:23 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be fixed for this release
Thank you for your feedback.
We have a failure domain of "node".
The question here is a rather simple one:
when you add a new node to an existing Ceph cluster, with disks twice the size
(12TB) of the existing disks (6TB), how do you let Ceph evenly distribute the
data across all disks?
You m
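(Not an authoritative answer, just a sketch of how one might check and correct
the distribution with the standard CLI; the balancer commands assume a
reasonably recent release and, for upmap mode, Luminous-or-newer clients.)
# compare per-OSD utilization; the 12TB OSDs should carry ~2x the CRUSH weight
  ceph osd df tree
# let the balancer even out PG placement across the differently sized OSDs
  ceph balancer mode upmap
  ceph balancer on
  ceph balancer status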
Ok I merged all PRs known to me.
If I hear no objections, I will start the build
(Casey FYI -> and will in parallel run quincy-p2p)
On Wed, Oct 18, 2023 at 11:44 AM Yuri Weinstein wrote:
>
> Per our chat with Casey, we will remove s3tests and include
> https://github.com/ceph/ceph/pull/54078
The upgrade-clients/client-upgrade-quincy-reef suite passed with Prashant’s
POOL_APP_NOT_ENABLED PR. Approved!
On Wed, Oct 18, 2023 at 1:45 PM Yuri Weinstein wrote:
> Per our chat with Casey, we will remove s3tests and include
> https://github.com/ceph/ceph/pull/54078 into 17.2.7
>
> On Wed, Oc
Per our chat with Casey, we will remove s3tests and include
https://github.com/ceph/ceph/pull/54078 into 17.2.7
On Wed, Oct 18, 2023 at 9:30 AM Casey Bodley wrote:
>
> On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://track
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be fixed for this release
This is one of many reasons for not using HDDs ;)
One nuance that is easily overlooked is the CRUSH weight of failure domains.
If, say, you have a failure domain of "rack" with size=3 replicated pools and
3x CRUSH racks, and you add the new, larger OSDs to only one rack, you will not
increase the
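(A quick way to see this effect, sketched with the standard CLI; bucket names
will of course differ per cluster.)
# compare the CRUSH weight of each rack bucket; with size=3 over 3 racks,
# usable capacity is effectively limited by the lightest rack
  ceph osd tree
  ceph osd df tree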
Hi Ceph users and developers,
You are invited to join us at the User + Dev meeting tomorrow at 10:00 AM
EST! See below for more meeting details.
We have two guest speakers joining us tomorrow:
1. "CRUSH Changes at Scale" by Joshua Baergen, Digital Ocean
In this talk, Joshua Baergen will discuss
@Prashant Dhange
raised PR https://github.com/ceph/ceph/pull/54065 to help with
POOL_APP_NOT_ENABLED warnings in the smoke, rados, perf-basic, and
upgrade-clients/client-upgrade-quincy-reef suites.
The tracker has been updated with reruns including Prashant's PR. *Smoke,
rados, and perf-basic are
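(For reference, a sketch of how the warning is cleared on a running cluster;
<pool-name> and <app-name> are placeholders.)
# list the pools the warning complains about
  ceph health detail | grep -A5 POOL_APP_NOT_ENABLED
# tag each pool with the application that uses it (cephfs, rbd, rgw, or a custom name)
  ceph osd pool application enable <pool-name> <app-name>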
Hi
I haven't updated to reef yet. I've tried this on quincy.
# create a testfile on cephfs.rgysi.data pool
root@zephir:/home/rgysi/misc# echo cephtest123 > cephtest.txt
#list inode of new file
root@zephir:/home/rgysi/misc# ls -i cephtest.txt
1099518867574 cephtest.txt
# convert inode value to hex
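(Continuing that thought as a sketch, assuming the usual CephFS object naming
of <hex-inode>.<block-index> in the data pool; note that 'rados ls' can be
slow on large pools.)
# convert the inode number to hex
  HEX=$(printf '%x' 1099518867574)
# list the backing RADOS objects of that file in the data pool
  rados -p cephfs.rgysi.data ls | grep "^${HEX}\."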
Hi all,
Just checking if someone had a chance to go through the scrub trigger issue
above. Thanks.
Best Regards,
Jayjeet Chakraborty
Ph.D. Student
Department of Computer Science and Engineering
University of California, Santa Cruz
Email: jayje...@ucsc.edu
On Mon, Oct 16, 2023 at 9:01 PM Ja
Hi,
I'd like to know the cache hit rate in the Ceph OSDs. I installed Prometheus and
Grafana, but there is no cache hit rate on the Grafana dashboards...
Does Ceph have a cache hit rate counter? I'd like to understand its impact on
READ performance in the Ceph cluster.
Regards,
--
Mitsumasa KONDO
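(Not authoritative, but one place to look is the BlueStore onode/buffer cache
counters in the OSD perf counters; exact counter names vary between releases,
and osd.0 below is just an example.)
# on an OSD host, dump the perf counters and look for cache hit/miss values
  ceph daemon osd.0 perf dump | grep -i -E 'onode|cache'
Depending on the release and mgr configuration, the same counters may also be
exported by the prometheus mgr module.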
> * Ceph cluster with old nodes having 6TB HDDs
> * Add new node with new 12TB HDDs
Halving IOPS-per-TB?
https://www.sabi.co.uk/blog/17-one.html?170610#170610
https://www.sabi.co.uk/blog/15-one.html?150329#150329
> Is it supported/recommended to pack 2 6TB HDDs handled by 2
> old OSDs into 1 12T
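Back-of-the-envelope, assuming a 7200 rpm HDD delivers on the order of ~100
random IOPS regardless of capacity:
  6 TB drive:  ~100 IOPS / 6 TB  = ~17 IOPS per TB
  12 TB drive: ~100 IOPS / 12 TB = ~8  IOPS per TB
so each TB stored on the larger drives gets roughly half the random-I/O
budget, which is the concern behind the links above.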
Frank,
The only changes in ceph.conf are the compression settings; most of the
cluster configuration is in the monitor database, so my ceph.conf is
rather short:
---
[global]
fsid = xxx
mon_host = [list of mons]
[mon.yyy]
public network = a.b.c.d/e
mon_rocksdb_options =
"wr
I started with Octopus. It had one very serious flaw that I only fixed
by having Ceph self-upgrade to Pacific. Octopus required perfect health
to alter daemons and often the health problems were themselves issues
with daemons. Pacific can overlook most of those problems, so it's a
lot easier to rep
Hi Yuri,
ceph-volume approved https://jenkins.ceph.com/job/ceph-volume-test/566/
Regards,
--
Guillaume Abrioux
Software Engineer
From: Yuri Weinstein
Date: Monday, 16 October 2023 at 20:53
To: dev , ceph-users
Subject: [EXTERNAL] [ceph-users] quincy v17.2.7 QE Validation status
Details of th
Hi all,
I'm trying to catch the faulty commit. I'm able to build Ceph from the
git repo in a fresh podman container, but at the moment the lsblk command
returns nothing in my container.
In Ceph containers lsblk works, so something is wrong with how I launch
my podman container (or different from l
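(A guess: the container may simply not see the host's block devices. A quick
check, with <image> as a placeholder, is to run it privileged with the host's
/dev and /sys bind-mounted.)
  podman run --rm -it --privileged -v /dev:/dev -v /sys:/sys <image> lsblk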
Hi,
So now we need to empty these OSDs.
The device class was SSD. I changed it to HDD and moved the OSDs
inside the Crush tree to the other HDD OSDs of the host.
I need to move the PGs away from the OSDs to other OSDs but I do not
know how to do it.
your crush rule doesn't specify a devic
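(To actually move the PGs off specific OSDs, one common approach is to set
their CRUSH weight to 0 and let backfill drain them; <id> is a placeholder.)
# stop mapping PGs to the OSD and wait for backfill to move the data off
  ceph osd crush reweight osd.<id> 0
# watch the PGS column drop to 0 and wait for the cluster to settle
  ceph osd df tree
  ceph -s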
Hello,
Well yes, I think I have to edit the CRUSH rule and modify:
item_name
Or, to be clear, I need to modify this in the decompiled crush map:
root bmeta {
id -4 # do not change unnecessarily
id -254 class hdd # do not change unnecessarily
id -
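(For reference, the usual round trip for editing the decompiled map; file
names are placeholders.)
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt (e.g. the root/item referenced by the rule), then recompile
  crushtool -c crushmap.txt -o crushmap.new
# optional sanity check of the resulting mappings before injecting the map
  crushtool -i crushmap.new --test --show-statistics
  ceph osd setcrushmap -i crushmap.new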
On 18/10/2023 10:02, Frank Schilder wrote:
Hi Loïc,
thanks for the pointer. It's kind of the opposite extreme to dropping just everything. I
need to know the file name that is in cache. I'm looking for a middle way, say,
"drop_caches -u USER" that drops all caches of files owned by user USER. T
Hello Eugen,
I was wrong. I am sorry.
The PGs are not empty and orphaned.
Most of the PGs are empty but a few are indeed used.
And the pool for these PGs is still there. It is the metadata pool of
the erasure coded pool for RBDs. The cache tier pool was removed
successfully.
So now we need
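(One way to confirm which PGs on those OSDs still hold objects, and which pool
they belong to; <id> is a placeholder.)
# the OBJECTS column shows which PGs are non-empty
  ceph pg ls-by-osd <id>
# the PG id prefix is the pool id; map it back to the pool name
  ceph osd pool ls detail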
[...]
> What is being done is a serial tree walk and copy in 3
> replicas of all objects in the CephFS metadata pool, so it
> depends on both the read and write IOPS rate for the metadata
> pools, but mostly on the write IOPS. [...] Wild guess:
> metadata is on 10x 3.84TB SSDs without persistent ca
Hi,
does someone have a ready-made solution to monitor traffic by IP address?
Cheers
Boris
Hi Zakhar,
since it's a bit beyond the scope of the basics, could you please post the
complete ceph.conf config section for these changes, for reference?
Thanks!
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Zakhar Kirpichenko
On 10/18/23 09:25, Renaud Jean Christophe Miel wrote:
Hi,
Use case:
* Ceph cluster with old nodes having 6TB HDDs
* Add new node with new 12TB HDDs
Is it supported/recommended to pack 2 6TB HDDs handled by 2 old OSDs
into 1 12TB LVM disk handled by 1 new OSD?
The 12 TB HDD will get double th
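(Either way, it is easy to see how much data a 12 TB OSD attracts relative to
the 6 TB ones, since the CRUSH weight defaults to the capacity in TiB.)
# compare the WEIGHT, SIZE and PGS columns per OSD
  ceph osd df tree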
Hi Loïc,
thanks for the pointer. It's kind of the opposite extreme to dropping just
everything. I need to know the file name that is in cache. I'm looking for a
middle way, say, "drop_caches -u USER" that drops all caches of files owned by
user USER. This way I could try dropping caches for a bu
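(As far as I know the kernel has no per-user drop_caches knob, but something
close can be approximated with the third-party vmtouch tool, if it is
installed; the mount path below is a placeholder.)
# evict page-cache pages for all files owned by USER under a given mount
  find /mnt/cephfs -user USER -type f -print0 | xargs -0 -r vmtouch -e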
>
> I have a Nautilus cluster built using Ceph packages from Debian 10
> Backports, deployed with Ceph-Ansible.
>
> I see that Debian does not offer Ceph 15/Octopus packages. However,
> download.ceph.com does offer such packages.
>
> Question: Is it a safe upgrade to install the download.ceph.
Hi,
Use case:
* Ceph cluster with old nodes having 6TB HDDs
* Add new node with new 12TB HDDs
Is it supported/recommended to pack 2 6TB HDDs handled by 2 old OSDs
into 1 12TB LVM disk handled by 1 new OSD?
Regards,
Renaud Miel