[ceph-users] Re: Cephalocon Amsterdam 2023 Photographer Volunteer Help Needed

2023-03-21 Thread Alvaro Soto
Did you find a volunteer yet? --- Alvaro Soto. Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you. -- Great people talk about ideas, ordinary people talk

[ceph-users] Re: Moving From BlueJeans to Jitsi for Ceph meetings

2023-03-21 Thread Alvaro Soto
+1 jitsi --- Alvaro Soto. Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you. -- Great people talk about ideas, ordinary people talk about things, small people

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-21 Thread Laura Flores
I reviewed the upgrade tests. I opened two new trackers:
1. https://tracker.ceph.com/issues/59121 - "No upgrade in progress" during upgrade tests - Ceph - Orchestrator
2. https://tracker.ceph.com/issues/59124 - "Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" during quincy p2p upgrade

[ceph-users] quincy v17.2.6 QE Validation status

2023-03-21 Thread Yuri Weinstein
Details of this release are summarized here: https://tracker.ceph.com/issues/59070#note-1 Release Notes - TBD The reruns were in the queue for 4 days because of some slowness issues. The core team (Neha, Radek, Laura, and others) are trying to narrow down the root cause. Seeking

[ceph-users] Re: Moving From BlueJeans to Jitsi for Ceph meetings

2023-03-21 Thread Federico Lucifredi
Jitsi is really good, and getting better — we have been using it with my local User’s Group for the last couple of years. Only observation is to discover the maximum allowable number of guests in advance if this is not already known - we had a fairly generous allowance in BlueJeans accounts

[ceph-users] Re: s3 compatible interface

2023-03-21 Thread Fox, Kevin M
Will either the file store or the posix/gpfs filter support the underlying files changing underneath so you can access the files either through s3 or by other out of band means (smb, nfs, etc)? Thanks, Kevin From: Matt Benjamin Sent: Monday, March 20,

[ceph-users] Re: Moving From BlueJeans to Jitsi for Ceph meetings

2023-03-21 Thread Mike Perez
I'm not familiar with BBB myself. Are there any objections to Jitsi? I want to update the calendar invites this week. On Thu, Mar 16, 2023 at 6:16 PM huxia...@horebdata.cn wrote: > > Besides Jitsi, another option would be BigBlueButton(BBB). Does anyone know > how BBB compares with Jitsi? > > >

[ceph-users] MDS host in OSD blacklist

2023-03-21 Thread Frank Schilder
Hi all, we have an octopus v15.2.17 cluster and observe that one of our MDS hosts showed up in the OSD blacklist:
# ceph osd blacklist ls
192.168.32.87:6801/3841823949 2023-03-22T10:08:02.589698+0100
192.168.32.87:6800/3841823949 2023-03-22T10:08:02.589698+0100
I see an MDS restart that might
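For reference, a blacklist entry left behind by a crashed or restarted MDS can be inspected and, if it is known to be stale, removed by hand; a minimal sketch using the standard Octopus CLI (the address below is the one from the listing above):

    # list current entries, then drop the stale one for the restarted MDS
    ceph osd blacklist ls
    ceph osd blacklist rm 192.168.32.87:6801/3841823949

Entries also expire on their own at the timestamp shown in the listing, so manual removal is only needed if the entry is blocking something in the meantime.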

[ceph-users] Re: avg apply latency went up after update from octopus to pacific

2023-03-21 Thread Boris Behrens
Hi Igor, I've offline-compacted all the OSDs and re-enabled bluefs_buffered_io. It didn't change anything, and the commit and apply latencies are around 5-10 times higher than on our nautilus cluster. The pacific cluster shows a 5-minute mean over all OSDs of 2.2ms, while the nautilus cluster is
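For readers following along, the two steps mentioned above are typically done like this (a sketch, assuming non-containerized OSDs with their data under /var/lib/ceph/osd; osd.0 is a placeholder id):

    # offline compaction: stop the OSD, compact its RocksDB, start it again
    systemctl stop ceph-osd@0
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact
    systemctl start ceph-osd@0

    # re-enable buffered BlueFS reads cluster-wide
    ceph config set osd bluefs_buffered_io true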

[ceph-users] Re: Very slow backfilling/remapping of EC pool PGs

2023-03-21 Thread Gauvain Pocentek
On Tue, Mar 21, 2023 at 2:21 PM Clyso GmbH - Ceph Foundation Member < joachim.kraftma...@clyso.com> wrote: > > > https://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/#confval-osd_op_queue > Since this requires a restart I went another way to speed up the recovery of degraded PGs
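The recovery and backfill tunables can also be raised at runtime, without touching osd_op_queue (which does require an OSD restart); a sketch with example values only:

    # apply to all OSDs at runtime; values are illustrative, tune to taste
    ceph config set osd osd_max_backfills 4
    ceph config set osd osd_recovery_max_active 8

Note that when the mClock scheduler is active (the default in Quincy), these values may be capped or overridden by the active mClock profile, so check the scheduler first.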

[ceph-users] Re: Very slow backfilling/remapping of EC pool PGs

2023-03-21 Thread Clyso GmbH - Ceph Foundation Member
https://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/#confval-osd_op_queue ___ Clyso GmbH - Ceph Foundation Member Am 21.03.23 um 12:51 schrieb Gauvain Pocentek: (adding back the list) On Tue, Mar 21, 2023 at 11:25 AM Joachim Kraftmayer wrote:

[ceph-users] Re: Ceph Bluestore tweaks for Bcache

2023-03-21 Thread Matthias Ferdinand
Hi, I found a way to preserve the rotational=1 flag for bcache-backed OSDs between reboots. Using a systemd drop-in for ceph-osd@.service, it now uses lsblk to look for a bcache device somewhere below the OSD, but sets rotational=1 in the uppermost LVM volume device mapper target only. This is
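A rough sketch of such a drop-in (path and detection logic are illustrative assumptions, not the poster's actual unit; it assumes ceph-volume LVM OSDs where /var/lib/ceph/osd/ceph-<id>/block resolves to the uppermost device-mapper node):

    # /etc/systemd/system/ceph-osd@.service.d/10-bcache-rotational.conf
    [Service]
    # Before the OSD starts, resolve its block device and force rotational=1
    # on that dm node if a bcache device sits anywhere below it.
    ExecStartPre=+/bin/sh -c 'dev=$$(readlink -f /var/lib/ceph/osd/ceph-%i/block); \
        if lsblk -s -n -o NAME "$$dev" | grep -q bcache; then \
            echo 1 > /sys/block/$$(basename "$$dev")/queue/rotational; fi'

The "+" prefix runs the pre-start command as root so it can write to sysfs, and the if-statement keeps the unit from failing on OSDs that have no bcache underneath.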

[ceph-users] Re: Very slow backfilling/remapping of EC pool PGs

2023-03-21 Thread Gauvain Pocentek
(adding back the list) On Tue, Mar 21, 2023 at 11:25 AM Joachim Kraftmayer < joachim.kraftma...@clyso.com> wrote: > I added the questions and answers below. > > ___ > Best Regards, > Joachim Kraftmayer > CEO | Clyso GmbH > > Clyso GmbH > p: +49 89 21 55 23 91 2 >

[ceph-users] Re: Changing os to ubuntu from centos 8

2023-03-21 Thread Szabo, Istvan (Agoda)
Thank you, I’ll take a note and give it a try. Istvan Szabo Staff Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com --- From: Boris Behrens

[ceph-users] Re: avg apply latency went up after update from octopus to pacific

2023-03-21 Thread Igor Fedotov
Hi Boris, additionally you might want to manually compact RocksDB for every OSD. Thanks, Igor On 3/21/2023 12:22 PM, Boris Behrens wrote: Disabling the write cache and the bluefs_buffered_io did not change anything. What we see is that larger disks seem to be the leaders in terms of

[ceph-users] Re: Changing os to ubuntu from centos 8

2023-03-21 Thread Boris Behrens
Hi Istvan, I'm currently making the move from CentOS 7 to Ubuntu 18.04 (we want to jump directly from nautilus to pacific). Once everything in the cluster is on the same version, and that version is available on the new OS, you can simply reinstall the hosts with the new OS. With the mons, I remove the
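For the monitor part, the usual manual redeploy after an OS reinstall looks roughly like the following (a sketch based on the standard add/remove-monitor procedure, not necessarily Boris's exact steps; host1 is a placeholder mon id):

    # on any surviving mon: drop the mon whose host will be reinstalled
    ceph mon remove host1

    # on the reinstalled host: rebuild the mon store and rejoin the quorum
    mkdir -p /var/lib/ceph/mon/ceph-host1
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon --mkfs -i host1 --monmap /tmp/monmap --keyring /tmp/mon.keyring
    chown -R ceph:ceph /var/lib/ceph/mon/ceph-host1
    systemctl start ceph-mon@host1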

[ceph-users] Re: avg apply latency went up after update from octopus to pacific

2023-03-21 Thread Boris Behrens
Disabling the write cache and the bluefs_buffered_io did not change anything. What we see is that larger disks seem to be the leaders in terms of slowness (we have 70% 2TB, 20% 4TB and 10% 8TB SSDs in the cluster), but removing some of the 8TB disks and replacing them with 2TB (because it's by far
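To correlate latency with disk size per OSD, the standard counters can be compared directly; a quick sketch using stock commands:

    # per-OSD commit/apply latency snapshot
    ceph osd perf
    # map OSD ids to hosts and device sizes to see whether the large SSDs dominate
    ceph osd df tree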

[ceph-users] Re: s3 compatible interface

2023-03-21 Thread Joachim Kraftmayer
Hi, maybe I should have mentioned the zipper project as well, watched both IBM and SUSE presentations at FOSDEM 2023. I personally follow the zipper project with great interest. Joachim ___ Ceph Foundation Member Am 21.03.23 um 01:27 schrieb Matt Benjamin:

[ceph-users] Re: Very slow backfilling/remapping of EC pool PGs

2023-03-21 Thread Joachim Kraftmayer
Which Ceph version are you running? Is mclock active? Joachim ___ Clyso GmbH - Ceph Foundation Member Am 21.03.23 um 06:53 schrieb Gauvain Pocentek: Hello all, We have an EC (4+2) pool for RGW data, with HDDs + SSDs for WAL/DB. This pool has 9 servers with
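Both questions can be answered straight from the cluster; a small sketch:

    # shows which OSD scheduler is in use (mclock_scheduler vs wpq)
    ceph config get osd osd_op_queue
    # shows the Ceph versions running across all daemons
    ceph versions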

[ceph-users] Re: Unexpected ceph pool creation error with Ceph Quincy

2023-03-21 Thread Eugen Block
Sorry, hit send too early. It seems I could reproduce it by reducing the value to 1:
host1:~ # ceph config set mon mon_max_pool_pg_num 1
host1:~ # ceph config get mon mon_max_pool_pg_num
1
host1:~ # ceph osd pool create pool3
Error ERANGE: 'pg_num' must be greater than 0 and less than or equal
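If such a low value was set by accident, removing the override restores the compiled-in default and pool creation succeeds again; a sketch in the same style as the transcript above:

    host1:~ # ceph config rm mon mon_max_pool_pg_num
    host1:~ # ceph osd pool create pool3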

[ceph-users] Re: Unexpected ceph pool creation error with Ceph Quincy

2023-03-21 Thread Eugen Block
Did you ever adjust mon_max_pool_pg_num? Can you check what your current config value is?
host1:~ # ceph config get mon mon_max_pool_pg_num
65536
Quoting Geert Kloosterman: Hi, Thanks Eugen for checking this. I get the same default values as you when I remove the entries from my

[ceph-users] Changing os to ubuntu from centos 8

2023-03-21 Thread Szabo, Istvan (Agoda)
Hi, I'd like to change the OS to Ubuntu 20.04.5 from CentOS 8 on my bare-metal deployed Octopus 15.2.14 cluster. On the first run I would go with Octopus 15.2.17, just to avoid making big changes in the cluster. I've found a couple of threads on the mailing list, but those were containerized (like: Re: