[ceph-users] Debian 11 Bullseye support

2021-08-23 Thread Arunas B.
Hi, this August Debian testing became the new Debian stable with LTS support. But I see that only a sid repo exists, with no testing repo and no new stable bullseye one. Maybe someone knows when there are plans to produce a bullseye build? Best regards, Arūnas
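
For reference, once a bullseye build is actually published, the apt source would follow the usual download.ceph.com pattern; a minimal sketch, assuming the Pacific release and that the bullseye suite exists (which, per the question above, it does not yet):

    # hypothetical sources.list entry, valid only once bullseye packages are published
    echo "deb https://download.ceph.com/debian-pacific/ bullseye main" \
        | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt update && sudo apt install ceph-common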

[ceph-users] Re: ceph snap-schedule retention is not properly being implemented

2021-08-23 Thread Prayank Saxena
Hi Patrick, Any ETA for the same? On Tue, 24 Aug 2021 at 8:58 AM, Prayank Saxena wrote: > Thanks Patrick! > > Much appreciated > > On Tue, 24 Aug 2021 at 5:37 AM, Patrick Donnelly > wrote: > >> Hi Prayank, >> >> Jan has a fix in progress here: https://github.com/ceph/ceph/pull/42893 >> >> -- >

[ceph-users] Re: [Ceph Dashboard] Alert configuration.

2021-08-23 Thread Lokendra Rathour
Hi Daniel, Thanks for the response! If we talk about dashboard alerts, these alerts are processed via alert-manager (using Prometheus). Please correct me if I'm wrong. Now we have a setup where we may not install alert-manager, so in this case, is there a way to expose alert metrics without alert
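
For reference, the manager's Prometheus module can expose raw metrics over HTTP with no Alertmanager involved; a minimal sketch, assuming the default port 9283 and an illustrative manager hostname mgr-host:

    # enable the built-in Prometheus exporter on the manager
    ceph mgr module enable prometheus
    # scrape the metrics endpoint directly; mgr-host is an assumption,
    # 9283 is the module's default port
    curl http://mgr-host:9283/metrics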

[ceph-users] Re: Ceph packages for Rocky Linux

2021-08-23 Thread Massimo Sgaravatto
Isn't Rocky Linux 8 supposed to be binary-compatible with RHEL8? Cheers, Massimo On Tue, Aug 24, 2021 at 12:08 AM Kyriazis, George wrote: > Hello, > > Are there client packages available for Rocky Linux (specifically 8.4) for > Pacific? If not, when can we expect them? > > I also looked at do
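
For reference, if Rocky Linux 8 is indeed binary-compatible with RHEL 8, the el8 packages from download.ceph.com should install as-is; a minimal sketch of a repo file, assuming the Pacific release and the standard rpm-pacific/el8 path:

    # contents of /etc/yum.repos.d/ceph.repo (a sketch; adjust the release as needed)
    [ceph]
    name=Ceph packages for $basearch
    baseurl=https://download.ceph.com/rpm-pacific/el8/$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc

    # then install the client packages
    sudo dnf install ceph-common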

[ceph-users] Re: [Ceph Dashboard] Alert configuration.

2021-08-23 Thread Daniel Persson
Hi Lokendra, There are a lot of ways to see the status of your cluster. The main one is to watch the dashboard alerts to see the most pressing matters to handle. You can also follow the log of notifications that the manager keeps. I usually run "ceph health detail" to get the info
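
For reference, a minimal sketch of the stock CLI commands mentioned here, nothing beyond the standard ceph tooling:

    ceph health detail   # expanded health warnings/errors
    ceph -s              # overall cluster status summary
    ceph -w              # follow the cluster log live
    ceph crash ls        # recent daemon crash reports, if any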

[ceph-users] [Ceph Dashboard] Alert configuration.

2021-08-23 Thread Lokendra Rathour
Hello everyone, We have deployed Ceph with ceph-ansible (Pacific release). Query: is it possible (and if yes, what is the way) to view/verify the alerts (both health and system) directly, without Alertmanager? Or is the Ceph Dashboard the only way to see the alerts in the Ceph cluster (health/system)? P

[ceph-users] Re: ceph snap-schedule retention is not properly being implemented

2021-08-23 Thread Prayank Saxena
Thanks Patrick! Much appreciated On Tue, 24 Aug 2021 at 5:37 AM, Patrick Donnelly wrote: > Hi Prayank, > > Jan has a fix in progress here: https://github.com/ceph/ceph/pull/42893 > > -- > Patrick Donnelly, Ph.D. > He / Him / His > Principal Software Engineer > Red Hat Sunnyvale, CA > GPG: 19F28

[ceph-users] Re: ceph snap-schedule retention is not properly being implemented

2021-08-23 Thread Patrick Donnelly
Hi Prayank, Jan has a fix in progress here: https://github.com/ceph/ceph/pull/42893 -- Patrick Donnelly, Ph.D. He / Him / His Principal Software Engineer Red Hat Sunnyvale, CA GPG: 19F28A586F808C2402351B93C3301A3E258DD79D

[ceph-users] mds in death loop with [ERR] loaded dup inode XXX [2,head] XXX at XXX, but inode XXX already exists at XXX

2021-08-23 Thread Pickett, Neale T
hello, ceph-users! We have an old cephfs that is ten different kinds of broken, which we are attempting to (slowly) pull files from. The most recent issue we've hit is that the mds will start up, log hundreds of messages like below, then crash. This is happening in a loop; we can never actuall

[ceph-users] Re: Cephfs cannot create snapshots in subdirs of / with mds = "allow *"

2021-08-23 Thread Patrick Donnelly
On Sun, Aug 22, 2021 at 2:25 AM David Prude wrote: > > Patrick, > > Thank you so much for writing back. > > > Did you set the new "subvolume" flag on your root directory? The > > probable location for EPERM is here: > > > > https://github.com/ceph/ceph/blob/d4352939e387af20531f6bfbab2176dd91916067
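
For reference, the "subvolume" flag Patrick refers to is exposed as a directory vxattr; a minimal sketch for inspecting and clearing it, assuming a CephFS mount at the illustrative path /mnt/cephfs:

    # read the flag on the directory in question (may not be readable on all client versions)
    getfattr -n ceph.dir.subvolume /mnt/cephfs
    # clear it if it was set unintentionally; snapshot creation in subdirs
    # should then be governed by the mds caps again, per the thread above
    setfattr -n ceph.dir.subvolume -v 0 /mnt/cephfs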

[ceph-users] Ceph packages for Rocky Linux

2021-08-23 Thread Kyriazis, George
Hello, Are there client packages available for Rocky Linux (specifically 8.4) for Pacific? If not, when can we expect them? I also looked at download.ceph.com and I couldn’t find anything relevant. I only saw rh7 and rh8 packages. Thank you! George

[ceph-users] Re: performance between ceph-osd and crimson-osd

2021-08-23 Thread Gregory Farnum
On Thu, Aug 19, 2021 at 12:40 AM Marc wrote: > > > > > https://docs.google.com/spreadsheets/d/1AXj9h0yDc2ztFWuptqcTrNU2Ui3wMyAn > > 6QUft3CPdcc/edit?usp=sharing > > > > > > The gist of it is that on the read path, crimson+cyanstore is > > significantly more efficient than crimson+alienstore and an

[ceph-users] data_log omaps

2021-08-23 Thread Szabo, Istvan (Agoda)
Hi, I have 11 large omap objects in my Octopus cluster related to data_log, like this: /var/log/ceph/ceph.log-20210822.gz:2021-08-21T09:06:20.605200+0700 osd.11 (osd.11) 1876 : cluster [WRN] Large omap object found. Object: 22:b040fc05:::data_log.31:head PG: 22.a03f020d (22.d) Key count: 436895 Size (bytes):
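
For reference, a minimal sketch for inspecting the flagged object, assuming the data_log objects live in the usual default.rgw.log pool (the pool name is an assumption; check your zone configuration):

    # count the omap keys on the shard named in the warning
    rados -p default.rgw.log listomapkeys data_log.31 | wc -l
    # check the RGW data log markers; if multisite sync is caught up, old
    # entries can be trimmed (trim syntax varies by release, see radosgw-admin help)
    radosgw-admin datalog status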

[ceph-users] Re: SATA vs SAS

2021-08-23 Thread Kai Börnert
Basically yes, but I would not say supercritical. If it cannot deliver enough IOPS for Ceph, it will stall even slow consumer HDDs; if it is fast enough, the HDD/CPU/network will be the bottleneck, so there is not much to gain beyond that point. This is more a warning to check before buying a

[ceph-users] Re: cephfs snapshots mirroring

2021-08-23 Thread Arnaud MARTEL
Hi Venky, Thanks a lot for these explanations. I had some trouble when upgrading to v16.2.5. I'm using Debian 10 with cephadm, and the 16.2.5 containers generated a lot of dropped network packets (I don't know why) on all my OSD hosts. I also encountered some hangs while reading files in c

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-08-23 Thread Igor Fedotov
Hi Dave, so maybe another bug in the Hybrid Allocator... Could you please dump free extents for your "broken" OSD(s) by issuing "ceph-bluestore-tool --path --command free-dump"? The OSD needs to be offline. Preferably have these reports after you reproduce the issue with the hybrid allocator once again
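
For reference, a minimal sketch of the dump Igor asks for, assuming a package-based deployment with OSD data under the usual /var/lib/ceph/osd path and an illustrative OSD id of 12:

    OSD_ID=12                                        # illustrative id; substitute the broken OSD
    systemctl stop ceph-osd@${OSD_ID}                # the OSD must be offline
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD_ID} \
        --command free-dump > osd.${OSD_ID}.free-dump.json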

[ceph-users] Re: Missing OSD in SSD after disk failure

2021-08-23 Thread Eric Fahnle
Hi Eugen, thanks for the reply. I've already tried what you wrote in your answer, but still no luck. The NVMe disk still doesn't have the OSD. Please note I'm using containers, not standalone OSDs. Any ideas? Regards, Eric

[ceph-users] Re: cephfs snapshots mirroring

2021-08-23 Thread Venky Shankar
On Mon, Aug 23, 2021 at 5:36 PM Arnaud MARTEL wrote: > > Hi all, > > I'm not sure to really understand how cephfs snapshots mirroring is supposing > to work. > > I have 2 ceph clusters (pacific 16.2.4) and snapshots mirroring is set up for > only one directory, /ec42/test, in our cephfs filesyte

[ceph-users] cephfs snapshots mirroring

2021-08-23 Thread Arnaud MARTEL
Hi all, I'm not sure I really understand how cephfs snapshot mirroring is supposed to work. I have 2 Ceph clusters (Pacific 16.2.4) and snapshot mirroring is set up for only one directory, /ec42/test, in our cephfs filesystem (it's for test purposes, but we plan to use it with about 50-60
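
For reference, a minimal sketch of the primary-side setup this thread assumes; the filesystem name cephfs is illustrative, and peer bootstrap on the secondary cluster is omitted:

    # on the primary cluster
    ceph mgr module enable mirroring
    ceph fs snapshot mirror enable cephfs
    ceph fs snapshot mirror add cephfs /ec42/test
    # a cephfs-mirror daemon must be running, and the secondary cluster must be
    # added as a peer via the peer_bootstrap create/import workflow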

[ceph-users] Re: ceph snap-schedule retention is not properly being implemented

2021-08-23 Thread Prayank Saxena
Hello everyone, Still waiting for a response. Any kind of help is much appreciated. Thanks Prayank On Wed, 18 Aug 2021 at 9:44 AM, Prayank Saxena wrote: > Hello everyone, > > We have a ceph cluster with version Pacific v16.2.4 > > We are trying to implement the ceph module snap-schedule from thi
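
For reference, a minimal sketch of the snap-schedule commands under discussion, with an illustrative path and retention policy (hourly snapshots, keep 24):

    ceph mgr module enable snap_schedule
    ceph fs snap-schedule add /volumes/data 1h            # /volumes/data is illustrative
    ceph fs snap-schedule retention add /volumes/data h 24
    ceph fs snap-schedule status /volumes/data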

[ceph-users] Re: SATA vs SAS

2021-08-23 Thread Peter Lieven
On 23.08.21 at 00:53, Kai Börnert wrote: As far as I understand, the more important factor (for the SSDs) is whether they have power loss protection (so they can use their on-device write cache) and how many IOPS they have when using direct writes with queue depth 1. I just did a test for a hdd with bl
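
For reference, a minimal sketch of the queue-depth-1 direct/sync write test being described, using fio against an illustrative raw device /dev/sdX:

    # WARNING: writes directly to the device and destroys its contents
    fio --name=qd1-sync-write --filename=/dev/sdX \
        --ioengine=libaio --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based --group_reporting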

[ceph-users] Re: SATA vs SAS

2021-08-23 Thread Roland Giesler
On Mon, 23 Aug 2021 at 00:59, Kai Börnert wrote: > > As far as i understand, more important factor (for the ssds) is if they > have power loss protections (so they can use their ondevice write cache) > and how many iops they have when using direct writes with queue depth 1 So what you're saying i

[ceph-users] Re: SATA vs SAS

2021-08-23 Thread Roland Giesler
On Sat, 21 Aug 2021 at 22:34, Teoman Onay wrote: > > You seem to focus only on the controller bandwidth while you should also > consider disk RPMs. Most SATA drives run at 7200 rpm while SAS ones go from > 10k to 15k rpm, which increases the number of IOPS. > > SATA 80 iops > SAS 10k 120 iops > S