[ceph-users] Re: cephfs vs rbd

2021-10-08 Thread Robert W. Eckert
That is odd - I am running some game servers (ARK Survival) and the RBD mount starts up in less than a minute, but the CephFS mount takes 20 minutes or more. It probably depends on the workload. -Original Message- From: Marc Sent: Friday, October 8, 2021 5:50 PM To: Jorge Garcia ;

[ceph-users] RFP for arm64 test nodes

2021-10-08 Thread Dan Mick
Ceph has been completely ported to build and run on ARM hardware (architecture arm64/aarch64), but we're unable to test it due to lack of hardware. We propose to purchase a significant number of ARM servers (50+?) to install in our upstream Sepia test lab to use for upstream testing of Ceph,

[ceph-users] Re: Cephfs + inotify

2021-10-08 Thread David Rivera
I see. This is true, I did monitor for changes on all clients involved. On Fri, Oct 8, 2021, 12:27 Daniel Poelzleithner wrote: > On 08/10/2021 21:19, David Rivera wrote: > > > I've used inotify against a kernel mount a few months back. Worked fine > for > > me if I recall correctly. > > It can

[ceph-users] Re: cephfs vs rbd

2021-10-08 Thread Marc
> I was wondering about performance differences between cephfs and rbd, so > I devised this quick test. The results were pretty surprising to me. > > The test: on a very idle machine, make 2 mounts. One is a cephfs mount, > the other an rbd mount. In each directory, copy a humongous .tgz file >

[ceph-users] Re: cephfs vs rbd

2021-10-08 Thread Jorge Garcia
> Please describe the client system. > Are you using the same one for CephFS and RBD? Yes > Kernel version? CentOS 8 4.18.0-240.15.1.el8_3.x86_64 (but also tried on a CentOS 7 machine, similar results) > BM or VM? Bare Metal > KRBD or libvirt/librbd? I'm assuming KRBD. I just did a

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-08 Thread José H . Freidhof
Hi Igor, "And was osd.2 redeployed AFTER settings had been reset to defaults ?" A: YES "Anything particular about current cluster use cases?" A: we are using it temporary as a iscsi target for a vmware esxi cluster with 6 hosts. We created two 10tb iscsi images/luns for vmware, because the other

[ceph-users] Re: cephadm adopt with another user than root

2021-10-08 Thread Daniel Pivonka
I'd have to test this to make sure it works, but I believe you can run 'ceph cephadm set-user ' https://docs.ceph.com/en/octopus/cephadm/operations/#configuring-a-different-ssh-user after step 4 and before step 5 in the adoption guide https://docs.ceph.com/en/pacific/cephadm/adoption/ and then
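A rough sketch of that sequence, assuming a non-root account named cephuser (the account name and the key-distribution step are illustrative, not from the thread):

    # on every host: create the non-root user and give it the needed sudo rights
    useradd -m cephuser

    # tell cephadm to connect over SSH as that user instead of root
    ceph cephadm set-user cephuser

    # distribute the cephadm SSH public key to the new user on each host
    ceph cephadm get-pub-key > ~/ceph.pub
    ssh-copy-id -f -i ~/ceph.pub cephuser@<host>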

[ceph-users] Re: Determining non-subvolume cephfs snapshot size

2021-10-08 Thread David Prude
Gregory, Thank you for taking the time to lay out this information. -David On 10/8/21 1:34 PM, Gregory Farnum wrote: > On Fri, Oct 8, 2021 at 6:44 AM David Prude wrote: >> Hello, >> >> My apologies if this has been answered previously but my attempt to >> find an answer has failed me.

[ceph-users] Re: cephfs vs rbd

2021-10-08 Thread Anthony D'Atri
Please describe the client system. Are you using the same one for CephFS and RBD? Kernel version? BM or VM? KRBD or libvirt/librbd? Which filesystem did you have on the RBD volume, with what mkfs parameters? > > I was wondering about performance differences between cephfs and rbd, so I >

[ceph-users] Re: Cephfs + inotify

2021-10-08 Thread Daniel Poelzleithner
On 08/10/2021 21:19, David Rivera wrote: > I've used inotify against a kernel mount a few months back. Worked fine for > me if I recall correctly. It can very much depend on the source of changes. It is easy to imagine changes originating from localhost get inotify events, while changes from
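One way to see that distinction, assuming inotify-tools and an example mount point shared by two clients:

    # terminal 1 on client A: watch a CephFS directory for events
    inotifywait -m /mnt/cephfs/shared

    # terminal 2 on client A: a local change should show up as CREATE/ATTRIB events
    touch /mnt/cephfs/shared/local-file

    # on client B (same directory mounted): this change comes from a different
    # kernel, so client A's inotify watch will likely report nothing for it
    touch /mnt/cephfs/shared/remote-file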

[ceph-users] cephfs vs rbd

2021-10-08 Thread Jorge Garcia
I was wondering about performance differences between cephfs and rbd, so I devised this quick test. The results were pretty surprising to me. The test: on a very idle machine, make 2 mounts. One is a cephfs mount, the other an rbd mount. In each directory, copy a humongous .tgz file (1.5 TB)
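A minimal reconstruction of the described test, with placeholder monitor address, pool, image, and paths (filesystem choice and mount options are assumptions, not from the thread):

    # CephFS side: kernel mount, then time the copy
    mount -t ceph <mon-host>:/ /mnt/cephfs -o name=admin
    time cp /path/to/huge.tgz /mnt/cephfs/

    # RBD side: map a kernel RBD image, put a local filesystem on it, mount, time the copy
    rbd create testpool/testimg --size 2T
    rbd map testpool/testimg          # returns e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/rbd
    time cp /path/to/huge.tgz /mnt/rbd/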

[ceph-users] Re: Cephfs + inotify

2021-10-08 Thread David Rivera
I've used inotify against a kernel mount a few months back. Worked fine for me if I recall correctly. On Fri, Oct 8, 2021, 08:20 Sean wrote: > I don’t think this is possible, since CephFS is a network mounted > filesystem. The inotify feature requires the kernel to be aware of file > system

[ceph-users] Re: Determining non-subvolume cephfs snapshot size

2021-10-08 Thread Gregory Farnum
On Fri, Oct 8, 2021 at 6:44 AM David Prude wrote: > > Hello, > > My apologies if this has been answered previously but my attempt to > find an answer has failed me. I am trying to determine the canonical > manner for determining how much storage space a cephfs snapshot is > consuming. It

[ceph-users] Re: is it possible to remove the db+wal from an external device (nvme)

2021-10-08 Thread Szabo, Istvan (Agoda)
Hi Igor, Here is a bluestore tool fsck output: https://justpaste.it/7igrb Is this what you are looking for? Istvan Szabo Senior Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com
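For reference, a report like the one pasted is typically produced along these lines (the OSD id and data path are examples, and the OSD has to be stopped first):

    systemctl stop ceph-osd@2
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-2
    # --deep also reads object data, at the cost of a much longer run
    ceph-bluestore-tool fsck --deep --path /var/lib/ceph/osd/ceph-2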

[ceph-users] cephadm adopt with another user than root

2021-10-08 Thread Luis Domingues
Hello, On our test cluster we are running the latest containerized Pacific, and we are testing the upgrade path to cephadm. But we do not want cephadm to use the root user to connect to other machines. We found how to set the ssh-user during bootstrapping, but not when adopting an existing
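For comparison, at bootstrap time the SSH user can be given up front (the user name is an example; the open question in this thread is the equivalent step when adopting an existing cluster):

    cephadm bootstrap --mon-ip <mon-ip> --ssh-user cephuser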

[ceph-users] Re: Cephfs + inotify

2021-10-08 Thread Sean
I don’t think this is possible, since CephFS is a network mounted filesystem. The inotify feature requires the kernel to be aware of file system changes. If the kernel is unaware of changes in a tracked directory, which is the case for all network mounted filesystems, then it can’t inform any

[ceph-users] Determining non-subvolume cephfs snapshot size

2021-10-08 Thread David Prude
Hello, My apologies if this has been answered previously but my attempt to find an answer has failed me. I am trying to determine the canonical manner for determining how much storage space a cephfs snapshot is consuming. It seems that you can determine the size of the referenced data by
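The "referenced data" size mentioned here is presumably the CephFS recursive-statistics value; a minimal sketch with getfattr (paths are examples, and note that rbytes reports the data a snapshot references, not the space unique to that snapshot):

    # recursive size of the live directory
    getfattr -n ceph.dir.rbytes /mnt/cephfs/mydir

    # recursive size referenced by a snapshot of it
    getfattr -n ceph.dir.rbytes /mnt/cephfs/mydir/.snap/mysnap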

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-08 Thread Igor Fedotov
And was osd.2 redeployed AFTER settings had been reset to defaults? Anything particular about current cluster use cases? E.g. is it a sort of regular usage (with load flukes and peaks) or maybe some permanently running stress load testing. The latter might tend to hold the resources and e.g.

[ceph-users] Re: Edit crush rule

2021-10-08 Thread ceph-users
Excellent - thank you! From: Konstantin Shalygin <k0ste at k0ste.ru> Sent: 07 October 2021 10:10 To: ceph-us...@hovr.anonaddy.com Subject: Re: [ceph-users] Edit crush rule Hi, On 7 Oct 2021, at 11:03,

[ceph-users] Re: Broken mon state after (attempted) 16.2.5 -> 16.2.6 upgrade

2021-10-08 Thread Jonathan D. Proulx
Hi Patrick, Yes, we had been successfully running on Pacific v16.2.5. Thanks for the pointer to the bug; we eventually ended up taking everything down and rebuilding the monstore using monstore-tool. Perhaps a longer and less pleasant path than necessary, but it was effective. -Jon On Thu, Oct
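The monstore rebuild mentioned here is presumably the documented "recovery using OSDs" procedure; a rough, abbreviated outline with placeholder paths and keyring (not the exact commands used in this case):

    # gather monitor store data from each (stopped) OSD into a fresh store
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --op update-mon-db --mon-store-path /tmp/mon-store
    # ... repeat for every OSD on every host ...

    # rebuild the monitor store from the gathered data
    ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /path/to/admin.keyring

    # install the rebuilt store.db on the monitor
    mv /tmp/mon-store/store.db /var/lib/ceph/mon/ceph-<mon-id>/store.db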