[ceph-users] Re: MDS_DAMAGE in 17.2.7 / Cannot delete affected files

2023-11-30 Thread Sebastian Knust
ort new damage if it comes in. Cheers Sebastian

[ceph-users] Re: MDS_DAMAGE in 17.2.7 / Cannot delete affected files

2023-11-29 Thread Sebastian Knust
Hello Patrick, On 27.11.23 19:05, Patrick Donnelly wrote: I would **really** love to see the debug logs from the MDS. Please upload them using ceph-post-file [1]. If you can reliably reproduce, turn on more debugging: ceph config set mds debug_mds 20 ceph config set mds debug_ms 1 [1] https
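For reference, the debug settings quoted above, broken out into separate commands (a sketch; the log path is an assumption and differs between package-based and cephadm deployments):

    # raise MDS and messenger debug levels while reproducing the issue
    ceph config set mds debug_mds 20
    ceph config set mds debug_ms 1
    # afterwards, upload the resulting MDS log for the developers
    ceph-post-file /var/log/ceph/ceph-mds.<daemon>.log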

[ceph-users] MDS_DAMAGE in 17.2.7 / Cannot delete affected files

2023-11-24 Thread Sebastian Knust
Hi, After updating from 17.2.6 to 17.2.7 with cephadm, our cluster went into MDS_DAMAGE state. We had some prior issues with faulty kernel clients not releasing capabilities, therefore the update might just be a coincidence. `ceph tell mds.cephfs:0 damage ls` lists 56 affected files all with
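For context, the damage table referred to above can be inspected, and individual entries dropped once the underlying metadata has been repaired, roughly like this (file system name and damage ID are placeholders):

    # list all damage entries recorded by MDS rank 0
    ceph tell mds.cephfs:0 damage ls
    # remove a single entry by ID after the cause has been fixed
    ceph tell mds.cephfs:0 damage rm <damage_id>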

[ceph-users] Re: Centos 7 Kernel clients on ceph Quincy -- experiences??

2022-09-20 Thread Sebastian Knust
Hi Christoph, I can reproducibly trigger a kernel panic on CentOS 7 clients with the native kernel (3.10.0-1160.76.1.el7) when accessing CephFS snapshots via SMB with vfs_shadow_copy2. This occurs on a Pacific cluster. IIRC accessing the snapshots on the server also led to a kernel panic, but I'm not

[ceph-users] Re: CephFS snapshots with samba shadowcopy

2022-07-13 Thread Sebastian Knust
jects = ceph_snapshots Regards, Bailey
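For reference, a minimal smb.conf sketch for exposing CephFS snapshots as "Previous Versions" via the ceph_snapshots VFS module, assuming the share is backed by a kernel-mounted CephFS path (share name and path are placeholders):

    [projects]
        path = /mnt/cephfs/projects
        read only = no
        vfs objects = ceph_snapshots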

[ceph-users] Re: cephfs quota used

2021-12-16 Thread Sebastian Knust
Hi Jesper, On 16.12.21 12:45, Jesper Lykkegaard Karlsen wrote: Now, I want to access the usage information of folders with quotas from the root level of the CephFS. I have failed to find this information through getfattr commands, only quota limits are shown here, and the du command on individual fold
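For reference, the used-space counters (as opposed to the quota limits) are exposed through the ceph.dir.* virtual xattrs on any directory, e.g. (the path is a placeholder):

    # quota limit configured on the directory
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir
    # recursive bytes and file count actually used below it
    getfattr -n ceph.dir.rbytes /mnt/cephfs/somedir
    getfattr -n ceph.dir.rfiles /mnt/cephfs/somedir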

[ceph-users] Re: cephfs kernel client + snapshots slowness

2021-12-10 Thread Sebastian Knust
estions? Andras

[ceph-users] Re: OSD repeatedly marked down

2021-12-01 Thread Sebastian Knust
Hi Jan, On 01.12.21 17:31, Jan Kasprzak wrote: In "ceph -s", the "2 osds down" message disappears, and the number of degraded objects steadily decreases. However, after some time the number of degraded objects starts going up and down again, and osds appear to be down (and then up again). After

[ceph-users] Re: Kworker 100% with ceph-msgr (after upgrade to 14.2.6?)

2021-09-07 Thread Sebastian Knust
Hi, I too am still suffering the same issue (snapshots lead to 100% ceph-msgr usage on client during metadata-intensive operations like backup and rsync) and had previously reported it to this list. This issue is also tracked at https://tracker.ceph.com/issues/44100 My current observations:

[ceph-users] Re: CephFS Octopus mv: Invalid cross-device link [Errno 18] / slow move

2021-08-18 Thread Sebastian Knust
Hi Luís, On 18.08.2021 at 19:02, Luis Henriques wrote: > Sebastian Knust writes: > >> Hi, >> >> I am running a Ceph Octopus (15.2.13) cluster mainly for CephFS. Moving >> (with >> mv) a large directory (mail server backup, so a few million small fi

[ceph-users] CephFS Octopus mv: Invalid cross-device link [Errno 18] / slow move

2021-08-18 Thread Sebastian Knust
Hi, I am running a Ceph Octopus (15.2.13) cluster mainly for CephFS. Moving (with mv) a large directory (mail server backup, so a few million small files) within the cluster takes multiple days, even though both source and destination share the same (default) file layout and - at least on the
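As an aside, whether mv is doing true renames or has fallen back to copy-and-delete (which coreutils does whenever rename(2) returns EXDEV, i.e. errno 18) can be confirmed with strace; the paths here are placeholders:

    # watch which rename syscalls mv issues and whether they fail with EXDEV
    strace -f -e trace=rename,renameat,renameat2 mv /mnt/cephfs/src /mnt/cephfs/dst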

[ceph-users] Re: Docker container snapshots accumulate until disk full failure?

2021-08-12 Thread Sebastian Knust
Dear Harry, `docker image prune -a` removes all dangling images as well as all images not referenced by any running container. I successfully used it in my setups to remove old versions. In RHEL/CentOS, podman is used and thus you should use `podman image prune -a` instead. HTH, Cheers Seb
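The prune commands mentioned above, spelled out for both runtimes (run on each host):

    # Docker-based hosts
    docker image prune -a
    # RHEL/CentOS hosts, where cephadm uses podman
    podman image prune -a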

[ceph-users] Wrong hostnames in "ceph mgr services" (Octopus)

2021-07-08 Thread Sebastian Knust
Hi, After upgrading from 15.2.8 to 15.2.13 with cephadm on CentOS 8 (containerised installation done by cephadm), Grafana no longer shows new data. Additionally, when accessing the dashboard URL on a host currently not hosting the dashboard, I am redirected to a wrong hostname (as shown in c
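For reference, the URLs the active mgr has registered, and the Grafana endpoint the dashboard embeds, can be inspected and overridden roughly like this (hostname and port are placeholders):

    # service URLs registered by the active mgr
    ceph mgr services
    # Grafana URL used by the dashboard
    ceph dashboard get-grafana-api-url
    ceph dashboard set-grafana-api-url https://correct-host:3000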

[ceph-users] Re: OT: How to Build a poor man's storage with ceph

2021-06-08 Thread Sebastian Knust
Hi Michael, On 08.06.21 11:38, Ml Ml wrote: Now i was asked if i could also build a cheap 200-500TB Cluster Storage, which should also scale. Just for Data Storage such as NextCloud/OwnCloud. With similar requirements (server primarily for Samba and NextCloud, some RBD use, very limited budge

[ceph-users] Re: Cephfs metadta pool suddenly full (100%) !

2021-06-01 Thread Sebastian Knust
Hi Hervé, On 01.06.21 14:00, Hervé Ballans wrote: I'm aware of your points, and maybe I was not really clear in my previous email (written in a hurry!) The problematic pool is the metadata one. All its OSDs (x3) are full. The associated data pool is OK and no OSD is full on the data pool. A

[ceph-users] Re: Cephfs metadta pool suddenly full (100%) !

2021-06-01 Thread Sebastian Knust
Hi Hervé, On 01.06.21 13:15, Hervé Ballans wrote:
# ceph status
  cluster:
    id: 838506b7-e0c6-4022-9e17-2d1cf9458be6
    health: HEALTH_ERR
            1 filesystem is degraded
            3 full osd(s)
            1 pool(s) full
            1 daemons have recently crashed
You have
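As a general stop-gap for full OSDs (a sketch; the ratio shown is only an example and should merely buy time while capacity is added or data is removed):

    # check per-OSD utilisation
    ceph osd df
    # temporarily raise the full threshold to regain write access
    ceph osd set-full-ratio 0.97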

[ceph-users] Re: XFS on RBD on EC painfully slow

2021-05-28 Thread Sebastian Knust
Hi Reed, To add to this comment by Weiwen: On 28.05.21 13:03, 胡 玮文 wrote: Have you tried just starting multiple rsync processes simultaneously to transfer different directories? Distributed systems like Ceph often benefit from more parallelism. When I migrated from XFS on iSCSI (legacy system, n
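A minimal sketch of the parallel-rsync approach, assuming the top-level directories can be transferred independently and contain no whitespace in their names (paths and job count are placeholders):

    # run up to 8 rsync processes, one per top-level directory
    ls /mnt/src | xargs -P8 -I{} rsync -a /mnt/src/{}/ /mnt/cephfs/dst/{}/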

[ceph-users] CephFS: side effects of not using ceph-mgr volumes / subvolumes

2021-03-03 Thread Sebastian Knust
Hi, Assuming a cluster (currently octopus, might upgrade to pacific once released) serving only CephFS and that only to a handful of kernel and fuse-clients (no OpenStack, CSI or similar): Are there any side effects of not using the ceph-mgr volumes module abstractions [1], namely subvolumes

[ceph-users] CephFS Octopus snapshots / kworker at 100% / kernel vs. fuse client

2021-02-05 Thread Sebastian Knust
Hi, I am running a Ceph Octopus (15.2.8) cluster primarily for CephFS. Metadata is stored on SSD, data is stored in three different pools on HDD. Currently, I use 22 subvolumes. I am rotating snapshots on 16 subvolumes, all in the same pool, which is the primary data pool for CephFS. Current
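For context, one way to drive the kind of snapshot rotation described here is through the volumes module's subvolume snapshot commands (a sketch; file system, subvolume, and snapshot names are placeholders):

    # create, list, and remove a snapshot of a subvolume
    ceph fs subvolume snapshot create cephfs mysubvol snap-2021-02-05
    ceph fs subvolume snapshot ls cephfs mysubvol
    ceph fs subvolume snapshot rm cephfs mysubvol snap-2021-02-05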