report new damage if it comes in.
Cheers
Sebastian
--
Dr. Sebastian Knust | Bielefeld University
IT Administrator | Faculty of Physics
Office: D2-110 | Universitätsstr. 25
Phone: +49 521 106 5234 | 33615 Bielefeld
Hello Patrick,
On 27.11.23 19:05, Patrick Donnelly wrote:
I would **really** love to see the debug logs from the MDS. Please
upload them using ceph-post-file [1]. If you can reliably reproduce,
turn on more debugging:
ceph config set mds debug_mds 20
ceph config set mds debug_ms 1
[1] https
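For reference, a minimal sketch of that sequence (the daemon name, log
path and description below are my own placeholders, not from Patrick's
message):

    # raise MDS debug levels, as suggested above
    ceph config set mds debug_mds 20
    ceph config set mds debug_ms 1
    # reproduce the problem, then upload the resulting MDS log
    ceph-post-file -d "MDS damage after 17.2.7 upgrade" /var/log/ceph/ceph-mds.<name>.log
    # afterwards, revert to the default debug levels
    ceph config rm mds debug_mds
    ceph config rm mds debug_ms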
Hi,
After updating from 17.2.6 to 17.2.7 with cephadm, our cluster went into
MDS_DAMAGE state. We had some prior issues with faulty kernel clients
not releasing capabilities, therefore the update might just be a
coincidence.
`ceph tell mds.cephfs:0 damage ls` lists 56 affected files all with
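In case it helps others, a quick sketch of one way to count and group
the entries (assumes jq is installed; damage_type is a field in the
JSON that the command prints):

    ceph tell mds.cephfs:0 damage ls | jq length
    ceph tell mds.cephfs:0 damage ls | jq -r '.[].damage_type' | sort | uniq -c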
Hi Christoph,
I can reproducibly trigger kernel panics on CentOS 7 clients with the
native kernel (3.10.0-1160.76.1.el7) when accessing CephFS snapshots via
SMB with vfs_shadow_copy2. This occurs on a Pacific cluster. IIRC accessing
the snapshots on the server also led to a kernel panic, but I'm not
jects = ceph_snapshots
Regards,
Bailey
Hi Jesper,
On 16.12.21 12:45, Jesper Lykkegaard Karlsen wrote:
Now, I want to access the usage information of folders with quotas from root
level of the cephfs.
I have failed to find this information through getfattr commands; only the quota
limits are shown there, and the du command on individual fold
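For reference, a minimal sketch of the xattrs involved (mount point and
directory are placeholders):

    # quota limits set on a directory
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/some/dir
    getfattr -n ceph.quota.max_files /mnt/cephfs/some/dir
    # recursive usage and entry count as accounted by CephFS
    getfattr -n ceph.dir.rbytes /mnt/cephfs/some/dir
    getfattr -n ceph.dir.rentries /mnt/cephfs/some/dir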
estions?
Andras
Hi Jan,
On 01.12.21 17:31, Jan Kasprzak wrote:
In "ceph -s", they "2 osds down"
message disappears, and the number of degraded objects steadily decreases.
However, after some time the number of degraded objects starts going up
and down again, and osds appear to be down (and then up again). After
Hi,
I too am still suffering the same issue (snapshots lead to 100%
ceph-msgr usage on client during metadata-intensive operations like
backup and rsync) and had previously reported it to this list. This
issue is also tracked at https://tracker.ceph.com/issues/44100
My current observations:
Hi Luís,
On 18.08.2021 at 19:02, Luis Henriques wrote:
> Sebastian Knust writes:
>
>> Hi,
>>
>> I am running a Ceph Octopus (15.2.13) cluster mainly for CephFS. Moving
>> (with
>> mv) a large directory (mail server backup, so a few million small fi
Hi,
I am running a Ceph Octopus (15.2.13) cluster mainly for CephFS. Moving
(with mv) a large directory (mail server backup, so a few million small
files) within the cluster takes multiple days, even though both source
and destination share the same (default) file layout and - at least on
the
Dear Harry,
`docker image prune -a` removes all dangling images as well as all
images not referenced by any running container. I successfully used it
in my setups to remove old versions.
On RHEL/CentOS, podman is used, so you should use `podman image
prune -a` instead.
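For completeness, a short sketch of the sequence (the preceding `image
ls` is only a suggestion to review what would be removed first):

    docker image ls          # review which images are currently present
    docker image prune -a    # remove images not referenced by any container
    # on RHEL/CentOS with podman:
    podman image ls
    podman image prune -a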
HTH, Cheers
Seb
Hi,
After upgrading from 15.2.8 to 15.2.13 with cephadm on CentOS 8
(containerised installation done by cephadm), Grafana no longer shows
new data. Additionally, when accessing the Dashboard-URL on a host
currently not hosting the dashboard, I am redirected to a wrong hostname
(as shown in c
Hi Michael,
On 08.06.21 11:38, Ml Ml wrote:
Now I was asked if I could also build a cheap 200-500 TB cluster
storage, which should also scale, just for data storage such as
NextCloud/OwnCloud.
With similar requirements (server primarily for Samba and NextCloud,
some RBD use, very limited budge
Hi Hervé,
On 01.06.21 14:00, Hervé Ballans wrote:
I'm aware of your points, and maybe I was not really clear in my
previous email (written in a hurry!)
The problematic pool is the metadata one. All its OSDs (x3) are full.
The associated data pool is OK and no OSD is full on the data pool.
A
Hi Hervé,
On 01.06.21 13:15, Hervé Ballans wrote:
# ceph status
  cluster:
    id:     838506b7-e0c6-4022-9e17-2d1cf9458be6
    health: HEALTH_ERR
            1 filesystem is degraded
            3 full osd(s)
            1 pool(s) full
            1 daemons have recently crashed
You have
Hi Reed,
To add to this comment by Weiwen:
On 28.05.21 13:03, 胡 玮文 wrote:
Have you tried just starting multiple rsync processes simultaneously to transfer
different directories? Distributed systems like Ceph often benefit from more
parallelism.
When I migrated from XFS on iSCSI (legacy system, n
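As an illustration of the parallelism suggestion above, a minimal
sketch (paths and the process count are placeholders, not what I
actually ran):

    # one rsync per top-level directory, eight at a time;
    # top-level files would still need a separate, non-recursive rsync
    cd /mnt/src
    find . -mindepth 1 -maxdepth 1 -type d -print0 | \
        xargs -0 -P8 -I{} rsync -a {}/ /mnt/cephfs/dest/{}/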
Hi,
Assuming a cluster (currently Octopus, might upgrade to Pacific once
released) serving only CephFS, and that only to a handful of kernel and
FUSE clients (no OpenStack, CSI or similar): Are there any side effects
of not using the ceph-mgr volumes module abstractions [1], namely
subvolumes
Hi,
I am running a Ceph Octopus (15.2.8) cluster primarily for CephFS.
Metadata is stored on SSD, data is stored in three different pools on
HDD. Currently, I use 22 subvolumes.
I am rotating snapshots on 16 subvolumes, all in the same pool, which is
the primary data pool for CephFS. Current
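For context, snapshot rotation on subvolumes boils down to creating and
removing subvolume snapshots, roughly like this (filesystem, subvolume
and snapshot names are placeholders; this is a generic sketch, not my
actual script):

    ceph fs subvolume snapshot create cephfs subvol01 daily-2021-02-01
    ceph fs subvolume snapshot ls cephfs subvol01
    ceph fs subvolume snapshot rm cephfs subvol01 daily-2021-01-01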