Hi,
On 17.12.2017 at 10:40, Martin Preuss wrote:
[...]
> is there a way to find out which files on CephFS are using a given
> pg? I'd like to check whether those files are corrupted...
[...]
Nobody? Any hint, maybe?
Failing checksums for no apparent reason seem to me like quite a serious
problem.
BTW: the Ceph version is 12.2.2 (the cluster was set up with 12.2.1, then
updated to 12.2.2 on Debian 9).
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3
    mgr: ceph1(active), standbys: ceph2
    mds: cephfs-1/1/1 up {0=ceph1=up:active}, 2 up:standby
    osd: 10 osds: 10 up, 10 in

  data:
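For the question above, one approach that should work (an untested
sketch; the pool name, PG id, and mount point are placeholders) is to
list the data pool's objects, keep those that map onto the PG, and
translate each object name's inode prefix back to a path:

  pool=cephfs_data   # placeholder: your CephFS data pool
  pgid=1.ab          # placeholder: the PG with the failed checksum
  # Walk all objects in the pool and keep those that map onto the PG
  # (slow on large pools, but works on any Ceph version):
  rados -p "$pool" ls | while read -r obj; do
    ceph osd map "$pool" "$obj" | grep -qF "($pgid)" && echo "$obj"
  done
  # CephFS data objects are named <inode-hex>.<stripe-index>, so the
  # name prefix maps back to a file via its inode number:
  find /mnt/cephfs -inum $((16#10000000001))   # placeholder inode hex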
Can you open a ticket with the exact version of your ceph cluster?
http://tracker.ceph.com
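On Luminous the exact per-daemon versions can be pulled straight from
the cluster, e.g.:

  ceph versions            # summary of versions the running daemons report
  ceph tell osd.* version  # ask each OSD individually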
Thanks,
On Sun, Dec 10, 2017 at 10:34 PM, Martin Preuss wrote:
> Hi,
>
> I'm new to Ceph. I started a ceph cluster from scratch on Debian 9,
> consisting of 3 hosts, each with 3-4
IIRC there was a bug related to bluestore compression fixed between
12.2.1 and 12.2.2.
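To check whether compression is actually enabled on a given pool (the
pool name below is a placeholder):

  ceph osd pool get cephfs_data compression_mode
  ceph osd pool get cephfs_data compression_algorithm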
On Sun, Dec 10, 2017 at 5:04 PM, Martin Preuss wrote:
> Hi,
>
>
> On 10.12.2017 at 22:06, Peter Woodman wrote:
>> Are you using bluestore compression?
> [...]
>
> As a matter of fact, I
Hi,
On 10.12.2017 at 22:06, Peter Woodman wrote:
> Are you using bluestore compression?
[...]
As a matter of fact, I do, at least for one of the 5 pools, which is
used exclusively with CephFS (I'm using CephFS as a way to achieve high
availability while replacing an NFS server).
However, I see these
Are you using bluestore compression?
On Sun, Dec 10, 2017 at 1:45 PM, Martin Preuss wrote:
> Hi (again),
>
> meanwhile I tried
>
> "ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0"
>
> but that resulted in a segfault (please see attached console log).
>
>
> Regards
Hi (again),
meanwhile I tried
"ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0"
but that resulted in a segfault (please see attached console log).
Regards
Martin
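In case anyone wants to reproduce this: the OSD has to be stopped
before fsck can open the store, and the ceph-bluestore-tool man page
lists log options that may show where it dies. An untested sketch:

  systemctl stop ceph-osd@0   # fsck needs exclusive access to the store
  # --deep also reads and validates object data, not just metadata
  ceph-bluestore-tool fsck --deep --path /var/lib/ceph/osd/ceph-0 \
      --log-file /tmp/fsck-osd0.log --log-level 20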
On 10.12.2017 at 14:34, Martin Preuss wrote:
> Hi,
>
> I'm new to Ceph. I started a ceph cluster from scratch on
Hi,
I'm new to Ceph. I started a ceph cluster from scratch on Debian 9,
consisting of 3 hosts, each with 3-4 OSDs (using 4 TB HDDs, currently
totalling 10 HDDs).
Right from the start I always received random scrub errors telling me
that some checksums didn't match the expected value, fixable with
"ceph pg repair".
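For reference, the usual sequence for chasing these down (the pool name
and PG id are placeholders):

  ceph health detail | grep -i inconsistent    # which PGs are affected
  rados list-inconsistent-pg cephfs_data       # inconsistent PGs in one pool
  rados list-inconsistent-obj 1.ab --format=json-pretty   # affected objects
  ceph pg repair 1.ab                          # trigger the repair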