Re: [ceph-users] pgs not deep-scrubbed in time

2019-07-04 Thread Alexander Walker
Hi,
thanks for your quick answer. This option is set to false.

root@heku1 ~# ceph daemon osd.1 config get osd_scrub_auto_repair
{
    "osd_scrub_auto_repair": "false"
}

Best regards
Alex

On 03.07.2019 at 15:42, Paul Emmerich wrote: auto repair
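If the intent is to have scrub errors repaired automatically, the option can also be enabled cluster-wide; a minimal sketch, assuming a Mimic cluster with the centralized config database (osd.1 and the prompt are just examples from this thread):

root@heku1 ~# ceph config set osd osd_scrub_auto_repair true        # push the setting to all OSDs via the config db
root@heku1 ~# ceph daemon osd.1 config get osd_scrub_auto_repair    # confirm what the running daemon actually uses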

[ceph-users] pgs not deep-scrubbed in time

2019-07-03 Thread Alexander Walker
Hi,
my cluster has been showing this message for the last two weeks.

Ceph version (ceph -v):
root@heku1 ~ # ceph -v
ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)

All pgs are active+clean:
root@heku1 ~ # ceph -s
  cluster:
    id: 0839c91a-f3ca-4119-853b-eb10904cf322
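One way to work through such a backlog is to list the affected PGs and queue deep scrubs manually; a rough sketch, assuming the warning comes from the usual PG_NOT_DEEP_SCRUBBED health check and <pgid> is replaced with a PG from that list:

root@heku1 ~ # ceph health detail | grep 'not deep-scrubbed since'   # show which PGs are behind
root@heku1 ~ # ceph pg deep-scrub <pgid>                             # manually queue a deep scrub for one PG
root@heku1 ~ # ceph config set osd osd_max_scrubs 2                  # optionally allow more concurrent scrubs per OSD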

[ceph-users] ceph-mon crash after update from hammer 0.94.7 to jewel 10.2.3

2016-11-09 Thread Alexander Walker
Hello,
I have a cluster of three nodes (two OSDs on each node). First I updated one node - the OSDs are OK and running, but ceph-mon crashes.

cephus@ceph3:~$ sudo /usr/bin/ceph-mon --cluster=ceph -i ceph3 -f --setuser ceph --setgroup ceph --debug_mon 20
starting mon.ceph3 rank 2 at
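Since Jewel runs the daemons as the ceph user, it is worth checking that the mon data directory was chowned during the upgrade, and keeping the complete startup log helps with a bug report; a sketch, assuming the default data path /var/lib/ceph/mon/ceph-ceph3 and an example log file name mon.ceph3.log:

cephus@ceph3:~$ ls -ld /var/lib/ceph/mon/ceph-ceph3      # should be owned by ceph:ceph when using --setuser/--setgroup ceph
cephus@ceph3:~$ sudo chown -R ceph:ceph /var/lib/ceph    # upgrade step from the Jewel release notes, if ownership is still root
cephus@ceph3:~$ sudo /usr/bin/ceph-mon --cluster=ceph -i ceph3 -f --setuser ceph --setgroup ceph --debug_mon 20 --debug_ms 1 2>&1 | tee mon.ceph3.log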

[ceph-users] File striping configuration?

2015-09-07 Thread Alexander Walker
Hi,
I've found this document: https://ceph.com/docs/v0.80/dev/file-striping. But I don't understand how and where I can configure this. I want to use it with CephFS. Can someone help me?
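In CephFS the striping parameters are set per file or per directory through virtual extended attributes; a minimal sketch, assuming CephFS is mounted at /cephfs and the client supports the ceph.*.layout xattrs (the values and the /cephfs/data path are only examples):

# new files created under this directory inherit the layout
setfattr -n ceph.dir.layout.stripe_unit  -v 1048576 /cephfs/data    # 1 MiB stripe unit
setfattr -n ceph.dir.layout.stripe_count -v 4       /cephfs/data    # stripe across 4 objects
setfattr -n ceph.dir.layout.object_size  -v 4194304 /cephfs/data    # 4 MiB RADOS objects
# inspect the layout of an existing file
getfattr -n ceph.file.layout /cephfs/data/somefile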

[ceph-users] Ceph Client parallelized access?

2015-09-04 Thread Alexander Walker
Hi,
I've configured a CephFS and mounted it via fstab:

ceph1:6789,ceph2:6789,ceph3:6789:/ /cephfs ceph name=admin,secret=AQDVOOhVxEI7IBAAM+4el6WYbCwKvFxmW7ygcA==,noatime 0 2

Does this mean:
1. The Ceph client can write data to all three servers at the same time?
2. Client access the second
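The entry itself looks right once the mount point and filesystem type are separated; one common refinement is to keep the key out of fstab with the secretfile option, a sketch assuming the key has been saved to /etc/ceph/admin.secret:

# /etc/fstab - kernel CephFS mount, all three monitors listed for redundancy
ceph1:6789,ceph2:6789,ceph3:6789:/  /cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  2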