Re: [ceph-users] pgs not deep-scrubbed in time

2019-07-04 Thread Alexander Walker

Hi,
thanks for your quick answer. This option is set to false.
root@heku1 ~# ceph daemon osd.1 config get osd_scrub_auto_repair
{
    "osd_scrub_auto_repair": "false"
}
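
For completeness, the same check can be run against every OSD that has a local admin socket; a small sketch (the socket path is the usual /var/run/ceph default; adjust it if yours differs):

# Check osd_scrub_auto_repair on every OSD with a local admin socket.
for sock in /var/run/ceph/ceph-osd.*.asok; do
    echo "${sock}:"
    ceph daemon "${sock}" config get osd_scrub_auto_repair
done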

Best regards
Alex

On 03.07.2019 at 15:42, Paul Emmerich wrote:

> auto repair enabled


[ceph-users] pgs not deep-scrubbed in time

2019-07-03 Thread Alexander Walker

Hi,
My cluster has been showing this message for the last two weeks.

Ceph Version (ceph -v):

root@heku1 ~ # ceph -v
ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic
(stable)

All pgs are active+clean:

root@heku1 ~ # ceph -s
  cluster:
    id: 0839c91a-f3ca-4119-853b-eb10904cf322
    health: HEALTH_WARN
    514 pgs not deep-scrubbed in time

  services:
    mon: 5 daemons, quorum heku1,heku2,heku3,heku4,heku5
    mgr: heku2(active), standbys: heku1, heku5, heku4, heku3
    mds: cephfs_fs-1/1/1 up  {0=heku2=up:active}, 3 up:standby
    osd: 10 osds: 10 up, 10 in

  data:
    pools:   4 pools, 514 pgs
    objects: 1.17 M objects, 1.3 TiB
    usage:   2.5 TiB used, 2.8 TiB / 5.3 TiB avail
    pgs: 514 active+clean

  io:
    client:   2.6 KiB/s rd, 1.3 MiB/s wr, 0 op/s rd, 133 op/s wr

I've manually run the deep-scrub process on every PG, but the message did
not change:

ceph pg dump | grep -i active+clean | awk '{print $1}' | while read i; do ceph pg deep-scrub ${i}; done
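
That loop kicks off a deep scrub on every PG at once. A gentler variant would start with the PGs whose deep-scrub timestamp is oldest; a sketch, assuming jq is installed and that the stats array is top-level (on some releases it sits under .pg_map.pg_stats instead):

# Deep-scrub the 20 PGs with the oldest last_deep_scrub_stamp first.
ceph pg dump --format json 2>/dev/null \
  | jq -r '.pg_stats[] | "\(.last_deep_scrub_stamp) \(.pgid)"' \
  | sort | head -20 | awk '{print $NF}' \
  | while read pg; do ceph pg deep-scrub "${pg}"; done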

I've also changed these options and restarted all OSDs:

root@heku1 ~# ceph daemon osd.0 config get osd_deep_scrub_interval
{
    "osd_deep_scrub_interval": "604800.00" << 7 days
}
root@heku1 ~# ceph daemon osd.0 config get mon_warn_not_deep_scrubbed
{
    "mon_warn_not_deep_scrubbed": "691200" < 8 days
}
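
If the intervals themselves are too tight, they can also be widened cluster-wide; a sketch using the mimic config database (all values are in seconds, e.g. 14 days = 14 * 24 * 3600 = 1209600; the exact thresholds here are only an illustration):

# Allow 14 days between deep scrubs, warn after 15 days (1296000 s).
ceph config set osd osd_deep_scrub_interval 1209600
ceph config set mon mon_warn_not_deep_scrubbed 1296000
# Push the new interval into the running OSDs without a restart:
ceph tell osd.* injectargs '--osd_deep_scrub_interval 1209600'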

Can anyone help me?

Best Regards
Alex



[ceph-users] ceph-mon crash after update from hammer 0.94.7 to jewel 10.2.3

2016-11-09 Thread Alexander Walker

Hello,

I have a cluster of three nodes (two OSDs on each node). First I updated
one node: the OSDs are OK and running, but ceph-mon crashed.


cephus@ceph3:~$ sudo /usr/bin/ceph-mon --cluster=ceph -i ceph3 -f --setuser ceph --setgroup ceph --debug_mon 20
starting mon.ceph3 rank 2 at 192.168.49.103:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph3 fsid 3c58a184-bf27-4273-8000-405513006a7b
mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7fc9f74ac4c0 time 2016-11-09 14:57:03.743773
mds/FSMap.cc: 628: FAILED assert(i.second.state == MDSMap::STATE_STANDBY)
 ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) [0x55c5ddd031eb]
 2: (FSMap::sanity() const+0x932) [0x55c5ddc28112]
 3: (MDSMonitor::update_from_paxos(bool*)+0x450) [0x55c5dda53160]
 4: (PaxosService::refresh(bool*)+0x19a) [0x55c5dd9c6b4a]
 5: (Monitor::refresh_from_paxos(bool*)+0x143) [0x55c5dd963433]
 6: (Monitor::init_paxos()+0x85) [0x55c5dd963845]
 7: (Monitor::preinit()+0x925) [0x55c5dd973ec5]
 8: (main()+0x236d) [0x55c5dd901e9d]
 9: (__libc_start_main()+0xf5) [0x7fc9f4a2bf45]
 10: (()+0x26106a) [0x55c5dd95406a]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.


2016-11-09 14:57:03.757446 7fc9f74ac4c0 -1 *** Caught signal (Aborted) **
 in thread 7fc9f74ac4c0 thread_name:ceph-mon
 ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
 1: (()+0x4f6222) [0x55c5ddbe9222]
 2: (()+0x10330) [0x7fc9f67ba330]
 3: (gsignal()+0x37) [0x7fc9f4a40c37]
 4: (abort()+0x148) [0x7fc9f4a44028]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x265) [0x55c5ddd033c5]
 6: (FSMap::sanity() const+0x932) [0x55c5ddc28112]
 7: (MDSMonitor::update_from_paxos(bool*)+0x450) [0x55c5dda53160]
 8: (PaxosService::refresh(bool*)+0x19a) [0x55c5dd9c6b4a]
 9: (Monitor::refresh_from_paxos(bool*)+0x143) [0x55c5dd963433]
 10: (Monitor::init_paxos()+0x85) [0x55c5dd963845]
 11: (Monitor::preinit()+0x925) [0x55c5dd973ec5]
 12: (main()+0x236d) [0x55c5dd901e9d]
 13: (__libc_start_main()+0xf5) [0x7fc9f4a2bf45]
 14: (()+0x26106a) [0x55c5dd95406a]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.


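For anyone who hits the same assert: the check fires while the monitor digests the pre-jewel MDS map, and one suggested workaround is to skip the FSMap sanity check until the whole cluster runs jewel. A sketch, assuming this release honours the mon_mds_skip_sanity option (please verify before relying on it):

# ceph.conf on the monitor nodes; remove again once the upgrade is
# complete and the mons form a healthy quorum.
[mon]
    mon mds skip sanity = true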

[ceph-users] File striping configuration?

2015-09-07 Thread Alexander Walker

Hi,
I've found this document: https://ceph.com/docs/v0.80/dev/file-striping,
but I don't understand how and where to configure this. I want to use it
with CephFS.

Can someone help me?
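
For context, the striping parameters from that document surface in CephFS as file/directory layouts; a sketch using the layout virtual xattrs (attribute names taken from the CephFS layout xattr interface; whether a v0.80-era client exposes them should be verified):

# Set a layout on a directory; files created under it inherit it.
# stripe_unit and object_size are in bytes (1 MiB and 4 MiB here).
setfattr -n ceph.dir.layout.stripe_unit  -v 1048576 /mnt/cephfs/data
setfattr -n ceph.dir.layout.stripe_count -v 4       /mnt/cephfs/data
setfattr -n ceph.dir.layout.object_size  -v 4194304 /mnt/cephfs/data
# Read the layout back:
getfattr -n ceph.dir.layout /mnt/cephfs/data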


[ceph-users] Ceph Client parallelized access?

2015-09-04 Thread Alexander Walker

Hi,
I've configured a CephFS and mounted it via fstab:

ceph1:6789,ceph2:6789,ceph3:6789:/ /cephfs ceph name=admin,secret=AQDVOOhVxEI7IBAAM+4el6WYbCwKvFxmW7ygcA==,noatime 0 2


Does this mean:

1. The Ceph client can write data to all three servers at the same time?
2. The client will access the second server if the first one is not reachable?
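
For reference, the fstab entry above should be equivalent to this manual mount; a sketch (the secretfile path is made up here; using one keeps the key out of fstab and the process list):

# Same mount done by hand; all three mons are listed so the client can
# fail over to another one if the first is unreachable.
sudo mount -t ceph ceph1:6789,ceph2:6789,ceph3:6789:/ /cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,noatime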


best regards
Alex