Hi Christian,

That seems right, thanks.

But again, the matches are only in the gzipped log files (the ones already
rotated by logrotate), not in the current log files.
Example:

[root@cs2 ~]# grep -ir "WRN" /var/log/ceph/
Binary file /var/log/ceph/ceph-mon.cs2.log-20140612.gz matches
Binary file /var/log/ceph/ceph.log-20140614.gz matches
Binary file /var/log/ceph/ceph.log-20140611.gz matches
Binary file /var/log/ceph/ceph.log-20140612.gz matches
Binary file /var/log/ceph/ceph.log-20140613.gz matches
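
For the record: plain grep only reports "Binary file ... matches" on the
compressed archives. zgrep (shipped with gzip) searches them in place and
prints the matching lines, e.g.:

[root@cs2 ~]# zgrep -i "WRN" /var/log/ceph/ceph.log-*.gz

or, for a single archive, the zcat equivalent:

[root@cs2 ~]# zcat /var/log/ceph/ceph.log-20140612.gz | grep -i "WRN"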

Thanks,
Andrija


On 17 June 2014 10:48, Christian Balzer <ch...@gol.com> wrote:

>
> Hello,
>
> On Tue, 17 Jun 2014 10:30:44 +0200 Andrija Panic wrote:
>
> > Hi,
> >
> > I have a 3-node (2 OSDs per node) Ceph cluster, running fine, not much
> > data, network also fine:
> > Ceph 0.72.2.
> >
> > When I issue the "ceph status" command I randomly get HEALTH_OK, and
> > immediately after, repeating the command, I get HEALTH_WARN.
> >
> > Example given below; these commands were issued less than 1 second
> > apart.
> > There are NO occurrences of the word "warn" in the logs (grep -ir "warn"
> > /var/log/ceph) on any of the servers...
> > Because of this, my status monitoring script raises false alerts (a
> > debounce sketch follows below the quoted thread).
> >
> If I recall correctly, the logs will show INF, WRN and ERR, so grep for
> WRN.
>
> Regards,
>
> Christian
>
> > Any help would be greatly appreciated.
> >
> > Thanks,
> >
> > [root@cs3 ~]# ceph status
> >     cluster cab20370-bf6a-4589-8010-8d5fc8682eab
> >      health HEALTH_OK
> >      monmap e2: 3 mons at
> > {cs1=10.44.xxx.10:6789/0,cs2=10.44.xxx.11:6789/0,cs3=10.44.xxx.12:6789/0},
> > election epoch 122, quorum 0,1,2 cs1,cs2,cs3
> >      osdmap e890: 6 osds: 6 up, 6 in
> >       pgmap v2379904: 448 pgs, 4 pools, 862 GB data, 217 kobjects
> >             2576 GB used, 19732 GB / 22309 GB avail
> >                  448 active+clean
> >   client io 17331 kB/s rd, 113 kB/s wr, 176 op/s
> >
> > [root@cs3 ~]# ceph status
> >     cluster cab20370-bf6a-4589-8010-8d5fc8682eab
> >      health HEALTH_WARN
> >      monmap e2: 3 mons at
> > {cs1=10.44.xxx.10:6789/0,cs2=10.44.xxx.11:6789/0,cs3=10.44.xxx.12:6789/0},
> > election epoch 122, quorum 0,1,2 cs1,cs2,cs3
> >      osdmap e890: 6 osds: 6 up, 6 in
> >       pgmap v2379905: 448 pgs, 4 pools, 862 GB data, 217 kobjects
> >             2576 GB used, 19732 GB / 22309 GB avail
> >                  448 active+clean
> >   client io 28383 kB/s rd, 566 kB/s wr, 321 op/s
> >
> > [root@cs3 ~]# ceph status
> >     cluster cab20370-bf6a-4589-8010-8d5fc8682eab
> >      health HEALTH_OK
> >      monmap e2: 3 mons at
> > {cs1=10.44.xxx.10:6789/0,cs2=10.44.xxx.11:6789/0,cs3=10.44.xxx.12:6789/0},
> > election epoch 122, quorum 0,1,2 cs1,cs2,cs3
> >      osdmap e890: 6 osds: 6 up, 6 in
> >       pgmap v2379913: 448 pgs, 4 pools, 862 GB data, 217 kobjects
> >             2576 GB used, 19732 GB / 22309 GB avail
> >                  448 active+clean
> >   client io 21632 kB/s rd, 49354 B/s wr, 283 op/s
> >
>
>
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com           Global OnLine Japan/Fusion Communications
> http://www.gol.com/
>
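
PS: since the false alerts come from a momentary HEALTH_WARN between two
checks, here is a minimal debounce sketch for the monitoring script (the
single re-check and the 5-second pause are my own arbitrary assumptions,
not anything Ceph-specific):

#!/bin/bash
# Debounce sketch: only alert when the cluster is still unhealthy after
# a short re-check, so a HEALTH_WARN that clears within seconds does not
# trigger an alert. The 5-second pause is an arbitrary assumption.
status=$(ceph health)
if [ "$status" != "HEALTH_OK" ]; then
    sleep 5
    status=$(ceph health)
    if [ "$status" != "HEALTH_OK" ]; then
        echo "ceph reports: $status"   # hook the real alerting in here
    fi
fi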



-- 

Andrija Panić
--------------------------------------
  http://admintweets.com
--------------------------------------