Re: [ceph-users] All pgs with -> up [0] acting [0], new cluster installation

2015-07-13 Thread alberto ayllon
Maybe this can help to find the origin of the problem.

If I run ceph pg dump, at the end of the output I get:


osdstat kbused kbavail kb hb in hb out
0 36688 5194908 5231596 [1,2,3,4,5,6,7,8] []
1 34004 5197592 5231596 [] []
2 34004 5197592 5231596 [1] []
3 34004 5197592 5231596 [0,1,2,4,5,6,7,8] []
4 34004 5197592 5231596 [1,2] []
5 34004 5197592 5231596 [1,2,4] []
6 34004 5197592 5231596 [0,1,2,3,4,5,7,8] []
7 34004 5197592 5231596 [1,2,4,5] []
8 34004 5197592 5231596 [1,2,4,5,7] []
 sum 308720 46775644 47084364


Can someone please help me?



2015-07-13 11:45 GMT+02:00 alberto ayllon albertoayllon...@gmail.com:

 Hello everybody and thanks for your help.

 Hello, I'm a newbie with Ceph and I'm trying to install a Ceph cluster for
 test purposes.

 I have just installed a Ceph cluster with three VMs (Ubuntu 14.04); each
 one has one mon daemon and three OSDs, so each server has 3 disks.
 The cluster has only one pool (rbd) with pg_num and pgp_num = 280, and osd pool
 get rbd size returns 2.

 I installed the cluster with ceph-deploy; the Ceph version is 0.94.2.

 I think the cluster's OSDs are having peering problems, because if I run ceph
 status, it returns:

 # ceph status
 cluster d54a2216-b522-4744-a7cc-a2106e1281b6
  health HEALTH_WARN
 280 pgs degraded
 280 pgs stuck degraded
 280 pgs stuck unclean
 280 pgs stuck undersized
 280 pgs undersized
  monmap e3: 3 mons at {ceph01=
 172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0
 }
 election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
  osdmap e46: 9 osds: 9 up, 9 in
   pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
 301 MB used, 45679 MB / 45980 MB avail
  280 active+undersized+degraded

 And for all pgs, the command ceph pg map X.yy returns something like:

 osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]

 As far as I know, the Acting Set and the Up Set must have the same value,
 but since they both contain only osd.0, there are no OSDs defined to
 store the PG replicas, and I think this is why all PGs are in the
 active+undersized+degraded state.

 Does anyone have any idea what I have to do so that the Acting Set and Up Set
 reach correct values?


 Thanks a lot!


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] All pgs with -> up [0] acting [0], new cluster installation

2015-07-13 Thread Wido den Hollander


On 13-07-15 14:07, alberto ayllon wrote:
 On 13-07-15 13:12, alberto ayllon wrote:
 Maybe this can help to find the origin of the problem.
 
 If I run ceph pg dump, at the end of the output I get:
 
 
 What does 'ceph osd tree' tell you?
 
 It seems there is something wrong with your CRUSHMap.
 
 Wido
 
 
 Thanks for your answer Wido.
 
 Here is the output of ceph osd tree:
 
 # ceph osd tree
 ID WEIGHT TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1      0 root default
 -2      0     host ceph01
  0      0         osd.0        up      1.0          1.0
  3      0         osd.3        up      1.0          1.0
  6      0         osd.6        up      1.0          1.0
 -3      0     host ceph02
  1      0         osd.1        up      1.0          1.0
  4      0         osd.4        up      1.0          1.0
  7      0         osd.7        up      1.0          1.0
 -4      0     host ceph03
  2      0         osd.2        up      1.0          1.0
  5      0         osd.5        up      1.0          1.0
  8      0         osd.8        up      1.0          1.0
 
 

The weights of all the OSDs are zero (0). How big are the disks? I
think they are very tiny, e.g. 10GB?

You probably want a bit bigger disks to test with.

Or set the weight manually of each OSD:

$ ceph osd crush reweight osd.X 1
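
For example, something like this should reweight all nine OSDs in one go
(just a sketch; a weight of 1 is fine for testing, though the usual
convention is the disk size in TB):

$ # set a CRUSH weight of 1 for osd.0 .. osd.8
$ for i in $(seq 0 8); do ceph osd crush reweight osd.$i 1; done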

Wido

 
 osdstat  kbused  kbavail   kb        hb in              hb out
 0        36688   5194908   5231596   [1,2,3,4,5,6,7,8]  []
 1        34004   5197592   5231596   []                 []
 2        34004   5197592   5231596   [1]                []
 3        34004   5197592   5231596   [0,1,2,4,5,6,7,8]  []
 4        34004   5197592   5231596   [1,2]              []
 5        34004   5197592   5231596   [1,2,4]            []
 6        34004   5197592   5231596   [0,1,2,3,4,5,7,8]  []
 7        34004   5197592   5231596   [1,2,4,5]          []
 8        34004   5197592   5231596   [1,2,4,5,7]        []
  sum     308720  46775644  47084364
 
 
 Can someone please help me?
 
 
 
 2015-07-13 11:45 GMT+02:00 alberto ayllon albertoayllonces at gmail.com:
 
 Hello everybody and thanks for your help.
 
 Hello, I'm a newbie with Ceph and I'm trying to install a Ceph cluster
 for test purposes.
 
 I have just installed a Ceph cluster with three VMs (Ubuntu 14.04),
 each one has one mon daemon and three OSDs, so each server has 3
 disks.
 The cluster has only one pool (rbd) with pg_num and pgp_num = 280, and
 osd pool get rbd size returns 2.
 
 I installed the cluster with ceph-deploy; the Ceph version is
 0.94.2.
 
 I think the cluster's OSDs are having peering problems, because if I run
 ceph status, it returns:
 
 # ceph status
 cluster d54a2216-b522-4744-a7cc-a2106e1281b6
  health HEALTH_WARN
 280 pgs degraded
 280 pgs stuck degraded
 280 pgs stuck unclean
 280 pgs stuck undersized
 280 pgs undersized
  monmap e3: 3 mons at
 {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
 election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
  osdmap e46: 9 osds: 9 up, 9 in
   pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
 301 MB used, 45679 MB / 45980 MB avail
  280 active+undersized+degraded
 
 And for all pgs, the command ceph pg map X.yy returns something like:
 
 osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]
 
 As far as I know, the Acting Set and the Up Set must have the same
 value, but since they both contain only osd.0, there are no OSDs defined to
 store the PG replicas, and I think this is why all PGs are in the
 active+undersized+degraded state.
 
 Does anyone have any idea what I have to do so that the Acting Set and Up
 Set reach correct values?
 
 
 Thanks a lot!
 
 
 
 
 ___
 ceph-users mailing list
 ceph-users at lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] All pgs with -> up [0] acting [0], new cluster installation

2015-07-13 Thread alberto ayllon
Hi Wido.

Thanks again.

I will rebuild the cluster with bigger disks.

Again thanks for your help.


2015-07-13 14:15 GMT+02:00 Wido den Hollander w...@42on.com:



 On 13-07-15 14:07, alberto ayllon wrote:
  On 13-07-15 13:12, alberto ayllon wrote:
  Maybe this can help to find the origin of the problem.
 
  If I run ceph pg dump, at the end of the output I get:
 
 
  What does 'ceph osd tree' tell you?
 
  It seems there is something wrong with your CRUSHMap.
 
  Wido
 
 
  Thanks for your answer Wido.
 
  Here is the output of ceph osd tree:
 
  # ceph osd tree
  ID WEIGHT TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
  -1      0 root default
  -2      0     host ceph01
   0      0         osd.0        up      1.0          1.0
   3      0         osd.3        up      1.0          1.0
   6      0         osd.6        up      1.0          1.0
  -3      0     host ceph02
   1      0         osd.1        up      1.0          1.0
   4      0         osd.4        up      1.0          1.0
   7      0         osd.7        up      1.0          1.0
  -4      0     host ceph03
   2      0         osd.2        up      1.0          1.0
   5      0         osd.5        up      1.0          1.0
   8      0         osd.8        up      1.0          1.0
 
 

 The weights of all the OSDs are zero (0). How big are the disks? I
 think they are very tiny, e.g. 10GB?

 You probably want a bit bigger disks to test with.

 Or set the weight manually of each OSD:

 $ ceph osd crush reweight osd.X 1

 Wido

 
  osdstat  kbused  kbavail   kb        hb in              hb out
  0        36688   5194908   5231596   [1,2,3,4,5,6,7,8]  []
  1        34004   5197592   5231596   []                 []
  2        34004   5197592   5231596   [1]                []
  3        34004   5197592   5231596   [0,1,2,4,5,6,7,8]  []
  4        34004   5197592   5231596   [1,2]              []
  5        34004   5197592   5231596   [1,2,4]            []
  6        34004   5197592   5231596   [0,1,2,3,4,5,7,8]  []
  7        34004   5197592   5231596   [1,2,4,5]          []
  8        34004   5197592   5231596   [1,2,4,5,7]        []
   sum     308720  46775644  47084364
 
 
  Can someone please help me?
 
 
 
  2015-07-13 11:45 GMT+02:00 alberto ayllon albertoayllonces at gmail.com:
 
  Hello everybody and thanks for your help.
 
  Hello, I'm a newbie with Ceph and I'm trying to install a Ceph cluster
  for test purposes.
 
  I have just installed a Ceph cluster with three VMs (Ubuntu 14.04),
  each one has one mon daemon and three OSDs, so each server has 3
  disks.
  The cluster has only one pool (rbd) with pg_num and pgp_num = 280, and
  osd pool get rbd size returns 2.
 
  I installed the cluster with ceph-deploy; the Ceph version is
  0.94.2.
 
  I think the cluster's OSDs are having peering problems, because if I run
  ceph status, it returns:
 
  # ceph status
  cluster d54a2216-b522-4744-a7cc-a2106e1281b6
   health HEALTH_WARN
  280 pgs degraded
  280 pgs stuck degraded
  280 pgs stuck unclean
  280 pgs stuck undersized
  280 pgs undersized
   monmap e3: 3 mons at
  {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
  election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
   osdmap e46: 9 osds: 9 up, 9 in
pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
  301 MB used, 45679 MB / 45980 MB avail
   280 active+undersized+degraded
 
  And for all pgs, the command ceph pg map X.yy returns something
  like:
 
  osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]
 
  As far as I know, the Acting Set and the Up Set must have the same
  value, but since they both contain only osd.0, there are no OSDs defined to
  store the PG replicas, and I think this is why all PGs are in the
  active+undersized+degraded state.
 
  Does anyone have any idea what I have to do so that the Acting Set and
  Up Set reach correct values?
 
 
  Thanks a lot!
 
 
 
 
  ___
  ceph-users mailing list
  ceph-users at lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] All pgs with -> up [0] acting [0], new cluster installation

2015-07-13 Thread alberto ayllon
Hello everybody and thanks for your help.

Hello, I'm a newbie with Ceph and I'm trying to install a Ceph cluster for
test purposes.

I have just installed a Ceph cluster with three VMs (Ubuntu 14.04); each one
has one mon daemon and three OSDs, so each server has 3 disks.
The cluster has only one pool (rbd) with pg_num and pgp_num = 280, and osd pool
get rbd size returns 2.

I installed the cluster with ceph-deploy; the Ceph version is 0.94.2.

I think the cluster's OSDs are having peering problems, because if I run ceph
status, it returns:

# ceph status
cluster d54a2216-b522-4744-a7cc-a2106e1281b6
 health HEALTH_WARN
280 pgs degraded
280 pgs stuck degraded
280 pgs stuck unclean
280 pgs stuck undersized
280 pgs undersized
 monmap e3: 3 mons at {ceph01=
172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0
}
election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
 osdmap e46: 9 osds: 9 up, 9 in
  pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
301 MB used, 45679 MB / 45980 MB avail
 280 active+undersized+degraded

And for all pgs, the command ceph pg map X.yy returns something like:

osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]

As far as I know, the Acting Set and the Up Set must have the same value,
but since they both contain only osd.0, there are no OSDs defined to
store the PG replicas, and I think this is why all PGs are in the
active+undersized+degraded state.
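
From what I understand, with size = 2 each PG should map to two OSDs on
different hosts, so I would expect to see something more like this
(illustrative OSD ids):

osdmap e46 pg 0.d7 (0.d7) -> up [0,4] acting [0,4]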

Does anyone have any idea what I have to do so that the Acting Set and Up Set
reach correct values?


Thanks a lot!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] All pgs with -> up [0] acting [0], new cluster installation

2015-07-13 Thread alberto ayllon
On 13-07-15 13:12, alberto ayllon wrote:
 Maybe this can help to find the origin of the problem.

 If I run ceph pg dump, at the end of the output I get:


What does 'ceph osd tree' tell you?

It seems there is something wrong with your CRUSHMap.

Wido


Thanks for your answer Wido.

Here is the output of ceph osd tree:

# ceph osd tree
ID WEIGHT TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1      0 root default
-2      0     host ceph01
 0      0         osd.0        up      1.0          1.0
 3      0         osd.3        up      1.0          1.0
 6      0         osd.6        up      1.0          1.0
-3      0     host ceph02
 1      0         osd.1        up      1.0          1.0
 4      0         osd.4        up      1.0          1.0
 7      0         osd.7        up      1.0          1.0
-4      0     host ceph03
 2      0         osd.2        up      1.0          1.0
 5      0         osd.5        up      1.0          1.0
 8      0         osd.8        up      1.0          1.0



 osdstat  kbused  kbavail   kb        hb in              hb out
 0        36688   5194908   5231596   [1,2,3,4,5,6,7,8]  []
 1        34004   5197592   5231596   []                 []
 2        34004   5197592   5231596   [1]                []
 3        34004   5197592   5231596   [0,1,2,4,5,6,7,8]  []
 4        34004   5197592   5231596   [1,2]              []
 5        34004   5197592   5231596   [1,2,4]            []
 6        34004   5197592   5231596   [0,1,2,3,4,5,7,8]  []
 7        34004   5197592   5231596   [1,2,4,5]          []
 8        34004   5197592   5231596   [1,2,4,5,7]        []
  sum     308720  46775644  47084364


 Can someone please help me?



 2015-07-13 11:45 GMT+02:00 alberto ayllon albertoayllonces at gmail.com:

 Hello everybody and thanks for your help.

 Hello, I'm a newbie with Ceph and I'm trying to install a Ceph cluster
 for test purposes.

 I have just installed a Ceph cluster with three VMs (Ubuntu 14.04),
 each one has one mon daemon and three OSDs, so each server has 3
 disks.
 The cluster has only one pool (rbd) with pg_num and pgp_num = 280, and
 osd pool get rbd size returns 2.

 I installed the cluster with ceph-deploy; the Ceph version is
 0.94.2.

 I think the cluster's OSDs are having peering problems, because if I run
 ceph status, it returns:

 # ceph status
 cluster d54a2216-b522-4744-a7cc-a2106e1281b6
  health HEALTH_WARN
 280 pgs degraded
 280 pgs stuck degraded
 280 pgs stuck unclean
 280 pgs stuck undersized
 280 pgs undersized
  monmap e3: 3 mons at
 {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
 election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
  osdmap e46: 9 osds: 9 up, 9 in
   pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
 301 MB used, 45679 MB / 45980 MB avail
  280 active+undersized+degraded

 And for all pgs, the command ceph pg map X.yy returns something like:

 osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]

 As far as I know, the Acting Set and the Up Set must have the same
 value, but since they both contain only osd.0, there are no OSDs defined to
 store the PG replicas, and I think this is why all PGs are in the
 active+undersized+degraded state.

 Does anyone have any idea what I have to do so that the Acting Set and
 Up Set reach correct values?


 Thanks a lot!




 ___
 ceph-users mailing list
 ceph-users at lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] All pgs with -> up [0] acting [0], new cluster installation

2015-07-13 Thread Wido den Hollander


On 13-07-15 13:12, alberto ayllon wrote:
 Maybe this can help to find the origin of the problem.
 
 If I run ceph pg dump, at the end of the output I get:
 

What does 'ceph osd tree' tell you?

It seems there is something wrong with your CRUSHMap.
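
If you want to double check, you can also dump and decompile the CRUSH map
and look at the item weights directly, e.g. (illustrative file names):

$ # grab the compiled CRUSH map and decompile it to plain text
$ ceph osd getcrushmap -o /tmp/crushmap
$ crushtool -d /tmp/crushmap -o /tmp/crushmap.txt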

Wido

 
 osdstat  kbused  kbavail   kb        hb in              hb out
 0        36688   5194908   5231596   [1,2,3,4,5,6,7,8]  []
 1        34004   5197592   5231596   []                 []
 2        34004   5197592   5231596   [1]                []
 3        34004   5197592   5231596   [0,1,2,4,5,6,7,8]  []
 4        34004   5197592   5231596   [1,2]              []
 5        34004   5197592   5231596   [1,2,4]            []
 6        34004   5197592   5231596   [0,1,2,3,4,5,7,8]  []
 7        34004   5197592   5231596   [1,2,4,5]          []
 8        34004   5197592   5231596   [1,2,4,5,7]        []
  sum     308720  46775644  47084364
 
 
 Can someone please help me?
 
 
 
 2015-07-13 11:45 GMT+02:00 alberto ayllon albertoayllon...@gmail.com:
 
 Hello everybody and thanks for your help.
 
 Hello, I'm a newbie with Ceph and I'm trying to install a Ceph cluster
 for test purposes.
 
 I have just installed a Ceph cluster with three VMs (Ubuntu 14.04),
 each one has one mon daemon and three OSDs, so each server has 3 disks.
 The cluster has only one pool (rbd) with pg_num and pgp_num = 280, and
 osd pool get rbd size returns 2.
 
 I installed the cluster with ceph-deploy; the Ceph version is
 0.94.2.
 
 I think the cluster's OSDs are having peering problems, because if I run
 ceph status, it returns:
 
 # ceph status
 cluster d54a2216-b522-4744-a7cc-a2106e1281b6
  health HEALTH_WARN
 280 pgs degraded
 280 pgs stuck degraded
 280 pgs stuck unclean
 280 pgs stuck undersized
 280 pgs undersized
  monmap e3: 3 mons at
 {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
 election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
  osdmap e46: 9 osds: 9 up, 9 in
   pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
 301 MB used, 45679 MB / 45980 MB avail
  280 active+undersized+degraded
 
 And for all pgs, the command ceph pg map X.yy returns something like:
 
 osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]
 
 As far as I know, the Acting Set and the Up Set must have the same
 value, but since they both contain only osd.0, there are no OSDs defined to
 store the PG replicas, and I think this is why all PGs are in the
 active+undersized+degraded state.
 
 Does anyone have any idea what I have to do so that the Acting Set and
 Up Set reach correct values?
 
 
 Thanks a lot!
 
 
 
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com