[ceph-users] Multiple issues :( Ubuntu 14.04, latest Ceph

2014-12-15 Thread Benjamin
Hey there,

I've set up a small VirtualBox cluster of Ceph VMs. I have one
ceph-admin0 node, and three ceph0,ceph1,ceph2 nodes for a total of 4.

I've been following this guide:
http://ceph.com/docs/master/start/quick-ceph-deploy/ to the letter.

At the end of the guide, it calls for you to run ceph health... this is
what happens when I do.

HEALTH_ERR 64 pgs stale; 64 pgs stuck stale; 2 full osd(s); 2/2 in osds
are down

Additionally I would like to build and run Calamari to have an overview of
the cluster once it's up and running. I followed all the directions here:
http://calamari.readthedocs.org/en/latest/development/building_packages.html

but the calamari-client package refuses to properly build under
trusty-package for some reason. This is the output at the end of salt-call:

Summary

Succeeded: 3 (changed=4)
Failed:    3

Here is the full (verbose!) output: http://pastebin.com/WJwCxxxK

The machines each have Ubuntu 14.04 64-bit, with 1GB of RAM and 8GB of
disk. They have between 10% and 30% disk utilization, but what they all have
in common is that they *have free disk space*, meaning I have no idea what the
heck is causing Ceph to complain.

Help? :(

~ Benjamin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple issues :( Ubuntu 14.04, latest Ceph

2014-12-15 Thread Udo Lembke
Hi Benjamin,
On 15.12.2014 03:31, Benjamin wrote:
 Hey there,

 I've set up a small VirtualBox cluster of Ceph VMs. I have one
 ceph-admin0 node, and three ceph0,ceph1,ceph2 nodes for a total of 4.

 I've been following this
 guide: http://ceph.com/docs/master/start/quick-ceph-deploy/ to the letter.

 At the end of the guide, it calls for you to run ceph health... this
 is what happens when I do.

 HEALTH_ERR 64 pgs stale; 64 pgs stuck stale; 2 full osd(s); 2/2 in
 osds are down
Hmm, why do you have only two OSDs with three nodes?

Can you post the output of the following commands?
ceph health detail
ceph osd tree
rados df
ceph osd pool get data size
ceph osd pool get rbd size
df -h # on all OSD nodes

Then try starting the down OSDs:
/etc/init.d/ceph start osd.0  # on the node with osd.0
/etc/init.d/ceph start osd.1  # on the node with osd.1


Udo


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple issues :( Ubuntu 14.04, latest Ceph

2014-12-15 Thread Craig Lewis
On Sun, Dec 14, 2014 at 6:31 PM, Benjamin zor...@gmail.com wrote:

 The machines each have Ubuntu 14.04 64-bit, with 1GB of RAM and 8GB of
 disk. They have between 10% and 30% disk utilization, but what they all have
 in common is that they *have free disk space*, meaning I have no idea what
 the heck is causing Ceph to complain.


Each OSD is 8GB?  You need to make them at least 10 GB.

Ceph weights each disk as its size in TiB, and it truncates to two decimal
places.  So your 8 GiB disks have a weight of 0.00.  Bump it up to 10 GiB,
and it'll get a weight of 0.01.
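
For example (a rough sketch, assuming the OSDs are named osd.0 through osd.2):

ceph osd tree                        # check the WEIGHT column
ceph osd crush reweight osd.0 0.01   # ~10 GiB; repeat for osd.1 and osd.2

If the WEIGHT column still shows 0 after you grow the disks, the reweight
above should set it by hand.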

You should have 3 OSDs, one for each of ceph0,ceph1,ceph2.

If that doesn't fix the problem, go ahead and post the things Udo mentioned.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple issues :( Ubuntu 14.04, latest Ceph

2014-12-15 Thread Benjamin
Aha, excellent suggestion! I'll try that as soon as I get back, thank you.
- B
On Dec 15, 2014 5:06 PM, Craig Lewis cle...@centraldesktop.com wrote:


 On Sun, Dec 14, 2014 at 6:31 PM, Benjamin zor...@gmail.com wrote:

 The machines each have Ubuntu 14.04 64-bit, with 1GB of RAM and 8GB of
 disk. They have between 10% and 30% disk utilization, but what they all have
 in common is that they *have free disk space*, meaning I have no idea what
 the heck is causing Ceph to complain.


 Each OSD is 8GB?  You need to make them at least 10 GB.

 Ceph weights each disk as its size in TiB, and it truncates to two
 decimal places.  So your 8 GiB disks have a weight of 0.00.  Bump it up to
 10 GiB, and it'll get a weight of 0.01.

 You should have 3 OSDs, one for each of ceph0,ceph1,ceph2.

 If that doesn't fix the problem, go ahead and post the things Udo
 mentioned.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple issues :( Ubuntu 14.04, latest Ceph

2014-12-15 Thread Benjamin
I increased the OSDs to 10.5GB each and now I have a different issue...

cephy@ceph-admin0:~/ceph-cluster$ echo {Test-data} > testfile.txt
cephy@ceph-admin0:~/ceph-cluster$ rados put test-object-1 testfile.txt
--pool=data
error opening pool data: (2) No such file or directory
cephy@ceph-admin0:~/ceph-cluster$ ceph osd lspools
0 rbd,

Here's ceph -w:
cephy@ceph-admin0:~/ceph-cluster$ ceph -w
cluster b3e15af-SNIP
 health HEALTH_WARN mon.ceph0 low disk space; mon.ceph1 low disk space;
mon.ceph2 low disk space; clock skew detected on mon.ceph0, mon.ceph1,
mon.ceph2
 monmap e3: 4 mons at {ceph-admin0=
10.0.1.10:6789/0,ceph0=10.0.1.11:6789/0,ceph1=10.0.1.12:6789/0,ceph2=10.0.1.13:6789/0},
election epoch 10, quorum 0,1,2,3 ceph-admin0,ceph0,ceph1,ceph2
 osdmap e17: 3 osds: 3 up, 3 in
  pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
19781 MB used, 7050 MB / 28339 MB avail
  64 active+clean

Any other commands to run that would be helpful? Is it safe to simply
manually create the data and metadata pools myself?
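
If it is safe, this is roughly what I had in mind (just a sketch, with the
pg count copied from the 64 pgs above):

ceph osd pool create data 64
ceph osd pool create metadata 64

I'm also assuming the clock skew warning just means the VMs need NTP, e.g.
sudo apt-get install -y ntp on each node.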

On Mon, Dec 15, 2014 at 5:07 PM, Benjamin zor...@gmail.com wrote:

 Aha, excellent suggestion! I'll try that as soon as I get back, thank you.
 - B
 On Dec 15, 2014 5:06 PM, Craig Lewis cle...@centraldesktop.com wrote:


 On Sun, Dec 14, 2014 at 6:31 PM, Benjamin zor...@gmail.com wrote:

 The machines each have Ubuntu 14.04 64-bit, with 1GB of RAM and 8GB of
 disk. They have between 10% and 30% disk utilization, but what they all have
 in common is that they *have free disk space*, meaning I have no idea what
 the heck is causing Ceph to complain.


 Each OSD is 8GB?  You need to make them at least 10 GB.

 Ceph weights each disk as its size in TiB, and it truncates to two
 decimal places.  So your 8 GiB disks have a weight of 0.00.  Bump it up to
 10 GiB, and it'll get a weight of 0.01.

 You should have 3 OSDs, one for each of ceph0,ceph1,ceph2.

 If that doesn't fix the problem, go ahead and post the things Udo
 mentioned.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple issues :( Ubuntu 14.04, latest Ceph

2014-12-15 Thread Udo Lembke
Hi,
see here:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg15546.html

Udo

On 16.12.2014 05:39, Benjamin wrote:
 I increased the OSDs to 10.5GB each and now I have a different issue...

 cephy@ceph-admin0:~/ceph-cluster$ echo {Test-data} > testfile.txt
 cephy@ceph-admin0:~/ceph-cluster$ rados put test-object-1 testfile.txt
 --pool=data
 error opening pool data: (2) No such file or directory
 cephy@ceph-admin0:~/ceph-cluster$ ceph osd lspools
 0 rbd,

 Here's ceph -w:
 cephy@ceph-admin0:~/ceph-cluster$ ceph -w
 cluster b3e15af-SNIP
  health HEALTH_WARN mon.ceph0 low disk space; mon.ceph1 low disk
 space; mon.ceph2 low disk space; clock skew detected on mon.ceph0,
 mon.ceph1, mon.ceph2
  monmap e3: 4 mons at
 {ceph-admin0=10.0.1.10:6789/0,ceph0=10.0.1.11:6789/0,ceph1=10.0.1.12:6789/0,ceph2=10.0.1.13:6789/0},
 election epoch 10, quorum 0,1,2,3 ceph-admin0,ceph0,ceph1,ceph2
  osdmap e17: 3 osds: 3 up, 3 in
   pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
 19781 MB used, 7050 MB / 28339 MB avail
   64 active+clean

 Any other commands to run that would be helpful? Is it safe to simply
 manually create the data and metadata pools myself?

 On Mon, Dec 15, 2014 at 5:07 PM, Benjamin zor...@gmail.com
 mailto:zor...@gmail.com wrote:

 Aha, excellent suggestion! I'll try that as soon as I get back,
 thank you.
 - B

 On Dec 15, 2014 5:06 PM, Craig Lewis cle...@centraldesktop.com
 mailto:cle...@centraldesktop.com wrote:


 On Sun, Dec 14, 2014 at 6:31 PM, Benjamin zor...@gmail.com
 mailto:zor...@gmail.com wrote:

 The machines each have Ubuntu 14.04 64-bit, with 1GB of
 RAM and 8GB of disk. They have between 10% and 30% disk
 utilization, but what they all have in common is that they
 *have free disk space*, meaning I have no idea what the
 heck is causing Ceph to complain.


 Each OSD is 8GB?  You need to make them at least 10 GB.

 Ceph weights each disk as its size in TiB, and it truncates
 to two decimal places.  So your 8 GiB disks have a weight of
 0.00.  Bump it up to 10 GiB, and it'll get a weight of 0.01.

 You should have 3 OSDs, one for each of ceph0,ceph1,ceph2.

 If that doesn't fix the problem, go ahead and post the things
 Udo mentioned.



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple issues :( Ubuntu 14.04, latest Ceph

2014-12-15 Thread Benjamin
Hi Udo,

Thanks! Creating the MDS did not add data and metadata pools for me, but I
was able to simply create them myself.

The tutorials also suggest you make new pools, cephfs_data and
cephfs_metadata - would simply using data and metadata work better?
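
For reference, what the tutorials show is roughly this (just a sketch, and
assuming a release new enough to have ceph fs new):

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat   # should eventually show the MDS as active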

- B

On Mon, Dec 15, 2014, 10:37 PM Udo Lembke ulem...@polarzone.de wrote:

  Hi,
 see here:
 https://www.mail-archive.com/ceph-users@lists.ceph.com/msg15546.html

 Udo


 On 16.12.2014 05:39, Benjamin wrote:

 I increased the OSDs to 10.5GB each and now I have a different issue...

 cephy@ceph-admin0:~/ceph-cluster$ echo {Test-data} > testfile.txt
 cephy@ceph-admin0:~/ceph-cluster$ rados put test-object-1 testfile.txt
 --pool=data
 error opening pool data: (2) No such file or directory
 cephy@ceph-admin0:~/ceph-cluster$ ceph osd lspools
 0 rbd,

  Here's ceph -w:
 cephy@ceph-admin0:~/ceph-cluster$ ceph -w
 cluster b3e15af-SNIP
  health HEALTH_WARN mon.ceph0 low disk space; mon.ceph1 low disk
 space; mon.ceph2 low disk space; clock skew detected on mon.ceph0,
 mon.ceph1, mon.ceph2
  monmap e3: 4 mons at {ceph-admin0=
 10.0.1.10:6789/0,ceph0=10.0.1.11:6789/0,ceph1=10.0.1.12:6789/0,ceph2=10.0.1.13:6789/0},
 election epoch 10, quorum 0,1,2,3 ceph-admin0,ceph0,ceph1,ceph2
  osdmap e17: 3 osds: 3 up, 3 in
   pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
 19781 MB used, 7050 MB / 28339 MB avail
   64 active+clean

  Any other commands to run that would be helpful? Is it safe to simply
 manually create the data and metadata pools myself?

 On Mon, Dec 15, 2014 at 5:07 PM, Benjamin zor...@gmail.com wrote:

 Aha, excellent suggestion! I'll try that as soon as I get back, thank you.
 - B
  On Dec 15, 2014 5:06 PM, Craig Lewis cle...@centraldesktop.com
 wrote:


 On Sun, Dec 14, 2014 at 6:31 PM, Benjamin zor...@gmail.com wrote:

 The machines each have Ubuntu 14.04 64-bit, with 1GB of RAM and 8GB of
 disk. They have between 10% and 30% disk utilization, but what they all have
 in common is that they *have free disk space* meaning I have no idea
 what the heck is causing Ceph to complain.


 Each OSD is 8GB?  You need to make them at least 10 GB.

  Ceph weights each disk as its size in TiB, and it truncates to two
 decimal places.  So your 8 GiB disks have a weight of 0.00.  Bump it up to
 10 GiB, and it'll get a weight of 0.01.

  You should have 3 OSDs, one for each of ceph0,ceph1,ceph2.

  If that doesn't fix the problem, go ahead and post the things Udo
 mentioned.



 ___
 ceph-users mailing list
 ceph-us...@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com