Hi,
see here:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg15546.html
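
For what it's worth: newer Ceph releases only create the "rbd" pool by
default, so the "data" pool you are writing to simply doesn't exist yet, and
it is fine to create it yourself. Roughly like this (the pg count of 64 is
just an example, pick whatever fits your cluster):

  ceph osd pool create data 64 64
  rados put test-object-1 testfile.txt --pool=data
  rados -p data ls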

Udo

On 16.12.2014 05:39, Benjamin wrote:
> I increased the OSDs to 10.5 GB each and now I have a different issue...
>
> cephy@ceph-admin0:~/ceph-cluster$ echo {Test-data} > testfile.txt
> cephy@ceph-admin0:~/ceph-cluster$ rados put test-object-1 testfile.txt
> --pool=data
> error opening pool data: (2) No such file or directory
> cephy@ceph-admin0:~/ceph-cluster$ ceph osd lspools
> 0 rbd,
>
> Here's ceph -w:
> cephy@ceph-admin0:~/ceph-cluster$ ceph -w
>     cluster b3e15af-SNIP
>      health HEALTH_WARN mon.ceph0 low disk space; mon.ceph1 low disk
> space; mon.ceph2 low disk space; clock skew detected on mon.ceph0,
> mon.ceph1, mon.ceph2
>      monmap e3: 4 mons at
> {ceph-admin0=10.0.1.10:6789/0,ceph0=10.0.1.11:6789/0,ceph1=10.0.1.12:6789/0,ceph2=10.0.1.13:6789/0},
> election epoch 10, quorum 0,1,2,3 ceph-admin0,ceph0,ceph1,ceph2
>      osdmap e17: 3 osds: 3 up, 3 in
>       pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
>             19781 MB used, 7050 MB / 28339 MB avail
>                   64 active+clean
>
> Any other commands to run that would be helpful? Is it safe to just
> create the "data" and "metadata" pools manually myself?
>
> On Mon, Dec 15, 2014 at 5:07 PM, Benjamin <zor...@gmail.com> wrote:
>
>     Aha, excellent suggestion! I'll try that as soon as I get back,
>     thank you.
>     - B
>
>     On Dec 15, 2014 5:06 PM, "Craig Lewis" <cle...@centraldesktop.com> wrote:
>
>
>         On Sun, Dec 14, 2014 at 6:31 PM, Benjamin <zor...@gmail.com> wrote:
>
>             The machines each run Ubuntu 14.04 64-bit, with 1 GB of
>             RAM and 8 GB of disk. Disk utilization is between 10% and
>             30% on all of them, so they *have free disk space*, which
>             means I have no idea what is causing Ceph to complain.
>
>
>         Each OSD is 8 GB?  You need to make them at least 10 GB.
>
>         Ceph weights each disk as its size in TiB, truncated to two
>         decimal places.  So your 8 GiB disks get a weight of 0.00.
>         Bump them up to 10 GiB and they'll get a weight of 0.01.
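>
>         If you want to double-check, "ceph osd tree" shows the weight
>         each OSD actually got, and you can also set the CRUSH weight
>         by hand instead of resizing (osd.0 is just an example ID):
>
>             ceph osd tree
>             ceph osd crush reweight osd.0 0.01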
>
>         You should have 3 OSDs, one for each of ceph0, ceph1, and ceph2.
>
>         If that doesn't fix the problem, go ahead and post the things
>         Udo mentioned.
>
>
>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
