Re: recovering from 95% full osd

2013-01-24 Thread Dan Mick
What is pool 2 (rbd) for? It looks like it's absolutely empty. By default it's for rbd images (see the rbd command etc.). Its being empty or not has no effect on the other pools.

Re: recovering from 95% full osd

2013-01-09 Thread Roman Hlynovskiy
Hello again! I left the system in a working state overnight and found it in a weird state this morning: chef@ceph-node02:/var/log/ceph$ ceph -s health HEALTH_OK monmap e4: 3 mons at {a=192.168.7.11:6789/0,b=192.168.7.12:6789/0,c=192.168.7.13:6789/0}, election epoch 254, quorum 0,1,2 a,b,c
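When the cluster reports an unexpected state like the above, the usual first step is to drill down from the summary to the per-OSD detail. A minimal sketch, assuming a Bobtail-era (2013) ceph CLI with the default admin keyring in place:

```shell
# Overall cluster state: health, monmap, osdmap, pg states.
ceph -s

# Expand the health summary; this names the specific OSDs behind
# any "near full" / "full" warnings instead of just counting them.
ceph health detail

# Show the CRUSH hierarchy with each OSD's up/down and in/out status.
ceph osd tree
```

These are read-only queries, so they are safe to run repeatedly while diagnosing.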

Re: recovering from 95% full osd

2013-01-08 Thread Roman Hlynovskiy
Hello Mark, ok, adding another osd is a good option; however, my initial plan was to raise the full ratio watermark and remove unnecessary data. It's clear to me that overfilling one of the OSDs will cause big problems for fs consistency. But... the 2 other OSDs still have plenty of space. what is the
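The thread doesn't quote the exact command, but in Ceph of this vintage the full/nearfull watermarks could be raised temporarily so that deletions can proceed on an otherwise write-blocked cluster. A sketch (the ratio values are illustrative, not recommendations; revert to the defaults once space is reclaimed):

```shell
# Defaults are 0.85 (nearfull) and 0.95 (full); nudging them up
# buys headroom to delete data from an OSD stuck at the full mark.
ceph pg set_nearfull_ratio 0.90
ceph pg set_full_ratio 0.98
```

Raising the full ratio is a stopgap: an OSD that actually runs out of disk space can corrupt its store, which is why the watermark exists in the first place.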

Re: recovering from 95% full osd

2013-01-08 Thread Roman Hlynovskiy
Thanks a lot Greg, that was the black magic command I was looking for :) I deleted some obsolete data and reached these figures: chef@cephgw:~$ ./clu.sh exec df -kh|grep osd /dev/mapper/vg00-osd 252G 153G 100G 61% /var/lib/ceph/osd/ceph-0 /dev/mapper/vg00-osd 252G 180G 73G 72%
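The df figures above can be checked against Ceph's default watermarks (85% nearfull, 95% full) with a quick awk pass over df output. A local sketch: the first sample line is the ceph-0 figure quoted in the message; the second line and its ceph-1 mount point are invented here purely to show the FULL case.

```shell
# Hypothetical df-style sample (device size used avail use% mount).
# Line 1 mirrors the thread; line 2 is made up to trip the full check.
df_sample='/dev/mapper/vg00-osd 252G 153G 100G 61% /var/lib/ceph/osd/ceph-0
/dev/mapper/vg00-osd 252G 240G 13G 96% /var/lib/ceph/osd/ceph-1'

# Strip the % sign and flag anything at/over the default watermarks.
echo "$df_sample" | awk '{
    pct = $5; sub(/%/, "", pct)
    state = (pct >= 95) ? "FULL" : (pct >= 85) ? "NEARFULL" : "ok"
    printf "%s %s%% %s\n", $6, pct, state
}'
```

On a live node the same awk filter can be fed from `df -k | grep osd` to spot which mounts are approaching the watermark before Ceph starts refusing writes.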

Re: recovering from 95% full osd

2013-01-08 Thread Sage Weil
On Wed, 9 Jan 2013, Roman Hlynovskiy wrote: Thanks a lot Greg, that was the black magic command I was looking for ) I deleted some obsolete data and reached those figures: chef@cephgw:~$ ./clu.sh exec df -kh|grep osd /dev/mapper/vg00-osd 252G 153G 100G 61% /var/lib/ceph/osd/ceph-0

Re: recovering from 95% full osd

2013-01-08 Thread Gregory Farnum
On Tuesday, January 8, 2013 at 10:52 PM, Sage Weil wrote: On Wed, 9 Jan 2013, Roman Hlynovskiy wrote: Thanks a lot Greg, that was the black magic command I was looking for ) I deleted some obsolete data and reached those figures: chef@cephgw:~$ ./clu.sh exec df