Under Jewel 10.2.2 I have also had to delete PG directories to get very
full OSDs to restart. I first use "du -sh *" under the "current" directory
to find which PG directories are the fullest on the full OSD disk, and
pick one of the fullest. I then look at the PG map and verify the PG is
replicated on other OSDs.
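To make that first step concrete, here is a minimal self-contained sketch. The mock "current" tree and the PG directory names (1.a_head, 1.b_head, 2.0_head) are made up for the example; on a real node the du would run under /var/lib/ceph/osd/ceph-<id>/current instead:

```shell
# Build a throwaway mock of an OSD "current" directory so this runs anywhere;
# the PG directory names are illustrative, not from a real cluster.
mkdir -p current/1.a_head current/1.b_head current/2.0_head
dd if=/dev/zero of=current/1.a_head/obj bs=1024 count=8  2>/dev/null
dd if=/dev/zero of=current/1.b_head/obj bs=1024 count=64 2>/dev/null
# List PG directories by size, largest last, and pick one of the fullest:
(cd current && du -sk * | sort -n)
```

Before touching anything, "ceph pg map <pgid>" shows the up/acting sets for the PG, so you can confirm it lives on other OSDs too.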
I got into this situation several times, due to strange behavior in the
xfs filesystem. I initially ran on Debian, then reinstalled the nodes to
CentOS 7 (kernel 3.10.0-229.14.1.el7.x86_64, package
xfsprogs-3.2.1-6.el7.x86_64). At around 75-80% usage as reported by df,
the disk is already full.
> So I am wondering ``was`` is the recommended way to fix this issue for
> the cluster running Jewel release (10.2.2)?
So I am wondering ``what`` is the recommended way to fix this issue
for the cluster running Jewel release (10.2.2)?
typo?
On Mon, Aug 8, 2016 at 8:19 PM, Mykola Dvornik wrote:
@Shinobu
According to
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
"If you cannot start an OSD because it is full, you may delete some data by
deleting some placement group directories in the full OSD."
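For what it's worth, here is a dry-run sketch of that procedure. It only echoes each command instead of executing it, and the OSD id (12), PG id (1.b), and paths are illustrative assumptions, not taken from this thread:

```shell
# Dry run of the "delete a PG directory from a full OSD" procedure.
# OSD_ID and PG are illustrative assumptions; adjust for your cluster.
OSD_ID=12
PG=1.b
OSD_PATH="/var/lib/ceph/osd/ceph-${OSD_ID}"
echo "systemctl stop ceph-osd@${OSD_ID}"      # stop the full OSD
echo "ceph osd set noout"                     # avoid rebalancing while it is down
echo "ceph pg map ${PG}"                      # confirm the PG has replicas elsewhere
echo "rm -rf ${OSD_PATH}/current/${PG}_head"  # delete the chosen PG directory
echo "systemctl start ceph-osd@${OSD_ID}"     # bring the OSD back up
```

Once the OSD is up and the cluster has recovered, "ceph osd unset noout" restores normal rebalancing.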
On 8 August 2016 at 13:16, Shinobu Kinjo wrote:
On Mon, Aug 8, 2016 at 8:01 PM, Mykola Dvornik wrote:
> Dear ceph community,
>
> One of the OSDs in my cluster cannot start due to the
>
> ERROR: osd init failed: (28) No space left on device
>
> A while ago it was recommended to manually delete PGs on the OSD to let it
> start.
Who recommended that?
Dear ceph community,
One of the OSDs in my cluster cannot start due to the
*ERROR: osd init failed: (28) No space left on device*
A while ago it was recommended to manually delete PGs on the OSD to let it
start.
So I am wondering was is the recommended way to fix this issue for the
cluster running Jewel release (10.2.2)?