Hi Kevin, thanks for your answer.
How can I check the (re-)weights?
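I assume you mean the WEIGHT/REWEIGHT columns that something like the
following should print (writing the commands from memory, so the exact
columns may differ on our release):

  ceph osd tree       # CRUSH weight and REWEIGHT per OSD, plus the tree layout
  ceph osd df tree    # same tree plus per-OSD SIZE, AVAIL, %USE and PG count

Is that the right place to look?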

On Tue, Jan 8, 2019 at 10:36 AM Kevin Olbrich <k...@sv01.de> wrote:

> Looks like the same problem as mine:
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-January/032054.html
>
> The free space shown is the cluster total, but Ceph is limited by the OSD
> with the least free space (the fullest OSD).
> Please check your (re-)weights.
>
> Kevin
>
> On Tue, Jan 8, 2019 at 2:32 PM Rodrigo Embeita
> <rodr...@pagefreezer.com> wrote:
> >
> > Hi guys, I need your help.
> > I'm new to CephFS and we started using it as file storage.
> > Today we are getting "no space left on device" errors, but I can see that
> we have plenty of space on the filesystem.
> > Filesystem              Size  Used Avail Use% Mounted on
> > 192.168.51.8,192.168.51.6,192.168.51.118:6789:/pagefreezer/smhosts   73T   39T   35T  54% /mnt/cephfs
> >
> > We have 35 TB of free disk space. I've added 2 additional OSD disks with
> 7 TB each, but I'm still getting the error "No space left on device" every
> time I try to add a new file.
> > After adding the 2 additional OSD disks I can see that the load is being
> distributed across the cluster.
> > Please, I need your help.
> >
> > root@pf-us1-dfs1:/etc/ceph# ceph -s
> >  cluster:
> >    id:     609e9313-bdd3-449e-a23f-3db8382e71fb
> >    health: HEALTH_ERR
> >            2 backfillfull osd(s)
> >            1 full osd(s)
> >            7 pool(s) full
> >            197313040/508449063 objects misplaced (38.807%)
> >            Degraded data redundancy: 2/508449063 objects degraded (0.000%), 2 pgs degraded
> >            Degraded data redundancy (low space): 16 pgs backfill_toofull, 3 pgs recovery_toofull
> >
> >  services:
> >    mon: 3 daemons, quorum pf-us1-dfs2,pf-us1-dfs1,pf-us1-dfs3
> >    mgr: pf-us1-dfs3(active), standbys: pf-us1-dfs2
> >    mds: pagefs-2/2/2 up {0=pf-us1-dfs3=up:active,1=pf-us1-dfs1=up:active}, 1 up:standby
> >    osd: 10 osds: 10 up, 10 in; 189 remapped pgs
> >    rgw: 1 daemon active
> >
> >  data:
> >    pools:   7 pools, 416 pgs
> >    objects: 169.5 M objects, 3.6 TiB
> >    usage:   39 TiB used, 34 TiB / 73 TiB avail
> >    pgs:     2/508449063 objects degraded (0.000%)
> >             197313040/508449063 objects misplaced (38.807%)
> >             224 active+clean
> >             168 active+remapped+backfill_wait
> >             16  active+remapped+backfill_wait+backfill_toofull
> >             5   active+remapped+backfilling
> >             2   active+recovery_toofull+degraded
> >             1   active+recovery_toofull
> >
> >  io:
> >    recovery: 1.1 MiB/s, 31 objects/s
> >
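
Coming back to your suggestion about the (re-)weights: my rough understanding
of the next steps would be something like the sketch below. The numbers 120,
0.85 and 0.92 are only placeholder values I picked for illustration (not
tested on this cluster), and <osd-id> would be the fullest OSD from the
status quoted above:

  ceph osd df tree                      # find the OSDs with the highest %USE
  ceph osd reweight-by-utilization 120  # nudge data off overloaded OSDs, or
  ceph osd reweight <osd-id> 0.85       # lower REWEIGHT on a single full OSD
  ceph osd set-backfillfull-ratio 0.92  # temporarily, if our release supports it

Does that look right, or is there a safer way out of the backfill_toofull /
full state?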
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
