On Wed, Mar 20, 2019 at 6:06 PM Dan van der Ster <d...@vanderster.com> wrote:

> On Tue, Mar 19, 2019 at 9:43 AM Erwin Bogaard <erwin.boga...@gmail.com>
> wrote:
> >
> > Hi,
> >
> >
> >
> > For a number of applications we use, there is a lot of file
> > duplication. This wastes precious storage space, which I would like
> > to avoid.
> >
> > When using a local disk, I can use a hard link to let all duplicate
> > files point to the same inode (using "rdfind", for example).
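> >
> > To illustrate the approach, here is a minimal sketch in Python of
> > the hard-link dedup step (the hashing and paths are only
> > illustrative; real tools like rdfind also compare contents
> > byte-for-byte before linking):
> >
> >     import hashlib
> >     import os
> >
> >     def file_digest(path):
> >         # Hash the file contents to find duplicate candidates.
> >         h = hashlib.sha256()
> >         with open(path, 'rb') as f:
> >             for chunk in iter(lambda: f.read(1 << 20), b''):
> >                 h.update(chunk)
> >         return h.hexdigest()
> >
> >     def dedup(paths):
> >         # Replace each duplicate with a hard link to the first
> >         # copy seen, so all names share one inode and the data
> >         # is stored only once.
> >         seen = {}
> >         for path in paths:
> >             digest = file_digest(path)
> >             if digest in seen:
> >                 os.remove(path)
> >                 os.link(seen[digest], path)
> >             else:
> >                 seen[digest] = path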
> >
> >
> >
> > As there isn’t any deduplication in Ceph(FS), I’m wondering if I
> > can use hard links on CephFS in the same way as I do on ‘regular’
> > file systems like ext4 and xfs.
> >
> > 1. Is it advisable to use hard links on CephFS? (It isn’t mentioned
> > in the ‘best practices’:
> > http://docs.ceph.com/docs/master/cephfs/app-best-practices/)
> >
> > 2. Is there any performance (dis)advantage?
> >
> > 3. When using hard links, are there actual space savings, or is
> > some trickery happening?
> >
> > 4. Are there any issues (other than the regular hard link
> > ‘gotchas’) I need to keep in mind when combining hard links with
> > CephFS?
>
> The only issue we've seen is this: if you hard link b to a, then rm
> a, and never stat b, the inode is moved to the "stray" directory. By
> default there is a limit of one million stray entries, so if you
> accumulate files in this state, users will eventually be unable to rm
> any files until you stat the `b` files.
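>
> If you do hit that limit, a crude workaround is simply to stat every
> surviving link so the MDS can reintegrate the strays. A rough sketch
> (the mount point here is just an example):
>
>     import os
>
>     # Walk the CephFS tree and stat every entry; stat-ing the
>     # surviving hard link lets the MDS reintegrate the stray inode.
>     for root, dirs, files in os.walk('/mnt/cephfs'):
>         for name in files:
>             try:
>                 os.stat(os.path.join(root, name))
>             except OSError:
>                 pass  # entry may have vanished mid-walk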
>

Eek. Do you know if we have any tickets about that issue? It's easy to
see how that happens, but it definitely isn't a good user experience!
-Greg


>
> -- dan
>
>
> >
> >
> >
> > Thanks
> >
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
