You're correct: there's only one inode consumed per set of hardlinked
files, so it uses fewer inodes.
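
As a quick check, here is a minimal Python sketch (the file names and the
temporary directory are just placeholders) showing that two hardlinked
paths report the same inode number and a link count of 2:

    import os, tempfile

    # Create a file and a hard link to it in a temporary directory.
    d = tempfile.mkdtemp()
    a = os.path.join(d, "a")
    b = os.path.join(d, "b")
    with open(a, "w") as f:
        f.write("same contents\n")
    os.link(a, b)  # b is a hard link to a

    sa, sb = os.stat(a), os.stat(b)
    print(sa.st_ino == sb.st_ino)  # True: both names point at one inode
    print(sa.st_nlink)             # 2: one inode, two directory entries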

One caveat: if you have a filesystem that inlines small files into the
directory entry, hardlinking removes the inlining and instead causes more
seeks. Likewise, files that were grouped together on disk will now be
spread out a bit more. So if you don't have an SSD, this might slow down
the initial loading of files.

However, you don't have to keep two copies of the same file in the page
cache, so more cache is available (this only matters when two versions of
a package are used simultaneously).

So it's hard to say what the overall effect is, but the disk space saving
is definitely there. If you are worried about seek times, you might want to
turn on filesystem compression and run some sort of defragmentation if
available. /nix/store should compress quite well, which improves I/O
times for reading; the compression overhead should not be noticeable.

E.g. for a Nix store on btrfs, "noatime,autodefrag,compress=lzo,space_cache"
are nice mount options; add "discard,ssd" for SSD drives.
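
For illustration, a hypothetical /etc/fstab entry using those options
(the device name and the choice of a separate /nix partition are
assumptions, not a recommendation):

    # assumed device; adjust for your own setup
    /dev/sdb1  /nix  btrfs  noatime,autodefrag,compress=lzo,space_cache  0 0
    # on an SSD, append discard,ssd to the options list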

Wout.

On Thu Feb 12 2015 at 11:04:14 PM Vladimír Čunát <vcu...@gmail.com> wrote:

> On 02/12/2015 07:38 PM, John Wiegley wrote:
> > The reason why I don't like this optimization is that it doubles
> > i-node consumption on my main volume
>
> Oh, I thought each set of hardlinked files share the single i-node. Is
> it not so?
>
> Vladimir