Setup:
Fedora 19 with kernel 3.11.4-201.fc19.x86_64
btrfs-progs-0.20.rc1.20130917git194aa4a-1.fc19.x86_64

I am trying to do an initial btrfs send of a large (206GB, 1392221
paths) snapshot to a single file on a
different filesystem (a slow external USB drive with btrfs):

   btrfs send /tank/backups/snapshots/test1 > /extpool/filetest/test5.btr

What happens is that eventually I get the following error in /var/log/messages:

  VFS: file-max limit 202149 reached

I worked around this by:

echo 900000 > /proc/sys/fs/file-max

but it eventually hit that limit too, so I tried 1800000 and left it
running. This produced a 377GB test5.btr file before the system ran
out of memory and crashed (OOM killer etc.)
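
For reference, the full workaround sequence I applied was roughly the
following (the second, larger value was just a guess and still was not
enough; neither setting survives a reboot):

  # raise the system-wide open file handle limit
  echo 900000 > /proc/sys/fs/file-max
  # ...later, after that limit was also reached:
  echo 1800000 > /proc/sys/fs/file-max
  # equivalent using sysctl:
  # sysctl -w fs.file-max=1800000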

This is reproducible, but each btrfs send run takes 6+ hours (USB 3.0
card on order...)

The reason I think btrfs send is leaking open files is that if you
watch /proc/sys/fs/file-nr you can see the number of open files
steadily increasing, but if you kill the btrfs send process the open
file count drops back down. In fact, suspending the process also
reduces the open file count, and resuming it makes the count start
increasing again.
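
A rough sketch of how I watch this while a send is running (the
background job and the sleep interval are just for illustration):

  # start the send in the background and note its PID
  btrfs send /tank/backups/snapshots/test1 > /extpool/filetest/test5.btr &
  SEND_PID=$!

  # poll /proc/sys/fs/file-nr while the send is running; the first
  # field is the number of allocated file handles system-wide
  while kill -0 "$SEND_PID" 2>/dev/null; do
      cat /proc/sys/fs/file-nr
      sleep 10
  done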

I also found Robert Buhren reporting a very similar issue back in April 2013:
http://comments.gmane.org/gmane.comp.file-systems.btrfs/24795

If further information is needed, I'd be happy to help.

-- 
Phil Davis