Thank you for the response. I suspect this is primarily an algorithm
issue. The aufs directory read takes about 1 minute if the SSD
directory has 10,000 files. It takes only 30 seconds if the SSD
directory has 5000 files. It takes 1 second if the SSD directory has
100 files. Does aufs look at the
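The scaling above (1 second for 100 files, 30 seconds for 5,000, a minute for 10,000) looks worse than linear. A quick way to reproduce the measurement on any mount is a script along these lines; the directory path is just an example, and you would point it at an aufs branch to test aufs itself:

```shell
#!/bin/bash
# Sketch for reproducing the directory-read timings above.
# DIR is a hypothetical test location, not anything from the thread.
DIR=/tmp/readdir-test
mkdir -p "$DIR"
for n in 100 5000 10000; do
    rm -f "$DIR"/*                        # start each run from an empty directory
    seq 1 "$n" | (cd "$DIR" && xargs touch)
    printf '%s files: ' "$n"
    # ls -U reads entries in on-disk order and skips the sort, so the
    # measured time is dominated by readdir() itself
    time ls -U "$DIR" > /dev/null
done
```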
I built using module-assistant and the Ubuntu 8.04 version of aufs, as
packaged by Julian Andres Klode. It builds fine without the patch.
With the patch, compile fails on the BUILD_BUG_ON line. This is on
x86_64 architecture.
# m-a -k /usr/src/linux-headers-2.6.24-16-server -l 2.6.24-16-server
I grabbed Julian's aufs-source package from Debian Lenny, which is a
little newer. Using it, I was able to successfully patch and compile
and run. No change in performance. Maybe I made a mistake; I am pretty
sleepy.
# cat /sys/module/aufs/version
20080714
$ time ls -U | wc -l # cached, with
I don't see any memory allocation errors in the logs. I did have to
upgrade the kernel slightly yesterday to get all this to work, and it
appears to have splashed relatime mount options everywhere. Also,
while I am now using the 20080714 aufs, my copy of aufs-tools is
completely ancient 20070605.
These large files are created on your first writable branch
(/data4/.aufs.xino). Is there enough capacity?
Also, I don't see this file.
# ls -a /data4
. .. archive .wh..wh.aufs .wh..wh.plink
--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
Did you revert the patch I've sent which enlarges some aufs parameters?
Not yet. I will revert now.
Valerie Aurora recently wrote a set of articles at Linux Weekly News
on the various union-type filesystems. Are you thinking about
getting involved with union mounts? According to the articles, there
are still some serious problems with readdir().
http://lwn.net/Articles/326818
For 1.6 million files, assuming their names are all numbered
sequentially (1, 2, 3 ... 1599, 1600), about 180MB will be
necessary for them. In this case, you might want to try these values.
#define AuSize_DEBLK (4 * 1024 * 1024)
#define AuSize_NHASH (16 * 1024)
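As a sanity check on those numbers (my own back-of-envelope arithmetic, not anything verified against the aufs source): 180MB spread over 1.6 million entries comes to roughly 118 bytes per entry, and if AuSize_NHASH is the number of hash buckets for the virtual directory, 1.6 million names over 16K buckets still averages close to 100 names per chain, which would explain a readdir() that slows down sharply as the file count grows:

```shell
#!/bin/bash
# Back-of-envelope only; the figures assume the 180MB estimate and the
# AuSize_NHASH value quoted above, not confirmed aufs internals.
files=1600000
total=$((180 * 1024 * 1024))          # the 180MB estimate, in bytes
nhash=$((16 * 1024))                  # proposed AuSize_NHASH
echo "bytes per entry : $((total / files))"   # prints 117
echo "names per bucket: $((files / nhash))"   # prints 97
```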
Filenames are like
What do you think?
I am not qualified to comment on the technical design.
However, I suggest understanding the problem better before doing
serious work. Let me know if I can be of help.
Also, I don't know how important this use case is. I suspect that very
few people work with this many files.
Looks good to me. You may want to explain in the documentation why
user memory is better than kernel memory; not everyone knows about
this. Again, we are appreciative, but we will be slow and careful to test;
I haven't compiled my own kernel since Linux 1.1.54 on Slackware.
-Jeff
On Thu, Aug 20, 2009
I spent this weekend testing. Here is my report.
I applied the kernel patch "vmscan: do not unconditionally treat zones
that fail zone_reclaim() as full". Next, I activated
AUFS_CONFIG_HINOTIFY and made sure to only move data with aufs mounted
with udba=inotify. Finally, I set AuSize_DEBLK to (4