Thanks,

I will take all your comments, below, on board - particularly the information about the misleading error message!

My ISP assures me there is no limit.

On the speed issue, my observation is that serving simple HTML files from a directory makes for a faster website than any web content management system delivers.

Marghanita


Peter Miller wrote:
On Tue, 2012-04-03 at 13:57 +1000, Marghanita da Cruz wrote:
pe...@chubb.wattle.id.au wrote:
<snip>
Depends on the underlying filesystem.

depends a lot on the underlying file system

On XFS it's as many filenames as you can fit into an 8 exabyte file!

Thanks - that sounds a lot.

You may want to think about directory structure here.
Name searches (for older file systems, at least) are linear, so if you
have 1000000 files in a directory, it will take 1000000 times as long to
discover that the file of interest isn't there as it would in a
directory with one file in it.

One way of limiting this O(n) search time is to introduce directory
levels (aa/bb/cc instead of aabbcc); another is to use a file system
with O(log n) search times, like reiserfs.  And, just maybe, what you
actually need is a database, not a flat directory structure, which may
give O(1) search times depending on the database engine.
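
For example, a two-level fan-out keeps every directory small.  A rough
Python sketch (shard_path() is just an illustrative helper; it assumes
the names are long enough and reasonably evenly distributed, so hash
the name first if they aren't):

    import os

    def shard_path(root, name, levels=2, width=2):
        # Map "aabbcc.html" to root/aa/bb/aabbcc.html so that no single
        # directory grows without bound.
        parts = [name[i * width:(i + 1) * width] for i in range(levels)]
        return os.path.join(root, *(parts + [name]))

    def store(root, name, data):
        path = shard_path(root, name)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as handle:
            handle.write(data)

With two levels of two characters each, a million files spread across
the buckets works out to only a handful of entries per directory, so
even a linear name search stays cheap.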

What constitutes a filename?

the bit between the /slashes/

You can't create a new link in that directory.

open(2) will report ENOSPC, because the directory data can't be made any
bigger when the disk is full.

Some file systems also have a limit on the total number of inodes
(files) on the disk, independent of the directory they are in.  You will
also get an ENOSPC error for this one, but df(1) will stubbornly insist
there are data blocks available... because data blocks aren't what ran out.
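
If it isn't obvious which of the two you've hit, statvfs() reports both
free blocks and free inodes (df -i shows the inode side from the
shell).  A quick Python sketch; the path is just an example:

    import os

    def whats_full(path):
        # ENOSPC doesn't say whether you ran out of data blocks or
        # inodes; statvfs() lets you tell the two apart.
        st = os.statvfs(path)
        if st.f_bavail == 0:
            return "out of data blocks"
        # f_files == 0 usually means the file system allocates inodes
        # dynamically, so only treat f_favail == 0 as exhaustion when
        # an inode total is actually reported.
        if st.f_files and st.f_favail == 0:
            return "out of inodes (df -i shows it; plain df won't)"
        return "%d blocks and %d inodes still free" % (st.f_bavail, st.f_favail)

    print(whats_full("/var/www"))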




--
Marghanita da Cruz
Ramin Communications (Sydney)
Website: http://ramin.com.au
Phone:(+612) 0414-869202


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
