On 2013-02-20, Keith <ke...@scott-land.net> wrote:
>>
> Hi, thanks for the info. Yesterday I did a backup, format, restore of 
> the /var/www partition, although to be honest I wasn't really sure what I 
> was doing with regards to the newfs command. I tried running "newfs 
> -i" with different values and settled on "newfs -i 1 /var/www" as it 
> seemed at the time to make the most inodes, and that was just 
> based on how much output was generated while newfs was running.

Those aren't inodes, they're superblock backups; the clue is in the
text printed by newfs.
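For what it's worth, newfs(8)'s -i flag is *bytes per inode*, so a
smaller value means more inodes, down to a floor of roughly one inode
per fragment. A rough sketch of the arithmetic (the partition size and
-i value below are illustrative assumptions, not taken from your
system):

```shell
# Approximate inode count = partition bytes / bytes-per-inode.
# 5046586572 is ~4.7G (assumption); 8192 is a hypothetical -i value.
bytes=5046586572
bpi=8192
echo $((bytes / bpi))   # roughly how many inodes newfs would create
```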

> # df -hi
> Filesystem     Size    Used   Avail Capacity iused   ifree  %iused Mounted on
> /dev/sd0l      4.7G    1.2G    3.3G    26%  449170 2206316    17% /var/www
>
> The above "df -hi" output was done today after I wiped the app and 
> started it again from scratch. It had been running for about 12 hours 
> and there were about 450,000 files. How many files do you think I'll be 
> able to store with this number of inodes?

I would think you'd be able to store 2206316 more files purely based on
the number of free inodes, but in practice you'd also be limited by the
available space, since every non-empty file consumes at least one
fragment.
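To gauge which limit bites first, divide the space still available by
the free inodes; that's the average file size at which both run out
together. A back-of-envelope sketch (the byte figure is my assumption,
3.3 * 1024^3, converted from your df output):

```shell
# Average per-file byte budget before space, rather than inodes,
# becomes the limit.
avail=3543348019   # ~3.3G available (assumption)
ifree=2206316      # free inodes from the df -hi output above
echo $((avail / ifree))
```

So if your files average under roughly 1.6KB you'd exhaust inodes
first; above that, space.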

$ df -hi /tmp; touch /tmp/bleh; df -hi /tmp | tail -1
Filesystem     Size    Used   Avail Capacity iused   ifree  %iused  Mounted on
mfs:21643      991M    110M    831M    12%   16175  253967     6%   /tmp
mfs:21643      991M    110M    831M    12%   16176  253966     6%   /tmp

><ll>: do you want 20GB of files in your db?
><forkless>: i know i dont
..
><Safra>: Then you will get "why is my nzbfiles table corrupt"?

There is absolutely no reason for a database to corrupt itself just by
having 20GB of data in it.

It's at least as likely that a filesystem would corrupt itself,
and databases often have better recovery mechanisms than many types
of filesystem.

Please at least tell me that these files are split across a number
of directories and not all lumped together in one....
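If they aren't split already, a hypothetical two-level layout keeps any
one directory from accumulating hundreds of thousands of entries, which
can make lookups and plain ls painfully slow. The path and filename
below are made up for illustration:

```shell
# Hypothetical bucketing scheme: place each file in a subdirectory
# named after its first two characters. A hash prefix would spread
# names more evenly, but this shows the idea.
name="example.nzb"                       # made-up filename
sub=$(printf '%s' "$name" | cut -c1-2)
echo "/var/www/files/$sub/$name"         # -> /var/www/files/ex/example.nzb
```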
