Frank Steinmetzger wrote:
> On Wed, Apr 19, 2023 at 06:32:45PM -0500, Dale wrote:
>> Frank Steinmetzger wrote:
>>> <<<SNIP>>>
>>>
>>> When formatting file systems, I usually lower the number of inodes from the 
>>> default value to gain storage space. The default is one inode per 16 kB of 
>>> FS size, which gives you 60 million inodes per TB. In practice, even one 
>>> million per TB would be overkill in a use case like Dale’s media storage.¹ 
>>> Removing 59 million inodes frees 59 million × 256 bytes ≈ 15 GB of net 
>>> space per TB, not counting extra control metadata and ext4 redundancies.
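
A minimal sketch of setting that ratio at format time, assuming mkfs.ext4 and
a placeholder device /dev/sdX1 (-i takes the bytes-per-inode ratio, which is
fixed once the filesystem is created):

`mkfs.ext4 -i 1048576 /dev/sdX1`

One inode per 1048576 bytes works out to roughly one million inodes per TB.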
>> If I ever rearrange my
>> drives again and can change the file system, I may reduce the inodes at
>> least on the ones I only have large files on.  Still tho, given I use
>> LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
>> assume it increases the inodes as well.
> I remember from yesterday that the manpage says that inodes are added 
> according to the bytes-per-inode value.
>
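
A quick way to check how many inodes a mounted filesystem has and how many
are in use is `df -i`; `tune2fs -l` reads the same numbers from the
superblock. The paths below are just placeholders:

`df -i /home/dale/Desktop/Crypt`
`tune2fs -l /dev/mapper/vg-lv | grep -i inode`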
>> I wonder.  Is there a way to find out the smallest size file in a
>> directory or sub directory, largest files, then maybe a average file
>> size???
> The 20 smallest:
> `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
>
> The 20 largest: either use tail instead of head or reverse sorting with -r.
> You can also first pipe the output of stat into a file so you can sort and 
> analyse the list more efficiently, including calculating averages.
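
For the average, a sketch assuming GNU stat and awk (the same pipeline, with
awk summing the sizes):

`find . -type f -print0 | xargs -0 stat -c '%s' | awk '{ s += $1; n++ } END { if (n) printf "%d files, avg %.0f bytes\n", n, s/n }'`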

When I first ran this in / itself, it occurred to me that it
doesn't specify which directory to search.  I thought changing to the
directory I want it to look at would work, but I get this: 


root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
-bash: 2: command not found
root@fireball /home/dale/Desktop/Crypt #


It works if I'm in the / directory but not when I've cd'd to the
directory I want to know about.  I don't see a spot in the command to
change it.  Ideas?
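
For what it's worth, GNU find defaults to the current directory when no path
is given, but it also accepts an explicit starting directory, which avoids
depending on where you are (the path below is just the one from the prompt
above):

`find /home/dale/Desktop/Crypt -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`

Also note that the command has to be typed without the surrounding backticks:
in bash, backticks are command substitution, so the shell tries to execute
the pipeline's output, which is where the "-bash: 2: command not found"
comes from.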

>> I thought about du but given the number of files I have here,
>> it would be a really HUGE list of files.  Could take hours or more too. 
> I use a “cache” of text files with file listings of all my external drives. 
> This allows me to glance over my entire data storage without having to plug 
> in any drive. It uses tree underneath to get the list:
>
> `tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`
>
> This gives me a list of all directories and files, with their full path, 
> date and size information and accumulated directory size in a concise 
> format. Add -pug to also include permissions.
>
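
As a usage sketch with placeholder paths, redirecting the listing into a text
file builds one cache entry per drive:

`tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T" /mnt/external > ~/drive-lists/external.txt`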

I'll save this for later use.  ;-)

Dale

:-)  :-) 
