Jason Hughes wrote:
> Evren Yurtesen wrote:
>   
>> Jason Hughes wrote:
>>     
>>> That drive should be more than adequate.  Mine is a 5400rpm 2mb 
>>> buffer clunker.  Works fine.
>>> Are you running anything else on the backup server, besides 
>>> BackupPC?  What OS?  What filesystem?  How many files total?
>>>       
>> FreeBSD, UFS2+softupdates, noatime.
>>
>> There are 4 hosts that have been backed up, for a total of:
>>
>>     * 16 full backups of total size 72.16GB (prior to pooling and 
>> compression),
>>     * 24 incr backups of total size 13.45GB (prior to pooling and 
>> compression).
>>
>> # Pool is 17.08GB comprising 760528 files and 4369 directories (as of 
>> 3/27 05:54),
>> # Pool hashing gives 38 repeated files with longest chain 6,
>> # Nightly cleanup removed 10725 files of size 0.40GB (around 3/27 05:54),
>> # Pool file system was recently at 10% (3/27 07:16), today's max is 
>> 10% (3/27 01:00) and yesterday's max was 10%.
>>
>>  Host    User   #Full   Full Age   Full Size   Speed    #Incr   Incr Age   Last Backup   State   Last attempt
>>                         (days)     (GB)        (MB/s)           (days)     (days)
>>  host1          4       5.4        3.88        0.22     6       0.4        0.4           idle    idle
>>  host2          4       5.4        2.10        0.06     6       0.4        0.4           idle    idle
>>  host3          4       5.4        7.57        0.14     6       0.4        0.4           idle    idle
>>  host4          4       5.4        5.56        0.10     6       0.4        0.4           idle    idle
>>
>>     
>
> Hmm.  This is a tiny backup setup, even smaller than mine.  However, it 
> appears that the average size of your files is only about 22KB, which is 
> quite small.  For comparison's sake, this is from my own server:
>     Pool is 172.91GB comprising 217311 files and 4369 directories (as of 
> 3/26 01:08),
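> (Working that out from the numbers above: 17.08GB / 760528 files is roughly 
> 22KB per file in your pool, versus 172.91GB / 217311 files, roughly 800KB 
> per file, here.)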
>
> The fact that you have tons of little files will probably give 
> significantly higher overhead when doing file-oriented work, simply 
> because the inode must be fetched for each file before seeking to the 
> file itself.  If we assume no files are shared between hosts (very 
> conservative) and an 8ms access time, you have 190132 files per host; 
> at two seeks per file, neglecting actual I/O time, that comes to about 
> 50 minutes just to seek them all.  If you have a high degree of sharing, 
> it can be up to 4x worse.  Realize that the same number of seeks must be 
> made on the server as well as the client.
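> (Spelling out the arithmetic: 760528 pool files / 4 hosts = 190132 files 
> per host, and 190132 files x 2 seeks x 8ms = ~3040 seconds, or about 51 
> minutes of pure seek time before any data is actually read.)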
>
> Are you sure you need to be backing up everything that you're putting 
> across the network?  Maybe you could exclude some useless directories, 
> or temp files and logs that haven't been cleaned up?  Perhaps you can 
> archive big chunks of it with a cron job?
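> As a rough sketch (these paths are only examples; substitute whatever is 
> actually junk on your hosts), an exclude list in config.pl or a per-host 
> config might look like:
>
>     # Excludes are listed per share; '*' is meant to cover every share
>     # here, or you can key the hash by explicit share names instead.
>     $Conf{BackupFilesExclude} = {
>         '*' => [
>             '/tmp',
>             '/var/tmp',
>             '/var/cache',   # package/browser caches
>             '/var/log',     # only if logs are archived some other way
>         ],
>     };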
>
> I'd start looking for ways to cut down the number of files, because the 
> overhead of per-file accesses is probably eating you alive.  I'm also 
> no expert on UFS2 or FreeBSD, so it may be worthwhile to research its 
> behavior with hard links and small files.
>
> JH
>
>   
For what it's worth, I have a server that backs up 8.6 million files 
averaging 10k in size from one host.  It takes a full 10 hours for a 
full backup via tar over NFS (2.40MB/s for 87GB).  CPU usage is low, 
around 10-20%; however, IOwait is a pretty steady 25%.
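That works out to roughly 240 files per second, or about 4ms of wall time 
per file, so the per-file overhead rather than the raw transfer rate seems 
to be what dominates that 10-hour window.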

Server info:
HP DL380 G4
Debian sarge
dual 3.2GHz Xeon processors
2GB RAM
5x 10k RPM SCSI disks, RAID5
128MB battery backed cache (50/50 r/w)
ext3 filesystems

brien
