Evren Yurtesen wrote:
David Rees wrote:
On 3/27/07, Les Mikesell <[EMAIL PROTECTED]> wrote:
Evren Yurtesen wrote:

What is wall clock time for a run and is it
reasonable for having to read through both the client and server copies?

I am using rsync but the problem is that it still has to go through a
lot of hard links to figure out if files should be backed up or not.
Evren, I didn't see that you mentioned a wall-clock time for your
backups. I want to know how many files are in a single backup, how
much data is in that backup, and how long it takes to perform that
backup.

I sent the status of the backups to the mailing list earlier today:

# Pool is 17.08GB comprising 760528 files and 4369 directories (as of 3/27 05:54),
# Pool hashing gives 38 repeated files with longest chain 6,
# Nightly cleanup removed 10725 files of size 0.40GB (around 3/27 05:54),
# Pool file system was recently at 10% (3/27 22:40), today's max is 10% (3/27 01:00) and yesterday's max was 10%.


     * 16 full backups of total size 72.16GB (prior to pooling and compression),
     * 24 incr backups of total size 13.60GB (prior to pooling and compression).


This is from 1 machine.

                        Totals           Existing Files     New Files
Backup#  Type  #Files  Size/MB  MB/sec  #Files  Size/MB  #Files  Size/MB
112      full  149290  1957.8   0.10    149251  1937.9   4601    20.7
224      full  151701  2022.8   0.09    151655  2004.7   100     18.1
238      full  152170  2099.5   0.06    152131  2081.6   115     17.9
244      incr  214     48.9     0.00    165     22.9     78      26.0
245      full  152228  2095.2   0.06    152177  2076.9   108     18.3
246      incr  118     17.3     0.00    76      0.2      69      17.1
247      incr  159     21.4     0.00    111     3.1      75      18.4
248      incr  181     22.1     0.00    132     2.5      79      19.7
249      incr  186     24.0     0.00    146     7.6      54      16.4
250      incr  206     25.5     0.00    159     6.7      70      18.8
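
For what it's worth, those numbers already imply the wall-clock time being asked about, assuming the Totals Size/MB and MB/sec columns are simply total size divided by elapsed time (a rough back-of-the-envelope check, nothing more):

# Implied wall-clock time per full backup, taken straight from the
# Totals columns of the table above (Size/MB divided by MB/sec).
fulls = {112: (1957.8, 0.10), 224: (2022.8, 0.09),
         238: (2099.5, 0.06), 245: (2095.2, 0.06)}

for num, (size_mb, rate) in sorted(fulls.items()):
    hours = size_mb / rate / 3600.0
    print("full #%d: %.1f MB at %.2f MB/s -> about %.1f hours" %
          (num, size_mb, rate, hours))

If those columns mean what I think they mean, backup 112 took about 5.4 hours and the 0.06 MB/s fulls closer to 9.7 hours, which is the kind of wall-clock figure being asked about above.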



From the perspective of the backup directories, it doesn't matter
whether or not there are additional links to a file. It just looks at
the directory entry to find the file it wants. It may matter that the
inodes and file contents end up splattered all over the place because
they were written at different times, though.
Yep, Les is right here. Unless BSD handles hard links in some crazy manner.

I don't know if the problem is hard links. This is not a FreeBSD or Linux
problem; it exists on both. But the fact that people using very fast 5-disk
RAID 5 setups are seeing 2 MB/sec transfer rates means that BackupPC is
very, very inefficient.
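
One way to tell whether it's the tree traversal (hard links, cold-cache seeks) or the data transfer that's slow would be to time a pure metadata walk of the BackupPC data directory. A quick sketch, assuming Python is available and using /var/lib/backuppc/pc only as an example path (adjust to your TopDir):

import os, sys, time

# Walk the tree and lstat() every entry, but read no file data, so this
# measures only directory/inode traversal speed.
root = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/backuppc/pc"
start = time.time()
count = 0
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        try:
            os.lstat(os.path.join(dirpath, name))
            count += 1
        except OSError:
            pass
elapsed = time.time() - start
print("%d files stat()ed in %.1f s (%.0f files/sec)"
      % (count, elapsed, count / elapsed if elapsed else 0))

Running it twice, cold cache and then warm cache, should show how much of the time is pure seek overhead from the inodes being scattered, which is Les's point above.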
Well, I'd like to point out that I am using tar over NFS over a bonded 100 Mbit (x2) link, and NFS is terrible for small-file access. However, we have an Xserve that I back up with xtar over ssh, and I get 5 MB/s for that (which is the best I've seen). My other rsync clients get up to 4.2 MB/s. So I think the 25% [EMAIL PROTECTED]/s indicates I should never expect more than 10 MB/s.

Considering everything BackupPC has to do with each file as it receives it, without writing it to disk first, I don't think we can expect it to be very fast. However, it is absurdly efficient with space, and you can't have everything :-). Complexity, speed, efficiency; pick any two?
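
To make that concrete, the per-file work in a content-pooled store looks roughly like the sketch below. This is only an illustration of the general idea, not BackupPC's actual code or hashing scheme, and the paths are made up:

import hashlib, os, zlib

POOL = "/tmp/example-pool"        # made-up paths, illustration only
BACKUP = "/tmp/example-backup"
for d in (POOL, BACKUP):
    os.makedirs(d, exist_ok=True)

def store(src_path, name):
    # Hash the incoming data, pool it if it's new content, then record
    # the backup entry as a hard link into the pool.
    with open(src_path, "rb") as f:
        data = f.read()
    digest = hashlib.md5(data).hexdigest()
    pooled = os.path.join(POOL, digest)
    if not os.path.exists(pooled):                 # unseen content: compress and store
        with open(pooled, "wb") as out:
            out.write(zlib.compress(data))
    os.link(pooled, os.path.join(BACKUP, name))    # dedup via hard link

Even in this toy version every file costs a full read, a hash, a pool lookup and a link, so with ~150,000 mostly-unchanged files per full backup the per-file overhead, not raw disk bandwidth, is what sets the MB/sec figure.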

brien

