On May 2, 2007, at 9:02 AM, Holger Parplies wrote:

> resource forks. You know more about Macs than I do (what they look
> like, for instance ;-). The list archives know even more.

That's what I thought too; however, I see the ._* resource forks on the
Linux filesystem. The copy of tar in 10.4.x should handle resource
forks correctly. Clearly, though, this is a general file-copy question
and not a BackupPC-specific issue. Given the large number of OS X
admins on this list, I was hoping someone might know.

Maybe the folder icons are stored in the parent folder's .DS_Store or
something. I'll have to do some research. :)
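
For what it's worth, here's the sanity check I have in mind (file names
are made-up examples; 10.4 exposes a file's resource fork at the
..namedfork/rsrc path):

    # On the Mac client: does the file have a resource fork at all?
    ls -l some-file/..namedfork/rsrc

    # On the Linux side after the backup: the fork should show up as
    # an AppleDouble "._" header file next to the data fork.
    ls -l ._some-file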

> Unchanged files will have a copy both in pool and cpool (at least
> after the next full backup). More precisely: files in cpool will no
> longer be considered for pooling with newly received files. A copy in
> pool will be used or created as necessary.
> The copy in cpool will eventually expire, but until then you
> potentially need space for both.

Interesting information. Thank you.
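
For my own notes, a quick way to watch that duplication should just be
disk usage on the two pools (path assumes a default-ish install; yours
may differ):

    du -sh /var/lib/backuppc/pool /var/lib/backuppc/cpool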

> Due to the implementation of pooling, your second full backup may be
> much faster than the first: on the first backup, all files need to be
> compressed in order to create the pool. On the second backup (with
> tar), all files will be re-transferred, but then the pool files will
> be decompressed to do the match rather than the transferred files
> compressed. Decompression is significantly faster. Only new files
> added to the pool will need to be compressed (meaning new or changed
> files not matching a file already in the pool).

This sounds like it just skewed the results of my ssh -c blowfish
full-backup test. I didn't see an obvious way to delete a full backup,
so I just ran another one. I didn't change compression; all I did was
add the -c blowfish option to ssh.
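
For anyone following along, a quick way to check that the cipher is
accepted before wiring it into the ssh portion of the client command
($Conf{TarClientCmd} in my tar-based setup) is something like this
(hostname is just an example):

    ssh -c blowfish -l root macclient true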

My backup speed over GigE this time was 5.42 MB/s, compared with
3.10 MB/s on my previous full backup. While -c blowfish could have
made a difference, then again, so could the pool!

> BackupPC does some magic to ensure that no intermediate copies of
> files already in the pool need to be stored. This may come at a cost
> if you have long chains in the pool (e.g. many files of which the
> first 1 MB is identical but the rest is not).
> Your server status page tells you "Pool hashing gives N repeated
> files with longest chain M" - are the values of N and M especially
> high?

"Pool hashing gives 45 repeated files with the longest chain of 10."  
I have no idea yet if these are considered normal. Figuring out what  
pool-chains are is on a post-it note here.
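
If I understand the naming right, chained files show up in the pool as
hash-named files with _N suffixes, so something like this should let
me eyeball them (path is a guess at my layout):

    find /var/lib/backuppc/cpool -name '*_*' | head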

Thanks for the suggestions and explanations. Before doing any other
tests, I think I need to figure out how to remove backups and clean
out the pools.
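
From what I've pieced together so far (untested, so grain of salt),
the manual removal looks roughly like this, with "macclient" and
backup #3 as made-up examples:

    # drop the backup's tree under the pc directory
    rm -rf /var/lib/backuppc/pc/macclient/3

    # then delete backup #3's line from the per-host backups file:
    #   /var/lib/backuppc/pc/macclient/backups
    # and let the regular BackupPC_nightly run expire the pool files
    # that are no longer referenced.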

--
Scott <[EMAIL PROTECTED]>
AIM: BlueCame1
