Dear all,

How scalable is BackupPC?
Where are the limits, and what can produce performance bottlenecks?

I've heard that hardlinks can become a problem once there are millions of
them. Is that true?
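
To illustrate the scale I mean, here is a minimal Python sketch that walks
a pool directory and counts files and their hardlink counts. The pool path
is only an assumption for this sketch; adjust it to your own setup:

  import os

  POOL = "/var/lib/backuppc/cpool"  # assumed pool location; adjust as needed

  files = 0
  links = 0
  for dirpath, dirnames, filenames in os.walk(POOL):
      for name in filenames:
          st = os.lstat(os.path.join(dirpath, name))
          files += 1
          links += st.st_nlink  # hardlink count of this file's inode

  print(f"{files} pool files, {links} hardlinks in total")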

I can also imagine that BackupPC has a lot of work to do to find identical
files once there are too many of them. Is that a real concern?
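
For context, my understanding is that pooled backup systems avoid a full
pairwise comparison by looking each file up via a content hash. The
following is only a rough Python sketch of that idea, not BackupPC's
actual code (the pool path and hashing details are assumptions, and the
real implementation also handles hash collisions); the point is that the
lookup cost per file stays roughly constant rather than growing with the
pool size:

  import hashlib, os, shutil

  POOL_DIR = "/tmp/pool-demo"  # hypothetical pool location for this sketch

  def pool_file(src_path):
      """Store src_path in the pool, deduplicated by content hash."""
      h = hashlib.md5()
      with open(src_path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              h.update(chunk)
      digest = h.hexdigest()
      pool_path = os.path.join(POOL_DIR, digest[:2], digest)
      os.makedirs(os.path.dirname(pool_path), exist_ok=True)
      if os.path.exists(pool_path):
          return pool_path           # content already pooled: reuse it
      shutil.copy2(src_path, pool_path)  # first occurrence: store once
      return pool_path                   # backups would hardlink to this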

I believe the hard disk bottleneck isn't throughput but seek time.
Does anybody have experience with that?
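
One way to check this would be to compare sequential against random reads
on the backup disk. A minimal sketch, assuming a test file much larger
than RAM so the page cache doesn't hide the seeks:

  import os, random, time

  PATH = "testfile.bin"  # assumed: a large file on the disk under test
  BLOCK = 4096
  READS = 2000

  size = os.path.getsize(PATH)
  fd = os.open(PATH, os.O_RDONLY)

  # Sequential reads: limited by throughput
  t0 = time.monotonic()
  for i in range(READS):
      os.pread(fd, BLOCK, i * BLOCK)
  seq = time.monotonic() - t0

  # Random reads: limited by seek time on spinning disks
  t0 = time.monotonic()
  for _ in range(READS):
      os.pread(fd, BLOCK, random.randrange(0, size - BLOCK))
  rnd = time.monotonic() - t0

  os.close(fd)
  print(f"sequential: {seq:.2f}s, random: {rnd:.2f}s")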

Would anyone share their experience regarding CPU usage in relation to
bandwidth usage, particularly when using rsync?


I have a 2x1 GHz server with 2 GB RAM and 4x500 GB SATA disks in software
RAID 5.
Most of the time I see:
 a disk throughput of 2-10 MB/s
 a CPU usage of 20-50%
 a network (LAN) throughput of 5-10 Mbit/s, even when 4 to 7 clients
   deliver their data via rsync.

What's your experience?

br
Matthias
-- 
Don't Panic

