Re: [BackupPC-users] Traffic on this mailing list

2009-09-04 Thread David Rees
On Fri, Sep 4, 2009 at 7:30 AM, Jim Leonard wrote:
> I put all BackupPC messages into their own folder with an automatic
> rule; every 1-2 days, I check out that folder.  Main inbox remains
> uncluttered.  I also use a threaded newsreader so it's really easy to
> follow (or delete) entire conversations.

That's what I do with Gmail.

> Maybe you need a better email program?

Or simply browse one of the web archives that are available when it's
convenient.

-Dave



Re: [BackupPC-users] [OT] Linux "load" values (was: Re: Hardware considerations for building dedicated backuppc server)

2009-07-08 Thread David Rees
On Tue, Jul 7, 2009 at 4:38 PM, Holger Parplies wrote:
> No, my point was, the "load average" is an attempt to fit the state of a
> system into one single number (which, as we've agreed, is only good for
> getting a quick impression, nothing more).

Exactly.  On Linux systems (I don't have enough high-performance
experience with other systems to say conclusively), the load average
simply gives you a rough indication of the number of processes that
are in the run queue.

If you have one process running using 100% CPU, your load average will be 1.

If you have one process waiting 100% on IO, your load average will also be 1.

To get a full picture you can run top (if you have multiple CPUs, make
sure you expand the display to show each one) or vmstat (I typically
use `vmstat 1` to watch the data at 1-second intervals).

vmstat has the benefit of splitting out the number of processes
running on CPU and waiting on disk (see the first two columns, r and
b) as well as showing you overall IO load and the type of CPU
utilization.

Here's a sample from vmstat of my desktop/server system running a
local and remote backup at the same time:

procs -----------memory---------- ---swap-- -----io---- --system-- ------cpu-----
 r  b   swpd   free   buff   cache   si   so    bi    bo   in   cs us sy id wa st
 2  2      0  35592 200332 1459116    0    0 73100   392 2884 2375 65 12  0 24  0
 3  2      0  21508 200648 1473972    0    0 55036     0 2523 2641 28  6  4 63  0
 2  1      0  24940 201092 1471664    0    0 81664     0 3085 2949 33  9  1 58  0
 1  2      0  25512 201376 1472784    0    0 81216  1064 3094 2883 28  8  0 65  0
 3  1      0  23612 201732 1475760    0    0 81256     0 3039 2939 28  8  1 64  0

Here we can see a pretty good mix of CPU and IO wait - improving the
capacity of either would reduce backup times.

-Dave



Re: [BackupPC-users] Child Exited Prematurely

2008-12-05 Thread David Rees
On Fri, Dec 5, 2008 at 8:47 AM, Bowie Bailey <[EMAIL PROTECTED]> wrote:
> David Rees wrote:
>> I've got one server whose only job is to back itself up, and it gives
>> this error. It started out failing only occasionally, but now I can't
>> complete a full backup without it bailing out with that error.
>>
>> I've ruled out ssh by invoking rsync directly, and when that failed
>> with the exact same error, I tried tar, which also died with a similar
>> message.  Running the backup from the command line didn't reveal any
>> additional interesting error messages. It's like something is breaking
>> pipes on the machine after a random amount of time.
>>
>> I am very confused by this one but have seen other similar
>> reports.
>
> When I had this problem, it was always a particular file that was
> causing it.  Take a look at the log and find the last file referenced
> before it stopped.  I fixed the problem by either excluding the file
> from the backup or, in one case, creating a separate backup for the
> directory containing the file.

That's just it: it doesn't stop at the same file - it stops at a
different spot on each try, which is why I'm baffled...

It doesn't seem like it's necessarily a BackupPC or rsync issue, it
seems more like a kernel issue to me (Fedora 9, latest kernel).

-Dave



Re: [BackupPC-users] Child Exited Prematurely

2008-12-04 Thread David Rees
On Thu, Dec 4, 2008 at 7:24 PM, Nick Smith <[EMAIL PROTECTED]> wrote:
>
> Did you ever get this resolved? I'm having the same problem; now all of
> my backups are failing with the same errors you are getting.  I'm using
> rsync 2.6.9, protocol version 29.  Ubuntu doesn't seem to have a newer
> version available yet.  The hosts are all on fiber or fast cable
> internet that is reliable.  Different firewalls at each location.  I
> could never find any info on whether pfsense or m0n0wall close inactive
> connections.

I've got one server whose only job is to back itself up, and it gives
this error. It started out failing only occasionally, but now I can't
complete a full backup without it bailing out with that error.

I've ruled out ssh by invoking rsync directly, and when that failed
with the exact same error, I tried tar, which also died with a similar
message.  Running the backup from the command line didn't reveal any
additional interesting error messages. It's like something is breaking
pipes on the machine after a random amount of time.

I am very confused by this one but have seen other similar reports.

-Dave



Re: [BackupPC-users] sudo error

2008-12-03 Thread David Rees
On Wed, Dec 3, 2008 at 9:01 AM, WebIntellects Technical Support
<[EMAIL PROTECTED]> wrote:
> When trying to backup a server for the first time we are receiving the
> following error, has anybody seen this and know the fix:
>
> Fatal error (bad version): sudo: symbol lookup error: sudo: undefined symbol: 
> audit_log_user_command

Check to make sure that audit-libs is up to date and that ldconfig has
been run recently. I have audit-libs-1.6.5-9.el5 on one of my CentOS 5
servers.

> 
> Josh Elson
> WebIntellects Technical Support
> [EMAIL PROTECTED]
> 800-994-6364
> 760-477-1100
> http://cm.controlmaestro.com
> 

Heh, your office is right next to mine - I work for eBet just down the
hill on Grand. If you step outside and look down, you can probably see
a big SAM container in our parking lot. :-)

-Dave



Re: [BackupPC-users] Native Window version of rsync

2008-11-19 Thread David Rees
On Wed, Nov 19, 2008 at 7:11 PM, David Rees <[EMAIL PROTECTED]> wrote:
> On Wed, Nov 19, 2008 at 6:34 PM, dan <[EMAIL PROTECTED]> wrote:
>> the rsync algorithm is actually part of the GPL code released by Andrew
>> Tridgell.
>
> Yes, it is, but you cannot copyright algorithms, and you can't
> protect them from reverse engineering. You can patent an algorithm,
> but I know of no such patents on the algorithms used by rsync (not
> that I've looked).

A reference for my claim:
http://www.iusmentis.com/copyright/software/protection/

-Dave



Re: [BackupPC-users] Native Window version of rsync

2008-11-19 Thread David Rees
On Wed, Nov 19, 2008 at 6:34 PM, dan <[EMAIL PROTECTED]> wrote:
> the rsync algorithm is actually part of the GPL code released by Andrew
> Tridgell.

Yes, it is, but you cannot copyright algorithms, and you can't
protect them from reverse engineering. You can patent an algorithm,
but I know of no such patents on the algorithms used by rsync (not
that I've looked).

> Like I said, GPL is viral.  Tridgell released this algorithm as a part of
> rsync under the GPL, so use of the exact algorithm requires a GPL license.

Please stop using the term viral; the GPL is not viral.  Google it.
The terms under which GPLed software is licensed are quite clear to
those who wish to read the license.

> You may reverse engineer the functionality but you cannot use the
> algorithm, just a work-alike.

This is not true.

> If you have even looked at the code for rsync, you will likely be
> infringing on the GPL if you license this as anything but GPL.

This could be true, which is why I brought up my concern.  If there is
any doubt, one should always perform a clean-room reverse engineering
effort to produce compatible works.

> I am not some GPL nazi, I am just making this point.

> You might run into some other problems if using GPL code in a C# environment
> if you are using shared libraries that are not GPL.

Another myth. Please stop spreading FUD.

-Dave



Re: [BackupPC-users] Native Window version of rsync

2008-11-19 Thread David Rees
On Wed, Nov 19, 2008 at 4:19 PM, dtktvu
<[EMAIL PROTECTED]> wrote:
> It's still being discussed whether it's going to be open source or commercial.
> Right now, it looks like it's going to be the free-to-download type...

If you used the rsync source code to create the rsync .NET version, I
think that you may be legally required to keep the software licensed
under the GPL if you distribute it.

If you only used the rsync algorithms and interfaces to create the
port, then you should be able to license the software any way you
wish.

But IANAL.

-Dave



Re: [BackupPC-users] backing up large SQL DB's

2008-10-09 Thread David Rees
On Thu, Oct 9, 2008 at 11:06 AM, Nick Smith <[EMAIL PROTECTED]> wrote:
> I am using volume shadow copy to back up large (12GB+) SQL DBs.
> After the first full backup, when things are changed/added to the DB,
> is it going to pull down the entire DB again or will it just download
> the changes (if that's possible)?

As long as the file names remain the same and you are using rsync,
only the changes will be downloaded.

-Dave



Re: [BackupPC-users] BackupPC data on a Samba share

2008-04-24 Thread David Rees
On Tue, Apr 22, 2008 at 8:31 AM, Stephen Joyce <[EMAIL PROTECTED]> wrote:
>  For anything approaching 1TB or larger, consider xfs over ext3. Fsck'ing a
>  large ext3 filesystem takes ages.

Why would you ever need to fsck an ext3 volume? I suspect that a full
fsck of an xfs volume is just as slow as fscking an ext3 volume...

If you were comparing ext2 and xfs, then I'd agree, but really, there
is no real need to regularly perform a full fsck on ext3 volumes.

-Dave



Re: [BackupPC-users] suggestion -- nice down BackupPC_link

2008-04-02 Thread David Rees
On Mon, Mar 31, 2008 at 8:24 AM, Carl Wilhelm Soderstrom
<[EMAIL PROTECTED]> wrote:
>  My original contention still stands though: that lowering the priority
>  of the BackupPC_link process is a Good Thing.

I certainly agree - at least for servers where BackupPC is not the
only thing running.

On my Fedora/CentOS systems where BackupPC is not the only thing
running, I add "+20" to the daemon start command in
/etc/init.d/backuppc so it looks like this:

daemon +20 --user backuppc /usr/local/BackupPC/bin/BackupPC -d

I also change RsyncClientCmd in config.pl so the rsync process runs
under nice:

$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host nice -20 $rsyncPath $argList+';

-Dave



Re: [BackupPC-users] Transfer just rsync differences?

2008-04-02 Thread David Rees
On Tue, Apr 1, 2008 at 1:52 PM, Hereward Cooper
<[EMAIL PROTECTED]> wrote:
>  Glad to hear it behaves like I had expected and hoped.
>
>  But I've attached 3 quick screen shots as illustration of my problem.

First full backed up 30 MB.
First incr backed up ~1.3 GB of mostly new files.
Second incr backed up ~1.3GB of mostly existing files (they were
backed up in the first incr).

Looks normal to me. Why is your full only 30MB when your incr has
1.3GB of new data?

You can try disabling incrementals and only doing full backups - you'll
have more disk IO, but you can be sure you'll only be transferring the
changes between backups.

-Dave



Re: [BackupPC-users] Transfer just rsync differences?

2008-04-01 Thread David Rees
On Tue, Apr 1, 2008 at 11:52 AM, Les Mikesell <[EMAIL PROTECTED]> wrote:
> Hereward Cooper wrote:
>  > Is there a solution to this, as I'd love to keep using this program
>  > rather than going back to my custom rsync script.
>
>  It should be doing what you want now.  You just need to balance the full
>  vs. incremental runs for the tradeoff you want in runtime vs. rebuilding
>  the tree for the next comparison.

I believe you can make each incremental act like a full by setting
$Conf{IncrFill} to 1, so that incrementals only transfer the changes
since the last backup rather than all the data since the last full
backup. This way you can get the best of both worlds: reduced IO when
performing an incremental backup, plus periodic full backups to really
ensure that everything is there.

-Dave



Re: [BackupPC-users] nexenta vmware test

2008-03-24 Thread David Rees
On Mon, Mar 24, 2008 at 7:41 AM, dan <[EMAIL PROTECTED]> wrote:
> Unfortunately, I still cannot install 0.68 as I get the same make error
> "array type has incomplete element type" which is gcc4 being more picky that
> gcc3 was :(

You can't get an old version of gcc on there to compile with?

-Dave



Re: [BackupPC-users] zfs-fuse real world

2008-03-20 Thread David Rees
On Thu, Mar 20, 2008 at 7:08 AM, Daniel Denson <[EMAIL PROTECTED]> wrote:
> I will run whatever specific test you would like with Bonnie++, just
>  give me the command line arguments you would like to see.  I have each
>  filesystem mounted to /test$filesystem so you can include that if you
>  like.  I have never used bonnie++ before.

bonnie++ is easy. By default, it tries to choose sane file sizes to
avoid the effects of buffer caches. So all you need to do is this:

# bonnie++ -d /path/to/filesystem

I suggest creating the filesystems on the same partition. This means
creating a new filesystem for each test, but since the performance of
the disk varies depending on the location, not doing so can bias the
results. You can see this by running the zcav tool included with the
bonnie++ package. Also make sure that as little as possible is running
on the system while the tests are going.

Looking forward to those results!

-Dave



Re: [BackupPC-users] zfs-fuse real world

2008-03-20 Thread David Rees
On Wed, Mar 19, 2008 at 11:07 PM, dan <[EMAIL PROTECTED]> wrote:
> CPU e8400 3Ghz Dual Core.
> single 7200rpm 16MB cache 200GB maxtor drive.
> ubuntu 7.10

You don't mention how much memory you have in the machine...

> FILE COUNT
> 138581 files, 634MB, an average of 4.68KB per file (copied the /etc directory 20 times)

This doesn't look like a large enough data set, unless you are
dropping all caches in between each test. See
http://linux-mm.org/Drop_Caches
You should run `sync; echo 3 > /proc/sys/vm/drop_caches` before each test.

> find all files and run 'wc -l' (access speed) (wow zfs-fuse is slow here)
> zfs compression=gzip     9.688 sec
> zfs compression=off     10.734 sec
> *ext3                     .3218 sec
> *reiserfs                  .431 sec
> jfs                      36.18 sec
> *xfs                       .310 sec

I've used jfs before and would have noticed if it performed an order
of magnitude worse than the other filesystems - I have to think that
there is something peculiar about your benchmark.

> copy from RAM to disk (/dev/shm -> partition w/ filesystem; bus speed
> not a factor)

Why read from /dev/shm? Something like this would be better:

time dd if=/dev/zero of=/tmp/bigfile count=1 bs=1M

Adjust count as necessary to ensure that you are writing out
significantly more data than you have available RAM.

>  issues: jfs and xfs both did write caching and then spent periods
> catching up.
> ext3              1m13s       8.68MB/s
> jfs               3m21s       3.15MB/s
> *reiserfs           20s       31.7MB/s (WOW!)
> xfs               2m56s       3.60MB/s
> zfs (CPU bound)   2m22.76s    4.44MB/s

All of your numbers seem to be very slow. I would expect at least
25MB/s, probably 50MB/s for ext3, jfs, reiserfs and xfs.

Could you try running an established disk IO benchmark tool like bonnie++?

-Dave



Re: [BackupPC-users] Running Top

2008-03-03 Thread David Rees
On Mon, Mar 3, 2008 at 6:47 PM, Adam Goryachev
<[EMAIL PROTECTED]> wrote:
>  So would it then make sense for a backuppc data partition to use a
>  smaller stripe size since most writes will be very small?

Yes, if you're using RAID5. Doing some benchmarking would help find
the "sweet spot".

>  > Having battery backed RAM on the RAID controller can help, because the
>  > controller can lie to the OS and say the data is written to disk
>  > immediately instead of waiting for an read-calculate-write cycle,
>  > since it's sure that if it does lose power, it can store the data that
>  > should be written to disk later when power is restored in addition to
>  > buffering the reads/writes so that it can reorder them to reduce the
>  > amount of seeking required.
>
>  Is it possible to instruct Linux to use its memory to do this? If you
>  have a UPS and feel that it is pretty unlikely to crash, you might be
>  happy to get this kind of speed improvement.

Yes, it's possible to tune Linux to do this as well. To get some
ideas, look at the "large archives and scalability issues" thread,
this exact subject just came up in the past couple days. Be warned,
the thread is very long.

-Dave



Re: [BackupPC-users] Running Top

2008-03-03 Thread David Rees
On Mon, Mar 3, 2008 at 5:01 PM, Christopher Derr <[EMAIL PROTECTED]> wrote:
>  Is backuppc up to the task of backing up TBs of data?  Or should I be
>  looking at software that explicitly states "for the enterprise" like
>  Symantec Backup Exec, Legato, or even open source Bacula?  All of these
>  are just getting on the bandwagon for deduplication (backuppc's
>  "pooling") and that's almost a must-have feature for disk-to-disk backups.

Yes, it is, assuming that your BackupPC data partition is up to the task.

-Dave



Re: [BackupPC-users] Running Top

2008-03-03 Thread David Rees
On Mon, Mar 3, 2008 at 2:54 PM, Adam Goryachev
<[EMAIL PROTECTED]> wrote:
>  I was always led to believe that the more drives you had in an array the
>  faster it would get. I.e., comparing the same HDD and controller, if you
>  have 3 HDD in a RAID5 it would be slower than 6 HDD in a RAID5.

For most workloads, yes, more spindles will be faster. For small
writes (writes smaller than stripe size) to RAID 5, more spindles do
not help since performance is limited by the seek time of the slowest
disk.

For streaming read/writes and random read loads, a 6 disk RAID5 will
generally be faster than a 3 disk RAID5.

>  Is that an invalid assumption? How does RAID6 compare in all this? Would
>  it be faster than RAID5 for the same number of HDD's ? (Exclude CPU
>  overheads in all this)

As Les mentioned, RAID6 simply adds an additional parity disk so that
you can suffer from up to 2 disk failures without losing data.

RAID6 otherwise behaves much like RAID5, but tends to be slightly
slower because of the overhead of having to maintain two disks for
parity instead of one.

-Dave



Re: [BackupPC-users] Running Top

2008-03-03 Thread David Rees
On Mon, Mar 3, 2008 at 12:08 PM, Tomasz Chmielewski <[EMAIL PROTECTED]> wrote:
>  RAID5/6 have a performance penalty when compared to other RAID levels
>  because every single write (or, write IO operation) requires four disk
>  IOs on two drives (two reads, and two writes), possibly harming other IO
>  operations.

Correction:

Small writes (writes smaller than stripe size) require reads from
disk. Writes that are the size of a stripe or larger do not incur the
additional read penalty.

For example if you have a 3-disk RAID 5 and a 256 KB stripe size, two
disks hold 128 KB of data and the third holds 128 KB of parity data.

If you write less than 256 KB to a stripe, you first have to read the
data from the two data disks, calculate the parity with the new data
and write to all 3 disks.

But if you are writing 256 KB or more, you can skip the read and
simply calculate the parity and write all 3 chunks to disk.

Having battery-backed RAM on the RAID controller can help, because the
controller can lie to the OS and say the data is written to disk
immediately instead of waiting for a read-calculate-write cycle; it
knows that if it loses power, it can still commit the pending data once
power is restored. The cache also buffers reads and writes so the
controller can reorder them to reduce the amount of seeking required.

-Dave



Re: [BackupPC-users] Hardware upgrade advice

2008-02-28 Thread David Rees
On Wed, Feb 27, 2008 at 4:38 PM, Stephen Joyce <[EMAIL PROTECTED]> wrote:
>  (Mostly) agreed. If you can afford a hardware raid controller, raid 5 is a
>  good choice.

To clarify, a hardware raid controller with battery-backed RAM is a
good choice for RAID 5; otherwise it will either be very slow for small
random writes or run the risk of data corruption.

-Dave



Re: [BackupPC-users] large archives and scalability issues

2008-02-27 Thread David Rees
On Wed, Feb 27, 2008 at 2:54 AM, Tomasz Chmielewski <[EMAIL PROTECTED]> wrote:
>  Stripe size is 64k.
>  Also, the system was made with just "mkfs.ext3 -j /dev/sdX", so without
>  the stride option (or other useful options, like online resizing, which
>  is enabled by default only in the recent releases of e2fsprogs).
>
>  On the other hand, using "stride" is a bit unclear for me.
>
>  Although you can somehow calculate it if you place your fs directly on a
>  RAID array:
>
>stride=stripe-size
>Configure the filesystem for a RAID array with stripe-size
>filesystem blocks per stripe.
>
>  It is a bit harder if you have LVM on your RAID, I guess.

Right - I don't know if LVM "offsets" the filesystem layout any. If
you ever moved the array or reshaped the array using LVM, then you'd
lose the performance benefits of having an optimal stride setting.

IIRC, xfs automagically uses the correct stride if the RAID is a local array.

>  But as I looked at dumpe2fs output (and HDD LEDs blinking), everything
>  is scattered rather among all disks.

Which doesn't tell us much. The point of the stride setting is to
avoid writing specific bits of data across multiple stripes.

For example, if you perform a small write across two stripes, that
means you have to read/write two 64kB stripes in your case. By aligning
writes, you could avoid this and only read/write a single 64kB stripe.
It's pretty easy to see how this might affect performance.

>  Hey, I just disabled the internal bitmap in RAID-5 and it seems things
>  are much faster now - this is "iostat sda -d 10" output without the
>  internal bitmap.

Doh, I forgot about RAID-5 bitmaps. I did a quick search, and it
appears that bitmaps can really kill performance, though they do
prevent a full resync after a crash. I don't think the tradeoff is
worth it in your case.

It might be worth posting back to the thread on LKML (and cc
linux-raid) to see if there are any known workarounds if you want to
try to keep bitmaps enabled.

-Dave



Re: [BackupPC-users] large archives and scalability issues

2008-02-26 Thread David Rees
On Tue, Feb 26, 2008 at 4:39 PM, David Rees <[EMAIL PROTECTED]> wrote:
>  So there you go. IMO, unless you are willing to overhaul your storage
>  system or slightly increase the risk of data corruption (IMO,
>  data=writeback instead of the default data=ordered should be a large
>  gain for you and is very safe), you are going to continue to fall
>  further behind in your nightly cleanup runs.

I forgot to mention, this link may be informative:

http://wiki.centos.org/HowTos/Disk_Optimization

But I think it covers most of the topics in this thread already.

-Dave



Re: [BackupPC-users] large archives and scalability issues

2008-02-26 Thread David Rees
On Tue, Feb 26, 2008 at 2:23 PM, Tomasz Chmielewski <[EMAIL PROTECTED]> wrote:
>  > Can you give us more details on your disk array? Controller, disks,
>  > RAID layout, ext3 fs creation options, etc...
>
>  I said some of that already - but here are some missing parts.
>  5x 400 GB HDD (WDC WD4000YR)
>  The controller uses sata_mv module (Marvell).
>
>  Linux software RAID over it, LVM over it. Filesystem for backup is 1.2
>  TB big.
>  The fs has these features: has_journal filetype needs_recovery
>  sparse_super large_file.

Let me see if I can summarize this properly:

iSCSI to a Thecus n5200 with 5x 400 GB HDDs (WDC WD4000YR). The box
runs a custom Debian kernel and uses software RAID 5 + LVM.

You didn't mention the stripe size of the RAID 5. You also didn't
mention whether you used the stride option when creating the ext3
filesystem.

>  On an empty filesystem on that NAS I can write with ~25 MB/s.
>  I guess that backup filesystem is just very fragmented?
>  On the other hand, it is only 60-70% full, so drop from ~25 MB/s (empty
>  filesystem) to ~1.3 MB/s is indeed something odd.

25 MB/s seems abysmal for an empty filesystem. I really would expect
at least twice that on a GigE iSCSI network, and probably closer to 75
MB/s.

>  But hey, sequential writing isn't something very often used with BackupPC.
>  Just how fast can you write a file downloaded from the internet (or even
>  LAN) which is being compressed with bzip2 at the same time. Or better
>  yet, when there are 10 such threads compressing with bzip2.

No, sequential write performance isn't used much for BackupPC, but low
numbers do indicate that there is some sort of bottleneck there.

FWIW, my BackupPC system here, with a simple software RAID1 of two
250GB ATA disks, can write 25MB/s sequentially on its BackupPC
partition, which is 56% full.

>  Would BackupPC detect a corrupted file which is unique? Even if yes, it
>  would be corrupted, with no other copy, so...

BackupPC does detect when files on disk get corrupted when new backups
are being made (since it compares files being backed up directly
against the files in the pool). They are checked less frequently when
the rsync checksum-seed option is used, but you can change that. See
the docs for more info.

Anyway, looking at your setup, you have a number of things which are
contributing to your performance problems.

* Software RAID5 - RAID5 is HORRIBLE (yes, it needs to be
capitalized!) for small reads and writes. Especially for small random
writes, performance can degrade to speeds well below the speed of a
single disk. Small random reads will also perform poorly and at best
will match the speed of a single disk. Hardware RAID5 with a large
enough battery-backed cache can negate some of the performance
drawbacks of small random reads/writes.

* LVM - Often LVM can slow things down as well since it adds another
layer of abstraction.

* iSCSI - Network-based storage is never going to be as fast as
directly attached storage.

* It's unlikely that the proper ext3 stride option was used to create
the filesystem, which can result in poor performance on a striped RAID
array. I'm not sure how LVM would further affect this.

* Small amount of memory on NAS - It appears that your Thecus 5200
only has 256MB of RAM. I would expect that having significantly more
memory could help IO performance.

So there you go. IMO, unless you are willing to overhaul your storage
system or slightly increase the risk of data corruption
(data=writeback instead of the default data=ordered should be a large
gain for you and is very safe), you are going to continue to fall
further behind in your nightly cleanup runs.

-Dave



Re: [BackupPC-users] large archives and scalability issues

2008-02-26 Thread David Rees
On Tue, Feb 26, 2008 at 2:07 AM, Tomasz Chmielewski <[EMAIL PROTECTED]> wrote:
>  But I didn't have IO::Dirent installed, thanks for the hint. Let's hope
>  the list of directories in "trash" will keep decreasing now. Right now,
>  I have almost 100 directories there, and it is growing each day a bit.
>
>  Although - with IO::Dirent "wa" is now 100% almost all the time, and the
>  system feels much slower. Hm. Let's hope it's coincidence, and assume
>  the system was just committing a big write...

I would have thought that if nightly is running all the time, IO wait
time would have been 100% with or without IO::Dirent since you appear
to be IO bound 100% of the time.

>  > I did find it odd that you said after dropping the caches deletes ran
>  > fairly quickly for a while, then slowed down. This seems to point to
>  > some sort of kernel bug.
>
>  No, actually, everything is fine here.
>
>  To remove anything, kernel has to read directory structures etc. first.
>  Then, it can "remove" lots of files really fast without committing
>  anything to the disk (yet) - everything stays in memory.
>  Once kernel feels the pressure on memory, it will start committing the
>  changes (file removals) to the disk.
>  This is where seek problem leads to poor performance: write commits
>  compete with reads, so there is lots of seek.

Right, and I would expect the kernel to be flushing out those writes
at fairly small intervals (whatever the default is - I think 5 seconds)
compared to how often reads are happening, so the effect on those
writes should be small. But if the writes are going out very
frequently, then yes, that would be an issue and would really increase
the seek load.

>  I managed to get around this a bit by increasing commit= ext3 mount
>  option, and also, setting these values (and putting 3 GB of RAM into the
>  machine):
>
>  echo 50 > /proc/sys/vm/dirty_ratio
>  echo 50 > /proc/sys/vm/dirty_background_ratio
>  echo 6000 > /proc/sys/vm/dirty_writeback_centisecs
>  echo 6000 > /proc/sys/vm/dirty_expire_centisecs

Yep, those should help.

>  I guess enabling write cache would help a lot, as then, the storage
>  could commit the changes out of order (less seek) - but I don't trust it
>  much.

Where would you be enabling the write cache? It looks like you are
using iSCSI (I'm not familiar with it myself). If it's a battery-backed
write cache on the controller, then yes, that would be very helpful
and should be safe.

Can you give us more details on your disk array? Controller, disks,
RAID layout, ext3 fs creation options, etc...

I noticed in your linux-kernel thread that you said that large writes
are very slow (1MB/s). That certainly sounds abnormal. I would expect
at least 25MB/s, and with an array that big you probably have at least
3 disks, so it should be at least 2-3 times faster than that for
writing large files.

Have you also tried mounting the filesystem in data=writeback mode?
According to this whitepaper [1], it should significantly improve the
performance of creating/deleting small files at the expense of some
data integrity in the case of a crash (which should not be a big deal,
since BackupPC does a good job of verifying the pool during backups).

-Dave

[1] http://www.redhat.com/support/wpapers/redhat/ext3/tuning.html



Re: [BackupPC-users] large archives and scalability issues

2008-02-25 Thread David Rees
On Mon, Feb 25, 2008 at 6:29 PM, dan <[EMAIL PROTECTED]> wrote:
> reiserfs will certainly help a lot with the hardlink and directorie creation
> and deletion.  claims about reiserfs tend to be greatly exagerated but this
> is a true strength of it and would will see a really remarkable performance
> improvement for these specific operations.  i don't believe that xfs will
> net any significant inprovement.  xfs is better at handling small files
> quickly than ext3 but not significantly in real world situations.

Last time I tried reiserfs, it also slowed down significantly during
BackupPC nightly operations, so any claims of "it's way faster than
ext3" should be backed up with BackupPC-specific benchmarks, please.

ext3 performance, while not stellar, is consistent.

Tomasz, it would be interesting to see how many IOPS your array is
getting during nightly operations and whether the behavior changes
significantly before/after the perceived slowdown after dropping
caches.

After things slow down again, do they speed up again after dropping
the caches? It would be interesting to know whether the increased IO
means nightly is actually processing things faster, or whether it
simply reflects having no more filesystem data in cache while things
actually progress slower (they only appear faster because more IO is
taking place).

-Dave



Re: [BackupPC-users] large archives and scalability issues

2008-02-25 Thread David Rees
On Mon, Feb 25, 2008 at 1:23 AM, Tomasz Chmielewski <[EMAIL PROTECTED]> wrote:
>  Unfortunately, it doesn't scale very well in terms of performance - you
>  may see this thread on linux-fsdevel list for more info:
>  http://marc.info/?t=12033398513&r=2&w=4

What version of BackupPC? 3.1.0 does the inode sorting in the nightly
process as described in the thread if you have IO::Dirent installed
(check that's installed, too).

>  The main problem seems to be hard disk seeks caused by a great amount of
>  hardlinks. I.e., removing anything from the drive takes ages.

The only way you can really reduce the cost of all these seeks (unless
you can find a filesystem that handles this type of situation better
than ext3) is to try to limit head movement on the spindles. This
would mean partitioning your disks and using a smaller portion of the
disk for the BackupPC data partition, or adding more spindles with
higher rotation speeds and lower seek latencies.

I did find it odd that you said after dropping the caches deletes ran
fairly quickly for a while, then slowed down. This seems to point to
some sort of kernel bug.

Some people have said that perhaps reiserfs or xfs may perform better
than ext3 for this type of workload, but I don't know of any real
benchmarks made with a BackupPC-type workload, which is fairly unusual.

I assume that you've already mounted the filesystem with noatime?

-Dave



Re: [BackupPC-users] Enhancing WAN link transfers

2008-02-21 Thread David Rees
On Thu, Feb 21, 2008 at 11:43 AM, Nick Webb <[EMAIL PROTECTED]> wrote:
> Rich Rauenzahn wrote:
>  > dan wrote:
>  >> no, incrementals are more efficient on bandwidth.  they do a less
>  >> strenuous test to determine if  a file has changed.
>  >>
>  >> at the expense of CPU power on both sides, you can compress the rsync
>  >> traffic either with rsync -z
>  >
>  > Have you tried rsync -z?   Last I heard, BackupPC's rsync modules don't
>  > support it.
>
>  Actually, if I use the -z/--compress option in rsync or use ssh
>  compression BackupPC dies after a few hours (aborted by signal=PIPE).
>  Any suggestions on how to figure out why this is failing?  Works fine
>  without compression, but takes forever...

As Rich said, BackupPC's rsync modules don't support compression. SSH
compression should work fine, though.

-Dave



Re: [BackupPC-users] My thoughts on building an inexpensive backuppc server.

2008-02-11 Thread David Rees
On Feb 11, 2008 7:51 PM, Justin Best <[EMAIL PROTECTED]> wrote:
> On Feb 11, 2008, at 7:18 PM, Nicholas Mistry wrote:
>> Install your favorite flavor of linux with backuppc (CentOS, Fedora, Ubuntu,
>> Debian) but install a stripped down version w/o the gui and the like.
>
> Well, my first concern would be that the board you selected doesn't seem to
> have great Linux support. See one of the reviews at
> http://www.newegg.com/Product/Product.aspx?Item=N82E16813121326

The review mentions graphics support - and Nicholas specifically
mentioned he won't be running anything but text mode. Not an issue.

> Why not just buy a used machine, or better still, re-use one you've already
> got? All my BackupPC servers are old, re-used hardware, and they work great.
> And, if you get a machine that is well known (example: PowerEdge 400SC)
> you'll be able to google for help if you have weird issues with the OS.

Not a bad idea either! The biggest drawback is that those PowerEdge
400SCs are power hogs compared to the system that Nicholas specced out.
I'd estimate the PowerEdge to idle around 100 watts, where his system
will likely idle around half that.

Perhaps there's something out there with lower power consumption that
is inexpensive on the used market.

-Dave



Re: [BackupPC-users] Upgrading old server

2008-01-18 Thread David Rees
On Jan 17, 2008 1:37 PM, Bowie Bailey <[EMAIL PROTECTED]> wrote:
> I have a BackupPC server that I haven't touched in a while.  It is currently
> running version 2.1.2pl1.  Since I am so far behind, are there any problems
> I would run into upgrading this to the latest version?  Anything in
> particular that I would need to do to avoid problems?

Nope, not really. Just compare the old/new configuration file and you
should be good to go. Might also want to verify that all of BackupPC's
dependencies (perl modules) are up to date.

-Dave



Re: [BackupPC-users] How can I limit memory usage (RAM) of backuppc ?

2008-01-18 Thread David Rees
On Jan 18, 2008 12:50 AM, KLEIN Stéphane <[EMAIL PROTECTED]> wrote:
> is there a directive or other mechanism to limit the memory usage (RAM)
> of backuppc?

Not really; the maximum amount of memory used is more or less tied to
the type of backups you are doing and the number of files being backed
up.

For example, rsync backups of a large number of files can use a lot of
memory. If this becomes an issue, possible workarounds are to split
the backup into smaller chunks or use a different transport mechanism.

-Dave



Re: [BackupPC-users] Why is SMB backups so much faster than ssh/rsync ???

2007-12-18 Thread David Rees
On Dec 18, 2007 5:05 PM, Brendan Simon <[EMAIL PROTECTED]> wrote:
> So is the bottleneck rsync or the number of files or memory ???

In this case, it's neither the number of files nor memory.

If you look at top in this particular case, the backup is completely
CPU bound, with ~70% CPU being used by BackupPC and ~30% CPU being used
by ssh.

The machine isn't swapping, and isn't waiting on disk.

The only way you're going to go faster is to get more CPU power.

-Dave



Re: [BackupPC-users] Why is SMB backups so much faster than ssh/rsync ???

2007-12-18 Thread David Rees
On Dec 16, 2007 9:44 PM, Brendan Simon <[EMAIL PROTECTED]> wrote:
> *   The backuppc server is an Intel P3 800MHz with 512MB RAM and
>   550GB of raid storage for data backups.
> * The linux host I am backing up is a Dual Processor AMD64 2GHz with
>   2GB RAM.
> * All are connected via 1Gbps Ethernet switch.

What are the specs of the raid array for the BackupPC server? What
type of disk subsystem does the linux host have?

-Dave



Re: [BackupPC-users] BackupPC 3.1.0 released

2007-11-28 Thread David Rees
On Nov 28, 2007 10:15 AM, Arch Willingham <[EMAIL PROTECTED]> wrote:
> Can you upgrade if the original install was from the source?

Yes, it is very easy to upgrade from source. Just follow the
installation instructions; an upgrade follows the same procedure.

http://backuppc.sourceforge.net/faq/BackupPC.html#installing_backuppc

-Dave



Re: [BackupPC-users] 4 x speedup with one tweak to freebsd server

2007-11-08 Thread David Rees
On Nov 6, 2007 7:35 AM, Paul Archer <[EMAIL PROTECTED]> wrote:
> > I mount my /backup raid with noatime and notail options.
> >
> Don't forget nodiratime.

nodiratime is a subset of noatime, so if you have noatime set, there
is no need to set nodiratime.

-Dave



[BackupPC-users] Expiring Backups for Disabled Hosts

2007-11-01 Thread David Rees
I noticed that for hosts which I have disabled, old full/incremental
backups don't get removed automatically anymore.

Reading the docs, it appears that backups only get moved to the trash
after a successful backup, which likely explains why this is happening.

Now, I could just go move those backups manually to the trash folder
for cleanup, but it would be nice if the nightly process automatically
cleaned up old backups for disabled hosts (respecting FullKeepCntMin
and IncrKeepCntMin numbers of course).

Would anyone else find this useful?

-Dave



Re: [BackupPC-users] How long does a normal Backup take?

2007-10-24 Thread David Rees
On 10/24/07, Hendrik Friedel <[EMAIL PROTECTED]> wrote:
> Well, what surprises me is that I can't hear it seeking...

Try using `iostat 3` or similar during a backup. Typical 7200 rpm IDE
disks can't do more than 100-150 IOPS or so.

> /dev/hda5 94% /mnt/data <--xfs, not used by backuppc
> /dev/hdb1 99% /mnt/data1 <--reiserfs, backuppc

Eek! Keeping your disks over 90% full is a bad idea. I really suspect
that your reiserfs partition is very heavily fragmented, and I have to
wonder if the directory structures have also become fragmented. At
least with xfs you can defragment online.

> > You should also check that each disk is running full speed by
> > running hdparm -tT /dev/hdX. You should be seeing at least
> > 30MB/s, probably
> > 40-50MB+.
>
> Well, it's just doing a backup and an emerge (xfs_fsr ;-)
>
> /dev/hda:
>  Timing cached reads:   154 MB in  2.02 seconds =  76.18 MB/sec
>  Timing buffered disk reads:   66 MB in  3.04 seconds =  21.72 MB/sec
>
> /dev/hdb:
>  Timing cached reads:   144 MB in  2.00 seconds =  71.90 MB/sec
>  Timing buffered disk reads:   60 MB in  3.00 seconds =  19.97 MB/sec
>
> Were you refering to the buffered or unbuffered speed?

Buffered speeds. Those speeds are low if the system is idle. You
should be seeing 30MB+ for those disks, probably at least 40MB/s. If
the system wasn't idle, please try again when it is.

Anyway, seeing the iostat data will let us know if the disks are maxed
out or not.

-Dave



Re: [BackupPC-users] How long does a normal Backup take?

2007-10-08 Thread David Rees
On 10/8/07, Hendrik Friedel <[EMAIL PROTECTED]> wrote:
> Ok, what I did now was a backup of localhost. Here, the network cannot be
> the bottleneck, and I can check the rest with vmstat, too. By the way: The
> Pool-Disk is dedicated and not the source disk for the backup.
>
> Here's the vmstat. I hope it's readable.
> procs -----------memory---------- ---swap-- ----io---- --system-- ----cpu----
>  r  b  swpd  free   buff  cache   si  so   bi   bo    in   cs us sy id wa
>  1  3  2816  6308 135036 228608    0   0  527  583  3027 4872 11 11  0 77
>  0  3  2816  6236 134420 229588    0   0  779  281  2946 4755 18 12  0 70
>  0  4  2816  5532 134820 229848    0   0  436  333  3906 4505  8 19  0 73

Here your system is primarily waiting on disk; see the high IO wait
time of ~70% and the low bi/bo rates. That indicates that the
disks are seeking all over the place.

You mentioned that one of the disks is running xfs. You can check the
fragmentation level by running: xfs_db -c frag -r /dev/hdXX

If the fragmentation level is over 10%, it's worth defragging, just
run xfs_fsr which will defrag all mounted xfs filesystems for 2 hours.
It's not a bad idea to run that periodically if you have xfs
partitions.
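
If you want to automate that, a crontab entry roughly like this (the
schedule and path are just an example) runs xfs_fsr for up to 2 hours
every Sunday morning:

# defragment mounted xfs filesystems for at most 7200 seconds
0 3 * * 0 /usr/sbin/xfs_fsr -t 7200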

I seem to remember reiserfs having performance issues after some time,
especially if you happen to let the partition get too full. How full
are your partitions?

You should also check that each disk is running full speed by running
hdparm -tT /dev/hdX. You should be seeing at least 30MB/s, probably
40-50MB+.

-Dave



Re: [BackupPC-users] How long does a normal Backup take?

2007-10-08 Thread David Rees
On 10/8/07, Hendrik Friedel <[EMAIL PROTECTED]> wrote:
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
>  3  0     80   6944  32080 328428    0    0   668  1176 3503  7659 33 29 38  0
>  1  0     80   8744  32156 327636    0    0     5  2240 3515  8019 29 22 45  4
>  3  0     80   7548  32156 328932    0    0   685   805 3854  8550 21 23 56  0
>  2  0     80   6516  32152 328976    0    0   673  1131 3788  8503 28 24 47  0
>  0  0     80   7152  32144 329464    0    0     0   727 3450  8009 35 20 45  0
>  1  0     80   7364  32204 329152    0    0   668  2371 3906  8817 23 21 52  4
>  0  0     80   6388  32028 329144    0    0     0   536 3320  7767 33 19 48  0
>  0  0     80   8360  32028 328212    0    0   676  1900 3710  8449 26 22 49  3
>  3  0     80   7028  32084 328412    0    0     0  1637 3716  8555 27 19 52  2
>  2  0     80   7200  31992 328452    0    0   668   712 3135  7354 40 19 41  0
>
> Sorry, but I really don't know, how to interpret it. I know (read it in the
> man-page) what the columns are, but still I cannot draw conclusions.

I snipped a bit of it and fixed the word wrapping (hopefully) to make
it a bit easier to read.

CPU, memory use, disk IO all look reasonable. Your server isn't
completely maxed out here. The context switch rate is a bit high, but
you have CPU cycles to spare so it's not really an issue here. It is
likely either the client or network keeping the backups from going
faster.

I would start looking at client side (CPU/disk) or network bottlenecks
if you are looking to go faster.

> Out of couriousity: How many MB/s do your backups usually acheive?

Anywhere from 1MB/s to 8MB/s is typical depending on the client speed
and files being backed up.

-Dave



Re: [BackupPC-users] How long does a normal Backup take?

2007-10-07 Thread David Rees
On 10/7/07, Hendrik Friedel <[EMAIL PROTECTED]> wrote:
> So what do you think is a good approach to find out what slows backuppc
> down?

Running top and vmstat 3 while a backup is running works very well and
will help track down where the bottleneck is.

-Dave



Re: [BackupPC-users] New Hardware [was Troubleshooting a slow backup]

2007-10-02 Thread David Rees
On 10/2/07, Tony Nelson <[EMAIL PROTECTED]> wrote:
> My first decision point is potentially the easiest.  I thought rather
> than buying one huge backup server and trying to backup all 32 hosts, it
> might be smarter to buy 2 (or more) smaller machines and splitting up
> the load.  I would think that multiple machines might cost a little more
> up front, but I'm hoping I would get better throughput that way.

How many machines you buy and how big of a machine you buy will depend
on the size of your backups and what your backup window of your
largest backups are.

If most of your hosts back up fairly quickly, but you have a handful
of hosts with a lot of data/files and not a lot of time to get the
backup done, you may need a machine with faster CPUs and more spindles
to get the job done.

If you identify the problematic backups using your current machine and
get some performance metrics as well as what your performance targets
are, that will help identify the type of hardware you should be
looking at.

Don't forget the overhead of managing an additional machine as well.

> The second decision point would be vendor.  Currently we buy most of our
> hardware from Dell.  I'm fine with Dell unless there is vendor out there
> that sells boxes for a reasonable price that are hands down better for
> the application (possibly because of RAID controller cards).

Dell is what we usually use when purchasing hardware. Sometimes we use
HP. Vendor isn't too important here, what's most important for
BackupPC is disk speed, total memory and CPU speed and cores.

> I've already read a lot of the discussion on RAID configurations in the
> list.  I've ruled out RAID 5.  RAID 10 looks nice, but really expensive.
>   A simple mirror seems pointless, and a simple stripe seems like too
> much of a single point of failure.  If I were to do RAID 10, with say
> 500GB SATA drives, would a single raid controller suffice, or does it
> really pay to get 2?

As others mentioned, there isn't any reason to get more than one
controller. If you were doing a lot of streaming reads/writes at high
bandwidth with enough disks, splitting onto multiple controllers can
help. But with BackupPCs workload of lots of small reads and writes,
it's not critical.

RAID10 while expensive, gets you the highest performance and highest
reliability in the face of disk failures.

How many disks per machine are you looking at? If RAID10, that means
at least 4. Our fastest BackupPC servers use 6 15k SCSI disks.
Remember that with the high-seek activity BackupPC presents, high
spindle speeds are important for fast backups.

As a general guideline, we look to spec a machine with 1 CPU and 2
disks for every 2 concurrent local net backups that we want to run at
full speed. This will be overkill if the clients are over a slow
network and/or if the CPU/disk speed of the clients is slower than
the server's.

Also take into consideration the power requirements of the system(s).
Since your backuppc system will likely be idle 90% of the time, AMD
systems will usually have much lower idle power requirements. But the
pricing of quad-core Intel systems is very good right now.

If you were to build a single system backing up 32 clients that have
~10GB average to back up (seeing your current host summary page would
help), I would recommend something with 4 cores and 8 disks in a
RAID10 and 2-4+GB of RAM. Cut that in half if going for multiple
smaller systems. This should let you back up 4+ clients in parallel as
fast as they can go. Get 10k or 15k disks if you can, but the price
premium may be too much depending on what your total storage
requirements are.

But given that your current system is a dual CPU system with 6 250GB
disks in RAID 5, I really suspect that if you could simply reconfigure
your existing system into a RAID10 configuration (assuming going from
1.25TB to 750GB is still enough disk space) would be a substantial
improvement. In fact, if you are going the multiple system approach,
you may start off with one new small/medium system (2-4 cores, 4-6
disks) to get you to where you can reconfigure your existing system.

-Dave



Re: [BackupPC-users] filesystem recommendation

2007-09-27 Thread David Rees
On 9/27/07, Doug Lytle <[EMAIL PROTECTED]> wrote:
> I've recently purchased two 500GB drives that I wanted to add to my XFS
> LVM.  It turns out that you can't resize an XFS partition.  I ended up
> having to recreate the LVM.

You can resize an XFS partition; you need to use the xfs_growfs utility.

-Dave



Re: [BackupPC-users] Troubleshooting a slow backup

2007-09-27 Thread David Rees
On 9/27/07, Dan Pritts <[EMAIL PROTECTED]> wrote:
> So I've been of the opinion (not backed up by experimental data) that
> a concatenation (what linux md driver calls LINEAR; similar effects can
> be realized with LVM) of two RAID1's would be better for BackupPC than
> a RAID10.
>
> My rationale for this is that in RAID10, the disks are generally
> seeking to the same spot, unless you have a write that doesn't span
> across multiple raid stripes.  This certainly happens, but i suspect
> most writes span multiple stripes.
>
> i guess this really depends on the RAID stripe size, bigger would be better.

Looking at my average file size on one of my backuppc servers, it
appears to be about 50KB. With a typical stripe size being 64KB, that
would seem to indicate that your average write will fit on one stripe,
so that may hurt your argument.

Additionally, if we look at the big picture where we are writing out a
bunch of files, these are pretty much guaranteed to be scattered all
over the disk with your typical filesystem. Even a fresh filesystem
will scatter new files over the disk to help avoid fragmentation
issues should you decide to grow the file later.

Now throw in the number of backups you are doing and you end up with
too many variables to consider before assuming that a linear array
will outperform a striped array.

For random reads all over the disk, the performance should be similar
for small files but large file reads should be up to twice as fast.
Throw in multiple readers and the difference will narrow.

> > Stay away from RAID5 unless you have a good
> > controller with a battery backed cache.
>
> Even then, performance won't be great, especially on random small writes
> (look up the "RAID5 write hole" and "read-modify-write" to understand why).

But wait, I thought you said that the average write under backuppc
load would be larger than a stripe? So which is it? ;-)

-Dave



Re: [BackupPC-users] Troubleshooting a slow backup

2007-09-26 Thread David Rees
On 9/26/07, Tony Nelson <[EMAIL PROTECTED]> wrote:
> David Rees wrote:
> > Your machine looks fine to me. Your backuppc data partition is a single 
> > disk?
>
> My servers disk it 6 250G IDE drives arranged in a RAID5 with 1 Hot
> Spare.  The Controller is a 3Ware Escalade 7506-8 Controller.

OK, there are some settings which can especially help with 3ware controllers.

What is the output of the following files (use cat)?

/sys/block/sd*/queue/max_sectors_kb
/sys/block/sd*/queue/nr_requests
/sys/block/sd*/device/queue_depth

Specifically, the queue depth is often much larger than nr_requests,
so setting nr_requests to something about twice as big as the
queue_depth helps. On a 3ware card, the queue_depth should be about
254, so trying setting nr_requests to 512 like this (assuming sda is
your raid array):

echo 512 > /sys/block/sda/queue/nr_requests

Also setting max_sectors_kb to 64 can help with 3ware raid 5 arrays:

echo 64 > /sys/block/sda/queue/max_sectors_kb

If the system gets sluggish under high IO load, you can also switch to
the deadline IO scheduler:

echo deadline > /sys/block/sda/queue/scheduler

More info/reference: http://www.3ware.com/KB/article.aspx?id=11050
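
Note that these echo settings don't survive a reboot. If they help, you
can make them permanent with something like this in /etc/rc.local (again
assuming sda is your array):

# tune block layer settings for the 3ware controller at boot
echo 512 > /sys/block/sda/queue/nr_requests
echo 64 > /sys/block/sda/queue/max_sectors_kb
echo deadline > /sys/block/sda/queue/scheduler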

> > As the others suggested, mounting the backuppc data partition with
> > noatime can help. I usually mount all my filesystems with noatime.
> > Mounting the backuppc data partition with data=ordered option may
> > help, too (the default is data=writeback).
>
> I'll add data=ordered now

Ugh, data=ordered is the default, you should try data=writeback (had
it backwards).
http://linuxmafia.com/faq/Filesystems/journaled-filesystems.html
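
For example, an fstab entry roughly like this (device and mount point
assumed):

/dev/sda1  /var/lib/backuppc  ext3  noatime,data=writeback  1 2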

> I'll try the checksum seed with a test backup now..

Typically, the next backup won't get much faster; IIRC it takes 2-3
backups to realize the full performance from using checksum seed.

I don't know if the actual value of checksum seed matters, but 32761
is typically used as suggested in the default config.pl file.
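
In your config.pl (or a per-host override) that amounts to appending it
to the existing rsync argument list, roughly:

# add checksum caching to the rsync arguments
push(@{$Conf{RsyncArgs}}, '--checksum-seed=32761');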

-Dave



Re: [BackupPC-users] Troubleshooting a slow backup

2007-09-26 Thread David Rees
On 9/26/07, Tony Nelson <[EMAIL PROTECTED]> wrote:
> Well, due to a power failure, I was put in the lovely position of a
> corrupted ReiserFS tree.  I ran reiserfsck, which took 4 days to
> complete and just couldn't bring myself to trust stability of the disk.

Given the lack of interest/maintainers in ReiserFS, I wouldn't
recommend it to anyone.

> It still seems very slow to me.  I don't know if I should attribute it
> to the fact that everything it is doing is a full backup or not.
>
> I've attached the output from vmstat on the BackupPC server.  The server
> is currently running 3 full backups.

Your BackupPC server's disk is completely maxed out. Looks like it is
doing a lot of seeking. To get more throughput, you'll need more disk
spindles. RAID1 will improve random read IO performance, but you'll
need RAID10 w/4 disks which should get you 2-4x read performance and
2x write performance. Stay away from RAID5 unless you have a good
controller with a battery backed cache.

You can see from the vmstat output that the server is doing a bit of
reading and some writing. Given the low throughput (2-3MB/s) and high
IO wait, you can see that the disk is spending most of its time doing
lots of small IO all over the disk which is typical of BackupPC disk
load.

> I'm going to investigate the currently installed software on this
> machine just to see if I can find any issues.

Your machine looks fine to me. Your backuppc data partition is a single disk?

As the others suggested, mounting the backuppc data partition with
noatime can help. I usually mount all my filesystems with noatime.
Mounting the backuppc data partition with data=ordered option may
help, too (the default is data=writeback).

I also typically avoid running more than 2 concurrent backups at a
time, but it's really a matter of balancing server CPU/disk/network
utilization to find out how many you can run. For a typical
single/dual CPU system with a single disk or RAID1, backing up
machines on the LAN, 2 concurrent backups is enough.

Did you also try enabling the checksum-seed option? What about
upgrading to backuppc 3.1beta and installing the IO::Dirent perl
module? 3.1beta also has a few other performance improvements.

-Dave



Re: [BackupPC-users] Troubleshooting a slow backup

2007-09-19 Thread David Rees
On 9/19/07, Tony Nelson <[EMAIL PROTECTED]> wrote:
> Attached are the files you requested.  The BackupPC server was running 2
> long running backups when I took these. In addition to the screenshots
> you requested, I added a screenshot from the web console of BackupPC.

From your screenshots, the web server CPU/disk utilization is not an
issue; vmstat is showing it to be near idle.

Your backup server is showing about 22% waiting on disk IO, how many
processors and what type of disk/filesystem is your backuppc data
partition on?

If your backuppc data partition isn't mounted with noatime, that's one
of the first things I'd do (but isn't likely to make a huge
difference). It also appears that you aren't using the checksum-seed
rsync option, you should enable that as well.

> I attempted counting the number of files being backed up by selecting
> data from a SQL server that contains information about most of the
> files.  The most reasonable ballpark I can give you is between 275k and
> 300k files.  If a it's better, I'll run a 'find /ha -type f | wc -l'
> during the off hours tonight, unless you can suggest a better command.

Not too many files. How much new data is being backed up each day? You
can get file counts and sizes of full/incr backups by looking at the
hosts's summary page under "File Size/Count Reuse Summary"

-Dave



Re: [BackupPC-users] Troubleshooting a slow backup

2007-09-19 Thread David Rees
On 9/19/07, Tony Nelson <[EMAIL PROTECTED]> wrote:
> I looked into the checksum-seed option for rsync and it appears to a
> patch that I don't have.  I am using Gentoo and just installed rsync
> from Portage.  Has that patch every made it into the rsync upstream?

Checksum seed support was added in rsync 2.6.3 (30 Sep 2004) so
any halfway recent distro should support it. I can't imagine that
Gentoo doesn't.

-Dave



Re: [BackupPC-users] Troubleshooting a slow backup

2007-09-19 Thread David Rees
On 9/19/07, Tony Nelson <[EMAIL PROTECTED]> wrote:
> David Rees wrote:
> > Your backup server is showing about 22% waiting on disk IO, how many
> > processors and what type of disk/filesystem is your backuppc data
> > partition on?
>
> The system has 2 dual core Xeon processors.

That's what I expected given the slightly less than 25% disk wait: one
of the processors is just sitting there waiting for disk IO most of the time.

> The filesystem is a 1.5TB Hardware controlled RAID Array made up of
> 250GB IDE (not SATA) drives, in a Penguin Computing Relion 430 server.

What kind of RAID? (1, 5, 10, etc)

> The filesystem is a ReiserFS (M5 hash if memory serves) filesystem,
> mounted as:
>
> /dev/sda1  /var/lib/backuppc   reiserfs   noatime 1 2

I know some people like using reiserfs for some reason, but I seem to
recall having slow IO performance over lots of small files a while
back using it. I use ext3 on all my backuppc servers myself.

> I'm not aware of the checksum-seed option to rsync, but I'll look it up
> and add it tonight, and see how the backup goes.

The checksum-seed option will definitely reduce the IO load on the
server by some amount. This was added in rsync 2.6.3 (30 Sep 2004) so
any halfway recent distro should support it.

> It turns out my guess was way wrong.
>
> New
> 3086 Files
> 1572.8MB Data

If you're getting that much new data a day, you should try to run full
backups more frequently, or upgrade to BackupPC 3 which supports
multi-level rsync incrementals.

> I also see looking at that page, I haven't had a good full backup in
> quite some time.  I manually schedule this particular backup out of cron
> to ensure that it starts at the same time every day and it looks like
> the full from the past couple of weekends failed.  I wonder if that is
> affecting performance.

That is definitely affecting performance since for each incremental
backup, you need to transfer over all the data that has changed since
the last full backup (unless you are using multi-level rsync
incrementals in BackupPC 3).

Also available in BackupPC 3.1 beta, you can install the IO::Dirent
perl module on the server which should also improve backuppc
performance during trash clean operations.

-Dave



Re: [BackupPC-users] Fastest way to backup lots of small files?

2007-09-19 Thread David Rees
On 9/19/07, Merz, Christian <[EMAIL PROTECTED]> wrote:
> I'm running Backuppc successfully for quite some time now, backing up mostly
> Windows Clients and Servers using smbclient.
> As the Data is growing, its becoming difficult to backup a complete Server
> over Night, so I'm looking if I can speed it up.
> I'm wondering if rsync on Windows would be faster then smbclient, has
> anybody of you experience with it, and has a comparison between rsync and
> smbclient?
> Large Portions of the data consist of small files, 1-10kb, millions of them.

The biggest issue with rsync is that it can use a lot of memory if you
have a lot of files, so you will often need to break up rsync backups
with a lot of files into smaller batches.
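
With BackupPC that usually just means listing several smaller shares in
the host's config instead of one big one, for example (paths made up):

$Conf{RsyncShareName} = ['/data/projects', '/data/archive', '/home'];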

That said, if the files you are backing up don't change much, rsync
may be faster even after breaking up the backup into manageable
chunks. This sort of thing varies so much depending on the speed of
the backup server, client, network, disk setup, etc, that you will
likely just have to try it yourself to find out.

-Dave



Re: [BackupPC-users] Troubleshooting a slow backup

2007-09-18 Thread David Rees
On 9/18/07, Tony Nelson <[EMAIL PROTECTED]> wrote:
> What I would like to do is figure out the best way of determining if the
> source of the slowness is the target server, the backuppc server or a
> network bottleneck that I just can't imagine.

Fire up `top` and `vmstat 3` on each machine while the backup is
running, preferably when it's the only backup running and nothing else
is going on.

Grab a representative screenshot from each and post them up, it should
quickly show where the bottleneck is. How many files total are being
backed up?

-Dave



Re: [BackupPC-users] yum package for latest release?

2007-09-10 Thread David Rees
On 9/10/07, Les Dunaway <[EMAIL PROTECTED]> wrote:
> The yum pkg for BackupPC on the Fedora Extras is back level (2.1.2-7)
> with AFAICS no web UI?
>
> Is there a plan to pkg 3.x? If so, when?

You should probably open a bug in RedHat's bugzilla to get an answer
to this, it's very likely that Craig does not maintain the Fedora
BackupPC RPMs there.

In fact, searching for backuppc reveals this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=232738

Which shows that the current maintainer is looking for a co-maintainer
to help with updates and the like, meaning that backuppc is more or
less unmaintained in Fedora right now.

-Dave



Re: [BackupPC-users] rsync --compress broken ?

2007-08-22 Thread David Rees
On 8/21/07, Rich Rauenzahn <[EMAIL PROTECTED]> wrote:
> Whenever I use these options, rsync "seems" to work and transfer
> files but nothing ever seems to actually get written to the backup
> dirs:

The Perl Rsync library doesn't support compression, which is why adding
the compression option to the configuration does not work.
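
If bandwidth is what you're after, you can get compression at the
transport level instead, e.g. by adding -C to the ssh command. A sketch
based on the stock RsyncClientCmd:

$Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';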

-Dave



Re: [BackupPC-users] Outlook and lock file problem

2007-07-26 Thread David Rees
On 7/26/07, David Rees <[EMAIL PROTECTED]> wrote:
> On 7/26/07, Yaakov Chaikin <[EMAIL PROTECTED]> wrote:
> > This is a very basic question... After reading the docs, I am unclear
> > on the difference between an incremental backup and a full backup.
> > Since the backups are stored in a pool which stores JUST the
> > difference between the older and newer files, then what is the
> > difference then?
>
> Please do not reply to a post when starting a new topic, and even if
> you are not able to post a new topic, please at the very least change
> the subject!

And shame on me for not looking for other posts before replying. :-(
Sorry for the noise.

-Dave



Re: [BackupPC-users] Outlook and lock file problem

2007-07-26 Thread David Rees
On 7/26/07, Yaakov Chaikin <[EMAIL PROTECTED]> wrote:
> This is a very basic question... After reading the docs, I am unclear
> on the difference between an incremental backup and a full backup.
> Since the backups are stored in a pool which stores JUST the
> difference between the older and newer files, then what is the
> difference then?

Please do not reply to a post when starting a new topic, and even if
you are not able to post a new topic, please at the very least change
the subject!

-Dave



Re: [BackupPC-users] High Memory Usage

2007-07-06 Thread David Rees
On 7/5/07, Carl Wilhelm Soderstrom <[EMAIL PROTECTED]> wrote:
> On 07/05 05:11 , David Rees wrote:
> > I think that possible workarounds would be to switch to a different
> > backup transport other than rsync. Can anyone think of any other
> > solutions?
>
> Try carving it up into several chunks that get backed up separately.
> Make sure your backup server is only trying to back up one host at a time.

Thanks for the suggestion. Yes, I am only backing up one host at a
time. I think for now I am going to try to see if I can remove or
archive some of the files being backed up to reduce the number of
files in the backup.

I guess the other option would be to upgrade the backup machine from
4gb to 8gb. It is running a 64bit Xeon which I suspect may increase
memory requirements as well compared to a 32bit install.

-Dave



[BackupPC-users] High Memory Usage

2007-07-05 Thread David Rees
Hi,

I'm using backuppc to backup a number of different machines, but am
having some memory consumption issues with the backuppc daemon when
backing up one particular host we just started backing up using
BackupPC 3.0.0.

When backing up this client, the daemon uses over 4GB of RAM causing
the machine to swap significantly. The client is a CentOS 4 box and is
being backed up using rsync over ssh. Interestingly, the full backup
seemed to use a lot less memory than the incrementals as the machine
didn't go into swap when doing the full backup.

I am pretty sure that the main issue causing the memory utilization is
that the client has over 1.8 million files to be backed up with
another 20k files to back up each day. Perhaps the File::RsyncP module
is causing this memory utilization?

I think that possible workarounds would be to switch to a different
backup transport other than rsync. Can anyone think of any other
solutions? Has any work been put into the RsyncP module to reduce
memory utilization?

-Dave



Re: [BackupPC-users] Using Backup PC in "online backup" model

2007-06-13 Thread David Rees
On 6/13/07, Francis Lessard <[EMAIL PROTECTED]> wrote:
> I currently use BackupPC 3.0.0 to backup 2 www servers. As bandwidth cost a
> lot, I would like to use BackupPC similar to a commercial online backup
> service we use. This service does a full backup only once, then do only
> incremental backups afterward. Am I correct if I set FullPeriod to 0, start
> a manual full backup, then let run incremental backup all the rest of the
> time ?

If you're using one of the rsync data transfer methods, rsync always
transfers only data that has changed since the last backup, whether
it's a "full" or "incremental" backup.

-Dave



Re: [BackupPC-users] double backup data disk

2007-05-04 Thread David Rees
On 5/3/07, Les Stott <[EMAIL PROTECTED]> wrote:
> YOUK Sokvantha wrote:
> > I installed Backuppc 2.1.2-2ubuntu5 on Ubuntu 5 server edition.  I
> > mounted data directory /var/lib/backuppc to /dev/sda. I added another
> > hard disk  /dev/sdb and mount /dev/sdb as /var/lib/backuppc-mirror. My
> > purpose is to copy all the data from /var/lib/backuppc to the second
> > hard disk (/var/lib/backuppc-mirror) in case of disk failure. So i use
> > /usr/bin/rsync -apvz --delete /var/lib/backuppc /var/lib/backuppc-mirror
> > as root but the size on the /var/lib/backuppc-mirror is very big if
> > compare to the size on /var/lib/backuppc.
> >
> > Can anyone show me on how to make double backup data disk?
>
> I think it would be easier to set up a software mirrored partition for
> your backuppc data. Mirror the partitions on the two drives, then you
> don't need to worry about manually copying; it's all done by the raid and
> it's redundant.

I would recommend this as well. It also has the benefit of improving
read performance.
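
A minimal sketch with the Linux md driver (device names assumed, and
note this destroys whatever is on those partitions):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0
mount /dev/md0 /var/lib/backuppc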

Otherwise I would recommend a simple dd of the backuppc data partition
to the backup disk. This would be fastest, but does require that your
backup disk be at least as big as your backuppc data partition.
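
Something like this, with BackupPC stopped and both partitions unmounted
(device names assumed):

dd if=/dev/sda1 of=/dev/sdb1 bs=1M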

-Dave



Re: [BackupPC-users] BackupPC ignoring the BackupsDisable entry

2007-04-23 Thread David Rees
On 4/23/07, James Kyle <[EMAIL PROTECTED]> wrote:
> I'm getting the "Administrative Attention Needed" alerts daily
> referring to a host that I've set as being archived.
>
> I've set the $Conf{BackupsDisable} variable to "2 -> Don't do any
> backups on this client.  Manually requested  backups (via the CGI
> interface) will be ignored." from within the CGI Schedule
> Configuration. I've also toggled that variable to "Override".

FWIW, I'm seeing the same thing on one of my hosts that I've disabled
using BackupPC 3.0.0.

I just realized that I didn't have $Conf{EMailNotifyOldBackupDays} set
to a large value for those hosts like I do for some hosts that don't
generate notices, but I wouldn't have thought that would have been
required.

-Dave



Re: [BackupPC-users] very slow backup speed

2007-03-30 Thread David Rees
On 3/30/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> David Rees wrote:
> > How long are full and incremental backups taking now?
>
> In one machine it went down from 900 minutes to 175 minutes. I expect
> better performance when more memory is added (today or tomorrow they
> will add it) and I don't think all files had checksums cached when this
> full was run.

Wow, that is a huge difference! I didn't expect performance to
increase that much; apparently the checksum caching is really reducing
the number of disk IOPs.

> I could try tar for testing purposes if you like? I think rsync will
> be sufficiently fast enough. I am guessing that with checksum-seeds the
> difference shouldn't be so much; tar probably transfers much more data
> in full backups? Rsync can perhaps be faster if ignore-times was removed
> when taking full backups. I am thinking of removing the ignore-times
> option from full backups with rsync to see how much difference it makes.

Tar is definitely worth a shot if its shortcomings for incremental
backups are acceptable and network bandwidth isn't an issue.

Removing rsync ignore-times may also be an option if the possible
reduction in data integrity is acceptable.

-Dave



Re: [BackupPC-users] very slow backup speed

2007-03-29 Thread David Rees
On 3/29/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> I didn't blame anybody, just said BackupPC is working slow and it was
> working slow, very slow indeed. The checksum-seeds option seems to be
> doing its trick though.

How long are full and incremental backups taking now?

> I am thankful to people who wrote suggestions here in this forum, I
> tried all of those suggestions one by one. I think that shows that I
> took them seriously even though some of them looked like long shots.
> Eventually one of the suggestions seems to be working.

You only tried two things: mounting the backup partition async and
turning on checksum-seeds. Are you going to try the other two? (Add
memory and try tar instead of rsync.)

-Dave



Re: [BackupPC-users] Poor backup performance

2007-03-28 Thread David Rees
On 3/28/07, John T. Yocum <[EMAIL PROTECTED]> wrote:
> Here is the iostat output, the server is doing two full backups at the
> moment, along with a nightly. Server specs: P4 3.2Ghz, 512MB RAM, 300GB
> SATA drive.
>
> [EMAIL PROTECTED] ~]# iostat
> Linux 2.6.9-42.0.10.ELsmp (backup2.fluidhosting.com)03/28/2007
>
> avg-cpu:  %user   %nice%sys %iowait   %idle
> 2.980.000.76   50.02   46.24
>
> Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sda  69.48   798.71   428.68 1556126098  835203048
> sda1  0.00 0.00 0.00   1556200
> sda2  0.17 2.68 0.5752197231109536
> sda3  0.00 0.00 0.00   1073320
> sda4  0.00 0.00 0.00  2  0
> sda5  0.23 3.04 1.1459141462215408
> sda6111.33   792.87   426.97 1544752254  831876168
>
> On that server, we're seeing backup speeds as low as .11MB/s which are
> backups of several thousand files. Some backups are over 1MB/s, but
> those are servers with only 100ish files.

Was this a raid 5 or raid 1 system with those SATA drives?

I am concerned about the fact that it's reporting 50% of its time in
system instead of near 100% in wait, which indicates some other sort of
bottleneck. If you run iostat with a 5-second interval over a few
intervals, does the avg-cpu look similar?

Also do you have the dir_index option on? (check by running `tune2fs
-l /dev/sda6`) Not sure how much of a difference that makes with
backuppc loads, but that does help significantly with metadata
operations on large directories.
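
If it isn't on, you can usually enable it and rebuild the indexes with
the filesystem unmounted:

tune2fs -O dir_index /dev/sda6
e2fsck -fD /dev/sda6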

It also looks like you are doing tar backups, have you tried rsync backups?

-Dave



Re: [BackupPC-users] very slow backup speed

2007-03-28 Thread David Rees
On 3/28/07, David Rees <[EMAIL PROTECTED]> wrote:
> Here's a sample from my backuppc server which has 3 disks while
> backups are being run. The backuppc partition uses 2 disks in RAID1
> exactly the same as yours. The other disk is the system disk (also
> 7200rpm ATA).
> Hardware/OS: AthlonXP 2000+ 1GB RAM, Fedora Core 6. A max of 3 backups
> will run at a time on this machine. This same data can be generated
> using sar -b.
>
> 23:00:02  tps  rtps  wtps   bread/s   bwrtn/s
> 23:10:02   639.17219.11420.06   5440.01   8864.10
> 23:20:03   757.28215.81541.46   5422.98  11696.18
> 23:30:04   497.98211.21286.78   3367.85   5903.83
> 23:40:04   984.20322.85661.35   7436.95  14135.63
> 23:50:03   619.68391.74227.93  10355.83   5160.97
>
> You can see how much higher this system's IO/s seems to be than yours
> from the only performance data you sent us.

I should note that those tps numbers don't match up with the numbers I
get from iostat and are misleading. I think they are the sum of IO on
all devices which inflates the numbers. Watching iostat during a
backup gets me about a max of 100 tps / disk but seems to vary between
50-100.

-Dave



Re: [BackupPC-users] very slow backup speed

2007-03-28 Thread David Rees
On 3/27/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> Yes it is terrible. I get much better performance if I do the tar option
> with the same files. As a matter of fact I was using a smaller script
> for taking backups earlier. (which I still use on some servers) and
> transfer files over NFS. It works way faster, especially incremental
> backups take 5-10 minutes compared to 400 minutes with backuppc

So have you tried using tar for backups using BackupPC?

I've said it before, there is something wrong with your setup because
on my server incrementals also only take 5-10 minutes as you'd expect.
And my server isn't that different than yours disk-wise, just RAID1
instead of no raid, it's even the exact same disk.

On 3/27/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> David Rees wrote:
> That is true, full backups take about 500-600 minutes and incrementals
> take 200-300minutes.

Is that from all 4 clients being backed up? Are all similar in having
~150k files and ~2GB data to back up?

> > Your transfer rates are at least 5x slower than his and 10x slower
> > than what most people here are getting. There is something wrong
> > specifically with your setup. I suspect that if your backups were
> > going 5x faster you'd be happy and 10x faster you'd be very happy.
>
> I would be happy if backups were made with the same speed as other
> backup programs can do.

What other backup programs are you comparing BackupPC to?

> I am using rsync to sync 2GB of files from one machine to another and
> the number of files is about 100k and the operation takes 30 seconds to 1
> minute on average.

Is this using the same backup server and client in your earlier data example?

> > Are you using the --checksum-seed option? That will significantly
> > speed up (the 3rd and later) backups as well.
>
> No, it requires a patched rsync.

You must have a very old version of rsync on your machine.
--checksum-seed has been available for quite some time in the official
rsync tree. It significantly improves rsync performance, I recommend
that you installed a recent rsync if possible. You need rsync 2.6.3
(released 30 Sep 2004) or later.

> > Are you also sure the backup server isn't swapping as well? Can you
> > get us some good logs from vmstat or similar during a backup?
>
> I can tell that it is not swapping because the disk where the swap is
> stays idle while taking backups. ad0 is the disk with the swap. If there
> were such a problem it would be reading like crazy from swap. I have given
> this information before.

The information you provided before was just about impossible to read.
Please provide additional data in a readable format.

> The machine is working fine. I was using another backup program which
> was working way faster to backup the same machines. So I dont think that
> I have a hardware problem or such.

OK, so it is using the same machines?

> Earlier I sent the disk stats when the backup was working and I have
> pointed out that the disk with swap is not working at all while taking
> backup. We can easily conclude that swapping is not an issue.

Can you provide stats during the entire backup?

> > On all my production machines I make sure sysstat is running and then
> > run `sar -A` the minute before midnight and pipe the output to email
> > for analysis should the need arise.
>
> sar -A ?

That provides detailed system statistics collected at specified intervals.
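
A minimal crontab entry for that (mail address made up):

59 23 * * * /usr/bin/sar -A 2>&1 | mail -s "sar -A report" admin@example.com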

Here's a sample from my backuppc server which has 3 disks while
backups are being run. The backuppc partition uses 2 disks in RAID1
exactly the same as yours. The other disk is the system disk (also
7200rpm ATA).
Hardware/OS: AthlonXP 2000+ 1GB RAM, Fedora Core 6. A max of 3 backups
will run at a time on this machine. This same data can be generated
using sar -b.

23:00:02  tps  rtps  wtps   bread/s   bwrtn/s
23:10:02   639.17219.11420.06   5440.01   8864.10
23:20:03   757.28215.81541.46   5422.98  11696.18
23:30:04   497.98211.21286.78   3367.85   5903.83
23:40:04   984.20322.85661.35   7436.95  14135.63
23:50:03   619.68391.74227.93  10355.83   5160.97

You can see how much higher this system's IO/s seems to be than yours
from the only performance data you sent us.

On 3/28/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> Adam Goryachev wrote:
> > anyway, could you please elaborate on the method that your 'other'
> > backup solution used, ie, if your other solution used rsync, and you are
> > using rsync with backuppc, then that might be helpful to know. If you
> > used tar before, and now use rsync, that would obviously make a difference.
>
> I have already said it earlier, the other backup was taking backup over

Re: [BackupPC-users] very slow backup speed

2007-03-27 Thread David Rees
On 3/27/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> David Rees wrote:
> > Evren, I didn't see you mention a wall-clock time for your
> > backups. I want to know how many files are in a single backup, how
> > much data is in that backup and how long it takes to perform that
> > backup.
>
> I sent the status of the backups earlier today to mailing list?

Still missing wall-clock time. Though we can extrapolate that it may
be taking over 6 hours for a full backup of the system below. Is that
correct?

> This is from 1 machine.
>
>                    Totals                  Existing Files      New Files
> Backup#  Type   #Files  Size/MB  MB/sec   #Files  Size/MB   #Files  Size/MB
>     112  full   149290   1957.8    0.10   149251   1937.9     4601     20.7

> I don't know if the problem is hard links. This is not a FreeBSD or Linux
> problem. It exists on both. Just that people using ultra fast 5 disk
> raid 5 setups are seeing 2mbytes/sec transfer rate means that backuppc
> is very very inefficient.

As mentioned earlier, RAID 5 is horrible for random small read/write
performance. It is often worse than a single disk.

But still, I have a client which has 1.5 million files and 80-100GB of
data. A full backup takes 4-6 hours which is reasonable. Full backups
average 4.5-5MB/s.

> For example this guy is using Linux (problem is OS independent)
> http://forum.psoft.net/showpost.php?p=107808&postcount=16

Your transfer rates are at least 5x slower than his and 10x slower
than what most people here are getting. There is something wrong
specifically with your setup. I suspect that if your backups were
going 5x faster you'd be happy and 10x faster you'd be very happy.

> On Linux with raid setup with async io etc. people are getting slightly
> better results. I think ufs2 is just fine. I wonder if there is
> something in my explanations...The problem is backuppc. People are
> getting ~2mbytes/sec(was it 2 or 5?) speed with raid5 and 5 drives,
> using Linux. It is a miracle that backup even finishes in 24 hours using
> a standart ide drive.

With ext2 the default is async IO. With ext3 (the default now) the
default is data=ordered, which is similar to BSD's soft updates.

> This is like the 'Contact' movie. The sphere took 30 seconds to download
> but there were 18 hours of recording. If what you said was true and
> backuppc would be backing up very small amount of files and skipping
> most, then backups would probably take less time than 2-4 hours each.

Using the rsync transfer method, it does require at least a stat of
every single file to be backed up. For 150k files and 2GB of data,
you'd really expect this to be done within an hour with nearly any
machine.
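
A quick sanity check of raw stat speed on the client is something like
this (path assumed):

time ls -laR /home > /dev/null

That forces a stat of everything under the tree; if that alone takes
hours, the problem is on the client side.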

Are you using the --checksum-seed option? That will significantly
speed up (the 3rd and later) backups as well.

Are you also sure the backup server isn't swapping as well? Can you
get us some good logs from vmstat or similar during a backup?

I also suspect that there is something else in this case slowing your
machine down. Unless you give us the information to help track it
down, we will not be able to figure it out. I feel like I am pulling
teeth.

-Dave



Re: [BackupPC-users] very slow backup speed

2007-03-27 Thread David Rees
On 3/27/07, David Rees <[EMAIL PROTECTED]> wrote:
> Can you try mounting the backup partition async so we can see if it
> really is read performance or write performance that is killing backup
> performance?
>
> I must wonder if ufs2 is really bad at storing inodes on disk...

I went and did some research on ufs filesystem performance and found this paper:
http://www.bsdcan.org/2006/papers/FilesystemPerformance.pdf

There appears to be 4 different mount options related to data integrity:

sync: slowest, all writes synchronous
noasync: (default?) data asynchronous, metadata synchronous
soft updates: dependency ordering of writes to ensure on-disk consistency
async: fastest, all writes asynchronous

noasync seems to be the default. Evren, can you confirm that your
filesystem is mounted this way?

On Linux using ext3, the default mount option is "data=ordered" which
should be similar to soft updates in terms of performance. If you can
mount your backup partition in "soft updates" mode that should perform
best, much better than the default noasync mode for the type of writes
BackupPC does.
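
On FreeBSD soft updates are toggled with tunefs on the unmounted
filesystem, something like this (device name assumed):

umount /backup
tunefs -n enable /dev/ad2s1e
mount /backup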

I wouldn't recommend mounting a partition async permanently because of
the risk to file system integrity.

-Dave



Re: [BackupPC-users] very slow backup speed

2007-03-27 Thread David Rees
On 3/27/07, Les Mikesell <[EMAIL PROTECTED]> wrote:
> Evren Yurtesen wrote:
>
> >> What is wall clock time for a run and is it
> >> reasonable for having to read through both the client and server copies?
> >
> > I am using rsync but the problem is that it still has to go through a
> > lot of hard links to figure out if files should be backed up or not.

Evren, I didn't see you mention a wall-clock time for your
backups. I want to know how many files are in a single backup, how
much data is in that backup and how long it takes to perform that
backup.

>  From the perspective of the backup directories, it doesn't matter
> whether or not there are additional links to a file. It just looks at
> the directory entry to find the file it wants.  It may matter that the
> inodes and file contents end up splattered all over the place because
> they were written at different times, though.

Yep, Les is right here. Unless BSD handles hard links in some crazy manner.

> > If you check namesys.com benchmarks, you will see that they only tested
> > reiserfs against ext2/ext3/xfs/jfs and conveniently forgot to test
> > against ufs2.
> >
> > You can see in the end of the page that slight performance increase in
> > reiserfs is also bringing twice the cpu usage! (plus extX is faster in
> > certain operations even)
> > http://www.namesys.com/benchmarks.html

When your overall speed is limited by the speed of your disks and your
CPU spends all its time twiddling its thumbs waiting for disk, who
cares if CPU usage doubles and still spends 90% of its time waiting, as
long as it gets the job done faster?

To summarize Evren's setup:

FreeBSD, 250MB ram, CPU unknown, 1 7200rpm Seagate ATA disk
Filesystem: ufs2, sync, noatime
Pool is 17.08GB comprising 760528 files (avg file size ~22KB)
BackupPC reports backup speeds between 0.06 -> 0.22 MB/s
Total backup time per host: Unknown
CPU is 99% idle during backups
Disk shows ~75 IO/s during load and low transfer rate
Says even small backup w/small number of files is slow.

Can you try mounting the backup partition async so we can see if it
really is read performance or write performance that is killing backup
performance?

I would also highly recommend that you limit backups to 1 concurrent
backup at a time.

I must wonder if ufs2 is really bad at storing inodes on disk...

BTW, how does BackupPC calculate speed? I think it calculates backup
speed by reporting files transferred over time, so if you don't have
many files that change, won't BackupPC report a very low backup speed?

-Dave



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread David Rees
On 3/26/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> Let's hope this doesn't wrap around... as you can see load is in the
> 0.1-0.01 range.
>
>  1 users    Load  0.12  0.05  0.01    Mar 27 07:30
>
> Mem:KB    REAL            VIRTUAL                     VN PAGER   SWAP PAGER
>             Tot  Share       Tot   Share     Free     in   out   in   out
> Act       26020   3592    144912    6868    12384  count
> All      249784   5456   2327896   11800           pages

It wrapped pretty badly, but let me see if I'm interpreting this right
(I'm no BSD expert, either):

1. Your server has ~250MB of memory.
2. Load average during backups is only 0.1-0.01? Does BSD calculate
load average differently than Linux? Linux calculates load average by
looking at the number of runnable tasks (which includes tasks in
uninterruptible disk wait), so if you have a single process
waiting on disk IO you will have a load average of 1.
If BSD calculates the load average the same way, then that means your
server is not waiting on disk, but waiting for the clients.

What's the load like on the clients you are backing up?

-Dave



Re: [BackupPC-users] upgrade to 3.0

2007-03-26 Thread David Rees
On 3/20/07, Henrik Genssen <[EMAIL PROTECTED]> wrote:
> are there any issues upgrading from 2.1.2.pl1?

None that I know of. The upgrade process is pretty smooth. (though I
opted to convert to the new configuration file layout at the same time
which does take a bit of tweaking).

> is 3.0 yet apt-getable?

Don't know, I always install from source.

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread David Rees
Let's start at the beginning:

On 3/26/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> I am using backuppc but it is extremely slow. I narrowed it down to disk
> bottleneck. (ad2 being the backup disk). Also checked the archives of
> the mailing list and it is mentioned that this is happening because of
> too many hard links.
>
> Disks         ad0    ad2
> KB/t         4.00  25.50
> tps             1     75
> MB/s         0.00   1.87
> % busy          1     96

What OS are you running? What filesystem? What backup method
(ssh+rsync, rsyncd, smb, tar, etc)?

75 tps seems to be a bit slow for a single disk. Do you have vmstat,
iostat and/or top output while a backup is running?
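
Something like this is what I'm after (Linux flags shown; the BSD
equivalents differ slightly):

  vmstat 1         # watch the r/b columns and the wa CPU state
  iostat -x 1      # per-device tps and utilization (sysstat package)
  top              # on Linux, press '1' to show each CPU separately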

> But I couldn't find any solution to this. Is there a way to get this
> faster without changing to faster disks? I guess I could put 2 disks
> in a mirror or something, but it is stupid to waste the space I gain
> by backuppc's algorithm by using multiple disks to get decent
> performance :)

A mirror will only help speed up random reads at best. This usually
isn't a problem for actual backups, but will help speed up the nightly
maintenance runs.

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread David Rees
On 3/26/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> > And, you could consider buying a faster drive, or one with a larger
> > buffer.  Some IDE drives have pathetically small buffers and slow
> > rotation rates.  That makes for a greater need for seeking, and worse
> > seek performance.
>
> Well this is a seagate barracuda 7200rpm drive with 8mb cache ST3250824A
> http://www.seagate.com/support/disc/manuals/ata/100389997c.pdf

Same drive as I'm using, except mine are in RAID1 which doubles random
read performance.

> I read your posts about wifi etc. on forum. The processor is not the
> problem however adding memory probably might help bufferwise. I think
> this idea can actually work.:) thanks! I am seeing swapping problems but
> the disk the swap is on is almost idle. The backup drive is working all
> the time.

Please show us some more real data showing CPU utilization while a
backup is running. Please also give us the real specs of the machine
and what other jobs the machine performs.

> I have to say that slow performance with BackupPC is a known problem. I
> have heard it from several other people who are using BackupPC and it is
> the #1 reason for changing to another backup program from what I hear.
>
> Things must improve on this area.

There are plenty of ways to speed up BackupPC. It really isn't slow in
my experience.

But you must tell us what you are actually doing and what is going on
with your server for us to help instead of repeatedly saying "it's
slow, speed it up".

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread David Rees
On 3/26/07, Bernhard Ott <[EMAIL PROTECTED]> wrote:
> > It is true that BackupPC is great, however backuppc is slow because it
> > is trying to make backup of a single instance of each file to save
> > space. Now we are wasting (perhaps even more?) space to make it fast
> > when we do raid1.
>
> You can't be serious about that: let's say you have a handful of
> workstations with full backups of 200GB each and perform backups for a
> couple of weeks - in my case, after a month: 1.4 TB for the fulls and
> 179GB for the incrementals. After pooling and compression: 203 (!) GB
> TOTAL. Xfer time for a 130GB full: 50min. How fast are your tapes?
> But if you prefer changing tapes (and spending a lot more money on the
> drives) - go ahead ... so much for "wasting space" ;-)

No kidding! My backuppc stats are like this:

18 hosts
76 full backups of total size 748.09GB (prior to pooling and compression)
113 incr backups of total size 134.11GB (prior to pooling and compression)
Pool is 135.07GB comprising 2477803 files and 4369 directories

6.5:1 compression ratio is pretty good, I think.

Athlon XP 2000+ 1GB RAM, software RAID 1 w/ 2 ST3250824A (7200rpm,
ATA, 8MB cache). The machine was just built from leftover parts.
Running on Fedora Core 6.

I love BackupPC. :-)

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread David Rees
On 3/26/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> John Pettitt wrote:
> > The basic problem is backuppc is using the file system as a database -
> > specifically using the hard link capability to store multiple references
> > to an object and the link count to manage garbage collection.   Many
> > (all?) filesystems seem to get slow when you get into the millions of
> > files with thousands of links range.   Changing the way it works (say to
> > use a real database) looks like a very non trivial task.   Adding disk
> > spindles will help (particularly if you have multiple backups going at
> > once) but in the end it's still not going to be blazingly fast.
>
> Well, so there are no plans to fix this problem? I found forum threads
> that in certain cases backups take over 24hours! Goodbye to daily
> incremental backups :)

Well, I just saw a proposal on linux-kernel which addresses inode
allocation performance issues on ext3/4 by preallocating contiguous
blocks of inodes for directories. I suspect this would help reduce the
number of seeks required when performing backups.

If there is another filesystem which does this I imagine it would
perform better than ext3.

> Do you know any alternatives to backuppc with web gui? which probably
> works faster? :P

BackupPC is the best. Most backups complete in a reasonable time;
those that don't are either very large (lots of bandwidth) or have
lots of files. My backup server is a simple Athlon XP 2000+ with a
RAID1 consisting of 2 Seagate 250GB 7200rpm ATA drives.

More spindles and/or disks with faster seek times is the way to go.

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Poor backup performance

2007-03-22 Thread David Rees
On 3/22/07, John Pettitt <[EMAIL PROTECTED]> wrote:
> Have you checked that the 3ware actually has cache enabled - it has a
> habit of disabling it if the battery backup is bad or missing and it
> will make a *huge* difference 

Just make sure that if you enable the cache you actually have battery
backup, otherwise you run the risk of corruption if you lose power
unexpectedly.

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Poor backup performance

2007-03-22 Thread David Rees
On 3/22/07, John T. Yocum <[EMAIL PROTECTED]> wrote:
> On our backup servers, they are all showing a very high wait during
> backups. Here's a screenshot from one of them
> http://www.publicmx.com/fh/backup2.jpg. At the time it was doing two
> backups, and a nightly.
>
> Any advice on improving performance, would be much appreciated.

It's quite easy to see how the server is doing nothing except waiting
for disk IO to complete. If it were waiting for the clients you would
have idle time. If it were busy compressing data you would see high
CPU utilization.

Especially when nightlies are running you will see this as BackupPC
scans the backup archive which results in a huge number of seeks.

If you run iostat you can confirm that "tps" is maxed out on your
backup device while the block read/write rate remains low.
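
For example (sysstat iostat on Linux; the tell is tps pegged while the
kB read/write columns stay low):

  iostat 2         # tps, kB_read/s, kB_wrtn/s per device, every 2s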

The primary issue is the huge number of hardlinks BackupPC creates and
these get scattered all over the disk because of the way your average
file system lays them out on disk. I don't know of any file systems
which perform significantly better than ext3. xfs, jfs and reiserfs
all seem to suffer from the same issue under BackupPC load but I
haven't done any back to back benchmarks myself.

The best way is to throw disk spindles at it. RAID 10 works great as a
4-disk array has the potential to speed up random reads by 4x and
random writes by 2x.

While RAID 5 will typically have decent random read performance, it
usually has horrible random small write performance, sometimes worse
than a single drive! Having a battery-backed RAID controller here will
help write performance immensely.

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Hardware choices for a BackupPC server

2007-03-14 Thread David Rees
On 3/14/07, John Pettitt <[EMAIL PROTECTED]> wrote:
>
>  It's time to build a new server.  My old one (a re-purposed Celeron D
> 2.9Ghz / 768M FreeBSD box with a a 1.5 TB raid on a Highpoint card) has hit
> a wall in both performance and capacity. gstat on FreeBSD shows me that
> the Highpoint raid array is the main bottleneck (partly because it's in a
> regular PCI slot and partly because it's really software raid with crappy
> drivers) and CPU is a close second. I'm going to build a new box with
> SATA disks and a better raid card.

Yep, a combination of CPU and IO load and capacity is what limits
backup speed of the typical BackupPC installation.

>  So my question: Has anybody does any actual benchmarks on BackupPC servers?

Hmm, nothing official here.

>  Which OS & filesystem is best?  (I'm leaning toward Ubuntu and ReiserFS)

OS doesn't matter. Pick whatever you are familiar with. As far as
filesystems go, ReiserFS 3 is good because it stores small files very
efficiently thanks to its tail-packing feature. But I usually stick
with ext3 because of its reliability. xfs and jfs are generally used
when volumes start getting very large, and xfs especially usually
handles large file IO very well, but BackupPC's disk workload usually
involves lots of small IO so it may not help much there.

>  RAID cards that work well?  Allow for on the fly expansion?

3ware cards seem to be a favorite for SATA setups. I am also a big fan
of software RAID on Linux.

> RAID mode?  5?  6?  10?  500GB drives seem to be the sweet spot in the
> price curve right now - I'd like to get 1.5TB after RAID, so 6 drives in
> RAID 10.

The RAID mode you use will be determined by storage, performance and
redundancy requirements.

Multiple disk failures are common enough in large arrays, unless you
are very diligent about scanning/monitoring disks for warning signs,
that it's often a very good idea to use RAID 6 or RAID 10.

RAID 5/6 will not perform well at all for small writes because of the
additional reads required to write parity.

Also keep in mind that since most disk IO on a BackupPC machine is
seek limited, the more spindles you can get working in parallel the
better - this usually means RAID 10. But if storage capacity
requirements outweigh performance I would go with RAID 6 or 5.

I would do your own benchmarks with the various RAID setups to find
what works best for you. Make sure you let us know what you find. ;-)

>  I'm leaning towards a core 2 duo box with 2Gb of ram.

That should be plenty of CPU power depending on how many clients you
intend to back up in parallel.

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Update to 3.0

2007-02-21 Thread David Rees
On 2/20/07, Carl Wilhelm Soderstrom <[EMAIL PROTECTED]> wrote:
> On 02/20 12:39 , Nils Breunese (Lemonbit) wrote:
> > All my clients are servers with fast connections. I'll take
> > MaxBackups down to 1 then.
>
> I haven't done any thorough empirical testing on this, but I suspect that
> MaxBackups=2 would give you higher throughput overall (tho not
> individually). the reason is that the processor and disk are going to have
> idle moments (unless they're very slow, or memory is very limited), where
> they're waiting on data from the remote end. So if you have two jobs
> running, you'll be more likely to always have a job ready to use resources
> as they become available.
>
> really, you'll have to test with your specific environment; but that's just
> my experience and thinking.

In my experience, setting MaxBackups to 1-2 normally results in the
best behavior. When backing up clients across a WAN, more than 2 often
simply saturates the network. When backing up local clients, you only
want one more backup running than you have CPUs and disks. The more
CPUs, disk spindles and network bandwidth you have, the more you can
bump up the number of concurrent backups without putting an undue load
on the backup server or the network.

As Carl mentions, monitoring system utilization using top & sar is key
to maximizing throughput.

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] CPU Load statistics (Was: Re: OK, how about changing the server's backuppc process niceness?)

2007-01-09 Thread David Rees
On 1/9/07, Timothy J. Massey <[EMAIL PROTECTED]> wrote:
> So, it seems to me that the culprit is rsync.  I think the reason my
> production backup servers are usually at 100% CPU utilization is that
> they're backing up reasonably high-performance file servers that have
> enough CPU power to max out my backup server.  It will be interesting to
> see how much CPU load is put on the target of Machine 1:  I will check
> tonight.
>
> I guess the best way to improve this would be to avoid rsync...
> However, I like rsyncd.  I never realized how heavy the overhead is with
> rsync, though.  Unless I'm missing something?

As Lee mentioned, rsync is fairly CPU intensive on both the server and
client side. rsync's primary advantage is reduced network utilization:
it avoids transferring files which already exist on the server.

If the data being backed up changes frequently or you have a fast
network, it can be beneficial to use tar or smb backup methods instead
of rsync.

Naturally, which one performs best for your workload and
network/hardware capabilities is something you will have to test.

BTW, if you are using rsync, be sure to have the checksum-seed option
turned on. That will help reduce IO/CPU costs on both the client and
server after the 3rd full backup.
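
A sketch of what that looks like in config.pl (or a per-host
override); 32761 is the fixed seed value the BackupPC docs suggest,
and it needs to go in the restore args too:

  push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
  push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';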

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] CPU Load statistics (Was: Re: OK, how about changing the server's backuppc process niceness?)

2007-01-08 Thread David Rees
On 1/8/07, Timothy J. Massey <[EMAIL PROTECTED]> wrote:
> top - 21:09:02 up  3:55,  2 users,  load average: 1.15, 1.12, 1.06
> Tasks:  45 total,   2 running,  42 sleeping,   0 stopped,   1 zombie
> Cpu(s): 82.1% us, 11.3% sy,  0.0% ni,  0.0% id,  0.3% wa,  2.7% hi,  3.7% si
> Mem:109068k total,   108408k used,  660k free,21416k buffers
> Swap:   385552k total,33992k used,   351560k free,24004k cached
>
>PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>   2032 backuppc  25   0 63328  41m 3160 R 97.7 39.3  82:45.61 BackupPC_dump
> 44 root  15   0 000 S  0.7  0.0   0:32.12 pdflush
>   2022 backuppc  15   0 59552  39m 3160 S  0.7 37.1   3:41.52 BackupPC_dump
>
> Turns out that this box only has 128MB of RAM!  That will be fixed...
> :)  Even so, there is minimal swapping, and it does not seem to be
> thrashing.

Yep, you're right, even another 64MB will help keep swap from being
used at all. It sure looks like you are completely CPU bound in the
BackupPC_dump process; a faster CPU may be all you can do to speed up
backups, unless some profiling is done of the BackupPC_dump process to
figure out where the hot spots are and what (if anything) can be done
to improve performance there.

>  > If logging in via ssh is sluggish, that does indicate memory pressure
>  > to me... If you log in twice in a row is it still sluggish the second
>  > time or does it improve?
>
> Actually, the sluggishness comes from I/O competition, I think, not RAM
> or even CPU usage...

I would have expected to see higher I/O wait times if the sluggishness
was coming completely from I/O competition, but if your system disk is
the same as the backuppc data partition then that sure doesn't help.

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] OK, how about changing the server's backuppc process niceness?

2007-01-08 Thread David Rees
On 1/2/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> I routinely hit 100% CPU utilization on the Via C3 1GHz Mini-ITX systems I
> use as backup servers.  I will grant you that the C3 is not the most
> efficient processor, but I am definitely CPU-limited.  I too have 512MB RAM,
> but the machines are not swapping.  And that's with 1 backup at a time
> running (there is only one system to back up).

Do you have a screenshot from top and a few lines of `vmstat 3`
output? You could also just type sar to look at the average system
utilization in 10 minute intervals since midnight if it's configured
on your system. That would help verify that CPU is indeed limiting
your throughput.

> The backup server runs fine, but e.g. using the web GUI or logging in via
> SSH is *sluggish*.  These are dedicated backup servers so I don't care, but
> if you made the mistake of putting BackupPC on your main file server, the
> file server would be unusable in this state.

If logging in via ssh is sluggish, that does indicate memory pressure
to me... If you log in twice in a row is it still sluggish the second
time or does it improve?

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] OK, how about changing the server's backuppc process niceness?

2007-01-03 Thread David Rees
On 1/2/07, Jason Hughes <[EMAIL PROTECTED]> wrote:
> Good recommendations, Holger.  I would add that "nice"ing a process only
> changes its scheduling priority, but does not modify in any way its hard
> disk activity or DMA priority, so until the original poster understands
> what exactly makes the server slow, he's shooting in the dark.  A busy
> hard drive usually makes a system feel slower than a busy CPU process,
> because hard disk activity requires a 6-10ms seek minimum, plus
> streaming and unloading to vram, depending on what other processes are
> doing.

Actually, if you are using the cfq IO scheduler in recent Linux
kernels, renicing a process prioritizes its IO as well. :-)
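
For example (<pid> is a placeholder for the BackupPC_dump process ID;
ionice comes from util-linux and only affects cfq):

  renice 19 -p <pid>    # lowest CPU priority; lower IO priority on cfq
  ionice -c3 -p <pid>   # or set the IO class directly (idle class)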

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Server freezes when backuping a lot of data

2006-12-29 Thread David Rees
On 12/29/06, Michal Wokoun <[EMAIL PROTECTED]> wrote:
> But when I run a full backup on a workstation with about 25 GB of user
> data in small files (mostly word and excel documents), the fileserver
> freezes after circa half an hour - leds on the keyboard blinking and I
> have to push the hard reset. There are no error messages in
> /var/log/messages. Raid arrays are not clean after reset, but they sync
> very quickly during boot and also XFS (file shares) and EXT3 (system)
> usually recover. The problem is probably not the memory, because the
> fileserver is about 14 days old and I ran memtest86+ without errors
> prior to installing the OS. I use rsyncd, but got the same problem with
> the samba transfer method. I also copied 120 GB of data in large files
> to the fileserver smoothly through samba.

This isn't a BackupPC problem if the machine is hard locking; it's
either a kernel or a hardware problem.

How long did you run memtest for - was it at least 24 hours? In
addition to memtest, I also recommend running mprime for 24 hours (run
one instance for each core on your system), which will ensure that
your CPUs can handle high heat/load situations. I have had systems
which ran memtest for days without error, but mprime generated an
error after ~24 hours, which finally determined the CPU was bad.

To help debug the system lockup, are you able to see anything on the
console? You may need to connect a serial console to debug further as
often system lockups don't log anything to /var/log/messages but they
will show up on the console.

Also, if you are running vanilla kernels, I would make sure you are
running 2.6.19.1.

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Migrating to bigger storage

2006-12-21 Thread David Rees
On 12/20/06, John Pettitt <[EMAIL PROTECTED]> wrote:
> I'm about to migrate my BackupPC partition to a new raid controller
> (more space and more spindles) - my current thinking is to use
> dump/restore - has anybody done this - what issues did you encounter?

I've used tar over ssh, which worked well; you could also use tar over
netcat, but I haven't tried that.

Not as fast as dd, but not bad.
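
Roughly what I did, assuming the pool lives in /var/lib/backuppc on
both machines (paths are examples); keeping the whole tree in a single
tar stream is what preserves the hard links:

  cd /var/lib/backuppc
  tar cf - . | ssh newhost 'cd /var/lib/backuppc && tar xpf -'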

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] tar versus rsync on internal network

2006-11-13 Thread David Rees
On 11/10/06, James Ward <[EMAIL PROTECTED]> wrote:
> I have ~160 servers connected on a high speed internal network which
> I use to do backups.  Additionally I have ~50 remote servers which I
> back up over the external network.  It's taking  about a week to make
> the rounds of all the systems.  Would it make more sense to use tar
> on the internal network backups in hopes of speeding things up?

tar vs rsync is one of those things that just depends... It primarily
depends on the speed of your network and how many files are changing.
The best thing to do is to just try it on a few machines and see if
they back up faster or slower than with rsync...

-Dave

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Upgrade to 3.0.0 beta

2006-11-08 Thread David Rees
On 11/8/06, David Rees <[EMAIL PROTECTED]> wrote:
> If I pull the latest code from CVS, is there anything special I need
> to do before using it to upgrade compared to the normal tarball?

Hey look, there appears to be a handy makeDist script. Let's see how
that works. :-)

-Dave

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Upgrade to 3.0.0 beta

2006-11-08 Thread David Rees
On 11/8/06, Craig Barratt <[EMAIL PROTECTED]> wrote:
> > Well, are there any known issues with the current beta? I would like
> > to start testing 3.0, but IIRC there was some issues with the first
> > beta so I was waiting for the 2nd one...
>
> There are no known significant issues with the current beta.

Awesome, I'll upgrade a few of my smaller installations today and give
it a whirl.

If I pull the latest code from CVS, is there anything special I need
to do before using it to upgrade compared to the normal tarball?

-Dave

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Upgrade to 3.0.0 beta

2006-11-07 Thread David Rees
On 11/7/06, Craig Barratt <[EMAIL PROTECTED]> wrote:
> I will do one more 3.0.0 beta by the end of this month.
> That should be very close to the final 3.0.0 release.
>
> Even though the 3.0.0 beta releases are quite stable, given the
> wide deployment of BackupPC I wanted to have a conservative beta
> cycle.

I personally would like to see more frequent beta releases that
quickly address any known issues.

Well, are there any known issues with the current beta? I would like
to start testing 3.0, but IIRC there were some issues with the first
beta so I was waiting for the 2nd one...

-Dave

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] backing something up once a month

2006-10-16 Thread David Rees
On 10/16/06, James Ward <[EMAIL PROTECTED]> wrote:
> I have two ~180G NFS filesystems I'm backing up, but they take a
> long time.  My boss says they only need to be backed up once a
> month.  What's the easiest way to get them to schedule only once
> every four weeks?

Setting the FullPeriod to 30 days and disabling incrementals for those
hosts would be very easy.
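
A per-host config sketch (numbers are illustrative): fulls come due
every ~30 days, and IncrPeriod is pushed out past FullPeriod so an
incremental never comes due first:

  $Conf{FullPeriod} = 29.5;
  $Conf{IncrPeriod} = 60;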

-Dave

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Moving the pool and changing the pools filesystem

2006-10-03 Thread David Rees
On 10/2/06, Steffen Heil <[EMAIL PROTECTED]> wrote:
> My current pool is on a 80 GB raid1 lvm-backed device using ext3.
> Now one of the drives failed and the pool is 82% full.
>
> So I got 2 new 200 GB drives, configured them with raid1 and lvm using xfs.
>
> So, how do I get the pool over there?
>
> cp -a has already been running for 3h and got 12GB so far... (is THAT normal?)
> rsync -avH took very long and then copied only a few kb/sec.
>
> I thought about using partimage/dd, but that would force me to use ext3
> again.
>
> What experience did you have using XFS vs ext3?
> Is there a better way to move the pool?

Can you elaborate more on your RAID/lvm setup?

As you have found, cp -a and rsync are very slow when copying lots of
small files and hard links. The fastest way to copy the partition is
to copy the raw partition using dd, then grow the partition and then
grow the filesystem. I'm not that familiar with lvm, but assuming you
have lvm over your RAID partition, you would do this:

1. Set up the new drives w/RAID + lvm.
2. Copy the old lvm partition to the new lvm partition using dd. You
will copy 80GB of data over.
3. Grow the filesystem. Whether you are using xfs or ext3 should not
matter as long as the filesystem supports growing.

Of course, you'll have to have the data partition unmounted during all of this.
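
A rough sketch of steps 2-3, with placeholder volume names and an ext3
filesystem (for xfs you would mount it and run xfs_growfs instead):

  dd if=/dev/vg_old/backuppc of=/dev/vg_new/backuppc bs=1M
  lvextend -L 190G /dev/vg_new/backuppc   # grow the logical volume
  e2fsck -f /dev/vg_new/backuppc          # check before resizing
  resize2fs /dev/vg_new/backuppc          # grow ext3 to fill the LV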

-Dave

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys -- and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] process in Uninterruptible sleep

2006-08-25 Thread David Rees
On 8/25/06, zorg <[EMAIL PROTECTED]> wrote:
> a ps command gives me a lot of processes like this:
> backuppc  2450  0.0  0.7  11616  7324 ?  D  Aug24  0:00
> /usr/bin/perl /dev/fd/3//usr/share/backuppc/cgi-bin//index.cgi
>
> Don't really know what happened (no log, no error)

Sounds like your disks may be going bad. Have you checked the system logs?

-Dave

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup a single file

2006-06-26 Thread David Rees
On 6/22/06, Mark Wass <[EMAIL PROTECTED]> wrote:
>
>  What are the setting in the config.pl file I need to set if I want to
> backup a single file.
>
>  I'm using Rsync and I only want to backup the /etc/temp.ini file

Have a look at $Conf{BackupFilesOnly}

http://backuppc.sourceforge.net/faq/BackupPC.html#item_%24conf%7bbackupfilesonly%7d
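
A per-host sketch; the hash key is the share name, which I'm assuming
is '/' here:

  $Conf{BackupFilesOnly} = {
      '/' => ['/etc/temp.ini'],
  };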

-Dave



Re: [BackupPC-users] Status for running backup

2006-06-09 Thread David Rees
On 6/9/06, Bowie Bailey <[EMAIL PROTECTED]> wrote:
> Harry Mangalam wrote:
> > I agree this is a problem - what I do is run iftop on a server
> > console to see what's transferring from what hosts at what rates.  If
> > you keep it running, it will give you a pretty good idea what's
> > happening
>
> Not a bad idea.  I may try it the next time I have to run a manual
> backup.

You can also run strace on the backup process.
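
For example (<pid> is a placeholder for the BackupPC_dump process ID;
-f follows forked children):

  strace -f -p <pid> -e trace=open,read,write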

-Dave




Re: [BackupPC-users] backuppc bug tracker?

2006-06-07 Thread David Rees
On 6/7/06, Craig Barratt <[EMAIL PROTECTED]> wrote:
> There is a new version of File::RsyncP that is close to release
> that you could try.  I can email it to you if you want.

Out of curiosity, what's new in File::RsyncP?

-Dave




Re: [BackupPC-users] Kernel bug at buffer.c:603?

2006-05-16 Thread David Rees

On 5/9/06, Martin Giebat <[EMAIL PROTECTED]> wrote:

I'm running BackupPC version 2.1.2 on Debian Linux with kernel 2.4.26.
At least once a week BackupPC crashes and gives me an error like this:

kernel BUG  at buffer.c:603
invalid operand  
CPU:  0
EIP:  0010[] Not tainted
EFLAGS  00010202
Process  BackupPC_tarExt


That is either a bug in the kernel or flaky hardware, not a BackupPC
bug. Try posting on the Debian lists or opening a bug report for
Debian.

-Dave




Re: [BackupPC-users] Bad page state in process 'BackupPC_tarExt'

2006-05-16 Thread David Rees

On 5/16/06, Raf <[EMAIL PROTECTED]> wrote:

May 15 22:15:32 backup kernel: Bad page state in process 'BackupPC_tarExt'
May 15 22:15:32 backup kernel: page:c117fd20 flags:0x80010008
mapping: mapcount:0 count:2130706432 (Not tainted)
May 15 22:15:32 backup kernel: Trying to fix it up, but a reboot is needed
May 15 22:15:32 backup kernel: Backtrace:



That is a bug in the kernel, not a BackupPC bug. Try posting on the
Fedora Core lists or opening a bug report for Fedora Core.

-Dave




Re: [BackupPC-users] Compression

2006-05-11 Thread David Rees

On 5/11/06, Lee A. Connell <[EMAIL PROTECTED]> wrote:


I noticed while monitoring backuppc that it doesn't seem to compress on
the fly, is this true?  I am backing up 40GB's worth of data on a server
and as it is backing up I monitor the disk space usage on the mount
point and by looking at that information it doesn't seem like
compression is happening on the fly.

Does compression happen after the backup completes?


Whether or not compression is an option during the data transfer
depends on the transfer method. Currently, the only backup method
which supports compression over the network is ssh+rsync, and that
relies on ssh to do the compression. All others will send the data
over the network uncompressed.

BackupPC gets the data in uncompressed form, so it will compress the
data at that point if compression is enabled.
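
If you want ssh to compress the stream, one sketch is to add -C to the
client command (this is the stock RsyncClientCmd with only the -C flag
added):

  $Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';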

-Dave




Re: [BackupPC-users] Exclude filesystems by type for rsync backups?

2006-04-14 Thread David Rees
On 4/14/06, Matt <[EMAIL PROTECTED]> wrote:
> Have a look at rsync's "-x" option.

Not what I'm looking for; that stays on one partition, and each
machine has multiple partitions to back up.

-Dave




[BackupPC-users] Exclude filesystems by type for rsync backups?

2006-04-14 Thread David Rees
When doing rsync over ssh I'd like to be able to specify certain
filesystem types to exclude from backups. For example, I'd like to
exclude all nfs filesystems from being backed up; this way, when I
back up a group of machines mounting the same nfs share, the nfs
contents don't get backed up multiple times.

I can manually specify the nfs mounts using $Conf{BackupFilesExclude},
but this quickly gets redundant when I've got a dozen machines with a
dozen nfs mounts on them.
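
For what it's worth, the least-bad thing I've come up with is listing
the mount points once in the global config.pl; '*' applies to every
share, and the paths here are placeholders for the real nfs mounts:

  $Conf{BackupFilesExclude} = {
      '*' => ['/mnt/nfs1', '/mnt/nfs2'],
  };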

rsync doesn't seem to have any options to exclude filesystems by type.
Maybe there's something I'm missing?

The only thing I've found so far is discussed at the link below; I
guess it will work, but I was looking for something a bit more
elegant. :)

http://lists.samba.org/archive/rsync/2005-January/011380.html

-Dave




Re: [BackupPC-users] Upgrading to 2.1.2 from 2.1.1

2006-04-14 Thread David Rees
On 4/14/06, Ed Burgstaler <[EMAIL PROTECTED]> wrote:
> How can I painlessly upgrade or patch my current BackupPC version 2.1.1
> without screwing up my now working system?
> Thanks to all

Upgrading is easier than installing. Just make sure you specify the
same data directory and it should go very smoothly using your current
configuration.

It's not a bad idea to shut down backuppc before upgrading and then to
review the config.pl file afterwards. A backup of the old config file
will be left there for you to compare against.
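
A sketch of a typical source upgrade (version and init script paths
are examples):

  /etc/init.d/backuppc stop
  tar zxf BackupPC-2.1.2.tar.gz
  cd BackupPC-2.1.2
  perl configure.pl     # accept your existing install and data directories
  /etc/init.d/backuppc start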

-Dave




Re: [BackupPC-users] Breakage with rsync 2.6.7

2006-04-14 Thread David Rees
On 4/14/06, Vincent Ho <[EMAIL PROTECTED]> wrote:
> The -D option to rsync does what we want though, it means --devices on
> older rsyncs and --devices --specials on 2.6.7+.  I've changed our
> $Conf{RsyncArgs} to use -D rather than --devices and things have worked
> since, and suggest we do the same to the shipped version of the config
> file.

Good catch! It's had the rest of us baffled!
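
For reference, a sketch of the change in config.pl (flag list abridged
- keep whatever other flags you already have):

  $Conf{RsyncArgs} = [
      '--numeric-ids', '--perms', '--owner', '--group', '-D',
      '--links', '--hard-links', '--times', '--block-size=2048',
      '--recursive',
  ];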

-Dave



