On 07/02/2008, Jon Forrest <[EMAIL PROTECTED]> wrote:
> 1) The first is that the XferLOG somehow seems to contain the
> actual data being backed up! I'm showing a fragment from the
> beginning below (does the problem with ls matter?):
>
[...]
> Got remote protocol 1702057263
> Fatal error (bad
For each disk in a RAID5, the large-file write rate will increase while
the small-file write rate will decrease. The solution is not necessarily a
smaller stripe size, as many of the files will still be smaller than a 64k
stripe, so that is a minor improvement that may not compensate for the
slowdown.
On Mon, Mar 3, 2008 at 6:47 PM, Adam Goryachev
<[EMAIL PROTECTED]> wrote:
> So would it then make sense for a backuppc data partition to use a
> smaller stripe size since most writes will be very small?
Yes, if you're using RAID5. Doing some benchmarking would help find
the "sweet spot".
> > H
Christopher Derr wrote:
>
>>
> Right. The reason I mention a multiple-head/SAN situation is that
> people were recommending more than one backuppc server. If that's a
> memory/cpu issue, then multiple-heads would help. If it's a
> disk-thrashing issue, nothing is really going to help other th
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
David Rees wrote:
> On Mon, Mar 3, 2008 at 12:08 PM, Tomasz Chmielewski <[EMAIL PROTECTED]> wrote:
>> RAID5/6 have a performance penalty when compared to other RAID level
>> because every single write (or, write IO operation) requires four disk
>> IOs on two drives (two reads, and two writes), possibly harming other I
On Mon, Mar 3, 2008 at 5:01 PM, Christopher Derr <[EMAIL PROTECTED]> wrote:
> Is backuppc up to the task of backing up TBs of data? Or should I be
> looking at software that explicitly states "for the enterprise" like
> Symantec Backup Exec, Legato, or even open source Bacula? All of these
>
On Mon, Mar 3, 2008 at 2:54 PM, Adam Goryachev
<[EMAIL PROTECTED]> wrote:
> I was always led to believe that the more drives you had in an array, the
> faster it would get. I.e., comparing the same HDDs and controller, 3 HDDs
> in a RAID5 would be slower than 6 HDDs in a RAID5.
For mos
On Mon, Mar 3, 2008 at 12:08 PM, Tomasz Chmielewski <[EMAIL PROTECTED]> wrote:
> RAID5/6 have a performance penalty when compared to other RAID level
> because every single write (or, write IO operation) requires four disk
> IOs on two drives (two reads, and two writes), possibly harming other I
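The penalty described above is easy to put rough numbers on (a back-of-the-envelope sketch, not from the thread; the per-drive IOPS figure is an assumption):

```shell
# Effective small-write IOPS of a RAID5 array: each logical write costs
# ~4 disk IOs (read old data, read old parity, write both back), while
# the work is spread across all the spindles.
drives=6
iops_per_drive=100   # rough figure for a 7200 rpm SATA disk (assumption)
penalty=4            # RAID5 small-write penalty
echo $(( drives * iops_per_drive / penalty ))   # prints 150
```

So a 6-drive RAID5 of such disks delivers on the order of 150 random small writes per second, versus ~600 raw IOPS from the same spindles, which is why stripe size tuning only goes so far.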
>> Alternatively, I could go the more extensible route: multiple,
>> slightly less buff memory-wise backuppc servers, backing up to a
>> large SAN, even at the same time. For an environment where I may be
>> backing up data in the terabytes, would multiple backuppc head nodes
>> backing up t
Les Mikesell wrote:
> Christopher Derr wrote:
>> Alternatively, I could go the more extensible route: multiple, slightly
>> less buff memory-wise backuppc servers, backing up to a large SAN, even
>> at the same time. For an environment where I may
Adam Goryachev wrote:
>
> If you are writing small files and doing directory operations you
>> are back to waiting for the heads to seek.
> But since you have more heads, do you still have to wait for all of
> them, or is the one that you want to move more likely to be available
> to go and fetch
Christopher Derr wrote:
>
> So I can see it both ways I guess. I can back up 500 GB at a time from
> a 2 TB server for example, making good use of my 8 GB of memory for each
> full backup (4 full backups per week to get the entire 2 TB). This is
> if I have one backuppc server with onboard d
Les Mikesell wrote:
> If you are writing small files and doing directory operations you
> are back to waiting for the heads to seek.
But since you have more heads, do you still have to wait for all of
them, or is the one that you want to move more likely to be available
to go and fetch
I can transfer 5GB files with rsync 2.6.9 on Digital Unix, on Ubuntu
7.10, and on SCO OpenServer 4 with no problems. I'm using rsync -aH
only.
you can replace "--server --sender --numeric-ids --perms --owner --group -D
--links --times --block-size=2048 --recursive -D --ignore-times"
with
-aI --server -
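For reference, -a is rsync shorthand for -rlptgoD, which is why the long server-side flag list collapses so far. A minimal local sketch of the short form (placeholder paths, not from the thread):

```shell
# -a = -rlptgoD (recursive, links, perms, times, group, owner, devices);
# -H preserves hard links, -I (--ignore-times) forces full file checks.
mkdir -p /tmp/rsdemo/src
echo hello > /tmp/rsdemo/src/file.txt
rsync -aHI /tmp/rsdemo/src/ /tmp/rsdemo/dst/
cat /tmp/rsdemo/dst/file.txt   # prints "hello"
```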
> just take Les' advice, split up the backup job among a few servers
> instead of one BIG one.
I guess he meant splitting one big backup job into several smaller ones
(i.e., instead of backing up 1x350 GB, back up 7x50 GB, all of that to
one BackupPC server) - it is always a good idea for large backups.
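In BackupPC terms, one way to do that split (a sketch, not from the thread; the share paths are made-up examples) is to list several smaller shares for the host in its per-host config.pl instead of one giant share:

```perl
# Hypothetical per-host config.pl fragment: back up several smaller
# shares instead of one 350 GB share, so each rsync run builds a much
# smaller file list. Paths below are examples only.
$Conf{XferMethod}     = 'rsync';
$Conf{RsyncShareName} = [
    '/home/a-f',
    '/home/g-m',
    '/home/n-z',
];
```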
Adam Goryachev wrote:
>>
>>> The seek time for these may be the real killer since you drag the parity
>>> drive's head along for the ride.
>>>
>> The more drives you have in an array, the closer your seek time will tend to
>> approach worst-case, as the controller waits for the drive with
Carl Wilhelm Soderstrom wrote:
> On 03/03 02:29 , Les Mikesell wrote:
>
>> The seek time for these may be the real killer since you drag the parity
>> drive's head along for the ride.
>>
> The more drives you have in an array, the closer your seek time will
> tend to approach worst-case, as the controller waits for the drive with
On 03/03 02:29 , Les Mikesell wrote:
> > CPU load because of RAID5/6 computations on today hardware is marginal.
> > RAID5/6 have a performance penalty when compared to other RAID level
> > because every single write (or, write IO operation) requires four disk
> > IOs on two drives (two reads, an
This only seems to happen when it hits very large files; other rsync
modules back up fine on the same system. What can I do to troubleshoot
this? Files are 2GB to 10GB in size.
SERVER w/ BackupPC 2.x on Debian Sarge
--
Connected to ncic-01:873, remote version 29
Negotiated protocol vers
Rahul Awasthi wrote:
> Hi all,
> I am new to BackupPC. I have installed BackupPC on Red Hat Linux 4.1
> and tried to make a backup, and I got the following error message.
>
>
>
> full backup started for directory /
> Running: /usr/bin/ssh -q -x -l root 192.168.0.83 /usr/bin/rsync --server
> --sender -
Tomasz Chmielewski wrote:
>>
>> with 8GB of ram, I would give a rough estimate that you can have up to
>> 500,000,000 files in flight at one time as far as ram is concerned!
>> that includes ALL hosts that would be backed up simultaneously. I doubt
>> RAM will be an issue for you. Probably ha
On 03/03/2008, Nils Breunese (Lemonbit) <[EMAIL PROTECTED]> wrote:
> dan wrote:
>
> > -x is one filesystem, not forward X11. -X is forward X11
>
> On OS X -x is 'disable X11 forwarding' and -X is 'enable X11
> forwarding'. I was checking on OS X...
For ssh, -x means "disable X11 forwarding"
Fo
dan wrote:
> amen Les
> no need to have just 1 backup server!
>
> with 8GB of ram, I would give a rough estimate that you can have up to
> 500,000,000 files in flight at one time as far as ram is concerned!
> that includes ALL hosts that would be backed up simultaneously. I doubt
> RAM will
On 03/03/2008, dan <[EMAIL PROTECTED]> wrote:
> monthly fulls:
> > 00 02 * * * backuppc if [ `/usr/local/bin/date +%d -d tomorrow` = 02 ] ;
> then /usr/share/backuppc/bin/BackupPC_serverMesg backup
> host.domain.tld host.domain.tld backuppc 1
> >
>
> this will say 'if tomorrow is the second, run th
dan wrote:
> -x is one filesystem, not forward X11. -X is forward X11
On OS X -x is 'disable X11 forwarding' and -X is 'enable X11
forwarding'. I was checking on OS X...
Nils Breunese.
amen Les
no need to have just 1 backup server!
with 8GB of ram, I would give a rough estimate that you can have up to
500,000,000 files in flight at one time as far as ram is concerned! that
includes ALL hosts that would be backed up simultaneously. I doubt RAM will
be an issue for you. Probabl
you can put this in crontab:
daily incrementals:
> 00 02 * * * backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup
> host.domain.tld host.domain.tld backuppc 0
>
monthly fulls:
> 59 11 * * * backuppc if [ `/usr/local/bin/date +%d -d tomorrow` = 01 ] ;
> then /usr/share/backuppc/bin/Back
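The guard in that monthly line, unrolled for readability (a sketch assuming GNU date; host and paths are the examples used in the thread) — it runs the full backup only when tomorrow is the 1st, i.e. on the last day of the month:

```shell
# Run a full backup (level 1) only on the last day of the month,
# detected by checking whether tomorrow's day-of-month is 01.
if [ "$(date -d tomorrow +%d)" = "01" ]; then
    /usr/share/backuppc/bin/BackupPC_serverMesg backup \
        host.domain.tld host.domain.tld backuppc 1
fi
```

The same pattern with `= "02"` (as in the other message quoted above) would instead fire on the 1st of each month.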
On 02/29 05:06 , deblike wrote:
> I'm kinda stuck on this too; I need to run backups on a fixed schedule,
> let's say 02:00 AM every day, but I'm failing to see how to achieve
> this.
> Any clue?
put this in /etc/crontab:
00 02 * * * backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup
ho
On Tue, 26 Feb 2008 11:18:06 +0100
Hervé Richard <[EMAIL PROTECTED]> wrote:
> and is executed when the backup disk is online as normal, but the next
> full doesn't begin the next Monday; it begins after the number of days
> configured in config.pl.
I'm kinda stuck on this too, I need to run backups on a fixed
Christopher Derr wrote:
>
> I'm a new backuppc user for a college academic department. I have a
> moderately sized disk array (3 TB, RAID 5, Areca RAID) backing up the
> data on various servers.
I think that's the first time I've heard someone call 3 TB "moderately
sized", but I guess times
Greetings,
I'm a new backuppc user for a college academic department. I have a
moderately sized disk array (3 TB, RAID 5, Areca RAID) backing up the
data on various servers. The backup server has 8 GB of memory and is
currently running a backup of 350 GB user directories on a Windows 2003
se
Thanks, that fixed the display for me on CentOS 5.
John Rouillard wrote:
> On Fri, Feb 29, 2008 at 09:51:46AM +0100, Ludovic Drolez wrote:
>
>> On Thu, Feb 28, 2008 at 10:17:43AM -0700, Kimball Larsen wrote:
>>
>>> but the images appear as busted images on the status page.
>>>
>> (Wit
daniel wrote:
> Hey!
> Try to run that command without the -x argument in rsync. It works for me.
You're saying that not disabling X11 forwarding is helping you solve a
transfer problem? Sounds really odd to me, as you really don't need X11
forwarding to back up using rsync over SSH.
Nils Breunese
Hey!
Try to run that command without the -x argument in rsync. It works for me.
/usr/bin/ssh -q -l root 192.168.0.83 /usr/bin/rsync --server
> --sender --numeric-ids --perms --owner --group -D --links --hard-links
> --times --block-size=2048 --recursive --ignore-times . /