On Thu, Dec 4, 2008 at 7:24 PM, Nick Smith [EMAIL PROTECTED] wrote:
Did you ever get this resolved? I'm having the same problem; now all of
my backups are failing with the same errors you are getting. I'm using
2.6.9 protocol version 29. Ubuntu doesn't seem to have a newer version
available.
On Wed, Dec 3, 2008 at 9:01 AM, WebIntellects Technical Support
[EMAIL PROTECTED] wrote:
When trying to back up a server for the first time we are receiving the
following error; has anybody seen this, and does anyone know the fix?
Fatal error (bad version): sudo: symbol lookup error: sudo: undefined symbol:
On Wed, Nov 19, 2008 at 4:19 PM, dtktvu
[EMAIL PROTECTED] wrote:
It's still being discussed whether it's going to be open source or commercial.
Right now, it looks like it's going to be the free-to-download type...
If you used the rsync source code to create the rsync .NET version, I
think that you
On Wed, Nov 19, 2008 at 6:34 PM, dan [EMAIL PROTECTED] wrote:
the rsync algorithm is actually part of the GPL code released by Andrew
Tridgell.
Yes, it is, but you cannot copyright algorithms, and you can't
protect them from reverse engineering. You can patent an algorithm,
but I know of no
On Wed, Nov 19, 2008 at 7:11 PM, David Rees [EMAIL PROTECTED] wrote:
On Wed, Nov 19, 2008 at 6:34 PM, dan [EMAIL PROTECTED] wrote:
the rsync algorithm is actually part of the GPL code released by Andrew
Tridgell.
Yes, it is, but you cannot copyright algorithms, and you can't
protect them
On Thu, Oct 9, 2008 at 11:06 AM, Nick Smith [EMAIL PROTECTED] wrote:
I am using the volume shadow copy to back up large (12 GB+) SQL DBs.
After the first full backup, and things are changed/added to the DB,
is it going to pull down the entire DB again or will it just download
the changes
(if
On Tue, Apr 22, 2008 at 8:31 AM, Stephen Joyce [EMAIL PROTECTED] wrote:
For anything approaching 1TB or larger, consider xfs over ext3. Fsck'ing a
large ext3 filesystem takes ages.
Why would you ever need to fsck an ext3 volume? I suspect that a full
fsck of an xfs volume is just as slow as
On Mon, Mar 31, 2008 at 8:24 AM, Carl Wilhelm Soderstrom
[EMAIL PROTECTED] wrote:
My original contention still stands, though: that lowering the priority of the
BackupPC_link process is a Good Thing.
I certainly agree - at least for servers where BackupPC is not the
only thing running.
On my
On Tue, Apr 1, 2008 at 11:52 AM, Les Mikesell [EMAIL PROTECTED] wrote:
Hereward Cooper wrote:
Is there a solution to this, as I'd love to keep using this program
rather than going back to my custom rsync script.
It should be doing what you want now. You just need to balance the full
On Mon, Mar 24, 2008 at 7:41 AM, dan [EMAIL PROTECTED] wrote:
Unfortunately, I still cannot install 0.68 as I get the same make error,
"array type has incomplete element type", which is gcc4 being more picky than
gcc3 was :(
You can't get an old version of gcc on there to compile with?
-Dave
On Wed, Mar 19, 2008 at 11:07 PM, dan [EMAIL PROTECTED] wrote:
CPU e8400 3Ghz Dual Core.
single 7200rpm 16MB cache 200GB maxtor drive.
ubuntu 7.10
You don't mention how much memory you have in the machine...
FILE COUNT
138581 634MB average of 4.68KB per file (copied the /etc directory 20
On Thu, Mar 20, 2008 at 7:08 AM, Daniel Denson [EMAIL PROTECTED] wrote:
I will run whatever specific test you would like with Bonnie++, just
give me the command line arguments you would like to see. I have each
filesystem mounted to /test$filesystem so you can include that if you
like. I
On Mon, Mar 3, 2008 at 12:08 PM, Tomasz Chmielewski [EMAIL PROTECTED] wrote:
RAID5/6 have a performance penalty when compared to other RAID levels
because every single write (or, write IO operation) requires four disk
IOs on two drives (two reads, and two writes), possibly harming other IO
On Mon, Mar 3, 2008 at 2:54 PM, Adam Goryachev
[EMAIL PROTECTED] wrote:
I was always led to believe that the more drives you had in an array, the
faster it would get. I.e., comparing the same HDD and controller, if you
have 3 HDDs in a RAID5 it would be slower than 6 HDDs in a RAID5.
For most
On Mon, Mar 3, 2008 at 5:01 PM, Christopher Derr [EMAIL PROTECTED] wrote:
Is backuppc up to the task of backing up TBs of data? Or should I be
looking at software that explicitly states for the enterprise like
Symantec Backup Exec, Legato, or even open source Bacula? All of these
are
On Wed, Feb 27, 2008 at 4:38 PM, Stephen Joyce [EMAIL PROTECTED] wrote:
(Mostly) agreed. If you can afford a hardware raid controller, raid 5 is a
good choice.
To clarify, a hardware raid controller with battery-backed RAM is a
good choice for RAID 5; otherwise it will either be very slow for
On Wed, Feb 27, 2008 at 2:54 AM, Tomasz Chmielewski [EMAIL PROTECTED] wrote:
Stripe size is 64k.
Also, the system was made with just mkfs.ext3 -j /dev/sdX, so without
the stride option (or other useful options, like online resizing, which
is enabled by default only in the recent releases
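For the record, the stride math: with a 64k stripe and 4k ext3 blocks, stride = 64/4 = 16. A sketch of setting it at mkfs time, demoed against a scratch image file so no root or real device is needed (on the array you'd target the actual /dev/sdX):

```shell
# stride = stripe_size / block_size = 64k / 4k = 16
img=$(mktemp)
truncate -s 64M "$img"
mke2fs -F -q -j -b 4096 -E stride=16 "$img"   # -F: allow a plain file
tune2fs -l "$img" | grep -i 'raid stride'      # should report stride 16
rm -f "$img"
```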
On Tue, Feb 26, 2008 at 2:07 AM, Tomasz Chmielewski [EMAIL PROTECTED] wrote:
But I didn't have IO::Dirent installed, thanks for the hint. Let's hope
the list of directories in trash will keep decreasing now. Right now,
I have almost 100 directories there, and it is growing each day a bit.
On Tue, Feb 26, 2008 at 2:23 PM, Tomasz Chmielewski [EMAIL PROTECTED] wrote:
Can you give us more details on your disk array? Controller, disks,
RAID layout, ext3 fs creation options, etc...
I said some of that already - but here are some missing parts.
5x 400 GB HDD (WDC WD4000YR)
On Tue, Feb 26, 2008 at 4:39 PM, David Rees [EMAIL PROTECTED] wrote:
So there you go. IMO, unless you are willing to overhaul your storage
system or slightly increase the risk of data corruption (IMO,
data=writeback instead of the default data=ordered should be a large
gain for you
On Mon, Feb 25, 2008 at 1:23 AM, Tomasz Chmielewski [EMAIL PROTECTED] wrote:
Unfortunately, it doesn't scale very well in terms of performance - you
may see this thread on linux-fsdevel list for more info:
http://marc.info/?t=12033398513&r=2&w=4
What version of BackupPC? 3.1.0 does the
On Mon, Feb 25, 2008 at 6:29 PM, dan [EMAIL PROTECTED] wrote:
reiserfs will certainly help a lot with the hardlink and directory creation
and deletion. Claims about reiserfs tend to be greatly exaggerated, but this
is a true strength of it and you will see a really remarkable performance
On Thu, Feb 21, 2008 at 11:43 AM, Nick Webb [EMAIL PROTECTED] wrote:
Rich Rauenzahn wrote:
dan wrote:
no, incrementals are more efficient on bandwidth. they do a less
strenuous test to determine if a file has changed.
at the expense of CPU power on both sides, you can compress
On Feb 11, 2008 7:51 PM, Justin Best [EMAIL PROTECTED] wrote:
On Feb 11, 2008, at 7:18 PM, Nicholas Mistry wrote:
Install your favorite flavor of linux with backuppc (CentOS, Fedora, Ubuntu,
Debian) but install a stripped down version w/o the gui and the like.
Well, my first concern would be
On Jan 18, 2008 12:50 AM, KLEIN Stéphane [EMAIL PROTECTED] wrote:
are there a directive or other stuff to limit memory usage (RAM) of
backuppc ?
Not really; the maximum amount of memory is more or less tied to the
type of backups you are doing and the number of files being backed
up.
For
On Jan 17, 2008 1:37 PM, Bowie Bailey [EMAIL PROTECTED] wrote:
I have a BackupPC server that I haven't touched in a while. It is currently
running version 2.1.2pl1. Since I am so far behind, are there any problems
I would run into upgrading this to the latest version? Anything in
particular
On Nov 28, 2007 10:15 AM, Arch Willingham [EMAIL PROTECTED] wrote:
Can you upgrade if the original install was from the source?
Yes, it is very easy to upgrade from source. Just follow the
installation instructions; an upgrade follows the same procedure.
On Nov 6, 2007 7:35 AM, Paul Archer [EMAIL PROTECTED] wrote:
I mount my /backup raid with noatime and notail options.
Don't forget nodiratime.
nodiratime is a subset of noatime, so if you have noatime set, there
is no need to set nodiratime.
-Dave
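For reference, the corresponding /etc/fstab entry might look like this (device and mount point are examples); since noatime covers directories too, adding nodiratime as well is harmless but redundant:

```
/dev/md0   /backup   ext3   defaults,noatime   0   2
```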
I noticed that for hosts which I have disabled, old full/incremental
backups don't get removed automatically anymore.
Reading the docs, it appears that backups only get moved to the trash
after a successful backup which likely explains why this is happening.
Now, I could just go move those
On 10/24/07, Hendrik Friedel [EMAIL PROTECTED] wrote:
Well, what surprises me is, that I can't hear it seeking...
Try using `iostat 3` or similar during a backup. Typical 7200 rpm IDE
disks can't do more than 100-150 IOP/s or so.
/dev/hda5 94% /mnt/data --xfs, not used by backuppc
/dev/hdb1
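If iostat isn't installed, a rough IOP/s gauge straight from /proc works too. A sketch; it grabs the first device listed, so substitute your backup disk (e.g. hdb):

```shell
# Completed reads (field 4) + writes (field 8) per device, sampled 3 s apart.
dev=$(awk 'NR==1 {print $3}' /proc/diskstats)
c1=$(awk -v d="$dev" '$3 == d {print $4 + $8}' /proc/diskstats); c1=${c1:-0}
sleep 3
c2=$(awk -v d="$dev" '$3 == d {print $4 + $8}' /proc/diskstats); c2=${c2:-0}
echo "$dev: ~$(( (c2 - c1) / 3 )) IOP/s"
```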
On 10/8/07, Hendrik Friedel [EMAIL PROTECTED] wrote:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo    in    cs us sy id wa
 3  0     80   6944  32080 328428    0    0   668  1176  3503  7659 33 29 38  0
 1  0
On 10/2/07, Tony Nelson [EMAIL PROTECTED] wrote:
My first decision point is potentially the easiest. I thought rather
than buying one huge backup server and trying to backup all 32 hosts, it
might be smarter to buy 2 (or more) smaller machines and splitting up
the load. I would think that
On 9/27/07, Dan Pritts [EMAIL PROTECTED] wrote:
So I've been of the opinion (not backed up by experimental data) that
a concatenation (what linux md driver calls LINEAR; similar effects can
be realized with LVM) of two RAID1's would be better for BackupPC than
a RAID10.
My rationale for this
On 9/27/07, Doug Lytle [EMAIL PROTECTED] wrote:
I've recently purchased two 500GB drives that I wanted to add to my XFS
LVM. It turns out that you can't resize an XFS partition. I ended up
having to recreate the LVM.
You can resize an XFS partition, you need to use the xfs_growfs utility.
On 9/26/07, Tony Nelson [EMAIL PROTECTED] wrote:
Well, due to a power failure, I was put in the lovely position of a
corrupted ReiserFS tree. I ran reiserfsck, which took 4 days to
complete and just couldn't bring myself to trust stability of the disk.
Given the lack of interest/maintainers
On 9/26/07, Tony Nelson [EMAIL PROTECTED] wrote:
David Rees wrote:
Your machine looks fine to me. Your backuppc data partition is a single
disk?
My server's disk is 6 250GB IDE drives arranged in a RAID5 with 1 Hot
Spare. The controller is a 3ware Escalade 7506-8.
OK
On 9/19/07, Merz, Christian [EMAIL PROTECTED] wrote:
I'm running BackupPC successfully for quite some time now, backing up mostly
Windows clients and servers using smbclient.
As the data is growing, it's becoming difficult to back up a complete server
overnight, so I'm looking at whether I can speed it
On 9/19/07, Tony Nelson [EMAIL PROTECTED] wrote:
I looked into the checksum-seed option for rsync and it appears to be a
patch that I don't have. I am using Gentoo and just installed rsync
from Portage. Has that patch ever made it into the rsync upstream?
Checksum seed support was added in
On 9/19/07, Tony Nelson [EMAIL PROTECTED] wrote:
Attached are the files you requested. The BackupPC server was running 2
long running backups when I took these. In addition to the screenshots
you requested, I added a screenshot from the web console of BackupPC.
From your screenshots, the web
On 9/18/07, Tony Nelson [EMAIL PROTECTED] wrote:
What I would like to do is figure out the best way of determining if the
source of the slowness is the target server, the backuppc server or a
network bottleneck that I just can't imagine.
Fire up `top` and `vmstat 3` on each machine while the
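Even without top or vmstat handy, the first line of /proc/stat gives the same split (a sketch; these are cumulative jiffies since boot, so it's a coarse average): high iowait on the BackupPC box implicates its disks, high user/system on the client points at checksumming or compression there.

```shell
# /proc/stat line 1: cpu user nice system idle iowait irq softirq ...
read -r cpu user nice system idle iowait _ < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "rough split since boot: idle=$((100 * idle / total))% iowait=$((100 * iowait / total))%"
```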
On 8/21/07, Rich Rauenzahn [EMAIL PROTECTED] wrote:
Whenever I use these options, rsync seems to work and transfer
files but nothing ever seems to actually get written to the backup
dirs:
The Perl Rsync library doesn't support compression which is why adding
the compression option to the
On 7/26/07, Yaakov Chaikin [EMAIL PROTECTED] wrote:
This is a very basic question... After reading the docs, I am unclear
on the difference between an incremental backup and a full backup.
Since the backups are stored in a pool which stores JUST the
difference between the older and newer
On 7/26/07, David Rees [EMAIL PROTECTED] wrote:
On 7/26/07, Yaakov Chaikin [EMAIL PROTECTED] wrote:
This is a very basic question... After reading the docs, I am unclear
on the difference between an incremental backup and a full backup.
Since the backups are stored in a pool which stores
On 7/5/07, Carl Wilhelm Soderstrom [EMAIL PROTECTED] wrote:
On 07/05 05:11 , David Rees wrote:
I think that possible workarounds would be to switch to a different
backup transport other than rsync. Can anyone think of any other
solutions?
Try carving it up into several chunks that get
Hi,
I'm using backuppc to backup a number of different machines, but am
having some memory consumption issues with the backuppc daemon when
backing up one particular host we just started backing up using
BackupPC 3.0.0.
When backing up this client, the daemon uses over 4GB of RAM causing
the
On 4/23/07, James Kyle [EMAIL PROTECTED] wrote:
I'm getting the Administrative Attention Needed alerts daily
referring to a host that I've set as being archived.
I've set the $Conf{BackupsDisable} variable to 2 - Don't do any
backups on this client. Manually requested backups (via the CGI
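For anyone searching later, the per-PC override in question looks like this (file name and path vary by install; the EMailNotifyOldBackupDays line is my guess at what's driving the daily nag, not something the docs promise):

```perl
# pc/archivedhost.pl (name hypothetical)
$Conf{BackupsDisable} = 2;                 # 2 = don't do any backups
$Conf{EMailNotifyOldBackupDays} = 36500;   # quiet age-based "attention" mail
```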
On 3/29/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
I didn't blame anybody, just said BackupPC is working slow, and it was working
slow, very slow indeed. The checksum-seed option seems to be doing its trick,
though.
How long are full and incremental backups taking now?
I am thankful to people
On 3/30/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
David Rees wrote:
How long are full and incremental backups taking now?
In one machine it went down from 900 minutes to 175 minutes. I expect better
performance when more memory is added (today or tomorrow they will add it),
and I don't
And my server isn't that different than yours disk-wise, just RAID1
instead of no raid, it's even the exact same disk.
On 3/27/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
David Rees wrote:
That is true; full backups take about 500-600 minutes and incrementals
take 200-300 minutes.
Is that from
On 3/28/07, John T. Yocum [EMAIL PROTECTED] wrote:
Here is the iostat output, the server is doing two full backups at the
moment, along with a nightly. Server specs: P4 3.2Ghz, 512MB RAM, 300GB
SATA drive.
[EMAIL PROTECTED] ~]# iostat
Linux 2.6.9-42.0.10.ELsmp (backup2.fluidhosting.com)
On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
Let's hope this doesn't wrap around... as you can see, load is in the 0.1-0.01
range.
1 users    Load  0.12  0.05  0.01    Mar 27 07:30
Mem: KB    REAL    VIRTUAL    VN PAGER    SWAP PAGER
Tot
On 3/27/07, Les Mikesell [EMAIL PROTECTED] wrote:
Evren Yurtesen wrote:
What is wall clock time for a run and is it
reasonable for having to read through both the client and server copies?
I am using rsync but the problem is that it still has to go through a
lot of hard links to figure
On 3/27/07, David Rees [EMAIL PROTECTED] wrote:
Can you try mounting the backup partition async so we can see if it
really is read performance or write performance that is killing backup
performance?
I must wonder if ufs2 is really bad at storing inodes on disk...
I went and did some
On 3/27/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
David Rees wrote:
Evren, I didn't see that you mentioned a wall clock time for your
backups? I want to know how many files are in a single backup, how
much data is in that backup and how long it takes to perform that
backup.
I sent
On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
John Pettitt wrote:
The basic problem is backuppc is using the file system as a database -
specifically using the hard link capability to store multiple references
to an object and the link count to manage garbage collection. Many
On 3/26/07, Bernhard Ott [EMAIL PROTECTED] wrote:
It is true that BackupPC is great; however, backuppc is slow because it
is trying to make a backup of a single instance of each file to save
space. Now we are wasting (perhaps even more?) space to make it fast
when we do raid1.
You can't be
On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
And, you could consider buying a faster drive, or one with a larger
buffer. Some IDE drives have pathetically small buffers and slow
rotation rates. That makes for a greater need for seeking, and worse
seek performance.
Well this is a
Let's start at the beginning:
On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
I am using backuppc but it is extremely slow. I narrowed it down to a disk
bottleneck (ad2 being the backup disk). Also checked the archives of
the mailing list and it is mentioned that this is happening because
On 3/20/07, Henrik Genssen [EMAIL PROTECTED] wrote:
are there any issues upgrading from 2.1.2.pl1?
None that I know of. The upgrade process is pretty smooth (though I
opted to convert to the new configuration file layout at the same time,
which does take a bit of tweaking).
is 3.0 yet
On 3/22/07, John Pettitt [EMAIL PROTECTED] wrote:
Have you checked that the 3ware actually has cache enabled - it has a
habit of disabling it if the battery backup is bad or missing and it
will make a *huge* difference
Just make sure that if you enable the cache you actually have battery
On 2/20/07, Carl Wilhelm Soderstrom [EMAIL PROTECTED] wrote:
On 02/20 12:39 , Nils Breunese (Lemonbit) wrote:
All my clients are servers with fast connections. I'll take
MaxBackups down to 1 then.
I haven't done any thorough empirical testing on this, but I suspect that
MaxBackups=2 would
On 1/9/07, Timothy J. Massey [EMAIL PROTECTED] wrote:
So, it seems to me that the culprit is rsync. I think the reason my
production backup servers are usually at 100% CPU utilization is that
they're backing up reasonably high-performance file servers that have
enough CPU power to max out my
On 1/8/07, Timothy J. Massey [EMAIL PROTECTED] wrote:
top - 21:09:02 up 3:55, 2 users, load average: 1.15, 1.12, 1.06
Tasks: 45 total, 2 running, 42 sleeping, 0 stopped, 1 zombie
Cpu(s): 82.1% us, 11.3% sy, 0.0% ni, 0.0% id, 0.3% wa, 2.7% hi, 3.7% si
Mem: 109068k total,
On 1/2/07, Jason Hughes [EMAIL PROTECTED] wrote:
Good recommendations, Holger. I would add that niceing a process only
changes its scheduling affinity, but does not modify in any way its hard
disk activity or DMA priority, so until the original poster understands
what exactly makes the server
On 12/29/06, Michal Wokoun [EMAIL PROTECTED] wrote:
But when I run a full backup on a workstation with about 25 GB of user data in
small files (mostly Word and Excel documents), the fileserver freezes
after circa
half an hour - LEDs on the keyboard blinking and I have to push the hard
reset. There
On 12/20/06, John Pettitt [EMAIL PROTECTED] wrote:
I'm about to migrate my BackupPC partition to a new raid controller
(more space and more spindles) - my current thinking is to use
dump/restore - has anybody done this - what issues did you encounter?
I've used tar over ssh which worked well,
On 11/10/06, James Ward [EMAIL PROTECTED] wrote:
I have ~160 servers connected on a high speed internal network which
I use to do backups. Additionally I have ~50 remote servers which I
back up over the external network. It's taking about a week to make
the rounds of all the systems. Would
On 11/8/06, David Rees [EMAIL PROTECTED] wrote:
If I pull the latest code from CVS, is there anything special I need
to do before using it to upgrade compared to the normal tarball?
Hey look, there appears to be a handy makeDist script. Let's see how
that works. :-)
-Dave
On 11/7/06, Craig Barratt [EMAIL PROTECTED] wrote:
I will do one more 3.0.0 beta by the end of this month.
That should be very close to the final 3.0.0 release.
Even though the 3.0.0 beta releases are quite stable, given the
wide deployment of BackupPC I wanted to have a conservative beta
On 10/16/06, James Ward [EMAIL PROTECTED] wrote:
I have two ~180G NFS filesystems I'm backing up, but they take a
long time. My boss says they only need to be backed up once a
month. What's the easiest way to get them to schedule only once
every four weeks?
Set the FullPeriod to 30 days
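Concretely, the per-host override might look like this (host file name is an example; the value sits just under the target so scheduling jitter doesn't stretch the interval past a month):

```perl
# pc/nfshost.pl (name hypothetical)
$Conf{FullPeriod} = 29.5;   # full backup roughly every 30 days
$Conf{IncrPeriod} = 29.5;   # assumption: no incrementals wanted in between
```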
On 10/2/06, Steffen Heil [EMAIL PROTECTED] wrote:
My current pool is on a 80 GB raid1 lvm-backed device using ext3.
Now one of the drives failed and the pool is 82% full.
So I got 2 new 200 GB drives, configured them with raid1 and lvm using xfs.
So, how do I get the pool over there?
cp -a
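`cp -a` works here because GNU cp preserves hard links among files copied in one invocation, which is what keeps the pool from exploding in size on the new disks. A tiny demo in temp dirs (not real pool paths):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/a"
ln "$src/a" "$src/b"                      # hard link, like pool/pc entries
cp -a "$src/." "$dst/"
# the copies must share one inode, or the "copy" just doubled your data:
[ "$(stat -c %i "$dst/a")" = "$(stat -c %i "$dst/b")" ] && echo "hardlinks preserved"
rm -rf "$src" "$dst"
```

(`rsync -aH` is another common choice for this job, though it can need a lot of memory on huge pools.)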
On 6/22/06, Mark Wass [EMAIL PROTECTED] wrote:
What are the settings in the config.pl file I need to set if I want to
back up a single file?
I'm using Rsync and I only want to backup the /etc/temp.ini file
Have a look at $Conf{BackupFilesOnly}
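With the rsync method that would look roughly like this (the hash form of BackupFilesOnly keys on the share name; the share layout is an assumption):

```perl
$Conf{RsyncShareName}  = ['/etc'];
$Conf{BackupFilesOnly} = { '/etc' => ['/temp.ini'] };   # path relative to share
```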
On 6/7/06, Craig Barratt [EMAIL PROTECTED] wrote:
There is a new version of File::RsyncP that is close to release
that you could try. I can email it to you if you want.
Out of curiosity, what's new in File::RsyncP?
-Dave
On 5/16/06, Raf [EMAIL PROTECTED] wrote:
May 15 22:15:32 backup kernel: Bad page state in process 'BackupPC_tarExt'
May 15 22:15:32 backup kernel: page:c117fd20 flags:0x80010008
mapping: mapcount:0 count:2130706432 (Not tainted)
May 15 22:15:32 backup kernel: Trying to fix it up, but a
On 5/11/06, Lee A. Connell [EMAIL PROTECTED] wrote:
I noticed while monitoring backuppc that it doesn't seem to compress on the
fly, is this
true? I am backing up 40GB's worth of data on a server and as it is backing up
I monitor
the disk space usage on the mount point and by looking at that
On 4/14/06, Vincent Ho [EMAIL PROTECTED] wrote:
The -D option to rsync does what we want though, it means --devices on
older rsyncs and --devices --specials on 2.6.7+. I've changed our
$Conf{RsyncArgs} to use -D rather than --devices and things have worked
since, and suggest we do the same to
On 4/14/06, Ed Burgstaler [EMAIL PROTECTED] wrote:
How can I painlessly upgrade or patch my current BackupPC version 2.1.1
without screwing up my now working system?
Thanks to all
Upgrading is as easy as installing. Just make sure you specify the same
data directory and it should go very
When doing rsync over ssh I'd like to be able to specify certain
filesystem types to exclude backing up. For example, I'd like to
exclude all nfs filesystems from being backed up, this way when I back
up a group of machines, mounting the same nfs share, the nfs contents
don't get backed up
On 4/14/06, Matt [EMAIL PROTECTED] wrote:
Have a look at rsync's -x option.
Not what I'm looking for, that stays on one partition and each machine
has multiple partitions to backup.
-Dave
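One workaround is to generate the exclude list from the client's mount table instead of asking rsync to detect filesystem types; a sketch (plain rsync invocation shown for illustration only, destination hypothetical):

```shell
# One --exclude per NFS mount point found in /proc/mounts.
excludes=$(awk '$3 ~ /^nfs/ {print "--exclude=" $2 "/"}' /proc/mounts)
echo rsync -a $excludes / backuppc:/dest/    # hypothetical; echoed, not run
```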
On 4/6/06, Mattia Martinello [EMAIL PROTECTED] wrote:
I am configuring a BackupPC server with rsync method, so the rsync
daemon resides on the same BackupPC server.
How I have to configure rsyncd.conf to make it working with BackupPC?
The rsync method or rsyncd method? Have a re-read of the
On 4/5/06, dosseh edjé [EMAIL PROTECTED] wrote:
I'm using backuppc-2.1.2 to back up WinXX and
Linux machines. But I want to ask if it would be
possible to restore data with backuppc from archives.
Does a program exist to do that?
Any advice would be welcome.
I think you'll
On 3/16/06, Les Mikesell [EMAIL PROTECTED] wrote:
I don't think I'd do raid5 in software but raid1 on scsi
is very usable and better than nothing on IDE as long as
the drives are on separate controllers.
I've run software RAID5 on Linux for quite some time without any
problems while
On 3/15/06, Carl Wilhelm Soderstrom [EMAIL PROTECTED] wrote:
On 03/15 10:50 , Matt wrote:
Furthermore, I find the setup with raid controllers tedious: The
modules often don't come with the distro and more often than not they do
not load at boot forcing me to tweak /etc/rc.local.
2006-03-07 21:06:26 full backup started for directory cDrive
2006-03-08 05:39:15 Aborting backup up after signal ALRM
You may need to increase $Conf{ClientTimeout} further. What's it
set to now?
It was originally 7200; I upped it to 14400. Do you think it needs
to be higher? (I have
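For scale: the units are seconds, so those values are 2 h and 4 h. Something like this gives a slow full of cDrive plenty of room (the number is an example, not a recommendation):

```perl
$Conf{ClientTimeout} = 72000;   # 20 hours
```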
On 3/11/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
I have been having a bit of trouble with backups since rebuilding my
backuppc server a little while ago. On my longer backups they seem to bomb
out with a message of aborted by signal=ALRM
This is what I am seeing in the logs:
On 3/10/06, Guus Houtzager [EMAIL PROTECTED] wrote:
On Friday 10 March 2006 11:27, [EMAIL PROTECTED] wrote:
Here tis..
top - 21:57:44 up 11:45, 2 users, load average: 5.70, 5.50, 4.29
Tasks: 106 total, 5 running, 99 sleeping, 0 stopped, 2 zombie
Cpu(s): 37.7% us, 25.4% sy,
On 3/9/06, Carl Wilhelm Soderstrom [EMAIL PROTECTED] wrote:
On Thu, 2006-03-09 at 14:40, [EMAIL PROTECTED] wrote:
it just hovers at about 300kb/s I would expect that when the file
listing is sent, for there to be a heavy load on the network at that
time, and then some heavy cpu
On 2/16/06, Les Mikesell [EMAIL PROTECTED] wrote:
There is nothing built in to work that way but you might
establish a vpn connection from the client to the server
with something like openvpn at backup time or do some
tricky port-forwarding over an ssh connection.
OpenVPN is a good solution,
I'm currently backing up a Windows machine using rsyncd across a WAN
(VPN over T1s). The files I'm backing up are large, but they are
fairly compressible. Being able to compress the data across the wire
would speed up backups and reduce bandwidth utilization by at least
2x.
Because backuppc
On 1/18/06, Richard Smith [EMAIL PROTECTED] wrote:
Because I was curious I switched my archives from bzip archives to bzip2.
You mean gzip to bzip2, right?
Archive size went down by 10% but the time it took to generate the
archives went up by a factor of 5.8.
Is that the expected response?
On 1/18/06, Les Mikesell [EMAIL PROTECTED] wrote:
No one has been able to make cygwin rsync work when started by
sshd on windows. It will run for some short period of time
and then hang with both sides waiting for something. The
alternatives seem to be using port-forwarding over ssh with
On 11/23/05, Alex Schaft [EMAIL PROTECTED] wrote:
I'm taking an offsite backup by plugging in an IDE drive, mounting it,
and running the archive to it. Currently it's formatted as ext3, but I'm
wondering if I really need the journal on there, and if I should just
use ext2?
Pretty much any FS
On 11/11/05, Marten van Wezel [EMAIL PROTECTED] wrote:
- I'm using ssh+tar to back up a remote system to a local dir (not to a
device or compressed archive files)
Why not use ssh+rsync to do your backups? That should save a lot more
bandwidth than streaming the whole filesystem all the time...