Hi,
I realized that I have one host where the backup history goes back only
47 days, while on all other hosts it goes back roughly 2 years. I am
using the following settings for all hosts:
FullPeriod=6.97
FillCycle=0
FullKeepCnt=[3, 0, 3, 0, 6]
FullKeepCntMin=3
FullAgeMax=720
That should retain fulls for roughly two years on every host, as far as I can tell.
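If I understand the exponential keep schedule right, the FullKeepCnt slots
apply at 1, 2, 4, 8 and 16 times FullPeriod, so a quick sanity check of the
span those settings should cover (plain shell arithmetic, assuming FullPeriod
of roughly 7 days):

    # [3, 0, 3, 0, 6] fulls kept at 1-, 2-, 4-, 8-, 16-week spacing
    echo $(( (3*1 + 0*2 + 3*4 + 0*8 + 6*16) * 7 ))   # 777 days, about 2 years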
On 11/28/22 09:02, Paulo Ricardo Bruck wrote:
Hi all
New adventures using BackupPC 4.0 8)
I was using BackupPC v3 to keep 3 full backups, and it was
working like a charm 8)
Now, reading the BackupPC 4 docs, I see it changes the approach to
backups.
https://backuppc.github.io/backuppc/BackupPC.html#Ba
Hi all
New adventures using BackupPC 4.0 8)
I was using BackupPC v3 to keep 3 full backups, and it was working like a
charm 8)
Now, reading the BackupPC 4 docs, I see it changes the approach to backups.
https://backuppc.github.io/backuppc/BackupPC.html#BackupPC-4.0
Using Ubuntu-22.04 + backuppc-4.4.0-5ubun
That worked. Thanks for the help!
Graham
On 26/04/2020 17:35, Craig Barratt via BackupPC-users wrote:
Sorry, the correct form should be "$@":
#!/bin/sh -f
exec /bin/tar -c "$@"
(Note that you want to force tar to have the -c option, not exec).
Craig
On Sun, Apr 26, 2020 at 5:14 AM
Sorry, the correct form should be "$@":
#!/bin/sh -f
exec /bin/tar -c "$@"
(Note that you want to force tar to have the -c option, not exec).
Craig
On Sun, Apr 26, 2020 at 5:14 AM Graham Seaman wrote:
> Hi Craig
>
> I set sudoers to allow backuppc to run tar as root with no password, and
>
Hi Craig
I set sudoers to allow backuppc to run tar as root with no password, and
incremental backups work fine.
This is only marginally less secure than the old setup, which allowed
backuppc to run the script which called tar, so I guess I can live with
this.
But in case you have any othe
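For reference, the sudoers rule being described might look like this (the
username and tar path are my assumption, not taken from the thread; validate
with visudo before installing):

    echo 'backuppc ALL=(root) NOPASSWD: /bin/tar' > /etc/sudoers.d/backuppc
    visudo -cf /etc/sudoers.d/backuppc   # syntax-check the new rule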
It would be helpful if you included the edited script in your reply. Did
you use double quotes, or two single quotes?
I'd recommend trying without the script, just to make sure it works
correctly. If it does, you can be sure the issue is in how the script
handles/splits arguments.
Craig
On Sat, A
Craig
Quoting $* gives me a new error:
/bin/tar: invalid option -- ' '
(I get exactly the same error whether I use $incrDate or $incrDate+)
That script is to avoid potential security problems from relaxing the
rules in sudoers, so I'd rather not get rid of it, but I'm a bit
surprised no-one
Graham,
Your script is the problem. Using $* causes the shell to resplit the
arguments at whitespace. To preserve the arguments you need to put that in
quotes:
exec /bin/tar -c "$*"
Craig
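For anyone following along, here is a tiny plain-sh illustration of how the
two forms treat an argument containing a space (the timestamp is just an
example value):

    #!/bin/sh
    # "$@" keeps each argument intact; "$*" joins them into one word.
    show() { printf '[%s]\n' "$@"; }
    set -- --after-date '2020-04-22 21:18:10'
    show "$@"   # prints [--after-date] and [2020-04-22 21:18:10]
    show "$*"   # prints [--after-date 2020-04-22 21:18:10] as one word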
On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman wrote:
> Thanks Craig
>
> That's clearly the problem, but I'
Thanks Craig
That's clearly the problem, but I'm still mystified.
I have backuppc running on my home server; the storage is on a NAS,
NFS-mounted on the home server. Backing up other hosts on my network (both
full and incremental) over rsync works fine.
The home server backs up using tar. The com
Graham,
This is a problem with shell (likely ssh) escaping of arguments that
contain a space.
For incremental backups a timestamp is passed as an argument to tar running
on the client. The argument should be a date and time, eg:
--after-date 2020-04-22\ 21:18:10
Notice there needs to be a backslash escaping the space.
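A small sketch of why (the host name and paths are hypothetical): ssh joins
its argument vector into a single command string for the remote shell, so an
unescaped space splits the timestamp into two words:

    ssh client tar -cf - --after-date 2020-04-22 21:18:10 /home >/dev/null
        # remote tar sees date '2020-04-22' plus a bogus file '21:18:10'
    ssh client tar -cf - --after-date '2020-04-22\ 21:18:10' /home >/dev/null
        # escaped space: the remote shell keeps the timestamp as one word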
OK, I guess it's this (from the start of XferLOG.bad):
/bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
00:00:00
/bin/tar: 21\:18\:10: Cannot stat: No such file or directory
which is kind of confusing, as it goes on to copy the rest of the
directory and then says '0 E
Graham,
Tar exit status of 512 means it encountered some sort of error (eg, file
read error) while it was running on the target client. Please look at the
XferLOG.bad file carefully to see the specific error from tar.
If you are unable to see the error, please send me the entire XferLOG.bad
file
I have a persistent problem with backing up one host: I can do a full
backup, but an incremental backup fails on trying to transfer the first
directory:
tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist, 18122
sizeExistComp, 2 filesTotal, 81381 sizeTotal
Got fatal error during xfer (Tar ex
On 2020-03-09 02:11, George Campbell wrote:
Hello, I am new to this list, so please let me know if there is anything missing from my question...
I have setup backuppc as a docker (version 4.3.2) on an Ubuntu server. My client is a Windows 10 running RPC.
A full backup runs and I can see lots
Hello, I am new to this list, so please let me know if there is anything
missing from my question...
I have setup backuppc as a docker (version 4.3.2) on an Ubuntu server. My
client is a Windows 10 running RPC.
A full backup runs and I can see lots of files on the server. But, the
backup never se
The Windows built-in scheduled backup does rather poorly because it is easy to
get wrong. That is why I would rather use third-party software to do incremental
backups (http://www.backup-utility.com/articles/incremental-backup.html) instead
of the Windows built-in backup tool. Third-party software is alway
On 2016-06-09 12:43, Carl Wilhelm Soderstrom wrote:
> I have seen it happen on a couple of occasions where a Windows machine
> (backed up via Cygwin rsyncd, not the minimal rsyncd off the SF page)
> will suddenly stop working for incremental backups. Full backups will
> continue to work, but incr
On 06/09 02:36 , Les Mikesell wrote:
> It might just be somewhat different timing for that host too - that
> is, there may be a large number of unchanging files or it has slow
> drives that make it take a longer time to find something that changed.
I don't think so.
At this point I'm starting to t
On 2016-06-09 12:43, Carl Wilhelm Soderstrom wrote:
> I have seen it happen on a couple of occasions where a Windows machine
> (backed up via Cygwin rsyncd, not the minimal rsyncd off the SF page)
> will
> suddenly stop working for incremental backups. Full backups will
> continue to
> work, but
On Thu, Jun 9, 2016 at 2:07 PM, Carl Wilhelm Soderstrom
wrote:
> On 06/09 01:50 , Les Mikesell wrote:
>> Sometimes this is caused by a nat router or stateful firewall
>> (possibly even host firewall software) timing out and breaking a
>> connection due to too much idle time in the traffic. If you
On 06/09 01:50 , Les Mikesell wrote:
> Sometimes this is caused by a nat router or stateful firewall
> (possibly even host firewall software) timing out and breaking a
> connection due to too much idle time in the traffic. If you are
> running over ssh you can usually fix it by enabling keepalives
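For the ssh transport, the keepalives Les mentions are standard OpenSSH
options; for example (user and host are placeholders):

    # Probe every 60s over the encrypted channel; give up after 3 missed
    # replies. Can also go in ~/.ssh/config.
    ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=3 backuppc@client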
On Thu, Jun 9, 2016 at 12:43 PM, Carl Wilhelm Soderstrom
wrote:
> I have seen it happen on a couple of occasions where a Windows machine
> (backed up via Cygwin rsyncd, not the minimal rsyncd off the SF page) will
> suddenly stop working for incremental backups. Full backups will continue to
> wor
I have seen it happen on a couple of occasions where a Windows machine
(backed up via Cygwin rsyncd, not the minimal rsyncd off the SF page) will
suddenly stop working for incremental backups. Full backups will continue to
work, but incrementals will start failing with a PIPE error. For example,
he
Hi,
After some days tracking down the cause of the log error "unable to read 4
bytes", I found that it is thrown after full backups run: somehow the backup is
changing the permissions of the ./ and ../ directories on the client side in
the backuppc user's home (where the .ssh and .profile folders are). When perm
log, the problem is not immediately obvious. It is obvious only after
you browse your full backups and discover some stuff is not there.
-Mikko
2014-10-14 15:26 GMT+03:00 Holger Parplies :
> Hi,
>
> Mikko Kortelainen wrote on 2014-10-14 12:18:00 +0300 [[BackupPC-users] Full
> backup n
Hi,
Mikko Kortelainen wrote on 2014-10-14 12:18:00 +0300 [[BackupPC-users] Full
backup not backing up all files]:
> I have a problem with two BackupPC hosts, while a third host is working
> ok.
>
> The problem is that a full backup does not seem to back
I have a problem with two BackupPC hosts, while a third host is working
ok.
The problem is that a full backup does not seem to back up all files
on Windows hosts. An incremental backs up many more files than a full
one, or so it seems, looking at the version history.
On a typic
On Thu, 16 Feb 2012 10:12:23 -0500
Steve Blackwell wrote:
8>< snip
> The problem I'm having is that whenever I try to do a full backup, the
> computer locks up. There are no messages in any of the logs to
> indicate what might have caused the problem. Interestingly,
> incremental backups wo
I have a fairly old computer, a ~6yr old dual 3.4GHz Pentium 4 that is
running Fedora 12. It's (past) time for an upgrade. I want to do a
clean install, as the requirements for boot partition size have
increased, and so I need a good complete backup before I start.
The problem I'm having is that
On Tue, Oct 11, 2011 at 10:53 AM, Carlos Albornoz
wrote:
>
> Can this 'archive host' be the same backuppc server, or does it
> necessarily have to be another host?
>
See:
http://backuppc.sourceforge.net/faq/BackupPC.html#configuring_an_archive_host
An 'archive host' is just a special configuration i
On Tue, Oct 11, 2011 at 12:37 PM, Les Mikesell wrote:
> On Tue, Oct 11, 2011 at 10:23 AM, Carlos Albornoz
> wrote:
>>
>> Recently my company acquired a tape backup unit (TS2900), and I wish to
>> send full backups to these tapes; is that possible?
>> The idea is to send monthly backups to tape.
>>
>> I
On Tue, Oct 11, 2011 at 10:23 AM, Carlos Albornoz
wrote:
>
> Recently my company acquired a tape backup unit (TS2900), and I wish to
> send full backups to these tapes; is that possible?
> The idea is to send monthly backups to tape.
>
> I ask because I read that BackupPC is designed for disk backup.
Yo
Hi
Recently my company acquired a tape backup unit (TS2900), and I wish to
send full backups to these tapes; is that possible?
The idea is to send monthly backups to tape.
I ask because I read that BackupPC is designed for disk backup.
cheers
--
Carlos Albornoz C.
Linux User #360502
Fono: 97864420
Saturn2888 wrote:
> I'm pretty sure, when using Rsync, this is done automatically. Secondly,
> there's a way to fill in your incrementals and make them no longer dependent
> on the full backups.
Thank you for your answer !
Sincerely,
I'm pretty sure, when using Rsync, this is done automatically. Secondly,
there's a way to fill in your incrementals and make them no longer dependent on
the full backups.
Sam Przyswa wrote:
> Hi,
>
> I got this error on a full backup with the rsync method:
>
>
>
> Got remote protocol 30
> Negotiated protocol version 28
> fileListReceive() failed
> Done: 0 files, 0 bytes
> Got fatal error during xfer (fileListReceive fa
Hi,
I got this error on a full backup with the rsync method:
Got remote protocol 30
Negotiated protocol version 28
fileListReceive() failed
Done: 0 files, 0 bytes
Got fatal error during xfer (fileListReceive failed)
Backup aborted (fileListReceive fail
Hi,
Daniel Carrera wrote on 2009-05-23 08:38:01 +0200 [Re: [BackupPC-users] "Full"
backup]:
> Holger Parplies wrote:
> > 1.) That is what you are requesting BackupPC to do.
> > [...] An incremental backup *can* miss changes. That is highly
> > unlikely
Daniel Carrera wrote:
> Holger Parplies wrote:
>> 1.) That is what you are requesting BackupPC to do.
>> If you want your backups to depend on a different reference point
>> than the previous full backup, you can use IncrLevels. An incremental
>> backup *can* miss changes. That is highl
Holger Parplies wrote:
> 1.) That is what you are requesting BackupPC to do.
> If you want your backups to depend on a different reference point
> than the previous full backup, you can use IncrLevels. An incremental
> backup *can* miss changes. That is highly unlikely with rsync but
>
Hi,
Les Mikesell wrote on 2009-05-22 18:28:21 -0500 [Re: [BackupPC-users] "Full"
backup]:
> Daniel Carrera wrote:
> >
> >> the reference backup for an incremental rsync backup is the
> >> *previous backup of lower level* of the host. Level 1 incrementals
Daniel Carrera wrote:
>
>> the reference backup
>> for an incremental rsync backup is the *previous backup of lower level* of
>> the
>> host. Level 1 incrementals will re-transmit any changed files until the next
>> full backup (because they are relative to the previous full, not to each
>> other
Holger Parplies wrote:
> the reference backup
> for an incremental rsync backup is the *previous backup of lower level* of the
> host. Level 1 incrementals will re-transmit any changed files until the next
> full backup (because they are relative to the previous full, not to each
> other).
That se
Hi,
Les Mikesell wrote on 2009-05-22 15:10:56 -0500 [Re: [BackupPC-users] "Full"
backup]:
> Daniel Carrera wrote:
> > Hello,
> >
> > If BackupPC uses hard links, what exactly makes a full backup different
> > from an incremental backup? Is it just the --che
Daniel Carrera wrote:
> Hello,
>
> If BackupPC uses hard links, what exactly makes a full backup different
> from an incremental backup? Is it just the --checksum flag for rsync?
It depends on the xfer method. With smb and tar, a full actually
transfers everything; with rsync it sets the -i fl
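In stock-rsync terms the distinction looks roughly like this (illustrative
only; BackupPC speaks the rsync protocol itself rather than shelling out):

    rsync -a src/ dest/                  # incremental-style: skip files whose
                                         # size and mtime are unchanged
    rsync -a --ignore-times src/ dest/   # full-style: run the delta check on
                                         # every file even if size/mtime match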
Hello,
If BackupPC uses hard links, what exactly makes a full backup different
from an incremental backup? Is it just the --checksum flag for rsync?
Suppose that a file has not changed since the last full backup. Will
BackupPC re-transmit the file and create a new redundant file on the
backup
Hi,
I got this message on full backup on a machine with rsync:
Got fatal error during xfer (fileListReceive failed)
Backup aborted (fileListReceive failed)
Not saving this as a partial backup since it has fewer files than the prior one
(got 6041 and 0 files versus 6041)
But the incremental ba
Toni writes:
> BackupPC full dump, with patch which removed --ignore-times for a full
> backup:
> Done: 507 files, 50731819 bytes
> full backup complete
> real  13m39.796s
> user  0m4.232s
> sys   0m0.556s
> Network IO used: 620MB
>
> 'rsync -auvH --ignore-times' on the same data:
> sent 48
Toni Van Remortel wrote:
> Les Mikesell wrote:
>> Toni Van Remortel wrote:
>>> Toni Van Remortel wrote:
>>>> Anyway, I'm preparing a separate test setup now, to be able to do
>>>> correct tests (so both BackupPC and an rsync tree are using data from
>>>> the same time).
>>>> Test results will be he
Les Mikesell wrote:
> Toni Van Remortel wrote:
>> Toni Van Remortel wrote:
>>> Anyway, I'm preparing a separate test setup now, to be able to do
>>> correct tests (so both BackupPC and an rsync tree are using data from
>>> the same time).
>>> Test results will be here tomorrow.
>>>
>> So that is
Toni Van Remortel wrote:
> Toni Van Remortel wrote:
>> Anyway, I'm preparing a separate test setup now, to be able to do
>> correct tests (so both BackupPC and an rsync tree are using data from
>> the same time).
>> Test results will be here tomorrow.
>>
> So that is today.
>
> BackupPC full du
Toni Van Remortel wrote:
> Anyway, I'm preparing a separate test setup now, to be able to do
> correct tests (so both BackupPC and an rsync tree are using data from
> the same time).
> Test results will be here tomorrow.
>
So that is today.
BackupPC full dump, with patch which removed --ignore-
Nexenta is alive and well. In fact, check this out.
http://www.nexenta.com/corp/
Nexenta is not advancing at the pace of Ubuntu, though. I like the Ubuntu
system, so Nexenta is great for me. If I were you and you were not tied to
Ubuntu then you might consider OpenSolaris or Solaris 10. Solaris 10
Gene Horodecki wrote:
> Sounds reasonable... What did you do about the attrib file? I noticed
> there is a file called 'attrib' in each of the pool directories with
> some binary data in it.
>
Nothing... it just contains permissions, etc. That's why I did another
full after the move -- then
Sounds reasonable... What did you do about the attrib file? I noticed
there is a file called 'attrib' in each of the pool directories with some
binary data in it.
"Rich Rauenzahn" <[EMAIL PROTECTED]> wrote:
> Gene Horodecki wrote:
>
>>
>>>
>>>I had that problem as well.. so I uhh..
Gene Horodecki wrote:
>> I had that problem as well.. so I uhh.. well, I fiddled with the backup
>> directory on the backuppc server and moved them around so that backuppc
>> wouldn't see I had moved them remotely.. Not something I would exactly
>> recommend doing... although it worked.
> Great suggesti
> Perhaps you could fiddle with them to make them exactly the same...
> At least if you have the 3.x version you will be able to stop and
> restart the initial full if you have to while getting the first complete
> copy.
> I had that problem as well.. so I uhh.. well, I fiddled with the backup
> d
Gene Horodecki wrote:
> I fiddled with the paths of my biggest backup in order to simplify an
> offsite copy and now because the files aren't "exactly the same" it seems
> it's going to take as long as the very first backup which was 4x as long as
> subsequent fulls. Unfortunate, because all the
Gene Horodecki wrote:
> I fiddled with the paths of my biggest backup in order to simplify an
> offsite copy and now because the files aren't "exactly the same" it seems
> it's going to take as long as the very first backup which was 4x as long as
> subsequent fulls. Unfortunate, because all the f
I fiddled with the paths of my biggest backup in order to simplify an
offsite copy and now because the files aren't "exactly the same" it seems
it's going to take as long as the very first backup which was 4x as long as
subsequent fulls. Unfortunate, because all the files are there.. but they
nee
Les Mikesell wrote:
> Gene Horodecki wrote:
>> Is this true? Why not just send the checksum/name/date/permissions of the
>> file first and see if it exists already and link it in if it does. If the
>> file does not exist by name but there is a checksum for the file, then just
>> use the vital data to lin
Gene Horodecki wrote:
>> I'm not sure what you mean by 'pool' here. The only thing relevant to
>> what a backuppc rsync transfer will copy is the previous full of the
>> same machine. Files of the same name in the same location will use the
>> rsync algorithm to decide how much, if any, data ne
dan wrote:
> the ZFS machine is a Nexenta (OpenSolaris+Ubuntu) machine with an
> Athlon64 X2 3800+ and 1GB RAM, with two 240GB SATA drives in the array. It's
> a Dell E521
Is nexenta still an active project? And would you recommend using it?
--
Les Mikesell
[EMAIL PROTECTED]
---
the ZFS machine is a Nexenta (OpenSolaris+Ubuntu) machine with an
Athlon64 X2 3800+ and 1GB RAM, with two 240GB SATA drives in the array. It's a
Dell E521
On Nov 27, 2007 9:33 AM, Les Mikesell <[EMAIL PROTECTED]> wrote:
> Toni Van Remortel wrote:
>
> > But I do know that BackupPC does use more band
> I'm not sure what you mean by 'pool' here. The only thing relevant to
> what a backuppc rsync transfer will copy is the previous full of the
> same machine. Files of the same name in the same location will use the
> rsync algorithm to decide how much, if any, data needs to be copied -
> any
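As an aside, stock rsync can report how much data was literally sent versus
matched against the existing copy, which is a handy way to check claims like
this (illustrative invocation):

    rsync -a --stats src/ dest/ | grep -E 'Literal data|Matched data'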
Toni Van Remortel wrote:
> But I do know that BackupPC does use more bandwidth.
> Besides: when dumping a full backup, the 'pool' means (I hope): file
> already in pool, using it. If not, then there is a problem, as those
> files are already in another backup set of the test host. But BackupPC
>
What kind of specs does your server have (besides running ZFS)? That is,
processor, memory, etc.
I've got a P-III 500Mhz with 512MB RAM as my backup server. It also is my
file server (I want to split those into separate machines, but I can't right
now), with about 250GB of data. (Most of that i
With rsync, the time required to do a backup depends as much on the number
of files as the total size of the data. For example, backing up an email
server with 20GB in 2 million files will take much longer than backing up
10 2GB isos.(*)
So "I backed up X GB in Y minutes" is meaningless without
I back up about 6-7GB during a full backup of one of my SCO Unix servers
using rsync over ssh, and it takes under an hour.
4-5GB on a very old Unix machine using rsync on an NFS mount takes just
over an hour.
Full backups of my laptop are about 8GB and take about 15 minutes, though it
is on gigabit
Toni Van Remortel wrote:
> And I have set up BackupPC here 'as-is' in the first place, but we saw
> that the full backups, which ran every 7 days, took about 3 to 4 days to
> complete, while for the same hosts the incrementals finished in 1 hour.
> That's why I started digging into the principles of Back
Les Mikesell wrote:
> How are you measuring the traffic?
ntop
Anyway, I'm preparing a separate test setup now, to be able to do
correct tests (so both BackupPC and an rsync tree are using data from
the same time).
Test results will be here tomorrow.
But I do know that BackupPC does use more band
Toni Van Remortel wrote:
>> Could you give us some numbers? How much traffic are you seeing for
>> a BackupPC backup compared to a 'plain rsync'?
> Full backup, run for the 2nd time today (no changes in files):
> - BackupPC full dump: killed it after 30 mins, as it pulled all data
> again (2.8G
PS: I hacked BackupPC to skip the '--ignore-times' argument addition for
full backups.
--
Toni Van Remortel
Linux System Engineer @ Precision Operations NV
+32 3 451 92 26 - [EMAIL PROTECTED]
Nils Breunese (Lemonbit) wrote:
> It might be because BackupPC doesn't run the equivalent of rsync
> -auv. See $Conf{RsyncArgs} in your config.pl for the options used
> and remember rsync is talking to BackupPC's rsync interface, not a
> stock rsync.
Toni Van Remortel wrote:
>>> How can I reduce bandwidth usage for full backups?
>>>
>>> Even when using rsync, BackupPC does transfer all data on a full backup,
>>> and not only the modified files since the last incremental or full.
>> That's not true. Only modifications are transferred over the ne
Toni Van Remortel wrote:
> Nils Breunese (Lemonbit) wrote:
>> Toni Van Remortel wrote:
>>> How can I reduce bandwidth usage for full backups?
>>> Even when using rsync, BackupPC does transfer all data on a full backup,
>>> and not only the modified files since the last incremental or full.
>> That's not true
Nils Breunese (Lemonbit) wrote:
> Toni Van Remortel wrote:
>> How can I reduce bandwidth usage for full backups?
>>
>> Even when using rsync, BackupPC does transfer all data on a full backup,
>> and not only the modified files since the last incremental or full.
> That's not true. Only modification
Toni Van Remortel wrote:
> How can I reduce bandwidth usage for full backups?
> Even when using rsync, BackupPC does transfer all data on a full backup,
> and not only the modified files since the last incremental or full.
That's not true. Only modifications are transferred over the network
whe
How can I reduce bandwidth usage for full backups?
Even when using rsync, BackupPC does transfer all data on a full backup,
and not only the modified files since the last incremental or full.
I would love to see BackupPC performing this simple task:
- cp -al $last new
- rsync -au --delete host:/s
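For what it's worth, that is the classic hardlink-snapshot pattern; a minimal
sketch with assumed paths (and not how BackupPC stores its pool internally):

    last=/backups/host/last            # previous snapshot (assumed layout)
    new=/backups/host/today
    cp -al "$last" "$new"              # hardlink-copy the whole tree
    rsync -au --delete host:/ "$new"/  # changed files are recreated, breaking
                                       # their hardlinks; unchanged stay shared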
Jorge writes:
> Remote[2]: file has vanished: "/proc/2/exe"
You should exclude /proc from the backup by adding it to
$Conf{BackupFilesExclude}.
Craig
Hi list!
I recently set up a backuppc server running on debian etch 64 bits,
installation and configuration were clean and easy.
I did some test backups (full and incremental) with small folders and
everything was fine.
But when I tried to back up / (approximately 10 gigabytes), it doesn't w