Hi
On Wed, Nov 10, 2010 at 05:07, Andrew Spiers wrote:
> Hi, I've downloaded the Debian version of BackupPC from
> http://packages.debian.org/sid/backuppc
What's wrong with http://packages.ubuntu.com/maverick/backuppc?
Luis Paulo napsal(a):
> On Tue, Nov 9, 2010 at 10:19, Pavel Hofman wrote:
>> Hi,
>>
>> Is there a way to tell backuppc to finish the currently running backup
>> jobs and not to start new ones? We mirror backuppc partitions to
>> external drives via SW RAID and need to stop backuppc and umount the
Hi, I've downloaded the Debian version of BackupPC from
http://packages.debian.org/sid/backuppc
and am setting about installing it on an Ubuntu Maverick system for
evaluation.
Following the documentation, I've installed the modules mentioned via
cpan. All of them seem to have installed except
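A minimal sketch of pulling in the usual BackupPC 3.x Perl modules via
cpan follows; the exact list depends on your version and on which
transfer methods you use, so treat these names as assumptions and check
the documentation for your release.

    # commonly cited BackupPC 3.x dependencies (assumed list)
    cpan Compress::Zlib Archive::Zip XML::RSS File::RsyncP

File::RsyncP is only needed for the rsync transfer method, and XML::RSS
only for the RSS feed.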
Hi,
we have the same problem as Chris Baker.
BackupPC makes an incremental backup of some folders but doesn't copy
the modified files!
For example: there is a file called concept.odt, which was modified on
2010-11-09 at 09:45.
BackupPC starts an incremental backup on 2010-11-09 at 21:00 but doesn't
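When chasing a file an incremental skipped, it helps to look at what the
transfer actually saw. A sketch using BackupPC_zcat on the compressed
per-backup transfer log; the install path, host name, and backup number
below are placeholders, so adjust them to your layout.

    # show what incremental #123 transferred for this host (example paths)
    /usr/share/backuppc/bin/BackupPC_zcat \
        /var/lib/backuppc/pc/myhost/XferLOG.123.z | grep concept.odt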
On Tue, Nov 9, 2010 at 10:19, Pavel Hofman wrote:
> Hi,
>
> Is there a way to tell backuppc to finish the currently running backup
> jobs and not to start new ones? We mirror backuppc partitions to
> external drives via SW RAID and need to stop backuppc and umount the
> filesystem to keep the data
Quick question I could not find the answer to elsewhere.
I currently use BackupPC on Fedora, which is stuck at 3.1. The reason
given is that some Perl module dependencies are now bundled into the
BackupPC package, and Fedora requires that they be packaged separately
(no bundled libraries).
Did
>
> I'm archiving the BackupPC backup folder (/var/lib/BackupPC) to
> external disk with rsync.
>
> However, it looks like rsync is expanding the hard links?
>
> My total disk usage on the backup server is 407g, and the space used on
> the external drive is up to 726g.
>
> (using rsync -avh --delete
Hi,
I've been using BackupPC for several years without problems, but since I
started adding Windows 7 clients, they are only getting partial backups
with this reported as an error:
tarExtract: Done: 0 errors, 3720 filesExist, 623786455 sizeExist,
496652191 sizeExistComp, 3755 filesTotal, 63896377
And I should mention, too, that this is the first rsync of a freshly
formatted USB drive :)
On 11/9/2010 2:44 PM, Rob Poe wrote:
> I'm archiving the BackupPC backup folder (/var/lib/BackupPC) to
> external disk with rsync.
>
> However, it looks like rsync is expanding the hard links?
>
> My total
I'm archiving the BackupPC backup folder (/var/lib/BackupPC) to
external disk with rsync.
However, it looks like rsync is expanding the hard links?
My total disk usage on the backup server is 407g, and the space used on
the external drive is up to 726g.
(using rsync -avh --delete --quiet /var/
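A likely cause, for what it's worth: without -H, rsync does not preserve
hard links, so every file that BackupPC links between the pool and pc/
trees gets copied again as an independent file, and the destination
grows well past the source. A sketch, assuming /mnt/usb is where the
external drive is mounted:

    # -H preserves hard links (essential for a BackupPC pool)
    rsync -aH --delete /var/lib/BackupPC/ /mnt/usb/BackupPC/

Be warned that rsync keeps its hard-link table in memory, which can get
heavy on a large pool.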
On 11/9/2010 9:10 AM, Boniforti Flavio wrote:
> Hello Les.
>
>> An rsync full should be marked as a 'partial' with the
>> completed portion merged into the previous full as the
>> comparison base when it restarts. I think an incomplete
>> incremental is discarded. I'd bump up the timeout and add
> Well, by hand you'd do:
>
> ssh host 'tar -czvf - /dir' >/backups/foo.tgz
But wouldn't this create *one huge tarball*??? That's not what I'd like
to get...
Flavio Boniforti
PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL:
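If the worry is ending up with one huge tarball, the manual approach
extends naturally to one archive per directory; a sketch, with the host
name and paths as placeholders (and assuming no spaces in the directory
names):

    # one compressed tarball per subdirectory of /dir
    for d in $(ssh host 'ls /dir'); do
        ssh host "tar -czf - /dir/$d" > "/backups/$d.tgz"
    done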
On Tue, Nov 09, 2010 at 05:13:58PM +0100, Boniforti Flavio wrote:
>
> > Well, by hand you'd do:
> >
> > ssh host 'tar -czvf - /dir' >/backups/foo.tgz
>
> But wouldn't this create *one huge tarball*??? That's not what I'd
> like to get...
It was an example for your benefit in future; it has noth
Hi
>> How may I take a look at the log of the *actually running* processes? I
>> feel something may be stuck, but the process still is shown as
>> "running", therefore I'd like to have a look at what's happening, or at
>> what point it arrived.
>
> I'm not sure about how the logs are buffered, but
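One generic way to peek at a dump that looks stuck is to find the
transfer process and watch its open files or system calls; a sketch with
standard tools, where the PID is a placeholder:

    ps axw | grep BackupPC_dump    # find the dump for the host in question
    lsof -p 12345                  # which files does it have open?
    strace -p 12345 -e trace=read,write    # is it actually moving data?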
On Tue, Nov 09, 2010 at 09:37:01AM -0600, Richard Shaw wrote:
> On Tue, Nov 9, 2010 at 9:27 AM, Robin Lee Powell wrote:
> > On Tue, Nov 09, 2010 at 04:11:15PM +0100, Boniforti Flavio wrote:
> >> Hello Pavel.
> >>
> >> > for huge dirs with millions of files we got almost an order
> >> > of ma
On Tue, Nov 9, 2010 at 9:27 AM, Robin Lee Powell wrote:
> On Tue, Nov 09, 2010 at 04:11:15PM +0100, Boniforti Flavio wrote:
>> Hello Pavel.
>>
>> > for huge dirs with millions of files we got almost an order of
>> > magnitude faster runs with the tar mode instead of rsync (which
>> > eventually co
On Tue, Nov 09, 2010 at 04:11:15PM +0100, Boniforti Flavio wrote:
> Hello Pavel.
>
> > for huge dirs with millions of files we got almost an order of
> > magnitude faster runs with the tar mode instead of rsync (which
> > eventually consumed all the memory anyways :) )
>
> How would I be able to
On 11/9/10 1:24 AM, Boniforti Flavio wrote:
> Hello list.
>
> How may I take a look at the log of the *actually running* processes? I
> feel something may be stuck, but the process still is shown as
> "running", therefore I'd like to have a look at what's happening, or at
> what point it arrived.
On 11/9/10 2:13 AM, Boniforti Flavio wrote:
> Hello everybody.
>
> One of my remote servers has grown a single directory from a couple of
> GB to 20GB in one day. Now backups (rsync through ssh) seem to be taking
> ages and time out after 72000 seconds. My question: when such a backup
> gets stopped
Hello Les.
> An rsync full should be marked as a 'partial' with the
> completed portion merged into the previous full as the
> comparison base when it restarts. I think an incomplete
> incremental is discarded. I'd bump up the timeout and add a -C
> (compress) option to your ssh command if yo
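Concretely, both suggestions are config.pl settings (global or per-PC);
a sketch of the kind of change meant here, with the values as examples
only:

    # add -C to the ssh invocation for compression over the slow link
    $Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';
    # and raise the per-client timeout past the 72000s mentioned above
    $Conf{ClientTimeout} = 172800;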
Les Mikesell napsal(a):
> On 11/9/10 2:13 AM, Boniforti Flavio wrote:
>> Hello everybody.
>>
>> One of my remote servers has grown a single directory from a couple of
>> GB to 20GB in one day. Now backups (rsync through ssh) seem to be taking
>> ages and time out after 72000 seconds. My question: w
On Tue, 2010-11-09 at 12:41 +0100, Pavel Hofman wrote:
> Thanks a lot for your suggestion. In fact we use the DumpPreUserCmd to
> lock the backed-up machines to disable shutdown while the backup is in
> progress. You are right, it will work. Though IMHO it is an "unpretty"
> workaround :-) , especi
Hello Pavel.
> for huge dirs with millions of files we got almost an order
> of magnitude faster runs with the tar mode instead of rsync
> (which eventually consumed all the memory anyways :) )
How would I be able to use tar over a remote DSL connection?
Flavio Boniforti
PIRAMIDE INFORMATICA
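For what it's worth, BackupPC's tar transfer already runs the remote tar
over ssh, so it works across a DSL link the same way rsync does; a
sketch of the relevant config.pl settings, where the command shown is
only close to the stock default (with -C added for compression), so
check your version's default before copying it:

    $Conf{XferMethod}   = 'tar';
    $Conf{TarClientCmd} = '$sshPath -C -q -x -n -l root $host'
                        . ' env LC_ALL=C $tarPath -c -v -f - -C $shareName+'
                        . ' --totals';

Bear in mind that tar sends whole files, so on a thin pipe rsync's delta
transfer may still win despite tar's lower CPU and memory cost.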
Tyler J. Wagner napsal(a):
> On Tue, 2010-11-09 at 12:41 +0100, Pavel Hofman wrote:
>> Thanks a lot for your suggestion. In fact we use the DumpPreUserCmd to
>> lock the backed-up machines to disable shutdown while the backup is in
>> progress. You are right, it will work. Though IMHO it is an "u
Mirco Piccin napsal(a):
> Hi,
>
>> Is there a way to tell backuppc to finish the currently running backup
>> jobs and not to start new ones?
>
> maybe not the best way, but you could obtain that using DumpPreUserCmd
> and DumpPostUserCmd.
> You also need to set UserCmdCheckStatus = 1;
>
> In th
Hi,
> Is there a way to tell backuppc to finish the currently running backup
> jobs and not to start new ones?
maybe not the best way, but you could obtain that using DumpPreUserCmd and
DumpPostUserCmd.
You also need to set UserCmdCheckStatus = 1;
In the DumpPreUserCmd, you can use a DIY script
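To flesh that idea out: with UserCmdCheckStatus set, a non-zero exit
from DumpPreUserCmd aborts the dump, so new jobs are refused while a
flag file exists and running ones finish normally. A sketch; the gate
script and all paths below are hypothetical:

    # config.pl
    $Conf{DumpPreUserCmd}     = '/usr/local/bin/backuppc-gate';
    $Conf{UserCmdCheckStatus} = 1;

    # /usr/local/bin/backuppc-gate
    #!/bin/sh
    # refuse new dumps while the flag file is present
    [ -e /var/lib/backuppc/STOP_NEW_BACKUPS ] && exit 1
    exit 0

Touch the flag file before your mirror run, wait for the running dumps
to drain, then remove it afterwards.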
Hi,
Is there a way to tell backuppc to finish the currently running backup
jobs and not to start new ones? We mirror backuppc partitions to
external drives via SW RAID and need to stop backuppc and umount the
filesystem to keep the data. I do not want to interrupt the long-running
backups but befo
Hello everybody.
One of my remote servers has grown a single directory from a couple of
GB to 20GB in one day. Now backups (rsync through ssh) seem to be taking
ages and time out after 72000 seconds. My question: when such a backup
gets stopped, will the next task consider the already transferred