I'm trying to back up 6 different shares from a single Windows host.
According to the docs and
the mailing list archives I made different config.pl files under pc/ of the
form
host_share1.pl
host_share2.pl
and so on. They are also defined in the hosts file. "ClientNameAlias" is
set and works (
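For reference, a minimal sketch of one such per-share file, assuming a host named "winhost" and a share named "share1" (both invented, not from the post), with each pseudo-host aliased back to the real machine via ClientNameAlias:

```perl
# pc/winhost_share1.pl -- hypothetical per-share config; the host name,
# share name and transfer method are assumptions for illustration only.
$Conf{ClientNameAlias} = 'winhost';    # real name of the Windows box
$Conf{XferMethod}      = 'smb';
$Conf{SmbShareName}    = ['share1'];   # only this share for this entry
```

Each entry in the hosts file then gets its own schedule and browse view, while ClientNameAlias points them all at the same physical machine.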
Ludovic Drolez wrote:
> On Wed, Apr 09, 2008 at 10:12:09AM -0500, Les Mikesell wrote:
>> I'd probably look at what rdiff-backup does with incremental differences
>> and instead of chunking everything, just track changes where the
>> differences are small.
>
> Yes but rdiff-backup has no pooling/deduplication.
>
> Just thinking out loud here, but couldn't you achieve the same result
> by using the automounter? If the drive is present, the automounter
> would mount it and then BackupPC would be happy. If the drive
> isn't present, the mount should fail and BackupPC would error out
> because i
Hello,
since BackupPC is very handy, I would like to use it to keep an accurate
history (like CDP or CVS) of each machine, day by day.
So I would like to keep 365 days x 10 years of backups.
I do not understand whether it is possible, nor how to do it. If it is not possible
I would like to do the most sim
>> Can someone help me, please?
>>
> Yes.
>
Just thinking out loud here, but couldn't you achieve the same result
by using the automounter? If the drive is present, the automounter
would mount it and then BackupPC would be happy. If the drive
isn't present, the mount should fail and Bac
Mauro Condarelli wrote:
Hi,
I asked this before, but no one answered, so I will try again :)
I am using a large (500G) external USB disk as backup media.
It performs reasonably, so no sweat.
Problem is:
Is there a way to do a pre-check to see if the drive is actually mounted
and, if not, just s
On Mon, Apr 14, 2008 at 11:21:02AM -0400, Raman Gupta wrote:
> > I have three hosts configured to backup to my PC. Here are the speeds
> > from the host summary:
> >
> > host 1: 24.77 GB, 14,000 files, 18.78 MB/s (slower WAN link)
> > host 2: 1.27 GB, 4,000 files, 1.89 MB/s (faster WAN lin
On Apr 14, 2008, at 11:20 AM, Tino Schwarze wrote:
>
> Of
> course, you shouldn't underestimate the cost of managing a lot of
> small
> files (my pool has about 5 million files, some of them are pretty
> large), so the pool will have even more files which means more seeking
> and looking up file
You'll find your answer in the documentation
http://backuppc.sourceforge.net/faq/limitations.html#maximum_backup_file_sizes
Simone Marzona wrote:
> Hi all
>
> When I extract some data from backuppc on a windows host the extraction
> stops at 2 GB. This happens either when I use the archive functi
Simone Marzona wrote:
> When I extract some data from backuppc on a windows host the
> extraction
> stops at 2 GB. This happens both when I use the archive function
> and when I recover, with or without compression.
>
> This happens only when working on Windows, even though the FS is NTFS.
>
> Is ther
Hi all
When I extract some data from backuppc on a windows host the extraction
stops at 2 GB. This happens both when I use the archive function
and when I recover, with or without compression.
This happens only when working on Windows, even though the FS is NTFS.
Is there a solution for this problem?
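The 2 GB ceiling usually comes from the extraction tooling on the Windows side rather than from NTFS itself (see the limitations page linked elsewhere in this thread). One hedged workaround, assuming a GNU userland on the BackupPC server and file names invented for illustration, is to split the archive into pieces small enough for any tool and rejoin them before extracting:

```shell
# Split a large archive into 1 GB pieces (file names are examples).
split -b 1000m host.0.tar.gz host.0.tar.gz.part-

# Rejoin before extraction; on Windows the equivalent is
#   copy /b host.0.tar.gz.part-* host.0.tar.gz
cat host.0.tar.gz.part-* > host.0.rejoined.tar.gz
```

`split` names the pieces with lexically sorted suffixes (part-aa, part-ab, ...), so the shell glob reassembles them in the right order.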
Hi all
I searched the mailing list archive for some info, but I didn't find
anything.
I think that some improvements in the user interface could be useful;
it should be easy to get the folder size in the explore window of a
host. If I need to recover an entire directory I need to know the si
Hi!
I have some problems with my backuppc.
First one is that, a few days ago, the web interface started reporting the
wrong time in the first status line. The "started at" time is 2 hours in the
future, but my system time is correct. The other lines in the status window
are correct.
The problem
On 04/14 02:26 , Micha Silver wrote:
> The newer one, configured to backup some workstations, won't start
> scheduled backups.
Are you out of disk space?
--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
On Mon, Apr 14, 2008 at 10:09:57AM +0200, Ludovic Drolez wrote:
> > How long are you willing to have your backups and restores take? If
> > you do more processing on the backed up files, you'll take a greater
>
> Not true:
> - working with fixed size chunks may improve speed, because algorit
On Mon, Apr 14, 2008 at 12:55:22PM -0400, Alexandre Joly wrote:
> Has anyone ever managed to add a functionality to archive in zip format
> additionally with encryption?
> Maybe a slight modification of the BackupPC_archiveHost would be
> necessary or is it too complex?
Zip encryption is useless
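The legacy zip password scheme is easily broken, so bolting it onto BackupPC_archiveHost buys little. A hedged alternative sketch, assuming OpenSSL 1.1.1+ on the server (the host name, share name, and key file path are invented for illustration): encrypt the tar stream itself instead.

```shell
# Create a tar of host "winhost" (latest backup, share "share1"),
# compress it, and encrypt it symmetrically. -pbkdf2 needs OpenSSL 1.1.1+;
# the key file path is an assumption.
/usr/share/backuppc/bin/BackupPC_tarCreate -h winhost -n -1 -s share1 / \
  | gzip \
  | openssl enc -aes-256-cbc -pbkdf2 -salt \
      -pass file:/etc/backuppc/archive.key > winhost.tar.gz.enc

# Decrypt with the same key file:
#   openssl enc -d -aes-256-cbc -pbkdf2 \
#     -pass file:/etc/backuppc/archive.key < winhost.tar.gz.enc | gunzip
```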
Hi Craig,
On Mon, Apr 14, 2008 at 06:33:19AM -0700, Craig Barratt wrote:
> > I found a problem. IO::Dirent returns 0 as the type for the directories,
> > so BackupPC::Lib->find() doesn't descend into them. Why it works when
> > run manually, I don't know.
> >
> > It does return a type 4 on ext3, on xfs it's always 0.
Mauro Condarelli wrote:
> Hi,
> I asked this before, but no one answered, so I will try again :)
>
> I am using a large (500G) external USB disk as backup media.
> It performs reasonably, so no sweat.
>
> Problem is:
> Is there a way to do a pre-check to see if the drive is actually mounted
> and
Has anyone ever managed to add a functionality to archive in zip format
additionally with encryption?
Maybe a slight modification of the BackupPC_archiveHost would be
necessary or is it too complex?
--
Alexandre Joly
Network Administrator
Infodev Electronic Designers Intl
(418) 681-3539 ext. 15
Hi,
I asked this before, but no one answered, so I will try again :)
I am using a large (500G) external USB disk as backup media.
It performs reasonably, so no sweat.
Problem is:
Is there a way to do a pre-check to see if the drive is actually mounted
and, if not, just skip the scheduled backup?
Raman Gupta wrote:
> I have three hosts configured to backup to my PC. Here are the speeds
> from the host summary:
>
> host 1: 24.77 GB, 14,000 files, 18.78 MB/s (slower WAN link)
> host 2: 1.27 GB, 4,000 files, 1.89 MB/s (faster WAN link)
> host 3: 4.82 GB, 190,000 files, 0.66 MB/s (fa
On Monday 14 April 2008 16:20:08 Nils Breunese (Lemonbit) wrote:
> Have you read 'How BackupPC Finds Hosts'?
> http://backuppc.sourceforge.net/faq/BackupPC.html#how_backuppc_finds_hosts
Yes, the following is part of my output when running
$> /usr/share/backuppc/bin/BackupPC_dump -v dalek
---
NetB
Hello,
I have a problem when I try to extract, with the tar command, the archive
created by the archive function:
all the files are stored in the backup tar file with the RsyncShareName
path prepended to the true path. An example:
RsyncShareName is disk_c, so the file /data/fileA.txt on the client
becomes /disk_c/data/f
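The share-name prefix is how BackupPC namespaces multiple shares inside one archive. If GNU tar is available, the leading directory can be dropped at extraction time (the archive file name here is invented; the paths follow the example above):

```shell
# Extract while removing the leading "disk_c/" component, so
# disk_c/data/fileA.txt lands at data/fileA.txt. Requires GNU tar.
tar -xzf host.tar.gz --strip-components=1
```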
Wayne Gemmell wrote:
> On Monday 14 April 2008 15:46:37 Paul Horn wrote:
>> Hosts with a dash in the name are not resolved by nmb-lookup. I
>> ended up
>> putting reserved addresses in my local DHCP server so that such
>> workstations always receive a "known" ip when on my network, then
>> mad
On Monday 14 April 2008 15:46:37 Paul Horn wrote:
> Hosts with a dash in the name are not resolved by nmb-lookup. I ended up
> putting reserved addresses in my local DHCP server so that such
> workstations always receive a "known" ip when on my network, then made a
> corresponding entry in /etc/hos
Hosts with a dash in the name are not resolved by nmb-lookup. I ended up
putting reserved addresses in my local DHCP server so that such
workstations always receive a "known" ip when on my network, then made a
corresponding entry in /etc/hosts on the backuppc server.
- Paul
On Mon, 2008-04-14 a
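A minimal sketch of the static-mapping workaround described above (the address and host name are invented):

```shell
# Append a fixed entry on the BackupPC server so name resolution no
# longer depends on nmblookup coping with the dash in the host name.
printf '192.168.1.50\tmy-laptop\n' >> /etc/hosts
```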
Tino writes:
> I found a problem. IO::Dirent returns 0 as the type for the directories,
> so BackupPC::Lib->find() doesn't descend into them. Why it works when
> run manually, I don't know.
>
> It does return a type 4 on ext3, on xfs it's always 0.
Good detective work.
There is a check in Back
On Tuesday 01 April 2008 17:29:42 Les Mikesell wrote:
> kanti wrote:
> > Hi, thanks for your valuable reply; now everything is fine. But when I
> > try to take a backup of the client again, the same error occurs
> > (Unable to read 4 bytes). The error is as follows:-
> > full backup
I've been scratching my head over this for more than a week.
I have two backup servers running, both on CentOS (64 bit). One has been
humming along nicely for several months now, backing up several servers.
The newer one, configured to backup some workstations, won't start
scheduled backups. I
On Wed, Apr 09, 2008 at 10:12:09AM -0500, Les Mikesell wrote:
> I'd probably look at what rdiff-backup does with incremental differences
> and instead of chunking everything, just track changes where the
> differences are small.
Yes but rdiff-backup has no pooling/deduplication.
With that featu
On Wed, Apr 09, 2008 at 06:11:58PM -0700, Michael Barrow wrote:
> How long are you willing to have your backups and restores take? If
> you do more processing on the backed up files, you'll take a greater
Not true:
- working with fixed size chunks may improve speed, because algorithms
could