Re: [BackupPC-users] improving the deduplication ratio

2008-04-14 Thread Ludovic Drolez
On Wed, Apr 09, 2008 at 06:11:58PM -0700, Michael Barrow wrote:
> How long are you willing to have your backups and restores take? If  
> you do more processing on the backed up files, you'll take a greater  

Not true:
- working with fixed-size chunks may improve speed, because the algorithms
(MD5, compression, etc.) can be optimized for a single chunk size;
- if block-level deduplication lets you back up only the last 64 KB of a
log file instead of the full 5 MB file, do you think it will take longer
to write 64 KB than 5 MB?

File-level plus block-level deduplication would improve both BackupPC's
performance and its space savings.
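
A minimal sketch of the fixed-size-chunk idea in Perl - chunk size and
pool layout here are assumptions for illustration, not anything BackupPC
does today:

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);

    my $CHUNK = 64 * 1024;              # assumed 64 KB chunk size

    # Split a file into fixed-size chunks; store each chunk once,
    # keyed by its MD5, and return the manifest needed to rebuild it.
    sub store_chunks {
        my ($file, $pooldir) = @_;
        open(my $fh, '<', $file) or die "open $file: $!";
        binmode $fh;
        my @chunk_ids;
        while ( read($fh, my $buf, $CHUNK) ) {
            my $id   = md5_hex($buf);
            my $path = "$pooldir/$id";
            unless ( -e $path ) {       # only new chunks cost disk and I/O
                open(my $out, '>', $path) or die "write $path: $!";
                binmode $out;
                print {$out} $buf;
                close $out;
            }
            push @chunk_ids, $id;
        }
        close $fh;
        return \@chunk_ids;
    }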

Cheers,


-- 
Ludovic Drolez.

http://www.palmopensource.com   - The PalmOS Open Source Portal
http://www.drolez.com  - Personal site - Linux, Zaurus and PalmOS stuff



Re: [BackupPC-users] improving the deduplication ratio

2008-04-14 Thread Ludovic Drolez
On Wed, Apr 09, 2008 at 10:12:09AM -0500, Les Mikesell wrote:
> I'd probably look at what rdiff-backup does with incremental differences 
> and instead of chunking everything, just track changes where the 
> differences are small.

Yes, but rdiff-backup has no pooling/deduplication.

With that feature, BackupPC would effectively be rdiff-backup with
pooling on top.

Cheers,

-- 
Ludovic Drolez.

http://www.palmopensource.com   - The PalmOS Open Source Portal
http://www.drolez.com  - Personal site - Linux, Zaurus and PalmOS stuff



[BackupPC-users] scheduled backups won't start

2008-04-14 Thread Micha Silver
I've been scratching my head over this for more than a week.
I have two backup servers running, both on CentOS (64 bit). One has been 
humming along nicely for several months now, backing up several servers.

The newer one, configured to back up some workstations, won't start
scheduled backups. I can manually start backups from the CGI interface,
both full and incremental, and they run to completion OK. My wakeup time
is 12:00 noon, and blackout is from 07:00 to 11:30. In the log I see the
cpool cleanup and the BackupPC_link lines, but no backup, and no email
reporting a failure.

I've been through config.pl several times and can't see what I'm
missing (I re-checked that $Conf{FullPeriod} has a value, 6.97).
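
For reference, the schedule described above translates to config.pl
entries roughly like these (the weekday list is an assumption):

    $Conf{WakeupSchedule} = [12];    # wake at noon only
    $Conf{BlackoutPeriods} = [
        { hourBegin => 7.0, hourEnd => 11.5, weekDays => [0..6] },
    ];
    $Conf{FullPeriod} = 6.97;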

Any ideas?

Thanks,
Micha

-- 
Micha Silver
Arava Development Co
+972-8-6592270




[BackupPC-users] Backup of single shares

2008-04-14 Thread Hermann-Josef Beckers
I'm trying to back up 6 different shares from a single Windows host.
According to the docs and the mailing list archives I made different
config.pl files under pc/ of the form

host_share1.pl
host_share2.pl

and so on. They are also defined in the hosts file. "ClientNameAlias" is
set and works (1).
But BackupPC insists on trying to back up the share defined in config.pl:

My config.pl:

#
$Conf{RsyncShareName} = '';

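(For comparison, each per-share file usually looks something like this -
host and share names here are placeholders:)

    # pc/host_share1.pl -- hypothetical example
    $Conf{ClientNameAlias} = 'realhost.example.com';  # the FQDN, see (1)
    $Conf{RsyncShareName}  = ['share1'];              # one share per pseudo-host
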
For test purposes I also defined a share "ftp" in the above variable,
which is not defined on the client side. Following is the relevant log
file part:

2008-04-14 13:42:09 Got fatal error during xfer (auth required, but 
service  is open/insecure)
2008-04-14 13:42:14 Backup aborted (auth required, but service  is 
open/insecure)
2008-04-14 13:43:32 full backup started for directory ftp
2008-04-14 13:43:32 Got fatal error during xfer (Unknown module 'ftp')
2008-04-14 13:43:37 Backup aborted (Unknown module 'ftp')
2008-04-14 13:56:51 full backup started for directory 
2008-04-14 13:56:52 Got fatal error during xfer (auth required, but 
service  is open/insecure)
2008-04-14 13:56:57 Backup aborted (auth required, but service  is 
open/insecure)
2008-04-14 14:00:00 full backup started for directory 
2008-04-14 14:00:01 Got fatal error during xfer (auth required, but 
service  is open/insecure)
2008-04-14 14:00:06 Backup aborted (auth required, but service  is 
open/insecure)

Commenting out the above variable also made no difference. Any hints?


Yours
hjb

(1) For the record/archive: for me it works only with the FQDN. Using
only the host part gives DNS/nmblookup errors.


Re: [BackupPC-users] Error :- No ping Response

2008-04-14 Thread Wayne Gemmell
On Tuesday 01 April 2008 17:29:42 Les Mikesell wrote:
> kanti wrote:
> > Hi, thanks for your valuable reply - now everything is fine. But when
> > I try to back up the client again, the same error occurs
> > (Unable to read 4 bytes). The error is as follows:
> > full backup started for directory /
> > Running: /usr/bin/ssh -q -x -l root scn-ws9 /usr/bin/rsync --server
> > --sender --numeric-ids   --perms --owner  --group -D --links --hard-links
> > --times --block-size=2048 --recursive --ignore-times . / Xfer PIDs are
> > now 11798
>
> What happens if you try to execute that exact command line as the
> backuppc user yourself? It should send some odd character as the start
> of the transfer protocol and wait - hit a control-C to exit.  If you get
> a password prompt or ssh error, you still don't have the keys set up
> correctly.  If you see some other text message it may be coming from the
>   login scripts on the remote side and causing trouble with the rsync
> protocol.

I have a very similar problem to Kanti's, so I'm going to pick up where he
left off. The command is run pointing at a DHCP client. The nmblookup
works, but the rsync command above fails because the IP address isn't
passed from the lookup to ssh. Should I tell my server to do an nmblookup
when resolving the host name?
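
(If nmblookup finds the address but ssh can't resolve the name, one
possible workaround is to hand the resolved address to ssh via BackupPC's
documented $hostIP substitution variable:)

    $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $hostIP $rsyncPath $argList+';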

-- 
Regards
Wayne 



Re: [BackupPC-users] Solution for Re: no cpool info shown on web interface

2008-04-14 Thread Craig Barratt
Tino writes:

> I found a problem. IO::Dirent returns 0 as the type for the directories,
> so BackupPC::Lib->find() doesn't descend into them. Why it does so when
> run manually, I don't know.
> 
> It does return type 4 (DT_DIR) on ext3; on xfs it's always 0.

Good detective work.

There is a check in BackupPC::Lib that should catch this.
I'm not sure why that check passes in this case.  Here's
the code:

BEGIN {
    eval "use IO::Dirent qw( readdirent DT_DIR );";
    if ( !$@ && opendir(my $fh, ".") ) {
        #
        # Make sure the IO::Dirent really works - some installs
        # on certain file systems don't return a valid type.
        #
        my $dt_dir = eval("DT_DIR");
        foreach my $e ( readdirent($fh) ) {
            if ( $e->{name} eq "." && $e->{type} == $dt_dir ) {
                $IODirentOk = 1;
                last;
            }
        }
        closedir($fh);
    }
};

Ah, I bet "." (where BackupPC is running) is on an ext3 file system,
whereas it should be checking $TopDir.

Unfortunately this test is done before the config file has been read.
I would guess that if you replace this line:

if ( !$@ && opendir(my $fh, ".") ) {

with your hardcoded TopDir, e.g.:

if ( !$@ && opendir(my $fh, "/data/BackupPC") ) {

then the correct thing should happen (IO::Dirent should be disabled).

If you can confirm that, then the fix is to delay the IO::Dirent check
until after TopDir is known.
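
(A sketch of that delayed check as a function of the directory to test -
not the actual patch:)

    # Hypothetical helper: run the DT_DIR sanity check against a given
    # directory (e.g. $TopDir) once the config has been read.
    # Assumes IO::Dirent was loaded OK in the BEGIN block above.
    sub checkIODirentOk
    {
        my ($dir) = @_;
        opendir(my $fh, $dir) or return 0;
        my $dt_dir = eval("DT_DIR");
        my $ok = 0;
        foreach my $e ( readdirent($fh) ) {
            if ( $e->{name} eq "." && $e->{type} == $dt_dir ) {
                $ok = 1;
                last;
            }
        }
        closedir($fh);
        return $ok;
    }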

Craig



Re: [BackupPC-users] Error :- No ping Response

2008-04-14 Thread Paul Horn
Hosts with a dash in the name are not resolved by nmblookup. I ended up
putting reserved addresses in my local DHCP server so that such
workstations always receive a "known" IP when on my network, then made a
corresponding entry in /etc/hosts on the BackupPC server.

 - Paul


On Mon, 2008-04-14 at 15:10 +0200, Wayne Gemmell wrote:

> On Tuesday 01 April 2008 17:29:42 Les Mikesell wrote:
> > kanti wrote:
> > > Running: /usr/bin/ssh -q -x -l root scn-ws9 /usr/bin/rsync --server



> I have a very similar problem to Kanti so I'm going to finish where he left 
> off. The command is run pointing to a dhcp client. The nmblookup works but 
> the rsync command above fails because the IP address isn't passed from the 
> lookup to ssh. Do I then tell my server to do a nmblookup when searching for 
> the host name?
> 


Re: [BackupPC-users] Error :- No ping Response

2008-04-14 Thread Wayne Gemmell
On Monday 14 April 2008 15:46:37 Paul Horn wrote:
> Hosts with a dash in the name are not resolved by nmblookup. I ended up
> putting reserved addresses in my local DHCP server so that such
> workstations always receive a "known" IP when on my network, then made a
> corresponding entry in /etc/hosts on the BackupPC server.
>
I knew it wasn't a great plan hijacking another thread. The workstations
I'm trying to debug are dude and bob2 - no funny characters. The nmblookup
does work; it just doesn't seem to use the result in the rsync.



-- 
Regards
Wayne



Re: [BackupPC-users] Error :- No ping Response

2008-04-14 Thread Nils Breunese (Lemonbit)
Wayne Gemmell wrote:

> On Monday 14 April 2008 15:46:37 Paul Horn wrote:
>> Hosts with a dash in the name are not resolved by nmblookup. I ended up
>> putting reserved addresses in my local DHCP server so that such
>> workstations always receive a "known" IP when on my network, then made
>> a corresponding entry in /etc/hosts on the backuppc server.
>
> I knew it wasn't a great plan hijacking another thread. The workstations
> I am trying to debug are dude and bob2, no funny characters. The
> nmblookup does work, it just doesn't seem to use the result in the rsync.

Have you read 'How BackupPC Finds Hosts'? 
http://backuppc.sourceforge.net/faq/BackupPC.html#how_backuppc_finds_hosts

Nils Breunese.



[BackupPC-users] A problem to extract the file zipped , with hard links, that backup by archive function

2008-04-14 Thread Ferri Alessandro
Hello,

I have a problem when I try to extract, with the tar command, the file
created by the archive function: all files are stored in the backup tar
file with the RsyncShareName path prepended to the true path. For
example, if RsyncShareName is disk_c, the client file /data/fileA.txt
becomes /disk_c/data/fileA.txt. But if I have a hard link
/data/filelinktoA.txt that points to /data/fileA.txt, extraction fails
when tar tries to recreate /data/filelinktoA.txt, because it cannot find
/data/fileA.txt - the file is under /disk_c/data/fileA.txt.

Is there another way to extract or restore files backed up by the
archive function?
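
(One approach that should work, assuming a compressed tar from the
archive function: extract the whole archive into a staging tree, so the
/disk_c prefix that the hard-link targets refer to exists, then copy the
subtree into place - file names here are placeholders:)

$> mkdir /tmp/restore
$> tar -C /tmp/restore -xzf host.0.tar.gz
$> cp -a /tmp/restore/disk_c/data/. /data/   # GNU cp -a preserves hard links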

Thanks 
Axel




Re: [BackupPC-users] Error :- No ping Response

2008-04-14 Thread Wayne Gemmell
On Monday 14 April 2008 16:20:08 Nils Breunese (Lemonbit) wrote:
> Have you read 'How BackupPC Finds Hosts'?
> http://backuppc.sourceforge.net/faq/BackupPC.html#how_backuppc_finds_hosts
Yes. The following is part of my output from running
$> /usr/share/backuppc/bin/BackupPC_dump -v dalek

---
NetBiosInfoGet: success, returning host dalek, user dalek
full backup started for directory /home
started full dump, share=/home
Running: /usr/bin/ssh -q -x -l root 
dalek /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group 
--devices --links --times --block-size=2048 --recursive -D --ignore-times . 
/home/
Xfer PIDs are now 12711
xferPids 12711
Read EOF: Connection reset by peer
Tried again: got 0 bytes
Done: 0 files, 0 bytes
Got fatal error during xfer (Unable to read 4 bytes)
cmdSystemOrEval: about to system /bin/ping -c 1 192.168.0.17
cmdSystemOrEval: finished: got output PING 192.168.0.17 (192.168.0.17) 56(84) 
bytes of data.
64 bytes from 192.168.0.17: icmp_seq=1 ttl=64 time=0.228 ms


When I run the rsync part on its own, it says "unknown host". But running

$> ssh -l root 192.168.0.17 whoami

I get "root".

The rsync versions are the same on both hosts.

This leaves me confused.
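
(If ssh by itself can't resolve dalek while nmblookup can, a simple
first step is a static entry on the BackupPC server, e.g. this line in
/etc/hosts - address taken from the ping output above:)

192.168.0.17   dalek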

-- 
Regards
Wayne



Re: [BackupPC-users] Wildly different speeds for hosts

2008-04-14 Thread Raman Gupta
Raman Gupta wrote:
> I have three hosts configured to backup to my PC. Here are the speeds
> from the host summary:
> 
> host 1:  24.77 GB,  14,000 files, 18.78 MB/s (slower WAN link)
> host 2:   1.27 GB,   4,000 files,  1.89 MB/s (faster WAN link)
> host 3:   4.82 GB, 190,000 files,  0.66 MB/s (fast LAN link)
> 
> They all use rsync with the same setup, other than the exclude list.
> Backups are configured to run one at a time so there is no overlap
> between them.
> 
> The speed of host 3 concerns me. Host 3 is by far the beefiest
> machine, and on the fastest network link of all the hosts, but yet
> backs up at only 0.66 MB/s (incrementals are even slower).

OK, it seems that the number of files has a large non-linear effect on
the performance of BackupPC. I excluded a bunch of stuff from my host 3
backup, and the new stats are:

host 3:    4.2 GB,  85,000 files,  2.19 MB/s

For a file count reduction factor of 2.2, there was a speed increase
factor of 3.3.

Cheers,
Raman



[BackupPC-users] Backup to USB disk.

2008-04-14 Thread Mauro Condarelli
Hi,
I asked this before, but no one answered, so I will try again :)

I am using a large (500G) external USB disk as backup media.
It performs reasonably, so no sweat.

Problem is: is there a way to do a pre-check to see whether the drive is
actually mounted and, if not, just skip the scheduled backup? It would be
easy to put a do_not_backup file in the directory over which I mount the
drive; I could then test whether that file is present (no disk) or absent
(something was mounted over it). Unfortunately I have no idea where to
put such a test in BackupPC!
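
(One place such a test can live, assuming the marker-file scheme above: a
$Conf{DumpPreUserCmd} script whose non-zero exit aborts the dump when
$Conf{UserCmdCheckStatus} is set - path and script name are placeholders:)

    #!/usr/bin/perl
    # /usr/local/bin/check_usb_mounted (hypothetical name)
    # Fail if the marker is still visible, i.e. the disk is NOT mounted.
    exit( -e '/mnt/usbdisk/do_not_backup' ? 1 : 0 );

And in config.pl:

    $Conf{DumpPreUserCmd}     = '/usr/local/bin/check_usb_mounted';
    $Conf{UserCmdCheckStatus} = 1;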

Can someone help me, please?

Related issue: I would like to use a small pool of identical external
HDs to increase security further. Aside from rotating the disks
round-robin at some interval (weekly?), is there anything else to be done
to notify BackupPC that it has new media to use? I guess not, but I would
like someone to confirm this.

Thanks in Advance
Mauro



[BackupPC-users] Archive encrypted zip

2008-04-14 Thread Alexandre Joly
Has anyone ever managed to add a functionality to archive in zip format 
additionally with encryption?
Maybe a slight modification of the BackupPC_archiveHost would be 
necessary or is it too complex?

-- 
Alexandre Joly
Network Administrator
Infodev Electronic Designers Intl
(418) 681-3539 ext. 153




Re: [BackupPC-users] Backup to USB disk.

2008-04-14 Thread Martin Leben
Mauro Condarelli wrote:
> Hi,
> I asked this before, but no one answered, so I will try again :)
> 
> I am using a large (500G) external USB disk as backup media.
> It performs reasonably, so no sweat.
> 
> Problem is:
> Is there a way to do a pre-check to see if the drive is actually mounted
> and, if not, just skip the scheduled backup?
> It would be easy to put a do_not_backup file in the directory over which
> I mount the remote.
> I could then do a test to see if that file is present (no disk) or if it
> is absent (something was mounted over it).
> Unfortunately I have no idea where to put such a test in BackupPC!
> 
> Can someone help me, please?
> 
> Related issue:
> I would like to use a small pool of identical external HDs in order to
> increase further security.


Hi Mauro,

Considering what you seem to want to achieve, I would suggest another
approach: use at least three disks in a rotating scheme with RAID1.

Say I have three disks labeled 1, 2 and 3. I would rotate them according
to the schedule below, which guarantees that:
- there is always at least one disk in the BackupPC server;
- there is always at least one disk in off-site storage;
- the disks are never all at the same location.

Schedule (a = attached, o = off-site):

disk:   1 2 3
        a o o
        a a o  -> RAID sync
        o a o
        o a a  -> RAID sync
        o o a
        a o a  -> RAID sync
        . . .

An even safer approach would, of course, be to rotate four disks so that
at least two are always attached to the BackupPC server.
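
(With Linux md RAID1, the swap steps are roughly as follows - the md
device and partition names are assumptions:)

$> mdadm /dev/md0 --add /dev/sdc1    # attach the incoming disk, wait for resync
$> mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1    # detach the outgoing disk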

Good luck!
/Martin Leben




Re: [BackupPC-users] Solution for Re: no cpool info shown on web interface

2008-04-14 Thread Tino Schwarze
Hi Craig,

On Mon, Apr 14, 2008 at 06:33:19AM -0700, Craig Barratt wrote:

> > I found a problem. IO::Dirent returns 0 as the type for the directories,
> > so BackupPC::Lib->find() doesn't descend into them. Why it does so when
> > run manually, I don't know.
> > 
> > It does return type 4 (DT_DIR) on ext3; on xfs it's always 0.
> 
> Good detective work.
> 
> There is a check in BackupPC::Lib that should catch this.
> I'm not sure why that check passes in this case.  Here's
> the code:

I've looked into that today (and have already written to the -devel list
about it).

> BEGIN {
>     eval "use IO::Dirent qw( readdirent DT_DIR );";
>     if ( !$@ && opendir(my $fh, ".") ) {
>         #
>         # Make sure the IO::Dirent really works - some installs
>         # on certain file systems don't return a valid type.
>         #
>         my $dt_dir = eval("DT_DIR");
>         foreach my $e ( readdirent($fh) ) {
>             if ( $e->{name} eq "." && $e->{type} == $dt_dir ) {
>                 $IODirentOk = 1;
>                 last;
>             }
>         }
>         closedir($fh);
>     }
> };
> 
> Ah, I bet "." (where BackupPC is running) is on an ext3 file system,
> whereas it should be checking $TopDir.

Yes, that's what I figured out, too. When run from the init script, "."
happened to be ext3, while my manual invocation was from xfs.

> Unfortunately this test is done before the config file has been read.
> I would guess that if you replace this line:
> 
> if ( !$@ && opendir(my $fh, ".") ) {
> 
> with your hardcoded TopDir, eg:
> 
> if ( !$@ && opendir(my $fh, "/data/BackupPC") ) {
> 
> then the correct thing should happen (IO::Dirent should be disabled).
> 
> If you can confirm that, 

I can confirm that this fix works as expected - BackupPC_nightly is
processing my pool. I started it via the init script from /root.

> then the fix is I should delay the check on IO::Dirent until after
> TopDir is known.

With my patch applied, we could always use IO::Dirent (if it's available)
and handle the type==0 case gracefully - but only when the type is
actually requested. I'm not sure anybody would benefit, though, i.e.
whether anything calls dirRead requesting only inode and nlink.

Bye,

Tino.

-- 
"There is no way to peace. Peace is the way." (Mahatma Gandhi)

www.craniosacralzentrum.de
www.forteego.de



Re: [BackupPC-users] Archive encrypted zip

2008-04-14 Thread Tino Schwarze
On Mon, Apr 14, 2008 at 12:55:22PM -0400, Alexandre Joly wrote:
> Has anyone ever managed to add a functionality to archive in zip format 
> additionally with encryption?
> Maybe a slight modification of the BackupPC_archiveHost would be 
> necessary or is it too complex?

Zip encryption is useless - IIRC it can be cracked within seconds.
Rather, use GPG or some other method to encrypt the generated archive.
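
(For example, symmetric encryption of a finished archive - the file name
is a placeholder:)

$> gpg --symmetric --cipher-algo AES256 host.0.tar.gz   # writes host.0.tar.gz.gpg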

HTH,

Tino.

-- 
"There is no way to peace. Peace is the way." (Mahatma Gandhi)

www.craniosacralzentrum.de
www.forteego.de



Re: [BackupPC-users] improving the deduplication ratio

2008-04-14 Thread Tino Schwarze
On Mon, Apr 14, 2008 at 10:09:57AM +0200, Ludovic Drolez wrote:

> > How long are you willing to have your backups and restores take? If  
> > you do more processing on the backed up files, you'll take a greater  
> 
> Not true :
> - working with fixed size chunks may improve speed, because algorithms 
> could be optimized for 1 chunk size (md5, compression, etc)
> - if you implement block level deduplication to backup only the last
> 64kb of a log file, instead of the full 5 mb file, do you think it
> will take longer to write 64 kb than 5 mb ?
> 
> File + block level deduplication will improve both BackupPC's
> performance, and space savings.

Hm. Rsync has a --block-size option, so this should be doable. Of course,
you shouldn't underestimate the cost of managing a lot of small files (my
pool has about 5 million files, some of them pretty large): the pool
would have even more files, which means more seeking and more looking up
of file blocks.

IIRC, the rsync-style backup currently works like this: the remote file
is rsync'ed against a version of that file from the same host. After the
backup is done, BackupPC_link looks through the received files and either
links them into the pool, if they're new, or removes them and hardlinks
them from the pool.
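
(A minimal sketch of that link-or-insert idea - BackupPC's real pool uses
a different hash path layout and collision chains; names here are only
illustrative:)

    use Digest::MD5;

    sub pool_link {
        my ($file, $pooldir) = @_;
        open(my $fh, '<', $file) or die "open $file: $!";
        binmode $fh;
        my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
        close $fh;
        my $pooled = "$pooldir/$digest";
        if ( -e $pooled ) {
            unlink $file;                # already pooled: reuse the pool copy
            link $pooled, $file or die "link: $!";
        } else {
            link $file, $pooled or die "link: $!";   # new content: seed the pool
        }
    }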

Introducing file chunking would add a new abstraction layer: a file would
need to be split into chunks and recreated on restore. Currently you can
go to a host's backup directory, take a file and use it directly if it is
uncompressed (if it's compressed, you've got to use BackupPC_zcat
anyway). Whether a file in the pool is still used by some backup is
currently tracked by the file system via the hardlink count. Either we
drop that hardlink scheme altogether (pool cleanup would become very
expensive) or we need to invent some way to hardlink the 31,250 chunks of
a 2 GB file into a directory in a sane way. And there are files a lot
larger than 2 GB around here - I've got some VMware images in backup
(which shouldn't be there, I know), and I'm not fond of having tens of
thousands of extra files on the file system just because an image is
split into 64 KB chunks.

But this is just my guess - one of the developers needs to think this
through and lay out the consequences.

Bye,

Tino.

-- 
"There is no way to peace. Peace is the way." (Mahatma Gandhi)

www.craniosacralzentrum.de
www.forteego.de



Re: [BackupPC-users] scheduled backups won't start

2008-04-14 Thread Carl Wilhelm Soderstrom
On 04/14 02:26, Micha Silver wrote:
> The newer one, configured to backup some workstations,  won't start 
> scheduled backups. 

Are you out of disk space?

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



[BackupPC-users] 2 Problems: Wrong time and unsaved partial backups

2008-04-14 Thread Marco
Hi!
I have two problems with my BackupPC.

The first is that, since a few days ago, the web interface reports the
wrong time in the first status line: the "started at" time is 2 hours in
the future, although my system time is correct and the other lines in the
status window are also correct. The consequence is that the next
automatic backups are scheduled 2 hours in the future instead of now. If
I manually restart the server, backups start correctly in the next cycle,
but the server's start time remains wrong. The time is set correctly in
the BIOS and the system, so I have no clue what is causing this.

The second problem is, I think, more of a bug: when I shut down the
client or the server while a (first, complete) backup is running, the
partial backup is not saved. It is only saved when I use the web
interface and click "Stop/Dequeue Backup". Has anyone a solution for
this?

Thanks in advance!




[BackupPC-users] feature request

2008-04-14 Thread Simone Marzona
Hi all,

I searched the mailing list archive for some info, but I didn't find
anything.

I think some improvements to the user interface could be useful: it
should be easy to get the folder size in the explore window of a host.
If I need to recover an entire directory, I need to know the size of the
directory BEFORE recovering it.

Is there a way to get this information out of an existing installation
of BackupPC?
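
(One command-line workaround in the meantime: stream the directory with
BackupPC_tarCreate and count the bytes - host, share and path are
placeholders, and the figure includes some tar overhead:)

$> BackupPC_tarCreate -h myhost -n -1 -s /home /some/dir | wc -c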






[BackupPC-users] file extraction, windows

2008-04-14 Thread Simone Marzona
Hi all,

When I extract some data from BackupPC on a Windows host, the extraction
stops at 2 GB. This happens both when I use the archive function and when
I recover with or without compression.

It happens only when working on Windows, even though the FS is NTFS.

Is there a solution for this problem?

The size of 2 GB is suspicious... I think this is a Windows XP "gift".




Re: [BackupPC-users] file extraction, windows

2008-04-14 Thread Nils Breunese (Lemonbit)
Simone Marzona wrote:

> When I extract some data from BackupPC on a Windows host, the extraction
> stops at 2 GB. This happens both when I use the archive function and
> when I recover with or without compression.
>
> It happens only when working on Windows, even though the FS is NTFS.
>
> Is there a solution for this problem?
>
> The size of 2 GB is suspicious... I think this is a Windows XP "gift".

Smells like a Samba limitation to me.

Nils Breunese.



Re: [BackupPC-users] file extraction, windows

2008-04-14 Thread Alexandre Joly
You'll find your answer in the documentation:

http://backuppc.sourceforge.net/faq/limitations.html#maximum_backup_file_sizes

Simone Marzona wrote:
> Hi all,
>
> When I extract some data from BackupPC on a Windows host, the extraction
> stops at 2 GB. This happens both when I use the archive function and
> when I recover with or without compression.
>
> It happens only when working on Windows, even though the FS is NTFS.
>
> Is there a solution for this problem?
>
> The size of 2 GB is suspicious... I think this is a Windows XP "gift".

-- 
Alexandre Joly
Network Administrator
Infodev Electronic Designers Intl
(418) 681-3539 ext. 153




Re: [BackupPC-users] improving the deduplication ratio

2008-04-14 Thread Michael Barrow

On Apr 14, 2008, at 11:20 AM, Tino Schwarze wrote:
> Of course, you shouldn't underestimate the cost of managing a lot of
> small files (my pool has about 5 million files, some of them pretty
> large): the pool would have even more files, which means more seeking
> and more looking up of file blocks.
>
> Introducing file chunking would add a new abstraction layer: a file
> would need to be split into chunks and recreated on restore.


Tino -- thanks for posting this. These issues are exactly what I had in
mind when I posted about sub-file deduplication. There's a lot more work
to do and definitely a bunch more housekeeping. Right now, BackupPC gets
off "easy" by using hardlinks to do the dedupe. Once we delve below the
file level, a brand-new data structure/mechanism needs to be designed and
built to link all of these blocks together efficiently.

If you look at the commercial solutions that provide this functionality
exclusively in software (as opposed to appliance-based solutions), you
see that it is quite processor-intensive. If there are flaws in the
design of the mechanism that tracks the chunks, you will most definitely
see pain in the backup and restore processes compared to the existing
mechanism of deduping at the file level.


--
Michael Barrow
michael at michaelbarrow dot name






Re: [BackupPC-users] Wildly different speeds for hosts

2008-04-14 Thread Tino Schwarze
On Mon, Apr 14, 2008 at 11:21:02AM -0400, Raman Gupta wrote:

> > I have three hosts configured to backup to my PC. Here are the speeds
> > from the host summary:
> > 
> > host 1:  24.77 GB,  14,000 files, 18.78 MB/s (slower WAN link)
> > host 2:   1.27 GB,   4,000 files,  1.89 MB/s (faster WAN link)
> > host 3:   4.82 GB, 190,000 files,  0.66 MB/s (fast LAN link)
> > 
> > They all use rsync with the same setup, other than the exclude list.
> > Backups are configured to run one at a time so there is no overlap
> > between them.
> > 
> > The speed of host 3 concerns me. Host 3 is by far the beefiest
> > machine, and on the fastest network link of all the hosts, but yet
> > backs up at only 0.66 MB/s (incrementals are even slower).
> 
> OK, it seems that the number of files has a large non-linear effect on
> the performance of BackupPC. I excluded a bunch of stuff from my host
> 3 backup, and the new stats are:
> 
> host 3:    4.2 GB,  85,000 files,  2.19 MB/s
> 
> For a file count reduction factor of 2.2, there was a speed increase
> factor of 3.3.

I suppose BackupPC's speed is mainly limited by the random-access speed
of the server's pool storage. I've got hosts with lots of files as well
(small ones, mostly) and they take pretty long to back up. Look at the
I/O utilization of the client during backup - it might be a bottleneck
as well.

Reading a file linearly is quite a cheap operation: the OS will read
ahead (the RAID probably as well), the disk heads don't need to move a
lot, the metadata fits nicely into the OS's disk cache (and stays there),
etc. But if you've got a file system with several million files (like the
pool) distributed across tens of thousands of directories (like the
backup directories below the pc/ directory), things get worse: lots of
random seeking across the disk, cache thrashing, I/O waits, etc. I'm
thinking about getting another 2 GB of RAM for my BackupPC server to see
whether it improves things.

This is an iostat -x -k 60 output during backup runs (2 backups in
parallel, 1 pretty fast client, 1 pretty slow with lots of files):

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.89    0.00    2.53    9.49    0.00   86.09

Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
sdb        0.00    0.73  91.64 139.54 1208.53 2105.88  604.27 1052.94    14.34     2.75  11.89   4.31  99.57

(The server is a quad-core with 2 GB RAM and 3x500 GB RAID5 on a Dell
PERC5/i - switching to RAID10 would probably improve things a lot as
well.)

Bye,

Tino.

-- 
„What we resist, persists.” (Zen saying)

www.craniosacralzentrum.de
www.forteego.de



Re: [BackupPC-users] Backup to USB disk.

2008-04-14 Thread Les Stott

Mauro Condarelli wrote:

> Hi,
> I asked this before, but no one answered, so I will try again :)
>
> I am using a large (500G) external USB disk as backup media.
> It performs reasonably, so no sweat.
>
> Problem is:
> Is there a way to do a pre-check to see if the drive is actually mounted
> and, if not, just skip the scheduled backup?
> It would be easy to put a do_not_backup file in the directory over which
> I mount the remote.
> I could then do a test to see if that file is present (no disk) or if it
> is absent (something was mounted over it).
> Unfortunately I have no idea where to put such a test in BackupPC!
>
> Can someone help me, please?

Yes.

The way I do it in a USB-drive scenario:

Don't start BackupPC at boot. Instead, schedule it via cron at the times
you actually want it to run, i.e. do a "service backuppc start" at, say,
10pm, then a "service backuppc stop" at 8am or so - see the crontab
sketch below.
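
(The cron side is just two entries - /etc/crontab format, paths assumed:)

0 22 * * * root /sbin/service backuppc start
0 8  * * * root /sbin/service backuppc stop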
Customize the init.d startup script for BackupPC to mount the drive
before trying to start BackupPC; if the drive fails to mount, don't
bother starting BackupPC. When the stop command runs, it unmounts the
drive. The init script can also send an email if there is a problem
mounting the drive.

Each drive is formatted as an ext3 filesystem, so the test that it is
valid and mounted can be done by looking for the "lost+found" directory
at the root of the drive. That way, between 8am and 10pm, you can hotplug
the drives without needing to unmount.

When you switch drives and start BackupPC, it just begins from scratch on
the new drive, or picks up where it left off from the last rotation.


Note: the new 3.1.0 series changes the order of tests when BackupPC
starts: it does a symlink test to see whether it can create a link
between the pc and cpool directories at the top level. As you rotate in a
new drive it will be empty, and BackupPC will fail to start because it
doesn't see the cpool or pc directory on the filesystem. This never
happened in 3.0.0, and I preferred that behaviour because it meant I
could just put in an empty USB drive and it would create the top-level
tree of folders on the fly.

I posted about this on the list back in December:

http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg08103.html

but got no response. I'd like to see the top-level directories created
first, and the symlink test done afterwards.


A sample init script for a Red Hat-based system which does this is
attached. It could be done better, of course; no warranty, use at your
own risk, etc.



> Related issue:
> I would like to use a small pool of identical external HDs in order to
> increase further security.
> Aside from switching the disks round-robin with a certain time interval
> (weekly?) is there anything else to be done to notify BackupPC it has a
> new media to use? I guess not, but I would like someone to confirm this.

Nothing, really: BackupPC will carry on where the old drive left off, or
start afresh on a blank drive.


Hope that helps.

Regards,

Les
#!/bin/sh
#
# DESCRIPTION
#
#   Startup init script for BackupPC on Redhat linux.
#
# Distributed with BackupPC version 3.0.0, released 28 Jan 2007.
#
# chkconfig: - 91 35
# description: Starts and stops the BackupPC server

# Modified by Les Stott 19/3/07
# to suit automatically mounting and unmounting of an external usb drive
# when backuppc is installed on such a device.

MOUNTPOINT=/mnt/maxtor
DEVICE=/dev/sdc1

# Source function library.
if [ -f /etc/init.d/functions ] ; then
  . /etc/init.d/functions
elif [ -f /etc/rc.d/init.d/functions ] ; then
  . /etc/rc.d/init.d/functions
else
  exit 0
fi

RETVAL=0

start() {
#
# You can set the SMB share password here if you wish.  Otherwise
# you should put it in the config.pl script.
# If you put it here make sure this file has no read permissions
# for normal users!  See the documentation for more information.
#
# Replace the daemon line below with this:
#   
#  daemon --user backuppc /usr/bin/env BPC_SMB_PASSWD=x \
#   /usr/local/BackupPC/bin/BackupPC -d
#   
echo -n "Mounting External Drive: "
mount $DEVICE $MOUNTPOINT 2> /tmp/mounted
RETVAL=$?
if [ "`grep -q "already mounted" /tmp/mounted ;echo $?`" = "0" ];then 
RETVAL=0
fi   
if [ "$RETVAL" = "0" ]; then 
echo_success
chmod 775 $MOUNTPOINT
chgrp backuppc $MOUNTPOINT
echo ""
  if [ -d "${MOUNTPOINT}/lost+found" ]; then
 echo -n "Starting BackupPC: "
 daemon --user backuppc /usr/local/BackupPC/bin/BackupPC -d
 RETVAL=$?
 echo
 [ $RETVAL -eq 0 ] && touch /var/lock/subsys/backuppc || \
RETVAL=1
 return $RETVAL
  else
 echo "Possible Problem with mounted drive. No lost+found directory"
 echo "to indicate filesystem."
 echo -n "Starting BackupPC: "
 echo_failure
 echo "Possible Problem with mounted drive. No lost+found directory" | 
mail -s"${HOSTNAME} Problem with Backu

Re: [BackupPC-users] Backup to USB disk.

2008-04-14 Thread Michael Barrow
>> Can someone help me, please?
>>
> Yes.
>


Just thinking out loud here, but couldn't you achieve the same result by
using the automounter? If the drive is present, the automounter would
mount it and BackupPC would be happy. If the drive isn't present, the
mount would fail and BackupPC would error out because its directory tree
would not be present.

Perhaps it's not as pretty as the scripted solution you're using, but I
think it should "just work" otherwise.

--
Michael Barrow
michael at michaelbarrow dot name






[BackupPC-users] Conserve daily backups for 10 years

2008-04-14 Thread Mario Giammarco
Hello,

Since BackupPC is very handy, I would like to use it to keep an accurate
history (like CDP or CVS) of each machine, day by day - that is, to keep
365 days x 10 years of backups. I don't know whether this is possible,
nor how to do it; if it is not possible, I would like to do the closest
thing.
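
(In config.pl terms, daily fulls kept for ten years would be roughly the
following - the numbers are only illustrative, and whether a pool of that
size stays manageable is another question:)

    $Conf{FullPeriod}  = 0.97;   # one full per day
    $Conf{FullKeepCnt} = 3650;   # keep ~10 years of them
    $Conf{FullAgeMax}  = 3700;   # don't expire them on age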

Thank you in advance for any reply.

Mario Giammarco



Re: [BackupPC-users] Backup to USB disk.

2008-04-14 Thread Les Stott

>
> Just thinking out loud here, but couldn't you achieve the same result
> by using the automounter? If the drive is present, the automounter
> would mount it and BackupPC would be happy. If the drive isn't present,
> the mount would fail and BackupPC would error out because its directory
> tree would not be present.
>
> Perhaps it's not as pretty as the scripted solution you're using, but
> I think it should "just work" otherwise.
I've never had much luck with the automounter doing things the right way.
It always wants to mount on directories under /media, and the directory
can be either a label or a made-up name. I prefer mounting and unmounting
explicitly and choosing my own mount points.

Also, BackupPC would still try to create the top-level directories even
if the drive never mounted, so they would end up in the root filesystem
and not on the mounted drive. That would be bad.

Les







Re: [BackupPC-users] improving the deduplication ratio

2008-04-14 Thread Les Mikesell
Ludovic Drolez wrote:
> On Wed, Apr 09, 2008 at 10:12:09AM -0500, Les Mikesell wrote:
>> I'd probably look at what rdiff-backup does with incremental differences 
>> and instead of chunking everything, just track changes where the 
>> differences are small.
> 
> Yes but rdiff-backup has no pooling/deduplication.

You get the same effect within a single host - that is, you can restore
states from multiple points in time without keeping full copies of each.

> With that feature, backuppc would be closer to rdiff-backup with
> pooling on top of that.

Yes, I wouldn't expect many random matches from chunked files except in
the special cases of growing logfiles or small changes to large
databases. If the rsync process built difference files the way
rdiff-backup does, and then pooled them where that would save space
compared to a new copy, it might be a big win. But it would have to be
just for incrementals, or it would be tricky to keep track of
dependencies when expiring earlier runs.

-- 
   Les Mikesell
[EMAIL PROTECTED]


