[BackupPC-users] open firewall ports on (domain controlled) windows clients

2010-10-31 Thread Bernhard Ott
Hi all,
I have to back up some Windows clients which are members of a domain that 
I can't control (don't ask ;-)), so there's no way to change the domain profile.

Is there a way to open the ssh port at boot time via script without 
being "overruled" by the domain controller?
Would something like:

work in this case (and where to put it)?

Of course I do have root privileges on all clients.

Thanks for your help,
Bernhard

--
Nokia and AT&T present the 2010 Calling All Innovators-North America contest
Create new apps & games for the Nokia N8 for consumers in  U.S. and Canada
$10 million total in prizes - $4M cash, 500 devices, nearly $6M in marketing
Develop with Nokia Qt SDK, Web Runtime, or Java and Publish to Ovi Store 
http://p.sf.net/sfu/nokia-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] open firewall ports on (domain controlled) windows clients

2010-10-31 Thread Bernhard Ott
On 10/31/2010 02:09 PM, Bernhard Ott wrote:
> Hi all,
> I have to backup some windows clients which are members of a domain that
> I can't control (don't ask ;-)), so no way to change the domain profile.
>
> Is there a way to open the ssh port at boot time via script without
> being "overruled" by the domain controller?
> Would something like:
> 
> work in this case (and where to put it)?
>
Maybe this could work (sorry, can't test right now and I need this to be 
done very soon):

Add the following entries to the Windows Firewall Netfw.inf:

[ICF.AddReg.DomainProfile]
HKLM,"SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\DomainProfile\GloballyOpenPorts\List","22:TCP",0x00000000,"22:TCP:192.168.1.1:enabled:secure shell (SSH)"
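An alternative I haven't tested would be a startup script that calls netsh instead of editing Netfw.inf; the port and scope values below just mirror the registry entry above, and a domain firewall policy may of course override this again at the next refresh:

```bat
REM Untested sketch for a startup script.
REM Windows XP SP3:
netsh firewall add portopening protocol=TCP port=22 name="SSH" mode=ENABLE scope=CUSTOM addresses=192.168.1.1
REM Windows 7:
netsh advfirewall firewall add rule name="SSH" dir=in action=allow protocol=TCP localport=22
```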


Sorry for this question not being entirely backuppc-specific ;-)
BTW, the Windows clients run XP SP3 and Windows 7 Ultimate 64-bit

Bernhard



Re: [BackupPC-users] RE Rsyncd/Samba include/exclude

2008-02-12 Thread Bernhard Ott
 Original Message 
Subject: Re:[BackupPC-users] RE  Rsyncd/Samba include/exclude
From: Alan Orlič Belšak <[EMAIL PROTECTED]>
To: backuppc-users@lists.sourceforge.net
Date: 12.02.2008 11:11

> It's working now with exclude list :). Just one more question, is it
>  possible to say not to write empty directories?

AFAIK File::RsyncP doesn't support the rsync option --prune-empty-dirs,
so you have to include the whole tree, even when your filter rules leave
the directories empty. Reading the filter-rules section of the
rsync man page is highly recommended ;-):

> One solution is to ask for all directories in the hierarchy to be
> included by using a single rule: "+ */" (put it somewhere before the
> "- *" rule), and perhaps use the --prune-empty-dirs option. Another
> solution is to add specific include rules for all the parent dirs
> that need to be visited.
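As a sketch only (the argument list is abbreviated and the *.doc pattern is a placeholder), the man-page advice maps onto a BackupPC host config roughly like this:

```perl
# Hypothetical excerpt from a host's config.pl: append raw rsync
# include/exclude rules to the transfer arguments.
$Conf{RsyncArgs} = [
    # ... keep the default arguments from config.pl, then add:
    '--include=*/',     # "+ */": visit every directory in the hierarchy
    '--include=*.doc',  # the files actually wanted (placeholder pattern)
    '--exclude=*',      # "- *": drop everything else
];
# --prune-empty-dirs would trim the directories these rules leave empty,
# but File::RsyncP doesn't support it, so they stay in the backup.
```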


Regards,
Bernhard



[BackupPC-users] pool size zero but no linking errors

2008-03-07 Thread Bernhard Ott
Hi,
my pool size is displayed as 0.00GB (see below), but everything seems to 
work perfectly: I couldn't find any linking errors in the logs, 
permissions checked, and
du -sh /home/RAID5/backuppc/cpool/ gives me 639G, so it's not exactly 
empty ;-)

I'm running backuppc 3.1.0 on Debian etch/amd64, xfs on 1.5 TB LVM/RAID5

Does the GUI only display the uncompressed pool-size?
I can't believe that I didn't see this before the related thread came up 
on the list ... shame on me!

Regards,
Bernhard


> * Pool is 0.00GB comprising 0 files and 1 directories (as of,
> * Pool hashing gives 0 repeated files with longest chain 0,
> * Nightly cleanup removed 0 files of size 0.00GB (around 3/7 01:00),
> * Pool file system was recently at 44% (3/7 14:37), today's max is
> 44% (3/7 01:00) and yesterday's max was 44%.




Re: [BackupPC-users] pool size zero but no linking errors

2008-03-07 Thread Bernhard Ott
> Did you change the $TopDIR entry in the config file?  you need to have 
> it set at /var/lib/backuppc in debian or ubuntu.  if you are storing 
> your data somewhere else, just link it to /var/lib/backuppc OR mount it 
> the /var/lib/backuppc.  you can dual-mount it also, meaning you can do a 
> 'mount -o bind /source /var/lib/backuppc'

I have a symlink from /var/lib/backuppc to my LVM-mountpoint, $TopDir 
in config.pl unchanged (/var/lib/backuppc).

Despite the fact that the statistics on the status page are wrong, 
everything seems to be fine: host summary works as expected, and backups and 
restores are no problem ... what am I missing?

Permissions are backuppc:backuppc - should www-data be a member of group 
backuppc? And why is all of the other information correct?

Thanks in advance,
Bernhard



Re: [BackupPC-users] pool size zero but no linking errors

2008-03-08 Thread Bernhard Ott
> backuppc should be in group backuppc only, www-data should be in group 
> www-data only.
> the permissions on /var/lib/backuppc should be user rwx, group rx, all none.
> 
> On Fri, Mar 7, 2008 at 5:36 PM, Bernhard Ott <[EMAIL PROTECTED] 
> <mailto:[EMAIL PROTECTED]>> wrote:
> 
>  > Did you change the $TopDIR entry in the config file?  you need to
> have
>  > it set at /var/lib/backuppc in debian or ubuntu.  if you are storing
>  > your data somewhere else, just link it to /var/lib/backuppc OR
> mount it
>  > the /var/lib/backuppc.  you can dual-mount it also, meaning you
> can do a
>  > 'mount -o bind /source /var/lib/backuppc'
> 
> I have a symlink from /var/lib/backuppc to my LVM-mountpoint, $TopDir
> in config.pl unchanged (/var/lib/backuppc).
> 
> Despite the fact that the statistics on the status page are wrong
> everything seems to be fine, host summary works as expected, backups and
> restores are no problem ... what am I missing?
> 
> Permissions are backuppc:backuppc - should www-data be a member of group
> backuppc? But why are all of the other informations correct?
> 
> Thanks in advance,
> Bernhard
> 
> 

Permission settings are correct, hardlinks are definitely working, there 
are plenty of inodes left, and still the logs tell me that my [c]pool is 
0.00GB - what else could I look for?
How does backuppc determine the [c]pool size/disk usage? OK, OK, I should 
read the sources... ;-)
What I really don't get is that the host summary is correct:
> 140 full backups of total size 11086.51GB (prior to pooling and
> compression)

Bernhard






Re: [BackupPC-users] What things may influence BackupPC to run faster?

2008-03-18 Thread Bernhard Ott

John Pettitt wrote:
> Bruno Faria wrote:
>> Hello to everyone,
>>
>> Lately for some reason, BackupPC has been running very slow on a 
>> server that we have configured to do backups. Just so that you guys 
>> can have an idea of how slow it really is going, it took BackupPC  
>> 10265.2 minutes to backup 1656103 files totaling to only 24 gigabytes 
>> worth of files. Obviously, I can't really wait a week for a 24-gigabyte 
>> backup to be done. Now here's what makes me think that this problem 
>> with BackupPC could be due to server hardware: I first started 
>> doing backups for one pc at a time, and it took BackupPC 468.8 minutes 
>> to backup 2626069 files or 32 gigabyte worth of files for that same 
>> computer. But now I have about 45 computers added to BackupPC and 
>> sometimes BackupPC is backing up 30 of them or more at the same time, 
>> and that's when the server really goes slow.
>>
>> Here's the top command when the BackupPC server is going slow:
>> top - 19:06:36 up 15 days,  6:11,  3 users,  load average: 28.76, 
>> 39.03, 32.14
>> Tasks: 156 total,   1 running, 155 sleeping,   0 stopped,   0 zombie
>> Cpu(s):  3.7% us,  2.2% sy,  0.0% ni,  7.7% id, 86.0% wa,  0.4% hi,  
>> 0.0% si
>> Mem:   1033496k total,  1019896k used,13600k free,   141720k buffers
>> Swap:  5116692k total,  2538712k used,  2577980k free,41932k cached
> 
> Any time your load average is more than your # of CPUs, your system is 
> contending for CPU. You are also using a lot of swap, which makes me 
> think your box has gone into a thrashing death spiral. Add RAM, and limit 
> the number of simultaneous backups (I found by trial and error that the 
> number of spindles in the backup array is a good starting point for how 
> many backups can be run at once).
> 
> In the server I just upgraded (Core 2 quad 2GHz, 2GB, 1.5TB ufs on 
> RAID10, FreeBSD 7.0) my backups run between 3.6 MB/sec for a remote 
> server (*) and 56 MB/sec for a volume full of digital media on a gig-e 
> connected Mac Pro. Having a multi-core CPU makes a big difference 
> (bigger than I expected).
> 
> (*) rsync is a wonderful thing -  six times the actual line speed.
> 
> John
> 

Just out of curiosity: why not use ZFS? Is it really to be 
considered experimental? ZFS could be a reason for me to switch to 
FreeBSD. I remember Dan being the expert on ZFS - any news?

Bernhard





[BackupPC-users] BackupsDisable issue

2008-03-26 Thread Bernhard Ott
Hi all,
I recently disabled automatic backups of a single host via
$Conf{BackupsDisable} = 1;
Host summary says "auto disabled", but backuppc still tries to queue the 
host whenever a wakeup is scheduled.
BTW, the host already had errors in the logs before I disabled it (it 
crashed), nevertheless I need to keep the backups.

Any ideas (backuppc 3.1.0 on debian etch/amd64)?

Thanks in advance,
Bernhard




Re: [BackupPC-users] BackupsDisable issue

2008-04-02 Thread Bernhard Ott
Solved: I finally disabled the host in the hosts file, reloaded the server
config, re-enabled the host, and reloaded again --> no more failure messages.

Stop/Dequeue didn't work.

I think there might/could/should be a more elegant way to temporarily
disable hosts that are causing trouble?
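For reference, the knob in question is per-host and, as far as I understand the documentation, takes three values:

```perl
# In the host's config.pl (my reading of the documented values):
$Conf{BackupsDisable} = 0;  # normal automatic backups (default)
$Conf{BackupsDisable} = 1;  # no automatic backups; manual ones still work
$Conf{BackupsDisable} = 2;  # no backups of this host at all
```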

Kind regards,
Bernhard



Bernhard Ott wrote:
> Hi all,
> I recently disabled automatic backups of a single host via
> $Conf{BackupsDisable} = 1;
> Host summary says "auto disabled", but backuppc still tries to queue the 
> host whenever a wakeup is scheduled.
> BTW, the host already had errors in the logs before I disabled it (it 
> crashed), nevertheless I need to keep the backups.
> 
> Any ideas (backuppc 3.1.0 on debian etch/amd64)?
> 
> Thanks in advance,
> Bernhard
> 



Re: [BackupPC-users] Bear Metal Restore

2008-04-10 Thread Bernhard Ott
Hi Jack,
as Andreas said before, bare-metal restore is not the thing you can do 
best with BackupPC, so many folks around here use BackupPC for data and 
other products for imaging system partitions.
I use partimage for cloning Windows hosts (careful: it doesn't work on 
the amd64 platform, http://www.partimage.org/);
if you want to restore a bunch of hosts, you should have a look at
http://www.clonezilla.org/.

Hope that helps,
Bernhard


Jack wrote:
> Ok, I have not found this, and it must be 'out there' somewhere.
> 
> I would like to be able to do a 'bare metal restore' for both Linux and
> Windows.
> 
> I have done it using a commercial product on Windows (basics were:
> build a 'new' installation with just the minimal network basics, with the
> c:\windows somewhere else (like c:\winx ) if c:\windows is your 'system'
> directory.
> then restore all the data, including the c:\windows directory, 
> restore the 'system state', reboot the machine.  It is now exactly
> the same as it was before, after you erase c:\winx
> 
> I never did it on Linux.  I basically have just built a 'new' machine and
> restored data, and re-installed all programs.  Doing bare metal here would
> be nice too.
> 
> Does someone know where I can ReadTheFineManual for some 'how-to's' with
> BackupPC?
> 
> TIA
> 
> 



[BackupPC-users] no cpool info shown on web interface

2008-04-11 Thread Bernhard Ott
Hi,
on one of my backuppc servers (3.1.0 on Debian etch/amd64, pool 
file system is xfs on 1.5 TB LVM/RAID5 [which performs OK for our setup 
;-)]) everything works like a charm,
*BUT*:
"General Server Information" gives me no Compressed pool info (see 
quote_01 below).
Statistics in host summary are ok (see quote_02), logs show no errors at 
all, despite the fact that Cpool nightly shows the same strange behavior 
(please see quote_03).

quote_01:
> * Other info:
>   (...)
>   o Pool is 0.00GB comprising 0 files and 1 directories (...)
>   o Pool hashing gives 0 repeated files with longest chain 0,
>   o Nightly cleanup removed 0 files of size 0.00GB (...)
>   o Pool file system was recently at 54% (4/8 21:18), today's
> max is 54% (4/8 01:00) and yesterday's max was 54%.

quote_02:
> There are 18 hosts that have been backed up, for a total of:
> 
> * 144 full backups of total size 11927.99GB (prior to pooling and 
> compression),
> * 97 incr backups of total size 170.72GB (prior to pooling and 
> compression). 


quote_03:
> 2008-04-11 01:00:36 Pool nightly clean removed 0 files of size 0.00GB
> 2008-04-11 01:00:36 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max 
> links), 1 directories
> 2008-04-11 01:00:36 Cpool nightly clean removed 0 files of size 0.00GB
> 2008-04-11 01:00:36 Cpool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max 
> links), 4369 directories


Once again, the backups are *fine*, restores *do work*, transfer logs ok 
but I really don't like the idea that BackupPC_nightly might not work 
the way it should.
I checked the permissions, docs, archives and ... I'm stuck.


Thanks in advance for your help,
Bernhard




Re: [BackupPC-users] no cpool info shown on web interface

2008-04-11 Thread Bernhard Ott
Les Mikesell wrote:
> Bernhard Ott wrote:
>>
>> quote_03:
>>> 2008-04-11 01:00:36 Pool nightly clean removed 0 files of size 0.00GB
>>> 2008-04-11 01:00:36 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 
>>> 0 max links), 1 directories
>>> 2008-04-11 01:00:36 Cpool nightly clean removed 0 files of size 0.00GB
>>> 2008-04-11 01:00:36 Cpool is 0.00GB, 0 files (0 repeated, 0 max 
>>> chain, 0 max links), 4369 directories
>>
>>
>> Once again, the backups are *fine*, restores *do work*, transfer logs 
>> ok but I really don't like the idea that BackupPC_nightly might not 
>> work the way it should.
>> I checked the permissions, docs, archives and ... I'm stuck.
> 
> Have you done anything unusual like moving the archive location after 
> installation?
> 

Not AFAIK; I symlinked the whole backuppc directory to the (Debian) 
standard location /var/lib/backuppc.
No linking errors (there are hardlinks; find /var/lib/backuppc/pc/ 
-links +500 -printf '%n %k %p\n' gives me lots of files).
I hope it is correct that only the backuppc directory must reside on the 
same file system?

Hmm ... maybe I moved the pool after the first test runs, I just can't 
remember (the path didn't change) ... but if there were something wrong 
with the pool, shouldn't there be massive linking problems?

Sorry to repeat myself: the host summary is correct.
BTW, it's the only server I'm running on the amd64 platform...

Can I run BackupPC_nightly in debug mode? Any logs that I 
may/could/should provide?

Bernhard



[BackupPC-users] change $Conf{ClientCharset} for existing hosts

2008-04-13 Thread Bernhard Ott
Hi,
is it safe to change the $Conf{ClientCharset} for existing hosts/backups?

Thanks in advance,
Bernhard



Re: [BackupPC-users] no cpool info shown on web interface

2008-04-13 Thread Bernhard Ott
> PS: I've just started adding debug log messages to BackupPC_nightly. Is
> there an easy way to make it run other than changing
> $Conf{WakeupSchedule} and waiting an hour?
> 
run as backuppc:
/path/to/bin/BackupPC_serverMesg BackupPC_nightly run

Bernhard




Re: [BackupPC-users] no cpool info shown on web interface

2008-04-13 Thread Bernhard Ott
Daniel Denson wrote:
> did you change the $TopDIR entry in config.pl?  I have found that the
> 
nope
> mechanism for reporting disk usage requires that $TopDir be 
> /var/lib/backuppc.  it's best to mount your target disk onto that 
> location.  you can either mount it directly or mount it via bind
Thanks, I just tried that - no success; BackupPC_nightly still doesn't work.
But basically that's not a disk usage issue - df works. What really
bothers me (and Tino) is that BackupPC_nightly doesn't work (we can only
run it manually). BackupPC just doesn't get its own (c)pool stats right
(see Tino's posting)?

BTW, there are certain similarities in our setup:
* 64bit linux
* file system xfs/LVM/RAID

Any more ideas? ;-)

Bernhard



Re: [BackupPC-users] Solution for Re: no cpool info shown on web interface

2008-04-13 Thread Bernhard Ott
Tino Schwarze wrote:
> On Sun, Apr 13, 2008 at 12:08:17PM +0200, Tino Schwarze wrote:
> 
>>> BTW, there are certain similarities in our setup:
>>> * 64bit linux
>>> * file system xfs/LVM/RAID
>> More info from my setup:
>> * Upgraded from 3.0.0 to 3.1.0 via configure.pl
>> * Perl version 5.8.8
>> * IO::Dirent 0.04 (but it's been there for a long time)
>> * everything working fine, except BackupPC_nightly
> * kernel 2.6.18.8-0.7-default (almost up-to-date openSUSE 10.2)
> 
> I found a problem. IO::Dirent returns 0 as the type for the directories,
> so BackupPC::Lib->find() doesn't descend into them. Why it does so if
> run manually - I don't know.
> 
> It does return a type 4 on ext3, on xfs it's always 0.
> 
> I used the following perl program to check:
> 
> #!/usr/bin/env perl
> #
> 
> use IO::Dirent;
> 
> foreach my $dir ( @ARGV) {
> opendir DIR, $dir;
> my @entries = readdirent(DIR);
> closedir DIR;
> 
> print "listing $dir:\n";
> foreach my $entry ( @entries ) {
> print ("name: ", $entry->{name}, ", type: ", $entry->{type}, 
> ", inode: ", $entry->{inode}, "\n");
> }
> }
> 
> I've rebuilt IO::Dirent, just to be sure - no change.
> 
> I found a fix for lib/BackupPC/Lib.pm (the line with "map {...}" is
> the relevant one):
> 
> --- lib/BackupPC/Lib.pm 2007-11-26 04:00:07.0 +0100
> +++ lib/BackupPC/Lib.pm   2008-04-13 12:52:03.938619979 +0200
> @@ -485,10 +485,15 @@
>  
>  from_to($path, "utf8", $need->{charsetLegacy})
>  if ( $need->{charsetLegacy} ne "" );
> -return if ( !opendir(my $fh, $path) );
> +my ($fh);
> +if ( !opendir($fh, $path) ) {
> +   print "log ERROR: opendir ($path) failed\n";
> +   return;
> +}
> +
>  if ( $IODirentOk ) {
>  @entries = sort({ $a->{inode} <=> $b->{inode} } readdirent($fh));
> -map { $_->{type} = 0 + $_->{type} } @entries;   # make type numeric
> +map { $_->{type} = 0 + $_->{type}; $_->{type} = undef if ($_->{type} 
> eq BPC_DT_UNKNOWN); } @entries;   # make type numeric, unset unknown types
>  } else {
>  @entries = map { { name => $_} } readdir($fh);
>  }
> @@ -553,9 +559,11 @@
>  return if ( !chdir($dir) );
>  my $entries = $bpc->dirRead(".", {inode => 1, type => 1});
>  #print Dumper($entries);
> +#print ("log got ",scalar(@$entries)," entries for $dir\n");
>  foreach my $f ( @$entries ) {
>  next if ( $f->{name} eq ".." || $f->{name} eq "." && $dontDoCwd );
>  $param->{wanted}($f->{name}, "$dir/$f->{name}");
> +#if ( $f->{type} != BPC_DT_DIR ) { print ("log skipping 
> non-directory ", $f->{name}, " type: ", $f->{type}, "\n"); }
>  next if ( $f->{type} != BPC_DT_DIR || $f->{name} eq "." );
>  chdir($f->{name});
>  $bpc->find($param, "$dir/$f->{name}", 1);
> 
> HTH,
> 
> Tino.

Wow, I'm deeply impressed. I applied your patch and the main problem seems
to be solved! I just found an opendir error in the logs:

> ERROR: opendir 
> (/var/lib/backuppc/pc/susexeon-mit_alles/401/f%2fraid/fWORK/fFISCHER) failed

This occurred only during the first run; now everything seems to work 
fine. Is it safe to leave Lib.pm this way ;-)) ?

Thanks a lot for your help, it's great to know that there are 
specialists out there,

Bernhard
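Tino's diagnosis - readdir returning type 0 (DT_UNKNOWN) on XFS - is a generic portability trap, not a Perl-specific one. A small Python sketch of the safe pattern his patch implements (stat the entry whenever the cheap dirent type field is unknown), purely illustrative and not BackupPC code:

```python
import os
import tempfile

def list_dirs(path):
    """Return subdirectory names without trusting d_type alone.

    os.scandir() exposes the same dirent info IO::Dirent reads; its
    is_dir() helper falls back to a stat() call whenever the
    filesystem (XFS in this thread) reports the entry type as unknown.
    """
    return sorted(e.name for e in os.scandir(path)
                  if e.is_dir(follow_symlinks=False))

# Throwaway demonstration tree: one directory, one plain file.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "cpool"))
open(os.path.join(root, "LOG"), "w").close()
print(list_dirs(root))  # ['cpool']
```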



Re: [BackupPC-users] Pool is 0.00GB comprising 0 files and 0 directories....

2009-05-27 Thread Bernhard Ott
Ralf Gross wrote:
> Hi,
> 
> I use BackupPC since many years without hassle. But something seems to
> be broken now.
> 
> BackupPC 3.1 (source)
> Debian Etch
> xfs fs
> 

Hi Ralf,
look for the thread "no cpool info shown on web interface" (2008-04) in 
the archives; Tino Schwarze found a solution for an xfs-related issue:

> I found a fix for lib/BackupPC/Lib.pm (the line with "map {...}" is
> the relevant one):
> 
> --- lib/BackupPC/Lib.pm 2007-11-26 04:00:07.0 +0100
> +++ lib/BackupPC/Lib.pm   2008-04-13 12:52:03.938619979 +0200
> @@ -485,10 +485,15 @@
>  
>  from_to($path, "utf8", $need->{charsetLegacy})
>  if ( $need->{charsetLegacy} ne "" );
> -return if ( !opendir(my $fh, $path) );
> +my ($fh);
> +if ( !opendir($fh, $path) ) {
> +   print "log ERROR: opendir ($path) failed\n";
> +   return;
> +}
> +
>  if ( $IODirentOk ) {
>  @entries = sort({ $a->{inode} <=> $b->{inode} } readdirent($fh));
> -map { $_->{type} = 0 + $_->{type} } @entries;   # make type numeric
> +map { $_->{type} = 0 + $_->{type}; $_->{type} = undef if ($_->{type} 
> eq BPC_DT_UNKNOWN); } @entries;   # make type numeric, unset unknown types
>  } else {
>  @entries = map { { name => $_} } readdir($fh);
>  }
> @@ -553,9 +559,11 @@
>  return if ( !chdir($dir) );
>  my $entries = $bpc->dirRead(".", {inode => 1, type => 1});
>  #print Dumper($entries);
> +#print ("log got ",scalar(@$entries)," entries for $dir\n");
>  foreach my $f ( @$entries ) {
>  next if ( $f->{name} eq ".." || $f->{name} eq "." && $dontDoCwd );
>  $param->{wanted}($f->{name}, "$dir/$f->{name}");
> +#if ( $f->{type} != BPC_DT_DIR ) { print ("log skipping 
> non-directory ", $f->{name}, " type: ", $f->{type}, "\n"); }
>  next if ( $f->{type} != BPC_DT_DIR || $f->{name} eq "." );
>  chdir($f->{name});
>  $bpc->find($param, "$dir/$f->{name}", 1);


HTH,
Bernhard



[BackupPC-users] rsyncd on Vista 64-bit cygwin vs SUA

2009-08-17 Thread Bernhard Ott
Hi,
anyone successfully using the SUA environment for backing up a Windows 
Vista 64-bit client via ssh+rsync or rsyncd?
I failed to get cygwin running on Vista Business 6.0 64-bit and considered 
giving MS a chance ...

Any comments very much appreciated,

thanks in advance,
Bernhard



Re: [BackupPC-users] rsyncd on Vista 64-bit cygwin vs SUA

2009-08-18 Thread Bernhard Ott
Koen Linders wrote:
> I don't know what you mean with SUA environment, but I use Deltacopy in
It's Microsoft "Subsystem for UNIX-based Applications":
http://technet.microsoft.com/en-us/library/cc779522(WS.10).aspx

Regards,
Bernhard




Re: [BackupPC-users] rsyncd on Vista 64-bit cygwin vs SUA

2009-08-18 Thread Bernhard Ott
Koen Linders wrote:
> I don't know what you mean with SUA environment, but I use Deltacopy in
> Vista 64 bit via rsyncd.
> 
> http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp
> 
> Works without a problem atm. Easy to use and you can copy the files to other
> computers and easily register the service.
> 
> Greetings,
> Koen Linders

So I will have to play around with DeltaCopy (yet another 
win-client solution ;-))!

Thanks,
Bernhard



Re: [BackupPC-users] Excludes not working

2009-08-26 Thread Bernhard Ott
Carl Wilhelm Soderstrom wrote:
> On 08/26 12:57 , Osburn, Michael wrote:
>> I am trying to backup my backuppc server while excluding the backups
>> directory. No matter what I put under excludes in the config, I still
>> end up with the cpool and pc directories in my backups.
> 
> You misunderstand the exclude syntax. Here's an example for a (SMB) share
> named 'c$':
> 
> $Conf{BackupFilesExclude} = {
>'c$' => [
> '/RECYCLER', 
> '/winnt/tmp', 
> '/temp', 
> '/WUTemp', 
> '/WINDOWS', 
> '/Documents and Settings/*/Local Settings/Temporary Internet Files/', 
> '/Documents and Settings/*/Local Settings/history/', 
> '/Documents and Settings/*/Cookies/', 
> '/Documents and Settings/*/Favorites/', 
> '/Documents and Settings/*/IETldCache/', 
> '/Documents and Settings/*/IECompatCache/', 
> '/Documents and Settings/*/NetHood/', 
> '/Documents and Settings/*/PrivacIE/', 
> '/Documents and Settings/*/PrintHood/', 
> '/pagefile.sys', 
> '/hiberfil.sys',
> ]
>};
>  
Hmm ... I'm having some problems with smb/tar excludes, too (my first 
SMB client, after never-ending Vista 64-bit issues with DeltaCopy 
and/or cygwin rsyncd), and I read the following on:
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Smb_exclude


> 1. Backslashes (\) seem to be the only effective way to get smbclient 
> to correctly exclude files.
> 2. Subfolders need to be followed by a \* to be correctly excluded.
> 3. Files off the root of the share need to be prepended by an extra 
> backslash to be correctly excluded.
> 4. Folders off the root of the share need to be prepended by an extra 
> backslash to be correctly excluded.
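Translated into config.pl, those four rules would give something like 
the following (untested sketch; the share name 'c$' and the paths are 
just examples, not from my setup):

```perl
# Untested sketch of SMB excludes following the wiki's backslash rules.
# In Perl single-quoted strings, '\\' is one literal backslash.
$Conf{BackupFilesExclude} = {
    'c$' => [
        '\\pagefile.sys',                    # file off the root: extra backslash
        '\\hiberfil.sys',
        '\\RECYCLER\\*',                     # folder off the root: backslash + \*
        '\\WINDOWS\\*',
        '\\Documents and Settings\\*\\Cookies\\*',
    ],
};
```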


Maybe I should try it your way then ... ;-)

Bernhard






Re: [BackupPC-users] Excludes not working

2009-08-27 Thread Bernhard Ott
Koen Linders wrote:
> Carl Wilhelm Soderstrom wrote:
>> On 08/26 12:57 , Osburn, Michael wrote:
>>> I am trying to backup my backuppc server while excluding the backups
>>> directory. No matter what I put under excludes in the config, I still
>>> end up with the cpool and pc directories in my backups.
>> You misunderstand the exclude syntax. Here's an example for a (SMB) share
>> named 'c$':
>>
>> $Conf{BackupFilesExclude} = {
>>'c$' => [
>> '/RECYCLER', 
>> '/winnt/tmp', 
>> '/temp', 
>> '/WUTemp', 
>> '/WINDOWS', 
>> '/Documents and Settings/*/Local Settings/Temporary Internet
> Files/', 
>> '/Documents and Settings/*/Local Settings/history/', 
>> '/Documents and Settings/*/Cookies/', 
>> '/Documents and Settings/*/Favorites/', 
>> '/Documents and Settings/*/IETldCache/', 
>> '/Documents and Settings/*/IECompatCache/', 
>> '/Documents and Settings/*/NetHood/', 
>> '/Documents and Settings/*/PrivacIE/', 
>> '/Documents and Settings/*/PrintHood/', 
>> '/pagefile.sys', 
>> '/hiberfil.sys',
>> ]
>>};
>>  
> hmm ... I'm having some problems with smb-tar excludes, too (my first 
> smb client because of never ending VISTA 64-bit related issues with 
> DeltaCopy and/or cygwin-rsyncd) and I read on:
> http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Smb_exclude
> 
> 
>>1.  Backslashes (\) seem to be the only effective way to get smbclient
> to correctly exclude files.
>>2. Subfolders need to be followed by a \* to be correctly excluded.
>>3. Files off of the root of the share need to be prepended by an extra
> backslash to be correctly excluded.
>>4. Folders off of the root of the share need to be prepended by an
> extra backslash to be correctly excluded. 
> 
> 
> Maybe I should try it your way then ... ;-)
> 
> Bernhard
> 
> 
> --
> 
> I backup Windows Vista without a problem with Deltacopy. I mainly followed
> one of the wiki pages to exclude the 'junction points'. There is more stuff
> than necessary in the one I list below, but I can't find the Wiki pages
> (what happened there?)
> 
> Ah, here is the link to the specific page for Vista
> www.cs.umd.edu/~cdunne/projs/backuppc_guide.html
> 
> Also mind the {ClientCharset}.
> 
> Greetings,
> Koen Linders
> 
Thanks for the tips, Koen.
I followed all these instructions (at least I think I did ;-)), but my 
problems were not related to excludes; I simply got no connection to 
the shares: "@ERROR: chdir failed", so maybe a permission-denied error 
due to ACLs or similar.
Unfortunately I had no time to investigate any further (the workstation 
is down right now, so I can't provide any logs).
Should I use "CYGWIN=ntsec tty" with DeltaCopy, too?
Whatever virtual directory I defined didn't show up when I connected 
(rsync -av u...@host::); only the standard share "Backup" was displayed.
Are you running the 64bit version?
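For what it's worth: if a custom module never shows up in the `::` 
listing, it may simply not be defined in the daemon's rsyncd.conf. A 
minimal module stanza looks roughly like this (module name, path and 
file locations are hypothetical; DeltaCopy manages its own config 
file, so the details will differ):

```ini
; Hypothetical rsyncd.conf module stanza
[Ddrive]
    path = /cygdrive/d
    read only = true
    auth users = backuppc
    secrets file = /etc/rsyncd.secrets
```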

Bernhard



[BackupPC-users] par2 question

2005-08-16 Thread Bernhard Ott
Hi,
first of all: I'm simply loving backuppc, using it for 2 years now,
works flawlessly!

Sorry that this subject is not entirely backuppc-related:
What are your recommended chunk sizes for burning tar files (archive 
host) to DVD-RW?
I'm no expert on optical disks or filesystems, but when I experienced 
problems reading files from CDs/DVDs, the complete file was corrupted. 
Am I totally wrong in thinking that par2 recovery data only makes sense 
when each chunk is smaller than the amount of recovery data par2 
creates, i.e. with the -r 5 option on a 100MB file no chunk should be 
bigger than 5MB, so that as many errors as possible can be recovered?
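A back-of-the-envelope sketch of that reasoning (numbers purely 
illustrative, not a statement about par2 internals):

```shell
# Illustration only: with `par2 create -r 5` on a 100 MB archive you get
# roughly 5 MB of recovery data, so a single damaged region larger than
# that cannot be fully repaired.
file_mb=100
redundancy_pct=5
recovery_mb=$(( file_mb * redundancy_pct / 100 ))
echo "recovery data: ~${recovery_mb} MB"
echo "keep chunks <= ${recovery_mb} MB to stay repairable"
```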

I googled around quite a lot and searched the list but didn't find an
answer (yet),
TIA & regards,
Bernhard



---
SF.Net email is Sponsored by the Better Software Conference & EXPO
September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices
Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA
Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] rsyncp problem --prune-empty-dirs

2007-02-15 Thread Bernhard Ott
Hi,
there seems to be a problem using the rsync --prune-empty-dirs (-m) 
option with BackupPC (see log file/config below).
The rsync command and options work on all clients when invoked via 
shell (and, of course, BackupPC works without the -m option), but not 
via File::RsyncP. It seems as if rsync "reads" all the directories and 
filters them afterwards, so it might be a timeout issue?
Or am I missing something?

Regards,
Bernhard


### log
Connected to 192.168.x.x:873, remote version 29
Negotiated protocol version 26
Connected to module Ddrive
Sending args: --server --sender --numeric-ids --perms --owner --group -D 
--links --times --block-size=2048 --recursive --prune-empty-dirs -D 
--ignore-times . .
Read EOF:
Tried again: got 0 bytes
Done: 0 files, 0 bytes
Got fatal error during xfer (Unable to read 4 bytes)
Backup aborted (Unable to read 4 bytes)



### Rsync Args of host.pl
$Conf{RsyncArgs} = [
    '--numeric-ids',
    '--perms',
    '--owner',
    '--group',
    '--devices',
    '--links',
    '--times',
    '--block-size=2048',
    '--recursive',
    '--prune-empty-dirs',
    '--checksum-seed=32761',
    # Add additional arguments here
    '-D',
    '--include', '**/',
    '--include', '**/[mM][iI][tT]_[aA][lL][lL][eE][sS]/*',
    '--exclude', '*',
];

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] rsyncp problem --prune-empty-dirs

2007-02-25 Thread Bernhard Ott
Holger Parplies wrote:
> presuming I had a new enough version of rsync for the man page to include
> an explanation of what '--prune-empty-dirs' does, I'd probably be asking why
> you would want to use that.
It's the only way (as far as I understand the rsync man page) to include
a directory recursively. The downside is that the whole tree is
included (containing only directories but no files), which makes it
difficult to find the backup files for recovery.
I tried $Conf{BackupFilesOnly} first, but that didn't work.
> 
> Generally speaking, you can't just add any option your client side rsync
> might support. Some options might work, some might be silently ignored,
> others will break things. Is '--prune-empty-dirs' a request to the server
> side rsync process (modifying the file list) or to the client side
> (File::RsyncP in this case), or does it even affect the protocol exchange
> between both? File::RsyncP is known not to support all rsync options, much
> less recent extensions.
I was afraid to hear that ;-)

> This seems to indicate you are not running the latest version of
> File::RsyncP. Which version are you running?
Debian says File::RsyncP 0.64-1; backuppc is 2.1.2pl1.

> 
>> Sending args: --server --sender --numeric-ids --perms --owner --group -D 
>> --links --times --block-size=2048 --recursive --prune-empty-dirs -D 
>> --ignore-times . .

> 
> This does not seem to agree with your config file.
you're right - I have to check that ...
> 
> 
> Are you sure your --include and --exclude options are compatible with what
> BackupPC generates? Are '--include=**/' and '--prune-empty-dirs' compatible?
The syntax is from the man page and, as mentioned above, the only way I 
found to include a specific pattern (directory) wherever it shows up 
in the tree.
rsync -avm [EMAIL PROTECTED]::share --include='*/' 
--include='MIT_ALLES/*' --exclude='*' works as expected.
The main problem for me was finding out how BackupPC and the different 
transfer methods deal with $Conf{BackupFilesOnly} values; I still have 
to work on that next week ;-) Unfortunately I deleted the complete 
pc directory (including the log files), so I have to set up a new host.pl.

Kind regards,
Bernhard




[BackupPC-users] maximum backup file sizes - positive surprise?

2007-03-13 Thread Bernhard Ott
Hi,
I just discovered that BackupPC (2.1.2pl1) managed to back up an 
11.67GB file via rsyncd/cygwin ... how come?
Are the known limitations at
http://backuppc.sourceforge.net/faq/limitations.html#maximum_backup_file_sizes
outdated?
Sorry if this is an "old" topic, but I didn't find the answer in the 
archives, and I really need to know for an ongoing project where big 
files are very common.
How far can I push the file size?
Is anybody interested in test results?

Regards,
Bernhard



Re: [BackupPC-users] BackupPC for file backups

2007-03-14 Thread Bernhard Ott
Tino Schwarze wrote:
> I'd try to get some kind of incremental DB dump from these databases,
> then backup these. Backup of DB space directly is not a good idea since
> it usually leads to inconsistent databases. You should always use the
> databases facilities to e.g. take a nightly dump (which uses a
> transaction for consistency).

I use $Conf{DumpPreUserCmd} for that purpose (to be sure that the dumps 
and the backups are synchronized)
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_dumppreusercmd_
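For example (script name and path are hypothetical; BackupPC 
substitutes $sshPath and $host in user commands):

```perl
# Hypothetical example: dump the database on the client right before
# each backup, so the dump file BackupPC picks up is always fresh.
$Conf{DumpPreUserCmd} = '$sshPath -q -x root@$host /usr/local/bin/db-dump.sh';
# Optionally make a non-zero exit status of the dump script abort the
# backup (supported from BackupPC 3.x on):
$Conf{UserCmdCheckStatus} = 1;
```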

Regards,
Bernhard



Re: [BackupPC-users] Problems with backuppc

2007-03-15 Thread Bernhard Ott
Peter Nearing wrote:
> Aaron,
> 
>   When I ran the command line that it's trying, the data isn't coming, 
> rsync is running on the client, but it stops there.  The backuppc logs 
> state that it's saving the data as a partial, tho.
If backuppc saves partial backups, there must be some kind of data 
finding its way to the server.
Which value is shown in the "Duration" column of your host's backup 
summary? If it's the same as your timeout setting in config.pl, then 
follow Aaron's advice. Maybe try to split the shares?
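In config.pl terms, the two knobs to try would look something like this 
(timeout value and share names are hypothetical examples):

```perl
# Hypothetical sketch: raise the per-client timeout and split one big
# share into smaller ones so each transfer finishes within the limit.
$Conf{ClientTimeout}  = 72000;                     # seconds
$Conf{RsyncShareName} = ['/home', '/var', '/etc'];
```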

BTW, what amount of data are we talking about?

Bernhard




Re: [BackupPC-users] How to reinstall BackupPc -

2007-03-17 Thread Bernhard Ott
vladimir domjan wrote:
> HI,
> 
> I just installed backuppc on debian box with apt-get install backuppc.
> It installed version 2.x.x.. Test it thourgh webinterface and it worked .
> 
> Then I decided to upgrade to latest stable release.
You mean backuppc 3.0 or debian sarge ;-)?

> I tried to remove backuppc and reinstall it. But I get message: This 
> module is already enabled!
I think apt "said" this because backuppc was still running when you 
installed the new package?

> Where to disable this module and how to reinstall backuppc.
Try: /etc/init.d/backuppc stop
Then start aptitude and *purge* your old backuppc package (don't forget 
to make a backup of /etc/backuppc first). Reinstall the new package with 
apt-get install (-f? Debian might complain about broken dependencies) 
and you should be fine.

hope that helps,
Bernhard



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Bernhard Ott
 Original Message 
Subject: Re:[BackupPC-users] very slow backup speed
From: Evren Yurtesen <[EMAIL PROTECTED]>
To: David Rees <[EMAIL PROTECTED]>
Date: 26.03.2007 23:37

> David Rees wrote:
> 
> 
> It is true that BackupPC is great, however backuppc is slow because it 
> is trying to make backup of a single instance of each file to save 
> space. Now we are wasting (perhaps even more?) space to make it fast 
> when we do raid1.
You can't be serious about that: let's say you have a handful of 
workstations with full backups of 200GB each and perform backups for a 
couple of weeks. In my case, after a month: 1.4TB for the fulls and 
179GB for the incrementals; after pooling and compression, 203 (!) GB 
TOTAL.
Xfer time for a 130GB full: 50min. How fast are your tapes?
But if you prefer changing tapes (and spending a lot more money on the 
drives), go ahead ... so much for "wasting space" ;-)

Regards, Bernhard




Re: [BackupPC-users] rsync file size >50GB

2007-04-02 Thread Bernhard Ott
 Original Message 
Subject: Re:[BackupPC-users] rsync file size >50GB
From: Craig Barratt <[EMAIL PROTECTED]>
To: Bernhard Ott <[EMAIL PROTECTED]>
Date: 02.04.2007 09:23

> Bernhard Ott writes:
> 
>> So it seems that I have a good reason to perform the upgrade? ;-)
>> Do you recommend to continue using rsync for large files (10-20GB quite 
>> regularly, sometimes up to >80GB)? Due to the fact that I have to deal 
>> with a mixed OS-situation I would like to stick to rsync.
> 
> Yes - you should upgrade and rsync should work fine for large files.
> 
>> BTW can the recent version of RsyncP be used with the --prune-empty-dirs 
>> option?
> 
> I don't think so.  Looking at the rsync source implies it needs
> protocol version 29 to work, and File::RsyncP only supports up
> to 28.
Obviously I'm the only one who is looking forward to using this option?
My clients are doing finite element calculations with *huge* data 
output; the backup strategy is to save the input data.
For top-priority projects we back up everything using a virtual host 
with different FullKeepCnt values, including only a single specified 
subdirectory. So it's kind of a pain to have all the empty dirs 
transferred, but we can live with that. Or am I missing a much more 
elegant approach?
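The virtual-host setup looks roughly like this in the per-host config 
(host name, share and subdirectory paths are hypothetical):

```perl
# Hypothetical per-host config for a 'virtual' host that backs up only
# one project subdirectory of the real client, with its own retention.
$Conf{ClientNameAlias} = 'realclient';             # resolve to the real machine
$Conf{RsyncShareName}  = ['/data'];
$Conf{BackupFilesOnly} = { '/data' => ['/projects/topprio'] };
$Conf{FullKeepCnt}     = [8, 0, 4];                # keep more fulls here
```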

Thanks a lot for your help!
Bernhard



Re: [BackupPC-users] rsync error: error allocating core memory buffers

2007-04-09 Thread Bernhard Ott
 Original Message 
Subject: Re:[BackupPC-users] rsync error: error allocating core memory 
buffers
From: Holger Parplies <[EMAIL PROTECTED]>
To: John Hannfield <[EMAIL PROTECTED]>
Date: 29.03.2007 02:24

 > Hi,
 >
 > John Hannfield wrote on 28.03.2007 at 16:12:23 [[BackupPC-users] 
rsync error: error allocating core memory buffers]:
 >> Backups work fine, but restores over rsync and ssh are failing with an
 >> rsync error:
 >> [...]
 >> Has anyone seen this before and know of a solution?
 >
 > no, but I notice that you are running different rsync versions (well, 
a new
 > rsync on the host and an older File::RsyncP on the BackupPC server, as it
 > seems):
 >
 >> Got remote protocol 29
 >> Negotiated protocol version 28
 >
Even the latest File::RsyncP only supports up to protocol 28 (as Craig 
pointed out).

Regards,
Bernhard




Re: [BackupPC-users] Next 3.x.y release

2007-04-17 Thread Bernhard Ott
 Original Message 
Subject: Re:[BackupPC-users] Next 3.x.y release
From: Brendan Simon <[EMAIL PROTECTED]>
To: backuppc-users 
Date: 18.04.2007 07:27

> Cool. So are you saying that 3.0.0 has no known critical bugs, and
> that 3.1.0 would have minor bug fixes/improvements and some new
> features.  If so, then I should be confident in upgrading from 2.1.1
> to 3.0.0 and not have any problems, right?
> 
> Has anyone had problems upgrading from 2.1.1 to 3.0.0 ??? BTW, I'm
> running a Debian server.

The upgrade worked fine for me with the Debian package from unstable on 
Debian etch. Just don't forget to upgrade File::RsyncP ;-)
So far only improvements, no bugs.

Bernhard


-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/