[BackupPC-users] Upgrading is changing many things in config.pl?!

2011-02-07 Thread Boniforti Flavio
Hello everybody.

I'm in the middle of upgrading BackupPC on my Debian Sid system and I see many
differences in the config.pl file. Apart from the differences caused by my own
custom parameters, the main differences I see look like:

-$Conf{BackupPCNightlyPeriod} = '1';
+$Conf{BackupPCNightlyPeriod} = 1;

There is only one difference: the removal of the quote characters. Is this
vital, or may I leave the file untouched (thus saving me from redoing changes
to my current configuration)?

Thanks in advance and kind regards.

Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: fla...@piramide.ch 

--
The modern datacenter depends on network connectivity to access resources
and provide services. The best practices for maximizing a physical server's
connectivity to a physical network are well understood - see how these
rules translate into the virtual world? 
http://p.sf.net/sfu/oracle-sfdevnlfb
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] backup for all of the hosts at certain point of time

2011-02-07 Thread Левкович Андрей
Is it possible to run a backup for all of the hosts at the same time?


Re: [BackupPC-users] backup for all of the hosts at certain point of time

2011-02-07 Thread Sorin Srbu
-Original Message-
From: Левкович Андрей [mailto:volan...@inbox.ru]
Sent: Monday, February 07, 2011 1:13 PM
To: backuppc-users
Subject: [BackupPC-users] backup for all of the hosts at certain point
of time

Is it possible to run a backup for all of the hosts at the same time?

I don't follow, please clarify!

Otherwise, in BPC you can specify a time window within which all hosts
will be queued to be backed up. Is this what you mean?
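For reference, that window is driven by the wakeup and blackout settings in config.pl; the values below are a hypothetical sketch, not taken from the thread:

```perl
# Hypothetical sketch: wake the scheduler every hour, but keep
# backups out of the 7:00-17:00 weekday window.
$Conf{WakeupSchedule} = [1..23];
$Conf{BlackoutPeriods} = [
    {
        hourBegin =>  7.0,
        hourEnd   => 17.0,
        weekDays  => [1, 2, 3, 4, 5],   # Monday-Friday
    },
];
```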

-- 
/Sorin




Re: [BackupPC-users] backup for all of the hosts at certain point of time

2011-02-07 Thread Levkovich Andrew
I'll try to explain:
I have a schedule: every day at 5 pm BPC runs the queue, but at 5 pm it only
backs up 2 or 4 machines. I need to start backups for all 20 hosts.


On Mon, 7 Feb 2011 13:48:31 +0100, Sorin Srbu sorin.s...@orgfarm.uu.se wrote:

 -Original Message-
 From: Левкович Андрей [mailto:volan...@inbox.ru]
 Sent: Monday, February 07, 2011 1:13 PM
 To: backuppc-users
 Subject: [BackupPC-users] backup for all of the hosts at certain point
 of time
 
 Is it possible to run a backup for all of the hosts at the same time?
 
 I don't follow, please clarify!
 
 Otherwise, in BPC you can specify a time window within which all hosts
 will be queued to be backed up. Is this what you mean?
 
 -- 
 /Sorin
 




Re: [BackupPC-users] backup for all of the hosts at certain point of time

2011-02-07 Thread Daniel Berteaud
You need to adjust the MaxBackups number if you really want to back up
all 20 at once. But increasing this value to 20 will probably slow down the
overall run (slower than backing them up 2 or 4 at a time).
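Concretely, the knob Daniel refers to lives in config.pl; the numbers below are illustrative only, not a recommendation:

```perl
# Illustrative values only -- 20 concurrent jobs will saturate most
# servers, as noted above.
$Conf{MaxBackups}     = 20;  # simultaneous scheduled backups
$Conf{MaxUserBackups} = 4;   # additional user-requested backups
```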

Regards, Daniel

On Monday, 7 February 2011 at 16:26 +0300, Levkovich Andrew wrote:
 I'll try to explain:
 I have a schedule: every day at 5 pm BPC runs the queue, but at 5 pm it only
 backs up 2 or 4 machines. I need to start backups for all 20 hosts.
 
 
 On Mon, 7 Feb 2011 13:48:31 +0100, Sorin Srbu sorin.s...@orgfarm.uu.se wrote:
 
  -Original Message-
  From: Левкович Андрей [mailto:volan...@inbox.ru]
  Sent: Monday, February 07, 2011 1:13 PM
  To: backuppc-users
  Subject: [BackupPC-users] backup for all of the hosts at certain point
  of time
  
  Is it possible to run a backup for all of the hosts at the same time?
  
  I don't follow, please clarify!
  
  Otherwise, in BPC you can specify a time window within which all hosts
  will be queued to be backed up. Is this what you mean?
  
  -- 
  /Sorin
  
 
 

-- 
Daniel Berteaud
FIREWALL-SERVICES SARL.
Société de Services en Logiciels Libres
Technopôle Montesquieu
33650 MARTILLAC
Tel : 05 56 64 15 32
Fax : 05 56 64 15 32
Mail: dan...@firewall-services.com
Web : http://www.firewall-services.com




Re: [BackupPC-users] backup for all of the hosts at certain point of time

2011-02-07 Thread Sorin Srbu
-Original Message-
From: Levkovich Andrew [mailto:volan...@inbox.ru]
Sent: Monday, February 07, 2011 2:27 PM
To: sorin.s...@orgfarm.uu.se; General list for user discussion, questions
and support
Subject: Re[2]: [BackupPC-users] backup for all of the hosts at certain
point of time

I'll try to explain:
I have a schedule: every day at 5 pm BPC runs the queue, but at 5 pm it
only backs up 2 or 4 machines. I need to start backups for all 20
hosts.

Ah, ok. But you may want to reconsider backing up all your twenty hosts at
once. Are you sure your network and backup server can handle that amount of
load? Normally one sets the queue to handle two to four hosts at once.

-- 
/Sorin




Re: [BackupPC-users] backup for all of the hosts at certain point of time

2011-02-07 Thread Sorin Srbu
-Original Message-
From: Daniel Berteaud [mailto:d...@firewall-services.com]
Sent: Monday, February 07, 2011 2:43 PM
To: Levkovich Andrew; General list for user discussion, questions and
support
Cc: sorin.s...@orgfarm.uu.se
Subject: Re: [BackupPC-users] backup for all of the hosts at certain
point of time

You need to adjust the MaxBackups number if you really want to back up
all 20 at once. But increasing this value to 20 will probably slow down the
overall run (slower than backing them up 2 or 4 at a time).

Exactly! 8-)

What kind of hardware and infrastructure would you actually need to perform a 
backup of twenty hosts at once, insofar as this is quantifiable?

-- 
/Sorin




Re: [BackupPC-users] backup for all of the hosts at certain point of time

2011-02-07 Thread Les Mikesell
On 2/7/11 7:26 AM, Levkovich Andrew wrote:
 I'll try to explain:
 I have a schedule: every day at 5 pm BPC runs the queue, but at 5 pm it only
 backs up 2 or 4 machines. I need to start backups for all 20 hosts.

You will almost certainly finish faster if you back up a few at a time than if 
the server tries to run them all at once.  If you have a real need to have a 
consistent snapshot of all the filesystems at one point in time you'll probably 
need support from the target filesystems to do the snapshot locally (LVM, VSS, 
zfs, etc.) and let backuppc back up the snapshots.

-- 
   Les Mikesell
lesmikes...@gmail.com





Re: [BackupPC-users] backup for all of the hosts at certain point of time

2011-02-07 Thread Levkovich Andrew
In my case it's about 600 MB-1.5 GB from all hosts per day (incremental
backups) :) The full backups are about 25-30 GB; they are done at night.

Thanks for the reply, I will watch the results of changing MaxBackups.


On Mon, 7 Feb 2011 14:53:43 +0100, Sorin Srbu sorin.s...@orgfarm.uu.se wrote:

 -Original Message-
 From: Daniel Berteaud [mailto:d...@firewall-services.com]
 Sent: Monday, February 07, 2011 2:43 PM
 To: Levkovich Andrew; General list for user discussion, questions and
 support
 Cc: sorin.s...@orgfarm.uu.se
 Subject: Re: [BackupPC-users] backup for all of the hosts at certain
 point of time
 
 You need to adjust the MaxBackups number if you really want to back up
 all 20 at once. But increasing this value to 20 will probably slow down the
 overall run (slower than backing them up 2 or 4 at a time).
 
 Exactly! 8-)
 
 What kind of hardware and infrastructure would you actually need to perform a 
 backup of twenty hosts at once, insofar as this is quantifiable?
 
 -- 
 /Sorin
 




Re: [BackupPC-users] Upgrading is changing many things in config.pl?!

2011-02-07 Thread Boniforti Flavio
Hello again.

 I'm in the middle of upgrading BackupPC on my Debian Sid system 
 and I see many differences in the config.pl file. Apart from the 
 differences caused by my own custom parameters, the 
 main differences I see look like:
 
 -$Conf{BackupPCNightlyPeriod} = '1';
 +$Conf{BackupPCNightlyPeriod} = 1;

OK, not having much time, I decided to go for the simple solution: I
told APT to keep the current config.pl.

What happened now is that the upgrade *failed*, but 3.1.0 is still
running.

Here are the excerpts I got from APT:

Starting backuppc...2011-02-07 14:16:05 Another BackupPC is running (pid
1110); quitting...
invoke-rc.d: initscript backuppc, action start failed.
dpkg: error processing backuppc (--configure):
 subprocess installed post-installation script returned error exit
status 1

[...]

Errors were encountered while processing:
 backuppc
E: Sub-process /usr/bin/dpkg returned an error code (1)

Where may I look for errors?

Thanks,
Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: fla...@piramide.ch 



Re: [BackupPC-users] Upgrading is changing many things in config.pl?!

2011-02-07 Thread Jeffrey J. Kosowsky
Why don't you just make a copy of the config file(s), let apt proceed
normally, and then restore the old config files after the upgrade?
Then, if anything breaks when you run with the old configs, you can fix
it based on the error messages.
But based on my recollection, most of the changes were additions of
new variables or minor grammar/typo corrections, so it may just work
as-is.

What I did was a 'diff -ruw' between my edited 3.1.0 config file
and the original, virgin 3.1.0 config file.
Then I applied that as a *patch* to the new 3.2.0 config file. I did
this across versions (3.1.0 to 3.2.0) and across distros/architectures
(Fedora 12/x86 to Debian Lenny/armel). Of several dozen config
changes, I think only 2 hunks failed to apply, and it was pretty obvious
how to fix them.
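As a sketch of that workflow with toy stand-in files (the real paths and BackupPC settings will of course differ):

```shell
set -e
cd "$(mktemp -d)"
# Toy stand-ins: pristine old config, locally edited old config,
# and the freshly installed new config.
printf '$Conf{MaxBackups} = 2;\n$Conf{FullPeriod} = 6.97;\n' > config.pl.orig
printf '$Conf{MaxBackups} = 4;\n$Conf{FullPeriod} = 6.97;\n' > config.pl
printf '$Conf{MaxBackups} = 2;\n$Conf{FullPeriod} = 6.97;\n$Conf{NewSetting} = 1;\n' > config.pl.new
# Capture the local edits (diff exits non-zero when files differ)...
diff -uw config.pl.orig config.pl > local.patch || true
# ...and replay them onto the new file; rejected hunks would land in
# config.pl.new.rej for manual fixing.
patch config.pl.new < local.patch
grep -F 'MaxBackups' config.pl.new   # now shows the locally edited value
```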

Boniforti Flavio wrote at about 15:18:40 +0100 on Monday, February 7, 2011:
  Hello again.
  
   I'm in the middle of upgrading BackupPC on my Debian Sid system 
   and I see many differences in the config.pl file. Apart from the 
   differences caused by my own custom parameters, the 
   main differences I see look like:
   
   -$Conf{BackupPCNightlyPeriod} = '1';
   +$Conf{BackupPCNightlyPeriod} = 1;
  
  OK, not having much time, I decided to go for the simple solution: I
  told APT to keep the current config.pl.
  
  What now happened is that the upgrade *failed*, but still the 3.1.0 is
  running.
  
  Here the excerpts I got from APT:
  
  Starting backuppc...2011-02-07 14:16:05 Another BackupPC is running (pid
  1110); quitting...
  invoke-rc.d: initscript backuppc, action start failed.
  dpkg: error processing backuppc (--configure):
   subprocess installed post-installation script returned error exit
  status 1
  
  [...]
  
  Errors were encountered while processing:
   backuppc
  E: Sub-process /usr/bin/dpkg returned an error code (1)
  
  Where may I look for errors?
  
  Thanks,
  Flavio Boniforti
  
  PIRAMIDE INFORMATICA SAGL
  Via Ballerini 21
  6600 Locarno
  Switzerland
  Phone: +41 91 751 68 81
  Fax: +41 91 751 69 14
  URL: http://www.piramide.ch
  E-mail: fla...@piramide.ch 
  



Re: [BackupPC-users] One more time

2011-02-07 Thread Carl Wilhelm Soderstrom
On 02/03 01:45 , David Williams wrote:
 Got a little free time so thought that I would try again to see how I 
 can back up my laptop.
 It's set to DHCP and is connected to the same network as the BackupPC 
 server.

Is there any hope you can get a static IP address assignment for your
laptop, so that the BackupPC server doesn't have to 'hunt' for it?

This is what we do everywhere, and it solves the problem perfectly.
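On an ISC dhcpd server, such a fixed assignment is a host block keyed on the machine's MAC address; the MAC below is made up, and the name/address are borrowed from later in this thread purely for illustration:

```
# dhcpd.conf -- hypothetical reservation for the laptop
host laptop1 {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.15.155;
}
```

Most consumer routers expose the same idea in their web UI as "DHCP reservation" or "static lease".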

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] One more time

2011-02-07 Thread David Williams


  
  
The problem that I have is that I travel every week and hook my laptop
onto clients' networks, so DHCP is needed.
That said, perhaps there is a way (I think there is) that I can force my
DHCP server at home to always provide the same internal IP address to my
laptop, by specifying the MAC address or something?

David Williams
Check out our WebOS mobile phone app for the Palm Pre and Pixi:
Golf Caddie | Golf Caddie Forum | Golf Caddie FAQ by DTW-Consulting, Inc.


On 2/7/2011 10:49 AM, Carl Wilhelm Soderstrom wrote:

 On 02/03 01:45 , David Williams wrote:
  Got a little free time so thought that I would try again to see how I 
  can back up my laptop.
  It's set to DHCP and is connected to the same network as the BackupPC 
  server.

 Is there any hope you can get a static IP address assignment for your
 laptop, so that the BackupPC server doesn't have to 'hunt' for it?

 This is what we do everywhere, and it solves the problem perfectly.
  



Re: [BackupPC-users] One more time

2011-02-07 Thread Carl Wilhelm Soderstrom
On 02/07 11:38 , David Williams wrote:
 That said, perhaps there is a way (I think there is) that I can force my 
 DHCP server at home to always provide the same internal IP address to my 
 laptop, by specifying the MAC address or something?

That's what I was suggesting, though perhaps not clearly enough.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



[BackupPC-users] BackupPC and Windows junction points

2011-02-07 Thread Jeffrey J. Kosowsky
There was a thread a little while back warning about junction points
and Windows Vista/7. Also, the wiki
(http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Common_backup_excludes)
talks about the need to exclude junction points to avoid duplicate
backup trees.

But it seems to me that, at least when using cygwin rsync, junction
points are treated as symlinks, so there doesn't appear to be any
duplication in backups.

The only issue may be in restoring, in that cygwin rsync won't
distinguish between true symlinks and junction points, which are
different animals in the Windows world.

Am I missing something?
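For anyone who still wants the wiki's belt-and-braces approach, the excludes live in config.pl; the paths below are the usual Vista/7 junction points, but treat them as an illustrative guess rather than a vetted list:

```perl
# Illustrative guess at excluding common Vista/7 junction points;
# check the wiki page cited above for a maintained list.
$Conf{BackupFilesExclude} = {
    '*' => [
        '/Documents and Settings',
        '/Users/*/AppData/Local/Application Data',
    ],
};
```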



Re: [BackupPC-users] One more time

2011-02-07 Thread Les Mikesell
On 2/7/2011 11:38 AM, David Williams wrote:
 The problem that I have is that I travel every week and hook my laptop
 onto clients' networks, so DHCP is needed.
 That said, perhaps there is a way (I think there is) that I can force my
 DHCP server at home to always provide the same internal IP address to my
 laptop, by specifying the MAC address or something?

Yes, all DHCP servers should have a way to reserve an IP by MAC address, 
and most will give the same MAC the same IP for some reasonable length 
of time anyway, unless there is a big turnover and the lease expires. 
Anyway, if you just connect to the backuppc web interface from the 
laptop itself and request the backup, it will find you - and keep 
working at least until the IP changes.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] One more time

2011-02-07 Thread David Williams


  
  
Ok,

I will take a look into that. I'm not a networking person by any means
and tend to stumble through these things :)

David Williams
Check out our WebOS mobile phone app for the Palm Pre and Pixi:
Golf Caddie | Golf Caddie Forum | Golf Caddie FAQ by DTW-Consulting, Inc.
  


On 2/7/2011 12:19 PM, Carl Wilhelm Soderstrom wrote:

 On 02/07 11:38 , David Williams wrote:
  That said, perhaps there is a way (I think there is) that I can force my 
  DHCP server at home to always provide the same internal IP address to my 
  laptop, by specifying the MAC address or something?

 That's what I was suggesting, though perhaps not clearly enough.



  



Re: [BackupPC-users] One more time

2011-02-07 Thread David Williams


On 2/7/2011 12:32 PM, Les Mikesell wrote:

On 2/7/2011 11:38 AM, David Williams wrote:

The problem that I have is that I travel every week and hook my laptop
onto clients' networks, so DHCP is needed.
That said, perhaps there is a way (I think there is) that I can force my
DHCP server at home to always provide the same internal IP address to my
laptop, by specifying the MAC address or something?

Yes, all DHCP servers should have a way to reserve an IP by MAC address,
and most will give the same MAC the same IP for some reasonable length
of time anyway unless there is a big turnover with the lease expired.
*Anyway, if you just connect to the backuppc web interface from the
laptop itself and request the backup, it will find you - and keep
working at least until the IP changes.*

I do connect to the web interface from the laptop itself and request the 
backup, and I get the following message:


laptop1 is a DHCP host, and I don't know its IP address. I checked the 
netbios name of 192.168.15.155, and found that that machine is not laptop1.


Until I see laptop1 at a particular DHCP address, you can only start 
this request from the client machine itself.


That's my issue, but will look into trying to reserve a specific address 
for this laptop as that's probably an easier solution to the problem, at 
least for me :)





Re: [BackupPC-users] BackupPC and Windows junction points

2011-02-07 Thread Michael Stowe
 There was a thread a little while back warning about junction points
 and Windows Vista/7. Also, the wiki
 (http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Common_backup_excludes)
 talks about the need to exclude junction points to avoid duplicate
 backup trees.

 But it seems to me that, at least when using cygwin rsync, junction
 points are treated as symlinks, so there doesn't appear to be any
 duplication in backups.

 The only issue may be in restoring, in that cygwin rsync won't
 distinguish between true symlinks and junction points, which are
 different animals in the Windows world.

 Am I missing something?

I don't *think* you are -- junction points have been around since Windows
2000 or so, and are best described as a kind of limited symbolic link --
to be confusingly replaced in Vista with NTFS symbolic links (symlinks),
which are still called junction points for historical reasons.

These are not to be confused with directory junctions, which were kind of
the missing piece of a symbolic link -- and NTFS *does* also do hard
links.  On the plus side, in more recent versions of NTFS, although the
implementation is ultimately reparse-point weirdness, it behaves pretty
much like POSIX symbolic and hard links.

I'll whang together a chart:

POSIX         | Windows 7            | Older Windows
--------------+----------------------+----------------------------------
symbolic link | soft link or symlink | junction point/directory junction
hard link     | hard link            | hard link

Last I checked, cygwin/rsync/tar treated modern Windows symbolic links
sanely, and treated hard links like unrelated copies of the same file. 
I'm not sure if this is still the case or what the ramifications are for
recovery.



Re: [BackupPC-users] One more time

2011-02-07 Thread Les Mikesell
On 2/7/2011 12:41 PM, David Williams wrote:

 The problem that I have is that I travel every week and hook my laptop
 onto clients' networks, so DHCP is needed.
 That said, perhaps there is a way (I think there is) that I can force my
 DHCP server at home to always provide the same internal IP address to my
 laptop, by specifying the MAC address or something?
 Yes, all DHCP servers should have a way to reserve an IP by MAC address,
 and most will give the same MAC the same IP for some reasonable length
 of time anyway unless there is a big turnover with the lease expired.
 *Anyway, if you just connect to the backuppc web interface from the
 laptop itself and request the backup, it will find you - and keep
 working at least until the IP changes.*

 I do connect to the web interface from the laptop itself and request the
 backup and I get the following message:

 laptop1 is a DHCP host, and I don't know its IP address. I checked the
 netbios name of 192.168.15.155, and found that that machine is not laptop1.

That doesn't mean it can't find you. It means it didn't like the name it 
found.

 Until I see laptop1 at a particular DHCP address, you can only start
 this request from the client machine itself.

 That's my issue, but will look into trying to reserve a specific address
 for this laptop as that's probably an easier solution to the problem, at
 least for me :)

The check could be case sensitive.  What do you see if you do:
nmblookup -A 192.168.15.155

or from windows, 'nbtstat -A 192.168.15.155'?

Also, a quick brute-force fix would be to set ClientAlias to the current 
IP address of the box, changing as needed.
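For reference, the parameter in recent BackupPC versions is spelled $Conf{ClientNameAlias}; a minimal sketch of the brute-force approach (the address is just the one from this thread):

```perl
# Per-host config (e.g. laptop1.pl): pin the host to a fixed address
# instead of relying on DHCP/netbios discovery. Update as the IP changes.
$Conf{ClientNameAlias} = '192.168.15.155';
```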

-- 
   Les Mikesell
lesmikes...@gmail.com






[BackupPC-users] *BUMP* *BUMP* Re: BackupPC perl code hacking question... (Craig any chance you might have a suggestion?)

2011-02-07 Thread Jeffrey J. Kosowsky
Let me rewrite my earlier posting to be more clear so maybe someone
can help me.

I am using 'rsyncd' to back up several of my systems.
Rather than storing the rsyncd secret in the BackupPC config files as
$Conf{RsyncdPasswd}, I would prefer to keep it stored only once in each
client's /etc/rsyncd.secrets file.
I would then like to use DumpPreUserCmd to retrieve the rsyncd secret
from the client and set $Conf{RsyncdPasswd} at dump time.

To do this, I used the fact that DumpPreUserCmd can be a perl
subroutine. Here is a simplified version of my actual command:

$Conf{DumpPreUserCmd} = '&{sub {$args[1]{RsyncdPasswd} =
    `ssh -x mybackupclient get_rsyncd_secret`}}';

This uses the fact that $args[1] is %Conf and $args[0] is %vars (see
BackupPC_dump), so $args[1]{RsyncdPasswd} is equivalent to
$Conf{RsyncdPasswd}.

However, while this does set $Conf{RsyncdPasswd} properly within
BackupPC_dump itself, as I can verify by dumping it with Data::Dumper
inside the routine, the value *FAILS* to be passed on to Rsync.pm, where
it is actually used (indeed it remains set to '').

So, my question is: is there any way to dynamically set Conf parameters
along the lines I am trying to do?
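Not an answer to the perl inheritance question itself, but the symptom is what you'd see if %Conf = $bpc->Conf() hands each consumer its own *copy* of the config hash, so a later assignment inside BackupPC_dump never reaches the copy that Rsync.pm reads. The copy-on-assignment behavior, sketched in Python purely as an analogy (names made up):

```python
# Analogue of Perl's  %Conf = $bpc->Conf()  style hash assignment:
# assignment takes a snapshot, so later writes to one copy are
# invisible through the other.
master = {"RsyncdPasswd": ""}    # the "global" config

conf = dict(master)              # each module effectively takes a copy
conf["RsyncdPasswd"] = "secret"  # set it dynamically in one place only

# The original copy never sees the change:
assert master["RsyncdPasswd"] == ""
```

If that is the mechanism, the fix would be to update the hash that Rsync.pm actually reads (or re-fetch the config after DumpPreUserCmd runs) rather than a local copy.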


Jeffrey J. Kosowsky wrote at about 13:27:37 -0500 on Sunday, December 19, 2010:
  Jeffrey J. Kosowsky wrote at about 12:53:28 -0500 on Monday, December 13, 
  2010:
For reasons I can explain later, I am trying to set
$Conf{RsyncdPasswd} in the main routine of BackupPC_dump (I am
actually trying to do something a bit more complex but this is easier
to understand).

Now since %Conf = $bpc->Conf(), I would have thought that, for example,
setting $Conf{RsyncPasswd} = 'mypasswd' would then be pushed down to
all the routines called directly or indirectly from BackupPC_dump.

However, in Rsync.pm where the value of $Conf{RsyncPasswd} is actually
used, the value remains at ''.

(Of course setting the parameter the normal way within a config file
works and shows up as set in Rsync.pm.)

So why isn't it working when I set it at the top level?
And what would I have to set at the top level to make it properly
passed to Rsync.pm?

I'm sure I must be missing something about how perl inherits and/or
overwrites variables... but I am stumped here...





Re: [BackupPC-users] BackupPC and Windows junction points

2011-02-07 Thread Jeffrey J. Kosowsky
Michael Stowe wrote at about 12:44:51 -0600 on Monday, February 7, 2011:
   There was a thread a little while back warning about junction points
   and Windows Vista/7. Also, the wiki
   (http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Common_backup_excludes)
   talks about the need to exclude Junction points to avoid duplicate
   backup trees.
  
   But it seems to me that, at least when using cygwin rsync,
   junction points are treated as symlinks, so there doesn't appear
   to be any duplication in backups.
  
   The only issue may be in restoring in that cygwin rsync won't
   distinguish between true symlinks and junction points which are
   different animals in the Windows world.
  
   Am I missing something?
  
  I don't *think* you are -- junction points have been around since Windows
  2000 or so, and are best described as a kind of limited symbolic link --
  to be confusingly replaced in Vista with NTFS symbolic links (symlinks)
  which are still called junction points for historical reasons.
  
  These are not to be confused with directory junctions, which were kind of
  the missing piece of a symbolic link -- and NTFS *does* also do hard
  links.  On the plus side, in more recent versions of NTFS, although the
  implementation is ultimately reparse-point weirdness, it behaves pretty much
  like POSIX symbolic and hard links.
  
  I'll whang together a chart:
  POSIX         | Windows 7            | Older Windows
  --------------+----------------------+----------------------------------
  symbolic link | soft link or symlink | junction point/directory junction
  hard link     | hard link            | hard link
  
  Last I checked, cygwin/rsync/tar treated modern Windows symbolic links
  sanely, and treated hard links like unrelated copies of the same file. 
  I'm not sure if this is still the case or what the ramifications are for
  recovery.
  

Thanks for the additional clarification!
Now just to be extra certain, am I correct in my observation that
while Win7 adds lots of junction points (which, as we both agree, are
treated as symbolic links), it does not add any hard links?

So, if so, then there really shouldn't be any backup duplication
problem unless the *user* introduces his/her own new hard links either
via data or new program installs. But I also haven't seen many (if
any) hard links in typical commercial software.

So, I am concluding that from a backup perspective I don't need to
worry about data duplication.

---
On a side note, I *am* looking for a good way to cleanly list all the
junction points so that I can periodically catalog them for potential
future restore.

Note I tried 'dir /aL /s' but it doesn't give a very clean listing,
plus it seems to get hung up on junction loops itself. So, is there
any good code (either cmd.exe, PowerShell, or cygwin) to find all
junction points and list them in a simple two-column list
consisting of the source and the target? (Note standard cygwin
'find' or 'ls' won't help since they don't distinguish between
symlinks and junction points.)



Re: [BackupPC-users] BackupPC and Windows junction points

2011-02-07 Thread John Rouillard
On Mon, Feb 07, 2011 at 01:30:22PM -0500, Jeffrey J. Kosowsky wrote:
 There was a thread a little while back warning about junction points
 and Windows Vista/7. Also, the wiki
 (http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Common_backup_excludes)
 talks about the need to exclude Junction points to avoid duplicate
 backup trees.
 
 But it seems to me that, at least when using cygwin rsync,
 junction points are treated as symlinks, so there doesn't appear
 to be any duplication in backups.
 
 The only issue may be in restoring in that cygwin rsync won't
 distinguish between true symlinks and junction points which are
 different animals in the Windows world.
 
 Am I missing something?

I think so. The junction point isn't really treated as a symbolic
link. Rsync will back up a symbolic link as a link, it won't
dereference it (unless you ask it to). However a junction point grafts
the target location into the tree at that point and rsync merrily
continues to walk down and back up the grafted part of the tree. So
you have two copies of the files:

  1. the original location the junction point is pointing to
  2. the same files located under the junction point

Also your backups don't have a record of the junction point that rsync
traversed. When you restore the files you get two copies of the
grafted tree. One at each location.

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



[BackupPC-users] R: Upgrading is changing many thins in config.pl?!

2011-02-07 Thread Boniforti Flavio
Hello Jeff and sorry for top-replying but I'm not using a comfortable interface 
right now...

You state that there are minor corrections: do you think that the ' (single 
quotes) now *have to be used* for delimiting parameter values?

In fact, I'm now in a quite strange situation: 3.1.0 still running, but dpkg 
-l | grep backuppc tells me that 3.2.0 is installed!

Any clues?

Thanks.

Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: fla...@piramide.ch



-Messaggio originale-
Da: Jeffrey J. Kosowsky [mailto:backu...@kosowsky.org]
Inviato: lun 07.2.11 15:49
A: General list for user discussion,questions and support
Oggetto: Re: [BackupPC-users] Upgrading is changing many thins in config.pl?!
 
Why don't you just make a copy of the config file(s), let apt
proceed normally, and then restore the old config files after the
upgrade?
Then if anything breaks when you run with the old configs you can fix
it based on the error messages.
But based on my recollection, most of the changes were additions of
new variables or minor grammar/typo corrections, so it may just work
as-is.

What I did was to do a 'diff -ruw' between my edited 3.1.0 config file
and the original virgin 3.1.0 config file.
Then I applied that as a *patch* to the new 3.2.0 config file. I did
this across versions (3.1.0 -> 3.2.0) and across distros/architectures
(Fedora 12/x86 -> Debian Lenny/armel). I think of several dozen
config changes only 2 hunks failed to patch, and it was pretty obvious
how to fix them.

Boniforti Flavio wrote at about 15:18:40 +0100 on Monday, February 7, 2011:
  Hello again.
  
   I'm in the middle of upgrading on my Debian Sid of BackupPC 
   and get many differences in the config.pl file. Besides the 
   differences depending on custom parameters (done by me), the 
   main differences I see are like:
   
   -$Conf{BackupPCNightlyPeriod} = '1';
   +$Conf{BackupPCNightlyPeriod} = 1;
  
  OK, not having too much time, I decided to go for the simple solution: I
  told APT to keep the current config.pl.
  
  What now happened is that the upgrade *failed*, but still the 3.1.0 is
  running.
  
  Here the excerpts I got from APT:
  
  Starting backuppc...2011-02-07 14:16:05 Another BackupPC is running (pid
  1110); quitting...
  invoke-rc.d: initscript backuppc, action start failed.
  dpkg: error processing backuppc (--configure):
   subprocess installed post-installation script returned error exit
  status 1
  
  [...]
  
  Errors were encountered while processing:
   backuppc
  E: Sub-process /usr/bin/dpkg returned an error code (1)
  
  Where may I look for errors?
  
  Thanks,
  Flavio Boniforti
  
  PIRAMIDE INFORMATICA SAGL
  Via Ballerini 21
  6600 Locarno
  Switzerland
  Phone: +41 91 751 68 81
  Fax: +41 91 751 69 14
  URL: http://www.piramide.ch
  E-mail: fla...@piramide.ch 
  




Re: [BackupPC-users] BackupPC and Windows junction points

2011-02-07 Thread Michael Stowe
 On Mon, Feb 07, 2011 at 01:30:22PM -0500, Jeffrey J. Kosowsky wrote:
 There was a thread a little while back warning about junction points
 and Windows Vista/7. Also, the wiki
 (http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Common_backup_excludes)
 talks about the need to exclude Junction points to avoid duplicate
 backup trees.

 But it seems to me that, at least when using cygwin rsync,
 junction points are treated as symlinks, so there doesn't appear
 to be any duplication in backups.

 The only issue may be in restoring in that cygwin rsync won't
 distinguish between true symlinks and junction points which are
 different animals in the Windows world.

 Am I missing something?

 I think so. The junction point isn't really treated as a symbolic
 link. Rsync will back up a symbolic link as a link, it won't
 dereference it (unless you ask it to). However a junction point grafts
 the target location into the tree at that point and rsync merrily
 continues to walk down and back up the grafted part of the tree. So
 you have two copies of the files:

This has NOT been my experience on Windows 7.  I simply get symlinks,
which is what I expect.

The difference may be explained by versions of cygwin/rsync; I do recall
older versions following symlinks as you describe, but that is NOT what I
see.  I don't have any Vista systems, so I can't speak to how those
behave, but realistically, it should be a matter of how rsync sees an NTFS
symbolic link, which is semantically identical to a *nix symbolic link.

   1 the original location the junction point is pointing to
   2 the same files located under the junction point

 Also your backups don't have a record of the junction point that rsync
 traversed. When you restore the files you get two copies of the
 grafted tree. One at each location.

 --
   -- rouilj

 John Rouillard   System Administrator
 Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111




Re: [BackupPC-users] BackupPC and Windows junction points

2011-02-07 Thread Michael Stowe
 Thanks for the additional clarification!
 Now just to be extra certain, am I correct in my observation that while
Win7 add lots of junction points (which as we both agree are treated as
symbolic links), it does not add any hard links.

Yes, I'm not aware of *any* hard links used in any Windows OS (although
you *can* create them, none are set up by default).

 So, if so, then there really shouldn't be any backup duplication problem
unless the *user* introduces his/her own new hard links either via data
or new program installs. But I also haven't seen many (if any) hard
links in typical commercial software.

Yes.

 So, I am concluding that from a backup perspective I don't need to worry
about data duplication.

I'd say yes to this as well.

 On a side note, I *am* looking for a good way to cleanly list all the
junction points so that I can periodically catalog them for potential
future restore.

 Note I tried dir /aL /s but it doesn't give a very clean listing plus
it seems to itself get hung up on junction loops. So, is there any good
code (either cmd.exe, powershell, or cgywin) to find all junction points
and list them in a simple 2-column like list
 consisting of the source and the target (note standard cygwin 'find'
or 'ls' won't help since it doesn't distinguish between
 symlinks and junction points)

My first suggestion is what you've already tried: 'dir /aL'.  The Windows
command shell behaves quite differently than POSIX, so (for example)
deleting a symlink to a directory in Windows actually deletes the
*contents*, not the symlink itself.  (Instead, you use rmdir to delete the
symlink.  Yeesh.)

I'm not aware of a tool that gets past the looping issues, or even that
has better output than dir (this doesn't mean they don't exist.)





Re: [BackupPC-users] One more time

2011-02-07 Thread Tyler J. Wagner
On Mon, 2011-02-07 at 12:39 -0500, Ryan Blake wrote:
 However, if that's not an option for whatever reason, the only other option 
 would be to ensure that your dhcpd service is properly connected/integrated 
 with bind [named] (assuming you are using these).

That's what I do at my office. However, I use dnsmasq at home, which
provides both DNS and DHCP. It automatically integrates them, so when
you supply your hostname with the DHCP request (as most clients do), it
gets added to the local DNS domain automatically.

Also, consider Bonjour/Avahi. Then you can use hostname.local as your
name/alias and it will work. Ubuntu and Macs will support this out of
the box. On Windows it's easy to install.

Regards,
Tyler

-- 
... that your voice is amplified to the degree where it reaches from
one end of the country to the other does not confer upon you greater
wisdom or understanding than you possessed when your voice reached only
from one end of the bar to the other.
   -- Edward R. Murrow




[BackupPC-users] Issue with remote backup of server(s) over VPN after failover

2011-02-07 Thread Scott Saunders
I've got a couple of servers running in a 2-node master/slave cluster
using pacemaker (corosync)/drbd. Like other servers, I've got them
configured to back up to a local BackupPC server as well as a remote (VPN
over T1) BackupPC server (rsync over ssh for both). However, with the
cluster, only the master node has the partition mounted that is to be
backed up, so the backups for the slave node will always fail. This is
ok, but maybe there is a better way to do this?

Anyway, to get the backups started I brought the remote backup server
local to take a full backup (because it's ~300GB). After a failover of
the master node, the slave becomes the new master, gets the partition
mounted, and thus has something to back up. The local backups work
without a problem on the new master. The remote backups act like they
are working on the new master, but never actually finish. I've let them
go more than a week, which is well past the default client timeout,
which has actually never taken effect with these two boxes. This
erroneous behavior persists when failing back over to the original
master. The only way I get the remote backups going again is to bring
the remote server local for a full backup. Any subsequent remote backups
work after this until a failover of the cluster occurs.

Remote backups for other servers in the past have been performed without
these issues. Any ideas as to why there are issues with the remote
backup in this setup? And what might I try to get the backups running
again on the master node after a failover, without having to bring the
remote server local every time?



Re: [BackupPC-users] BackupPC and Windows junction points

2011-02-07 Thread John Rouillard
On Mon, Feb 07, 2011 at 01:56:08PM -0600, Michael Stowe wrote:
  On a side note, I *am* looking for a good way to cleanly list all the
 junction points so that I can periodically catalog them for potential
 future restore.
 
  Note I tried dir /aL /s but it doesn't give a very clean listing plus
 it seems to itself get hung up on junction loops. So, is there any good
 code (either cmd.exe, powershell, or cgywin) to find all junction points
 and list them in a simple 2-column like list
  consisting of the source and the target (note standard cygwin 'find'
 or 'ls' won't help since it doesn't distinguish between
  symlinks and junction points)
 
 My first suggestion is what you've already tried:  dir /aL  The Windows
 command shell behaves quite differently than POSIX, so (for example)
 deleting a symlink to a directory in Windows actually deletes the
 *contents* not the symlink itself.  (Instead, you use rmdir to delete the
 symlink.  Yeesh.)
 
 I'm not aware of a tool that gets past the looping issues, or even that
 has better output than dir (this doesn't mean they don't exist.)

junction.exe from Sysinternals comes to mind as another junction
detection/creation tool. Not sure if it handles loops properly or not.

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



Re: [BackupPC-users] BackupPC and Windows junction points

2011-02-07 Thread Jeffrey J. Kosowsky
John Rouillard wrote at about 23:44:48 + on Monday, February 7, 2011:
  On Mon, Feb 07, 2011 at 01:56:08PM -0600, Michael Stowe wrote:
On a side note, I *am* looking for a good way to cleanly list all the
   junction points so that I can periodically catalog them for potential
   future restore.
   
Note I tried dir /aL /s but it doesn't give a very clean listing plus
   it seems to itself get hung up on junction loops. So, is there any good
   code (either cmd.exe, powershell, or cgywin) to find all junction points
   and list them in a simple 2-column like list
consisting of the source and the target (note standard cygwin 'find'
   or 'ls' won't help since it doesn't distinguish between
symlinks and junction points)
   
   My first suggestion is what you've already tried:  dir /aL  The Windows
   command shell behaves quite differently than POSIX, so (for example)
   deleting a symlink to a directory in Windows actually deletes the
   *contents* not the symlink itself.  (Instead, you use rmdir to delete the
   symlink.  Yeesh.)
   
   I'm not aware of a tool that gets past the looping issues, or even that
   has better output than dir (this doesn't mean they don't exist.)
  
  junction.exe from sysinternals comes to mind as another junction
  detection/creation tool. Not sure if it handles loops properly or not.

Unfortunately junction.exe also gets caught up in loops (due to
following junction points).


I don't understand why Microsoft can't do recursion 101.
As in:

find_junctions(dir) {
    for each 'entry' in 'dir' {
        if 'entry' is a junction, print 'entry'
        else if 'entry' is a directory, find_junctions(entry)
    }
}

How hard would that be?

I don't know *anything* about PowerShell, but is that something that
would be easy or even possible to write in PowerShell?

(I could do it in bash with calls to junction.exe or dir.exe but I am
concerned that the speed would suffer if I had to call a function like
junction on *every* single directory entry recursively)



Re: [BackupPC-users] R: Upgrading is changing many thins in config.pl?!

2011-02-07 Thread Jeffrey J. Kosowsky
Boniforti Flavio wrote at about 20:35:37 +0100 on Monday, February 7, 2011:
  Hello Jeff and sorry for top-replying but I'm not using a comfortable 
  interface right now...
  
  You state that there are minor corrections: do you think that the ' 
  (single quotes) now *have to be used* for delimiting parameter values?

The config file is really just perl code, and since perl hasn't
changed and the code is just eval'd, it probably won't make any
difference.

  In fact, I'm now in a quite strange situation: 3.1.0 still running, but 
  dpkg -l | grep backuppc tells me that 3.2.0 is installed!

I'm no debian expert, and on my debian machine I actually compiled it
from scratch since I didn't like the dependence on apache (I am
running it on an arm-based plug computer with just 512MB for root and
not much processing power).

But it seems like the install may have been only partial.
I would use 'dpkg -r' to remove what is there now and then reinstall.
Maybe even do a purge, if you are careful to copy over your old config files first.
