[BackupPC-users] Backing Up an NSS volume on OES2

2009-06-10 Thread Paul Hennion
Hi All,

I want to run a BackupPC server that will back up all of our servers, including some that run SLES with OES and have NSS volumes. Is this possible, and which distro would be best for running the BackupPC server?

TIA,
Paul



Re: [BackupPC-users] Backing Up an NSS volume on OES2

2009-06-10 Thread anandiwp
You can use any distro of your choice. Just download the package from sf.net 
and fire away. 

--
Thanks and Regards,

Anand Gupta

-Original Message-
From: Paul Hennion p...@bhs.org.za

Date: Wed, 10 Jun 2009 10:59:45 
To: backuppc-users@lists.sourceforge.net
Subject: [BackupPC-users] Backing Up an NSS volume on OES2


Hi All,

I am wanting to run a backuppc server that will backup all of our servers 
including some that 
run SLES with OES that have NSS volumes. Is this possible and what distro would 
be best to 
run the backuppc server?

TIA,
Paul



Re: [BackupPC-users] Backing Up an NSS volume on OES2

2009-06-10 Thread Paul Hennion
Um, yes, but what about sharing the NSS volumes? How does one authenticate?
-P

On 10 Jun 2009 at 9:39, anand...@gmail.com wrote:

 You can use any distro of your choice. Just download the package from sf.net 
 and fire away. 
 
 --
 Thanks and Regards,
 
 Anand Gupta
 
 -Original Message-
 From: Paul Hennion p...@bhs.org.za
 
 Date: Wed, 10 Jun 2009 10:59:45 
 To: backuppc-users@lists.sourceforge.net
 Subject: [BackupPC-users] Backing Up an NSS volume on OES2
 
 
 Hi All,
 
 I am wanting to run a backuppc server that will backup all of our servers 
 including some that 
 run SLES with OES that have NSS volumes. Is this possible and what distro 
 would be best to 
 run the backuppc server?
 
 TIA,
 Paul
 


Re: [BackupPC-users] backups not working - backupdisable picking up incorrectly

2009-06-10 Thread Steve Redmond
On Jun 09, Les Mikesell wrote:
 
 The usual reason for backups stopping is that your disk space is nearly 
 full but the error message makes this look like some other problem.


There's well over 600 GB available on the current storage pool, so I
don't think this is the issue. I've just gone through and checked all the
settings from the GUI as well, and there's nothing in place saying backups
should be disabled at all.

How does backuppc decide when there is not enough disk space to consider
performing backups?

Regards,
- Steve



Re: [BackupPC-users] SOLVED: backups not working - backupdisable picking up config incorrectly

2009-06-10 Thread Steve Redmond
Solved:

For reference, if anyone else has this problem: it seems that at some
point (presumably to stop backups) someone changed the FullPeriod
option to -1, which, upon reading the comments in greater detail,
disables backups from running.

Having changed this to a positive value, I can now get backups running.
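
For reference, a minimal config.pl sketch of the setting involved (the numeric values shown are the stock defaults, not our production settings):

    $Conf{FullPeriod} = 6.97;   # full backup roughly once a week; a negative
                                # value such as -1 suppresses the automatically
                                # scheduled backups, which is what had happened here
    $Conf{IncrPeriod} = 0.97;   # daily incrementals between fulls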

Thanks for the pointers.

Regards,
- Steve




Re: [BackupPC-users] Backing Up an NSS volume on OES2

2009-06-10 Thread Anand Gupta
Sorry, not sure on that, but I am sure someone on the mailing list knows
and will reply to you.


Thanks and Regards,

Anand Gupta

 Original Message  
Subject: Re: [BackupPC-users] Backing Up an NSS volume on OES2
From: Paul Hennion p...@bhs.org.za
To: anand...@gmail.com, General list for user discussion, questions and 
support backuppc-users@lists.sourceforge.net

Date: Wed Jun 10 15:28:12 2009

Um, yes, but what about sharing the nss volumes? how does one authenticate?
-P

On 10 Jun 2009 at 9:39, anand...@gmail.com wrote:

   

You can use any distro of your choice. Just download the package from sf.net 
and fire away.

--
Thanks and Regards,

Anand Gupta

-Original Message-
From: Paul Hennionp...@bhs.org.za

Date: Wed, 10 Jun 2009 10:59:45
To:backuppc-users@lists.sourceforge.net
Subject: [BackupPC-users] Backing Up an NSS volume on OES2


Hi All,

I am wanting to run a backuppc server that will backup all of our servers 
including some that
run SLES with OES that have NSS volumes. Is this possible and what distro would 
be best to
run the backuppc server?

TIA,
Paul



Re: [BackupPC-users] backups not working - backupdisable picking up incorrectly

2009-06-10 Thread Les Mikesell
Steve Redmond wrote:
 On Jun 09, Les Mikesell wrote:
 The usual reason for backups stopping is that your disk space is nearly 
 full but the error message makes this look like some other problem.

 
 There's well over 600 gigs available on the current storage pool so I
 don't think this is the issue. I've just gone through and checked all
 settings from the GUI aswell and there's nothing in place saying backups
 should be disabled at all.
 
 How does backuppc decide when there is not enough disk space to consider
 performing backups?

It looks at the percentage used: $Conf{DfMaxUsagePct} = 95;
But the symptoms would be different from yours - this just stops the
scheduled runs, but you can still start them through the web interface.
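
A minimal sketch of the settings involved (the DfCmd value is, as far as I know, the shipped default; BackupPC parses its output and skips scheduled backups once usage exceeds the limit):

    $Conf{DfCmd}         = '$dfPath $topDir';   # how free space on the pool is measured
    $Conf{DfMaxUsagePct} = 95;                  # above this, scheduled backups are skipped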

-- 
   Les Mikesell
lesmikes...@gmail.com






Re: [BackupPC-users] Clusters of hosts

2009-06-10 Thread John Rouillard
On Sat, May 30, 2009 at 10:16:33PM +0200, Pieter Wuille wrote:
 I don't know how common this usage is, but in our setup we have a lot of
 backuppc hosts that are physically located on a few machines only. It
 would be nice if it were possible to allow hosts on different machines to
 be backupped simultaneously, but prevent simultaneous backups(dumps) of
 hosts on the same machine.
 
 Any thoughts?

If you have a way of mapping the host names to a physical machine, you
can use my queuing/locking strategy described in:


  http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg13698.html

Create one queue/semaphore per physical machine and have the
$Conf{DumpPreUserCmd} command exit with an error if it can't get a
slot/lock (also you will have to set $Conf{UserCmdCheckStatus} = 1;).
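
As a rough, hypothetical illustration only (the lock scripts and the physical-machine argument below are made up; see John's linked post for the actual strategy), the per-host settings would look something like:

    $Conf{DumpPreUserCmd}     = '/usr/local/bin/acquire-slot physhost01';
    $Conf{DumpPostUserCmd}    = '/usr/local/bin/release-slot physhost01';
    $Conf{UserCmdCheckStatus} = 1;   # a non-zero exit from DumpPreUserCmd
                                     # makes BackupPC abandon this dump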

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



Re: [BackupPC-users] Upgrading from etch to lenny

2009-06-10 Thread Jim McNamara
On Tue, Jun 9, 2009 at 6:24 PM, Ward... James Ward jew...@torzo.com wrote:

 I have two etch BackupPC servers and two lenny BackupPC servers.  All were
 built at the OS level currently running.  I like the new BackupPC interface
 a lot and would like to upgrade the etch servers to lenny and therefore
 BackupPC.  How smoothly is this likely to go?  Any HOWTOs or READMEs or
 gotchas I can study beforehand?

 Thanks,

 James


I've had BackupPC running on Debian boxes since sarge. Upgrading the OS
never caused a problem for me. My hosts tend to back up single-digit numbers
of machines on LANs, but I haven't seen even a hiccup from doing the dist-upgrades.









Re: [BackupPC-users] Clusters of hosts

2009-06-10 Thread Les Mikesell
John Rouillard wrote:
 On Sat, May 30, 2009 at 10:16:33PM +0200, Pieter Wuille wrote:
 I don't know how common this usage is, but in our setup we have a lot of
 backuppc hosts that are physically located on a few machines only. It
 would be nice if it were possible to allow hosts on different machines to
 be backupped simultaneously, but prevent simultaneous backups(dumps) of
 hosts on the same machine.

 Any thoughts?
 
 If you have a way of mapping the host names to a physical machine, you
 can use my queing/locking strategy described in:
 
 
   
 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg13698.html
 
 Create one queue/semaphore per physical machine and have the
 $Conf{DumpPreUserCmd} command exit with an error if it can't get a
 slot/lock (also you will have to set $Conf{UserCmdCheckStatus} = 1;).

As a feature request, I think it would be nice to have a way to add
hosts to groups, then limit how many in each group the scheduler would
start at once.  There are several scenarios where this is needed to
avoid overloading some common resource - like a low-bandwidth link, as
well as sharing a physical host or filesystem.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Upgrading from etch to lenny

2009-06-10 Thread Ward... James Ward

Have you specifically done a dist-upgrade from etch to lenny?

On Jun 10, 2009, at 8:14 AM, Jim McNamara wrote:




On Tue, Jun 9, 2009 at 6:24 PM, Ward... James Ward  
jew...@torzo.com wrote:
I have two etch BackupPC servers and two lenny BackupPC servers.   
All were built at the OS level currently running.  I like the new  
BackupPC interface a lot and would like to upgrade the etch servers  
to lenny and therefore BackupPC.  How smoothly is this likely to  
go?  Any HOWTOs or READMEs or gotchas I can study beforehand?


Thanks,

James

I've had backuppc running on Debian boxes since sarge. Upgrading the  
OS never caused a problem for me. My hosts tend to backup single  
digit machine on LANs, but I haven't even seen a hiccup from doing  
the dist-upgrades.








Ward... James Ward
Tekco Management Group, LLC
jew...@torzo.com
520-290-0190x268
ICQ: 201663408





Re: [BackupPC-users] Clusters of hosts

2009-06-10 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 10:23:10 -0500 on Wednesday, June 10, 2009:
  John Rouillard wrote:
   On Sat, May 30, 2009 at 10:16:33PM +0200, Pieter Wuille wrote:
   I don't know how common this usage is, but in our setup we have a lot of
   backuppc hosts that are physically located on a few machines only. It
   would be nice if it were possible to allow hosts on different machines to
   be backupped simultaneously, but prevent simultaneous backups(dumps) of
   hosts on the same machine.
  
   Any thoughts?
   
   If you have a way of mapping the host names to a physical machine, you
   can use my queing/locking strategy described in:
   
   
 
   http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg13698.html
   
   Create one queue/semaphore per physical machine and have the
   $Conf{DumpPreUserCmd} command exit with an error if it can't get a
   slot/lock (also you will have to set $Conf{UserCmdCheckStatus} = 1;).
  
  As a feature request, I think it would be nice to have a way to add 
  hosts to groups, then limit how many in each group the scheduler would 
  start at once.  There are several scenarios where this is needed to 
  avoid overloading some common reasource - like a low-bandwidth link as 
  well as sharing a physical host or filesystem.
  

I think the notion of host groups is a good idea. Even more generally,
it would be nice to be able to define config files at the group level
rather than the current choice between the default config.pl file and
host-specific config files.

For example, this would allow one to define a config file for Linux
vs. Windows machines or for desktops vs. notebooks or for critical
machines vs. less critical machines (I know you can currently do this
in a kludgey fashion using links or by adding perl code to the config
file but it would be nice to have a better way to do it).

This generalization of host groups could easily include the notion of
maximum simultaneous group backups to run.



Re: [BackupPC-users] Upgrading from etch to lenny

2009-06-10 Thread Jim McNamara
On Wed, Jun 10, 2009 at 11:50 AM, Ward... James Ward jew...@torzo.comwrote:

 Have you specifically done a dist-upgrade from etch to lenny?

 On Jun 10, 2009, at 8:14 AM, Jim McNamara wrote:



 On Tue, Jun 9, 2009 at 6:24 PM, Ward... James Ward jew...@torzo.comwrote:

 I have two etch BackupPC servers and two lenny BackupPC servers.  All were
 built at the OS level currently running.  I like the new BackupPC interface
 a lot and would like to upgrade the etch servers to lenny and therefore
 BackupPC.  How smoothly is this likely to go?  Any HOWTOs or READMEs or
 gotchas I can study beforehand?

 Thanks,

 James


 I've had backuppc running on Debian boxes since sarge. Upgrading the OS
 never caused a problem for me. My hosts tend to backup single digit machine
 on LANs, but I haven't even seen a hiccup from doing the dist-upgrades.









 Ward... James Ward
 Tekco Management Group, LLC
 jew...@torzo.com
 520-290-0190x268
 ICQ: 201663408





Yes, I did upgrade from Etch to Lenny on over 20 machines running BackupPC;
none of them had any issues.

By the way, top posting (writing above the previous post) is frowned upon by
most mailing lists. Most mail programs handle it well, but people trying to
read the thread via archives or on older software have trouble when someone
writes above the older text.


Re: [BackupPC-users] Upgrading from etch to lenny

2009-06-10 Thread Tino Schwarze
On Wed, Jun 10, 2009 at 01:30:35PM -0400, Jim McNamara wrote:
 
  Have you specifically done a dist-upgrade from etch to lenny?

[...90 lines snipped...]

 By the way, top posting (writing above the previous post) is frowned upon by
 most mailing lists. Most mail programs handle it well, but people trying to
 read the thread via archives or on older software have trouble when someone
 writes above the older text.

Full-quoting is in about the same league.

SCNR, Tino.

-- 
What we nourish flourishes. - Was wir nähren erblüht.

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



Re: [BackupPC-users] Clusters of hosts

2009-06-10 Thread John Rouillard
On Wed, Jun 10, 2009 at 11:38:30AM -0400, Jeffrey J. Kosowsky wrote:
 Les Mikesell wrote at about 10:23:10 -0500 on Wednesday, June 10, 2009:
   John Rouillard wrote:
   On Sat, May 30, 2009 at 10:16:33PM +0200, Pieter Wuille wrote:
I don't know how common this usage is, but
in our setup we have a lot of backuppc
hosts that are physically located on a
few machines only. It would be nice if it
were possible to allow hosts on different
machines to be backupped simultaneously,
but prevent simultaneous backups(dumps) of
hosts on the same machine.
If you have a way of mapping the host names to a physical machine, you
can use my queing/locking strategy described in:


  
 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg13698.html

   As a feature request, I think it would be nice to have a way to add 
   hosts to groups, then limit how many in each group the scheduler would 
   start at once.  There are several scenarios where this is needed to 
   avoid overloading some common reasource - like a low-bandwidth link as 
   well as sharing a physical host or filesystem.
 
 I think the notion of host groups is a good idea. Even more generally,
 it would be nice to be able to define config files at the group level
 rather than the current choice between the default config.pl file and
 host-specific config files.

I agree with both group definitions, but the hosts should be able
to participate in multiple groups. The group used to define what
gets backed up and the group that defines how the scheduling of
backups occurs should be able to be different. E.g. if you have
two data centers being backed up, you may have the same
types of machines at both sites, but one of the data centers is
remote from the BackupPC server and you only want 4 machines at
the remote site to be backed up simultaneously, to restrict
bandwidth use etc.
 
 For example, this would allow one to define a config file for Linux
 vs. Windows machines or for desktops vs. notebooks or for critical
 machines vs. less critical machines (I know you can currently do this
 in a kludgey fashion using links or by adding perl code to the config
 file but it would be nice to have a better way to do it).

At one point I looked at including a series of Perl files in an
existing per-host config file to build up the default settings.
I need to go back and look at that again.
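
Since BackupPC config files are plain Perl, that layering can be sketched roughly like this (the file names and paths are purely illustrative, not an existing convention):

    # per-host config file for db10: pull in shared group-level settings,
    # then apply host-specific overrides afterwards
    do '/etc/backuppc/groups/database.pl';
    do '/etc/backuppc/groups/remote-site.pl';
    $Conf{FullPeriod} = 13.97;   # example host-specific override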
 
 This generalization of host groups could easily include the notion of
 maximum simultaneous group backups to run.

Only if the host groups and the simultaneous-backup groups overlapped
100%.

E.g. I have two redundant database servers, db10 and db11. Because
of the impact of doing backups on the servers, I never want both
of them at a site to be backed up at the same time. So these
would share the same configuration (or part of a configuration)
and also be a group that would be limited to 1 backup from the
group at a time.

Now add a second (redundant) site/cluster with database
servers db21 and db22.  Now all 4 servers can share a config, but
I have two different sub-groups of servers (db10, db11) and (db21,
db22) that have different rules about the max number of
backups to be done.

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



Re: [BackupPC-users] Clusters of hosts

2009-06-10 Thread Les Mikesell
John Rouillard wrote:
  
 For example, this would allow one to define a config file for Linux
 vs. Windows machines or for desktops vs. notebooks or for critical
 machines vs. less critical machines (I know you can currently do this
 in a kludgey fashion using links or by adding perl code to the config
 file but it would be nice to have a better way to do it).
 
 At one point I looked at including a series of perl files in an
 existing per host config file to build up the default settings.
 I need to go back and look at that again.
  
 This generalization of host groups could easily include the notion of
 maximum simultaneous group backups to run.
 
 Only if the two host and simultaneous backup groups overlapped
 100%.
 
 E.G. I have two redundant database servers db10 and db11. Because
 of the impact of doing backups on the servers, I never want both
 of them at a site to be backed up at the same time. So these
 would share the same configurations (or part of a configuration)
 and also be a group that would be limited to 1 backup from the
 group at a time.
 
 Now add a second (redundant) site/cluster with database
 servers db21 and db22.  Now all 4 servers can share a config, but
 I have two different sub-groups of servers (db11, db12) and (db21
 and db22) that have different rules about the max number of
 backups to be done.

Maybe a 'groups' concept could be added such that you could put a host 
in multiple groups with an order specified and the perl-snippet 
configurations are just evaluated in cascading order (site level, 
group1, group2..., host).  That should be easy to do and would make 
sense if you generally don't overlap the group variables but might get 
confusing if you get carried away.  You'd still have to tie the 
rate-limiting value to the group definition where it was specified, 
though, so a host in multiple groups wouldn't start a backup if it would 
exceed any of the limits that had been picked up.

-- 
   Les Mikesell
 lesmikes...@gmail.com





[BackupPC-users] backup from the internet?

2009-06-10 Thread error403

Hi, I want to make backups for some members of my family and I'm trying to find
a way to use something other than NetBIOS names - dynamic DNS names like
useralias.no-ip.com instead.  Is there any way to do that with BackupPC?  Also, two
computers are in the same remote house, so I might need to do some port
redirection on their router, but I would need to know how to use an alternate
port.  I'm thinking of installing/using some sftp server software on their
computer.

Thanks!



[BackupPC-users] send emails to customers AND admin?

2009-06-10 Thread error403

Hi, I'm trying to find a way to send an email to the personal address of the
people whose backups I'm doing.  I tried to search, but the terms "email" and
"message" are so general that it gives me almost all the posts on the forum!  :?



[BackupPC-users] backup the backuppc pool with bacula

2009-06-10 Thread jhaglund

There are several implied references here to likely problems with rsync and how
they are all deal breakers.  I've been trying to find a solution to this
problem for weeks and have not found any direct documentation or evidence to
support what is being said here.  I'm not skeptical, though; I just need to
understand what's going on.

Rsync is the only option for me, and I'm rather confused by the other solutions 
floated in this and other threads.  On-site backup is precarious and viable 
only in a datacenter type situation imho.  What about the fire scenario?  
Getting the data somewhere else is crucial, and in my case I am limited to 
rsync through rsh.  I'm running rsync 3.0.6 but the server is 2.6.x.  I have ~ 
1.9 files found by rsync and it always fails on some level.  I use -aH but it 
randomly exits with an unknown error during remote comparison or the initial 
transfers.  During the transfer phase it says it's sending data, but nothing
shows up on the server.  The server admins are not aware of any incompatibility
with their filesystem, and searching the internet turns up nothing on this problem,
which brings me back to the initial question.

What does one use if not rsync?  There's no way to justify or implement backing 
up the entire pool every time without a lot of bandwidth, which I don't have.  
What exactly is rsync's problem?  Do I really need to shut down backuppc every 
time I want to attempt a sync or would syncing to a local disk and rsync'ing 
from that be sufficient?  I'd really like to know the specifics of the hardlink 
and inode problem talked about in this thread like how to find out how many I 
have and what the threshold is for Trouble and how the rest of the community 
deals with getting pools of 100+GB offsite in less than a week of transfer time.

Lots of info requests, I know, but I really appreciate the help.  My ISP and 
all the experts I've tapped are completely stumped on this one.


Holger Parplies wrote:
 Hi,
 
 Rob Terhaar wrote on 2009-05-21 13:01:58 -0400 [Re: [BackupPC-users] backup 
 the backuppc pool with bacula]:
 
  [...]
  Try Rsync v3, it has much lower memory requirements since it builds
  the file list incrementally.
  
 
 by all means, try it. But it's not the file list that is the specific problem
 of BackupPC.
 
 
  I used it at one company to do nightly
  syncs of their ~4TB backuppc pool offsite.
  
 
 It's still a matter of file count (used inodes, to be exact), not pool storage
 size. rsync V3 may perform significantly better if you have many links to
 comparatively few inodes, but if you have many inodes (for some unknown value
 of many), I am still convinced that you will hit a problem. Feel free to
 convince me otherwise, but works for me is unlikely to succeed ;-).
 
 Regards,
 Holger
 


Re: [BackupPC-users] Clusters of hosts

2009-06-10 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 11:38:30 -0400 on Wednesday, June 10, 2009:
  Les Mikesell wrote at about 10:23:10 -0500 on Wednesday, June 10, 2009:
John Rouillard wrote:
 On Sat, May 30, 2009 at 10:16:33PM +0200, Pieter Wuille wrote:
 I don't know how common this usage is, but in our setup we have a lot 
  of
 backuppc hosts that are physically located on a few machines only. It
 would be nice if it were possible to allow hosts on different machines 
  to
 be backupped simultaneously, but prevent simultaneous backups(dumps) of
 hosts on the same machine.

 Any thoughts?
 
 If you have a way of mapping the host names to a physical machine, you
 can use my queing/locking strategy described in:
 
 
   
  http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg13698.html
 
 Create one queue/semaphore per physical machine and have the
 $Conf{DumpPreUserCmd} command exit with an error if it can't get a
 slot/lock (also you will have to set $Conf{UserCmdCheckStatus} = 1;).

As a feature request, I think it would be nice to have a way to add 
hosts to groups, then limit how many in each group the scheduler would 
start at once.  There are several scenarios where this is needed to 
avoid overloading some common reasource - like a low-bandwidth link as 
well as sharing a physical host or filesystem.

  
  I think the notion of host groups is a good idea. Even more generally,
  it would be nice to be able to define config files at the group level
  rather than the current choice between the default config.pl file and
  host-specific config files.
  
  For example, this would allow one to define a config file for Linux
  vs. Windows machines or for desktops vs. notebooks or for critical
  machines vs. less critical machines (I know you can currently do this
  in a kludgey fashion using links or by adding perl code to the config
  file but it would be nice to have a better way to do it).
  
  This generalization of host groups could easily include the notion of
  maximum simultaneous group backups to run.
  

It would also be nice to use the group notion to allow specifying a
different topdir for different groups. This could be useful
in cases where there is not much overlap (i.e. pooling potential)
between groups and where there might be reasons to split the pool
between drives. This would presumably be better than the kludge recently
discussed on this list of running multiple instances of BackupPC on
one server.

It would further be beneficial to carry the group distinction into the
web interface so that you could view results by group, which could be
helpful if you have *many* machines or if you want to subtotal various
stats by group.




Re: [BackupPC-users] backup the backuppc pool with bacula

2009-06-10 Thread Bowie Bailey
jhaglund wrote:
 I'd really like to know the specifics of the hardlink and inode problem 
 talked about in this thread like how to find out how many I have and what the 
 threshold is for Trouble and how the rest of the community deals with getting 
 pools of 100+GB offsite in less than a week of transfer time.
   


I don't know the details on the problem with rsyncing hardlinks.  I just 
know that rsync cannot deal with the number of hardlinks generated by 
BackupPC.

As to how I get my 750 GB of backups offsite...  sneakernet.  :)  I have
a 3-member RAID 1 array with the third member being a removable drive
enclosure.  When I need an offsite backup, I pull this drive, deliver it
to a secure storage location, and replace it with a new drive.  It only
takes about 3 hours for the new drive to sync up with the rest of the array.

-- 
Bowie



Re: [BackupPC-users] backup the backuppc pool with bacula

2009-06-10 Thread Les Mikesell
jhaglund wrote:
 There are several implied references here to likely problems with rsync and 
 how they are all deal breakers.  I've been trying to find a solution to this 
 problem for weeks and have not found any direct documentation or evidence to 
 support what is being said here.  I'm not skeptical, though, I just need to 
 understand what's going on.

It boils down to how much RAM rsync needs to handle all the directory 
entries and hardlinks and the amount of time it takes to wade through 
them.

 Rsync is the only option for me, and I'm rather confused by the other 
 solutions floated in this and other threads.  On-site backup is precarious 
 and viable only in a datacenter type situation imho.  What about the fire 
 scenario?  Getting the data somewhere else is crucial, and in my case I am 
 limited to rsync through rsh.  I'm running rsync 3.0.6 but the server is 
 2.6.x.  I have ~ 1.9 files found by rsync and it always fails on some level.  
 I use -aH but it randomly exits with an unknown error during remote 
 comparison or the initial transfers.  During the transfer phase it says its 
 sending data, but nothing shows up on the server.  The server admins are not 
 aware of any incompatibility with their filesystem and the internet does not 
 seem to deal with this problem, which brings me back to the initial question.

3.x on both ends might help. It claims not to need the whole directory
in memory at once - but you'll still need to build a table mapping all
the inodes with more than one link (essentially everything) to
re-create the hardlinks, so you have to throw a lot of RAM at it anyway.
You shouldn't actually crash unless you run out of both RAM and swap,
but if you push the system into swap you might as well quit anyway.

Note that if you can do rsync over ssh initiated from the other site, 
you could just run the backuppc server there, or a separate independent 
copy.  Unless you have a lot of duplication among the on-site servers 
there wouldn't be a huge difference in traffic after the initial copy 
and you don't have a single point of failure.

 What does one use if not rsync?

The main alternative is some form of image-copy of the archive 
partition.  This is only practical if you have physical access to the 
server or very fast network connections.

 There's no way to justify or implement backing up the entire pool every time 
 without a lot of bandwidth, which I don't have.  What exactly is rsync's 
 problem?  Do I really need to shut down backuppc every time I want to attempt 
 a sync or would syncing to a local disk and rsync'ing from that be 
 sufficient?  I'd really like to know the specifics of the hardlink and inode 
 problem talked about in this thread like how to find out how many I have and 
 what the threshold is for Trouble and how the rest of the community deals 
 with getting pools of 100+GB offsite in less than a week of transfer time.

100 Gigs might be feasible - it depends more on the file sizes and how 
many directory entries you have, though.  And you might have to make the 
first copy on-site so subsequently you only have to transfer the changes.

 Lots of info requests, I know, but I really appreciate the help.  My ISP and 
 all the experts I've tapped are completely stumped on this one.

The root of the problem is that rsync has to include the entire archive 
in one pass to map the matching hardlinks - and it has to be able to 
hold the directory and inode table in RAM to do it at a usable speed. 
The other limiting issue is that the disk heads have to move around a 
lot to read and re-create all those directory entries and update the 
inode link counts.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] backup the backuppc pool with bacula

2009-06-10 Thread Jon Forrest
jhaglund wrote:

 What does one use if not rsync? 

In an admittedly non-backuppc environment I've been
experimenting with using 'rsync -W' (this means
don't use the rsync algorithm) to see if
problems similar to the ones you describe go away.
I'm still not sure of the result.

Using rsync with the -W argument means that
complete files will be transferred instead
of changed pieces. In an environment where
files tend to change completely, or not at
all, it makes sense to try this because it
means that rsync itself has less to do.

I read somewhere that the rsync algorithm is
intended for environments where disk bandwidth
is greater than network bandwidth. That's a good
way to think about it.

Cordially,

-- 
Jon Forrest
Research Computing Support
College of Chemistry
173 Tan Hall
University of California Berkeley
Berkeley, CA
94720-1460
510-643-1032
jlforr...@berkeley.edu



[BackupPC-users] Backing up a USB-disk?

2009-06-10 Thread Magnus Larsson
Hi!

I have BackupPC running on my home LAN, backing up 5 different computers,
and working very well. However, I have a USB disk where I keep some
stuff that I would like to back up as well. Is there a way of doing this
with BackupPC? It is not always turned on and plugged in, so setting its
mount point as one directory to back up during the normal backup doesn't work
(it shows as an empty folder when I try, if it has not been on for some time
during the backup process). And as I gather, an incremental backup while
it is disconnected means it shows as empty, even from only one run, right?
So finding the backup might be tricky.

What I would like is to have it as a separate host, and then do manual
backups when I want to. Can I do this even though the host it is
connected to already is a backuppc host? This would mean defining one
host as a subdir of another host, in the config.pl. With the same host
name and ip. 

Anyone got advice on this? Grateful for any help. 


Magnus Larsson




Re: [BackupPC-users] Backing up a USB-disk?

2009-06-10 Thread Tino Schwarze
Hi Magnus,

On Wed, Jun 10, 2009 at 09:05:26PM +, Magnus Larsson wrote:

 What I would like is to have it as a separate host, and then do manual
 backups when I want to. Can I do this even though the host it is
 connected to already is a backuppc host? This would mean defining one
 host as a subdir of another host, in the config.pl. With the same host
 name and ip. 

You may simply configure another host (name it, for example,
myserver-usbdisk), then set $Conf{ClientNameAlias}.

See
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_clientnamealias_
See also $Conf{BackupsDisable} on how to disable automatic backup of a
particular host:
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_backupsdisable_
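
Roughly, the per-host file for that extra entry could look like this (the share path and host names are made-up examples; adjust to wherever the disk is mounted):

    # host "myserver-usbdisk"
    $Conf{ClientNameAlias} = 'myserver';        # the machine the USB disk is attached to
    $Conf{XferMethod}      = 'rsync';           # assuming an rsync transfer
    $Conf{RsyncShareName}  = ['/mnt/usbdisk'];  # hypothetical mount point of the USB disk
    $Conf{BackupsDisable}  = 1;                 # skip automatic backups; start them manually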

HTH,

Tino.

-- 
What we nourish flourishes. - Was wir nähren erblüht.

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



Re: [BackupPC-users] Backing up a USB-disk?

2009-06-10 Thread Holger Parplies
Hi,

Tino Schwarze wrote on 2009-06-10 23:20:29 +0200 [Re: [BackupPC-users] Backing 
up a USB-disk?]:
 On Wed, Jun 10, 2009 at 09:05:26PM +, Magnus Larsson wrote:
 
  What I would like is to have it as a separate host, and then do manual
  backups when I want to. Can I do this even though the host it is
  connected to already is a backuppc host?

Yes.

  This would mean defining one host as a subdir of another host, in the
  config.pl. With the same host name and ip. 

No. Instead:

 You may simple configure another host (name it, for example,
 myserver-usbdisk), then set $Config{ClientNamAlias}.
 
 See
 http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_clientnamealias_
 See also $Conf{BackupsDisable} on how to disable automatic backup of a
 particular host:
 http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_backupsdisable_

note that you could even set the PingCmd to a shell script that checks
whether the disk is mounted (e.g. '[ -f /path/to/usbdisk/.thisistheusbdisk ]'
if you have a file '.thisistheusbdisk' in the root of your USB disk's file
system - please ask if you need more details). Then you can even keep
automatic backups running (just in case you forget manual backups and the
disk happens to be plugged in at wakeup time).
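
Following that suggestion, the config entry might be as simple as (the marker-file path is the example from above):

    $Conf{PingCmd} = '[ -f /path/to/usbdisk/.thisistheusbdisk ]';
    # exits 0 only while the disk is mounted, so BackupPC treats the
    # "host" as unreachable whenever the disk is unplugged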

You might also want to set EMailNotifyOldBackupDays or EMailNotifyMinDays.

Regards,
Holger



[BackupPC-users] Lesson in netiquette (was Re: Upgrading from etch to lenny, was backups not working - backupdisable picking up incorrectly)

2009-06-10 Thread Chris Robertson
Tino Schwarze wrote:
 On Wed, Jun 10, 2009 at 01:30:35PM -0400, Jim McNamara wrote:
   
 Have you specifically done a dist-upgrade from etch to lenny?
   

 [...90 lines snipped...]

   
 By the way, top posting (writing above the previous post) is frowned upon by
 most mailing lists. Most mail programs handle it well, but people trying to
 read the thread via archives or on older software have trouble when someone
 writes above the older text.
 

 Full-quoting is about the same league.
   

Heh, not to mention thread hijacking...

(http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg14800.html)

Live and learn.

 SCNR, Tino.
   

Chris



Re: [BackupPC-users] send emails to customers AND admin?

2009-06-10 Thread Chris Robertson
error403 wrote:
 Hi, I'm trying to find  a way to send an email to the personal email of the 
 people I'm doing their backups for.  I tried to search but the terms email 
 and message are so general it gives me almost all the posts on the forum!  :?

Something like http://linuxgazette.net/issue72/teo.html?

Chris



Re: [BackupPC-users] backup from the internet?

2009-06-10 Thread Les Mikesell
Chris Robertson wrote:
 error403 wrote:
 Hi, I want to make backups for some members of my family and I'm trying to 
 find a way to use something else than netbios names, but rather 
 dynamicdomains like useralias.no-ip.com .  Is there any way to do that with 
 backuppc?
 
 Define the host as useralias.no-ip.com instead of NETBIOSHostName.
 
   Also, two computers are in the same remote house so I might need to do 
 some port redirection on their router but I would need to know how to use an 
 alternate port.
 
 This is entirely dependent on the NAT device...  The good news is you 
 could forward (for example) port 1418 on the NAT device to port 22 on 
 computer A and port 1419 on the NAT device to port 22 on computer B.
 
   I'm thinking of installing/using some sftp server sofware on their 
 computer.
   
 
 Better would be an rsyncd service, as that would allow you to only 
 transfer changes.

If they are Unix/Linux/Mac boxes you can use rsync over ssh.  On Windows
you can use ssh port forwarding to connect to rsync in daemon mode.
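
For the rsync-over-ssh case, a per-host sketch combining the dynamic DNS name with the forwarded port from Chris's example (router port 1418 forwarded to port 22 on computer A) might look roughly like this - the login and host name are illustrative:

    $Conf{ClientNameAlias}       = 'useralias.no-ip.com';
    $Conf{XferMethod}            = 'rsync';
    $Conf{RsyncClientCmd}        = '$sshPath -p 1418 -q -x -l root $host $rsyncPath $argList+';
    $Conf{RsyncClientRestoreCmd} = '$sshPath -p 1418 -q -x -l root $host $rsyncPath $argList+';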

-- 
   Les Mikesell
lesmikes...@gmail.com






Re: [BackupPC-users] backup from the internet?

2009-06-10 Thread Chris Robertson
Les Mikesell wrote:
 Chris Robertson wrote:
   
 error403 wrote:
 
    I'm thinking of installing/using some sftp server software on their 
  computer.
   
   
 Better would be an rsyncd service, as that would allow you to only 
 transfer changes.
 

 If they are unix/linux/mac boxes you can use rsync over ssh.  On windows 
 you can use ssh port forwarding to connect to rsync in daemon mode.
   

Indeed.

Given the mention of NetBios in the original message, I made the 
assumption that Windows clients were (exclusively) involved.  Thanks for 
clarifying the other available options.

Chris



Re: [BackupPC-users] Backing up a USB-disk?

2009-06-10 Thread Filipe Brandenburger
Hi,

On Wed, Jun 10, 2009 at 17:57, Holger Parplieswb...@parplies.de wrote:
 note that you could even set the PingCmd to a shell script that checks
 whether the disk is mounted (eg. '[ -f /path/to/usbdisk/.thisistheusbdisk ]'
 if you have a file '.thisistheusbdisk' in the root of your USB disk's file
 system - please ask if you need more details).

You can use the mountpoint command (present in RHEL 5 or SuSE 10) to
test if that path is a mount point for some volume.

mountpoint -q /path/to/usbdisk
(will set $? to 0 if it's mounted, non-zero otherwise)

No need to create a .thisistheusbdisk file on your USB disk then.
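
If you want to plug that straight into BackupPC, a per-host sketch (both paths
are hypothetical; the point, as in Holger's suggestion, is simply that a
non-zero exit status makes BackupPC treat the host as unreachable and skip the
backup):

    $Conf{PingCmd} = '/bin/mountpoint -q /path/to/usbdisk';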

HTH,
Filipe



Re: [BackupPC-users] backup the backuppc pool with bacula

2009-06-10 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2009-06-10 15:45:22 -0500 [Re: [BackupPC-users] backup 
the backuppc pool with bacula]:
 jhaglund wrote:
  There are several implied references here to likely problems with rsync
  and how they are all deal breakers. [...] I just need to understand what's
  going on.
 
 It boils down to how much RAM rsync needs to handle all the directory 
 entries and hardlinks and the amount of time it takes to wade through 
 them.

... where the important part is the hardlinks (see below), because that simply
can't be optimized; the file list - while probably consuming more memory in
total - can be and has been optimized in 3.0 (probably meaning protocol version
30, i.e. rsync 3.x on both sides).

  I'm running rsync 3.0.6 but the server is 2.6.x.  I have ~ 1.9 million files
  found by rsync and it always fails on some level. [...]
 
 3.x on both ends might help. It claims to not need the whole directory 
 in memory at once - but you'll still need to build a table to map all 
 the inodes with more than one link  (essentially everything) to 
 re-create the hardlinks so you have to throw a lot of RAM at it anyway. 

Please read the above carefully. It's not about so many hardlinks (meaning
many links to one pool file), it's about so many files that have more than one
link - whether it's 2 or 32000 is unimportant (except for the size of the
complete file list, which additional hardlinks will make larger). In normal
situations, you have a file with more than one link every now and then. rsync
expects to have to handle a few of them. With a BackupPC pool it's practically
every single file, millions of them or more in some cases. And for each and
every one of them, rsync needs to store (at least) the inode number and the
full path (probably relative to the transfer root) to one link (probably the
first one it encounters, not necessarily the shortest one). Count for yourself:

cpool/1/2/3/12345678911234567892123456789312
pc/foo/0/f%2fhome/fuser/ffoo

pc/hostname/123/f%2fexport%2fhome/fwopp/f.gconf/fapps/fgnome-screensaver/f%25gconf.xml

Round up to a multiple of 8, add maybe 4 bytes of malloc overhead, 4 bytes for
a pointer, and factor in that we're simply not used anymore to optimizing
storage requirements at the byte level.
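
A rough back-of-envelope check along those lines (a sketch; the pool path and
the ~64-bytes-per-entry figure are assumptions, not measurements):

    # count pool files with more than one link - on a BackupPC pool, nearly all of them
    find /var/lib/backuppc/cpool -type f -links +1 | wc -l
    # e.g. 2,000,000 such entries x ~64 bytes of bookkeeping each =~ 128 MB
    # for rsync's hardlink table alone, on top of the file list itself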


You're probably going to say, "why not simply write that information to
disk/database?".

Reason 1: That's a lot of temporary space you'll need. If it doesn't fit in
  memory, we're talking about GB, not a few KB.
Reason 2: Access to this table will be in random order. It's not a nice linear
  scan. Chances are, you'll need to read from disk almost every time.
  No cache is going to speed this up much, because no cache will be
  large enough or smart enough to know when which information will be
  needed again. The same applies to a database.
Reason 3: rsync is a general purpose tool. It can't determine ahead of time
  how many hardlink entries it will need to handle. It could only
  react to running out of memory. Except for BackupPC pools, it would
  probably *never* need disk storage.

 You shouldn't actually crash unless you run out of both ram and swap, 
 but if you push the system into swap you might as well quit anyway.

This is the same as reason 2. You should realize that disk is not slightly
slower than RAM, it's many orders of magnitude slower. It won't take 2 hours
instead of 1 hour, it will take 10,000 hours (or more) instead of 1. That is
over one year. Swap works well, as long as your working set fits into RAM.
That is not the case here. [In reality, it might not be quite so dramatic,
but the point is: you don't know. It simply might take a year. Or 10.
Supposing your disks last that long ;-]

  What does one use if not rsync?
 
 The main alternative is some form of image-copy of the archive 
 partition.  This is only practical if you have physical access to the 
 server or very fast network connections.

"Physical access" probably meaning that you can transport your copy to and
from the server. "Never underestimate the bandwidth of a station wagon full
of tapes hurtling down the highway." (Andrew S. Tanenbaum)
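
An image-level copy could be as simple as (a sketch with hypothetical device
and host names, run with BackupPC stopped and the pool filesystem unmounted or
snapshotted):

    dd if=/dev/vg0/backuppc bs=1M | gzip -c | ssh offsite 'cat > backuppc-pool.img.gz'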

  Do I really need to shut down backuppc every time I want to attempt a
  sync or would syncing to a local disk and rsync'ing from that be
  sufficient?

Try something like 'time find /var/lib/backuppc -ls > /dev/null' to get a
feeling for just how long only traversing the BackupPC pool and doing a stat()
on each file really takes. Then remember that syncing to a local disk is in
no way simpler than syncing to a remote disk - the bandwidth for copying is
simply higher, so that is the only place you get a speedup.

From a different perspective: either it's going to be fast enough that
shutting down BackupPC won't hurt, or it's going to be *necessary* to shut
down BackupPC, because having it modify the file system would hurt.

Just imagine the pc/ directory links on your copy would point to a 

Re: [BackupPC-users] Backing up a USB-disk?

2009-06-10 Thread Adam Goryachev
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Holger Parplies wrote:
 Hi,

 Tino Schwarze wrote on 2009-06-10 23:20:29 +0200 [Re: [BackupPC-users]
Backing up a USB-disk?]:
 On Wed, Jun 10, 2009 at 09:05:26PM +, Magnus Larsson wrote:
 You may simply configure another host (name it, for example,
 myserver-usbdisk), then set $Conf{ClientNameAlias}.

 See

http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_clientnamealias_
 See also $Conf{BackupsDisable} on how to disable automatic backup of a
 particular host:

http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_backupsdisable_

 note that you could even set the PingCmd to a shell script that checks
 whether the disk is mounted (eg. '[ -f
/path/to/usbdisk/.thisistheusbdisk ]'
 if you have a file '.thisistheusbdisk' in the root of your USB disk's file
 system - please ask if you need more details). Then you can even keep
 automatic backups running (just in case you forget manual backups and the
 disk happens to be plugged in at wakeup time).
Or rely on the default config, which treats an empty share backup as a
failure. That way you can let BackupPC attempt a backup automatically on
its regular schedule: if the USB disk is not connected the backup fails,
and if it is connected you get a new backup.
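
(I assume the setting Adam is referring to is $Conf{BackupZeroFilesIsFatal},
which is on by default:

    $Conf{BackupZeroFilesIsFatal} = 1;

but treat that as my reading of his hint, not something he stated.)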

Regards,
Adam
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkowcIcACgkQGyoxogrTyiWQ8gCgqqpQXZFu7mDI0eMLHS37swo3
VXYAoIgjhjQPQj8aXBw1jeDjzxaURuTm
=d/A9
-END PGP SIGNATURE-




Re: [BackupPC-users] Backing up a USB-disk?

2009-06-10 Thread Jeffrey J. Kosowsky
Tino Schwarze wrote at about 23:20:29 +0200 on Wednesday, June 10, 2009:
  Hi Magnus,
  
  On Wed, Jun 10, 2009 at 09:05:26PM +, Magnus Larsson wrote:
  
   What I would like is to have it as a separate host, and then do manual
   backups when I want to. Can I do this even though the host it is
   connected to already is a backuppc host? This would mean defining one
   host as a subdir of another host, in the config.pl. With the same host
   name and ip. 
  
  You may simply configure another host (name it, for example,
  myserver-usbdisk), then set $Conf{ClientNameAlias}.
  

To be complete you would probably need to also do the following:
1. On the original (non-alias) version of the host, exclude the mount
   point for the USB disk

2. On the alias version, set the share name to start the backup at the
   mount point

3. Set DumpPreUserCmd to test whether the usb disk is mounted and
   return a non-zero exit status if not. This makes sure that the
   alternate host is only backed up when the disk is mounted.
   Also set:
$Conf{UserCmdCheckStatus} = 1;
   (a combined config sketch for all three steps follows)
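
Pulled together as a per-host config sketch (paths and host names are
placeholders, not tested values):

    # myserver.pl (original host): keep the USB mount point out of its backups
    $Conf{BackupFilesExclude} = ['/path/to/usbdisk'];

    # myserver-usbdisk.pl (alias host): back up only the USB disk
    $Conf{ClientNameAlias}    = 'myserver';
    $Conf{RsyncShareName}     = ['/path/to/usbdisk'];
    # only proceed if the disk is actually mounted on the client
    $Conf{DumpPreUserCmd}     = '$sshPath -q -x -l root $host mountpoint -q /path/to/usbdisk';
    $Conf{UserCmdCheckStatus} = 1;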



Re: [BackupPC-users] backup the backuppc pool with bacula

2009-06-10 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 04:22:03 +0200 on Thursday, June 11, 2009:
  Hi,
  
  Les Mikesell wrote on 2009-06-10 15:45:22 -0500 [Re: [BackupPC-users] backup 
  the backuppc pool with bacula]:
   jhaglund wrote:
There are several implied references here to likely problems with rsync
and how they are all deal breakers. [...] I just need to understand 
what's
going on.
   
   It boils down to how much RAM rsync needs to handle all the directory 
   entries and hardlinks and the amount of time it takes to wade through 
   them.
  
  ... where the important part is the hardlinks (see below), because that 
  simply
  can't be optimized; the file list - while probably consuming more memory in
  total - can be and has been optimized in 3.0 (probably meaning protocol
  version 30, i.e. rsync 3.x on both sides).
  

Holger, I may be wrong here, but I think that you get the more
efficient memory usage as long as both client & server are version >= 3.0,
even if the protocol version is set to < 30 (which is true for BackupPC,
where it falls back to protocol version 28). 

I think protocol 30 has more to do with the change from md4sums to
md5sums, plus the ability to have longer file names (255 characters, I
think), plus other protocol extensions. But I'm not an expert, and my
understanding is that the protocols themselves are not well documented
other than by reading the source code.
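
For what it's worth, you can check what each end supports with:

    rsync --version

which prints the protocol version right next to the release number, so it is
easy to see what the two sides can negotiate.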

I'm running rsync 3.0.6 but the server is 2.6.x.  I have ~ 1.9 million files
found by rsync and it always fails on some level. [...]
   
   3.x on both ends might help. It claims to not need the whole directory 
   in memory at once - but you'll still need to build a table to map all 
   the inodes with more than one link  (essentially everything) to 
   re-create the hardlinks so you have to throw a lot of RAM at it anyway. 
  
  Please read the above carefully. It's not about so many hardlinks (meaning
  many links to one pool file), it's about so many files that have more than 
  one
  link - whether it's 2 or 32000 is unimportant (except for the size of the
  complete file list, which additional hardlinks will make larger). In normal
  situations, you have a file with more than one link every now and then. rsync
  expects to have to handle a few of them. With a BackupPC pool it's 
  practically
  every single file, millions of them or more in some cases. And for each and
  every one of them, rsync needs to store (at least) the inode number and the
  full path (probably relative to the transfer root) to one link (probably the
  first one it encounters, not necessarily the shortest one). Count for 
  yourself:
  
   cpool/1/2/3/12345678911234567892123456789312
   pc/foo/0/f%2fhome/fuser/ffoo
  
  pc/hostname/123/f%2fexport%2fhome/fwopp/f.gconf/fapps/fgnome-screensaver/f%25gconf.xml
  
  Round up to a multiple of 8, add maybe 4 bytes of malloc overhead, 4 bytes 
  for
  a pointer, and factor in that we're simply not used anymore to optimizing
  storage requirements at the byte level.
  
  
  You're probably going to say, "why not simply write that information to
  disk/database?".
  
  Reason 1: That's a lot of temporary space you'll need. If it doesn't fit in
memory, we're talking about GB, not a few KB.
  Reason 2: Access to this table will be in random order. It's not a nice 
  linear
scan. Chances are, you'll need to read from disk almost every time.
No cache is going to speed this up much, because no cache will be
large enough or smart enough to know when which information will be
needed again. The same applies to a database.
  Reason 3: rsync is a general purpose tool. It can't determine ahead of time
how many hardlink entries it will need to handle. It could only
react to running out of memory. Except for BackupPC pools, it would
probably *never* need disk storage.
  
   You shouldn't actually crash unless you run out of both ram and swap, 
   but if you push the system into swap you might as well quit anyway.
  
  This is the same as reason 2. You should realize that disk is not slightly
  slower than RAM, it's many orders of magnitude slower. It won't take 2 hours
  instead of 1 hour, it will take 10,000 hours (or more) instead of 1. That is
  over one year. Swap works well, as long as your working set fits into RAM.
  That is not the case here. [In reality, it might not be quite so dramatic,
  but the point is: you don't know. It simply might take a year. Or 10.
  Supposing your disks last that long ;-]
  
What does one use if not rsync?
   
   The main alternative is some form of image-copy of the archive 
   partition.  This is only practical if you have physical access to the 
   server or very fast network connections.
  
  "Physical access" probably meaning that you can transport your copy to and
  from the server. "Never underestimate the bandwidth of a station wagon full
  of tapes hurtling down the highway." 

Re: [BackupPC-users] Backing up a USB-disk?

2009-06-10 Thread Magnus Larsson
Great, thanks!!

That part on checking whether the disk is mounted - could you give me
some more details on that? Never used such a script here. 
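
In case it helps, one possible shape for such a script, sticking with Holger's
marker-file idea (the paths and file name below are placeholders):

    #!/bin/sh
    # /usr/local/bin/usb-disk-present.sh - used as BackupPC's PingCmd
    # exit 0 only if the marker file exists, i.e. the USB disk is mounted
    [ -f /path/to/usbdisk/.thisistheusbdisk ] || exit 1
    exit 0

and in the per-host config:

    $Conf{PingCmd} = '/usr/local/bin/usb-disk-present.sh';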

Magnus


ons 2009-06-10 klockan 23:57 +0200 skrev Holger Parplies:
 Hi,
 
 Tino Schwarze wrote on 2009-06-10 23:20:29 +0200 [Re: [BackupPC-users] 
 Backing up a USB-disk?]:
  On Wed, Jun 10, 2009 at 09:05:26PM +, Magnus Larsson wrote:
  
   What I would like is to have it as a separate host, and then do manual
   backups when I want to. Can I do this even though the host it is
   connected to already is a backuppc host?
 
 Yes.
 
   This would mean defining one host as a subdir of another host, in the
   config.pl. With the same host name and ip. 
 
 No. Instead:
 
  You may simply configure another host (name it, for example,
  myserver-usbdisk), then set $Conf{ClientNameAlias}.
  
  See
  http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_clientnamealias_
  See also $Conf{BackupsDisable} on how to disable automatic backup of a
  particular host:
  http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_backupsdisable_
 
 note that you could even set the PingCmd to a shell script that checks
 whether the disk is mounted (eg. '[ -f /path/to/usbdisk/.thisistheusbdisk ]'
 if you have a file '.thisistheusbdisk' in the root of your USB disk's file
 system - please ask if you need more details). Then you can even keep
 automatic backups running (just in case you forget manual backups and the
 disk happens to be plugged in at wakeup time).
 
 You might also want to set EMailNotifyOldBackupDays or EMailNotifyMinDays.
 
 Regards,
 Holger
 




[BackupPC-users] backup of novel servers

2009-06-10 Thread Benedict simon

Dear All,

I am using BackupPC to successfully back up a Linux client, and it is working fine.

I also have two Novell NetWare servers which I would like to back up with
BackupPC.

Does BackupPC support backing up Novell NetWare servers?

I have a 4.11 and a 5 server.

thanks and regards


-- 
Network ADMIN
-
KUWAIT MUNICIPALITY:


