Re: [BackupPC-users] Log says Pool is 0.00GB, but pool is big and growing

2009-08-18 Thread Christian G. von Busse
Hi,

Saturday, August 15, 2009, 3:42:53 PM, you wrote:

  I'm experiencing some strange difficulties with BackupPC
  (3.1.0-3ubuntu1 on Ubuntu 8.04 LTS). It appears that BackupPC is not
  recognizing that it has already put files into the pool. The nightly log
  shows a message saying the pool is 0 GB, consisting of 0 directories,
  whereas the pool actually exists - it's currently 195,120 MB and growing
  day by day, cluttering my hard disk.
  Any idea what could be the issue/what I could try to resolve this? If
  you need any information from my config, please let me know.
 The most likely cause is that IO::Dirent fails on certain file systems.
 The other likely cause is that you (incorrectly) moved your TopDir
 (resulting in pooling not working). Did you? How?

I don't think I did, although I might have done so accidentally during my
initial attempts to install backuppc. What I did do, though, is delete
the pool once - after it grew much too big.

But since no other efforts have led to success yet, I am more than
willing to try fixing this. So if we just assume that I moved my
TopDir incorrectly - how could I fix it?

Thanks, Christian
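(As an aside, one quick way to see whether pooling is working at all is to look
at hard-link counts under the pc/ tree: a pooled file has a link count of at
least 2, one link in pc/ and one in the pool. A minimal Perl sketch, assuming
the Debian/Ubuntu default TopDir of /var/lib/backuppc; the path is a placeholder:)

    #!/usr/bin/perl
    # Count pooled (nlink >= 2) vs. unpooled (nlink == 1) files under pc/.
    # A tree full of nlink == 1 files suggests pooling is broken, e.g. after
    # moving TopDir onto a different filesystem from the pool.
    use strict;
    use warnings;
    use File::Find;

    my $pcdir = '/var/lib/backuppc/pc';   # placeholder - adjust to your TopDir
    my ($pooled, $unpooled) = (0, 0);

    find(sub {
        my @st = lstat $_;
        return unless -f _;                # regular files only
        $st[3] >= 2 ? $pooled++ : $unpooled++;
    }, $pcdir);

    printf "pooled (nlink>=2):   %d\nunpooled (nlink==1): %d\n", $pooled, $unpooled;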




Re: [BackupPC-users] rsyncd on Vista 64-bit cygwin vs SUA

2009-08-18 Thread Erik Hjertén
Koen Linders wrote:
 I don't know what you mean with SUA environment, but I use Deltacopy in
 Vista 64 bit via rsyncd.

 http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp

 Works without a problem atm. Easy to use and you can copy the files to other
 computers and easily register the service.

   
I second that. Works like a charm.

Cheers
/Erik



Re: [BackupPC-users] rsyncd on Vista 64-bit cygwin vs SUA

2009-08-18 Thread Bernhard Ott
Koen Linders wrote:
 I don't know what you mean with SUA environment, but I use Deltacopy in
^^
It's Microsoft Subsystem for UNIX-based Applications:
http://technet.microsoft.com/en-us/library/cc779522(WS.10).aspx

Regards,
Bernhard




[BackupPC-users] New user- loads of questions

2009-08-18 Thread Nigel Kendrick
Morning,
 
I have just started to play with backuppc and am making good strides - local
(SMB) backups are working fine and I am just about to have a look at
rsync-based backups from a couple of local Linux servers before moving on to
SMB/rsync via SSH and some VPNs.
 
I am diligently RTFM-ing, supplemented with the stuff found via Google -
which is a bit overwhelming, so I'd appreciate some short cuts from anyone
with a bit more real-world experience if possible:
 
1) I presume(?) SMB-based backups cannot do block-difference-level copies
like rsync? We have a number of remote (over VPN) Windows servers and I'd
like to backup their MSSQL database dumps - they are around 700MB at the
moment and I presume via SMB the whole lot will get transferred every time?
 
2) I have seen a number of guides for cwrsync on Windows-based PCs. Any
votes on the best one and the best place to read up on this? I presume that
since we'd be backing up via VPN, we could run rsync directly rather than
via an SSH tunnel?
 
3) As the remote sites are linked via VPN, I could mount the remote shares
to the local backup server and use rsync 'directly' - any pros/cons doing
things this way (speed, reliability etc?), or is an rsync server on the
remote servers a better approach?
 
4) I am running the backup server on CentOS 5.3 and installed backuppc from
the Centos RPM. Ideally I'd like to run the app as the normal 'apache' user
- I read up on a few generic notes about doing this and got to a point where
backuppc wouldn't start properly as it couldn't create the LOG file. I then
went round in circles looking at file permissions before putting things back
the way they were in order to do some more learning. Is there a
simple-to-follow guide for setting up backuppc to not use mod_perl - I have
read the docs but am still not getting there.
 
Many thanks
 

Nigel Kendrick


 


Re: [BackupPC-users] New user- loads of questions

2009-08-18 Thread Holger Parplies
Hi,

Nigel Kendrick wrote on 2009-08-18 12:04:16 +0100 [[BackupPC-users] New user- 
loads of questions]:
 I have just started to play with backuppc and am making good strides - local
 (SMB) backups are working fine

I hope you mean SMB backups of local Windoze machines, not of the BackupPC
server ;-). In any case, welcome to BackupPC.

 [...]
 1) I presume(?) SMB-based backups cannot do block-difference-level copies
 like rsync? We have a number of remote (over VPN) Windows servers and I'd
 like to backup their MSSQL database dumps - they are around 700MB at the
 moment and I presume via SMB the whole lot will get transferred every time?

Correct. I'm not sure how well rsync will handle database dumps, though. You
should try that out manually (if you haven't done so already). Please also
remember that BackupPC will store each version independently, though possibly
compressed (i.e. BackupPC only does file-level deduplication, not block-level).
You only save bandwidth with rsync on transfer, not on storage.

 2) I have seen a number of guides for cwrsync on Windows-based PCs. Any
 votes on the best one and the best place to read up on this? I presume that
 since we'd be backing up via VPN, we could run rsync directly rather than
 via an SSH tunnel?

As far as I know, rsync doesn't work correctly on Windoze (rsyncd does,
though). With a VPN, I'd definitely recommend plain rsyncd. I don't backup
Windoze myself, but Deltacopy is mentioned often on the list - there's a
thread from today [rsyncd on Vista 64-bit cygwin vs SUA] which you might want
to check out.
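(For reference, a minimal per-host rsyncd setup on the BackupPC side is just a
few lines of Perl config. A hedged sketch using standard BackupPC 3.x option
names; the share name and password are placeholders and must match the module
and secrets file defined on the client's rsyncd:)

    # Per-host config sketch for an rsyncd transfer.
    $Conf{XferMethod}         = 'rsyncd';
    $Conf{RsyncShareName}     = ['cDrive'];   # module exported by the client's rsyncd.conf (placeholder)
    $Conf{RsyncdUserName}     = 'backuppc';
    $Conf{RsyncdPasswd}       = 'secret';     # placeholder - must match the client's rsyncd.secrets
    $Conf{RsyncdAuthRequired} = 1;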

 3) As the remote sites are linked via VPN, I could mount the remote shares
 to the local backup server and use rsync 'directly' - any pros/cons doing
 things this way (speed, reliability etc?), or is an rsync server on the
 remote servers a better approach?

If you mount the remote shares locally, you lose the benefit of the rsync
protocol *completely*, because the remote rsync instance is running on the
local computer and will need to read each whole file over the network in order
to figure out which blocks don't need to be transferred (locally) ;-). You
still get better backup precision on incrementals than with tar, but a remote
rsync server (or rsync over ssh for UNIX clients) is definitely the better
approach. I can't think of any advantages of mounting the remote shares,
except that it may be slightly easier to set up (but does that count?).

 4) I am running the backup server on CentOS 5.3 and installed backuppc from
 the Centos RPM. Ideally I'd like to run the app as the normal 'apache' user
 - I read up on a few generic notes about doing this and got to a point where
 backuppc wouldn't start properly as it couldn't create the LOG file. I then
 went round in circles looking at file permissions before putting things back
 the way they were in order to do some more learning. Is there a
 simple-to-follow guide for setting up backuppc to not use mod_perl - I have
 read the docs but am still not getting there.

I believe the default *is* for BackupPC to *not* use mod_perl. Your RPM may
differ, but the upstream documentation will not reflect this.

The BackupPC CGI script needs to be run as backuppc user for various reasons
(access to the pool FS, access to the BackupPC server daemon, use of the
BackupPC Perl library), so you can either run the web server as backuppc user
or implement some form of changing UID (the CGI script - BackupPC_Admin (or
index.cgi on Debian, don't know about Centos) - is normally setuid backuppc,
but that can't work with mod_perl, I believe).

Do you have a reason for not wanting to run apache as backuppc user (eg. other
virtual hosts)? I'm no apache expert, but *removing* use of mod_perl is bound
to be easier than getting it set up (I never did set it up, though). Just make
sure that your changes are not lost if you, one day, decide to upgrade the RPM
package. Backing them up with BackupPC is a good idea, but remember that at
the point where you would need to access them, your web interface would not be
working ...

Regards,
Holger



Re: [BackupPC-users] Problems with hardlink-based backups...

2009-08-18 Thread David
Thanks for the replies


On Mon, Aug 17, 2009 at 3:05 PM, Les Mikesell lesmikes...@gmail.com wrote:
 You can exclude directories from the updatedb runs

That only works if the data you want to exclude (such as older snapshots)
is kept in a relatively small number of directories; otherwise you need to
make a lot of exclude rules, like one for each backup. In my case,
each backed-up server/user PC/etc. is independent and has its own
directory structure with snapshots, etc.

And actually backuppc also has a problematic layout for locate rules:

__TOPDIR__/pc/$host/nnn - One of those directories for each backup version.

So basically, if you have a large number of files on a server, it
seems like you need to entirely exclude the server from updatedb,
otherwise the snapshot directories are going to cause a huge updatedb
database.

Which kind of defeats the point of having updatedb running on the
backup server. Which is why I've disabled it here :-(.

 Du doesn't make any files unless you redirect its output

Usually I make du files on servers, so I can copy the files back to my
workstation, and use a graphical tool like xdiskusage to get a better
idea of where space is used.

- and it can be constrained to the relevant top
 level directories with the -s option.

Yep, but it is still going to take days :-(. And then afterwards you
often still need to run 'du' on those lower levels to see where the
space is actually going.

 Backuppc maintains its own status showing how much space the pool uses and how
 much is left on the filesystem. So you just look at that page often enough to
 not run out of space.

Sounds like a 'df'- like display on the web page, but for the backuppc
pool rather than a partition.

Please correct me if I'm mistaken, but that doesn't really help people
who want to find which files and dirs are taking up the most space, so
they can address it (like, tweak the number of backed up generations,
or exclude additional directories/file patterns, etc).

Normally people use a tool like 'du' for that, but 'du' itself is next
to unusable when you have a massive filesystem, which can easily be
created by hardlink snapshot-based backup systems :-(
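(A minimal Perl sketch of the kind of inode-aware usage walk this calls for: it
charges each hard-linked file only once and reports totals per top-level
directory. It is illustrative only - it will still be slow on millions of
directory entries - and the root path is a placeholder:)

    #!/usr/bin/perl
    # Per-top-level-directory disk usage that counts every inode only once,
    # so hard-linked snapshot trees are not charged repeatedly.
    use strict;
    use warnings;
    use File::Find;

    my $root = shift || '/backups';   # placeholder root
    my %seen;                         # "device:inode" => already counted
    my %usage;                        # top-level directory => bytes

    opendir(my $dh, $root) or die "opendir $root: $!";
    for my $top (grep { !/^\.\.?$/ && -d "$root/$_" } readdir $dh) {
        find(sub {
            my @st = lstat $_;
            return unless -f _;                         # regular files only
            return if $seen{"$st[0]:$st[1]"}++;         # count each inode once
            $usage{$top} += $st[7];
        }, "$root/$top");
    }
    printf "%14d  %s\n", $usage{$_}, $_
        for sort { $usage{$b} <=> $usage{$a} } keys %usage;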


 Backuppc won't start a backup run if the disk is more than 95% (configurable) 
 full.
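(For reference, the threshold Les mentions is the DfMaxUsagePct option in
BackupPC's config.pl; a one-line example:)

    # Don't start new backups once the pool filesystem is more than 95% full.
    $Conf{DfMaxUsagePct} = 95;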


Sounds useful, but it doesn't really address my problem of 'du' (and
locatedb, and others) having major problems with this kind of backup
layout.


 It is best done pro-actively, avoiding the problem instead of trying to fix it
 afterwards because with everything linked, it doesn't help to remove old
 generations of files that still exist.  So generating the stats daily and
 observing them (both human and your program) before starting the next run is the
 way to go.


1. Removing old generations does help. The idea is to remove old
churn that took place in that version - in other words, files which
no longer have any references once that generation is removed
(because all previous generations referring to those files via hard
links are also gone by that point).

2. Proactive is good, but again, with a massive directory structure,
it's hard to use tools like du to check which backups you need to
finetune/prune/etc.


 Also, you really want your backup archive on its own mounted filesystem so it
 doesn't compete with anything else for space and to give you the possibility of
 doing an image copy if you need a backup since other methods will be too slow to
 be practical.  And 'df' will tell you what you need to know about a filesystem
 fairly quickly.


Our backups are stored on an LVM volume which is used only for backups. But
again, the problem is not disk usage causing issues for other
processes. The problem is, once the allocated area is running out of
space, how to check *where* that space is going to, so you can take
informed action. 'df' is only going to tell you that you're low on
space, not where the space is going.

- David.



Re: [BackupPC-users] New user- loads of questions

2009-08-18 Thread Nigel Kendrick
 
Holger - thanks for the quick feedback - a few comments and answers below:

-Original Message-
From: Holger Parplies [mailto:wb...@parplies.de] 
Sent: Tuesday, August 18, 2009 2:49 PM
To: Nigel Kendrick
Cc: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] New user- loads of questions

Hi,

Nigel Kendrick wrote on 2009-08-18 12:04:16 +0100 [[BackupPC-users] New
user- loads of questions]:
 I have just started to play with backuppc and am making good strides - local
 (SMB) backups are working fine

I hope you mean SMB backups of local Windoze machines, not of the BackupPC
server ;-). In any case, welcome to BackupPC.

  -- Yes, backing up Windows machines on the LAN via SMB


 [...]
 1) I presume(?) SMB-based backups cannot do block-difference-level copies
 like rsync? We have a number of remote (over VPN) Windows servers and I'd
 like to backup their MSSQL database dumps - they are around 700MB at the
 moment and I presume via SMB the whole lot will get transferred every time?

Correct. I'm not sure how well rsync will handle database dumps, though. You
should try that out manually (if you haven't done so already). Please also
remember that BackupPC will store each version independently, though possibly
compressed (i.e. BackupPC only does file-level deduplication, not block-level).
You only save bandwidth with rsync on transfer, not on storage.

  -- Thanks, it's as I thought with SMB (all or nothing transfers). 
  -- Got 2TB of RAID 1 to play with so storage not an issue!

 2) I have seen a number of guides for cwrsync on Windows-based PCs. Any
 votes on the best one and the best place to read up on this? I presume that
 since we'd be backing up via VPN, we could run rsync directly rather than
 via an SSH tunnel?

As far as I know, rsync doesn't work correctly on Windoze (rsyncd does,
though). With a VPN, I'd definitely recommend plain rsyncd. I don't backup
Windoze myself, but Deltacopy is mentioned often on the list - there's a
thread from today [rsyncd on Vista 64-bit cygwin vs SUA] which you might want
to check out.

  -- Already started working with cwrsync/rsyncd and grabbed some files
from a local Win2K machine.
  -- Going to try across the VPN later. Looking at 700MB MSSQL database
dumps - hoping to be pleased!
  -- Just subscribed to the list so only seeing posts from around mid-day
onwards but will check the archives.

 3) As the remote sites are linked via VPN, I could mount the remote shares
 to the local backup server and use rsync 'directly' - any pros/cons doing
 things this way (speed, reliability etc?), or is an rsync server on the
 remote servers a better approach?

If you mount the remote shares locally, you lose the benefit of the rsync
protocol *completely*, because the remote rsync instance is running on the
local computer and will need to read each whole file over the network in order
to figure out which blocks don't need to be transferred (locally)

[snip]

  -- Thanks, seems like rsyncd over the VPN is the way to go. 
  -- Also looks like rsync is more tolerant of high VPN latency


 4) I am running the backup server on CentOS 5.3 and installed backuppc from
 the Centos RPM. Ideally I'd like to run the app as the normal 'apache' user
 - I read up on a few generic notes about doing this and got to a point where
 backuppc wouldn't start properly as it couldn't create the LOG file. I then
 went round in circles looking at file permissions before putting things back
 the way they were in order to do some more learning. Is there a
 simple-to-follow guide for setting up backuppc to not use mod_perl - I have
 read the docs but am still not getting there.

I believe the default *is* for BackupPC to *not* use mod_perl. Your RPM may
differ, but the upstream documentation will not reflect this.

The BackupPC CGI script needs to be run as backuppc user for various reasons
(access to the pool FS, access to the BackupPC server daemon, use of the
BackupPC Perl library), so you can either run the web server as backuppc user
or implement some form of changing UID (the CGI script - BackupPC_Admin (or
index.cgi on Debian, don't know about Centos) - is normally setuid backuppc,
but that can't work with mod_perl, I believe).

Do you have a reason for not wanting to run apache as backuppc user

  -- May not be an issue, but I have one server running SugarCRM in a 9-5
operation, and I am planning to have that server do some overnight backups
of LAN-based machines - I am just pre-empting this upsetting SugarCRM; it
may not.

  -- I have another that's a small Asterisk (Trixbox) server (again, 9-5
only), where Apache has to be run as 'trixbox', and I am wondering how this
may all fit together!


Thanks again,

Nigel



Re: [BackupPC-users] rsyncd on Vista 64-bit cygwin vs SUA

2009-08-18 Thread Bernhard Ott
Koen Linders wrote:
 I don't know what you mean with SUA environment, but I use Deltacopy in
 Vista 64 bit via rsyncd.
 
 http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp
 
 Works without a problem atm. Easy to use and you can copy the files to other
 computers and easily register the service.
 
 Greetings,
 Koen Linders

So I will have to play around with DeltaCopy (yet another
Windows client solution ;-))!

Thanks,
Bernhard



Re: [BackupPC-users] New user- loads of questions

2009-08-18 Thread Adam Goryachev

Nigel Kendrick wrote:
   -- May not be an issue, but I have one server running SugarCRM in a 9-5
 operation and
   am planning to have the server do some overnight backups of LAN-based
 machines and I am 
   just pre-empting this upsetting SugarCRM - it may not.
 
   -- I have another that's a small Asterisk (Trixbox) server (again, 9-5
 only), where Apache has to be run as 'trixbox' and I am wondering how this
 may all fit together!

You are probably better off leaving your apache config as-is, but
getting it to run the backuppc cgi-scripts as user backuppc. This is
done with the suexec module in apache, which is pretty much standard for
most apache systems... You might need to install the package, or enable
the module...

I run a stand-alone backuppc server, but even that uses suexec to run
backuppc scripts :)

PS: try to run backuppc on a dedicated machine; mixing it with machines
doing real work means your backups might be on the very machine you need
to restore!

Regards,
Adam



Re: [BackupPC-users] New user- loads of questions

2009-08-18 Thread Michael Stowe
 Morning,

 I have just started to play with backuppc and am making good strides - local
 (SMB) backups are working fine and I am just about to have a look at
 rsync-based backups from a couple of local Linux servers before moving on to
 SMB/rsync via SSH and some VPNs.

 I am diligently RTFM-ing, supplemented with the stuff found via Google -
 which is a bit overwhelming, so I'd appreciate some short cuts from
 anyone with a bit more real-world experience if possible:

 1) I presume(?) SMB-based backups cannot do block-difference-level
 copies like rsync? We have a number of remote (over VPN) Windows servers
 and I'd like to backup their MSSQL database dumps - they are around
 700MB at the moment and I presume via SMB the whole lot will get
 transferred every time?

You are correct sir; though depending on the file structure, some files
pretty much get transferred in their entirety (I'm looking at you, windows
registry.)

 2) I have seen a number of guides for cwrsync on Windows-based PCs. Any
 votes on the best one and the best place to read up on this? I presume that
 since we'd be backing up via VPN, we could run rsync directly rather
 than via an SSH tunnel?

You are correct about obviating ssh with a VPN.  cwrsync does have a
filename-length limitation, but otherwise I've found it perfectly useful.
The biggest problem most people have when getting it working is personal
firewalls.

Well, that, and open files, which is addressed here:
http://www.goodjobsucking.com/?p=62

 3) As the remote sites are linked via VPN, I could mount the remote
 shares to the local backup server and use rsync 'directly' - any
 pros/cons doing things this way (speed, reliability etc?), or is an
 rsync server on the remote servers a better approach?

Local rsync over remote smb means that pretty much every file has to be
read over the WAN in its entirety, whether it has been backed up or not. 
So I guess that's a con.

 4) I am running the backup server on CentOS 5.3 and installed backuppc from
 the Centos RPM. Ideally I'd like to run the app as the normal 'apache' user
 - I read up on a few generic notes about doing this and got to a point where
 backuppc wouldn't start properly as it couldn't create the LOG file. I then
 went round in circles looking at file permissions before putting things back
 the way they were in order to do some more learning. Is there a
 simple-to-follow guide for setting up backuppc to not use mod_perl - I have
 read the docs but am still not getting there.

I can't help you there.





Re: [BackupPC-users] Problems with hardlink-based backups...

2009-08-18 Thread Les Mikesell
David wrote:
 
 You can exclude directories from the updatedb runs
 
 Only works if the data you want to exclude (such as older snapshots)
 are kept in a relatively small number of directories, or you need to
 make a lot of exclude rules, like one for each backup. In my case,
 each backed up server/user PC/etc, is independant, and has it's own
 directory structure with snaphots, etc.
 
 And actually backuppc also has a problematic layout for locate rules:
 
 __TOPDIR__/pc/$host/nnn - One of those directories for each backup version.
 
 So basically, if you have a large number of files on a server, it
 seems like you need to entirely exclude the server from updatedb,
 otherwise the snapshot directories are going to cause a huge updatedb
 database.
 
 Which kind of defeats the point of having updatedb running on the
 backup server. Which is why I've disabled it here :-(.

Why not just exclude the _TOPDIR_ - or the mount point if this is on its 
own filesystem?

 Backuppc maintains its own status showing how much space the pool uses and how
 much is left on the filesystem. So you just look at that page often enough to
 not run out of space.
 
 Sounds like a 'df'- like display on the web page, but for the backuppc
 pool rather than a partition.

It keeps both a summary of pool usage (current and yesterday) and totals 
for each backup run of number of files broken down by new and existing 
files in the pool and the size before and after compression.  A glance 
at the pool percent usage and daily change tells you where you stand.

 Please correct me if I'm mistaken, but that doesn't really help people
 who want to find which files and dirs are taking up the most space, so
 they can address it (like, tweak the number of backed up generations,
 or exclude additional directories/file patterns, etc).

There's not a good way to figure out which files might be in all of your
backups and thus won't free any space when you remove some instance(s) of
them.  But the per-host, per-run stats, where you can see the rate of new
files being picked up and how much they compress, are very helpful.

 Normally people use a tool like 'du' for that, but 'du' itself is next
 to unusable when you have a massive filesystem, which can easily be
 created by hardlink snapshot-based backup systems :-(

That's probably why backuppc does it internally - that and keeping track 
of compression stats and which files are new.

 It is best done pro-actively, avoiding the problem instead of trying to fix it
 afterwards because with everything linked, it doesn't help to remove old
 generations of files that still exist.  So generating the stats daily and
 observing them (both human and your program) before starting the next run is the
 way to go.

 
 1. Removing old generations does help. The idea is to remove old
 churn that took place in that version. In other words, files which
 no longer have any references after that generation is removed
 (because all previous generations referring to those files via hard
 links, are also gone by this point).

Of course, but you do it by starting with a smaller number of runs than 
you expect to be able to hold.  Then after you see that the space 
consumed is staying stable you can adjust the amount of history to keep.

 2. Proactive is good, but again, with a massive directory structure,
 it's hard to use tools like du to check which backups you need to
 finetune/prune/etc.

This may well be a problem with whatever method you use.  It is handled
reasonably well in backuppc.

 Also, you really want your backup archive on its own mounted filesystem so it
 doesn't compete with anything else for space and to give you the possibility of
 doing an image copy if you need a backup since other methods will be too slow to
 be practical.  And 'df' will tell you what you need to know about a filesystem
 fairly quickly.

 
 Our backups are stored under a LVM which is used only for backups. But
 again, the problem is not disk usage causing issues for other
 processes. The problem is, once the allocated area is running out of
 space, how to check *where* that space is going to, so you can take
 informed action. 'df' is only going to tell you that you're low on
 space, not where the space is going.

One other thing - backuppc only builds a complete tree of links for full 
backups which by default run once a week with incrementals done on the 
other days.  Incremental runs build a tree of directories but only the 
new and changed files are populated, with a notation for deletions.  The 
web browser and restore processes merge the backing full on the fly, and
the expire process knows not to remove a full until the incrementals that
depend on it have expired as well.  That, and the file compression, might
take care of most of your problems.

-- 
Les Mikesell
 lesmikes...@gmail.com




Re: [BackupPC-users] Problems with hardlink-based backups...

2009-08-18 Thread Jon Craig
On Tue, Aug 18, 2009 at 10:25 AM, David wizza...@gmail.com wrote:

 Sounds useful, but it doesn't really address my problem of 'du' (and
 locatedb, and others) having major problems with this kind of backup
 layout.


A personal desire on your part to use a specific tool to get
information that is presented in other ways hardly constitutes a
problem with BackupPC.  The linking structure within BackupPC is the
magic behind deduping files.  That it creates a huge number of
directory entries with a resulting smaller number of inode entries is
the whole point.  Use the status pages to determine where your space
is going.  They give you information about the apparent size (the full size
if you weren't de-duping) and the unique size (that portion of each
backup that was new).  This information is a whole lot more useful than
whatever you're going to get from du.  du takes so long because it's a dumb
tool that does what it's told, and you are in effect telling it to
iterate across each server multiple times (one per retained backup) for
each server you back up.  If you did this against the actual clients,
the time would be similar to doing it against BackupPC's topdir.

As a side note, are you letting available space dictate your retention
policy?  It sounds like you don't want to fund the retention policy
you've specified; otherwise you wouldn't be out of disk space.  Buy
more disk or reduce your retention numbers for backups.

Look at the Host Summary page.  The servers with the largest Full
Size or a disproportionate number of retained fulls/incrementals are
the hosts to focus pruning efforts on. Now select a candidate and
drill into the details for that host.  On the Host ??? Backup
Summary page, look at the File Size/Count Reuse Summary table.  Look
for backups with a large New Files - Size/MB value.  These are the
backups where your host gained weight.  You can review the XferLOG
to get a list of files in this backup (note the number before the
filename is the file size).  Now you can go to the filesystem and
wholesale delete a backup, or pick/choose through a backup for a
particular file (user copies a DVD blob to their server).  This won't
immediately free the space (although someone posted a tool that will),
as you will have to wait for the pool cleanup to run.  If it's a
particular file, you may need to go through several backups to find
and kill the file (again, someone posted a tool to do this, I believe).
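(As an aside, here is a minimal, hedged Perl sketch of the kind of search
described above: it walks a host's pc/ tree and prints every backup path that
contains a given file name. It assumes BackupPC 3.x name mangling, where each
stored path component gets a leading "f"; the TopDir, host, and file name are
placeholders:)

    #!/usr/bin/perl
    # Find which backups of a host contain a particular (mangled) file name.
    use strict;
    use warnings;
    use File::Find;

    my ($topdir, $host, $mangled) = ('/var/lib/backuppc', 'somehost', 'fbigfile.iso');

    find(sub {
        # $File::Find::name is the full path, e.g. .../pc/somehost/123/fC/fUsers/...
        print "$File::Find::name\n" if $_ eq $mangled;
    }, "$topdir/pc/$host");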

Voila, you've put your system on a diet - but beware: do this once
and management will expect you to keep solving their under-resourced
backup infrastructure by doing it again and again.  Each time you're
forced to decide is this file really junk, or might a
user crawl up my backside when they find it can't be restored?  You've
also violated the sanctity of your backups, and this could cause
problems if you're ever forced to do some forensics on your system for a
legal case.

-- 
Jonathan Craig



Re: [BackupPC-users] New user- loads of questions

2009-08-18 Thread Filipe Brandenburger
Hi,

On Tue, Aug 18, 2009 at 07:04, Nigel Kendrick support-li...@petdoctors.co.uk wrote:
 4) I am running the backup server on CentOS 5.3 and installed backuppc from
 the Centos RPM. Ideally I'd like to run the app as the normal 'apache' user
 - I read up on a few generic notes about doing this and got to a point where
 backuppc wouldn't start properly as it couldn't create the LOG file. I then
 went round in circles looking at file permissions before putting things back
 the way they were in order to do some more learning. Is there a
 simple-to-follow guide for setting up backuppc to not use mod_perl - I have
 read the docs but am still not getting there.

I've been through the same with the CentOS RPM for BackupPC; eventually I
just stopped using it...

I rebuilt the BackupPC SRPM from Fedora; it builds just fine in
CentOS 5 and works just as well. It actually starts working out of the
box, without any changes to Apache needed: you're able to keep Apache
running as user apache and even run other applications on the same server.

I reported those issues to CentOS, but they still found that requiring
users to change Apache's user was acceptable and did not want to
change that...

You may also try EPEL's package of BackupPC, I believe it will be the
same as the one I rebuilt from Fedora's SRPMS:
http://download.fedora.redhat.com/pub/epel/5/i386/BackupPC-3.1.0-3.el5.noarch.rpm

Otherwise, get Fedora's SRPM here:
ftp://ftp.nrc.ca/pub/systems/linux/redhat/fedora/linux//releases/11/Everything/source/SRPMS/BackupPC-3.1.0-5.fc11.src.rpm

And use these instructions to rebuild it:
http://wiki.centos.org/HowTos/RebuildSRPM

HTH,
Filipe



Re: [BackupPC-users] Is there a speed setting?

2009-08-18 Thread Bowie Bailey
Jeremy Mann wrote:
 
 I'm watching a live output of Ganglia showing network usage while the
 backups are going. Also simple math.. I just finished one full backup, 16
 GB in 143 minutes. That's simply unacceptable for a full backup.

You should be able to get faster transfer rates than that.  I just
checked my last full backup and it was running at 8.2MB/s on a 100Mb/s
network (296GB full backup in 613 minutes).

I can't help with solving your problem, but I can verify that BackupPC
with rsync is definitely capable of backup speeds higher than what you
are seeing.

BTW - I am connecting to an rsync server with no encryption.  If you
connect through SSH, that may affect your transfer rates.

-- 
Bowie



[BackupPC-users] BackupPC File::RsyncP issues

2009-08-18 Thread Jim Leonard
First off, I'm a happy user of BackupPC; I'm only posting because I have 
an architecture question resulting in bad performance that I'm hoping 
someone can answer.

I have a need to back up Windows clients.  I got smb-based backups 
working very well, or so I thought -- no matter what I tried, I couldn't 
get some files backed up, even though the backuppc user was in the 
Administrators group and I had run icacls to grant backuppc full file 
rights.  So I went through the trouble of setting up rsyncd and now I'm 
backing up every file without an issue, except that things are moving 
VERY slowly, even for new files that haven't been backed up before. 
With smb, which used smbclient to do the transfers, I was seeing 
transfer speeds of 40-65MB/s over a gigabit network -- with rsync-based 
backups, I am seeing about 6MB/s, ten times slower.

I profiled File::RsyncP which is what BackupPC_dump appears to be using, 
and found this troubling report after a profile time of one day:

time elapsed (wall):   86034.3727
time running program:  85959.5328  (99.91%)
time profiling (est.): 74.7665  (0.09%)

%Time      Sec.     #calls   sec/call  F  name
83.30  71605.7838   913708   0.078368  ?  File::RsyncP::pollChild
15.98  13737.1191      261  52.632640     File::RsyncP::writeFlush
 0.21    176.3028   121432   0.001452     File::RsyncP::getData
(snip)

As you can see, pollChild is called a ridiculously large number of 
times, which is eating up nearly 70% of the CPU time trying to do a 
backup.  This is extremely inefficient and completely explains why my 
backups are taking so long over rsync (the CPU spends most of its time 
in pollChild).

So, my questions are:

- Is there a reason BackupPC needs to emulate rsync through File::RsyncP 
instead of just using rsync itself?

- If not, is anyone maintaining File::RsyncP who can optimize that code 
and/or redesign it?

Thanks in advance for any advice.
-- 
Jim Leonard (trix...@oldskool.org)http://www.oldskool.org/
Help our electronic games project:   http://www.mobygames.com/
Or check out some trippy MindCandy at http://www.mindcandydvd.com/
A child borne of the home computer wars: http://trixter.wordpress.com/




Re: [BackupPC-users] Is there a speed setting?

2009-08-18 Thread Jim Leonard
Holger Parplies wrote:
 ah, so you're actually having a problem. Up to this point I wasn't sure if you
 weren't just misinterpreting some figures.

No, he and I are seeing the same thing -- File::RsyncP is a real 
problem.  I get decent transfers with actual rsync, but File::RsyncP has 
some serious design issues (see my other post with profiling information 
titled File::RsyncP issues).  Is the author of that module (Craig 
Barratt) still around and/or maintaining it?

If anyone is getting more than 10MB/s out of BackupPC rsyncd transfers, 
I would be quite surprised (and would like to know what the backup 
hardware was).
-- 
Jim Leonard (trix...@oldskool.org)http://www.oldskool.org/
Help our electronic games project:   http://www.mobygames.com/
Or check out some trippy MindCandy at http://www.mindcandydvd.com/
A child borne of the home computer wars: http://trixter.wordpress.com/



Re: [BackupPC-users] BackupPC File::RsyncP issues

2009-08-18 Thread Jim Leonard
Jim Leonard wrote:
 As you can see, pollChild is called a ridiculously large number of 
 times, which is eating up nearly 70% of the CPU time trying to do a 
 backup.  This is extremely inefficient and completely explains why my 
 backups are taking so long over rsync (the CPU spends most of it's time 
 in pollChild).

Wait a second, hold off -- I was able to reproduce bad behavior using 
actual rsync.  I'm still investigating, and will report back with results.
-- 
Jim Leonard (trix...@oldskool.org)http://www.oldskool.org/
Help our electronic games project:   http://www.mobygames.com/
Or check out some trippy MindCandy at http://www.mindcandydvd.com/
A child borne of the home computer wars: http://trixter.wordpress.com/



[BackupPC-users] Backups fail on some XP laptops after running for hours

2009-08-18 Thread brianbe2

Installed BackupPC a few months ago. After creating all users, using one as a
template for the rest, I would think they'd all behave the same. Tested
rsync on all the laptops initially and had pretty good results.

About a week later my boss's laptop, scheduled for an incremental, began
backing up at 10:00 am and was still running when she shut down her laptop at
7:00 pm. Obviously the backup failed. The previous days' incrementals lasted
only 45 minutes or so.

From that time onward the backups begin and just keep running until the user 
powers off. This is occurring on at least three laptops now. 

It seems like rsync begins very strong: the CPU utilization for that user's
process runs up to 25% on the SuSE 10 server, then after about 30-45 minutes it
gradually throttles back almost to zero.

I've tried excludes of the pagefile, ntuser.dat, System Volume Information, etc.,
with no change. I updated to a newer rsync than what came with
cygwin-rsyncd-2.6.8_0.zip on my boss's laptop...
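(For what it's worth, a hedged example of how such excludes are usually written
in the per-host Perl config; the 'cDrive' key is a placeholder and must match
the rsyncd module name, and the paths are just illustrative:)

    $Conf{BackupFilesExclude} = {
        'cDrive' => [
            '/pagefile.sys',
            '/hiberfil.sys',
            '/System Volume Information',
            '/Documents and Settings/*/NTUSER.DAT*',
        ],
    };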

No SSH, just running on port 873, firewall has the port opened via the netsh 
firewall commands by running service.bat from the package mentioned above. 

It almost seems like rsync.exe is stalling on the client laptops. Its process
shows little CPU utilization to begin with, and as time runs on it decreases to
nearly zero.

I could REALLY use some help and/or pointers to get this back to the peppy
speed it once had.

Your thoughts and suggestions?






Re: [BackupPC-users] BackupPC File::RsyncP issues

2009-08-18 Thread Jim Leonard
Holger Parplies wrote:
  first of all, where are you seeing these figures, and what are you measuring?

Rather than try to convince you of my competence, I will offer up these
benchmarks for the exact same endpoint machines and file (a 2 gigabyte
incompressible *.avi file that did NOT exist on the target):

Unix rsync -> Unix rsync:         60 MB/s
Windows SMB -> Unix smbclient:    65 MB/s
Windows rsyncd -> Unix rsync:      5 MB/s
Windows rsyncd -> BackupPC_dump:   5 MB/s

As you can see, something is now clearly wrong with the Windows rsyncd
source.  I confirmed this by profiling actual rsync on Unix and saw that
77% of its time was spent waiting for data (which mirrors exactly what
File::RsyncP::pollsys was doing, wasting 77% of its time waiting for
data).  So the problem isn't BackupPC, it's Windows rsyncd.

I initially used cygwin rsync; for the above test, I switched it out for 
DeltaCopy's rsync.  BOTH VERSIONS had this kind of crappy speed.  Both 
versions showed hardly any CPU or filesystem usage; they just simply run 
slowly for a reason I can't figure out.  The network isn't slow (gigabit 
ethernet), the checksums aren't taking a long time (it's a brand new 
file that doesn't exist on the target so there's nothing to checksum), 
the hard drive isn't slow (RAID-0 SATA stripe capable of 130MB/s read
speeds) -- it just simply serves data really, really slowly.

I can't believe this is an isolated incident.  Other people have got to 
be seeing this.  Other than cygwin and DeltaCopy, is there any specific 
version of rsyncd I should be using?  Any flags I can set in BackupPC 
that can improve speed?

  The primary purpose of the rsync protocol is to save network bandwidth. So if,
  for example, you are transferring only one tenth the amount of data for a full
  backup, and that takes the same time as with SMB, your network throughput will

These are not incrementals, but full backups, and the speed as 
previously mentioned is 1/10th that of SMB.  SMB backups are quite fast 
on this same infrastructure (around 65MB/s) but I can't use SMB because 
of XP/Vista/Win7 permission problems.

  I believe Craig is researching other alternatives (a fuse FS to handle
  compression and deduplication, so BackupPC could, in fact, use native rsync).

I hope that doesn't become mandatory, because that would limit BackupPC 
to Unix versions that support FUSE (not all do).
-- 
Jim Leonard (trix...@oldskool.org)http://www.oldskool.org/
Help our electronic games project:   http://www.mobygames.com/
Or check out some trippy MindCandy at http://www.mindcandydvd.com/
A child borne of the home computer wars: http://trixter.wordpress.com/



Re: [BackupPC-users] BackupPC File::RsyncP issues

2009-08-18 Thread Holger Parplies
Hi,

Jim Leonard wrote on 2009-08-18 17:00:05 -0500 [[BackupPC-users] BackupPC 
File::RsyncP issues]:
 First off, I'm a happy user of BackupPC; I'm only posting because I have 
 an architecture question resulting in bad performance that I'm hoping 
 someone can answer.
 [...]
 With smb, which used smbclient to do the transfers, I was seeing 
 transfer speeds of 40-65MB/s over a gigabit network -- with rsync-based 
 backups, I am seeing about 6MB/s, ten times slower.

first of all, where are you seeing these figures, and what are you measuring?
The primary purpose of the rsync protocol is to save network bandwidth. So if,
for example, you are transferring only one tenth the amount of data for a full
backup, and that takes the same time as with SMB, your network throughput will
be only one tenth as high. That is not a problem, but rather a feature, and it
indicates that network bandwidth is not, in fact, your bottleneck. There are
other good reasons to use rsync just the same. And, yes, I read your mail in
the other thread, but it's still not obvious what you are actually observing,
and what you are interpreting.

Secondly, what are you comparing? Due to a feature of the interpretation of
attrib files by the rsync XferMethod, the first backup (well, all up to the
first full, to be a bit more exact) after switching from non-rsync to rsync
will re-transfer all data (which would make the backup slow, but not
low-bandwidth). In any case, you should run at least one full rsync backup
(per host) before starting measurements.

Have you got very large growing files (or probably: large *changing* files) in
your backup? They could also lead to an explanation (outside File::RsyncP, by
the way).

 I profiled File::RsyncP which is what BackupPC_dump appears to be using, 
 and found this troubling report after a profile time of one day:
 
 time elapsed (wall):   86034.3727
 time running program:  85959.5328  (99.91%)
 time profiling (est.): 74.7665  (0.09%)
 
 %Time      Sec.     #calls   sec/call  F  name
 83.30  71605.7838   913708   0.078368  ?  File::RsyncP::pollChild
 15.98  13737.1191      261  52.632640     File::RsyncP::writeFlush
  0.21    176.3028   121432   0.001452     File::RsyncP::getData
 (snip)
 
 As you can see, pollChild is called a ridiculously large number of 
 times, which is eating up nearly 70% of the CPU time trying to do a 
 backup.

Did you look at the code, or are you inferring that the number is ridiculous
from the name of the function? I don't know enough about the rsync protocol
(yet) to say for sure if the number of calls could be reduced and how, but
the calls to pollChild() seem to make sense to me.

What strikes *me* as unreasonable is the 261 calls to writeFlush() taking an
average of 52.6 seconds. Or maybe there was a wrap-around in the counter?

You should also note that not all of the work is done inside File::RsyncP, so
it's not 70% of the backup time spent there.

Don't get me wrong. I'm not saying that it wouldn't be good to significantly
increase BackupPC performance, if it can be done in the context of how
BackupPC works or can work.

 This is extremely inefficient and completely explains why my 
 backups are taking so long over rsync

Does it? Please share the explanation ...

 So, my questions are:
 
 - Is there a reason BackupPC needs to emulate rsync through File::RsyncP 
 instead of just using rsync itself?

Yes. Craig wouldn't have gone to the trouble of implementing File::RsyncP for
BackupPC if there wasn't, would he? (You are aware that Craig is also the
author of BackupPC, aren't you? ;-)

How would you propose using rsync to update a compressed deduplicated pool
with a separate directory for each backup, mangled file names and file
attributes stored separately?

 - If not, is anyone maintaining File::RsyncP who can optimize that code 
 and/or redesign it?

If there is no reason to use it, someone should optimize it? ;-)

I believe Craig is researching other alternatives (a fuse FS to handle
compression and deduplication, so BackupPC could, in fact, use native rsync).
If that proves unviable, upgrading File::RsyncP to protocol version 30 would
probably be next. But File::RsyncP is open source, so you're free to optimize
it yourself :-). If I find any time at all, I'll take a closer look at the
matter, but that's pretty much an if (0) ...

Regards,
Holger



Re: [BackupPC-users] 100,000+ errors in last nights backup

2009-08-18 Thread Adam Goryachev

Holger Parplies wrote:
 Hi,
 
 Adam Goryachev wrote on 2009-08-13 15:42:26 +1000 [Re: [BackupPC-users] 
 100,000+ errors in last nights backup]:
 [...]
 I've frequently managed to cause two backuppc_dump's to run in parallel
 where one was scheduled by backuppc and one was run manually by me from
 the command line. It would be nice if backuppc_dump could do a simple
 check to see if the new directory already exists, and if so, simple exit
 (or error and exit).
 
 while a check would be possible, it's not quite as simple as that. What
 happens when the machine crashes during running backups? The new/ directory
 won't disappear by itself (well, BackupPC could move all new/ directories to
 trash on startup, but, according to your logic, you might just be running
 BackupPC_dump manually ...). File locking? Put up a big sign don't run
 BackupPC_dump manually unless you know what you are doing? ;-)

Of course, but programs shouldn't really be designed around what happens
when a system crashes (though they should try to handle it well).

A simple failure message when the new directory already exists, telling the
admin to rm -rf backuppc/pc/host/new or something to that effect, would be
sufficient.

 Mainly I run backups manually so I can see exactly what is happening
 during the backup and where/why it is failing or taking so long.
 
 Maybe there should/could be a way to serverMesg BackupPC to do a backup for a
 specific host with a -v switch and verbose logging directed to a specific file
 (i.e. make BackupPC_dump -v take a file name argument and pass that over via
 the BackupPC daemon). Please remind me in about two weeks ;-).

Well, I think there is already the ability to increase the log level,
and hence see more information in the log, but this has two issues:
1) I don't really want to modify the config to increase the log level; I
only want it to apply to the current run.
2) The existing logs are not flushed per line; they are only flushed
after a certain number of bytes (probably per buffer size or whatever it is).

So perhaps this could be achieved by overriding the log level value with a
parameter, which could also force the log files to flush after each
line. I am pretty sure Perl has a way to set this (per-line flush or
buffer-size flush) which can be set after the file open() and applies
until the file close().
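(For the record, Perl does have exactly that: per-filehandle autoflush via
IO::Handle. A minimal illustration; the log path is made up:)

    use IO::Handle;

    open(my $log, '>>', '/var/lib/backuppc/log/verbose.LOG') or die "open: $!";
    $log->autoflush(1);   # flush after every print instead of waiting for the buffer to fill
    print {$log} "this line reaches the file immediately\n";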

PS, I know it hasn't been two weeks, but thought the above would be
easier to implement...

Regards,
Adam

- --
Adam Goryachev
Website Managers
www.websitemanagers.com.au
