[BackupPC-users] FSArchiver?

2010-12-08 Thread hansbkk
I've been investigating how to back up BackupPC's own filesystem,
specifically the tree with all the hard links (BTW, what's the right
name for it, the one that's not the pool?).

The goal is to be able to bring a verified-good copy of the whole
volume off-site via a big kahuna SATA drive.

I don't have enough RAM (or time!) for rsync -H or cp -a.

I was originally looking at block-level partition imaging tools, from
mdadm (RAID1'ing to a removable drive) to dd to Acronis.

I'm also looking at BackupPC_tarPCCopy, which seems great, but...

What I'm really looking for is to be able to just mount the resulting
filesystem on any ol' livecd, without having to restore anything or
reconstruct LVM/RAID complexities just to get at the data - the
source volume is an LV running on a RAID6 array, but I want the target
partition to be a normal one.

I've come across this tool: http://www.fsarchiver.org/Main_Page

Does anyone have experience with it?
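
For reference, here is the sort of invocation I have in mind, pieced
together from the fsarchiver docs -- completely untested on my end, and
the device names, mount point and option values below are just
placeholders:

  # archive the BackupPC LV to a file on the removable drive
  # (-A allows saving a mounted fs; -j/-z set compression jobs and level)
  fsarchiver savefs -A -j2 -z3 /mnt/usb/backuppc.fsa /dev/vg0/backuppc

  # later, restore onto a plain partition that any livecd can mount
  fsarchiver restfs /mnt/usb/backuppc.fsa id=0,dest=/dev/sdb1

The open question for me is whether it recreates all the pool hard links
correctly, and how long that takes on a pool this size.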

Any and all feedback/suggestions welcome.



Re: [BackupPC-users] cpool weirdness - pc directories inside cpool???

2010-12-08 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 23:39:10 -0500 on Wednesday, December 8, 2010:
 > Here is a weird one...
 > I noticed that some of my cpool files in the x/y/z/ directory (which
 > should of course just be compressed files with md5sum names) are
 > themselves *directories*.
 > 
 > Even weirder, some of the directories themselves have *mangle*-name
 > type files.
 > 
 > E.g.,
 > ls -ald 0/5/5/0555b4a02d758d3f570c0d535cc27b4f
 > drwxr-x---   3 backuppc backuppc 4096 Dec  6 01:13 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/
 > 
 > #find 0/5/5/0555b4a02d758d3f570c0d535cc27b4f
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fEsl
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fHelp
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fHelp/fENU
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fHelp/attrib
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fReader
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fReader/fActiveX
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fReader/fBrowser
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fReader/fHowTo
 > 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat
 > etc...
 > 
 > I have several hundred directories like this in the cpool.
 > 
 > I have *no* idea how pc-like files could find their way into the cpool
 > rather than the pc tree.
 > 
 > Has anybody seen anything like this before?
 > 
 > (the only semi-unusual thing I do is to nfs-mount my /var/lib/backuppc
 > directory but I don't know how this would cause this type of problem
 > unless something is truly messed up with the nfs-mount causing file
 > names to be corrupted & confused)
 > 

YIKES, my cpool seems to be really messed up.
Some of the cpool/x/y/z directories are now just short files.
Looking through the logs I do see occasional messages of the form:
mkdir /var/lib/BackupPC//cpool/d/2/4: File exists at /usr/share/BackupPC/lib/BackupPC/Lib.pm line 899

Since this error can cause major failures, I'm surprised that an email
isn't sent out.
In fact, I think it would be helpful if any error that causes a
failure or potential pool corruption were mailed to the user. IMO
such errors are more destructive and devious than the situations covered
by the standard emails warning that a system wasn't backed up for a few days.

Also, I noticed recent errors of the form:
BackupPC_trashClean failed to empty /var/lib/BackupPC//trash
The files left in the trash are all mangled (f-prefixed) pc directory
trees (with no files in them). There don't seem to be any ownership or
permissions problems.


Again, I'm not sure what is causing these errors, and not sure whether they
are at all related to the weirdness problem mentioned previously.

Interestingly, all the failures seem to have occurred in the last few
days, which is about when I (ironically) added a 3rd disk to my 2-disk
RAID 1 as a temporary backup. The disk was mounted over USB and I
noticed that it would fail every couple of days - so maybe the software
RAID messed something up here...
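
In case anyone wants to check their own pool for the same thing, something
along these lines should show it (read-only, so safe to run; adjust the path
to your TopDir; this assumes the standard three-level x/y/z cpool layout):

  # directories sitting where the md5sum-named files should be
  find /var/lib/BackupPC/cpool -mindepth 4 -maxdepth 4 -type d | head

  # non-directories sitting at the x/y/z level, where only directories belong
  find /var/lib/BackupPC/cpool -mindepth 3 -maxdepth 3 ! -type d | head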



Re: [BackupPC-users] Another jLib/fixLinks issue.

2010-12-08 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 13:58:28 -0500 on Tuesday, December 7, 2010:
 > Robin Lee Powell wrote at about 15:40:04 -0800 on Monday, December 6, 2010:
 >  > This is *fascinating*.
 >  > 
 >  > From the actually-fixing-stuff part of the run, I get:
 >  > 
 >  >   ERROR: 
 > "tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg"
 >  - Too many links if added to "59c43b51dbdd9031ba54971e359cdcec"
 >  > 
 >  > to which I say "lolwut?" and investigate.
 >  > 
 >  > $ ls -li /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec*
 >  > 2159521202 -rw-r- 31999 backuppc backuppc 76046 Oct  7 08:29 
 > /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec
 >  > 2670969865 -rw-r- 31999 backuppc backuppc 76046 Oct 16 15:15 
 > /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_0
 >  >   79561977 -rw-r- 31999 backuppc backuppc 76046 Oct 22 22:07 
 > /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_1
 >  >  156369809 -rw-r- 31999 backuppc backuppc 76046 Oct 31 09:06 
 > /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_2
 >  > 3389777838 -rw-r- 31999 backuppc backuppc 76046 Nov  7 09:10 
 > /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_3
 >  >  106188559 -rw-r- 31999 backuppc backuppc 76046 Nov 13 15:10 
 > /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_4
 >  >  247044591 -rw-r- 31999 backuppc backuppc 76046 Nov 19 17:20 
 > /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_5
 >  >  293083240 -rw-r- 31999 backuppc backuppc 76046 Nov 26 06:14 
 > /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_6
 >  >  513555136 -rw-r- 31999 backuppc backuppc 76046 Dec  1 19:37 
 > /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_7
 >  >   52908307 -rw-r-  7767 backuppc backuppc 76046 Dec  4 10:37 
 > /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_8
 >  > $ ls -li 
 > /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 > 374791856 -rw-r- 1 backuppc backuppc 76046 Dec  4 08:03 
 > /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 >  > 
 >  > That's a bunch of files with *thirty two thousand* hard links.
 >  > Apparently that's a limit of some kind.  BackupPC handles this by
 >  > adding new copies, a hack that BackupPC_fixLinks is apparently
 >  > unaware of.
 > 
 > BackupPC_fixLinks does know about the limit and in fact is careful not
 > to exceed it (using the same hack) when it combines/rewrites links.
 > Other than that, I'm not sure where you think BackupPC_fixLinks needs
 > to be aware of it?
 > 
 > To be fair, since I don't have any systems with that many hard links,
 > I have not tested that use case, so perhaps my code is missing
 > something. (I haven't looked through the logic of how BackupPC_fixLinks
 > traverses chains in a while, so maybe there is something there that
 > needs to be adjusted for your use case; but again, since I haven't
 > encountered it, I probably have not given it enough thought.)
 > 

Robin, can you let me know in what way you think BackupPC_fixLinks misses here?
It seems to me that my program does the following:
1. It avoids calling a pool element a duplicate if the sum of the
   number of links in the duplicates exceeds the maximum link number
   (i.e. the pool duplicate is justified).

2. When it fixes/combines links, it avoids exceeding the maximum link
   number and creates a new element of the md5sum chain instead.

Is there any other way that maxlinks comes into play that I am missing?
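
(For anyone following along: a quick way to see how a chain relates to the
link ceiling is just to dump the link counts of its members. Using the chain
from Robin's listing above -- the 31999 figure there matches BackupPC's
$Conf{HardLinkMax} default, which sits just under the filesystem's hard-link
limit:

  for f in /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec*; do
      printf '%6d links  %s\n' "$(stat -c %h "$f")" "$f"
  done

Once a member hits the ceiling, new links go to a fresh _N suffix instead,
which is the same behavior BackupPC_fixLinks tries to preserve when it
combines links.)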



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-08 Thread Jeffrey J. Kosowsky
Robin Lee Powell wrote at about 17:02:06 -0800 on Monday, December 6, 2010:
 > On Mon, Dec 06, 2010 at 01:17:43PM -0800, Robin Lee Powell wrote:
 > > 
 > > So, yeah.  More than one link, matches something in the pool, but
 > > not actually linked to it.  Isn't that *awesome*?  ;'(
 > > 
 > > I very much want BackupPC_fixLinks to deal with this, and I'm
 > > trying to modify it to do that now.
 > 
 > Seems to be working; here's the diff.  Feel free to drop the print
 > statements.  :)
 > 
 > For all I know this will eat your dog; I have no idea what else I
 > broke.  I *do* know that it should be a flag, because I expect that
 > checksumming *everything* takes a very, very long time.

I looked through your code... it is certainly a quick-and-dirty patch,
and it may even work for your purposes, but...
1. It needlessly does a lot of file-content comparisons rather than inode
   number comparisons, so it could be sped up considerably.
2. It mishandles some use cases by always going down the first branch
   of the if statement for any non-zero-length file...

So, I re-did the logic to be significantly faster: it first compares inode
numbers rather than md5sums to verify chain matches, and only when that
fails does it actually look at the file contents to find a potential
match. I also preserved the original logic, so it still works when
links=1, when the file size is 0, or when you are not interested in
verifying the pc hierarchy links. I also added a new -V (verify) flag to
turn this option on and off.
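
Roughly speaking, the cheap test is the moral equivalent of this shell
sketch (illustrative only, not the actual perl; $pc_file and $pool_file
just stand in for the two paths being checked):

  # if the pc-tree file and the cpool chain member share an inode number,
  # they are already the same hard-linked file - no content comparison needed
  if [ "$(stat -c %i "$pc_file")" = "$(stat -c %i "$pool_file")" ]; then
      echo "already linked"
  else
      echo "fall back to comparing file contents"
  fi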

At the same time I did some scattered minor code cleanup -- looking
back, this code is a bit amateurish since it was one of the first real
perl programs I ever wrote. I don't have the time for a thorough
rewrite, but I cleaned things up a little and improved some of the
commenting and documentation.

Note that the file-comparison part of the code (which is now really only
significant when a good fraction of the total cpool entries are dups
or pc entries are missing) could probably be cut by almost a factor
of 2 if, instead of computing md5sums, you calculated the md4sum and
compared it against the md4sum checksums that are appended to each cpool
file (note this only happens with rsync, and I think only the second time
the file is backed up). In general, I think the bad files are typically a
small fraction of the entire pool or pc tree, so it probably is not worth
the effort to decode and test the md4sums.

Anyway, here is the diff. I have not had time to check it much beyond
verifying that it seems to run -- SO I WOULD TRULY APPRECIATE IT IF
YOU CONTINUE TO TEST IT AND GIVE ME FEEDBACK. Also, it would be great
if you would let me know approximately what speedup you achieved with
this code vs. your original.

Thanks

---
--- BackupPC_fixLinks.pl        2009-12-22 07:50:24.291625432 -0500
+++ BackupPC_fixLinks.pl.test   2010-12-08 23:48:43.845678288 -0500
@@ -11,7 +11,7 @@
 #   Jeff Kosowsky
 #
 # COPYRIGHT
-#   Copyright (C) 2008, 2009  Jeff Kosowsky
+#   Copyright (C) 2008, 2009, 2010  Jeff Kosowsky
 #
 #   This program is free software; you can redistribute it and/or modify
 #   it under the terms of the GNU General Public License as published by
@@ -29,12 +29,12 @@
 #
 #
 #
-# Version 0.2, released Aug 2009
+# Version 0.3, released December 2010
 #
 #
 
 use strict;
-#use warnings;
+use warnings;
 use File::Path;
 use File::Find;
 #use File::Compare;
@@ -53,7 +53,7 @@
 %Conf   = $bpc->Conf(); #Global variable defined in jLib.pm (do not use 'my')
 
 my %opts;
-if ( !getopts("i:l:fb:dsqvch", \%opts) || @ARGV > 0 || $opts{h} ||
+if ( !getopts("i:l:fb:Vdsqvch", \%opts) || @ARGV > 0 || $opts{h} ||
 ($opts{i} && $opts{l})) {
 print STDERR <<EOF;
--i   Read innodes from file and proceed with 2nd pc tree pass
--l   Read links from file and proceed with final repair pass
+
+-i   Read pool dups from file and proceed with 2nd pc tree pass
+-lRead pool dups & bad pc links from file and proceed
+ with final repair pass
+ NOTE: -i and -l options are mutually exclusive. 
+-s   Skip first pass of generating (or tabulating if
+ -i or -l options are set) cpool dups
 -f   Fix links
 -c   Clean up pool - schedule BackupPC_nightly to run 
  (requires server running)
--s   Skip first pass of generating/reading cpool dups
 -b Search backups from  (relative to TopDir/pc)
+-V   Verify links of all files in pc path (WARNING: slow!)
 -d   Dry-run
 -q   Quiet - only print summaries & results
 -v   Verbose - print details on each relink
 -h   Print this usage message
+
 EOF
 exit(1);
 }
 my $file = ($opts{i} ? $opt

[BackupPC-users] cpool weirdness - pc directories inside cpool???

2010-12-08 Thread Jeffrey J. Kosowsky
Here is a weird one...
I noticed that some of my cpool files in the x/y/z/ directory (which
should of course just be compressed files with md5sum names) are
themselves *directories*.

Even weirder, some of the directories themselves have *mangle*-name
type files.

E.g.,
ls -ald 0/5/5/0555b4a02d758d3f570c0d535cc27b4f
drwxr-x---   3 backuppc backuppc 4096 Dec  6 01:13 0/5/5/0555b4a02d758d3f570c0d535cc27b4f/

#find 0/5/5/0555b4a02d758d3f570c0d535cc27b4f
0/5/5/0555b4a02d758d3f570c0d535cc27b4f
0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0
0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fEsl
0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fHelp
0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fHelp/fENU
0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fHelp/attrib
0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fReader
0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fReader/fActiveX
0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fReader/fBrowser
0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat 6.0/fReader/fHowTo
0/5/5/0555b4a02d758d3f570c0d535cc27b4f/fAcrobat
etc...

I have several hundred directories like this in the cpool.

I have *no* idea how pc-like files could find their way into the cpool
rather than the pc tree.

Has anybody seen anything like this before?

(the only semi-unusual thing I do is to nfs-mount my /var/lib/backuppc
directory but I don't know how this would cause this type of problem
unless something is truly messed up with the nfs-mount causing file
names to be corrupted & confused)



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-08 Thread Jeffrey J. Kosowsky
Craig Barratt wrote at about 22:38:42 -0800 on Tuesday, December 7, 2010:
 > Robin writes:
 > 
 > > > In fact, it was for applications like this that I had suggested a
 > > > while back adding the partial md5sum to the attrib file so that
 > > > the reverse lookup can be done more cheaply 
 > > 
 > > That would, in fact, be fantastic.
 > > 
 > > > (the need for all of this will be obviated when Craig finishes the
 > > > next version :P )
 > > 
 > > Oh?  How's that looking?
 > 
 > There's good progress, but I've been busy the last several weeks,
 > so lately it's slow.
 > 
 > Yes, it now uses full-file md5 and that digest is stored in the
 > attribute file.  It supports rsync3, acls and xattrs.  Plus there
 > are no hardlinks - reference counting is done via a simple database.

Sounds truly awesome...
Although I will stick with Linux, I'm just wondering: does getting rid of
hard links mean that it will be possible to run this on Windoze, for
those who like that type of stuff...



Re: [BackupPC-users] Ubuntu 10.10 Backuppc 3.2.0

2010-12-08 Thread Craig Barratt
Chris,

> 2010-12-08 11:41:09 full backup started for share tech
> 2010-12-08 11:47:10 unexpected repeated share name  skipped

What is your setting for $Conf{XXXShareName}, where XXX is your
XferMethod?

Craig



Re: [BackupPC-users] Status and stop backup from command line

2010-12-08 Thread Robin Lee Powell
On Wed, Dec 08, 2010 at 05:14:51PM +, Keith Edmunds wrote:
> I need to be able to stop backups running during the working day.
> I'm aware of BlackoutPeriods and, mostly, that manages to achieve
> what I need. However, there are times when, for whatever reason,
> backups overrun. What I want to do is have a cron job that can run
> at the start of the day and:
> 
>  - list any running backups

sudo -u backuppc BackupPC_serverMesg status queues 

I also append: | sed 's/,/,\n/g' | less

>  - stop them

sudo -u backuppc BackupPC_serverMesg stop [hostname]

>  - notify by email of the actions taken

mailx

See /usr/local/bin/BackupPC, the Main_Check_Client_Messages
subroutine, for all the commands it'll take.
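
If you want to glue those together for a cron job, something along these
lines should work as a starting point. It's an untested sketch: the host
list, mail address and BackupPC_serverMesg path are placeholders, and
you'll want to double-check the exact form of the stop message against
Main_Check_Client_Messages:

  #!/bin/sh
  # stop any backups still running at the start of the working day and report
  ADMIN=admin@example.com
  MESG=/usr/local/BackupPC/bin/BackupPC_serverMesg   # adjust to your install

  STATUS=$(sudo -u backuppc $MESG status queues)

  for host in host1 host2 host3; do
      # crude match: this also catches hosts that are merely queued
      if echo "$STATUS" | grep -q "$host"; then
          sudo -u backuppc $MESG stop "$host"
          echo "Stopped overrunning backup of $host" |
              mailx -s "BackupPC: stopped $host" "$ADMIN"
      fi
  done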

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which "this parrot
is dead" is "ti poi spitaki cu morsi", but "this sentence is false"
is "na nei".   My personal page: http://www.digitalkingdom.org/rlp/



[BackupPC-users] Status and stop backup from command line

2010-12-08 Thread Keith Edmunds
I need to be able to stop backups running during the working day. I'm
aware of BlackoutPeriods and, mostly, that manages to achieve what I need.
However, there are times when, for whatever reason, backups overrun. What
I want to do is have a cron job that can run at the start of the day and:

 - list any running backups
 - stop them
 - notify by email of the actions taken

Are there command line tools that can achieve the first two items listed
above?

Thanks,
Keith



Re: [BackupPC-users] Ubuntu 10.10 Backuppc 3.2.0

2010-12-08 Thread John Rouillard
On Wed, Dec 08, 2010 at 03:23:48PM +, Chris Robinson wrote:
> Hi
> 
> I am getting the following errors (active directory). I have tried
> error levels 1 and 9 but cannot get any more information.
> 
> 2010-12-08 11:41:09 full backup started for share tech
> 2010-12-08 11:47:10 unexpected repeated share name  skipped
> 2010-12-08 11:47:16 Backup aborted ()
> 2010-12-08 12:01:47 full backup started for share tech
> 2010-12-08 12:07:18 unexpected repeated share name  skipped
> 2010-12-08 12:07:26 Backup aborted ()
> 2010-12-08 14:32:16 full backup started for share tech
> 2010-12-08 14:36:54 unexpected repeated share name  skipped
> 2010-12-08 14:36:59 Backup aborted ()

Does:

   http://www.adsm.org/lists/html/BackupPC-users/2010-02/msg00099.html

help? (I just googled for the error message "unexpected repeated ...")

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



[BackupPC-users] Ubuntu 10.10 Backuppc 3.2.0

2010-12-08 Thread Chris Robinson

Hi

I am getting the following errors (active directory). I have tried
error levels 1 and 9 but cannot get any more information.

2010-12-08 11:41:09 full backup started for share tech
2010-12-08 11:47:10 unexpected repeated share name  skipped
2010-12-08 11:47:16 Backup aborted ()
2010-12-08 12:01:47 full backup started for share tech
2010-12-08 12:07:18 unexpected repeated share name  skipped
2010-12-08 12:07:26 Backup aborted ()
2010-12-08 14:32:16 full backup started for share tech
2010-12-08 14:36:54 unexpected repeated share name  skipped
2010-12-08 14:36:59 Backup aborted ()

--

Regards

Chris Robinson
W: http://business.krc.org.uk
E: busin...@krc.org.uk
T: 01708 701767
F: 020 7099 6814
M: 07887 98 33 55



[BackupPC-users] Different scheduling for different folders on same server

2010-12-08 Thread Cyril Lavier
Hi.

I have another question on Backuppc.

Here is the situation:

We have 2 big folders (500 and 600GB) full of small files.

They are on the same server, and apparently, with backuppc, I can only
back up a whole server.

But with these two folders, a full backup lasts for more than 20 hours, 
and these folders are expected to grow in the near future.

So I would like to know if there's a way to schedule full backups like
this, over a span of 4 weeks:

1st saturday : full folder1, incremental folder2
1st week, monday to friday : incremental folder1 and folder2
2nd saturday : full folder2, incremental folder1
2nd week, monday to friday : incremental folder1 and folder2
3rd saturday : incremental folder1 and folder2
3rd week, monday to friday : incremental folder1 and folder2
4th saturday : incremental folder1 and folder2
4th week, monday to friday : incremental folder1 and folder2

The important part is the first two weeks.

If anybody has some ideas about how to do something like this with
backuppc, it would help me a lot.

Thanks.

-- 
Cyril LAVIER | Systems Administrator | LTU Technologies
132 Rue de Rivoli - 75001 Paris France
(tel) +33 (0)1 53 43 01 71 | (mail) clav...@ltutech.com
LTU technologies - Making Sense of Visual Content |  www.LTUtech.com 




Re: [BackupPC-users] Fatal error (bad version): setterm: $TERM is not defined.

2010-12-08 Thread Les Mikesell
On 12/8/10 7:26 AM, Alan Taylor wrote:
> Greetings,
>
> I am setting up backuppc 3.2 from the tar file on a CentOS 5.5 server.
> The server is dedicated to backuppc and does nothing else.
> I'm using rsync, have the cgi interface working and can 'ssh
> backu...@taurus' (i.e. as user backuppc ssh to the server itself
> (taurus) and to the other computers on the network). SSH is operating
> on passwordless keys.
>
> Running the command:
> ./BackupPC_dump -v -f taurus
> produces the error:
> Fatal error (bad version): setterm: $TERM is not defined.
> $TERM *is* defined (as xterm) for all users (root, some_normal_user
> and the backuppc user)
>
> Don't quite know what to try next ... ??

A) You need to be able to ssh r...@target_host, starting as the backuppc
user on the server, and

B) There must not be any output generated from the remote login that will
come before the rsync starts.  It looks like something is executing in
your .profile, .bashrc, etc.
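
A quick way to test both conditions from the BackupPC server (substitute
your client's name; the file should come back empty, and anything it does
contain is what is confusing the rsync protocol exchange):

  sudo -u backuppc ssh -q -x -l root clienthost /bin/true > /tmp/ssh-noise 2>&1
  wc -c /tmp/ssh-noise

The "Got remote protocol 1953785203" line in your output is the giveaway:
that number is the first four bytes of the setterm error message ("sett")
being read where rsync's protocol version number was expected.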

-- 
   Les Mikesell
lesmikes...@gmail.com



[BackupPC-users] Fatal error (bad version): setterm: $TERM is not defined.

2010-12-08 Thread Alan Taylor
Greetings,

I am setting up backuppc 3.2 from the tar file on a CentOS 5.5 server.
The server is dedicated to backuppc and does nothing else.
I'm using rsync, have the cgi interface working, and can 'ssh
backu...@taurus' (i.e. as user backuppc I can ssh to the server itself
(taurus) and to the other computers on the network). SSH is operating
with passwordless keys.

Running the command:
./BackupPC_dump -v -f taurus
produces the error:
Fatal error (bad version): setterm: $TERM is not defined.
$TERM *is* defined (as xterm) for all users (root, some_normal_user
and the backuppc user)

Don't quite know what to try next ... ??

Output below:
###
backu...@taurus/usr/local/BackupPC/bin $ ./BackupPC_dump -v -f taurus
cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 taurus
cmdSystemOrEval: finished: got output PING taurus.mydomain
(192.168.8.3) 56(84) bytes of data.
64 bytes from taurus.mydomain (ip_add): icmp_seq=1 ttl=64 time=0.026 ms

--- taurus.mydomain ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms

cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 taurus
cmdSystemOrEval: finished: got output PING taurus.mydomain (ip_add)
56(84) bytes of data.
64 bytes from taurus.mydomain (ip_add): icmp_seq=1 ttl=64 time=0.014 ms

--- taurus.mydomain ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms

CheckHostAlive: returning 0.014
full backup started for directory /
started full dump, share=/
Running: /usr/bin/ssh -q -x -l root taurus /usr/bin/rsync --server
--sender --numeric-ids --perms --owner --group -D --links --hard-links
--times --block-size=2048 --recursive --checksum-seed=32761
--ignore-times . /
Xfer PIDs are now 10805
xferPids 10805
Rsync command pid is 10805
Fetching remote protocol
Got remote protocol 1953785203
Fatal error (bad version): setterm: $TERM is not defined.

Checksum seed is 980251237
Got checksumSeed 0x3a6d7265
Sent exclude: /bak/backuppc
Sent exclude: /media
Sent exclude: /mnt
Sent exclude: /proc
Sent exclude: /sys
Sent exclude: /tmp
Sent exclude: /var/tmp

Many thanks/
Alan
