Re: [BackupPC-users] __TOPDIR__ and $Conf{TopDir}

2009-11-26 Thread Adam Goryachev

Holger Parplies wrote:
 Hi,
 
 Tino Schwarze wrote on 2009-11-24 23:40:13 +0100 [Re: [BackupPC-users] 
 __TOPDIR__ and $Conf{TopDir}]:
 PS: Maybe there should be a prominent note right at the $Conf{TopDir}
 setting in config.pl? And a link to the wiki article?
 
 as far as that statement goes, you are correct. There should be, but there
 isn't for older versions of BackupPC. Aside from that, pretty much all points
 made in this thread are obsolete, as BackupPC 3.2.0beta0 fixes the issue.
 Starting with this version, it should in fact be possible to change TopDir
 just by setting $Conf{TopDir} (no, I haven't tried it out ...).
 
 You need to be aware of the fact, though, that BackupPC can't magically locate
 config.pl through a setting inside the file itself. Older versions of BackupPC
 put config.pl in $TopDir/conf/, and newer versions of BackupPC are backward
 compatible with this layout. If that is where your config.pl is located, you've got
 a chicken-and-egg problem, but that doesn't mean 3.2.0beta0 won't let you
 change your storage location just by setting $Conf{TopDir} (see below).
 FHS-compatible BackupPC installations (probably including just about all
 packaged versions) put config.pl somewhere under /etc, which is obviously not
 related to $TopDir, so those cases should be safe.

Not wanting to beat a dead horse... but isn't the usual method of
dealing with this simply a command line parameter which specifies the
location of the config file? The config file then specifies TopDir,
and that value is never needed to find the config file itself.

ie, either the program uses its hardcoded default value (or tries 3 or
4 locations in a specific order that the package builder can customise
if needed),
OR
the user specifies the full path of the config.pl file as a command
line parameter at run-time...

Sure, there are a lot of 'standalone' applications in BackupPC, but if
the user wants to use a non-default config.pl location, then they will
know they need to specify it when calling those applications...

Just my 2c
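
[Purely to illustrate the lookup order being suggested - this is not an existing
BackupPC option, and the paths and variable names below are made up:]

# Hypothetical wrapper logic, not a real BackupPC flag.
CONFIG="$1"                       # full config path given on the command line, if any
if [ -z "$CONFIG" ] ; then
    # otherwise try a short list of locations the packager baked in
    for f in /etc/BackupPC/config.pl /etc/backuppc/config.pl \
             /usr/local/BackupPC/conf/config.pl ; do
        if [ -r "$f" ] ; then CONFIG="$f" ; break ; fi
    done
fi
echo "would read config from: $CONFIG"
# TopDir then comes from $Conf{TopDir} inside that file and is never
# needed to locate the file itself.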




[BackupPC-users] Backup of x00 TB of data

2009-11-26 Thread Ralf Gross
Hi,

I've been using BackupPC and Bacula together for a long time. The amount of
data to back up has been growing massively lately (mostly large video files).
At the moment I'm using backup-to-tape for the large raid arrays. Next
year I may have to back up 300-400 TB. BackupPC is used for a small
amount of data, 3-4 TB.

We are looking at some high-end solutions for the primary storage
right now (NetApp etc.), but this will be very expensive. Most of the
data will be written once and then not touched for a long time. Maybe
not read again at all. There is also no need for an HA solution.

So I will also look into cheaper solutions with more raid boxes. I
don't see a major problem with this, except for backups.

Using snapshots with NetApp filers would be a very nice way to handle
backups of these large amounts of data (only the delta is stored). Tapes
are more complicated to handle than backup to disk.

Does anyone have experience with using BackupPC on these massive
amounts of data? I can't imagine a pool with x00 TB, or using dozens of
BackupPC instances with smaller pools.

Any thoughts? This might be a bit off topic, but if someone has a clever
idea I would be interested to hear it!

Ralf



[BackupPC-users] How I back up 500GB of BackupPC Pool

2009-11-26 Thread Christian Völker

Hi,
like lots of other people, I had serious issues transferring the pool to a
remote site. Rsync needs days to sync the hardlinks. dump/restore
doesn't work properly (or dumps too much data, I don't remember). DRBD
needs a proxy over slow lines (which has to be purchased). A full image
backup is too much data for the line or needs manual intervention (bringing
the USB drive to the remote office by hand).

So I created a small script which backs up my pool to a remote
office over the line. It splits the image into 512MB chunks, compresses
them and stores them on the USB drive. After the backup is done, I use
rsync to transfer the relatively small chunks to the remote office.

So what does it do?
It reads the disk image in 512MB chunks and copies them to the ramdisk /dev/shm
(you'll need enough RAM though!). It calculates the md5sum of each
chunk and compares it to the existing one. If they differ, it gzips the
chunk, copies it to the USB drive and updates the md5.
If the md5sums are the same, it just removes the chunk from the ramdisk.

The disadvantage is that the script has to read the entire device every time it
runs. Another disadvantage might be the sizing of the chunks: if they
are too large, a single bit change in a block will force the transfer of
the whole chunk; too small chunks might reduce performance (not tested!).
Runtime scales nearly linearly with the size of the disk: even if no data has
changed, double the size means double the time.

It's not perfect at the moment, but it works. My initial backup has now been
running for 3 hrs and has transferred 100G (compressed to 94G). Most of the time
is spent waiting for gzip to finish - so if you prefer, just skip compression.

Any comments and suggestions are welcome.


#!/bin/bash
date
SRC=/dev/sdb1
DST=/srv/backuppc
TMP_FILE=/dev/shm/tmp
DST_FILE=$DST/backuppc.gz.iso
service backuppc stop
mount /srv || exit
if [ ! -e /srv/backuppc ] ; then mkdir /srv/backuppc; fi
rm -fr /dev/shm/*
sync
rm -f $DST/backuppc.log

i=0
loop=`losetup -f`

bs=$[64*1024] # blocksize 64k
fs=$[1024*1024*512] # filesize 512M
count=$[$fs/$bs] # block count per file (16384)
devlen=`df | grep $SRC| awk '{print $2}'` # #of 1k blocks
umount /var/lib/BackupPC
devlen=$[$devlen*1024] # #of bytes
devbsblocks=$[$devlen/$bs] # #of 64k blocks
# In case the device does not divide evenly into 64k blocks
if [ $[$devbsblocks*$bs] -ne $devlen ] ; then
devbsblocks=$[$devbsblocks+1] ; fi

files=$[$devbsblocks/$count] # #of destination files
if [ $[$files*$count] -ne $devbsblocks ] ; then files=$[$files+1] ; fi

while [ $i -le $files ] ; do
losetup -o $[$i*$count*$bs] $loop $SRC
free=$[`df /dev/shm | grep shm|awk '{print $4}'`*1024]
while [ $free -lt $fs ] ; do
sleep 30s
free=$[`df /dev/shm | grep shm|awk '{print $4}'`*1024]
done
#dd if=$loop bs=$bs count=$count | tee $TMP_FILE.$i | md5sum > $TMP_FILE.$i.md5
dd if=$loop bs=$bs count=$count of=$TMP_FILE.$i
md5sum $TMP_FILE.$i > $TMP_FILE.$i.md5
losetup -d $loop
if [ -e $DST_FILE.$i.md5 ] ; then
echo diff $DST_FILE.$i.md5 $TMP_FILE.$i.md5
diff $DST_FILE.$i.md5 $TMP_FILE.$i.md5
if [ $? -ne 0 ] ; then
echo $i is different >> $DST/backuppc.log
# compress in the background; the free-space check above throttles the loop
( cat $TMP_FILE.$i | gzip -2 > $DST_FILE.$i ;
rm -f $TMP_FILE.$i ) &
mv $TMP_FILE.$i.md5 $DST_FILE.$i.md5
else
rm -f $TMP_FILE.$i $TMP_FILE.$i.md5
fi
else
( cat $TMP_FILE.$i | gzip -2 > $DST_FILE.$i ; rm -f $TMP_FILE.$i ) &
mv $TMP_FILE.$i.md5 $DST_FILE.$i.md5
fi

i=$[$i+1]
done
umount /srv
service backuppc start
date
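
[In case it helps, a rough sketch of how the chunks could be reassembled at the
remote site. This is my addition, not part of Christian's script, and it is
untested; /dev/sdX is a placeholder for a target device at least as large as
the original.]

#!/bin/bash
# Restore sketch: decompress the chunks in order and stream them back
# onto the target device.
DST=/srv/backuppc
TARGET=/dev/sdX   # placeholder - must be at least as large as the source
i=0
while [ -e $DST/backuppc.gz.iso.$i ] ; do
    gunzip -c $DST/backuppc.gz.iso.$i
    i=$[$i+1]
done | dd of=$TARGET bs=64k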


Greetings Christian



Re: [BackupPC-users] Windows bare metal restore

2009-11-26 Thread Les Mikesell
On Wed, Nov 25, 2009 at 11:30 PM, Chris Bennett ch...@ceegeebee.com wrote:
 There would also be the additional (and potentially not trivial)
 work to get clonezilla to run on an active windows system. I'm not
 sure if it can read all the NTFS metadata from a shadow copy but there
 has to be some way to do it since Norton Ghost is able to clone a
 live system using some type of shadow copy.

 You may have realised this already, but the previous suggestion was to
 just have a minimal clone of an equivalent base Windows system - it
 doesn't have to be a clone of the server you want to restore.



Yes, the idea would be to have a minimal base image of the OS that you
would not have to update, or at least not frequently, along with a
bare-metal re-installer that you would boot from CD.  But, I'm not
sure if anyone knows exactly what needs to be in this image for
Windows or whether you have to update it after installing windows
updates.

-- 
 Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Windows bare metal restore

2009-11-26 Thread Bob Weber

On 11/26/2009 11:23 AM, Les Mikesell wrote:
 On Wed, Nov 25, 2009 at 11:30 PM, Chris Bennettch...@ceegeebee.com  wrote:

 There would also be the additional (and potentially not trivial)
 work to get clonezilla to run on an active windows system. I'm not
 sure if it can read all the NTFS metadata from a shadow copy but there
 has to be some way to do it since Norton Ghost is able to clone a
 live system using some type of shadow copy.

  
 Yes, the idea would be to have a minimal base image of the OS that you
 would not have to update, or at least not frequently, along with a
 bare-metal re-installer that you would boot from CD.  But, I'm not
 sure if anyone knows exactly what needs to be in this image for
 Windows or whether you have to update it after installing windows
 updates.


I have cloned Windows systems (Win 2k to XP) by using SystemRescueCd.  First
I would use ntfsresize to re-size the file system to the smallest disk I
would restore to.  This is not necessary if you will always use the same
disk size or larger.  Then I would copy the first 100 megs or so (not
really sure how much is necessary but this amount always worked) using
dd.  If the destination disk was a different size I would use fdisk to
resize the c: partition to the size of the new disk, making sure the
partition always started at the same block as the original.  If the size
is the same I would use fdisk to write the original partition table
back so the kernel would know the disk partition table changed (this
saves a reboot, since the dd copy process copies a new partition table
also).  Next I would use ntfsclone to clone the whole ntfs file system.
ntfsclone just copies the data and directory structure, so it is
usually pretty fast.  If the disk is bigger than the original I would
use ntfsresize to resize the ntfs file system to the new partition.

This seems pretty complicated, but I used a script to automate the
process.  I used this at a local school system to create classroom
images saved to a Linux server that I could later use to populate all the
computers in a class.  The 2 files for each image were compressed with
gzip and transferred with ssh to and from the Linux server.  Check out
the documentation that comes with ntfsclone to see examples.
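
[For anyone who wants to try the same approach, the core commands look roughly
like this. This is my sketch of the steps Bob describes, untested; the device
names, the 40G size, backup@server and the /images paths are placeholders.
Check the ntfsclone and ntfsresize man pages before using.]

# --- on the source machine, booted from a rescue CD:
ntfsresize --size 40G /dev/sda1          # optional: shrink to fit the smallest target disk
dd if=/dev/sda bs=1M count=100 | gzip -c | \
        ssh backup@server 'cat > /images/class1.start.gz'   # boot code + partition table
ntfsclone --save-image -o - /dev/sda1 | gzip -c | \
        ssh backup@server 'cat > /images/class1.ntfs.gz'    # used blocks only

# --- on a target machine, also booted from a rescue CD:
ssh backup@server 'cat /images/class1.start.gz' | gunzip -c | dd of=/dev/sda bs=1M
# (if the target disk is a different size, adjust the partition end with fdisk here)
ssh backup@server 'cat /images/class1.ntfs.gz' | gunzip -c | \
        ntfsclone --restore-image --overwrite /dev/sda1 -
ntfsresize /dev/sda1                     # optional: grow the filesystem to fill the partition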

...Bob





[BackupPC-users] Exclude a directory.

2009-11-26 Thread boulate

Hi all!

First, please excuse my bad English, I'll try to do my best.

I have a problem with BackupPC. I have to back up a server using the SMB
protocol, and everything works well.

The problem is that I have to exclude a directory from the backup. I read the
documentation and tried to configure the folder to exclude from the web
interface, but it doesn't work.

So I tried to edit /etc/backuppc/myhost.pl, and now it looks like this:

$Conf{SmbShareName} = [
  'commun'
];
$Conf{SmbSharePasswd} = 'password';
$Conf{SmbShareUserName} = 'administrator';
$Conf{BackupFilesExclude} = {
  'commun' => ['/Commun/Informatique/Folder_to_exclude/',
               '/Commun/Informatique/Second_folder_to_exclude/'],
};


I tried with a * after Folder_to_exclude/ and I still have the same
problem.

What am I doing wrong?

I looked at the documentation and searched the internet, and I still can't solve
this problem... help!



