Re: [BackupPC-users] Move BackupPC

2008-08-08 Thread Daniel Denson

You should start a new thread instead of hijacking an existing one.

Anyway, you should consider NOT moving to RAID5, as it is very slow with
BackupPC. Specifically, write speed is less than half that of a RAID1 and
far less than that of a RAID0+1.


Sam Przyswa wrote:

Holger Parplies a écrit :
  

Diederik De Deckere wrote on 2008-08-07 19:47:38 +0200 [[BackupPC-users]  Move 
BackupPC]:
  


Hi,

We're about to change one of our backup servers from RAID1 to RAID5.
What would be the safest way to back up BackupPC and restore it to the
new system?

  

http://www.catb.org/~esr/faqs/smart-questions.html
  



This reply is a little bit off topic and not very useful...

But it's a reply...

Sam.



  


[BackupPC-users] sync with bittorrent

2008-07-28 Thread Daniel Denson
I recently read a tip on Lifehacker about checking and fixing downloaded
ISO media with bittorrent. Bittorrent is designed for downloading and
organizing small incremental pieces of data, which could make it a nice fit
for remote filesystem syncing with any filesystem that can do readable
snapshots.

Consider making an LVM snapshot and then a torrent file for it. Set up
your backuppc server as a bittorrent tracker, send the torrent to the
remote machine, and run it with rtorrent or some CLI torrent client,
roughly as sketched below.
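
A minimal sketch of the idea, assuming LVM is in use and that transmission-cli
and rtorrent are available; the volume, mount point, tracker URL, and host
names here are examples, not a tested recipe:

# snapshot the volume holding the data (names are assumptions)
lvcreate --snapshot --size 5G --name backupsnap /dev/vg0/data
mkdir -p /mnt/backupsnap
mount -o ro /dev/vg0/backupsnap /mnt/backupsnap

# build a torrent of the snapshot, pointing at the backuppc box as tracker
transmission-create -o /tmp/backupsnap.torrent \
    -t http://backuppc-server:6969/announce /mnt/backupsnap

# copy the .torrent to the remote machine, then seed locally and fetch remotely
scp /tmp/backupsnap.torrent remote:/tmp/
rtorrent /tmp/backupsnap.torrent        # run on both ends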

I'm going to time torrent creation on my filesystem; I'll report back with results.

thanks



Re: [BackupPC-users] sync with bittorrent

2008-07-28 Thread Daniel Denson
1m5.6s to add 6,159 files, or about 0.01 seconds per file; the resulting torrent is 200K.
I did a single directory; I'll try the whole server next.

Daniel Denson wrote:
 I recently read a tip on lifehacker about checking and fixing 
 downloaded ISO media with bittorrent.  bittorrent is designed for 
 small incremental part downloads and organizing that data which could 
 make it a nice fit for remote filesystem syncing with any filesystem 
 that can do readable snapshots.

 consider making an LVM snapshot and then a torrent file for it.  set up 
 your backuppc server as a bittorrent tracker.  send the torrent to the 
 remote machine and run it with rtorrent or some cli torrent client.

 i'm going to time torrent creation on my filesystem, i will return 
 results

 thanks




Re: [BackupPC-users] sync with bittorrent

2008-07-28 Thread Daniel Denson
Too many files for bittorrent to handle; the fileset would need to be wrapped
up with tar, unfortunately. Also, bittorrent won't preserve the hardlinks, so
again it would need to be wrapped in tar.

Oh well, it was a thought.

Daniel Denson wrote:
 1m5.6s to add 6159 files or  .01 seconds per file.  the torrent is 200K
 i did a single directory, ill try this whole server next.

 Daniel Denson wrote:
 I recently read a tip on lifehacker about checking and fixing 
 downloaded ISO media with bittorrent.  bittorrent is designed for 
 small incremental part downloads and organizing that data which could 
 make it a nice fit for remote filesystem syncing with any filesystem 
 that can do readable snapshots.

 consider making an LVM snapshot and then a torrent file for it.  set up 
 your backuppc server as a bittorrent tracker.  send the torrent to the 
 remote machine and run it with rtorrent or some cli torrent client.

 i'm going to time torrent creation on my filesystem, i will return 
 results

 thanks





Re: [BackupPC-users] sync with bittorrent

2008-07-28 Thread Daniel Denson

yes

Martin Leben wrote:

Daniel Denson wrote:
  
I recently read a tip on lifehacker about checking and fixing downloaded 
ISO media with bittorrent.  bittorrent is designed for small incremental 
part downloads and organizing that data which could make it a nice fit 
for remote filesystem syncing with any filesystem that can do readable 
snapshots.


consider making an LVM snapshot and then a torrent file for it.  set up 
your backuppc server as a bittorrent tracker.  send the torrent to the 
remote machine and run it with rtorrent or some cli torrent client.




Hmm... Have I understood you correctly that what you want to achieve is a sync of a 
large file set without the huge memory overhead of rsync?


/Martin




Re: [BackupPC-users] Howto backup BackupPC running on a RAID1 with mdadm for offline-storage

2008-07-28 Thread Daniel Denson
I am just using 32-bit Ubuntu with 4GB (3.4GB available) and it is working
nicely.

   As far as rsync is concerned, I think you need a ton of RAM and a fast
   CPU to make large fileset transfers work, which I have.  I doubt a 1GB
   P3 at 1GHz is going to cut it. Hmm, you probably need a 64-bit OS too,
   so you can use all that RAM in one process.




Re: [BackupPC-users] Backuppc on 64bit

2008-05-08 Thread Daniel Denson
Actually, there are still a lot of issues when running a 64-bit server
system, as some software has not been ported or requires a specific
library that has not been ported to 64-bit. Mostly, the problem is 32-bit
programs that can't use the 64-bit libraries.

But back on point here: BackupPC is completely agnostic about the CPU it
runs on, as it is a script-based program. The only thing that matters is
whether the utilities it uses are of recent enough versions to work.

Tino Schwarze wrote:
 On Thu, May 08, 2008 at 09:09:30AM -0400, Leandro Tracchia wrote:
   
 Would I have a problem running backuppc on a 64bit processor with Ubuntu
 64bit OS???
 

 IMO the days when 32-bit/64-bit was an issue are over. The stuff has been in
 production for several years now. It is mature.

 Bye,

 Tino.

   



Re: [BackupPC-users] How to install BackupPc-3.1.0 in RHEL-5

2008-05-01 Thread Daniel Denson
Sure!  

Using the Ubuntu server install, you can build a BackupPC server, ready to 
back up clients, with nothing but the bare minimum software and services, in 
less than an hour. 
Maybe 10 total clicks/commands in all, including the apt-get install backuppc 
(roughly as sketched below).
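
A minimal sketch of those few commands, assuming a stock Debian/Ubuntu
backuppc package; the exact package behaviour and the htpasswd step are
assumptions and may differ per release:

sudo apt-get update
sudo apt-get install backuppc                    # pulls in apache2, rsync and smbclient as dependencies
sudo htpasswd /etc/backuppc/htpasswd backuppc    # set the web interface password (assumed default user)
# then browse to http://your-server/backuppc/ and add hosts via the web UI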

This server will have no extra services and will be more naturally secure 
because of it.

With a CentOS/Red Hat/SUSE system you have more default services and software 
installed.  They usually set up a more complete desktop, which uses up RAM and 
disk space and typically brings extra desktop-related services.

This is mainly personal preference, but the less that runs on a server 
unnecessarily, the better.

I have played with other systems and none are as easy as an ubuntu server setup.


On Wed, 30 Apr 2008 07:11:58 -0700, [EMAIL PROTECTED] wrote:
 
 On Apr 29, 2008, at 11 
  the Centos5 RPM works perfectly on RHEL5.  I prefer a debian based  
  system as it is quicker and easier to build a dedicated BPC server  
  with no extra stuff, Centos5/RHEL5 is a lot heavier for the default  
  install.
 
 
 
 Can you elaborate on your last statement?
 
 Tony
 




Re: [BackupPC-users] BackupPC data on a Samba share

2008-04-22 Thread Daniel Denson
Quite simply, this is not possible using any kind of standard method.  
You could loop-mount an ext3 disk image from the Samba share, but you would 
then be going through two software layers to get to the filesystem, which 
will be quite slow (see the sketch below)!

It's best to get a NAS disk that supports NFS, or to set up another server and 
export the drives via iSCSI or ATAoE.
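
For completeness, a minimal sketch of the loop-mount workaround mentioned
above; the mount points, image size, and share path are assumptions, and the
performance caveat stands:

# mount the samba share, create a big sparse file on it, and put ext3 inside it
mount -t cifs //nas/backup /mnt/nas -o username=backup
dd if=/dev/zero of=/mnt/nas/backuppc.img bs=1M count=1 seek=500000   # ~500GB sparse image
mkfs.ext3 -F /mnt/nas/backuppc.img
mount -o loop /mnt/nas/backuppc.img /var/lib/backuppc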

shacky wrote:
 Is anyone using BackupPC with the data directory (/var/lib/backuppc) on
 a remote Samba share?
 I'm trying to do this but I have a lot of problems (like some timeout
 errors from smbclient in /var/log/messages).
 I'd like to know whether this configuration should work, or whether it is
 normal for it to cause problems.
 The fact is that I need to have the BackupPC data on a network storage
 server (NAS), a LaCie network hard disk which accepts Samba, HTTP or
 FTP.

 Thank you very much for your help!
 Bye.

   



[BackupPC-users] do a manual fill of a backup

2008-04-15 Thread Daniel Denson
Does anyone know how to do a manual fill of a backup from the command 
line?  What I am doing is changing incrementals to fulls and fulls to 
incrementals.  When going full->incr there is no problem, but when going 
from incr->full I would like to run the fill process on that backup.  I 
don't know that it really matters much in the end, but I want to know 
that there is a copy/hard link of every file in this 'full' backup.

I am aware of the circumstances under which an incremental backup wouldn't 
be complete, or wouldn't be as thorough in checking for file changes, but 
in my case my files always have their mtime changed.

Any ideas?



Re: [BackupPC-users] Wildly different speeds for hosts

2008-04-15 Thread Daniel Denson
Yet another situation where IO is the enemy.  I know most people are 
mainly concerned with IO performance on the server, but the client also 
must be able to keep up, or you get 0.66 MB/s or something.

Raman Gupta wrote:
 Raman Gupta wrote:
   
 I have three hosts configured to backup to my PC. Here are the speeds
 from the host summary:

 host 1:  24.77 GB,  14,000 files, 18.78 MB/s (slower WAN link)
 host 2:   1.27 GB,   4,000 files,  1.89 MB/s (faster WAN link)
 host 3:   4.82 GB, 190,000 files,  0.66 MB/s (fast LAN link)

 They all use rsync with the same setup, other than the exclude list.
 Backups are configured to run one at a time so there is no overlap
 between them.

 The speed of host 3 concerns me. Host 3 is by far the beefiest
 machine, and on the fastest network link of all the hosts, but yet
 backs up at only 0.66 MB/s (incrementals are even slower).
 

 Ok, it seems that the number of files has a large non-linear effect on
 the performance of BackupPC. I excluded a bunch of stuff from my host
 3 backup, and the new stats are:

 host 3:4.2 GB,  85,000 files,  2.19 MB/s

 For a file count reduction factor of 2.2, there was a speed increase
 factor of 3.3.

 Cheers,
 Raman

   



Re: [BackupPC-users] Backup to USB disk.

2008-04-15 Thread Daniel Denson
I think I understand what you want: you would like BackupPC to check that a 
drive is hooked up before trying to use it for backups.  If I am correct, 
then I would suggest you build a quick script that starts BackupPC when the 
device is hotplugged and stops it when the device is unplugged (see the 
sketch below).  I don't know what your Linux distro is, but hotplug scripts 
are pretty easy; do some Google work to find the file and method for a 
hotplug script on a specific device.
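
A minimal udev-based sketch of that idea; the serial number, rule file name,
and helper path are hypothetical and need to match your distro and disk:

# /etc/udev/rules.d/99-backuppc-disk.rules  (hypothetical rule file)
ACTION=="add", SUBSYSTEM=="block", ENV{ID_SERIAL}=="LaCie_500GB_XYZ", \
    RUN+="/usr/local/sbin/backuppc-disk-attached"
ACTION=="remove", SUBSYSTEM=="block", ENV{ID_SERIAL}=="LaCie_500GB_XYZ", \
    RUN+="/etc/init.d/backuppc stop"

# /usr/local/sbin/backuppc-disk-attached  (hypothetical helper)
#!/bin/sh
mount /var/lib/backuppc && /etc/init.d/backuppc start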

Martin Leben wrote:
 Mauro Condarelli wrote:
   
 Hi,
 I asked this before, but no one answered, so I will try again :)

 I am using a large (500G) external USB disk as backup media.
 It performs reasonably, so no sweat.

 Problem is:
 Is there a way to do a pre-check to see if the drive is actually mounted
 and, if not, just skip the scheduled backup?
 It would be easy to put a do_not_backup file in the directory over which
 I mount the remote.
 I could then do a test to see if that file is present (no disk) or if it
 is absent (something was mounted over it).
 Unfortunately I have no idea where to put such a test in BackupPC!

 Can someone help me, please?

 Related issue:
 I would like to use a small pool of identical external HDs in order to
 increase further security.
 


 Hi Mauro,

 Considering what it seems like you want to achieve, I would suggest another 
 approach: Use at least three disks in a rotating scheme and RAID1.

 Say I have three disks labeled 1, 2 and 3. Then I would rotate them according 
 to 
 the schedule below, which guarantees that:
 - there is always at least one disk in the BackupPC server.
 - there is always at least one disk in the off-site storage.
 - all disks are never at the same location.

 1 2 3   (a = attached, o = off-site)
 a o o
 a a o - RAID sync
 o a o
 o a a - RAID sync
 o o a
 a o a - RAID sync
 . . .

 An even safer approach would of course be to rotate four disks where at least 
 two disks are always attached to the BackupPC server.

 Good luck!
 /Martin Leben
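
A minimal sketch of one rotation step in the scheme described above, using
Linux md RAID1; the array and partition names are assumptions:

mdadm --manage /dev/md0 --add /dev/sdc1      # attach the disk returning from off-site; md resyncs it
cat /proc/mdstat                             # wait until the resync completes
mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # detach the disk headed off-site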


   



Re: [BackupPC-users] rsync xfer error

2008-04-12 Thread Daniel Denson
Better to put a small token file in /srv so that it does not appear empty:

echo "backuppc token file, please do not delete" > /srv/.backuppc_token





Martin Leben wrote:
 Mauro Condarelli wrote:
   
 Tony Schreiner ha scritto:
 
 There is a config variable called  BackupZeroFilesIsFatal.
 If that is set to 1, and your share is empty files, the backup will fail.

 Set it to 0 or skip /srv.
 Tony

   
 Thanks,
 That was it.
 Now it is crunching (on another share).

 Thanks again
 Mauro
 

 Hi Mauro,

 Setting BackupZeroFilesIsFatal to 0 might be dangerous. It is configurable for a
 reason. Think about what happens in the following scenario:

 - BackupZeroFilesIsFatal is set to 0.
 - On /srv you have mounted a disk or something.
 - Suddenly the mount disappears due to you fat-fingering the configuration
 (remember that human errors are the most common errors) or faulty hardware or
 something else.

 Now when BackupPC comes along and wants to back up /srv it does that without
 complaining, even though it contains no files. Depending on the schedule and
 retention settings you might have lost your backup completely. Especially if
 this continues for some days/weeks and you don't notice it.

 So the recommendation is to leave BackupZeroFilesIsFatal at 1. Don't add /srv
 to the backup until there is data on it. If the client machine has other
 directories you are backing up and if /srv is a dynamic mount that sometimes
 isn't used, I would recommend that you create a separate host alias in which
 you back up only /srv.

 BR
 /Martin Leben
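
A minimal config sketch of the separate-alias approach described above; the
host name, file name, and share list are assumptions:

# /etc/backuppc/myhost-srv.pl  (per-host config for a hypothetical extra alias "myhost-srv")
$Conf{ClientNameAlias} = 'myhost';        # the real client behind the alias
$Conf{RsyncShareName}  = ['/srv'];        # this alias backs up only /srv
$Conf{BackupZeroFilesIsFatal} = 1;        # keep the safety check enabled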


   



Re: [BackupPC-users] no cpool info shown on web interface

2008-04-12 Thread Daniel Denson
Did you change the $TopDir entry in config.pl?  I have found that the 
mechanism for reporting disk usage requires that $TopDir be 
/var/lib/backuppc.  It's best to mount your target disk onto that 
location.  You can either mount it directly or mount it via bind, which 
is what I do:

cp -Rp /var/lib/backuppc /data/backuppc
mv /var/lib/backuppc /var/lib/backuppc_original
mkdir -p /var/lib/backuppc
chown backuppc:backuppc /var/lib/backuppc

add to fstab
/data/backuppc /var/lib/backuppc none bind 0 0

mount -a

I find that the symlink just doesn't work 100%, but the mount (or mount 
-o bind) works perfectly.


Bernhard Ott wrote:
 Les Mikesell wrote:
   
 Bernhard Ott wrote:
 
 quote_03:
   
 2008-04-11 01:00:36 Pool nightly clean removed 0 files of size 0.00GB
 2008-04-11 01:00:36 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 
 0 max links), 1 directories
 2008-04-11 01:00:36 Cpool nightly clean removed 0 files of size 0.00GB
 2008-04-11 01:00:36 Cpool is 0.00GB, 0 files (0 repeated, 0 max 
 chain, 0 max links), 4369 directories
 
 Once again, the backups are *fine*, restores *do work*, transfer logs 
 ok but I really don't like the idea that BackupPC_nightly might not 
 work the way it should.
 I checked the permissions, docs, archives and ... I'm stuck.
   
 Have you done anything unusual like moving the archive location after 
 installation?

 

 Not AFAIK *g*, I symlinked the whole backuppc directory to the (Debian) 
 standard location /var/lib/backuppc.
 No linking errors (there are hardlinks, find /var/lib/backuppc/pc/ 
 -links +500 -printf '%n %k %p\n' gives me lots of files).
 I hope it is correct that only the backuppc-directory must reside on the 
 same file system?

 Hmm...maybe I moved the pool after the first test runs, I just can't 
 remember (the path didn't change) ... but if there is something wrong 
 with the pool, shouldn't there be massive linking problems?

 Sorry that I repeat that the Host Summary is correct.
 BTW, it's the only server I'm running on the amd64 platform...

 Can I run the BackupPC_nightly in debug mode? Any logs that I 
 may/could/should provide?

 Bernhard

   



Re: [BackupPC-users] Backuppc_zipCreate

2008-04-10 Thread Daniel Denson
wikified

Daniel Denson wrote:
 I'm feeling a bit dumb about BackupPC_zipCreate but I finally figured 
 it out.  Some people have figured it out on the mailing list but never 
 put it in a really coherent statement, so I thought I'd drop this 
 tidbit on everyone.

 BackupPC_zipCreate -h host -n dumpNum -c compressionLevel -s shareName directory > file.zip

 dumpNum = the number of the dump OR a number counting backwards: -1 is the last 
 backup, -2 is second to last, 27 is the backup numbered 27.
 compressionLevel = 0-9; the best balance is 3.  4 takes 15% longer for 
 1-5% gain in compression; 5-9 take MUCH longer without a whole lot of 
 gain.

 BackupPC_zipCreate -h desktop1 -n -1 -c 3 -s docs / > yesterday.zip

 This creates yesterday.zip from host desktop1, from backup -1 (the most 
 recent one), out of the share docs and the directory / (the root of the 
 share); you could also do -s c /windows.

 The important thing, which is not well described in the help text when 
 you run BackupPC_zipCreate without options, is that for the -s 
 shareName argument you need the share name, then a space, and then the 
 directory under the share.  So if your share is c and you want 
 everything, put -s c / NOT -s c/



[BackupPC-users] Backuppc_zipCreate

2008-04-09 Thread Daniel Denson
I'm feeling a bit dumb about BackupPC_zipCreate but I finally figured it 
out.  Some people have figured it out on the mailing list but never put 
it in a really coherent statement, so I thought I'd drop this tidbit on 
everyone.

BackupPC_zipCreate -h host -n dumpNum -c compressionLevel -s shareName directory > file.zip

dumpNum = the number of the dump OR a number counting backwards: -1 is the last 
backup, -2 is second to last, 27 is the backup numbered 27.
compressionLevel = 0-9; the best balance is 3.  4 takes 15% longer for 
1-5% gain in compression; 5-9 take MUCH longer without a whole lot of gain.

BackupPC_zipCreate -h desktop1 -n -1 -c 3 -s docs / > yesterday.zip

This creates yesterday.zip from host desktop1, from backup -1 (the most 
recent one), out of the share docs and the directory / (the root of the 
share); you could also do -s c /windows.

The important thing, which is not well described in the help text when 
you run BackupPC_zipCreate without options, is that for the -s 
shareName argument you need the share name, then a space, and then the directory 
under the share.  So if your share is c and you want everything, put 
-s c / NOT -s c/



Re: [BackupPC-users] How to take Backup of windows machine through Backuppc-3.1.0

2008-04-01 Thread Daniel Denson
I'm sorry to be the guy who does this, but this is well documented.  If 
you are having some problem AFTER you have followed the instructions, 
then you were not clear about that; in that case you must post the error 
you are getting.

kanti wrote:
 Hi all, I want to take a backup of a Windows client machine with BackupPC 3.1.0, 
 and my BackupPC server is running on Fedora 7.  Can anyone tell me how I can get 
 past this problem?  Any ideas?

 Please try to help me out with this problem. 

 Thanks a lot in advance.

 Thanks and bye, 

 Appu

 +--
 |This was sent by [EMAIL PROTECTED] via Backup Central.
 |Forward SPAM to [EMAIL PROTECTED]
 +--



   



Re: [BackupPC-users] suggestion -- nice down BackupPC_link

2008-03-27 Thread Daniel Denson
Not really.  IO is not CPU-bound, and nicing a process only changes its 
CPU scheduling priority (on Linux 2.6).

Now, if you have a process eating up 100% of the CPU, renicing a 
process that does heavy IO *can* have an effect, as the program doing 
all the IO could gain (or lose) the ability to get to the CPU in a 
timely manner.

Generally speaking, though, renicing an IO-bound task won't make any 
difference.
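
For what it's worth, CPU priority and IO priority are separate knobs; a small
sketch, where ionice is a swapped-in suggestion that needs the CFQ scheduler
on a reasonably recent 2.6 kernel, and the PID is a placeholder:

renice 19 -p 12345        # lower CPU priority only
ionice -c3 -p 12345       # idle IO class: only gets disk time when nothing else wants it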

Carl Wilhelm Soderstrom wrote:
 On 03/27 10:29 , Tony Schreiner wrote:
   
 Does nice and renice have much of an effect on I/O bound tasks?
 

 I don't know the scheduler well enough to know for certain myself. I believe 
 it does.

   



Re: [BackupPC-users] BackupPC_link got error -4 when calling MakeFileLink

2008-03-27 Thread Daniel Denson
It is not generally recommended to change the $TopDir setting, and on 
Debian it is not really even doable.  Debian and Ubuntu have $TopDir 
hardcoded to /var/lib/backuppc; just mount your target there, either by 
mounting it directly or by using a bind mount:

mount -o bind /target /var/lib/backuppc

fstab entry:
/target /var/lib/backuppc none bind 0 0


Nils Breunese (Lemonbit) wrote:
 Masta Yogi wrote:

   
 I installed BackupPC  3.1.0 on a Debian Etch machine and then moved  
 the Topdir to a directory in /mnt/...

 Now, if I do backups, I get these errors such as:

 2008-03-27 09:14:21 BackupPC_link got error -4 when calling  
 MakeFileLink(/mnt/backup/medium1/pc/schlepptop.test/0
 /f%2fhome%2fsven/f.Tribler/fbsddb/attrib,  
 89acbd9cfc3dc5c21977ee0dcca44e08, 1)

 How does this error affect my backups ? Is it a serious error ? How  
 can I fix it ?
 

 It is not recommended to change the value of TopDir. Either mount your  
 backup partition at the location used by the package (/var/lib/ 
 backuppc in case of Debian I believe) or use a symlink or bind mount.  
 And what filesystem do you use for your backups? It needs to support  
 hardlinks or BackupPC won't be able to do its business.

 Nils Breunese.

   



Re: [BackupPC-users] Backing up to BackupPC and Tape

2008-03-27 Thread Daniel Denson
I just have a script pull a tar archive off of BackupPC monthly so I can 
back up to DVD, but you can easily write tar archives to tape (tar 
originally stood for Tape ARchive).  Just read the man pages for your OS's 
tape utilities on how to write tar archives to tape.

If you are looking at backing up the whole $TopDir or something, again, 
consider just writing a quick script to dump the whole thing to tape 
and run it whenever you like.  Amanda and Bacula are great when you need 
a very thorough tape backup system, but in this case you really need a 
very simple system of just dumping the whole directory to tape, compressed 
(a sketch follows below).
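
A minimal sketch of both variants; the BackupPC_tarCreate install path, host
name, and tape device are assumptions for your site:

# dump the latest backup of one host, compressed, straight to tape
/usr/share/backuppc/bin/BackupPC_tarCreate -h webserver -n -1 -s / . \
    | gzip | dd of=/dev/nst0 bs=64k

# or dump the whole pool directory to tape in one shot
tar -czf - /var/lib/backuppc | dd of=/dev/nst0 bs=64k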

Les Mikesell wrote:
 Tino Schwarze wrote:
   
 Hi there,

 we've been running BackupPC for over 2 years now and it's great! Now I'm
 going to implement the next step in our backup strategy. We've got a
 Bacula set up and running which currently stores our database dumps to
 tape. Now I'd also like to backup the BackupPC data to tape.

 I configured an archive host and was able to successfully create
 archives. I've got the following circumstances/wishes and I'm looking
 for suggestions, pointers, best-practices etc.

 - tapes have 400 GB raw capacity, about 60 is used by database dumps
 - I've got 26 hosts backed up, some of them are already out of business
   and just kept
 - the sum of all full backups is 424 GB, backup sizes are:
   * one host with 122 GB
   * one with 93 GB
   * 10 hosts with 10-30 GB each
   * the rest below 10 GB (6 below 4 GB)
 - creating a TAR for a host takes about 5 minutes/GB, so the 120 GB host
   should take about 10 hours

 I'm looking for a way to schedule tape backup in such a way that
 - I'm not exceeding a tape's size
 - creating the archives doesn't take longer than about 16 hours so the
   process doesn't interfere with the next BackupPC run; I'm envisioning
   that BackupPC's ready with the servers in the morning so the backup
   server has all day to perform tape backup.
 - each of BackupPC's hosts is backed up to tape regularly
 - created archives are removed automatically after being written to tape
   (streaming directly to tape would be marvellous!)
 - all of this is nicely integrated with Bacula

 Has anybody done something similar before?
 

 Backuppc isn't great at handling tapes.  If I were doing it, I think I'd 
 try just letting bacula do its own separate copy to tape and skew the 
 full/incrementals.  Actually I do something similar with amanda which 
 figures out the incremental mix on its own to fit the tape.  If I did 
 have to only make one backup run, I'd probably get some very big disks 
 and configure backuppc to archive fixed size files into directories like 
 you would to archive to DVD (but much bigger), then use a separate 
 script to copy these files to tape.

   



Re: [BackupPC-users] nexenta vmware test

2008-03-25 Thread Daniel Denson
Sure, you could!  But that would be too easy.

I didn't even check to see if that was in the config.pl file.  Oops.

Nils Breunese (Lemonbit) wrote:
 dan wrote:

   
 you need to link /usr/sbin/ping  to /bin/ping
 ln -s /usr/sbin/ping /bin/ping
 

 You could also set $Conf{PingPath} maybe?

 Nils.

   



[BackupPC-users] file_rsyncp_perl

2008-03-22 Thread Daniel Denson
Does anyone know what the specific dependency on file_rsyncp_perl 0.68 
is in BackupPC?  I'm working on this Nexenta install and only have 0.52 
available.  I forced the BackupPC install, but I'd like to know whether 
that version number was just chosen because it is the version BackupPC 
was tested with, or whether there is a specific function in that version 
that isn't available in older versions.
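
A quick one-liner to check which version is actually installed (File::RsyncP
is the upstream CPAN module behind the file_rsyncp_perl package):

perl -MFile::RsyncP -e 'print $File::RsyncP::VERSION, "\n"'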



[BackupPC-users] nexenta vmware test

2008-03-22 Thread Daniel Denson
Here are results from a quick backup with Nexenta and ZFS (zfs compression=on, 
atime=off), BackupPC compression OFF, raidz, 1GB of RAM dedicated, and 2 CPUs 
dedicated (3GHz).

Note that this is VMware, BUT the numbers aren't too bad.

This is tar over NFS; rsync won't work because the file_rsyncp_perl 
version (0.52) is too old.

   2008-03-22 22:00:27 User backuppc requested backup of test (test)
   2008-03-22 22:00:28 Backup failed on test (File::RsyncP module version (0.52) too old: need 0.68)


   Backup#  Type  Filled  Level  Start Date  Duration/mins  Age/days  Server Backup Path
   0        full  yes     0      3/22 21:45  6.4            0.0       /var/lib/backuppc/pc/localhost/0


   File Size/Count Reuse Summary

   Existing files are those already in the pool; new files are those
   added to the pool. Empty files and SMB errors aren't counted in the
   reuse and new counts.

                          Totals                  Existing Files    New Files
   Backup#  Type  #Files  Size/MB  MB/sec  #Files  Size/MB  #Files  Size/MB
   0        full  36285   562.9    1.46    12210   175.5    29593   388.4


   Compression Summary

   Compression performance for files already in the pool and newly
   compressed files.

                              Existing Files            New Files
   Backup#  Type  Comp Level  Size/MB  Comp/MB  Comp    Size/MB  Comp/MB  Comp
   0        full  off         175.5    175.5    0.0%    388.4    388.4    -0.0%


Of course the Comp percent is not being shown (BackupPC compression is off); 
here it is from a 'zfs get compressratio data/backuppc':


   NAME           PROPERTY       VALUE  SOURCE
   data/backuppc  compressratio  1.60x  -


1.46MB/sec isn't too bad, especially in VMware.  I actually think that 
VMware's terrible networking caused more of a slowdown than the mediocre 
IO.  It is very easy to watch ZFS buffer out the IO slowness; I could 
see data being written in bursts, which really helped mask the low IO 
in VMware.


I can't run this on real hardware at the moment; my SATA 
controller (Intel P35 chipset) isn't supported by the OpenSolaris 
kernel.  I will try to get my hands on an IDE-based system.


Re: [BackupPC-users] zfs-fuse real world

2008-03-20 Thread Daniel Denson
I will run whatever specific test you would like with Bonnie++; just 
give me the command line arguments you would like to see.  I have each 
filesystem mounted at /test$filesystem, so you can include that if you 
like.  I have never used bonnie++ before. 

Let me know what you want to have run and I will try to get some results 
today or tomorrow.

thanks
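
For reference, a typical bonnie++ invocation looks something like this (a
sketch; the mount point, user, and file size are assumptions, and the -s
size should exceed the machine's RAM):

bonnie++ -d /testext3 -s 4g -n 128 -u backuppc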

David Rees wrote:
 On Wed, Mar 19, 2008 at 11:07 PM, dan [EMAIL PROTECTED] wrote:
   
 CPU e8400 3Ghz Dual Core.
 single 7200rpm 16MB cache 200GB maxtor drive.
 ubuntu 7.10
 

 You don't mention how much memory you have in the machine...

   
 FILE COUNT
 138581 files, 634MB, average of 4.68KB per file (copied the /etc directory 20 times)
 

 This doesn't look like a large enough data set, unless you are
 dropping all caches in between each test. See
 http://linux-mm.org/Drop_Caches
 You should run `sync; echo 3 > /proc/sys/vm/drop_caches` before each test.

   
 find all files and run 'wc -l' (access speed) (wow, zfs-fuse is slow here)
 zfs compression=gzip    9.688 sec
 zfs compression=off     10.734 sec
 *ext3                   0.3218 sec
 *reiserfs               0.431 sec
 jfs                     36.18 sec
 *xfs                    0.310 sec
 

 I've used jfs before and would have noticed that it performed an order
 of magnitude worse than the other filesystems - I have to think that
 there is something peculiar with your benchmark.

   
 copy from RAM to disk (/dev/shm -> partition w/ filesystem; bus speed not a
 factor)
 

 Why read from /dev/shm ? Something like this would be better:

 time dd if=/dev/zero of=/tmp/bigfile count=1 bs=1M

 Adjust count as necessary to ensure that you are writing out
 significantly more data than you have available RAM.

   
 issues: jfs and xfs both did write caching and then spent periods catching up.
 ext3               1m13s      8.68MB/s
 jfs                3m21s      3.15MB/s
 *reiserfs          20s        31.7MB/s (WOW!)
 xfs                2m56s      3.60MB/s
 zfs (CPU bound)    2m22.76s   4.44MB/s
 

 All of your numbers seem to be very slow. I would expect at least
 25MB/s, probably 50MB/s for ext3, jfs, reiserfs and xfs.

 Could you try running an established disk IO benchmark tool like bonnie++?

 -Dave
   



Re: [BackupPC-users] user permission always reset to root

2007-12-28 Thread Daniel Denson
You cannot use a CIFS-mounted disk to store BackupPC data, because CIFS 
does not support hardlinks (see the quick test below).
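
A quick way to confirm this on any candidate storage (a sketch; the mount
point matches the one in the quoted message below):

cd /lacie/backup
touch testfile
ln testfile testlink    # on a CIFS mount this typically fails with 'Operation not permitted'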


Arthur Odekerken wrote:

Hi,
 
I have a weird problem with my setup.
It consists of a SuSE Linux 10.2 distro in combination with a LaCie 
ethernet disk mini, which I mounted on /lacie/backup with cifs.

As TopDir I set up /lacie/backup for my backups, but I get several errors:
 
2007-12-28 23:20:36 qsbserver: mkdir /lacie/backup/pc/qsbserver/new//fPST/: 
Permission denied at /usr/local/BackupPC/lib/BackupPC/Xfer/RsyncFileIO.pm line 628
2007-12-28 23:21:17 Backup failed on qsbserver (Child exited prematurely)
2007-12-28 23:21:17 Running BackupPC_link qsbserver (pid=8006)
2007-12-28 23:21:17 qsbserver: mkdir /lacie/backup/cpool/1/a: Permission denied at 
/usr/local/BackupPC/lib/BackupPC/Lib.pm line 741
2007-12-28 23:21:17 Finished qsbserver (BackupPC_link qsbserver)
2007-12-28 23:22:09 User backuppc requested backup of qsbserver (qsbserver)
2007-12-28 23:22:09 Started full backup on qsbserver (pid=8038, share=PST)
2007-12-28 23:22:26 qsbserver: mkdir /lacie/backup/pc/qsbserver/new//fPST/: 
Permission denied at /usr/local/BackupPC/lib/BackupPC/Xfer/RsyncFileIO.pm line 628


When I check permissions several files seem to be created as root 
which could explain the errors.

Can anyone help me with this?
 
Many thanks,

Arthur
 
 





Re: [BackupPC-users] Copy pool one PC at a time

2007-12-28 Thread Daniel Denson
Bryan Penney wrote:
 The original document I quoted was for an older version, but I found 
 one for 2.9.1 and it still says it doesn't understand hardlinks:

 http://www.seas.upenn.edu/~bcpierce/unison//download/releases/unison-2.9.1/unison-manual.pdf
  


 I've copied a much smaller pool (150GB) using rsync when we first went 
 to a production server.

 Both of the servers have 2GB of RAM.
 After I get the drives for the new server, I will try rsync.  It will 
 be interesting to see how long it takes to copy all of this data with 
 all of those hardlinks.

 thanks for the help.

 Bryan



 On 12/28/2007 4:50 PM, dan wrote:
 No, it wouldn't, but I thought it did.  Is that statement for an older 
 version?  It may just not handle it.  rsync should work if you have 
 enough RAM.

 On Dec 28, 2007 3:10 PM, Bryan Penney [EMAIL PROTECTED] wrote:

  In reading about Unison I found a statement in the Caveats and
  Shortcomings section that said Unison does not understand hard
  links.

  If this is true, would Unison work in this situation?

 On 12/28/2007 2:28 PM, dan wrote:
  No, you will have to copy the entire 'pool' or 'cpool' over.  You could
  copy individual pc backups, BUT when BackupPC nightly runs it will remove
  any hardlinks from the pool that are not needed elsewhere.  When you copy
  over pc backups after that, they will not use hardlinks and so your
  filesystem usage will go up a lot.  I would very much suggest you do it
  all in one shot.

  I know that time is against you on this, and that 2TB even over gigabit
  is 5 hours, so I would suggest that you rsync the files over once and
  leave your other machine up running backups, then once it has finished,
  turn backups off and rsync the source to the target again.  Then you will
  have the bulk of the data over and only have to pull changes.  I worry
  about the file count for 2TB being too much for rsync, so consider Unison
  for the transfers.  In my reading I have found that though Unison has the
  same issue as rsync (same algorithms) with a high number of files, it can
  handle more files in less memory.

  I have done this method to push about 800GB over and it worked well, but
  my backup server has 2GB of RAM and runs gigabit.

  Maybe consider adding some network interfaces and channel bonding them.
  I don't know if you have parts lying around, but channel bonding in Linux
  is pretty easy and you can aggregate each NIC's bandwidth to reduce that
  transfer time, though I suspect that your drives are not much faster than
  1 gigabit NIC so you might not get much benefit on gigabit.
 
 
 
   On Dec 28, 2007 10:17 AM, Bryan Penney [EMAIL PROTECTED] wrote:

   We have a server running BackupPC that has filled up its 2TB partition
   (96% full anyway).  We are planning on moving BackupPC to another server
   but would like to bring the history of backups over without waiting the
   extended period of time (days?) for the entire pool to copy.  Is there
   any way to copy pieces of the pool, maybe per PC, at a time?  This
   would allow us to migrate over the course of a few weeks without having
   days at a time with no backups.
 
 
 
 
 



A long time.  You got gigabit?
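
For reference, the hardlink-preserving copy discussed in this thread is
usually attempted with something like the following (a sketch; the paths and
hostname are assumptions, and -H is what makes rsync keep the pool's
hardlinks, at the cost of a lot of memory):

rsync -aH --numeric-ids /var/lib/backuppc/ root@newserver:/var/lib/backuppc/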
