Re: [BackupPC-users] excluding files from backup

2011-09-04 Thread Ralf Gross
SSzretter wrote:
 It's set to :
 
 $Conf{BackupFilesOnly} = {};
 
 My file count / reuse summary is basically not changed (which I would expect 
 some numbers to drop) in the web admin for the machine's latest backup.
 
 In the latest xfer log for this morning, just a SMALL sampling:
 
 create d 755   0/0   0 System Volume Information
   create d 755   0/0   0 temp
   create d 755   0/0   0 TempEI4
   create d 755   0/0   0 WINDOWS
   create d 755   0/0   0 WINDOWS/$968930Uinstall_KB968930$
  create d 755   0/0   0 WINDOWS/$968930Uinstall_KB968930$/spuninst
   create d 755   0/0   0 WINDOWS/$hf_mig$
   create d 755   0/0   0 WINDOWS/$hf_mig$/KB2079403
   create d 755   0/0   0 WINDOWS/$hf_mig$/KB2079403/SP3QFE
   create d 755   0/0   0 WINDOWS/$hf_mig$/KB2079403/update
   create d 755   0/0   0 WINDOWS/$hf_mig$/KB2115168
 
  pool 644   0/0   30216 WINDOWS/Prefetch/MOFCOMP.EXE-01718E95.pf
   pool 644   0/0   55314 WINDOWS/Prefetch/MRT.EXE-1B4A8D49.pf
   pool 644   0/0    7844 WINDOWS/Prefetch/MRTSTUB.EXE-0574A4ED.pf
   create   644   0/0  110134 WINDOWS/Prefetch/MSACCESS.EXE-175F0AD1.pf
   pool 644   0/0  171224 WINDOWS/Prefetch/MSCORSVW.EXE-1366B4F5.pf


I don't see any context in your mail.

 
 +--
 |This was sent by sszret...@hotmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--


Ah, Backup Central again.

Ralf
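
For context, paths like the ones in that log are normally excluded per share
via $Conf{BackupFilesExclude} in the host's config.pl; an empty
$Conf{BackupFilesOnly} = {} only means that no include-list is set, it does
not exclude anything. A minimal sketch (the share name 'C' and the path list
are illustrative, not taken from the original post):

# per-host config.pl -- keys are share names, '*' would match every share
$Conf{BackupFilesExclude} = {
    'C' => [
        '/System Volume Information',
        '/temp',
        '/TempEI4',
        '/WINDOWS/Prefetch',
        '/WINDOWS/$hf_mig$',
    ],
};

With the rsync/rsyncd methods these paths are interpreted relative to the
share root, so a changed share name also changes what the entries match.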



Re: [BackupPC-users] Best FS for BackupPC

2011-05-27 Thread Ralf Gross
Holger Parplies wrote:
 Hi,
 
 Carl Wilhelm Soderstrom wrote on 2011-05-26 06:05:48 -0500 [Re: 
 [BackupPC-users] Best FS for BackupPC]:
  On 05/26 12:20 , Adam Goryachev wrote:
   BTW, specifically related to backuppc, many years ago, reiserfsck was
   perfect as it doesn't have any concept or limit on 'inodes'... Same for
   mail and news (nntp) servers. Do XFS/JFS have this feature? I'll look
   into these things another day, when I have some time :)
  
  There are indeed 'inodes' listed in the 'df -i' output of XFS filesystems.
  However, I've never heard of anyone hitting the inode limit on XFS, unlike
  ext3.
 
 of course XFS *has* inodes, and I wondered about the 'df -i' output, too, when
 I tried it yesterday. I don't remember reiserfs giving any meaningful
 information for 'df -i' ... nope, '0 0 0 -'. I sincerely hope that XFS doesn't
 have *static inode allocation*, meaning I have to choose the number of inodes
 at file system creation time and waste any space I reserve for them but do not
 turn out to need. That was one main concern when choosing my pool FS.
 Actually, mkfs.xfs(8) explains a parameter '-i maxpct=value':
 
 This  specifies  the  maximum percentage of space in
 the filesystem that can be allocated to inodes.  The
 default  value  is 25% for filesystems under 1TB, 5%
 for filesystems under 50TB and  1%  for  filesystems
 over 50TB.
 
 The further explanation says this is achieved by the data block allocator
 avoiding lower blocks, which are needed for obtaining 32-bit inode numbers.
 It leaves two questions unanswered (to me, at least):
 ...

have a look at the inode64 mount option.

http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F

Ralf
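
A minimal sketch of how the inode64 option is typically applied (mount point
and fstab line are illustrative; on older kernels the option may not take
effect on a plain remount, so an unmount/mount cycle can be needed):

# one-off test
mount -o remount,inode64 /data/BackupPC

# persistent, via /etc/fstab
/dev/sdc   /data/BackupPC   xfs   inode64,noatime   0 2

# check inode usage afterwards
df -i /data/BackupPC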



[BackupPC-users] rsync restore skipping non-regular file ...

2010-06-14 Thread Ralf Gross
Hi,

I switched from tar to rsync a few weeks ago. Now, for the first time, I had
to restore a file from an older backup (one made before the tar to rsync switch).

I get the following messages in the xfer log:

Running: /usr/bin/ssh -c blowfish -q -x -l root vu0em003-1
/usr/bin/rsync --server --numeric-ids --perms --owner --group -D
--links --hard-links --times --block-size=2048 --relative
--ignore-times --recursive --checksum-seed=32761 . /tmp/
Xfer PIDs are now 27459
Got remote protocol 30
Negotiated protocol version 28
Checksum caching enabled (checksumSeed = 32761)
Got checksumSeed 0x7ff9
Sending /server/projekte/path/to/file...  file.xls (remote=/file.xls) type = 0
  restore   770 50872/1095 1789952 /tmp/file.xls
Remote[2]: skipping non-regular file file.xls
Finished csumReceive
Finished csumReceive
Done: 1 files, 1789952 bytes


BackupPC shows a status of success for the restore, but no file was
restored to /tmp.

I can successfully download the file as zip archive or by just
clicking on the file in the tree view.

I can also restore the same file with rsync when it is in a backup
that was done more recently with rsync (after the switch from tar to
rsync).

Is this a known problem? Is there anything I can do to restore these
older files with rsync (besides switching to tar as the restore method)?

Ralf
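
As a workaround, a file from one of the older tar-era backups can also be
extracted on the BackupPC server itself and copied over manually; a rough
sketch using BackupPC_tarCreate (backup number, share name and file path are
placeholders, not values from the log above):

# run as the backuppc user on the BackupPC server
/usr/local/BackupPC/bin/BackupPC_tarCreate -h vu0em003-1 -n 123 \
    -s /some/share /path/to/file.xls > /tmp/restore.tar

# unpack locally, or copy the tar archive to the client first
tar -xf /tmp/restore.tar -C /tmp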



Re: [BackupPC-users] speed up backups

2010-06-06 Thread Ralf Gross
Les Mikesell wrote:
 Ralf Gross wrote:
   
  the RsyncP man page tells me this:
  
  http://search.cpan.org/~cbarratt/File-RsyncP-0.68/lib/File/RsyncP.pm
  
  File::RsyncP does not compute file deltas (ie: it behaves as though
  --whole-file is specified) or implement exclude or include options
  when sending file. File::RsyncP does handle file deltas and exclude
  and include options when receiving files.
  
  
  Thus no need to try the --whole-file option.
 
 That's when sending files - as in doing a restore.  When doing backups RsyncP 
 is 
 on the receiving side and a stock rsync is sending - and will do deltas. 
 Whether or not it is a win to compute deltas probably depends on the 
 relationship to available bandwidth and CPU, but it might be worth a try.  
 I'd 
 guess --whole-file might generally  be a win on files with random changes but 
 not on growing logfiles where the deltas are all past the end of the previous 
 copy.

the --whole-file option didn't help. The second full backup since
switching to rsync has finished now, and it took 600 min less than the
last couple of full backups before. On the other hand, the incremental
backups now need 3-4 h instead of the 60-80 min they took before.

I'll stay with rsync for now, maybe with a longer interval between the
full backups. rsync should catch moved/deleted files.

Ralf
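
Stretching the interval between full backups is only a config change; a
minimal sketch (the exact values are illustrative, and the fractional .97
follows the usual convention of keeping backups from drifting later each day):

# global or per-host config.pl
$Conf{FullPeriod}  = 13.97;   # a full roughly every two weeks instead of 6.97
$Conf{IncrPeriod}  = 0.97;    # keep daily incrementals
$Conf{FullKeepCnt} = 2;       # how many fulls to keep on disk; adjust to taste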



Re: [BackupPC-users] speed up backups

2010-05-31 Thread Ralf Gross
Les Mikesell wrote:
 Ralf Gross wrote:
  Ok, the first rsync full backup (488) completed. It took 500min. longer than
  the last tar full backup (482).
  
  Backup  TypeFilled  Level   Start Date  Duration/mins  Age/days
  482 fullyes 0   5/19 02:05  3223.2 11.5
  483 incrno  1   5/21 07:4989.6  9.2
  484 incrno  2   5/22 03:05   136.4  8.4
  485 incrno  3   5/23 03:05   119.1  7.4
  486 incrno  4   5/24 03:05   111.4  6.4
  487 incrno  1   5/25 03:05   165.9  5.4
  488 fullyes 0   5/26 21:00  3744.2  3.7
  489 incrno  1   5/29 12:15   394.1  1.1
  490 incrno  2   5/30 03:05   190.8  0.4
  
  I'm not sure if the checksum caching will compensate for this after the 3rd
  backup. Anything else I could do to tune rsync?
  
 
 You could force a full to start on Friday evening so weekly scheduling will 
 keep 
 the full runs on weekends if they take more than a night to complete.  
 Depending 
 on how much daily change you have, you might want to set incremental levels 
 for 
 the intermediate runs.

I use BackupPC and bacula for backups; I once lost a complete backuppc
pool/filesystem to a defective RAID controller, so I need two backup
windows. But doing a full backup only once every two weeks sounds like
a reasonable option.

What I don't quite understand is why the incremental backups also take
much longer than before.
 
 A more extreme change would be to edit Rsync.pm to not add the --ignore-times 
 option on fulls.  I haven't needed this myself yet but I think it would make 
 a 
 big difference in speed - at the expense of not checking files for unlikely 
 but 
 possible differences.

Hm, I think I'll leave this option as it is. In the list archives I
found some posts about the --whole-file option, but no definitive
answer on whether RsyncP supports it and whether it's useful at all.


Ralf
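
For reference, multi-level incrementals of the kind suggested above are
configured with $Conf{IncrLevels}; a minimal sketch (the level pattern is
illustrative):

# each incremental is taken against the most recent backup of a lower level,
# so the later incrementals in the cycle stay small
$Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];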
 



Re: [BackupPC-users] speed up backups

2010-05-30 Thread Ralf Gross
Ralf Gross wrote:
 Les Mikesell wrote:
  On 5/26/2010 3:41 PM, Ralf Gross wrote:
   Ralf Gross schrieb:
   write(1, N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5..., 594) = 594
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0}
  
  
   smells like a time out, but I don't know where. I found a couple of 
   messages
   with similar output in the list archives, but none of them had a 
   solution yet.
  
   *grr*
  
   I only traced the Xfer PID, not the PID. BackupPC_dump seems to be
   active and comparing the file list with the pool and I see high cpu
   load.
  
   I'm sure that I haven't seen that as I abortet the backup before.
   Now I'll have will wait until tomorrow morning...
  
  Until the 2nd full completes, the server side has to uncompress the 
  stored copy to compute the checkums on existing files.  And there may be 
  some quirk about switching from tar to rsync that I've forgotten.  Maybe 
  the 1st run will add the checksum cache for files you already have.
 
 
 The full rsync is still running sind 5/26 21:00. I'll report back when
 it's done. 


Ok, the first rsync full backup (488) completed. It took 500min. longer than
the last tar full backup (482).

Backup  TypeFilled  Level   Start Date  Duration/mins  Age/days
482 fullyes 0   5/19 02:05  3223.2 11.5
483 incrno  1   5/21 07:4989.6  9.2
484 incrno  2   5/22 03:05   136.4  8.4
485 incrno  3   5/23 03:05   119.1  7.4
486 incrno  4   5/24 03:05   111.4  6.4
487 incrno  1   5/25 03:05   165.9  5.4
488 fullyes 0   5/26 21:00  3744.2  3.7
489 incrno  1   5/29 12:15   394.1  1.1
490 incrno  2   5/30 03:05   190.8  0.4

I'm not sure if the checksum caching will compensate for this after the 3rd
backup. Anything else I could do to tune rsync?

Ralf




Re: [BackupPC-users] speed up backups

2010-05-28 Thread Ralf Gross
Les Mikesell wrote:
 On 5/26/2010 3:41 PM, Ralf Gross wrote:
  Ralf Gross schrieb:
  write(1, N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5..., 594) = 594
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0}
 
 
  smells like a time out, but I don't know where. I found a couple of 
  messages
  with similar output in the list archives, but none of them had a solution 
  yet.
 
  *grr*
 
  I only traced the Xfer PID, not the PID. BackupPC_dump seems to be
  active and comparing the file list with the pool and I see high cpu
  load.
 
  I'm sure that I haven't seen that as I abortet the backup before.
  Now I'll have will wait until tomorrow morning...
 
 Until the 2nd full completes, the server side has to uncompress the 
 stored copy to compute the checkums on existing files.  And there may be 
 some quirk about switching from tar to rsync that I've forgotten.  Maybe 
 the 1st run will add the checksum cache for files you already have.


The full rsync is still running since 5/26 21:00. I'll report back when
it's done. 

Ralf



[BackupPC-users] speed up backups

2010-05-26 Thread Ralf Gross
Hi,

I've been using BackupPC without major problems for a few years now. Our
main file server has now reached 3.3 TB, and a full backup with the tar
method takes 2 days (18 MB/s).

I'd like to find out if there is something I can do to speed up the full
backups without changing the hardware.

The file server and the BackupPC server are connected by Gigabit Ethernet
links. The BackupPC pool is on a hardware RAID device (RAID6) with 1 GB
cache. The setup of the file server is a bit complex with drbd/lvm.

Both servers are running Debian with XFS as the filesystem.

I'd have expected high iowait numbers during backup, either on the file
server or the BackupPC side. But the numbers look ok, not too much iowait.


BackupPC server:

04:35:01  CPU  %user %nice   %system   %iowait   %steal %idle
04:45:01  all  37.97  0.00  7.29 10.86  0.00 43.88
04:55:01  all  35.87  0.00  5.32  9.11  0.00 49.70
05:05:01  all  34.69  0.00  4.96  9.91  0.00 50.44
05:15:02  all  41.40  0.00  5.88  6.22  0.00 46.50
05:25:01  all  44.81  0.00  6.14  4.18  0.00 44.87
05:35:01  all  39.34  0.00  5.41  8.41  0.00 46.84
05:45:02  all  44.90  0.00  6.84  3.59  0.00 44.68
05:55:02  all  32.12  0.00  5.87  5.90  0.00 56.12
06:05:01  all  34.23  0.00  6.32  6.28  0.00 53.17
06:15:02  all  30.66  0.00  6.01  7.31  0.00 56.01
06:25:01  all  18.76  0.00  3.69  7.38  0.00 70.17
06:35:03  all  22.08  0.00  5.27  6.61  0.00 66.04
06:45:01  all  39.50  0.00 11.54  0.37  0.00 48.59
06:55:01  all  37.16  0.00  9.91  2.18  0.00 50.75
07:05:02  all  24.52  0.00  4.99  8.99  0.00 61.50
07:15:01  all  11.46  0.00  2.65 13.12  0.00 72.77
07:25:02  all  11.65  0.00  3.16 11.16  0.00 74.03
07:35:01  all  25.32  0.00  5.48  7.22  0.00 61.97
07:45:01  all  26.68  0.00  6.71  6.99  0.00 59.62
07:55:02  all  29.74  0.00  5.80  4.08  0.00 60.38
08:05:01  all  42.30  0.00  6.34  3.49  0.00 47.87
08:15:01  all  18.21  0.00  4.26 21.14  0.00 56.39
08:25:01  all  25.73  0.00  5.32 20.17  0.00 48.78
08:35:01  all  34.94  0.00  6.42  5.61  0.00 53.03
08:45:02  all  26.25  0.00  5.00 10.71  0.00 58.04
08:55:01  all  48.16  0.00  8.18  0.53  0.00 43.14
09:05:01  all  44.54  0.00  7.10  2.25  0.00 46.11
Average:  all  29.70  0.00  5.48  8.36  0.00 56.46

file server:

06:25:01  CPU  %user %nice   %system   %iowait   %steal %idle
06:35:01  all   5.91  0.10  7.25  5.71  0.00 81.03
06:45:01  all   6.08  0.00  9.49  4.91  0.00 79.52
06:55:01  all   5.71  0.00  8.72  3.91  0.00 81.65
07:05:01  all   5.67  0.00  6.89  4.16  0.00 83.28
07:15:01  all   5.59  0.00  5.72  5.89  0.00 82.79
07:25:01  all   5.24  0.00  9.86  6.27  0.00 78.62
07:35:01  all   5.91  0.00 13.82  5.10  0.00 75.17
07:45:01  all   5.10  0.00  7.54  4.82  0.00 82.55
07:55:01  all   4.35  0.00  6.69  3.53  0.00 85.43
08:05:01  all   1.77  0.00  3.57  2.17  0.00 92.49
08:15:01  all   1.81  0.00  2.44  3.73  0.00 92.02
08:25:01  all   2.10  0.00  4.75  3.08  0.00 90.07
08:35:01  all   2.10  0.00  6.37  3.60  0.00 87.92
08:45:01  all   2.31  0.00  5.09  3.46  0.00 89.15
08:55:01  all   2.05  0.00  4.06  1.34  0.00 92.56
09:05:01  all   2.00  0.00  3.63  2.31  0.00 92.06
Average:  all   2.50  0.00  3.84  2.91  0.00 90.75


After all I've read, switching to rsync instead of tar doesn't seem to be a
better choice.

Disk I/O on the file server doesn't seem to be the bottleneck either. I can
boost the disk I/O during backup with other tools (dd, cat, bonnie++) to more
than 50 MB/s.

Any ideas if I can tune my BackupPC settings to speed things up?

Ralf
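
To narrow down where the ~18 MB/s limit comes from, it can help to measure
the network path and the raw read path separately; a rough sketch (host names
and file paths are placeholders, and iperf is assumed to be installed on both
machines):

# network only: run "iperf -s" on the BackupPC server, then on the file server
iperf -c backuppc-server

# sequential read on the file server, without the network
dd if=/path/to/some/large/file of=/dev/null bs=1M

# read + ssh + network, roughly the path a tar transfer takes
# (run on the BackupPC server, data discarded)
ssh root@fileserver 'dd if=/path/to/some/large/file bs=1M' > /dev/null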



Re: [BackupPC-users] speed up backups

2010-05-26 Thread Ralf Gross
Pedro M. S. Oliveira wrote:

 Have you tried the rsync method, it should be way faster than tar.

I think rsync is most useful for servers with a slow network
connection. But network speed is not the problem here; to be precise, I
don't know exactly what the real bottleneck is.

Ralf



Re: [BackupPC-users] speed up backups

2010-05-26 Thread Ralf Gross
Tyler J. Wagner wrote:
 
 
 On Wednesday 26 May 2010 14:34:40 Sorin Srbu wrote:
  -Original Message-
  From: Les Mikesell [mailto:lesmikes...@gmail.com]
  Sent: Wednesday, May 26, 2010 2:55 PM
  To: General list for user discussion, questions and support
  Subject: Re: [BackupPC-users] speed up backups
  
  After the 1st 2 fulls, rsync should be better if you have enabled checksum
  caching.  You do need plenty of RAM to hold the directory listing if you
  
  have a
  
  large number of files.
  
  That was the checksum= 31thousandsomething to be added somewhere. I need to
  find that mail in the archives...
  
 
 Add to RsyncArgs and RsyncRestoreArgs:
 
 --checksum-seed=32761
 
 The best thing about BackupPC is that all help is available from the web 
 interface.

Ok, I'll give it a shot. I changed the Xfer method to rsync, updated to
rsync 3.0.2 (I know I won't benefit much because of BackupPC's own
rsync Perl module) and added the --checksum-seed option.

Right now the rsync process is consuming 370 MB and is still growing.
The file server has 7,000,000 files. Let's see what happens...
Ralf
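
For reference, the change described above ends up in config.pl roughly like
this; a minimal sketch that appends to whatever RsyncArgs/RsyncRestoreArgs
already contain (this assumes the global values are visible when a per-host
config file is parsed, which is the normal case):

# enable rsync checksum caching; as noted above, the benefit only shows up
# after the first two full backups
push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';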



Re: [BackupPC-users] speed up backups

2010-05-26 Thread Ralf Gross
Ralf Gross wrote:
 Ralf Gross wrote:
  Tyler J. Wagner wrote:
   
   
   On Wednesday 26 May 2010 14:34:40 Sorin Srbu wrote:
-Original Message-
From: Les Mikesell [mailto:lesmikes...@gmail.com]
Sent: Wednesday, May 26, 2010 2:55 PM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] speed up backups

After the 1st 2 fulls, rsync should be better if you have enabled 
checksum
caching.  You do need plenty of RAM to hold the directory listing if 
you

have a

large number of files.

That was the checksum= 31thousandsomething to be added somewhere. I 
need to
find that mail in the archives...

   
   Add to RsyncArgs and RsyncRestoreArgs:
   
   --checksum-seed=32761
   
   The best thing about BackupPC is that all help is available from the web 
   interface.
  
  Ok, I give it a shot. Changed the Xfer method to rsync, updated to
  rsync 3.0.2 (I know I will not benefit much because of BackupPC's own
  rsync perl module) and added the --checksum-seed option.
  
  Right now the rsync process is consuming 370 MB and is still growing.
  The file server has 7.000.000 files. Let's see what happens...
 
 Hm, after 45 minutes the memory usage stopped growing at 540 MB. But now I
 don't see any activity at all. Neither on the file server, nor on the BackupPC
 server.  Nothing in the BackupPC log since the start of the backup. Nothing in
 the NewFileList file.
 
 On the file server I get this info with lsof:
 
 # lsof | grep rsync
 rsync 31139  root  cwd   DIR  104,1 4096  
 2 /
 rsync 31139  root  rtd   DIR  104,1 4096  
 2 /
 rsync 31139  root  txt   REG  104,1   384304 
 571696 /usr/bin/rsync
 rsync 31139  root  mem   REG0,0   
 0 [heap] (stat: No such file or directory)
 rsync 31139  root  mem   REG  104,197928
 2026757 /lib/ld-2.3.6.so
 rsync 31139  root  mem   REG  104,126088
 2023687 /lib/libacl.so.1.1.0
 rsync 31139  root  mem   REG  104,131784
 2023965 /lib/libpopt.so.0.0.0
 rsync 31139  root  mem   REG  104,1  1286104
 2026779 /lib/libc-2.3.6.so
 rsync 31139  root  mem   REG  104,115568
 2023689 /lib/libattr.so.1.1.0
 rsync 31139  root0u unix 0x81020f1ffc80
 29056104 socket
 rsync 31139  root1u unix 0x81020f1ffc80
 29056104 socket
 rsync 31139  root2u unix 0x81020f1ff380
 29056106 socket
 
 
 strace on the BackupPC server (BackupPC_dump process):
 
 $strace -f -p 11972
 Process 11972 attached - interrupt to quit
 select(8, [7], NULL, [7], NULL
 
 
 strace on the file server (rsync):
 
 $strace -f -p 31139
 Process 31139 attached - interrupt to quit
 select(1, [0], [], NULL, {11, 972000}
 

next try

# strace -e trace=\!file -f -p 12795

[60 minutes later]

fstat(3, {st_mode=S_IFDIR|0755, st_size=12288, ...}) = 0
fcntl(3, F_SETFD, FD_CLOEXEC)   = 0
getdents64(3, /* 126 entries */, 4096)  = 4096
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, \374\17\0\7, 4)  = 4
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, he/pagelinks\6\0\0\0\35\216\333H:K\ttext_html..., 4092) = 4092
getdents64(3, /* 85 entries */, 4096)   = 2752
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, \374\17\0\7, 4)  = 4
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, rsion:\5\2ip\7\0\0\0\33I\246E\7\0\0\0/bin/ip8\5\6..., 4092) = 4092
getdents64(3, /* 0 entries */, 4096)= 0
close(3)= 0
fstat(3, {st_mode=S_IFDIR|0700, st_size=16384, ...}) = 0
fcntl(3, F_SETFD, FD_CLOEXEC)   = 0
getdents64(3, /* 2 entries */, 4096)= 48
getdents64(3, /* 0 entries */, 4096)= 0
close(3)= 0
fstat(3, {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
fcntl(3, F_SETFD, FD_CLOEXEC)   = 0
getdents64(3, /* 11 entries */, 4096)   = 320
getdents64(3, /* 0 entries */, 4096)= 0
close(3)= 0
mmap(NULL, 29110272, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x2ac342e63000
munmap(0x2ac342e63000, 29110272)= 0
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5..., 594) = 594
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0}


smells like a time out, but I don't know where. I found a couple of messages
with similar output in the list archives, but none of them had a solution yet.
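
When tracing, it is the BackupPC_dump parent (not only the rsync/Xfer child)
that walks the pool and compares the file list, as the follow-up below notes;
a small sketch for attaching to it (the PID is a placeholder):

# on the BackupPC server: find the dump process for this host...
ps axww | grep '[B]ackupPC_dump'

# ...and attach to it, following forked children
strace -f -tt -p <PID> -e trace=select,read,write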

Re: [BackupPC-users] speed up backups

2010-05-26 Thread Ralf Gross
Ralf Gross wrote:
 write(1, N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5..., 594) = 594
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0}
 
 
 smells like a time out, but I don't know where. I found a couple of messages
 with similar output in the list archives, but none of them had a solution yet.

*grr*

I only traced the Xfer PID, not the BackupPC_dump PID. BackupPC_dump
seems to be active, comparing the file list with the pool, and I see
high CPU load.

I'm sure I haven't seen this before because I aborted the earlier
backups. Now I'll have to wait until tomorrow morning...

Ralf 



Re: [BackupPC-users] experiences with very large pools?

2010-02-19 Thread Ralf Gross
Gerald Brandt wrote:
 
  
  I think I've to look for a different solution, I just can't imagine a 
  pool with  10 TB. 
  
  
   * I have recently taken my DRBD mirror off-line and copied the BackupPC 
   directory structure to both XFS-without-DRBD and an EXT4 file system for 
   testing. Performance of the XFS file system was not much different 
   with, or without DRBD (a fat fiber link helps there). The first 
   traversal of the pool on the EXT4 partition is about 66% through the 
   pool traversal after about 96 hours. 
  
  nice ;) 
 
 You may want to look at this thread 
 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg17234.html
  

I've seen this thread, but the pool sizes there are at most in the
lower TB range.

Ralf



Re: [BackupPC-users] experiences with very large pools?

2010-02-19 Thread Ralf Gross
Les Mikesell wrote:
 Ralf Gross wrote:
 
  I think I've to look for a different solution, I just can't imagine a
  pool with  10 TB.
 
 Backuppc's usual scaling issues are with the number of files/links more than 
 total size, so the problems may be different when you work with huge files.  
 I 
 thought someone had posted here about using nfs with a common archive and 
 several servers running the backups but I've forgotten the details about how 
 he 
 avoided conflicts and managed it.  Maybe this would be the place to look at 
 opensolaris with zfs's new block-level de-dup and a simpler rsync copy.

ZFS sounds nice, but we have no experience with opensolaris or ZFS.
And I heard in the past that not all of ZFS's features are ready for
production.

A bit off topic: right now I'm looking for a cheap storage solution based
on Supermicro chassis with 36 drive bays (server) or 45 drive bays
(expansion unit) in 4 U. Frightening: that would be 810 TB in one
rack ((36 + 45) HDDs x 5 x 2 TB, 40 U) with 5 servers. The only problems
are power, cooling and backup.

Ralf



Re: [BackupPC-users] experiences with very large pools?

2010-02-19 Thread Ralf Gross
Les Mikesell wrote:
 On 2/19/2010 9:42 AM, Ralf Gross wrote:
  Les Mikesell schrieb:
  Ralf Gross wrote:
 
  I think I've to look for a different solution, I just can't imagine a
  pool with  10 TB.
 
  Backuppc's usual scaling issues are with the number of files/links more 
  than
  total size, so the problems may be different when you work with huge 
  files.  I
  thought someone had posted here about using nfs with a common archive and
  several servers running the backups but I've forgotten the details about 
  how he
  avoided conflicts and managed it.  Maybe this would be the place to look at
  opensolaris with zfs's new block-level de-dup and a simpler rsync copy.
 
  ZFS sounds nice, but we have no experience with opensolaris or ZFS.
 
 That's something that could be fixed.

sure, but it's something I can't estimate right now.

 
  And I heard in the past that not all of ZFS's features are ready for
  production.
 
 In the past, nothing worked on any OS.

that's a bit harsh...
 
  bit off topic:
  Right now I'm looking for a cheap storage solution that is based
  on supermicro chassis with 36 drive bays (server) or 45 drive bays
  (expansion unit) in 4 HU. Frightening, that would be 810 TB in one
  Rack (36 + 45 HDDs x 5 x 2 TB, 40 HU) with 5 servers. Only problem is
  power, cooling and backup.
 
 What's generating that kind of data?  Can you make whatever it is write 
 copies to 2 different places so you don't have to deal with finding the 
 differences in something that size for incrementals?  Or perhaps store 
 it in time-slice volumes so you know where the changes you need to back 
 up each day will be?

The data is mainly uncompressed raw video data (AFAIK HDF, I don't
work with the data myself). Users come with external HDDs and copy the
data onto the Samba file servers.

Right now we back up 70 TB to tape, but I would like to get rid of
tapes. For the large RAID volumes there is also no regular backup; we
only do it on request. But this should change now...



Ralf



Re: [BackupPC-users] experiences with very large pools?

2010-02-19 Thread Ralf Gross
Timothy J Massey wrote:
 Ralf Gross ralf-li...@ralfgross.de wrote on 02/19/2010 10:42:35 AM:
 
  bit off topic:
  Right now I'm looking for a cheap storage solution that is based
  on supermicro chassis with 36 drive bays (server) or 45 drive bays
  (expansion unit) in 4 HU. Frightening, that would be 810 TB in one
  Rack (36 + 45 HDDs x 5 x 2 TB, 40 HU) with 5 servers. Only problem is
  power, cooling and backup.
 
 That's why companies like EMC and NetApp get big money for selling you 
 nearly the *exact* same hardware:  but with software and services designed 
 to handle things like...backup.
 
 With storage sets of that size, there's really very little you can do 
 outside of snapshots, volume management and lots and lots of disk (and 
 chassis and processor and power and ...) redundancy.  Simply traversing a 
 file system of that size is going to take more time than you have for a 
 backup window.  If you want anything approaching daily backups, you can't 
 do it at the filesystem level.  :(
 
 And even for things like off-site backup, it's far easier to have a 
 smaller version of your big array off-site and sync a snapshot 
 periodically (taking advantage of the logging/COW filesystem of the array 
 system) than it is to try to traverse an entire 800TB filesystem (or 
 multiple filesystems that add up to 800TB).

You are absolutely right. I hope we will implement part of the storage
with e.g. NetApp and only a small part with a cheap solution that
doesn't need backup, or only a best-effort one.

The data is not changing much; most of the files just lie there and
will not be read again.

Ralf



Re: [BackupPC-users] experiences with very large pools?

2010-02-19 Thread Ralf Gross
dan wrote:
 you would need to move up to 15K rpm drives to have a very large array and
 the cost will grow exponentially trying to get such a large array.
 
 as Les said, look at a zfs array with block level dedup.  I have a 3TB setup
 right now and I have been running a backup against a unix server and 2
 linux servers in my main office here to see how the dedup works
 
 opensolaris:~$ zpool list
 NAME  SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
 rpool  74G  5.77G  68.2G 7%  1.00x  ONLINE  -
 storage  3.06T   1.04T  2.02T 66%  19.03x  ONLINE  -
 
 this is just rsync(3) pulling data over to a directory
 /storage/host1 which is a zfs fileset off pool storage for each host.
 
 my script is very simple at this point
 
 zfs snapshot storage/host1@`date +%Y.%m.%d-%M.%S`
 rsync -aHXA --exclude-from=/etc/backups/host1excludes.conf host1:/
 /storage/host1
 
 to build the pool and fileset
 format #gives all available disks
 zpool status will tell you what disks are already in pools
 zpool create storage mirror disk1 disk2 disk3 etc etc spare disk11 cache
 disk12 log disk13
 #cache disk is a high RPM disk or SSD, basically a massive buffer for IO
 caching,
 #log is a transaction log and doesnt need a lot of size but IO is good so
 high RPM or smaller SSD
 #cache and log are optional and are mainly for performance improvements when
 using slower storage drives like my 7200RPM SATA drives
 zfs create -o dedup=on (or dedup=verify) -o compression=on storage/host1
 
 dedup is very very good for writes BUT requires a big CPU.  dont re-purpose
 your old P3 for this.
 compression is actually going to help your write performance assuming you
 have a fast CPU.  it will reduce the IO load and zfs will re-order writes on
 the fly.
 dedup is all in-line so it reduces IO load for anything with common blocks.
 it is also block level not file level so a large file with slight changes
 will get deduped.
 
 dedup+compression really needs a fast dual core or quad core.
 
 if you look at my zpool list above you can see my dedup at 19x and usage at
 1.04 which effectively means Im getting 19TB in 1TB worth of space.  my
 servers have relatively few files that change and the large files get
 appended to so I really only store the changes.
 
 snapshots are almost instant and can be browsed at
 /storage/host1/.zfs/snapshot/ and are labeled by the @`date xxx` so i get
 folders for the dates.  these are read only snapshots and can be shared via
 samba or nfs.
 zfs list -t snapshot
 
 opensolaris:/storage/host1/.zfs/snapshot# zfs list -t snapshot
 NAME                             USED  AVAIL  REFER  MOUNTPOINT
 rpool/ROOT/opensolaris@install   270M      -  3.26G  -
 storage/host1@2010.02.19-48.33
 
 zfs set sharesmb=on storage/host1@2010.02.19-48.33
 -or-
 zfs set sharenfs=on storage/host1@2010.02.19-48.33
 
 
 if you dont want to go pure opensolaris then look at nexenta.  it is a
 functional opensolaris-debian/ubuntu hybrid with ZFS and it has dedup.  it
 does not currently share via iscsi so keep that in mind.  I believe it also
 uses a full samba package for samba shares while opensolaris can use the
 native CIFS server which is faster than samba.
 
 opensolaris can also join Active Directory. You also need to extend your AD
 schema.  If you do, you can give a privileged user UID and GID mappings in AD
 and then you can access the windows1/C$ shares.  I would create a backup
 user and add them to restricted groups in GP to be local administrators on
 the machines (but not domain admins).  You would probably want to figure out
 how to do a VSS snapshot and rsync that over instead of the active filesystem,
 because you will get tons of file locks if you don't.
 
 good luck

Thanks for your detailed reply. I'll have a look at nexenta; right now
www.nexenta.org seems to be down.

Ralf



Re: [BackupPC-users] experiences with very large pools?

2010-02-19 Thread Ralf Gross
Chris Robertson wrote:
 Chris Robertson wrote:
  Ralf Gross wrote:

  Gerald Brandt schrieb: 

  
  You may want to look at this thread 
  http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg17234.html
   
  

  I've seen this thread, but the pool sizes there are max. in the lower
  TB region.
 
  Ralf

  
 
  Not all of them...
 
  http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg17240.html
 
  Chris
 
 Sorry for the noise...  I was looking at the size of the full backups, 
 not the pool.  On a side note, that's some serious compression, 
 de-duplication, or a massive problem with the pool.

I stumbled across this too.

Ralf



Re: [BackupPC-users] experiences with very large pools?

2010-02-18 Thread Ralf Gross
Chris Robertson wrote:
 Ralf Gross wrote:
  Hi,
 
  I'm faced with the growing storage demands in my department. In the
  near future we will need several hundred TB. Mostly large files. ATM
  we already have 80 TB of data with gets backed up to tape.
 
  Providing the primary storage is not the big problem. My biggest
  concern is the backup of the data. One solution would be using a
  NetApp solution with snapshots. On the other hand is this a very
  expensive solution, the data will be written once, but then only read
  again. Short: it should be a cheap solution, but the data should be
  backed up. And it would be nice if we could abandon tape backups...
 
  My idea is to use some big RAID 6 arrays for the primary data, create
  LUNs in slices of max. 10 TB with XFS filesystems.
 
  Backuppc would be ideal for backup, because of the pool feature (we
  already use backuppc for a smaller amount of data).
 
  Has anyone experiences with backuppc and a pool size of 50 TB? I'm
  not sure how well this will work. I see that backuppc needs 45h to
  backup 3,2 TB of data right now, mostly small files.
 
  I don't like very large filesystems, but I don't see how this will
  scale with either multiple backuppc server and smaller filesystems
  (well, more than one server will be needed anyway, but I don't want to
  run 20 or more server...) or (if possible) with multiple backuppc
  instances on the same server, each with a own pool filesystem.
 
  So, anyone using backuppc in such an environment?

 
 In one way, and compared to some my backup set is pretty small (pool is 
 791.45GB).  In another dimension, I think it is one of the larger 
 (comprising 20874602 files).  The breadth of my pool leads to...
 
 -bash-3.2$ df -i /data/
 FilesystemInodes   IUsed   IFree IUse% Mounted on
 /dev/drbd0   1932728448 47240613 18854878353% /data
 
 ...nearly 50 million inodes used (so somewhere close to 30 million hard 
 links).  XFS holds up surprisingly well to this abuse*, but the strain 
 shows.  Traversing the whole pool takes three days.  Attempting to grow 
 my tail (the number of backups I keep) causes serious performance 
 degradation as I approach 55 million inodes.
 
 Just an anecdote to be aware of.

I think I'll have to look for a different solution; I just can't imagine
a pool with > 10 TB.

 
 * I have recently taken my DRBD mirror off-line and copied the BackupPC 
 directory structure to both XFS-without-DRBD and an EXT4 file system for 
 testing.  Performance of the XFS file system was not much different 
 with, or without DRBD (a fat fiber link helps there).  The first 
 traversal of the pool on the EXT4 partition is about 66% through the 
 pool traversal after about 96 hours.

nice ;)

Ralf



[BackupPC-users] experiences with very large pools?

2010-02-15 Thread Ralf Gross
Hi,

I'm faced with growing storage demands in my department. In the near
future we will need several hundred TB, mostly large files. At the
moment we already have 80 TB of data which gets backed up to tape.

Providing the primary storage is not the big problem. My biggest
concern is the backup of the data. One solution would be a NetApp
system with snapshots. On the other hand, that is a very expensive
solution, and the data will be written once and then only read again.
In short: it should be a cheap solution, but the data should be
backed up. And it would be nice if we could abandon tape backups...

My idea is to use some big RAID 6 arrays for the primary data and
create LUNs in slices of max. 10 TB with XFS filesystems.

Backuppc would be ideal for backup, because of the pool feature (we
already use backuppc for a smaller amount of data).

Does anyone have experience with backuppc and a pool size of 50 TB? I'm
not sure how well this will work. I see that backuppc needs 45 h to
back up 3.2 TB of data right now, mostly small files.

I don't like very large filesystems, but I don't see how this will
scale, either with multiple backuppc servers and smaller filesystems
(well, more than one server will be needed anyway, but I don't want to
run 20 or more servers...) or (if possible) with multiple backuppc
instances on the same server, each with its own pool filesystem.

So, anyone using backuppc in such an environment?

Ralf
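
A rough scaling estimate based on the numbers above (assuming the throughput
stays constant, which the long pool traversal makes unlikely):

  3.2 TB in 45 h  =  ~3,200,000 MB / 162,000 s  =  ~20 MB/s
  50 TB at ~20 MB/s  =  ~2,500,000 s  =  roughly 29 days per full backup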



Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Ralf Gross
Robin Lee Powell wrote:
 RedHat GFS *really* doesn't like directories with large numbers of
 files.  It's not a big fan of stat() calls, either.


Well, a network Cluster Filesystem is no fun to backup and might very
well be the bottleneck.

Ralf



Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-15 Thread Ralf Gross
Robin Lee Powell wrote:
 On Tue, Dec 15, 2009 at 02:33:06PM +0100, Holger Parplies wrote:
  Robin Lee Powell wrote on 2009-12-15 00:22:41 -0800:
   Oh, I agree; in an ideal world, it wouldn't be an issue.  I'm
   afraid I don't live there.  :)
  
  none of us do, but you're having problems. We aren't. 
 
 How many of you are backing up trees as large as I am?  So far,
 everyone who has commented on the matter has said it's not even
 close.


One job here is 6117136 files and 3072478.4 bytes. It takes 2881 min
(48h) for a full backup to complete. But I'm using tar over a GbE
connection and not rsync, so this is something completely different.
 
Ralf



[BackupPC-users] Backup of x00 TB of data

2009-11-26 Thread Ralf Gross
Hi,

I've been using backuppc and bacula together for a long time. The amount
of data to back up has been growing massively lately (mostly large video
files). At the moment I'm using backup to tape for the large raid arrays.
Next year I may have to back up 300-400 TB. Backuppc is used for a small
amount of data, 3-4 TB.

We are looking for some high end solutions for the primary storage
right now (NetApp etc), but this will be very expensive. Most of the
data will be written once and then not touched for a long time. Maybe
not read again at all. There is also no need for a HA solution. 

So I will also look into cheaper solutions with more raid boxes. I
don't see a major problem with this, except for backups.

Using snapshots with NetApp filers would be a very nice way to handle
backups of these large amounts of data (only the delta is stored). Tapes
are more complicated to handle than backup to disk.

Does anyone have experience with using backuppc and these massive
amounts of data? I can't imagine a pool with x00 TB or using dozens of
backuppc instances with smaller pools.

Any thoughts? This might be a bit off topic, but if someone has a clever
idea I would be interested to hear it!

Ralf



Re: [BackupPC-users] BackupPC and Barracudaware

2009-09-03 Thread Ralf Gross
Jim Leonard wrote:
 Tino Schwarze wrote:
  I'm using bacula to backup the generated tar files and have them deleted
  afterwards.
 
 This is off-topic, I apologize, but if you are using Bacula, then why do 
 you have a BackupPC installation?

I also use bacula and backuppc to back up some TB of data. I once lost
my 4 TB backuppc pool because of an error which the RAID controller
didn't detect until the reiserfs was so badly damaged that not much was
left. For important data I always use two different systems.

And I don't think it's off-topic to just mention how someone stores
their backuppc tar files with bacula.

Ralf



Re: [BackupPC-users] Pool is 0.00GB comprising 0 files and 0 directories....

2009-05-28 Thread Ralf Gross
Craig Barratt wrote:
 Ralf writes:
 
  thanks, this seems to solve the problem:
 
 Sounds like you have the IO::Dirent + xfs problem.  It's fixed
 in 3.2.0 beta0.

Hm, BackupPC_Nightly is working again. But the status page still shows
0.00GB as pool size (after applying Tino's patch).

# Pool is 0.00GB comprising 0 files and 0 directories (as of 5/27
# 14:00), 


Ralf
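
For anyone who wants to check whether their pool filesystem triggers the
IO::Dirent issue, a small diagnostic sketch (it assumes the IO::Dirent module
is installed and uses the pool path from this setup; on affected XFS systems
the reported type tends to be 0/unknown even for entries that are really
directories):

#!/usr/bin/perl
# print what IO::Dirent reports for the top level of the compressed pool
use strict;
use warnings;
use IO::Dirent;

my $dir = '/data/BackupPC/cpool';
opendir(DIR, $dir) or die "can't open $dir: $!";
for my $ent (readdirent(DIR)) {      # list of hashes with name/type/inode
    next if $ent->{name} =~ /^\.\.?$/;
    printf "%-6s type=%s inode=%s\n",
        $ent->{name}, $ent->{type}, $ent->{inode};
}
closedir(DIR);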



Re: [BackupPC-users] Pool is 0.00GB comprising 0 files and 0 directories....

2009-05-28 Thread Ralf Gross
Ralf Gross wrote:
 Craig Barratt wrote:
  Ralf writes:
  
   thanks, this seems to solve the problem:
  
  Sounds like you have the IO::Dirent + xfs problem.  It's fixed
  in 3.2.0 beta0.
 
 Hm, BackupPC_Nightly is working again. But the status page still shows
 0.00GB as pool size (after applying Tino's patch).
 
 # Pool is 0.00GB comprising 0 files and 0 directories (as of 5/27
 # 14:00), 


Ok, the next BackupPC_Nightly run fixed this.

Pool is 2409.91GB comprising 4681786 files and 4369 directories (as of
5/28 14:18)

Ralf



[BackupPC-users] Pool is 0.00GB comprising 0 files and 0 directories....

2009-05-27 Thread Ralf Gross
Hi,

I've been using BackupPC for many years without hassle. But something
seems to be broken now.

BackupPC 3.1 (source)
Debian Etch
xfs fs


Recently the pool was running full and I added some additional disks to the
RAID volume. BackupPC had already been showing the pool size as 0.00GB, but
I didn't realize that there might be a problem.


#  Other info:
* 0 pending backup requests from last scheduled wakeup,
* 0 pending user backup requests,
* 0 pending command requests,
* Pool is 0.00GB comprising 0 files and 0 directories (as of 5/18 14:00),
* Pool hashing gives 0 repeated files with longest chain 0,
* Nightly cleanup removed 0 files of size 0.00GB (around 5/18 14:00),
* Pool file system was recently at 60% (5/27 08:56), today's max is 100% 
(5/18 14:00) and yesterday's max was 100%. 


$ grep TopDir /etc/BackupPC/config.pl
 $Conf{TopDir} = '/data/BackupPC';


$ df -h | grep data
 /dev/sdc  6,9T  4,1T  2,8T  60% /data/BackupPC


The pool and the pc directory are on the same filesystem, so that shouldn't be
a problem.

$ ls -l /data/BackupPC/
total 16
drwxrwxr-x 18 backuppc backuppc   134 2009-05-27 08:56 cpool
drwxrwxr-x 24 backuppc backuppc 16384 2009-05-27 08:56 pc
drwxr-x---  2 backuppc backuppc 6 2009-05-18 13:12 trash

/data/BackupPC/ has been the TopDir since I started with backuppc, and I'm
sure pooling/linking did work at some point.

logfile:

2009-05-18 14:00:02 Pool nightly clean removed 0 files of size 0.00GB
2009-05-18 14:00:02 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max 
links), 0 directories
2009-05-18 14:00:02 Cpool nightly clean removed 0 files of size 0.00GB
2009-05-18 14:00:02 Cpool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max 
links), 4369 directories



BackupPC_nightly finished way too fast (5 sec) and does nothing.

$ /usr/local/BackupPC/bin/BackupPC_nightly 0 255
BackupPC_stats 0 = pool,0,0,0,0,0,0,0,0,0,0,
BackupPC_stats 1 = pool,0,0,0,0,0,0,0,0,0,0,
BackupPC_stats 2 = pool,0,0,0,0,0,0,0,0,0,0,
BackupPC_stats 3 = pool,0,0,0,0,0,0,0,0,0,0,
[...]
BackupPC_stats 253 = cpool,0,17,0,0,0,0,0,0,0,0,
BackupPC_stats 254 = cpool,0,17,0,0,0,0,0,0,0,0,
BackupPC_stats 255 = cpool,0,17,0,0,0,0,0,0,0,0,
BackupPC_nightly lock_off


BackupPC: Host Summary

Hosts with good Backups

There are 22 hosts that have been backed up, for a total of:

* 260 full backups of total size 31110.73GB (prior to pooling and 
compression),
* 183 incr backups of total size 409.15GB (prior to pooling and 
compression). 



I've no idea what is going on. The clients are backed up and I can access the
backups and restore them.


Any idea what to check next?

Ralf



Re: [BackupPC-users] Pool is 0.00GB comprising 0 files and 0 directories....

2009-05-27 Thread Ralf Gross
Bernhard Ott wrote:
  I use BackupPC since many years without hassle. But something seems to
  be broken now.
  
  BackupPC 3.1 (source)
  Debian Etch
  xfs fs
  
 
 Hi Ralf,
 look for the thread "no cpool info shown on web interface" (2008-04) in
 the archives; Tino Schwarze found a solution for an xfs-related issue:

thanks, this seems to solve the problem:

[...]
BackupPC_stats 254 = pool,0,0,0,0,0,0,0,0,0,0,
BackupPC_stats 255 = pool,0,0,0,0,0,0,0,0,0,0,
BackupPC_stats 0 = cpool,18221,18,5134732,829336,5368848,5799,3,4,3,7063,333254
BackupPC_stats 1 = 
cpool,18230,17,7346812,878464,2342356,5666,12,5,2,31999,836449
BackupPC_stats 2 = 
cpool,18249,17,7228044,2254536,2028604,5658,3,2,2,5199,1158229
[...]

Slowly I'm getting back some free space too.

Ralf



[BackupPC-users] backup the backuppc pool with bacula

2009-05-19 Thread Ralf Gross
Hi,

there is a regular discussion about how to backup/move/copy the backuppc
pool. Has anyone tried to back up the pool with bacula?

I need to expand the RAID volume where the pool is stored (Areca RAID
controller). Doing this without a backup is a bit frightening (I didn't
use LVM for the filesystem).

I also use bacula to backup parts of the data. Quoting the manual:

hardlinks=yes|no  When enabled (default), this directive will cause hard
 links to be backed up. However, the File daemon keeps track of hard
 linked files and will backup the data only once. The process of keeping
 track of the hard links can be quite expensive if you have lots of them
  (tens of thousands or more). This doesn't occur on normal Unix systems,
  but if you use a program like BackupPC, it can create hundreds
 of thousands, or even millions of hard links. Backups become very
 long and the File daemon will consume a lot of CPU power checking
 hard links. In such a case, set hardlinks=no and hard links will not
 be backed up. Note, using this option will most likely backup more
 data and on a restore the file system will not be restored identically
 to the original.


Has anyone tried to back up the pool with bacula?

Ralf

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables 
unlimited royalty-free distribution of the report engine 
for externally facing server and web deployment. 
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC vs. Bacula

2008-07-20 Thread Ralf Gross
Nils Breunese (Lemonbit) schrieb:
 Arch Willingham wrote:
 
  I have been looking at (and installed) both packages. I have tried  
  to find a comparison of the advantages and disadvantages of each as  
  compared to the other but found nothing very informative. Any ideas- 
  thoughts from anyone out there?
 
 - BackupPC is more geared towards backing up to hard drives, Bacula is  
 more geared towards backing up to tape.

You can use tape or disk volumes with bacula. I find it difficult to
use tapes with backuppc for regular backup.


 - Bacula uses a Bacula agent on each host you backup, BackupPC uses  
 stock rsync(d)/tar/smbclient on the hosts you backup.

ACK

 - BackupPC has a nice web interface that makes it very easy to restore  
 files.

There are some web-gui projects for bacula (maybe too many) and bat
(qt app). But they are add-ons and the integration is not as 
easy as with backuppc.


IMHO the biggest difference is the pooling feature backuppc offers.
There is nothing like this in bacula at the moment.

Ralf

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK  win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] problems after moving pool (trashClean, BackupPC_link)

2007-11-02 Thread Ralf Gross
Ralf Gross schrieb:
 
 I had to change the disks in our backuppc server to expand /var/lib/backuppc.
 So I umounted it, ran fsck.reiserfs, copied the partition with dd and netcat 
 to
 an other system as file, added disks, copied the disk image back with
 dd/nc, ran resize_reiserfs, fsck.reiserfs. Everything was ok, no
 errors. Then I mounted the new fs and started backuppc (at ~23.30 last night).
 
 I already did this a couple of times, but this time I see errors in
 the log files.
 
 2007-11-01 01:02:59  trashClean : Can't read 
 /var/lib/backuppc/trash/1193875302_21827_0/f%2f/fusr/fshare/fdoc/fpackages/fprocmail:
  Datei oder Verzeichnis nicht gefunden
 2007-11-01 01:03:01  trashClean : Can't read 
 /var/lib/backuppc/trash/1193875302_21827_0/f%2f/fusr/fshare/flocale/fis/fLC_MESSAGES:
  Datei oder Verzeichnis nicht gefunden
 2007-11-01 01:03:01  trashClean : Can't read 
 /var/lib/backuppc/trash/1193875302_21827_0/f%2f/fusr/fshare/flocale/flv: No 
 such file or directory
 2007-11-01 01:03:01  trashClean : Can't read 
 /var/lib/backuppc/trash/1193875302_21827_0/f%2f/fusr/fshare/flocale/fnb_NO/fLC_MESSAGES:
  No such file or directory
 2007-11-01 01:03:51 Finished incr backup on serverxx07
 2007-11-01 01:03:54 Started full backup on serverxxtest (pid=22058, share=/)
 2007-11-01 01:03:55 Finished incr backup on serverxx06
 2007-11-01 01:04:08 Started incr backup on serverxx02 (pid=22071, share=/)
 2007-11-01 01:04:19 Finished incr backup on serverxx02
 [...]
 2007-11-01 01:42:10 BackupPC_link got error -3 when calling 
 MakeFileLink(/var/lib/backuppc/pc/serverxx04/617/f%2f/flib/fmodules/f2.6.5-7.244-smp/fkernel/fdrivers/fisdn/attrib,
  45968a40280a17e322074be7b416b174, 1)
 2007-11-01 01:42:10 BackupPC_link got error -3 when calling 
 MakeFileLink(/var/lib/backuppc/pc/serverxx04/617/f%2f/flib/fmodu
 [...]
 
 
 It seems that 2 instances of BackupPC and BackupPC_trashClean were running. I
 stopped backuppc with the init script, but 2 processes were still running. I
 then killed them; now only one instance each of BackupPC and BackupPC_trashClean
 is active.
 
 
 I'm not sure how serious these error messages are. Is it something I can 
 forget
 now, or should I start over and create the partition again? The dd image is
 still on the other server.

I haven't found anything unusual in last night's log, so I guess this
temporary problem didn't confuse backuppc too much.

Ralf

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] problems after moving pool (trashClean, BackupPC_link)

2007-11-01 Thread Ralf Gross

Hi,

I had to change the disks in our backuppc server to expand /var/lib/backuppc.
So I unmounted it, ran fsck.reiserfs, copied the partition with dd and netcat to
another system as a file, added disks, copied the disk image back with
dd/nc, and ran resize_reiserfs and fsck.reiserfs again. Everything was ok, no
errors. Then I mounted the new fs and started backuppc (at ~23:30 last night).
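
(For reference, the dd/netcat transfer was roughly the following; host name,
port and device are placeholders, and depending on the netcat variant the -p
may not be needed:)

# on the receiving machine:
nc -l -p 9000 > /data/backuppc-part.img
# on the backuppc server, with /var/lib/backuppc unmounted:
dd if=/dev/sdb1 bs=1M | nc otherhost 9000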

I already did this a couple of times, but this time I see errors in
the log files.

2007-11-01 01:00:01 Running BackupPC_nightly -m 0 127 (pid=21816)
2007-11-01 01:00:01 Running BackupPC_nightly 128 255 (pid=21817)
2007-11-01 01:00:01 Running 2 BackupPC_nightly jobs from 0..15 (out of 0..15)
2007-11-01 01:00:01 Next wakeup is 2007-11-01 02:00:00
2007-11-01 01:00:01 Running BackupPC_nightly -m 0 127 (pid=21822)
2007-11-01 01:00:01 Running BackupPC_nightly 128 255 (pid=21823)
2007-11-01 01:00:01 Next wakeup is 2007-11-01 02:00:00
2007-11-01 01:00:03 Started incr backup on serverxx03 (pid=21820, share=/)
2007-11-01 01:00:03 Started incr backup on serverxx04 (pid=21821, share=/)
2007-11-01 01:00:03 Started incr backup on serverxx04 (pid=21826, share=/)
2007-11-01 01:00:03 Started full backup on serverxxtest (pid=21825, share=/)
2007-11-01 01:00:03 Started incr backup on serverxx02 (pid=21819, share=/)
2007-11-01 01:00:03 Started incr backup on serverxxsut0002 (pid=21827, share=/)
2007-11-01 01:00:03 Started full backup on serverxx01 (pid=21818, share=/)
2007-11-01 01:00:05 Started full backup on serverxxnab0002 (pid=21859, share=/)
2007-11-01 01:01:42 Finished incr backup on serverxxsut0002
2007-11-01 01:01:45 Started incr backup on serverxx06 (pid=21944, share=/)
2007-11-01 01:02:07 Finished incr backup on serverxx04
2007-11-01 01:02:07 Finished incr backup on serverxx04
2007-11-01 01:02:11 Started full backup on serverxxnab0001 (pid=21959, share=/)
2007-11-01 01:02:11 Started incr backup on serverxx05 (pid=21960, share=/)
2007-11-01 01:02:14 Finished incr backup on serverxx03
2007-11-01 01:02:24 Backup failed on serverxx06 ()
2007-11-01 01:02:29 Started incr backup on serverxx07 (pid=21998, share=/)
2007-11-01 01:02:59  trashClean : Can't read 
/var/lib/backuppc/trash/1193875302_21827_0/f%2f/fusr/fsrc/flinux-2.6.5-7.244/fdrivers/fmedia/fdvb/fttpci:
 No such file or directory
2007-11-01 01:02:59  trashClean : Can't read 
/var/lib/backuppc/trash/1193875302_21827_0/f%2f/fusr/fshare/fdoc/fpackages/fprocmail:
 Datei oder Verzeichnis nicht gefunden
2007-11-01 01:03:01  trashClean : Can't read 
/var/lib/backuppc/trash/1193875302_21827_0/f%2f/fusr/fshare/flocale/fis/fLC_MESSAGES:
 Datei oder Verzeichnis nicht gefunden
2007-11-01 01:03:01  trashClean : Can't read 
/var/lib/backuppc/trash/1193875302_21827_0/f%2f/fusr/fshare/flocale/flv: No 
such file or directory
2007-11-01 01:03:01  trashClean : Can't read 
/var/lib/backuppc/trash/1193875302_21827_0/f%2f/fusr/fshare/flocale/fnb_NO/fLC_MESSAGES:
 No such file or directory
2007-11-01 01:03:51 Finished incr backup on serverxx07
2007-11-01 01:03:54 Started full backup on serverxxtest (pid=22058, share=/)
2007-11-01 01:03:55 Finished incr backup on serverxx06
2007-11-01 01:04:08 Started incr backup on serverxx02 (pid=22071, share=/)
2007-11-01 01:04:19 Finished incr backup on serverxx02
[...]
[...]
2007-11-01 01:40:29  trashClean : Can't read 
/var/lib/backuppc/trash/1193877299_23099_0/f%2f/fusr/fi486-suse-linux/finclude: 
No such file or directory
2007-11-01 01:42:08 Finished  admin  (BackupPC_nightly -m 0 127)
2007-11-01 01:42:08 Pool nightly clean removed 0 files of size 0.00GB
2007-11-01 01:42:08 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max 
links), 1 directories
2007-11-01 01:42:08 Cpool nightly clean removed 1586 files of size 0.47GB
2007-11-01 01:42:08 Cpool is 28.12GB, 322942 files (226 repeated, 15 max chain, 
21365 max links), 4369 directories
2007-11-01 01:42:08 Running BackupPC_link serverxxsut0002 (pid=26565)
2007-11-01 01:42:08 Finished  admin  (BackupPC_nightly -m 0 127)
2007-11-01 01:42:08 Pool nightly clean removed 0 files of size 0.00GB
2007-11-01 01:42:08 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max 
links), 1 directories
2007-11-01 01:42:08 Cpool nightly clean removed 1586 files of size 0.47GB
2007-11-01 01:42:08 Cpool is 28.12GB, 322942 files (226 repeated, 15 max chain, 
21365 max links), 4369 directories
2007-11-01 01:42:08 Running BackupPC_link serverxx04 (pid=26567)
2007-11-01 01:42:08 Finished serverxxsut0002 (BackupPC_link serverxxsut0002)
2007-11-01 01:42:08 Running BackupPC_link serverxx04 (pid=26569)
2007-11-01 01:42:09 BackupPC_link got error -3 when calling 
MakeFileLink(/var/lib/backuppc/pc/serverxx04/617/f%2f/flib/fmodules/f2.6.5-7.244-smp/fkernel/fsound/fcore/fseq/attrib,
 ba9f682ce6f7b26d3f6e134febd5ce5e, 1)
2007-11-01 01:42:09 BackupPC_link got error -3 when calling 
MakeFileLink(/var/lib/backuppc/pc/serverxx04/617/f%2f/flib/fmodules/f2.6.5-7.244-smp/fkernel/fsound/fcore/attrib,
 aa330514b453e5ce74ffe5ff269fad7d, 1)
2007-11-01 01:42:09 

Re: [BackupPC-users] /var/lib/backuppc replace HDD

2007-07-09 Thread Ralf Gross
Krsnendu dasa schrieb:
 I have a dedicated disk for BackupPC; it is using LVM. Can I use dd to
 clone this to a newer hard drive?

I've done this a few weeks ago and it worked.
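
A rough sketch of what such a clone looks like (device names are examples;
stop BackupPC and unmount the pool first, and keep only one of the two disks
attached afterwards, because the clone carries the same LVM PV/VG UUIDs):

/etc/init.d/backuppc stop
umount /var/lib/backuppc
dd if=/dev/sdb of=/dev/sdc bs=1M conv=noerror,sync
# afterwards grow the PV/LV/filesystem if the new disk is bigger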

Ralf

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] How do I configure BackupPC to find repeated files?

2007-07-04 Thread Ralf Gross
Peter Carlsson schrieb:
 I know there are identical files on the three hosts although they are
 not located in identical directory tree. How do I configure BackupPC
 to find these repeated/identical files?

BackupPC finds these files automatically during backup. Only one copy
exists in the pool.


http://backuppc.sourceforge.net/faq/BackupPC.html
Identical Files

BackupPC pools identical files using hardlinks. By ``identical
files'' we mean files with identical contents, not necessarily the
same permissions, ownership or modification time. Two files might
have different permissions, ownership, or modification time but
will still be pooled whenever the contents are identical. This is
possible since BackupPC stores the file meta-data (permissions,
ownership, and modification time) separately from the file
contents.

For more details see 'BackupPC Design' on the same page.
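
You can see the pooling at work on the server: pooled files have a hard link
count greater than 1 (one link in the pool plus one per backup that contains
them). A quick check, assuming the default pool location:

find /var/lib/backuppc/cpool -type f -links +1 | head -3 | xargs stat -c '%h links  %n'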

Ralf


-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] start backuppc_nightly manual

2007-07-03 Thread Ralf Gross
Stefan Degen schrieb:
 
 is it possible to start backuppc_nightly by hand?

As user backuppc:

/usr/share/backuppc/bin/BackupPC_nightly 0 255

Ralf

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] move datadir to a CIFS mounted windows share

2007-06-18 Thread Ralf Gross
CORNU Frédéric schrieb:
   As stated in the README.Debian, I tried to move my DATADIR to
   a remote location: a Windows machine which sees its content
   backed up daily by a corporate backup system.  I created a
   backuppc folder on the samba share and made a symbolic link in
   /var/lib to that folder.  Samba share is correctly mounted as
   backuppc user, so all files are owned by the linux backuppc
   user.  The issue : BackupPC server cannot start. I get this
   line in logs -- 2007-06-18 12:05:00 unix bind() failed:
   Operation not permitted.
   
   Has anyone managed to do this already ?

I don't think this will work, because backuppc needs to be able to
create hardlinks. I guess this will not work on a mounted windows
share?

Ralf

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to move backuppc data from lvm to bigger disks/lvm?

2007-06-15 Thread Ralf Gross
Adam Goryachev schrieb:
 Ralf Gross wrote:
  Hi,
 
  I want to upgrade the backuppc data space of one of my backuppc
  server. /var/lib/backuppc (reiserfs) is at the moment a plain lvm
  (1TB, 4x250GB, 740GB used) and I want to update to raid5/lvm (1,5TB,
  4x500GB).
 
  I did upgrade an other server which had no lvm volume a feew weeks
  ago. This was easy, I just copied the reiserfs partition to the new
  system with dd an netcat and resized/grow the partition afterwards.
 
  What is the best way to do this with lvm? I have attached 2 external
  USB disks (500GB+ 300GB = 800GB with lvm) as a temp. storage for the
  old data, because the 4 on-board SATA ports are all used by the old
  backuppc data.
 
  I'm not sure if I can just dd the old lvm volume to one big file on the
  USB disk, replace the disks, dd the file back to the lvm volume and
  resize the reiserfs fs? 
 
  dd if=/dev/mapper/VolGroup00-LogVol00 bs=8192 of=backuppc.dump
 
  ..replace disks, create new lvm volume...
 
  dd if=backuppc.dump of=/dev/mapper/bigger-lvm-volume bs=8192
 
  I think the dd data includes information about the lvm volume/logical
  groups. I guess A lvm snapshot will not help much.

 I think if you do that, you will have problems.. I would do this:
 stop backuppc and unmount the filesystem (or mount readonly)
 resize the reiserfs filesystem to < 800G

I already tried this, but resize_reiserfs gives me a bitmap error.
I realized that my first idea with dd and the backuppc.dump file will
need an additional gzip step to work, because the destination fs is
smaller than the source.

 resize the LVM partition to 800G
 dd the LVM partition containing the reiserfs filesystem to your spare
 LVM partition
 replace the 4 internal HDD's
 create the new LVM / RAID/etc setup on the new drives
 dd the USB LVM partition onto the internal LVM partition you have configured
 resize the reiserfs filesystem to fill the new LVM partition size

Because resize_reiserfs is not working, this is not an option :(
 
 I don't promise it will work, but if it doesn't, you do at least still
 have your original drives with all the data,
 The problem I see in your suggestion is that you are copying a 1TB
 filesystem/partition into a 800GB one therefore if you have stored data
 at the end of the drive, then it will be lost, the above should solve
 that problem.

At the moment I'm transferring the data with cp, but in the last 12
hours only 50% of the data (~380GB) has been copied. And this is only the
cpool directory. But this is what I expected with cp.

I thought about an other fancy way...

* remove the existing volg/lv data on the usb disks
* use vgextend to expand the existing backuppc volg with the 2 usb
  disks
* pvmove the data from 3 of the 4 old disks to the usb disks
* remove the 3 old disks with vgreduce
* replace 3 of the 4 disks with the new ones
* create a raid 5 with 3 new disk (3x 500GB = 1TB)
* create a new pv on the raid
* expand the backuppc volg with vgextend
* pvmove the last old disk and the usb disks to the raid pv
* remove the last old disk + usb disks with vgreduce
* replace the last old disk with the new one
* grow the raid 5 (this is possible since kernel 2.6.17 or so...)
* pvresize the raid 5 pv


Sounds like a lot of fun ;)
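
In commands, the plan would look roughly like this (PV and md device names
are examples):

pvcreate /dev/sdg1 /dev/sdh1              # the two USB disks
vgextend VolGroup00 /dev/sdg1 /dev/sdh1
pvmove /dev/sdb1                          # drain one old disk at a time
vgreduce VolGroup00 /dev/sdb1
# ... replace the old disks, build the raid 5, then:
pvcreate /dev/md0
vgextend VolGroup00 /dev/md0
pvmove /dev/sdg1                          # drain the USB disks (and the last old disk) onto the raid
pvmove /dev/sdh1
vgreduce VolGroup00 /dev/sdg1 /dev/sdh1
# after growing the raid later:
pvresize /dev/md0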

Ralf

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to move backuppc data from lvm to bigger disks/lvm?

2007-06-15 Thread Ralf Gross
Holger Parplies schrieb:
 Hi,
 
 Adam Goryachev wrote on 15.06.2007 at 11:28:13 [Re: [BackupPC-users] How to 
 move backuppc data from lvm to bigger disks/lvm?]:
  Ralf Gross wrote:
   [...]
   dd if=/dev/mapper/VolGroup00-LogVol00 bs=8192 of=backuppc.dump
   [...]
   I think the dd data includes information about the lvm volume/logical
   groups.
  [...]
  The problem I see in your suggestion is that you are copying a 1TB
  filesystem/partition into a 800GB one therefore if you have stored data
  at the end of the drive, then it will be lost, the above should solve
  that problem.
 
 just to make it clearer:
 the device file name /dev/mapper/VolGroup00-LogVol00 means you have not been
 very imaginative when choosing VG and LV names, but aside from that, it

That's true, but with only one volg/lv I didn't care much about it.

 represents a plain block device. The filesystem is not and should not be
 aware of how the underlying block device is implemented. Reading
 /dev/mapper/VolGroup00-LogVol00 gives you the concatenation of the raw
 blocks it consists of in ascending order, just like reading /dev/sda1,
 /dev/sda, /dev/fd0 or /dev/sr0 does (/dev/sr0 is likely not writable though)
 - nothing more and nothing less. Meta-information about VG and LVs is
 stored in the PVs outside the data allocated to any LV.

Ok, then 

dd if=/dev/VolGroup00/LogVol00 bs=8192 | gzip -2 -c > backuppc.dump.gz
gzip -dc backuppc.dump.gz | dd of=/dev/fancy-vg-name/fancy-lv-name

should work. I created a file with
'dd if=/dev/zero of=/var/lib/backuppc/null';
this should help compress the unused space better.
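
Spelled out (run on the mounted source filesystem, before taking the image):

dd if=/dev/zero of=/var/lib/backuppc/null bs=1M   # runs until the fs is full, then stops with an error
sync
rm /var/lib/backuppc/null                         # free the blocks again; they stay zeroed on disk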
 
 I agree that you will need at least as much space as your LV takes up if you
 want to copy it. I would add that copying into a file will probably give you
 more trouble than copying to a block device (provided it is large enough).
 There's simply one layer less of arbitrary file size limits you would be
 dealing with.

Yeah, last time I did this on another system I was able to use
resize_reiserfs and it worked very well. I've no idea why
resize_reiserfs is now giving me this bitmap error.

Ralf

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] How to move backuppc data from lvm to bigger disks/lvm?

2007-06-14 Thread Ralf Gross
Hi,

I want to upgrade the backuppc data space of one of my backuppc
servers. /var/lib/backuppc (reiserfs) is at the moment a plain lvm volume
(1TB, 4x250GB, 740GB used) and I want to move to raid5/lvm (1.5TB,
4x500GB).

I upgraded another server which had no lvm volume a few weeks
ago. That was easy, I just copied the reiserfs partition to the new
system with dd and netcat and resized/grew the partition afterwards.

What is the best way to do this with lvm? I have attached 2 external
USB disks (500GB+ 300GB = 800GB with lvm) as a temp. storage for the
old data, because the 4 on-board SATA ports are all used by the old
backuppc data.

I'm not sure if I can just dd the old lvm volume to one big file on the
USB disk, replace the disks, dd the file back to the lvm volume and
resize the reiserfs fs? 

dd if=/dev/mapper/VolGroup00-LogVol00 bs=8192 of=backuppc.dump

..replace disks, create new lvm volume...

dd if=backuppc.dump of=/dev/mapper/bigger-lvm-volume bs=8192

I think the dd data includes information about the lvm volume/logical
groups. I guess an lvm snapshot will not help much.

Any ideas?

Ralf

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] Windows backup and desaster recovery with BackupPC

2007-06-09 Thread Ralf Gross
Hi,

at the moment I'm taking backups of linux clients only. We have a
couple of Windows servers that use ghost (or similar software) to
dump a backup image to a linux share, which then gets backed up by
backuppc.

This is working fine, but it is a waste of space. We keep 2 images (2
weeks), which results in 400GB for about 10 servers. Most of the servers
are largely identical.

The people that are responsible for these servers want to be able to
recover the whole machine in as little time as possible. I know that I
can use rsyncd or smb for backing up windows machines. But what would
be a good solution to restore a server from scratch without reinstalling
the OS first? Is there a way to achieve this with backuppc and some
add-ons? I'd love to use the pooling for the windows clients too.
BTW: I've found some posts about using VSS with rsync.

Ralf

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] ACLs workaround

2007-05-17 Thread Ralf Gross
Daniel Haas schrieb:
 
 But now I have the problem that we need fine-grained rights management
 on our samba server. So I have to implement ACLs.
 As I read on the list, ACLs are not supported by backuppc. But I
 read that star is working with ACLs and that the rsync command normally
 works with ACLs, too.

ACL and xattr support is coming with rsync 3.0, which is still in CVS.
There are also patches for 2.6.x to support ACLs. I used star to copy
a large amount of data with ACLs between 2 servers. That worked
quite well. But star and patched rsync versions are not supported by
backuppc.
 
 So is there a wokaround to handle this problem?

Back up your ACLs and xattrs to a text file (getfacl, getfattr).
That's not a very practical solution, because you might miss some
ACLs if you run a cronjob for this only a few times a day.
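
A minimal sketch of such a cron job (the share path is an example):

#!/bin/sh
# dump ACLs and extended attributes so they end up inside the next backup
cd /srv/samba-share || exit 1
getfacl -R . > .acls.dump
getfattr -R -d . > .xattrs.dump 2>/dev/null
# restore later with: setfacl --restore=.acls.dump  and  setfattr --restore=.xattrs.dump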

I would also be interested in a backuppc solution for this. At the
moment this is what I miss most (by far) in backuppc.


Ralf

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup from a debian etch

2007-02-28 Thread Ralf Gross
Bruno Sampayo schrieb:
 
 I did the tar reinstall with version 1.4, but I still have the
 problem; the latest status that I got from backuppc is:
 
 Contents of file /var/lib/backuppc/pc/portal/XferLOG.0.z, modified 
 2007-02-28 13:35:58 (Extracting only Errors)

Ok, but you are using rsync as Xfer method, not tar.
 
 Running: /usr/bin/ssh -q -x -l root portal /usr/bin/rsync --server 
 --sender --numeric-ids --perms --owner --group --devices --links --times 
 --block-size=2048 --recursive --one-file-system --exclude=/lost+found 
 --exclude=/tmp --exclude=/chroots/smc.samurai.com.br/var/run 
 --exclude=/chroots/smc.samurai.com.br/dev --exclude=/var/run 
 --exclude=/sys --exclude=/opt/Plone-2.5/zeocluster/server/etc/zeo.zdsock 
 --exclude=/opt/Plone-2.5/zeocluster/client2/var/ 
 --exclude=/opt/Plone-2.5/zeocluster/client1/var/ --exclude=/dev 
 --exclude=/proc --exclude=/opt/zope2.8.8/var/zopectlsock 
 --exclude=/vmlinuz --exclude=/var/lib/backuppc/log/ --ignore-times . /
 Can't open /var/lib/backuppc/pc/portal/new//f%2f/ for empty output
 [ skipped 1 lines ]

Try changing rsync option --devices to -D. Maybe you have to update
File::RsyncP too.


Ralf

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] SSH config problemcausingfileListReceivefailed?

2007-02-15 Thread Ralf Gross
Arch Willingham schrieb:
 The backuppc server is machine2. I.E. it is trying to backup itself. 
 
 I ran that first command you gave me and it gave a weird message
 The authenticity of machine2 can't be established. RSA key
 fingerprint is blah, blah, blah. Are you sure you want to continue
 connecting? Please type yes or no...I typed yes, gave it root's
 password and tried running BackupPC again. 
 
 This time, instead of the previous fileListReceive failed errors I
 get Unable to read 4 bytes errors and this is in the error log:
 
 Running: /usr/bin/ssh -q -x -l root machine2 /usr/bin/rsync --server
 --sender --numeric-ids --perms --owner --group -D --links
 --hard-links --times --block-size=2048 --recursive --ignore-times .
 / Xfer PIDs are now 7245 Read EOF: Connection reset by peer Tried
 again: got 0 bytes Done: 0 files, 0 bytes Got fatal error during
 xfer (Unable to read 4 bytes) Backup aborted (Unable to read 4
 bytes)
 
 
 BTW...I also ran that second command you gave me and I copied the
 output from it to below (holy macaroniI have no idea what all
 that stuff means!).
 
 
 [EMAIL PROTECTED] ~]# /usr/bin/ssh -vvv -x -l root machine2

You should run this command on the backuppc side as user backuppc.
Either su to the backuppc user and execute the command there, or try
su backuppc -c '/usr/bin/ssh -vvv -x -l root machine2'.
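
In other words, something like this (the -s is needed if the backuppc user
has no login shell):

su - backuppc -s /bin/bash
/usr/bin/ssh -l root machine2 whoami    # should print "root" without asking for a password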

Ralf

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup from a debian etch

2007-02-14 Thread Ralf Gross
Bruno Sampayo schrieb:
 
 I tried to make backup with backuppc on debian sarge, and the 
 server client is a Debian etch kernel: 2.6.8-2-386.
 When I start the full backuppc for this machine, I got a error with 
 the follow message:
 
 
 [EMAIL PROTECTED]:~$ /usr/bin/perl /usr/share/backuppc/bin/BackupPC_dump 
 -f portal
 started full dump, share=/
 xferPids 7258
 xferPids 7258,7259
 
 
 dump failed: aborted by signal=ALRM
 link portal

Are you using tar as the transfer method? There has been a change in tar
1.16 that reports a failure if a file changed during backup. Etch is
using tar 1.16. If this is your problem, try tar 1.15.1 or a recent
backuppc version where this problem is fixed.

Ralf

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] SSH config problem causing fileListReceivefailed?

2007-02-12 Thread Ralf Gross
Arch Willingham schrieb:
 Beats me, I don't even know what it does. Its the way it is set in
 the default config.pl file. I just copied the default.pl file to
 machine2.pl file and ran with it.
 
 Is there something I should change?

No, the -x is okay. I didn't notice that it's a lowercase -x, my
fault.

Ralf

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] SSH config problem causingfileListReceivefailed?

2007-02-12 Thread Ralf Gross
Arch Willingham schrieb:
 Wooohh...I hate to be a dummy but that's the sound of
 this all going way over my head :) !!! If the -x is ok, what do I
 need to change to have BackupPC backup itself?

I've no idea. -x disables X11 forwarding, thus I don't know why it's
complaining about xlib.

What happens if you ssh from the backuppc server to machine2 as the
backuppc user? Adding -vvv will give you more verbose output;
removing -q will show you more warning messages.

/usr/bin/ssh -q -x -l root machine2 

or 

/usr/bin/ssh -vvv -x -l root machine2

Ralf

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] RHEL4 fresh load - child exited prematurely

2007-01-31 Thread Ralf Gross
Les Mikesell said:
 I've used **3** different computers with wildly different hardware.  On
 the host side, I've used **4** different computers (and most of them are
 high-end server hardware) with wildly different hardware.  It's not
 related to a specific brand or type of hardware.

 But lots of other people including myself run rsync without errors so it
 has to be something unique to your situation.  That 'no route to host'
 message isn't coming from rsync - it is a system error that it is
 reporting. Maybe cables from a different vendor would help.

This is maybe a bit off topic, but I've recently set up 3 new servers. All
have Intel e1000 NICs, and all had different network errors.

* e1000 with 82573E chipset: wrong EEPROM value
  - updated EEPROM with ethtool
* all e1000 NICs: TCP Segmentation Offload not working correctly
  - disabled with ethtool
* all e1000 NICs: default vm.min_free_kbytes value too small
  - increased vm.min_free_kbytes to 16384
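
For reference, the fixes for the last two items look roughly like this (eth0
and the value are examples; the EEPROM fix is NIC-specific):

ethtool -K eth0 tso off                  # turn off TCP segmentation offload
sysctl -w vm.min_free_kbytes=16384       # raise the kernel's free-memory watermark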

Some of the errors occurred every time I did a benchmark with netpipe/netio,
some only occurred infrequently during backups or with certain applications.
Sometimes the e1000 device just hung for a couple of seconds. The
interesting messages were always in the kernel log.

I've also seen switches behaving very strangely. Maybe testing the backup
with a direct connection between two computers would be a good idea too
(I've not followed the thread completely, maybe this already happened...).

Ralf


-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] tar error 256

2007-01-19 Thread Ralf Gross
Craig Barratt schrieb:
 What version of tar are you using?  Torsten reported that the
 newest version has changed the exit status in the case of
 certain relatively benign warnings that are therefore considered
 fatal by BackupPC.
 
 Could you also look through the XferLOG file and confirm that
 it is this same warning that tar is reporting?
 
 I still need to fix this issue for 3.x.

Is there a workaround for backuppc 2.x or 3.x? I'm seeing more of these errors,
and it's definitely because of a file that changed during the read.

Test with tar 1.16 (debian etch)

* changed file

/tmp$ /usr/bin/ssh -c blowfish -q -x -n -l root wl000346 env LC_ALL=C /bin/tar
-c -vv -f - -C /home/rg/test --totals --newer=2007-01-12 . > /tmp/foo.tar.gz
;  echo exitstate: $?
/bin/tar: Option --after-date: Treating date `2007-01-12' as 2007-01-12 00:00:00
drwxr-xr-x root/root 0 2007-01-19 09:08 ./
-rw-r--r-- root/root   3387392 2007-01-19 09:14 ./TEST--TEST
/bin/tar: ./TEST--TEST: file changed as we read it
Total bytes written: 3389440 (3.3MiB, 5.1MiB/s)
exitstate: 1

* no change

/tmp$ /usr/bin/ssh -c blowfish -q -x -n -l root wl000346 env LC_ALL=C /bin/tar
-c -vv -f - -C /home/rg/test --totals --newer=2007-01-12 . > /tmp/foo.tar.gz
;  echo exitstate: $?
/bin/tar: Option --after-date: Treating date `2007-01-12' as 2007-01-12 00:00:00
drwxr-xr-x root/root 0 2007-01-19 09:08 ./
-rw-r--r-- root/root   5099520 2007-01-19 09:14 ./TEST--TEST
Total bytes written: 5109760 (4.9MiB, 7.8MiB/s)
exitstate: 0


Test with tar 1.15.1 (solaris)

* changed file

[EMAIL PROTECTED]:/tmp$ /usr/bin/ssh -c blowfish -q -x -n -l root bang env 
LC_ALL=C /usr/local/bin/tar -c -vv -f - -C /export/home/rg/test --totals 
--newer=2007-01-12 . > /tmp/foo.tar.gz ;  echo exitstate: $?
/usr/local/bin/tar: Treating date `2007-01-12' as 2007-01-12 00:00:00 + 0 
nanoseconds
drwxr-x--- rg/other  0 2007-01-19 09:21:05 ./
-rw-r- rg/other  0 2007-01-04 14:46:00 ./foo
-rw-r- rg/ve   1122304 2007-01-19 09:21:12 ./TEST--TEST
/usr/local/bin/tar: ./TEST--TEST: file changed as we read it
Total bytes written: 1126400 (1.1MiB, 4.2MiB/s)
exitstate: 0

* no change

/tmp$ /usr/bin/ssh -c blowfish -q -x -n -l root bang env LC_ALL=C
/usr/local/bin/tar -c -vv -f - -C /export/home/rg/test --totals
--newer=2007-01-12 . > /tmp/foo.tar.gz ;  echo exitstate: $?
/usr/local/bin/tar: Treating date `2007-01-12' as 2007-01-12 00:00:00 + 0 
nanoseconds
drwxr-x--- rg/other  0 2007-01-19 09:21:05 ./
-rw-r- rg/other  0 2007-01-04 14:46:00 ./foo
-rw-r- rg/ve933888 2007-01-19 09:21:11 ./TEST--TEST
Total bytes written: 942080 (920KiB, 3.6MiB/s)
exitstate: 0


Ralf

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] tar error 256

2007-01-19 Thread Ralf Gross
Holger Parplies schrieb:
  Craig Barratt schrieb:
   What version of tar are you using?  Torsten reported that the
   newest version has changed the exit status in the case of
   certain relatively benign warnings that are therefore considered
   fatal by BackupPC.
  [...]
  
  Is there a workaround for backuppc 2.x or 3.x? I'm seeing more of these 
  errors
  and it's definitively because of a file that has changed during read.
 [downgrade debian tar] 
 
 when you want to switch to the etch version again).
 All instances of 'sudo' are meant to document what requires root privileges
 and what doesn't. You can, of course, do everything as root without 'sudo'.

Luckily the deb was still in my apt cache. I set the package on hold
now. I thought about this before, but I would like a solution that
works with the new tar too. I think it's just a question of time
until Craig comes up with a better solution.
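
The downgrade and hold itself is just (the version string is an example, use
whatever deb is still in your cache):

dpkg -i /var/cache/apt/archives/tar_1.15.1-2_i386.deb
echo "tar hold" | dpkg --set-selections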
 
 I don't really like that approach, and it might be cumbersome if you are
 talking about many client machines, but otherwise it's rather easy to do
 and probably safe (and you wrote one machine).
 
I've just updated/installed 3 debian etch servers - more to come ;)
 
 Another possibility could be to write a wrapper around either ssh on the
 server or tar on the client to change an exit code of 1 to an exit code of 0, 
 but that probably has the problem of affecting more serious errors as well
 (if it was as simple as patching exit code 1 to 0, I guess there would be a
 fix in place already). You could even do this in BackupPC itself, *possibly*
 as simple as changing line 213 (in 3.0.0beta3) in Xfer::Tar.pm as in
 
 -if ( !close($t->{pipeTar}) ) {
 +if ( !close($t->{pipeTar}) and $? != 256 ) {
 
 but that
 a) is *totally* untested,
 b) will affect all clients and not only one and
 c) will make all failures returning exit code 1 to be regarded as ok
(provided it even works)
 d) will of course void your BackupPC warranty ;-)

Downgrading from tar 1.16 seems to me to be the preferred method at the moment.
 
 - four good reasons not to try it unless you are really desperate :-). With
 a wrapper around ssh or tar you can at least limit the effect to one client.
 But downgrading tar still seems safest to me.

Yes.
 
 I hope someone can give you a better solution.

I think it's funny that this change was not classified as an incompatible
change in the Changelog...

Ralf

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] tar error 256

2007-01-17 Thread Ralf Gross
Craig Barratt schrieb:
 Bradley writes:
 
  Running into a problem with my larger machines doing backups. 90% of
  the time, the backup ends with the following message:
 
  backup failed (Tar exited with error 256 () status)
 
  I believe I read somewhere that it was due to a file changing during
  backup, probably in combination with the latency introducted in backup
  across the network.
 
  The reason I went with tar in the first place is that I read
  that rsync consumes more memory the larger the file list is,
  and this box has 256MB of RAM.
 
  My question at this point is the best approach to fixing this
  problem. I have been running backuppc since the beginning of the
  year, so I have at least 2 fulls and a weeks worth of incrementals,
  so, at least in theory, the number of files being rsynced should
  not be overly large. So should I convert my problem children to
  rsync, or should I convert everything over to rsync? Or is there a
  workaround for tar?
 
 What version of tar are you using?  Torsten reported that the
 newest version has changed the exit status in the case of
 certain relatively benign warnings that are therefore considered
 fatal by BackupPC.
 
 Could you also look through the XferLOG file and confirm that
 it is this same warning that tar is reporting?
 
 I still need to fix this issue for 3.x.

I'm getting the same error from one client since last week. It's a
debian etch server that was updated just before the problem started.
tar version 1.16, no problems with the old 1.15.

Ralf

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Authorization Required

2006-11-24 Thread Ralf Gross
Eric Snyder said:
 No, my BackupPC_Admin is already in my cgi-bin directory. Do both the
 symlink and the BackupPC_Admin need to be in the cgi-bin directory?

If you want to use http://debian/backuppc/ as your BackupPC link, you need
an index file that apache knows about (index.cgi).

 Thanks for the links on perl, I will read up and learn how to configure
 Apache2 with perl support. I am a windows guy and am only now learning
 linux. It's fun and I see why linux is much more secure than windows. It
 is however, very different to get things done in, windows being all plug
 and play and linux being very configuration file driven and compiling
 things rather than installing with exe files.

On debian perl should already be installed. You can check which perl
packages are installed with something like

dpkg -l *perl* | grep ^ii
ii  libarchive-zip-perl     1.16-1  Module for manipulation of ZIP archives
ii  libcompress-bzip2-perl  2.09-1  Perl interface to Bzip2 compression library
ii  libcompress-zlib-perl   1.41-1  Perl module for creation and manipulation of
[snip]

Looking at the error.log in /var/log/apache2/ might help too.

Ralf


-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Authorization Required

2006-11-23 Thread Ralf Gross
Eric Snyder said:
 I just installed BackupPC 3.0 on a new installation on Debian. I had a
 previous installation and have a SCSI drive with backup data from the
 old install so I am hoping to get reconfigured to use those backups. The
 problem currently is that I am getting an Authorization Required
 prompt and the install did not give me a username/password combination
 like that older version did.

 Where is the htpasswd file located so I can change the password to
 something I can use?

I just finished a fresh install on ubuntu 6.06 and put the following at
the end of my /etc/apache2/sites-enabled/000-default file (virtual host
section).

Alias /backuppc/ /usr/local/BackupPC/cgi-bin/
<Directory /usr/local/BackupPC/cgi-bin/>
    AllowOverride None

    Options ExecCGI FollowSymlinks
    AddHandler cgi-script .cgi
    DirectoryIndex index.cgi

    AuthGroupFile /etc/BackupPC/htgroup
    AuthUserFile /etc/BackupPC/htpasswd
    AuthType basic
    AuthName "BackupPC admin"
    require valid-user
</Directory>

I also created a symlink index.cgi -> BackupPC_Admin in the cgi directory
and put the user www-data into group backuppc.
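
The matching shell steps were roughly (paths as in the config above; the
htpasswd user name is an example):

ln -s BackupPC_Admin /usr/local/BackupPC/cgi-bin/index.cgi
htpasswd -c /etc/BackupPC/htpasswd backuppc    # create the password file and first web user
adduser www-data backuppc                      # let apache read the BackupPC files
/etc/init.d/apache2 reload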

Ralf



-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Authorization Required

2006-11-23 Thread Ralf Gross
Eric Snyder said:
 OK. I have done this. For now I have commented out the
 security/authorization section. When I request http://debian/backuppc/ I
 get a file not found.

Did you create the symlink index.cgi -> BackupPC_Admin?

 When I request http://debian/backuppc/BackupPC_Admin I get the
 BackupPC_Admin file as a text file. I am guessing that I need perl on my
 Apache2. is this correct?

http://backuppc.sourceforge.net/faq/BackupPC.html#requirements
http://backuppc.sourceforge.net/faq/BackupPC.html#step_8__cgi_interface

I'm not using mod_perl but perl-suid.

Ralf


-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] ACL support for File-RsyncP?

2006-08-18 Thread Ralf Gross
Hi,

I successfully tested a patched rsync version that supports ACLs. Because
backuppc uses File-RsyncP and not rsync directly, it doesn't benefit from
that.

What is the state of File-RsyncP's ACL support?

Ralf


-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep the last n revisions of files

2006-08-09 Thread Ralf Gross
Casper Thomsen said:

 Maybe this is a feature request, and maybe it is just a show off how dumb
 I am---let's see.

 What would be really great to have is the possibility to ensure that I
 have the last n revisions of files; no matter how many fulls or
 incrementals. I guess this is not the main goal of BackupPC (that is, to
 somewhat be a revision control system) but nontheless it is a feature (at
 least) I would higly appreciate, and I guess my users would also
 appreciate it.

I also think this would be the job of a revision control system.

 I read the comprehensive config.pl file, the FAQ, scanned through the
 latest 50 e-mails in -users and -devel, searched the mailing list and of
 course googled. However, I haven't found anything about what I've just
 described. Maybe I have just searched for the wrong terms?

 Any pointers, good ideas, work-arounds or whatever is of course
 appreciated. Thanks in advance!

How will you ensure that a file has not been changed several times since
the last backup? In what cycle would you start your backup to get every
existing version of that file?

Ralf




-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] backup never ending from a edubuntu client

2006-07-19 Thread Ralf Gross
don Paolo Benvenuto said:
 I installed backuppc on a ubuntu server, and I back up all the other
 ubuntu PCs on my lan.

 I have a problem backing up from one of the ubuntu clients, precisely
 from a edubuntu pc. The edubuntu developers ensure me that edubuntu is
 identical to ubuntu in all that refers to rsync and ssh.

 The backup of the edubuntu client never ends. With strace I could see
 that the rsync on the client go timeout.

 The config is the same as for the other clients (all working perfectly),
 except for the dirs not to backup: I added in that config the dirs
 corresponding to /dev /proc etc. below /opt/ltsp/i386

 I use the rsync method.

Try replacing --devices with -D in $Conf{RsyncArgs}. I also had various
problems with some clients I back up with rsync. This rsync option changed
a while ago.
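
If the option is spelled out in the per-host config file, the change can be as
small as this (file name is an example; check how your config quotes the
option first):

sed -i "s/--devices/-D/" /etc/backuppc/clienthost.pl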

Ralf




-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys -- and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can't open /var/lib/backuppc/pc... for empty output

2006-06-06 Thread Ralf Gross
Ralf Gross said:

 since last weekend I have a problem with the rsync backup of one of our
 servers. I'm not sure if this is related to an update of the Solaris
 rsyncd (sunfreeware 2.6.6 -> 2.6.8) last Friday.

 This is definitely a problem with the 2.6.8 (sunfreeware) rsync. I
 switched back to 2.6.6 and have no more problems with that particular
 module.

Well, now I have the same problem with another host. But this time it's
a linux system.

Can't open /var/lib/backuppc/pc/xx/new//fideva-share/ for empty output
  create 0 /   0
Can't open /var/lib/backuppc/pc/xx/new//fideva-share/ for empty output
  create 0 /   0
Can't open /var/lib/backuppc/pc/xx/new//fideva-share/ for empty output
  create 0 /   0

I found 2 other messages regarding this error message, but there seems to
be no solution for this yet.

Ralf




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] backuppc bug tracker?

2006-06-06 Thread Ralf Gross
Hi,

I was looking at the backuppc home page and the sourceforge project page
for the bug tracker that backuppc is using. I couldn't find any info on
how to file a bug; what is the recommended way to do this?

Ralf



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can't open /var/lib/backuppc/pc... for empty output

2006-06-01 Thread Ralf Gross
Ralf Gross said:

 since last weekend I have a problem with the rsync backup of one of our
 servers. I'm not sure if this is related to an update of the Solaris rsyncd
 (sunfreeware 2.6.6 -> 2.6.8) last Friday.

This is definitely a problem with the 2.6.8 (sunfreeware) rsync. I
switched back to 2.6.6 and have no more problems with that particular
module.

Ralf



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] Can't open /var/lib/backuppc/pc... for empty output

2006-05-30 Thread Ralf Gross
Hi,

since last weekend I have a problem with the rsync backup of one of our
servers. I'm not sure if this is related to an update of the Solaris rsyncd
(sunfreeware 2.6.6 -> 2.6.8) last Friday.

BackupPC: Debian GNU/Linux 3.1 (sarge/stable), rsync version 2.6.4
protocol version 29, backuppc 2.1.1-2sarge1, /var/lib/backuppc on reiserfs
(no inode problem)

Client: Sun Solaris 8, rsync version 2.6.8  protocol version 29

BackupPC backs up 6 of 8 rsync modules successfully, before getting
trouble with the server module.

$Conf{RsyncShareName} = [ 'opt', 'usr', 'local', 'partners', 'projekte',
'home', 'server', 'var', 'etc'];

$ grep "Connected to module" /tmp/XferLOG.bad
Connected to module opt
Connected to module usr
Connected to module local
Connected to module partners
Connected to module projekte
Connected to module home
Connected to module server



*** XferLOG.bad
Connected to $host:873, remote version 29
Connected to module server
Sending args: --server --sender --numeric-ids --perms --owner --group
--devices --links --times --block-size=2048 --recursive . .
Xfer PIDs are now 9874
  create d 755   0/0 512 .
  create d2755   0/12048 cvs
[-- SNIP --]
  create d 777   0/1 512 tftpboot
Can't open /var/lib/backuppc/pc/$host/new//fserver/ for empty output
  create 0 /   0
Can't open /var/lib/backuppc/pc/$host/new//fserver/ for empty output
  create 0 /   0
Can't open /var/lib/backuppc/pc/$host/new//fserver/ for empty output
  create 0 /   0
[-- SNIP 20421162 identical lines--]
Done: 0 files, 0 bytes
Got fatal error during xfer (aborted by signal=ALRM)


A simple rsync backup on the command line backs up this module without a
problem.

$ rsync -av $host::module /tmp/foobar/
Password:
receiving file list ...
[-- SNIP --]
tftpboot/
tftpboot/bench
tftpboot/bug41r1.out
tftpboot/bug41r1.readme
tftpboot/bug43r1.out
tftpboot/bug43r1.readme
tftpboot/installit
tftpboot/ppc.boot
tftpboot/ppc.boot.25
sent 1475563 bytes  received 11839901025 bytes  3642940.04 bytes/sec
total size is 11870176070  speedup is 1.00

I found a recent post to the list with the same error message.
Subject: loop in empty folder?
http://thread.gmane.org/gmane.comp.sysutils.backup.backuppc.general/7084/focus=7084

But this does not seem to be the same problem as here, because the 'server'
module and the tftpboot subdirectories are not empty!

Any ideas?

Ralf






___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] scheduling priority of BackupPC_tarExtract

2005-11-17 Thread Ralf Gross
Hi,

today I noticed a very high load (15) on one backuppc server. The reason
was the BackupPC_tarExtract process. As a result of the high load, other
applications running on this host had problems and timed out. This is not
our main backuppc host; it is mainly used as our monitoring system
(nagios, cacti), but also to back up 2 machines.

Is there a way to run the backuppc processes with a lower priority
(nice, renice), so that the monitoring isn't affected by high cpu loads
caused by backuppc?

I tried to put 'nice -n 10' somewhere in BackupPC_dump near the place
where BackupPC_tarExtract is started, but this didn't work.

Any ideas?

Ralf



---
This SF.Net email is sponsored by the JBoss Inc.  Get Certified Today
Register for a JBoss Training Course.  Free Certification Exam
for All Training Attendees Through End of 2005. For more info visit:
http://ads.osdn.com/?ad_id=7628alloc_id=16845op=click
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] $Conf{FullKeepCnt} with $Conf{FullPeriod} set to -1

2005-08-31 Thread Ralf Gross
Craig Barratt schrieb:
 Ralf Gross writes:
 
  Craig Barratt schrieb:
 
   Ralf Gross writes:
  
  Is there any side effect on setting $Conf{IncrPeriod} to a very high
  value?
 
 My first reaction was yes, but the answer is actually no.  The normal
 scheduler makes sure a full backup has to be at least $Conf{FullPeriod}
 after the last full backup, and at least $Conf{IncrPeriod} after the
 last incremental backup.
 
 But the BackupPC_serverMesg command starts a manual backup, which
 ignores all the background scheduling rules.  So a large value is
 fine.  That will prevent any normally scheduled backup from occurring
 (provided an incremental backup exists).
 
 In fact, this will move all control to cron, so if you disable your
 crontab entries then no backups will occur without you having to
 set $Conf{FullPeriod} to a negative value (again, provided an
 incremental backup exists).  (And in fact, because of how the
 code is written, if you use a value above, say, 1 million,
 then you don't even need an incremental backup to prevent all
 regular backups.)

That seems to be exactly what I want!

Ralf


---
SF.Net email is Sponsored by the Better Software Conference  EXPO
September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices
Agile  Plan-Driven Development * Managing Projects  Teams * Testing  QA
Security * Process Improvement  Measurement * http://www.sqe.com/bsce5sf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] $Conf{FullKeepCnt} with $Conf{FullPeriod} set to -1

2005-08-26 Thread Ralf Gross
Craig Barratt schrieb:
 Ralf Gross writes:

  I schedule backups exclusively with cron with this option and crontab
  entries.
  
  $Conf{FullPeriod} = -1;
  
  5 20 * * 5 /usr/share/backuppc/bin/BackupPC_serverMesg backup zorg zorg root 1 >/dev/null 2>&1
  
  5 20 * * 1-4 /usr/share/backuppc/bin/BackupPC_serverMesg backup zorg zorg root 0 >/dev/null 2>&1

 Yes, this is the right way to do it.  However, I recommend setting
 $Conf{IncrPeriod} to a number bigger than 0.95 because currently
 there will be a race condition between BackupPC doing it's normal
 scheduling of the incremental and the manual incremental via cron.
 Eg:

 $Conf{IncrPeriod} = 1.1;

Ok, I changed it to 1.1

 Alternatively, you don't need to do the incrementals with cron since
 BackupPC can schedule those every day after the full.

The full backups only take place on the weekend, the incremental backups
run from Monday to Friday. I like to know when each backup starts and
therefore I'll stay with cron. At the moment backuppc backs up only 6
hosts.

  Now I want to keep the last 4 weekly full backups + 3 monthly full
  backups. So I changed the config.
 
  $Conf{FullKeepCnt} = [4, 0, 3];
 
  I'm not sure if this will work with $Conf{FullPeriod} set to -1
  after
  reading this part of the documentation.
 
  # Entry #n specifies how many fulls to keep at an interval of
  # 2^n * $Conf{FullPeriod} (ie: 1, 2, 4, 8, 16, 32, ...).

 No, it won't work correctly with a negative $Conf{FullPeriod}.
 You should set $Conf{FullPeriod} to slightly more than 7, so
 that $Conf{FullKeepCnt} works correctly, but cron beats BackupPC
 for each full.

I changed it to 7.1. If I want to disable full backups for a host for
a while, it is not sufficient to just comment out the crontab entry
anymore; I have to remember to set $Conf{FullPeriod} to -1 again?

Ralf


---
SF.Net email is Sponsored by the Better Software Conference  EXPO
September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices
Agile  Plan-Driven Development * Managing Projects  Teams * Testing  QA
Security * Process Improvement  Measurement * http://www.sqe.com/bsce5sf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/