Re: [BackupPC-users] FS and backuppc performance

2009-03-19 Thread Pedro M. S. Oliveira
With the amount of data I reported and the number of files, I have just 6% of 
inodes occupied, so I don't think that is really a problem. Do you use XFS for 
any special purpose besides dynamic inode allocation? What do you think about 
the recovery and maintenance tools for XFS? And last but not least, don't you 
have a bigger processor overhead with XFS?
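
For reference, the 6% figure is the kind of number df reports directly; a quick
way to check it (the mount point below is only an example of where a pool might
live):

  df -i /var/lib/backuppc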

Usually people tend to say the processor is not important while backing up, but 
from what I've seen, if you have 8 or more hosts backing up data, the processor 
and memory are stressed. If you have to manage a FS with a large processor 
demand, can't this be a bottleneck?
Cheers,
Pedro M. S. Oliveira

On Wednesday 18 March 2009 19:30:33 Carl Wilhelm Soderstrom wrote:
 On 03/18 05:48 , Pedro M. S. Oliveira wrote:
  What FS do you guys recommend/use and why?
 
 I typically use XFS for backuppc data pools, and ext3 for the root
 filesystem. I don't want to run out of inodes like ext3 can do. :)
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--


Re: [BackupPC-users] cloning the pool

2009-03-19 Thread Les Mikesell
Koen Linders wrote:
 If you want an idea of what isn't possible:
 
 A year ago I tried copying a pool much smaller than my current one (see
 below) to a USB disk, using a Xeon 2.8 GHz/1 MB with 2 GB DDR, and it ran
 out of memory copying via rsync -H.
 
 Somewhere in the mailing list there is other information.
 
 Someone said an rsync on a 2-million-file pool worked perfectly for him
 with 2 GB of memory. Not for me.

Rsync 3.x may need less memory - and the requirements may be different 
when the target is on a remote machine.

-- 
   Les Mikesell
lesmikes...@gmail.com






Re: [BackupPC-users] cloning the pool

2009-03-19 Thread Koen Linders
If you want an idea of what isn't possible:

A year ago I tried copying a pool much smaller than my current one (see
below) to a USB disk, using a Xeon 2.8 GHz/1 MB with 2 GB DDR, and it ran
out of memory copying via rsync -H.

Somewhere in the mailing list there is other information.

Someone said an rsync on a 2-million-file pool worked perfectly for him
with 2 GB of memory. Not for me.

Now I stop backuppc at night and do: dd if=/dev/sda5 of=/dev/sdb1 bs=4K
It works perfectly. I managed to copy this pool back to another server with a
much bigger raid1 array formatted ext3 with the same blocksize. And it works
without a problem afterwards, afaik.
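
A minimal sketch of that nightly procedure, assuming the init script lives at
/etc/init.d/backuppc and the same source/target partitions as above:

  # stop BackupPC so the pool is quiescent during the block-level copy
  /etc/init.d/backuppc stop

  # raw copy of the pool partition onto the (equal or larger) USB partition
  dd if=/dev/sda5 of=/dev/sdb1 bs=4K

  # restart BackupPC once the copy has finished
  /etc/init.d/backuppc start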

Pool is 235.52GB comprising 718235 files and 4369 directories (as of 19/3
04:12), 
Pool hashing gives 121 repeated files with longest chain 11, 
Nightly cleanup removed 6736 files of size 5.27GB (around 19/3 04:12), 
Pool file system was recently at 61% (19/3 10:02), today's max is 61% (19/3
04:00) and yesterday's max was 61%.

Greetings,
Koen Linders

-----Original Message-----
From: stoffell [mailto:stoff...@gmail.com]
Sent: Wednesday, 18 March 2009 21:57
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] cloning the pool

 I want to clone the pool to a local disk attached via USB.
 I can't do it with dd because the pool is on a raid volume
 that doesn't contain only the pool.

We're about to do exactly the same thing, to get ourselves a weekly
off-site copy. We will use 500 GB external disks and rsync -aH the
complete backuppc directory to them. We will use LVM and an encrypted
filesystem for enhanced security.
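
A rough sketch of what such a weekly run could look like; the device name,
mount point, pool path and the use of LUKS are all assumptions to adapt:

  # stop BackupPC so the pool doesn't change underneath rsync
  /etc/init.d/backuppc stop

  # open and mount the encrypted external disk
  cryptsetup luksOpen /dev/sdc1 offsite
  mount /dev/mapper/offsite /mnt/offsite

  # copy the whole BackupPC tree, preserving hardlinks (-H)
  rsync -aH --delete /var/lib/backuppc/ /mnt/offsite/backuppc/

  # unmount, close the encrypted device, and restart BackupPC
  umount /mnt/offsite
  cryptsetup luksClose offsite
  /etc/init.d/backuppc start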

We'll have to test it out because the wiki is not very clear about it:
rsync has different limitations than cp - don't ask me whether it's
better or worse. It's simply something different to try.

It might be nice to have some case studies / usage scenarios on
the backuppc wiki?

I'll report our experiences after we've tested it all out.

cheers
stoffell






Re: [BackupPC-users] FS and backuppc performance

2009-03-19 Thread Carl Wilhelm Soderstrom
On 03/19 11:56 , Pedro M. S. Oliveira wrote:
 With the amount of data I reported and the number of files, I have just 6% of
 inodes occupied, so I don't think that is really a problem. Do you use XFS
 for any special purpose besides dynamic inode allocation?

The ability to be resized while mounted is good as well; tho I don't use it
much.
There may be a performance improvement over ext3; tho it's very hard to say.
(Backuppc is a fairly unusual load situation; and hard to benchmark well).
I've not noticed a performance problem from it.
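
For what it's worth, the online grow is a one-liner, assuming the pool is
mounted at /var/lib/backuppc and the underlying device has already been
enlarged:

  xfs_growfs /var/lib/backuppc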

I used to use reiserfs on backuppc installations; but after a couple of
years, some corruption bugs turned up which made me abandon it. I didn't
want to go back to the inode limitations of ext3 tho; so I went with XFS.

 Usually people tend to say processor is not important while backing up but

Backuppc will use all the processor, ram, and disk speed you give it. I've
not had a box where they weren't all pegged. I tend to limit concurrent
backups to 2; maybe 3 or 4 on a really high-end box (multiple processors and
a proven fast disk array); to control disk-head thrashing.
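
The knob for that limit is $Conf{MaxBackups}; a quick way to check what a
server is currently set to, assuming the stock config location:

  grep MaxBackups /etc/BackupPC/config.pl
  # e.g. $Conf{MaxBackups} = 2;  -- edit the value and reload BackupPC to change it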

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] FS and backuppc performance

2009-03-19 Thread Les Mikesell
Carl Wilhelm Soderstrom wrote:
 
 Backuppc will use all the processor, ram, and disk speed you give it. I've
 not had a box where they weren't all pegged. I tend to limit concurrent
 backups to 2; maybe 3 or 4 on a really high-end box (multiple processors and
 a proven fast disk array); to control disk-head thrashing.

One thing I think is missing from backuppc that amanda has had for years 
is a concept of grouping (or excluding...) by network connectivity.  I 
have a mix of local and remote targets and would like to be able to 
control concurrency to permit 1 or 2 local backups plus separate limits 
for each independent WAN path.

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] Debian Etch to Lenny problem with external disks

2009-03-19 Thread olafkewl
Les Mikesell a écrit :

[...]
 The whole archive needs to be on the same filesystem.  If you use a 
 separate disk/partition it needs to be mounted or symlinked at the 
 /var/lib/backuppc level.
   

Thanks a lot, this worked out !
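
For anyone hitting the same thing, the quoted advice boils down to one of
these two; the device and paths are only examples:

  # either mount the backup disk directly on the BackupPC data directory...
  mount /dev/sdb1 /var/lib/backuppc

  # ...or keep it mounted elsewhere and point a symlink at it
  ln -s /mnt/backupdisk /var/lib/backuppc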




Re: [BackupPC-users] FS and backuppc performance

2009-03-19 Thread stoffell
2009/3/18 Pedro M. S. Oliveira pmsolive...@gmail.com:
 From what I've seen on the list there are some people using XFS, Ext3, and
 so on. What's your experience with the different file systems?
 What FS do you guys recommend/use and why?

We use XFS on a 3-disk raid 5 (3x500gb).

Just because we're used to using XFS and it performs well with a lot
of small files.

cheers
stoffell



[BackupPC-users] Grouping by network connectivity (aka replacement queue mechanism)

2009-03-19 Thread John Rouillard

This was originally part of:

 Subject:  Re: [BackupPC-users] FS and backuppc performance
 In-Reply-To: 49c29c96.4030...@gmail.com

I am starting a new thread on this rather than hijacking the original
thread.

On Thu, Mar 19, 2009 at 02:27:18PM -0500, Les Mikesell wrote:
 Carl Wilhelm Soderstrom wrote:
  
  Backuppc will use all the processor, ram, and disk speed you give it. I've
  not had a box where they weren't all pegged. I tend to limit concurrent
  backups to 2; maybe 3 or 4 on a really high-end box (multiple processors
  and a proven fast disk array); to control disk-head thrashing.
 
 One thing I think is missing from backuppc that amanda has had for years 
 is a concept of grouping (or excluding...) by network connectivity.  I 
 have a mix of local and remote targets and would like to be able to 
 control concurrency to permit 1 or 2 local backups plus separate limits 
 for each independent WAN path.

I would like this too. I currently use semaphore to create a set of
available slots and lock the slot during the backup using a pre/post
dump command.

Most of our hosts are named after the site they are at:

  box1.site1.example.com
  box1.site2.example.com
  box2.site3.example.com

etc. With semaphore I create one resource pool for each remote site
based on how many parallel backups I am willing to allow from that
site:

  Semaphore site1 has 20 resources.
Resource 0 is available.
Resource 1 is available.
Resource 2 is available.
Resource 3 is available.
Resource 4 is available.
Resource 5 is available.
Resource 6 is available.
Resource 7 is available.
Resource 8 is available.
Resource 9 is available.
Resource 10 is available.
Resource 11 is available.
Resource 12 is available.
Resource 13 is available.
Resource 14 is available.
Resource 15 is available.
Resource 16 is available.
Resource 17 is available.
Resource 18 is available.
Resource 19 is available.

  Semaphore site2 has 2 resources.
Resource 0 is available.
Resource 1 is taken by PID 29224.

  Semaphore site3 has 2 resources.
Resource 0 is available.
Resource 1 is available.

Using some home-written scripts (runUserCmds, CheckQueue), I set:

  $Conf{DumpPreUserCmd} = '/etc/BackupPC/bin/runUserCmds -t $type \
-c $client -H $host -P $cmdType CheckQueue';
  $Conf{DumpPostUserCmd}= '/etc/BackupPC/bin/runUserCmds -t $type \
-c $client -H $host -P $cmdType CheckQueue';

which locks one of the available semaphores if it's a PreUserCmd
and unlocks if it's a PostUserCmd. If it can't lock a semaphore, it
exits with exit code 1, and because:

  $Conf{UserCmdCheckStatus} = 1;

is enabled in the config, the host is skipped for that cycle.
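
John's runUserCmds/CheckQueue scripts aren't reproduced here, but the general
idea can be sketched with plain lock directories standing in for semaphore
slots; the script below (paths, arguments and all) is only an illustration of
the approach, not his actual code:

  #!/bin/sh
  # slot-lock.sh <site> <max-slots> <lock|unlock> <host>
  # called from DumpPreUserCmd (lock) and DumpPostUserCmd (unlock)
  SITE=$1; MAX=$2; ACTION=$3; HOST=$4
  DIR=/var/run/backuppc-slots/$SITE
  mkdir -p "$DIR"

  case "$ACTION" in
    lock)
      i=0
      while [ "$i" -lt "$MAX" ]; do
        # mkdir is atomic, so each slot directory acts as a cheap mutex
        if mkdir "$DIR/slot.$i" 2>/dev/null; then
          echo "$HOST" > "$DIR/slot.$i/owner"
          exit 0
        fi
        i=$((i + 1))
      done
      # no free slot: with $Conf{UserCmdCheckStatus} = 1 the host gets skipped
      exit 1
      ;;
    unlock)
      # release only the slot this host locked in the pre-dump command
      for s in "$DIR"/slot.*; do
        [ -f "$s/owner" ] && [ "$(cat "$s/owner")" = "$HOST" ] && rm -rf "$s" && break
      done
      exit 0
      ;;
  esac

A host at site2 would then use something like slot-lock.sh site2 2 lock $host
as its DumpPreUserCmd and the matching unlock call as its DumpPostUserCmd.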

So it is doable in BackupPC without any core changes and the upside of
this is that you can group by factors other than remote site. The
downside is that the log file shows:

  2009-03-02 08:50:07 DumpPreUserCmd returned error status 256... exiting

every time the host is scheduled to be backed up but is unable to
reserve a slot. Also you can have backups fail when they are starved
for resources.

For example:

One thing I have to watch is bandwidth usage. My plan for handling
that is to allocate bandwidth in 64KB/s (512Kb/s) chunks, and use the
CheckQueue script to determine what the bw limit is for the given host
(by scanning /etc/BackupPC/pc/hostname.pl or config.pl). Then I just
reserve the proper number of chunks to reserve that bandwidth.

So if I have a site that is bw-limited to 2Mb/s (approx. 4 chunks), I will
allocate 4 resources in the pool for that site.

If one of the hosts (one_mb) at that site has a bwlimit of 1Mb/s, then
it won't run unless there are at least 2 free resources. So no more
than two 512Kb/s hosts can be running alongside it.
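
The chunk arithmetic itself is trivial; a sketch, with the bwlimit value
standing in for whatever gets read out of the per-host config:

  bwlimit_kbps=1024                          # e.g. a host capped at 1 Mb/s
  chunks=$(( (bwlimit_kbps + 511) / 512 ))   # round up to whole 512 Kb/s chunks
  echo "reserve $chunks slot(s) in the site's pool"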

Semaphore does support fair queuing, where nothing queued after one_mb
will run until one_mb has run. This guarantees that one_mb will get run
at some point. However, this doesn't work with BackupPC's queuing
mechanism.  With

  $Conf{MaxBackups} = 8;

to keep the load on the system reasonable, any backup that is run and queued
waiting for a resource uses one of these 8 slots. So I could have 7
jobs waiting on a resource for site2, even though backups for site1 and
site3 have plenty of resources available. The only way I can see
around this is to set:

  $Conf{MaxBackups} = 1;

or some such number, and have an additional queue:

  Semaphore actual_running_backups has 8 resources.
Resource 0 is available.
Resource 1 is taken by PID 29224.
Resource 2 is available.
Resource 3 is available.
Resource 4 is available.
Resource 5 is available.
Resource 6 is available.
Resource 7 is available.

so basically BackupPC will queue a host that needs a backup, and the
control of how many are actually running is totally external to
BackupPC. I haven't tried this yet, but I think it will work.

(BTW, semaphore is a ksh implementation of semaphores written by John
Spurgeon 

[BackupPC-users] Error: auth required, but service web_backup is open/insecure

2009-03-19 Thread Clint Alexander
Hi everyone.

As the subject says, I've run into an issue where BackupPC believes the 
connection is insecure when it is not. I did some general debugging and info 
collection...


 ON BACKUPPC CLIENT


# cat /etc/hosts.allow
empty

# cat /etc/hosts.deny
empty

# /usr/bin/rsync --version
rsync  version 2.6.8  protocol version 29
Copyright (C) 1996-2006 by Andrew Tridgell, Wayne Davison, and others.

# cat /etc/rsyncd.conf
motd file   = /etc/rsyncd/rsyncd.motd
log file= /var/log/rsyncd.log
pid file= /var/run/rsyncd.pid
lock file   = /var/run/rsyncd.lock
port= 873
syslog facility = local5
max connections = 4
use chroot  = no
uid = root
gid = root

[web_backup]
  comment = Backups & Restores
  path= /websites
  auth users  = backuppc
  secrets file= /etc/rsyncd/rsyncd.secrets


# cat /etc/rsyncd/rsyncd.secrets
backuppc:password
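
Two quick checks against that config from the BackupPC server (or any machine
with rsync); the backuppc user is the one listed under auth users above, and
the exact error/prompt text may vary with rsync version:

  # module listing needs no password
  rsync myserver::

  # pulling from the module should prompt for the backuppc user's password
  rsync backuppc@myserver::web_backup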




 ON BACKUPPC SERVER


Software versions are:
- Red Hat Enterprise Linux Server release 5.3
- BackupPC: v3.1.0
- File-RsyncP: v0.68


$ id
uid=101(backuppc) gid=102(backuppc) groups=102(backuppc)

$ cat /etc/hosts
192.168.0.67    myserver

$ pwd
/var/lib/BackupPC/temp

$ ls -la
total 8
drwxr-xr-x 2 backuppc backuppc 4096 Mar 19 20:24 .
drwxr-x--- 8 backuppc root 4096 Mar 19 20:01 ..

$ rsync -qazrt --delete backu...@myserver::web_backup /var/lib/BackupPC/temp
Password: entered password

(a few seconds passed and then back to prompt)

$ ls -la
total 16
drwxr-xr-x 4 backuppc backuppc 4096 Mar 19 19:50 .
drwxr-x--- 8 backuppc root 4096 Mar 19 19:45 ..
drwxr-x--- 4 backuppc backuppc 4096 Mar 19 19:48 test_folder_1
drwsrws--- 4 backuppc backuppc 4096 Mar 19 19:48 www.mysite1.com


(Confirmed rsync was working and authentication was required)


$ ./BackupPC_dump -f -v myserver
cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 myserver
cmdSystemOrEval: finished: got output PING myserver (192.168.0.67) 56(84) bytes 
of data.
64 bytes from myserver (192.168.0.67): icmp_seq=1 ttl=64 time=0.101 ms

--- myserver ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms

cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 myserver
cmdSystemOrEval: finished: got output PING myserver (192.168.0.67) 56(84) bytes 
of data.
64 bytes from myserver (192.168.0.67): icmp_seq=1 ttl=64 time=0.081 ms

--- myserver ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms

CheckHostAlive: returning 0.081
full backup started for directory web_backup
started full dump, share=web_backup
Connected to myserver:873, remote version 29
Negotiated protocol version 28
Error connecting to module web_backup at myserver:873: auth required, but 
service web_backup is open/insecure
Got fatal error during xfer (auth required, but service web_backup is 
open/insecure)
cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 myserver
cmdSystemOrEval: finished: got output PING myserver (192.168.0.67) 56(84) bytes 
of data.
64 bytes from myserver (192.168.0.67): icmp_seq=1 ttl=64 time=0.095 ms

--- myserver ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms

cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 myserver
cmdSystemOrEval: finished: got output PING myserver (192.168.0.67) 56(84) bytes 
of data.
64 bytes from myserver (192.168.0.67): icmp_seq=1 ttl=64 time=0.082 ms

--- myserver ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms

CheckHostAlive: returning 0.082
Backup aborted (auth required, but service web_backup is open/insecure)
Not saving this as a partial backup since it has fewer files than the prior one 
(got 0 and 0 files versus 0)
dump failed: auth required, but service web_backup is open/insecure
-bash-3.2$


Now, I changed this to not require authentication just for testing...
$Conf{RsyncdAuthRequired} = 0

... and I got this error (snipped only relevant info...)

$ ./BackupPC_dump -f -v myserver
...
full backup started for directory web_backup
started full dump, share=web_backup
Connected to myserver:873, remote version 29
Negotiated protocol version 28
Error connecting to module web_backup at myserver:873: unexpected response: ''
Got fatal error during xfer (unexpected response: '')
cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 myserver
cmdSystemOrEval: finished: got output PING myserver (192.168.0.67) 56(84) bytes 
of data.
64 bytes from myserver (192.168.0.67): icmp_seq=1 ttl=64 time=0.094 ms



So, I've confirmed everything is working outside of BackupPC. I found 
this thread, which touches on what may be the issue: 
http://www.adsm.org/lists/html/BackupPC-users/2008-06/msg00295.html

However, this patch did not work for me.