Re: [BackupPC-users] BackupPC 4.1.x vs. MacOS rsyncd - SOLVED

2017-07-10 Thread Josh Malone

On 7/10/17 9:47 AM, Josh Malone wrote:

On 7/7/17 10:50 AM, Bob Katz wrote:
Dear Josh: I've got OSX working fully with backuppc using rsyncd on 
OSX11. My method is described in detail below. I've posted this in the 
past, but I can't find the message on the server right now, so I'm 
reposting. Hope this helps!



Thanks, but I'm trying to figure out why BPC 3.x worked with the stock 
OSX rsync and now 4.x doesn't. I really can't install homebrew and a 
second rsync on all my OSX machines just for backuppc.


Is there a way to affect which rsync command gets run so I can remove the 
-s option?


-Josh



I finally traced it to the --protect-args option, which I've deleted 
from the RsyncArgs config for my Mac host, and all seems well. The short 
option "-s" for "--protect-args" was just not parsable by my brain until 
now.
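
For the archives, here's a sketch of what the per-host override looks 
like. The argument list below is illustrative, not the real 4.x default 
-- copy $Conf{RsyncArgs} from your global config.pl and just delete 
'--protect-args' (short form -s), which Apple's bundled rsync 2.6.9 
predates:

    # pc/mymac.pl -- hypothetical per-host config file for the Mac client
    $Conf{RsyncArgs} = [
        '--super', '--recursive', '--times', '--links', '--hard-links',
        '--perms', '--owner', '--group', '-D', '--numeric-ids',
        # '--protect-args',   # removed: stock OS X rsync doesn't know it
    ];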


-Josh




[BackupPC-users] BackupPC 4.1.x vs. MacOS rsyncd

2017-07-06 Thread Josh Malone

All,

I'm currently using backuppc to back up files from a Mac system via 
rsyncd. With 3.x this works fine, but my new 4.1.3 server fails the 
backup with zero files. Logs on the Mac system indicate that perhaps new 
rsync options are being used in 4.x that MacOS rsyncd doesn't like:


rsyncd[49844]: rsync: on remote machine: -slHogDtprcx: unknown option

Indeed, if I run a test rsync locally on the Mac, I get complaints about 
the options. If I leave off the "-s" from the options string, the command 
works.
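
To reproduce it by hand (a sketch -- the module name is made up, and this 
assumes the stock Apple rsync 2.6.9, which predates the 3.0-era 
'-s'/'--protect-args' option):

    # the option cluster from the log above fails...
    rsync -slHogDtprcx localhost::somemodule /tmp/test
    # ...but the same cluster without the leading 's' works
    rsync -lHogDtprcx localhost::somemodule /tmp/test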


Is anybody successfully using BackupPC 4.1.x to back up MacOS 10.10?

Is there any way to customize the options used on the Mac client?

Thanks,

-Josh



Re: [BackupPC-users] Uncompressed pool compressed pool size with compresslevel 3

2010-09-08 Thread Josh Malone

Ed McDonagh wrote:



The 3.2.0beta1 version didn't have the graph patch.

Could this be anything to do with your issue?

For my own benefit, can anyone tell me if B. Alexander's debian build of
3.2 has the graphing patch? I haven't tested it yet. And on that note,
does anyone have any success or otherwise to report on the debian build
on Ubuntu, particularly 10.04?


Sorry for the thread hijack. Where is this graph patch of which you speak?

-Josh



Re: [BackupPC-users] Backup backuppc

2010-08-31 Thread Josh Malone

Farmol SPA wrote:

 Hi list.

I would like to ask which is the simplest yet effective way to dump
backuppc stuff (mainly __TOPDIR__), e.g. to a removable hard disk that
would be used in a disaster-recovery scenario where the plant was
destroyed and I need to restore data from this surviving device. Is an
'rsync -aH' enough?

TIA.
Alessandro


If your 'topdir' is its own filesystem, LVM volume, etc., you can use 'dump' to back 
up a snapshot of your pool. Adapting the script from the link posted earlier:


#!/bin/bash

EXTDISK=/dev/sdc
POOLDISK=/dev/volgroup/backuppc

# snapshot the pool LV so dump sees a quiesced filesystem
lvcreate -l '100%FREE' -s -n snapshot $POOLDISK
mount $EXTDISK /mnt/tmp
dump -a0f - /dev/volgroup/snapshot | gzip > /mnt/tmp/pool-backup.0.gz
umount /mnt/tmp
lvremove -f /dev/volgroup/snapshot





Re: [BackupPC-users] nmblookup = good; ping = enemy

2010-07-28 Thread Josh Malone

Frank J. Gómez wrote:

$ host vostro1400
vostro1400 has address 8.15.7.117
vostro1400 has address 63.251.179.13
Host vostro1400 not found: 3(NXDOMAIN)


Looks like DNS wildcarding on the part of Verizon to me. I suspect 'host 
foobarblablah' would return the exact same thing.


As Les said, you need to fully-qualify your hostnames or replace all your ping 
 commands with 'nmblookup' commands.
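
If renaming the host entries isn't convenient, a per-host override along 
these lines should also do it (a sketch -- the domain is made up):

    # pc/vostro1400.pl: back up the short name under its FQDN so the
    # lookup never hits the wildcarded search domain
    $Conf{ClientNameAlias} = 'vostro1400.home.example.net';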


-Josh



Re: [BackupPC-users] Move single host backup to different path

2010-07-20 Thread Josh Malone
On Tue, 20 Jul 2010 13:09:37 +0100, James Wells
james.we...@webcertain.com wrote:
 The only way I can see it working would be to symlink the new path into
 the old path, but no guarantees it will like it.

I'm going to guess that wouldn't work, since backuppc relies on being able
to make hard links to files in the cpool, and hard links can't cross
filesystems.

-Josh



Re: [BackupPC-users] High Backup Data Volumes After Re-adding an Excluded Directory

2010-07-19 Thread Josh Malone
On Sun, 18 Jul 2010 10:24:08 -0400, Norbert Hoeller nhoel...@sinet.ca
wrote:
 While trying to diagnose the high backuppc data volumes issue posted to the mailing list on June 14th, I had excluded a directory structure containing about 140MB of data. I removed the exclude once Craig had provided a fix for File::RsyncP, and noticed that backup volumes jumped by about 150MB. Tracing suggested that all the files in the previously excluded directory structure were being backed up on every incremental backup, even though the content of the files was unchanged (the first incremental backup after the directory was added indicated that backuppc had found the file in the backup pool).

 Although the contents of the files had not changed, I had 'touch'ed the files during the period when the directory structure was excluded, so that Google Sitemap would index them. It seems that the backuppc incremental backup got confused and repeatedly selected the files for backup even though the file date was no longer changing.

 File::RsyncP/rsync should have determined that the contents of the files were identical to the pool copy. Verbose logging suggests that checksums were exchanged, but rsync did nothing with them (the remote system reported false_alarms=0 hash_hits=0 matches=0). The reason is not clear. I had enabled checksum caching at one point, but disabling checksum caching did not change the symptoms.

 The problem was 'fixed' by doing a full backup. It appears that this caused rsync to properly compare checksums, and backuppc updated the file date - the next incremental backup did not check the files that previously had been copied in full. I 'touch'ed one of the files and verified that the next incremental backup checked the file, but rsync found no changed blocks.

During incremental backups, backuppc will back up any files not already in
the previous full, IIRC. Also, unless you change it in the config, backup
dump levels don't increment on each successive incremental; each
incremental is a level 1, meaning "back up all files changed since the
previous full."

You can set $Conf{IncrLevels} in the config to increasing levels
to change this behaviour, as sketched below.
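
A sketch of that setting, assuming 3.x's $Conf{IncrLevels}:

    # each incremental now diffs against the previous lower-level
    # backup instead of always diffing against the last full
    $Conf{IncrLevels} = [1, 2, 3, 4, 5, 6, 7];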

-Josh




Re: [BackupPC-users] What do I need for backing up the pool?

2010-07-01 Thread Josh Malone
The last time I backed up my entire pool I simply used 'dump' on the ext3 
filesystem on Linux. I'd simply run dump -a0f - /dev/disk | ssh freenas 
'cat > dumpfile', or any other way you choose to move the data from dump.




Re: [BackupPC-users] What do I need for backing up the pool?

2010-07-01 Thread Josh Malone
'dump' just reads the filesystem inode-by-inode and writes the contents to a 
file. It's a lot like 'dd', except it knows about the ext3 filesystem and won't 
copy empty blocks, and its output can be read using the 'restore' utility to pull out 
individual files.


So just mount your freenas and 'dump -a0f /mnt/freenas/backuppc-dump 
/dev/sda1'  or whatever your disk device is.
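
Getting a single file back out later looks something like this (a sketch; 
the dump-file path is taken from the example above):

    restore -if /mnt/freenas/backuppc-dump
    # inside restore's interactive shell: ls/cd to browse the dump,
    # 'add <file>' to mark files, 'extract' to write them into the
    # current directory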


-Josh



Re: [BackupPC-users] Cannot stat: No such file or directory

2010-06-30 Thread Josh Malone

huffie wrote:


huffie wrote:

err.. i did a link to tar for gtar...
[office bin]# ls -la gtar
lrwxrwxrwx 1 root root 3 May 24 18:38 gtar -> tar

could that be the reason?


 I tried with tar i get similar error

Running: /bin/tar -c -f - -C /samba  --totals .
full backup started for directory /samba
Xfer PIDs are now 10937,10936
/bin/tar: Substituting `.' for empty member name
/bin/tar: : Cannot stat: No such file or directory


D'oh! I misinterpreted the error message. Tar is being called correctly; the 
error message is coming from 'tar' itself. It appears that tar simply has 
nothing to back up, and 'cannot stat' the nothing it's supposed to be backing 
up. What is the $Conf{TarShareName} it's supposed to be backing up?  Also - what 
version of tar are you using? (tar --version)


-Josh



Re: [BackupPC-users] Cannot stat: No such file or directory

2010-06-29 Thread Josh Malone
On Mon, 28 Jun 2010 23:52:57 -0400, huffie
backuppc-fo...@backupcentral.com wrote:

 any ideas/suggestions?

Yeah:

 /bin/gtar: : Cannot stat: No such file or directory

Is that really the right path for your tar? Apparently the backuppc
command can't find it.

-Josh



Re: [BackupPC-users] Full Backups, Incrementals and Filling In

2010-06-21 Thread Josh Malone
On Sun, 20 Jun 2010 19:41:34 -0400, Mike Kallies mike.kall...@gmail.com
wrote:
 Hello Everyone,
 
 I've been using BackupPC in production and home environments for the
 past couple of years. It's a great program, but I'll add that it takes a
 while to get the hang of how it does things.
 
 So here's my question.  Is it necessary to take a full backup at a
 regular interval?  I mean, can't BackupPC fill-in the incrementals to
 simulate a full then do a superficial compare of the backup directory
 tree against the client's directory tree (e.g., compare the
 date/time/size/attributes)?  Sort of flattening an incremental?
 
 I want to do this because I'm currently setting up BackupPC to use rsync
 to keep backups of a hosted website. By default, my backup policy is
 going to take a full backup every week. This is going to transfer 1GB
 of data every week... that's not ideal. The web host isn't going to
 like me, and it shouldn't be necessary for me to transfer that much
 data.

Ah - when using 'rsync' or 'rsyncd' as the transport, backuppc has a
slightly different notion of what a full backup is than other backup
software.

When using rsync, only the block differences are EVER sent across the wire
(unless you add '--whole-file' to the cmdline, IIRC). In backuppc, the
only difference between a full and an incremental is how rsync determines
what data to send. In an incremental, files are checked via last-modified
time; in a full backup, a block-level compare is done across all files.
This means the only real performance hit on a full backup is that the
client rsync must read every block of every file -- but no extra data is
sent across the wire.
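
You can see the same distinction with plain rsync between two local trees
(paths are hypothetical):

    # incremental-style: skip files whose size and mtime already match
    rsync -a /src/ /backup/
    # full-style: checksum every file regardless of timestamps; only
    # changed blocks still cross the wire, but every block gets read
    rsync -a --ignore-times /src/ /backup/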

-Josh



Re: [BackupPC-users] hello backuppc network problem of newbie

2010-06-18 Thread Josh Malone

fakessh wrote:

hello all
hello backuppc network
hello list

I'm a backuppc newbie; I've spent two days trying to make it work.
I'm running CentOS 5.5 with BackupPC-3.1.0-6.el5.
I configured a user with passwordless ssh key access,
and the ssh access is OK.
I use rsync.
I apologize for being a little wordy and vague, but I don't know how
to be clearer or more precise.


when I run BackupPC in verbose mode I get this
[r...@localhost swilting]# su - backuppc
-bash-3.2$ /usr/share/BackupPC/bin/BackupPC_dump -v   -f r13151.ovh.net
cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 r13151.ovh.net
cmdSystemOrEval: finished: got output PING r13151.ovh.net (87.98.186.232)
56(84) bytes of data.
64 bytes from r13151.ovh.net (87.98.186.232): icmp_seq=1 ttl=55 time=57.9
ms

--- r13151.ovh.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 57.965/57.965/57.965/0.000 ms

cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 r13151.ovh.net
cmdSystemOrEval: finished: got output PING r13151.ovh.net (87.98.186.232)
56(84) bytes of data.
64 bytes from r13151.ovh.net (87.98.186.232): icmp_seq=1 ttl=55 time=56.6
ms

--- r13151.ovh.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 56.645/56.645/56.645/0.000 ms

CheckHostAlive: returning 56.645
full backup started for directory /
started full dump, share=/
Running: /usr/bin/ssh -q -x -l root r13151.ovh.net /usr/bin/rsync --server
--sender --numeric-ids --perms --owner --group -D --links --hard-links
--times --block-size=2048 --recursive --ignore-times . /
Xfer PIDs are now 7042
xferPids 7042
Read EOF: Connexion ré-initialisée par le correspondant
Tried again: got 0 bytes
Done: 0 files, 0 bytes
Got fatal error during xfer (Unable to read 4 bytes)
cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 r13151.ovh.net
cmdSystemOrEval: finished: got output PING r13151.ovh.net (87.98.186.232)
56(84) bytes of data.
64 bytes from r13151.ovh.net (87.98.186.232): icmp_seq=1 ttl=55 time=57.2
ms

--- r13151.ovh.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 57.263/57.263/57.263/0.000 ms

cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 r13151.ovh.net
cmdSystemOrEval: finished: got output PING r13151.ovh.net (87.98.186.232)
56(84) bytes of data.
64 bytes from r13151.ovh.net (87.98.186.232): icmp_seq=1 ttl=55 time=56.7
ms

--- r13151.ovh.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 56.791/56.791/56.791/0.000 ms

CheckHostAlive: returning 56.791
Backup aborted (Unable to read 4 bytes)
Not saving this as a partial backup since it has fewer files than the
prior one (got 0 and 0 files versus 0)
dump failed: Unable to read 4 bytes
-bash-3.2$ 


Are you able to ssh to the target (client) machine *as the backuppc user*? 
It's possible that the ssh command is waiting for a user to accept the remote 
host key of the client machine:


(su - backuppc, then: ssh -l root r13151.ovh.net)

-Josh




Re: [BackupPC-users] [Solved] Slow web-GUI access

2010-05-12 Thread Josh Malone

 Something less than a dozen is a small number. With 3, every write is
 going to require every disk head to seek, and you'll wait until the
 slowest one completes before the next operation.

Not to be contrary, but I'm running just fine on a 4-spindle md-raid5
system under RHEL5 for my backuppc server. The array has four 400GB WD SATA
disks. I think it helps to tune the array slightly:

 Chunk Size : 256K

Also, make sure native command queueing is enabled and working (hdparm -I
/dev/sdx)

BackupPC_nightly runs in 15 minutes across a pool like:

# Pool is 345.69GB comprising 1212100 files and 4369 directories (as of
5/12 02:07),
# Pool hashing gives 8331 repeated files with longest chain 192,
# Nightly cleanup removed 380 files of size 0.19GB (around 5/12 02:07),
# Pool file system was recently at 37% (5/12 09:54), today's max is 37%
(5/12 02:00) and yesterday's max was 37%.

Also, read performance shouldn't be a problem on a raid-5, but you can
increase the per-spindle read-ahead using

   blockdev --setra 4096 /dev/sdx

This reads ahead 4096 512-byte sectors (2MB) from each spindle. I haven't
set this on this machine, but I do on others. This is a per-boot setting, so
do it in rc.local or equivalent.

-Josh



Re: [BackupPC-users] The dread Unable to read 4 bytes / Read EOF: Connection reset by peer

2010-05-11 Thread Josh Malone
On Mon, 10 May 2010 20:14:54 -0500, Nick Bright o...@valnet.net wrote:
 Let me start off by saying this: I know what I'm doing.
 
 I've been running backuppc for about two years on two servers, backing 
 up about 25 servers (mix of windows and linux).
 
 This is the ONLY machine I've ever had this problem with that wasn't SSH
 authentication problems, and what's worse is that it worked for almost a
 year before it stopped working. I'm convinced it's something about the
 shell or environment of the client system, but I've been trying to
 figure this out since last *November* and I'm just not getting anywhere
 with it. Every single hit just says your SSH keys are messed up, but
 they most certainly are /not/ messed up, as evidenced below.

 All of the configuration is the same as my other linux servers. I can
 find absolutely nothing preventing this from working, but it fails every
 time!

diag info snipped

Hi. I run BackupPC on RHEL5 and I've never had the slightest problems.
That said, here's all I can think of:

Have you checked the 'messages' and 'secure' logs on the target server?
Are your target servers (the backed-up hosts) also running CentOS5?

I would try running the actual rsync-over-ssh command as the backuppc user
on the backuppc server:

   /usr/bin/ssh -q -x -l root TargetServer /usr/bin/rsync --server \
  --sender --numeric-ids --perms --owner --group -D --links \
  --hard-links --times --block-size=2048 --recursive --port=38271 \
  --ignore-times . /

And see if you get anything from that command or in the logs of the target
server.

If all else fails, try starting sshd in the foreground (sshd -D) on the
target server and watch the connection and process start up.

-Josh



Re: [BackupPC-users] The dread Unable to read 4 bytes / Read EOF: Connection reset by peer

2010-05-11 Thread Josh Malone
On Tue, 11 May 2010 12:26:47 -0500, Nick Bright o...@valnet.net wrote:
 On 5/11/2010 9:53 AM, Les Mikesell wrote:
 On 5/10/2010 8:14 PM, Nick Bright wrote:

 Let me start off by saying this: I know what I'm doing.

  
 [...]

 full backup started for directory / (baseline backup #259)
 started full dump, share=/
 Running: /usr/bin/ssh -q -x -l root TargetServer /usr/bin/rsync
--server
 --sender --numeric-ids --perms --owner --group -D --links --hard-links
 --times --block-size=2048 --recursive --port=38271 --ignore-times . /
  
 What does --port=38271 mean here?  Isn't that supposed to be used for
 standalone rsyncd, not over ssh?


 Thank you for taking the time to reply.
 
 It's the port that SSHD is listening on. I had been stripping that out 
 because the guy that runs the target server is a little paranoid about 
 his SSH access.

Can you run:

  /usr/bin/ssh -q -x -l root TargetServer /usr/bin/rsync --version

and get proper output? If not, then something is up with the remote
server. You mention the paranoia of the remote server admin -- is it possible
that in his authorized_keys file he's limited the command that can be run
via that key? If so, is it exactly correct? That's a huge recipe for ssh disaster
in my experience w/ backuppc.
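
For reference, this is the sort of restriction I mean -- a hypothetical
forced-command entry in ~root/.ssh/authorized_keys on the client (key
material elided). If the quoted command doesn't exactly match what
BackupPC sends, every dump dies with exactly this symptom:

    command="/usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /",no-pty,no-port-forwarding ssh-rsa AAAA... backuppc@server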




Re: [BackupPC-users] Bare metal restore

2010-05-10 Thread Josh Malone
On Mon, 10 May 2010 14:02:53 +0200, Boniforti Flavio
fla...@piramide.ch wrote:
 Hello list...
 
 I was wondering if I may be doing some sort of bare metal restore of a
 Linux server, if I'd be backing it up *completely* on my backuppc
 server.


Theoretically, a bare-metal restore should be possible by backing up the
entire filesystem. The restore procedure on new bare metal would be
(sketched in shell after the list):

  1. Boot from rescue media
  2. Partition the new disk and mkfs it
  3. Restore the server image to the new disk (either by networked rsync
or untarring a tarball downloaded from the backuppc restore interface)
  4. chroot into the restored disk and install grub (bootloader)
  5. Exit the chroot, unmount the new disk and reboot the system
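
A shell sketch of those five steps -- every device name, mount point and
tarball path here is an assumption, and the bootloader step varies by
distro:

    # 2. partition and mkfs the new disk
    parted --script /dev/sda mklabel msdos mkpart primary ext3 1MiB 100%
    mke2fs -j /dev/sda1
    mount /dev/sda1 /mnt/restore
    # 3. unpack a tar restore downloaded from the BackupPC web interface
    tar -xpzf /tmp/host-restore.tar.gz -C /mnt/restore
    # 4. chroot in and install the bootloader
    mount --bind /dev  /mnt/restore/dev
    mount --bind /proc /mnt/restore/proc
    chroot /mnt/restore grub-install /dev/sda
    # 5. clean up and reboot
    umount /mnt/restore/proc /mnt/restore/dev /mnt/restore
    reboot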


In practice though, I've found it takes lots of tries to perfect the above
procedure and it's often easier to re-install the base OS and just restore
critical config files, application files and data to the box. Bare-metal
restores *sound* sexy, but really they're often just not useful.

-Josh



Re: [BackupPC-users] Bare metal restore

2010-05-10 Thread Josh Malone
On Mon, 10 May 2010 16:41:16 +0200, Boniforti Flavio
fla...@piramide.ch wrote:
 
 I liked your explanation... ;-)
 I think I'll be doing *full* backuppc backup of my server as a first
 step to have constant backups.
 
 My thoughts are related to eventually recovering the situation. As the
 server I want to back up is merely a squid proxy, I don't have to back
 up great amounts of data as I would in the case of a backuppc pool
 itself.
 What my concern is about is the fact that when I'd be reinstalling from
 scratch on a new HDD, how would I get to the same state of installed/not
 installed packages as it was on its latest useful backup? Is there any
 way to somehow extract some sort of System State (like on Windows
 boxes) to know which packages are installed, and which aren't?

The best way to make sure your OS installs are repeatable and
deterministic is to script them. Here we use RHEL, so we install
machines via kickstart. Previously I've used wrapper scripts around
'sysinstall' on FreeBSD and similar for Debian's installer (with lots of
help from its author). If you have your OS install procedure automated you
never have to worry about bare-metal restores. Just kick off the
re-install, then restore the unique data... you can even restore all of
/etc to the newly-installed box and it should work (modulo any changes to
fstab, ethernet devices, etc...)
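
On the which-packages question specifically, a sketch for an RPM-based box
(file locations are arbitrary):

    # capture the installed-package set alongside the regular backups...
    rpm -qa --qf '%{NAME}\n' | sort > /etc/installed-packages.txt
    # ...and replay it after the scripted reinstall
    yum -y install $(cat /etc/installed-packages.txt)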

-Josh



Re: [BackupPC-users] Best way to backup Windows clients?

2010-05-06 Thread Josh Malone
On Thu, 6 May 2010 08:51:12 -0700, Kris Lou k...@themusiclink.net wrote:
 Hey people,
 I'm wondering what the preferred method is for backing up Windows
 clients, especially in terms of performance. Currently, I'm simply
 mounting windows drives via SMB/autofs and running rsync over that. I
 like it because I don't have to install any additional software on the
 clients. But I know/read that a lot of people run Cygwin and rsync
 directly. What are the advantages of that?
 Thanks,

I find rsyncd works better than smbfs, as it pretty much bypasses permissions
(if I run the rsyncd as a local service) and then I just have to deal with
(much simpler) rsyncd access control (allowed hosts, secrets, etc.).

Your mileage may vary, but moving my one client from smb to rsyncd solved all
my 'access denied' problems (after a good deal of attempting to fix the smb
access).
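
For anyone trying this, a minimal sketch of the client-side rsyncd.conf
(module name, paths and addresses are all made up):

    [cDrive]
        # allow only the backuppc server in
        path = /cygdrive/c
        read only = true
        auth users = backuppc
        secrets file = /etc/rsyncd.secrets
        hosts allow = 192.168.1.10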

-Josh



Re: [BackupPC-users] Backuppc backup Freebsd box

2010-04-23 Thread Josh Malone
On Thu, 22 Apr 2010 22:40:51 -0400, carnivora
backuppc-fo...@backupcentral.com wrote:
 Thanks for your reply, guys.
 
 I made it!
 I just replaced /usr/bin/rsync with /usr/local/bin/rsync in my conf.

Right - I completely forgot ports install to /usr/local on BSD. It's been
a while since I ran BSD... sorry.



Re: [BackupPC-users] Backuppc backup Freebsd box

2010-04-22 Thread Josh Malone
On Thu, 22 Apr 2010 00:01:32 -0400, carnivora
backuppc-fo...@backupcentral.com wrote:
 Dear all,
 
 Has anybody here already configured backuppc to back up a FreeBSD box?
 I tried but failed!
 
 
 The error is: 
 
 Fatal error (bad version): /usr/bin/rsync: Command not found.

Well, there's your problem -- rsync isn't installed on the backed-up host. 

cd /usr/ports/net/rsync && make install

-Josh



Re: [BackupPC-users] Migrating backup machines

2010-04-21 Thread Josh Malone
When I did this last time I just used dump (on Linux).

mke2fs -j /dev/newdisk
mount /dev/newdisk /mnt/newdisk
umount /dev/olddisk
cd /mnt/newdisk
dumpe2fs -a0f - /dev/olddisk | restore -rf -

# go get some coffee... and a bagel... and wait some more

rm restoresymtable
cd /
umount /mnt/newdisk


There - /dev/newdisk now has all your data. You can also run some tricks
to dump over the network by piping the output of dump into ssh, nc, etc.
Just pipe the output of 'nc -l' into 'restore -rf -' in the proper
directory on the new machine. 

Something like dump -a0f - /dev/olddisk | ssh r...@remotemachine '(cd /newpath; restore
-rf -)' should also work, but check 
me on the pipe-into-subshell bit. Haven't tried it before -- I always just
mount the old disk on the new machine.

-Josh




Re: [BackupPC-users] Migrating backup machines

2010-04-21 Thread Josh Malone
On Wed, 21 Apr 2010 18:33:17 +0200, Frédéric Massot
frede...@juliana-multimedia.com wrote:
 Josh Malone a écrit :
 When I did this last time I just used dump (on Linux).
 
 mke2fs -j /dev/newdisk
 mount /dev/newdisk /mnt/newdisk
 ummount /dev/olddisk
 cd /mnt/newdisk
 dumpe2fs -a0f - /dev/olddisk | restore -rf -
 
 I think you intended to write dump -a0f command from the dump Debian
 package.  :o)

You're right -- on RHEL5 the binary is dump; dumpe2fs is actually a
different utility (it prints ext2/3 filesystem metadata). Anyway,
'dump -a0f -' is the right command.

And yes... this will take a long time, but it will handle hardlinks properly.

-Josh



Re: [BackupPC-users] Backuppc: browsing Archive backup

2010-04-12 Thread Josh Malone
On Mon, 12 Apr 2010 22:35:30 +0200, Antonio Ramos Recoder (MONTREL S.
A.) antonio.ra...@montrel.es wrote:
 Dear Josh, 
 
 Thank you for your answer. Unfortunately, I think I didn't explain
 myself properly, so I'll try again: 
 
 The backup policy I use is the one that the program offers me, which
 means a full backup on day one and six incremental backups on days 2
 through 7. The program also allows me to save these one-week backups to
 different DVDs or external drives. 

How are you saving this backup to dvd/external drive? I assumed you were
using the 'archive' host type. All this does is copy the files from the
pool to the external media. These files are still in your pool and
restorable without needing this external media.


 The problem I have is that now I want to restore a particular week,
 let's say from last April, that I backed up using this system to an
 external drive. How can I do this?

Your backups from last April should still be in your pool, provided you
keep enough fulls and have enough space. Make sure you set
$Conf{FullKeepCnt} high enough to keep data as far back as you
like. My server still has data from November of 2008 in its oldest full
backups.
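
A sketch of the exponential form of that setting (counts are examples, and
assume the default weekly $Conf{FullPeriod}):

    # keep 4 weekly fulls, 2 at 4-week spacing, 2 at 32-week spacing --
    # old fulls thin out gradually instead of all expiring at once
    $Conf{FullKeepCnt} = [4, 0, 2, 0, 0, 2];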

If you have an 'archive' on external media, you don't need backuppc, just
untar the files and copy them over to the machine you want to restore them
onto.

  I would also like to know the proper way to use backuppc if I want to
 continue to work this way (saving the weekly backups to external drives
 and later on, when the need arises, restoring one of these backups). Am I
 using the program in an efficient way? What do you recommend? 

I think perhaps you don't have your backup policy set quite right for your
requirements. Look over the documentation for the backup schedule:

 
http://backuppc.sourceforge.net/faq/BackupPC.html#what_to_backup_and_when_to_do_it


-Josh



Re: [BackupPC-users] Which files were actually backed up?

2010-04-09 Thread Josh Malone
On Fri, 09 Apr 2010 09:58:41 -0400, Leandro Tracchia
itmana...@alexanderconsultants.net wrote:
 That means I have to know ahead of time which 
 directories were backed up. What I'm looking for is a 
 flat list of the full pathnames of files that were 
 backed up. A new list should be generated for every 
 job. I'm guessing there is no such feature for this.
 
 Perhaps some programmers out there know how to 
 implement this.

# cd /path/to/backuppc/pc/yourclient

# /path/to/backuppc/bin/BackupPC_zcat XferLOG.backup_number.z | grep
'create ' | grep -v 'create d'
  create   700 4294967295/4294967295  162816 file_that_was_backed_up
  create   700 4294967295/4294967295  662528 another_file
  create   700 4294967295/4294967295  263680 yet_another

-Josh




Re: [BackupPC-users] BackupPC_nightly takes too much time

2010-04-07 Thread Josh Malone
On Wed, 7 Apr 2010 16:47:55 -0400, David Williams
dwilli...@dtw-consulting.com wrote:
 Josh,
 
 Interesting that.  Is there an easy way to convert from ext3 to ext4? Or
do
 you need to reformat?  Also, did you change all your hard drives to ext4
or
 just the drive backuppc backs up to?

It can be migrated in place (#include always-backup.h):

   umount /dev/foo
   tune4fs -O extents,uninit_bg,dir_index /dev/foo
   fsck -pDf /dev/foo

I did not convert the / filesystem to ext4... just out of laziness (as I'd
have to boot from CD to do it, etc.)

-Josh



Re: [BackupPC-users] BackupPC_nightly takes too much time

2010-04-07 Thread Josh Malone
On Wed, 7 Apr 2010 15:56:19 -0500, Richard Shaw hobbes1...@gmail.com
wrote:
 On Wed, Apr 7, 2010 at 5:29 AM, Tino Schwarze backuppc.li...@tisc.de
 wrote:
 On Wed, Apr 07, 2010 at 12:11:31PM +0200, Norbert Schulze wrote:
 
 [SNIP]
 
 OS is Ubuntu 9.04 32Bit
 IMHO it is better to migrate to a 64Bit-System!?

 I don't see an urgent reason to migrate to 64 bit... I would have
 installed this machine 64 bit at the beginning, just because it's a 64
 bit machine. You'll lose some performance, but it might be barely
 noticeable.
 
 I'm not so sure that's the case. My understanding is that a 32-bit OS
 can only address a little over 3GB of physical memory; since the
 system has 8GB, I would think you would want to upgrade to a 64-bit
 OS.
 
 Richard

Without PAE, you can only use about 3.5G of RAM on a 32-bit system; with a
PAE kernel the system can address more. HOWEVER, each individual process
still has only a 4GB virtual address space, so no single process can use
more than that. Unless you have one memory-intensive process that needs
more, a 32-bit PAE system can still make use of 8G of RAM.
-Josh



Re: [BackupPC-users] pre-backup encryption? user wants fi les to be inaccessible even to me :-)

2010-03-29 Thread Josh Malone
On Mon, 29 Mar 2010 10:19:08 -0400, Frank J. Gómez fr...@crop-circle.net
wrote:

 course of their day-to-day activities.  If I get hit by a bus, they are
 going to be in a bit of trouble.  What measures do y'all have in place
 to ensure your employer can continue on without you?

Hard-copy password sheets in a safe hardly ever fail. Get one with a combo
lock and tell a trusted individual (CEO maybe? board chairman?) the code.
Ideally, the sheet should be in a signature-sealed envelope (sign across
the seal) so you can tell if this individual has accessed the sheet. 

Alternately, you can buy safes with both a key and combo lock: give 1
person the key, tell another the combination -- presto: 2-factor/2-person
authentication.



Re: [BackupPC-users] Installing on Red Hat Enterprise

2010-03-26 Thread Josh Malone
On Fri, 26 Mar 2010 10:21:16 -0400, John  BORIS jbo...@adphila.org
wrote:
 I am trying to get BackupPC installed on Red Hat Enterprise. I
 downloaded the latest file from sourceforge
 
 BackupPC-3.2.0beta1.tar.gz
 
 Following the directions in the documents I ran
 
 perl configure.pl
 
 I used all of the default settings. When I was finished I then copied
 the linux-backuppc file from the init.d directory to /etc/init.d.
 
 I copied the BackupPC.conf file from the httpd directory of the src to
 /etc/httpd/conf.d.
 
 I edited the BackupPC.conf file (in /etc/httpd/conf.d) so the directory
 portion points to /usr/local/BackupPC
 
 When I try http://localhost/BackupPC
 
 I get challenged for a username and password and this will read the
 .htaccess file I created but then I get nothing. I edited the hosts file
 in /etc/BackupPC for the two hosts I am starting with also.
 
 This is the umpteenth time I have tried this install. Previous users
 I've been in contact with tell me that BackupPC should be getting
 installed in /usr/share/BackupPC and that the programs live in an sbin
 directory. This sbin directory never gets created?
 
 
 Is there another tar ball I need to start with for Red Hat, or some src
 files I should be using.
 
 Any pointers would be greatly appreciated.

Is the backuppc process running?

Are you using suexec? If this is a dedicated backuppc server (like mine)
just change the apache user to 'backuppc'. I'm running on RHEL5.4 using the
standard tarball with no issues, but with apache running as backuppc.

-Josh



Re: [BackupPC-users] pre-backup encryption? user wants fi les to be inaccessible even to me :-)

2010-03-23 Thread Josh Malone
On Tue, 23 Mar 2010 15:41:14 -0400, Frank J. Gómez fr...@crop-circle.net
wrote:
 I have an interesting situation here.  One of my users refuses to
 participate in the system of backups because she's concerned about the
 security of her files.  She agreed to participate if I can make the
 system work such that even I am unable to see the contents of her
 files.  She's running Windows -- XP Home, I believe.

Stop. Proceed no further.

I fought this battle at a previous employer with a member of the legal
team who refused to allow any possibility of the sysadmins seeing the data on
his computer. We eventually gave him an external Jaz drive and made him
swear to back it up himself.

Said company left a multi-million dollar hole in the ground when it
eventually cratered (not just tanked).

In IT you have to trust your sysadmins. If you don't trust the people who
run your security, your networks, your backups, etc., what are they doing
working for you? If possible, bring this situation to the user's supervisor
and let him/her know the risk that the user is putting on the company by
not backing up. If not possible, quit. I'm not trolling; I'm actually
serious. You don't want to be anywhere near a company that has this lack of
faith in its IT department.

http://lopsa.org/CodeOfEthics

-Josh




Re: [BackupPC-users] Having Problems Creating Archives

2010-03-23 Thread Josh Malone
On Tue, 23 Mar 2010 12:29:16 -0700, Kris Lou k...@themusiclink.net
wrote:
 Hey,
 I have a working installation of 3.2.0beta1 on CentOS, but for some
 reason I can't get the Archive function to work.  Working from the web
 interface, I create the archive host, change the target to
 /Backups/Archive (mounted share on the BackupPC server), check the
 hosts I want archived, and then it fails with: 
 Error: Can't open/create /Backups/pc/localhost/archiveReq.1035.0 
 (archive hostname = localhost for this trial).
 It doesn't create the pool for this host. Is anybody else having these
 problems with 3.2.0beta1? The archive string/command is the default:
 $Installdir/bin/BackupPC_archiveHost $tarCreatePath $splitpath $parpath
 $host $backupnumber $compression $compext $splitsize $archiveloc
 $parfile

When I created my archive host I had to manually mkdir the equivalent of
/Backups/pc/localhost and chown it to backuppc. It seems that since no
backups are done of 'archive' hosts, the pc/ directory isn't created
automatically like it is for regular hosts.
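
That is, something like this (TopDir taken from the error message above;
the user/group name is whatever your installation runs as):

    mkdir -p /Backups/pc/localhost
    chown backuppc:backuppc /Backups/pc/localhost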

-Josh



Re: [BackupPC-users] Install issues BackupPC on RHEL5

2010-03-18 Thread Josh Malone
On Thu, 18 Mar 2010 09:19:23 -0400, John  BORIS jbo...@adphila.org
wrote:
 I am trying to get BackupPC installed correctly on RHEL 5. The install
 proceeds just fine without errors but the Apache side was not installed
 properly. The httpd.conf does not include the proper information so it
 is not pointing to the correct directory structure and the
 authentication is not correct. Has anyone on the list got this  working
 properly on Red Hat? I am using the latest version 3.2.0 beta1.
 
 TIA

I'm running BPC on RHEL5 quite happily. I installed from source tarball
into /usr/local/backuppc. 
My server is dedicated to backup, so I just run apache as the backuppc
user.

   User backuppc
   Group apache

   ScriptAlias /cgi-bin/ /usr/local/backuppc/cgi-bin/


I also use Active Directory for authentication using mod_authz_ldap:

<Location /cgi-bin/BackupPC_Admin>
    SSLRequireSSL
    AuthBasicProvider ldap
    AuthType Basic
    AuthzLDAPAuthoritative off
    AuthName "Active Directory Authentication"
    AuthLDAPURL "ldap://our.windows.pdc:3268 our.windows.bdc:3268/DC=example,DC=com?sAMAccountName?sub"
    AuthLDAPBindDN stubu...@example.com
    AuthLDAPBindPassword password
    Satisfy All
    Require valid-user
    Order allow,deny
    Allow from all
</Location>




IIRC, I had to install a few extra perl modules using cpan but that was
trivial.



Re: [BackupPC-users] archive host questions

2010-03-08 Thread Josh Malone
On Sun, 7 Mar 2010 17:42:10 -0600 (CST), Gerald Brandt g...@majentis.com
wrote:
 Hi,
 
 I've been trying to automate archiving (from iSCSI to a USB drive) for
 offsite backups. Last Friday's run went fine (but took almost 20 hours). This
 Friday's failed at exactly 1200 minutes in, with a SIGALRM. Is there a time
 limit on archives?
 
 The backups are only 282 GB total (written to the USB drive) on a quad-core
 xeon with 1.5 GB RAM. Should it be this slow? I do see a lot of CPU time
 in WAIT.
 
 Can I get more than one archive running at a time? That is, I have 20 hosts
 I want to archive; can I have 4 archive processes running at once, each
 working with 5 different hosts?
 
 Thanks,
 Gerald

I don't know about the hard time limit, but I'm writing about 56GB
(compressed) of archives to FireWire drives in ~4 hours. I notice
compression is the limiting factor in my archives, but I'm willing to take
the performance hit to save space, as my backups are VERY compressible.

On a quad-core CPU, you should basically see one core maxed out per
archive process, since it will be running the compression program (gzip,
bzip2, etc.), with another core handling the tar-create process. So, even if
it's possible to run more than one archive job at a time (I don't know if it
is), you won't really be able to run more than two on a quad-core box before
stalling again on CPU time.

What compression setting are you using? As I said, gzip is definitely the
bottleneck in my archives. Also, if you can get away from USB and switch to
eSATA or FireWire you'll see good improvements there. USB is quite a CPU
hog too, in my experience.


Re: [BackupPC-users] high load and stuck processes

2010-03-05 Thread Josh Malone

 It's hard to judge; but basically if there are a lot of processes
 waiting for I/O (a 'D' state in 'top'), try cutting down the number of
 concurrent backups. You'll have to judge for yourself what the best
 number for you is. It may be that things work fastest when there's a
 certain amount of disk contention, but no more and no less.

Also - you need a good filesystem to handle lots (or even not so many) of
backups. I recently switched from ext3 to ext4 and saw an order of
magnitude reduction in backup time and system load (I kid you not: 10+
hours down to 1). Unfortunately, I think this exposed some problems in the
RHEL5 ext4 code, so I also switched from 32-bit RHEL5 to 64-bit; that
seems to have cleared up the problems.
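
A quick way to spot those 'D' state processes the advice above mentions
(sketch; assumes the Linux procps version of ps):

   ps -eo state,pid,comm | awk '$1 == "D"'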

-Josh


-- 

   Joshua Malone   Systems Administrator
 (jmal...@nrao.edu)NRAO Charlottesville
434-296-0263 www.cv.nrao.edu
434-249-5699 (mobile)
BOFH excuse #202:

kernel panic: write-only-memory (/dev/wom0) capacity 
exceeded.


--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup From Web Browser

2010-03-05 Thread Josh Malone
On Fri, 05 Mar 2010 12:11:08 -0600, Les Mikesell lesmikes...@gmail.com
wrote:

 I reconfigured a host and then went to that host and asked for a full
 backup. Same results. The odd thing is that the first check comes back
 as ok. But when you go back to the host page, the failed ping count has
 increased.

Silly question: can the backuppc user run the configured ping command?
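
An easy way to check (sketch; adjust the path to whatever $Conf{PingPath}
points at on your install, and use a real host name in place of
'somehost'):

   sudo -u backuppc /bin/ping -c 1 somehost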

-- 

   Joshua Malone   Systems Administrator
 (jmal...@nrao.edu)NRAO Charlottesville
434-296-0263 www.cv.nrao.edu
434-249-5699 (mobile)
BOFH excuse #426:

internet is needed to catch the etherbunny


--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] high load and stuck processes

2010-03-05 Thread Josh Malone
On Fri, 05 Mar 2010 13:38:16 -0500, Raman Gupta rocketra...@fastmail.fm
wrote:
 When you switched to ext4 and got this performance improvement, did 
 you simply upgrade your existing ext3 volumes via tune2fs, or did you 
 rebuild the entire filesystem so that the existing on-disk structures 
 were migrated as well?
 
 Cheers,
 Raman

The command I ran was:

  umount /dev/foo
  tune4fs -O extents,uninit_bg,dir_index /dev/foo
  fsck -pDf /dev/foo

I can't remember if I had already mounted noatime before I made the switch
or not, but I also set noatime in the mount options. This is an md-raid5
configuration across 4 7k spindles. It was choking on about 9 GB of MySQL
data per night and it's now fine. Transfer speed went from 0.5-1
MBytes/sec to 20+. I suspect that one of the MySQL tables, which is 2 GB+
in size, was choking the filesystem pretty badly (but I can't prove it).
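
If you go this route, it's worth confirming the new features actually
took, and making noatime permanent (sketch; the device and mount point
below are just examples):

   tune4fs -l /dev/foo | grep -i features
   # /etc/fstab entry, for example:
   # /dev/foo  /var/lib/backuppc  ext4  defaults,noatime  0 2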

-Josh

-- 

   Joshua Malone   Systems Administrator
 (jmal...@nrao.edu)NRAO Charlottesville
434-296-0263 www.cv.nrao.edu
434-249-5699 (mobile)
BOFH excuse #426:

internet is needed to catch the etherbunny


--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Copying BackupPC to tape for off-site storage - very slow

2010-03-02 Thread Josh Malone
On Tue, 2 Mar 2010 10:53:15 -0600, Sean Carolan scaro...@gmail.com
wrote:
 Hello BackupPC users:
 
 We have a BackupPC system that has been working well for us for the
 past three years.  There is about 1.2 terabytes of data on our
 BackupPC partition and we'd like to be able to spool it off to tape
 for off-site storage.  We have an HP d2d device that gets about 50-60
 MB/s throughput during testing.  When I try to back up our backuppc
 partition however, I only get around 25MB per *minute*.  At this rate
 it will take days to back up the entire partition.  I'm using bacula
 to manage the tape backups.
 
 Would this go faster if I unmounted the partition and tried to do a
 block-level copy of the entire thing?  How would you handle this?

Just a user in a similar situation chiming in :)

Wanting to do the same thing as you (but with a smaller pool), I started
using 'dump' to dump the pool disk, first to tape and then to disk, and
found that dump consumed an awful lot of memory due to the number of hard
links it had to deal with. I switched to using GNU tar and things were
quite manageable. I'm not familiar with your HP device, but I just tarred
off to an external AIT drive at first and changed tapes manually when
needed. The dump took a while (AIT ain't fast) but I wasn't worried.
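
For reference, the tape side of that was nothing fancy; a sketch (the
device name and pool path are examples, and a large blocking factor helps
keep the drive streaming):

   tar -b 256 -cvf /dev/nst0 /var/lib/backuppc
   mt -f /dev/nst0 rewoffl    # rewind and eject when finished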

However, I've now switched to just making aux copies using the 'archive'
host type built into BackupPC. The disadvantage is that I don't get the
_entire_ backup history off site, just the latest synthetic full. The
advantages, however, are numerous:

  - it takes far less time (and would take _even_ less if I turned off
compression)
  - it's triggered from the web CGI, so I can hand this task off to an
operator
  - the aux copies are just tarballs, so you can do a bare-metal restore
if needed; you don't need BackupPC or its utils to read them (see the
sketch below)
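
A sketch of such a restore (the tarball name below is just an example;
actual file names and compression depend on your archive host settings):

   # restore an archive tarball on a machine with no BackupPC installed
   cd /mnt/restore
   gzip -dc /mnt/usbdrive/somehost.0.tar.gz | tar -xvf -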

I'm now just using external FireWire hard drives, since I'm only writing
about 60 GB instead of the full ~380 GB in my pool. I make an aux copy
every 2 weeks and take it off site. I can hold about 4 to 5 months of aux
copies on the 4 drives I have available.

In short, unless you *need* to preserve the entire backup history in the
event of a full-site catastrophic failure, I'd just use archive hosts to
create an aux copy.

-Josh

-- 

   Joshua Malone   Systems Administrator
 (jmal...@nrao.edu)NRAO Charlottesville
434-296-0263 www.cv.nrao.edu
434-249-5699 (mobile)
BOFH excuse #202:

kernel panic: write-only-memory (/dev/wom0) capacity 
exceeded.


--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Can you make CGI browse page not expand first share by default?

2010-02-12 Thread Josh Malone

Hi,

I've done what I think is pretty good searching for the answer, but apologies 
if this is common knowledge.


When browsing the contents of a backup in the CGI, the behavior seems to be 
for it to automatically expand the tree view of the first share in the backup. 
Furthermore, there seems to be no way to collapse this share other than to 
expand a different one. This may seem trivial, but my primary use of BackupPC 
is to back up /etc from every system in my server room. Since etc, starting 
with 'e', is usually the first share in the list, browsing a backup starts 
with a very long page and I have to scroll way down to get to a different 
share (say, 'var').


Is there any way to make the CGI's browsing interface not expand the first 
share by default? Or, if not, is there a way to just collapse the first 
share so I can quickly view the others? This seemingly minor annoyance has 
gotten aggravating to the point where I'd really like to do something about it.


Also - if this description is confusing, I can provide screen shots 
illustrating my point. :)


Thanks,

-Josh Malone

--

   Joshua Malone   Systems Administrator
 (jmal...@nrao.edu)NRAO Charlottesville
434-296-0263   www.nrao.edu
434-249-5699 (mobile)
BOFH excuse #16:

somebody was calculating pi on the server


