Re: [BackupPC-users] BackupPC 3.x Pool on BTRFS?

2016-02-18 Thread Russell R Poyner
Christian,

I've run BackupPC on zfs for several years. As you point out there is a 
lot of overlap between the features of BackupPC and COW file systems. 
However, BackupPC is not designed to take advantage of COW filesystems 
specifically.

My observations:

1. I saw minimal benefit from zfs deduplication when backing up desktop 
machines. It seems that BackupPC's hard-link based dedupe gets most of the 
available benefit, leaving very little duplicated data for zfs to dedupe. 
Also, zfs dedupe in general doesn't work well and is widely discouraged. 
I've no experience with dedupe on btrfs.

2. Using filesystem compression instead of BackupPC compression works 
fine for me, though I haven't really compared the two. You only want one 
compression system: if you use BackupPC compression, your filesystem 
compression should be off, and if you use the filesystem's compression, 
BackupPC's compression should be off.
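Concretely, that choice looks something like this (the mount point and 
device are placeholders, and zlib is just one of btrfs's compression 
options):

  # In BackupPC's config.pl: turn off BackupPC's own compression.
  $Conf{CompressLevel} = 0;

  # Mount the btrfs pool filesystem with transparent compression instead.
  mount -o compress=zlib /dev/sdb1 /var/lib/backuppc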

3. I've not done physical pool migration, but zfs send can do some cool 
things. Presumably btrfs send can do similar things.
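Presumably the btrfs variant would look roughly like the following, 
assuming the pool directory is a btrfs subvolume (untested by me; the 
paths and host are placeholders, and btrfs send needs a read-only 
snapshot):

  # Snapshot the pool read-only, then stream it to the new machine.
  btrfs subvolume snapshot -r /var/lib/backuppc /var/lib/backuppc/.migrate
  btrfs send /var/lib/backuppc/.migrate | ssh newhost btrfs receive /mnt/newpool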

4. COW filesystems have stability advantages in terms of checksums and 
the possibility of recovering from a snapshot if you make a mistake. They 
can also have performance problems due to the large number of seeks 
needed as the filesystem fragments. In the end it's a personal decision 
as to what you want to use.

If you are doing this for a production setup I'd recommend using a 
filesystem you are comfortable and confident with. Learning BackupPC and 
a new filesystem at the same time could become an adventure ;-)

RP

On 14/02/16 04:01, Christian Völker wrote:
> Hi all,
>
> I found only a single reference to this topic, and it was already some years old.
>
> So are there any implications one needs to take care of when using the
> BackupPC pool on a BTRFS filesystem?
>
> Some features of BTRFS seem to be perfect for BackupPC, but I am unsure
> whether they really offer advantages.
>
> Deduplication feature:
> BackupPC hard-links identical files to save storage space, which is
> already deduplication. But it uses hardlinks to do so, which frequently
> causes issues, as you can see on this mailing list. So possibly disable
> hardlinking in BackupPC (if possible) and let BTRFS do the work with
> deduplication? Disadvantage for BTRFS: it is only an "out-of-band"
> deduplication, so you have to perform the dupe detection via cron or similar.
>
> Online compression:
> BTRFS offers online compression. I am unsure whether there would be an
> advantage if compression is done by the filesystem. At least IMHO it would
> be easier to restore directly from the pool instead of from the cpool through bash?
>
> Pool migration / move:
> If you decide to move your pool to a different location, you are happy if
> you have it on something like LVM (or completely virtualized), because then
> you can move the underlying device. Disadvantage: LVM and other tools move
> the full device. If the pool is only 70% full, they still transfer the
> 30% of empty space. With btrfs send and btrfs receive, the filesystem
> will transfer only the used blocks.
>
> COW feature:
> With copy-on-write, btrfs writes changed blocks to a different location
> and then refers to that new location. This has advantages for snapshots
> and so on, but disadvantages for larger files (like virtual disk images),
> as they get spread across the devices, which causes lots of seeks on
> rotating devices like HDDs. How about the pool? Am I right that this does
> not matter for BackupPC, as it always writes whole files? Even for
> rsync-based backups?
>
> Apart from BackupPC, does anyone have experience using BTRFS?
>
> Thanks!
>
> Christian
>




[BackupPC-users] BackupPC daemon crashes often

2016-02-01 Thread Russell R Poyner
I have a BackupPC server running in a jail on FreeBSD 10. The system 
mostly works, but recently I've noticed that the BackupPC parent process 
frequently needs to be restarted. I even went to the length of 
having a cron job that checks for the existence of the 'BackupPC -d' 
process and restarts BackupPC if it's missing.
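The check itself is trivial; mine is roughly the following (the rc script 
path is an assumption for a FreeBSD ports install, so adjust to taste):

  #!/bin/sh
  # Cron watchdog: restart BackupPC if the parent daemon has died.
  if ! pgrep -f 'BackupPC -d' > /dev/null; then
      logger -t backuppc-watchdog "BackupPC daemon missing, restarting"
      /usr/local/etc/rc.d/backuppc start
  fi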


Examining the logs shows the daemon exiting with:
Got signal PIPE... cleaning up

each time.

Has anyone seen this sort of thing, or perhaps have ideas on how I can 
debug it? Finding the PID of the process that sent the signal is not 
easy in Perl, but maybe there is some other approach?

Thanks
Russ Poyner



[BackupPC-users] rsync 3.09 fails on windows symlinkd

2014-09-17 Thread Russell R Poyner
I've been using a powershell script to create shadow copies and link 
them to the filesystem in order to expose them to rsyncd. This works 
with my oldish copy of DeltaCopy rsync, but when I use the current 
cygwin-rsync package from the BackupPC web site I'm not able to follow 
the link.

If I tell it to follow the link:
  rsync -r -L rsyncuser@host::shadow
symlink has no referent: C (in shadow)
drwxr-xr-x   0 2014/09/17 11:32:30 .
rsync error: some files/attrs were not transferred (see previous errors) 
(code 23) at main.c(1538) [generator=3.0.9]

If I don't tell it to follow the link:
  rsync -r behdadrs...@cbe-win7amd64.che.wisc.edu::shadow
drwxr-xr-x   0 2014/09/17 11:37:30 .
lrwxrwxrwx  54 2014/09/17 11:37:30 C

On the windows side:
dir c:\shadow

09/17/2014  09:27 AM    <SYMLINKD>     C 
[\\?\GLOBALROOT\Device\HarddiskVolume

I have 'use chroot = no' in rsyncd.conf
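For context, the module definition is nothing special; it is roughly the 
following (the module name, path, and auth setup here are illustrative, 
not my exact config):

  use chroot = no

  [shadow]
      path = /cygdrive/c/shadow
      read only = yes
      auth users = rsyncuser
      secrets file = /cygdrive/c/rsyncd/rsyncd.secrets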

I've also tried a variety of links related switches on the rsync client.

Googling "symlink has no referent" windows
turns up a few other people with this problem.

Anyone been able to solve this one?

Thanks
Russ Poyner



[BackupPC-users] signal to Kill DumpPreUserCmd

2014-07-24 Thread Russell R Poyner
I have a bash script that runs on the BackupPC server as DumpPreUserCmd.

I'd like to have the script catch a signal and clean up its files if a 
backup gets canceled while the script is running. So far I'm trapping 
INT, TERM, ABRT and ALRM but not getting what I want.

Does BackupPC send SIGKILL to the DumpPreUserCmd process? Or something 
else I haven't thought of?

The value of $Conf{UserCmdCheckStatus} seems to not matter.

Background:

This is in the context of a method to create shadow copies and start 
rsyncd on windows clients without having to remotely execute anything on 
the windows box via ssh or winexe.

1. The server starts our DumpPreUserCmd bash script, which creates a file 
called hostname.html in a web-readable directory. It then polls the 
windows machine to see if rsyncd has started. Once windows starts its 
rsyncd, the PreUser script exits so that the dump can start.

2. The windows machine runs a script in task_scheduler every 5 minutes 
to see if the file hostname.html exists in the special directory on 
the BackupPC web server. If it does, the windows box runs a powershell 
script that creates shadow copies, starts rsyncd and opens a firewall 
hole to allow the backup.

3. On the server when the dump completes DumpPostUserCmd runs and 
removes the hostname.html file.

4. When the periodic task on the windows machine no longer finds the 
hostname.html file it stops the rsyncd service, deletes the shadow 
copies, and closes the firewall hole.

It works fine unless the backup gets interrupted while DumpPreUserCmd is 
running and waiting for windows to start its rsyncd service. In that 
case the hostname.html file gets orphaned.
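For reference, the server-side part of step 1 is roughly the following 
sketch (the paths, the port check, and the timeout are placeholders 
rather than my exact script):

  #!/bin/bash
  # DumpPreUserCmd sketch: publish the flag file, then wait for rsyncd.
  host="$1"                                        # client name passed in by BackupPC
  flag="/var/www/html/backuppc-flags/${host}.html" # hypothetical web-readable location
  touch "$flag"
  # Poll the rsyncd port until the windows box has started the service.
  for i in $(seq 1 60); do
      if nc -z -w 5 "$host" 873; then
          exit 0                                   # rsyncd is up; let the dump start
      fi
      sleep 10
  done
  rm -f "$flag"                                    # give up: clean up and fail the backup
  exit 1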

I *could* create a cron job on the server to look for and remove 
orphaned hostname.html files, but I'm hoping to not need that.

Thanks
Russ Poyner



Re: [BackupPC-users] signal to Kill DumpPreUserCmd

2014-07-24 Thread Russell R Poyner
PIPE

Seems to be the answer. Adding PIPE to the list of signals trapped by 
the script causes it to clean up on exit when the job is killed from the 
web interface.

trap 'cleanup ; trap - INT ; kill -INT $$' INT
trap 'cleanup ; exit $?' TERM ABRT ALRM PIPE

is the snippet that works. cleanup is a function that removes the 
hostname.html file and returns an appropriate error value.
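In context, with the cleanup function spelled out, that looks roughly 
like this (the flag path is a placeholder):

  # Remove the flag file so the windows client tears down its side.
  cleanup() {
      rm -f "/var/www/html/backuppc-flags/${host}.html"
  }
  # On INT: clean up, restore the default handler, and re-raise the signal
  # so the exit status reflects the interrupt. Otherwise clean up and exit.
  trap 'cleanup ; trap - INT ; kill -INT $$' INT
  trap 'cleanup ; exit $?' TERM ABRT ALRM PIPE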

Russ Poyner







[BackupPC-users] V4 for production?

2014-01-28 Thread Russell R Poyner
I'm in the process of building a BackupPC setup for about 200 users. 
The possibility of "incrementals forever" that looks to be available in 
V4 is quite appealing. My experience backing up windows PCs with V3 has 
been that the fulls take a long time and are much more likely to fail 
than incrementals. I've been using a script run via cygwin/ssh on the 
windows machines that creates a shadow copy and makes it available to 
BackupPC via rsyncd.

Not having to run regular, trouble-prone fulls of windows clients would 
be great.

So am I insane to be thinking about V4 on a sizeable production system?

R Poyner



Re: [BackupPC-users] What file system do you use?

2013-12-17 Thread Russell R Poyner
Mark,

Questions, and some comments.

Questions:

What have you done to tune your zfs?
Do you use a ZIL and/or an L2ARC?
How much ram do you have?
What compression level are you using on zfs?

I reflexively put a ZIL on my system but I'm curious if anyone has 
experimented with BackupPC performance on zfs with and without the ZIL.
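(For anyone wanting to try both, each is a one-liner to add; the pool and 
device names here are just placeholders:)

  zpool add backup log ada1      # dedicated ZIL (SLOG) device
  zpool add backup cache ada2    # SSD as L2ARC read cache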

Comments:

I built a backuppc on zfs system at my last job, but I took the opposite 
approach on compression. I disabled compression and dedupe on zfs and 
let BackupPC handle those jobs. I haven't seen load problems, but I do 
notice that the transfer speed reported by BackupPC varies a lot between 
different windows clients. Anywhere from 1.4 MB/s to 41 MB/s. This is 
partly due to network speed since some machines are on Gb connections, 
but most are on 100Mb. There also seems to be some dependence on the age 
and condition of the windows boxes.

BackupPC reports 76486 GB of fulls and 1442 GB of incrementals.
zfs list shows 11.9 TB allocated from the 65 TB pool for backuppc data.
That gives me about a 6.4-fold reduction in storage, slightly less than 
the roughly 7.5-fold reduction that you see. My data comes from user 
files on 12 windows 7 machines.

This is a poor comparison since we have different data sets, but it 
would appear that BackupPC's internal dedupe and compression is 
comparable to, or only slightly worse than what zfs achieves. This in 
spite of the expectation that zfs block level dedupe might find more 
duplication than BackupPC's file level dedupe.

Russ Poyner



On 12/17/13 07:50, Mark Campbell wrote:
 I too am using ZFS, and I can honestly say that ZFS works great, up to a 
 point.  rsync does seem to take up an inordinate amount of resources, but in 
 a smaller shop like mine, it's been tolerable.  I think it would work in a 
 larger shop too, but the system resource requirements (CPU/RAM) would grow 
 larger than what you would expect normally.  I've had a couple of instances 
 of performance issues in my setup, where over time, rsync was uploading data 
 to the system faster than zfs could process it, and so I'd watch my load go 
 through the roof (8.00+ on a quad core system), and I would have to stop 
 BackupPC for an hour or so, so that ZFS could catch up, but other than that, 
 this system has actually handled it fairly well.

 What I really like about ZFS though, is the deduplication coupled with 
 compression.  I've disabled compression in BackupPC to allow ZFS to properly 
 do the dedup & compression (enabling compression in BackupPC kills ZFS' dedup 
 ability, since it messes with the checksums of the files), and I'm getting 
 numbers in the range of 4.xx deduplication.  My ZFS array is 1.12TB in size, 
 yet, according to BackupPC, I've got 1800GB in fulls, and 2400GB in 
 incrementals.  When I query the array for actual disk usage, it says I'm 
 using 557GB of space...  Now that's just too cool.

 Thanks,

 --Mark


 -Original Message-
 From: Tim Connors [mailto:tconn...@rather.puzzling.org]
 Sent: Monday, December 16, 2013 10:00 PM
 To: General list for user discussion, questions and support
 Subject: Re: [BackupPC-users] What file system do you use?

 On Mon, 16 Dec 2013, Timothy J Massey wrote:

 One last thing:  everyone who uses ZFS raves about it.  But seeing as (on 
 Linux) you're limited to either FUSE or out-of-tree kernel modules (of 
 questionable legality:  ZFS' CDDL license is *not* GPL compatible), 
 it's not my first choice for a backup server, either.
 I am using it, and it sucks for a backuppc load (in fact, from the mailing 
 list, it is currently (and has been for a couple of years) terrible on an 
 rsync style workload - any metadata heavy workload will eventually crash the 
 machine after a couple of weeks uptime).  Some patches are being tested right 
 now out of tree that look promising, but I won't be testing them myself until 
 it hits master 0.6.3.

 Problem for me is that it takes about a month to migrate to a new filesystem. 
  I migrated to zfs a couple of years ago with insufficient testing.  I should 
 have kept on ext4+mdadm (XFS was terrible too - no faster than ext4, and 
 given that I've always lost data on various systems with it because it's such 
 a flaky filesystem, I wasn't gaining anything).
 mdadm is more flexible than ZFS, although harder to configure.  With
 mdadm+ext4, you can choose any disk arrangement you like without being
 limited to simple RAID-Z(n) arrangements of equal sized disks.  That said, I 
 do prefer ZFS's scrubbing compared to mdadm's, but only slightly.  If I was 
 starting from scratch and didn't have 4-5 years of backup archives, I'd tell 
 backuppc to turn off compression and munging of the pool, and let ZFS do it.

 I used JFS 10 years ago, and "niche buggy product" would be my description 
 for it.  Basically, go with the well tested popular FSs, because they're not 
 as bad as everyone makes them out to be.

 --
 Tim Connors

 

Re: [BackupPC-users] What file system do you use?

2013-12-17 Thread Russell R Poyner
Thanks Mark.

From the zfs man page for ZoL it looks like the default compression is 
lzjb, the same as in other zfs implementations. I generally use lz4, which 
is basically lzjb with some performance upgrades. It's a minor tweak unless 
you have a lot of incompressible files.

If you are experiencing decent data rates without a separate ZIL, it 
likely means that BackupPC is not doing synchronous writes. Answers one 
of my longstanding questions about BPC.

It's possible that your performance stalls are related to the size of 
your dedupe table. Performance will tank if you have to read the 
dedupe table from disk rather than having all of it cached in RAM. This 
is a well-known performance issue with zfs dedupe. There is a good 
discussion of the issue here:

http://constantin.glez.de/blog/2011/07/zfs-dedupe-or-not-dedupe

I suspect that using an SSD as L2ARC to hold the extra dedupe table 
entries would give adequate performance for backups.
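One quick way to check whether the DDT actually fits in RAM (the pool and 
device names below are placeholders):

  zdb -DD backup                 # DDT statistics, including on-disk and in-core size
  zpool add backup cache ada3    # if it doesn't fit, add an SSD as L2ARC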

RP

On 12/17/13 11:52, Mark Campbell wrote:
 I've done virtually no tuning of ZFS.  In my initial experimentations with 
 ZFS, I was blowing away my array so often when trying different combinations 
 of BPC compress/ZFS compress/ZFS dedup that I wrote a little shell script 
 that recreated the ZFS array & populated the necessary directories for 
 BackupPC:

 #!/bin/bash
 bpcdir=/backup/BackupPC
 service backuppc stop
 zpool destroy -f backup
 zpool create backup raidz2 sdc sdd sde sdf sdg sdh
 zfs set compression=on backup
 zfs set dedup=on backup
 zfs set atime=off backup
 mkdir $bpcdir
 chown backuppc $bpcdir
 chmod 750 $bpcdir
 cd $bpcdir
 mkdir cpool pc pool
 chown backuppc *
 chmod 750 *
 service backuppc start

 That's pretty much the extent of my tuning of ZFS.  This is a CentOS 6.4 
 x86_64 system, with ZoL installed.  I've got 6 250GB disks installed on a 
 3ware SATA RAID card set to be stand alone disks, 16GB of RAM, which has 
 served me well, no swapping, so that's been good.  I've tried to avoid 
 intermediary caches for sake of performance.  For the level of compression, 
 just whatever is default (isn't it normally 6?).

 In my case, my data comes from 16 hosts, a mix of linux, winxp, win7, & win8 
 machines.  My biggest backup client is a network drive, CentOS 6 based, that 
 is housing all sorts of files, but is also a repository of backups in and of 
 itself for some Server 2012 backups.  This is a variety of stuff, ranging 
 from SQL Server backups, to full bare metal system backups, which, most 
 unfortunately, present themselves as a few gigantic files.  Some of these 
 backups are dozens, if not hundreds of GB in size.  And all that was changing 
 in these files was a few MB worth from day to day, so I can totally 
 empathize with Timothy's gripes with Exchange.  File-based deduplication 
 wasn't helping me here, so that's why I tried out ZFS.  And boy, does it 
 work.  I'm really only doing about 10 backups at a time right now; the same 
 basic system (- ZFS, + BackupPC compression) was reaching 95% capacity with 
 just 3 backups before.  I guarantee that were I storing more backups, my 7.5 
 fold reduction would skyrocket even higher.

 Thanks,

 --Mark



[BackupPC-users] BackupPC for laptops?

2013-12-04 Thread Russell R Poyner
At my last job I built a BackupPC setup that worked well for windows 
desktops using vss and rsyncd.

In my new position I'm looking for options for backing up laptops and 
tablets. Most of these machines rarely connect to our wired network or 
VPN, which means they are normally separated by our firewall from the 
BackupPC server. I'm hoping to find a solution that can back up the 
laptops over whatever wireless network they happen to be on.

Solutions like CrashPlan, Carbonite or BackBlaze offer continuous backup 
over nearly any internet connection. Our users don't want a 3rd party 
storing their data so I'm wondering if this sort of thing is possible 
with BackupPC.

Russ Poyner
