Re: [BackupPC-users] Restoring complete virtualized Windows-Servers / Saving MBR

2012-04-16 Thread hansbkk
Yes, I see BackupPC as a solution for what I call data archive backups,
as opposed to full-host bare-metal restores.

For the latter, with physical machines I tend to do relatively infrequent
image snapshots of the boot and system partitions, keeping
frequently changing working data on separate partitions, which are backed
up by BPC.

I treat VM images as a third category, together with large media
files, simply copying them (either manually or via scripts) to external
(eSATA) drives that get rotated offsite.

For my use case it would simply be impractical to have BPC keep multiple
copies of old versions of this third category; they're just too
large. The working data handled by the VMs is backed up by BPC (usually via
a central filer), but not the OS/boot partitions.


Re: [BackupPC-users] Restoring complete virtualized Windows-Servers / Saving MBR

2012-04-16 Thread hansbkk
Two completely separate backup schemes are needed here.

One is for full bare-metal restores of the boot/OS-level stuff, and IMO this
is best done with imaging-style software, in your case something
specifically targeted at windoze/NTFS systems. These images don't need to
be made very frequently, as little changes from day to day. BPC is not
intended to provide this kind of backup, especially for Windows. Many Linux
sysadmins simply re-install the OS from automated scripts and then
restore config files rather than bothering to fully image their boot/OS
partitions, but Windows isn't suited to that approach.

The second type of backup is for working data, which requires the frequent
full/incremental/archive cycle that BPC is designed for. Details of the
infrastructure underneath the filesystem are irrelevant to BPC, except when
considering how to optimize performance once a small backup window
becomes an issue.

What you are doing with LVM snapshotting should only be necessary for
applications that keep their files open, like DB servers, Outlook and some
FOSS mail systems - and then only if those services need to be kept running
as close to 24/7 as possible. Otherwise your scripts can simply shut the
server processes down until the backup is complete and then bring them
back up again.
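
If downtime is acceptable, BPC can drive that itself from config.pl; a
minimal sketch (service and host details are made up, option names per
the 3.x docs):

    $Conf{DumpPreUserCmd}  = '$sshPath -q -x root@$host service mysql stop';
    $Conf{DumpPostUserCmd} = '$sshPath -q -x root@$host service mysql start';
    # abort the backup rather than run it against a live database if the stop fails
    $Conf{UserCmdCheckStatus} = 1;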

I can't advise on the NTFS-specific extended attributes and newfangled
security stuff, but unless you're using software that specifically
leverages that MS-proprietary stuff, it shouldn't IMO be an issue.


Re: [BackupPC-users] which one better ssh+rsync or rsyncd ?

2012-02-15 Thread hansbkk
On Wed, Feb 15, 2012 at 3:03 PM, J. Bakshi baksh...@gmail.com wrote:

 Greetings to all of you. I have come to know about backuppc recently
 during my search for a net based backup solution which requires bare
 minimal settings at user end and supports various client OS. backuppc
 surely meet my requirement.


For windoze target hosts, I have found the Cygwin implementation of the
OpenSSH server to be relatively well-documented and straightforward to
set up.

Yes, it is a bit heavy, but note that you don't have to do a full
registry-based install on every client. Just get a minimal setup running
in portable mode and keep the whole X:\Cygwin tree in sync with rsync or
whatever you already use.

However, from a security POV the SOP is to let Cygwin's normal OpenSSH
setup routine create the special full-rights backup user and handle the
password issues, so for small environments you should probably just go
ahead and run setup on each client anyway; you can still sync later updates.
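
On each client that boils down to something like the following, from an
elevated Cygwin shell (a sketch only - see Cygwin's own sshd documentation
for the prompts and account details):

    ssh-host-config -y      # create the sshd service and its privileged service account
    cygrunsrv -S sshd       # start the service
    cygrunsrv -Q sshd       # confirm it is installed and running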

It's actually nice to know you've got sshd available on your clients for
other purposes, especially to ensure good security for FOSS tools that
aren't aware of domain-based authentication.

And of course the other platforms have the ssh server enabled out of the
box. . .


Re: [BackupPC-users] oh my ... changing backupdirectory

2012-02-13 Thread hansbkk
In addition to the comments from the others, note:

  - Using fstab for bind mounting works just as well as, or in some cases
    better than, symlinks
    - You can set things up completely plain-vanilla, test, and then move
      the folders afterwards.

  - USB isn't really that great a connection for daily mission-critical
    disk usage
    - I recommend eSATA if it needs to be external
    - or, even better, internal hot-swap drive bays that let you swap the
      drives out



On Mon, Feb 13, 2012 at 5:04 PM, Ingo P. Korndoerfer korndoer...@crelux.com
 wrote:

  hello,

 i have been going around in circles and pretty much grazed all i could
 find on google and then finally found a way to
 get this to work, and though it might be worth communicating this, so it
 can maybe get included in the wiki ?

 i have succesfully installed backuppc under ubuntu and can connect and all
 is fine.

 except, i want my backups on a usb mounted external disk and then could
 not start the backuppc server anymore
 here is what i did:

 i followed the instructures here:


 http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Change_archive_directory#Changing_the_storage_location_of_the_BackupPC_pool

 i.e.

 sudo /etc/init.d/backuppc stop

 cp -dpR /var/lib/backuppc/. /mnt/backups1/backuppc
 mv /var/lib/backuppc /var/lib/backuppc.orig
 ln -s /mnt/backups1/backuppc /var/lib/backuppc

 sudo /etc/init.d/backuppc start

 and get the famous can not create test hardlink error.

 the solution after a lot of trying was to

 cd /mnt/backups1
 sudo su
 chmod -R ugo+rwx backupppc

 and then finally

 sudo /etc/init.d/backuppc start

 would work.

 i am still lacking a grasp of what the real problem was and what would
 have been the proper way to go about this.
 the permissions of the original installation and the copied installation
 (which i am using now via soft link) look like this:

 original install in /var/lib:

 drwxr-x---  7 backuppc backuppc 4096 Feb 13 09:37 .
 drwxr-xr-x 63 root root 4096 Feb 13 10:23 ..
 drwxr-x---  2 backuppc backuppc 4096 Feb 13 10:10 cpool
 drwxr-x---  2 backuppc backuppc 4096 Feb 13 10:13 log
 drwxr-x---  3 backuppc backuppc 4096 Feb 13 10:10 pc
 drwxr-x---  2 backuppc backuppc 4096 Jun 30  2011 pool
 drwxr-x---  2 backuppc backuppc 4096 Jun 30  2011 trash

 after copy to external disk (could not start backuppc server with this,
 but why. the permissions are all the same):

 drwxr-x--- 7 backuppc backuppc 4096 Feb 13 09:37 .
 drwxrwxrwx 5 ingo ingo 4096 Feb 13 10:59 ..
 drwxr-x--- 2 backuppc backuppc 4096 Feb 13 10:10 cpool
 drwxr-x--- 2 backuppc backuppc 4096 Feb 13 10:13 log
 drwxr-x--- 3 backuppc backuppc 4096 Feb 13 10:10 pc
 drwxr-x--- 2 backuppc backuppc 4096 Jun 30  2011 pool
 drwxr-x--- 2 backuppc backuppc 4096 Jun 30  2011 trash

 after chmod (now working)

 drwxrwxrwx 7 backuppc backuppc 4096 Feb 13 09:37 .
 drwxrwxrwx 4 ingo ingo 4096 Feb 13 09:44 ..
 drwxrwxrwx 2 backuppc backuppc 4096 Feb 13 10:54 cpool
 drwxrwxrwx 2 backuppc backuppc 4096 Feb 13 10:54 log
 drwxrwxrwx 3 backuppc backuppc 4096 Feb 13 10:54 pc
 drwxrwxrwx 2 backuppc backuppc 4096 Jun 30  2011 pool
 drwxrwxrwx 2 backuppc backuppc 4096 Jun 30  2011 trash

 is it the parent directory permissions that are messing the thing up ?

 thanks for any comments

 cheers

 ingo







Re: [BackupPC-users] Network Backup of backuppc

2012-02-13 Thread hansbkk
Either run a second BPC instance over the WAN directly to the target hosts,
or send compressed tar snapshots - whichever is more appropriate for your
combination of bandwidth, volume of data, backup time window, number of
target hosts, degree of duplicated data, etc.
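
For the tar-snapshot route, a rough sketch (hostnames and paths here are
invented, and the BackupPC_tarCreate path varies by distro - it is under
/usr/share/backuppc/bin with the Debian-family packages):

    # dump the most recent backup of one host/share to a compressed tarball,
    # then push it to the offsite box - run as the backuppc user
    /usr/share/backuppc/bin/BackupPC_tarCreate -h myhost -n -1 -s /home . \
        | gzip > /mnt/staging/myhost-latest.tar.gz
    rsync -av --partial /mnt/staging/myhost-latest.tar.gz offsite:/srv/bpc-copies/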

On Tue, Feb 14, 2012 at 12:13 AM, Fred Warren fred.war...@gmail.com wrote:

 I would like to run backup-pc on site and keep a duplicate copy offisite.
  So I want 2 backup-pc servers. One onsite and one offsite. With the
 offsite copy not running, but the data being synced with the onsite copy.
 If there is some kind of failure with the onsite copy of backup-pc. I could
 then start the offsite-copy and restore files from there.

 What I have discovered so far is that even if I stop the backupc-service
 on the onsite server, I  cant keep the offsite server updated  via rsync.
 the first time I rsync it works fine. But then the next time I do an
 update, with all new data, deuplication, hand links created and changed on
 the onsite server, the next rsync  is a total disaster.

 Are there some magic sync settings that will allow me to keep a backup of
 the backup-pc data by rsyncing it to another system?








Re: [BackupPC-users] backup failed (can't find Compress::Zlib)

2012-02-07 Thread hansbkk
On Tue, Feb 7, 2012 at 2:12 AM, Richard Shaw hobbes1...@gmail.com wrote:

 Here's the tail end of my yum transaction if you're curious to see what
 packages are needed:


Have you tried from a clean slate? I'd recommend starting with a complete
from-scratch install via the netinstall and the minimal option: get all
the underlying prerequisites set up and working properly (SSH,
rsync/tar/SMB, etc.), focus on getting the yum repo config right - perhaps
testing it with something small and unrelated first - and only then bring
BPC into the picture.
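
For the specific Compress::Zlib error, a quick sanity check might look like
this (the package name is a guess and varies by distro/release):

    # is the module visible to the perl that BackupPC runs under?
    perl -MCompress::Zlib -e 'print "Compress::Zlib $Compress::Zlib::VERSION\n"'

    # on RHEL/CentOS-style systems it usually comes from a package such as
    yum install perl-Compress-Zlib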


Re: [BackupPC-users] simplest way to back-up to a remote share?

2012-01-24 Thread hansbkk
On Tue, Jan 24, 2012 at 8:38 PM, Ivanus, Radu
radu.iva...@tmdfriction.comwrote:



 I need to backup a network share called \\domain\folder to another network
 share called \\ip\folder (different from the 1st one).

 **


In other words, if a simple single-instance copy to that specific target is
the primary goal, BPC isn't the tool for you.

If you post a more complete description of what you're trying to accomplish
overall, including only those implementation details that are absolutely
required, the list will be able to give you better advice.

If your goal is to use BPC, then I'd advise just setting it up with the
basic normal defaults and reading the docs etc. to find out how it works, so
that you can devise a specific implementation that matches BPC's
strategies. IOW, don't try to learn about the tool *and* adapt it to
preconceived strategies at the same time.

Hope this helps.


Re: [BackupPC-users] OT: NAS GUI for native Linux (preferably RHEL)

2012-01-16 Thread hansbkk
On Tue, Jan 17, 2012 at 5:31 AM, Timothy J Massey tmas...@obscorp.comwrote:

 Tyler J. Wagner ty...@tolaris.com wrote on 01/12/2012 04:53:49 PM:

  So how about FreeNAS with BackupPC installed?
 
  http://harryd71.blogspot.com/search/label/backuppc

 Honest answer?  My prejudice against non-Linux UNIX, especially with
 something as important as backup.  I don't want to run into subtle issues
 that won't show themselves until I really, really need those backups...
  (That red text right up near the top of your link?  *That* is what I'm
 talking about...)

 Which is why OpenFiler would be such a natural fit, if the current version
 didn't have serious bugs that haven't been fixed since April, and an
 upstream base that seems to be going away...


So it seems there isn't a perfect fit; IMO the closest would be Openfiler.
If the newest version (which I haven't even looked at) doesn't suit, the
final legacy version hasn't IMO had any serious issues, won't be getting -
and doesn't need - any updates, so the rPath issue doesn't really matter;
it should just remain stable. Toss a BPC instance running in a VM on top
and you're done.

A more involved alternative would be to reverse-engineer the components
you want from OF and create a customized distro based on whatever
you're most comfortable with; e.g. an automated CentOS kickstart should
serve well, and is a close analogue to the rPath environment anyway.

If the client really needs GUI management of the underlying OS, Ubuntu
might be a better fit - or maybe they deserve a windoze server; you can run
a BPC host in a VM on top of that as well.


Re: [BackupPC-users] OT: NAS GUI for native Linux (preferably RHEL)

2012-01-10 Thread hansbkk
I highly recommend OpenFiler. The code itself is open source, though it's
not very diligently supported by the community. However, I was a total
newbie to the world of Linux and have never needed any support - it's been
solid as a rock, and it has every possible NAS feature readily available
from a web-driven GUI, obviously designed for the high-end corporate sector.

The previous version had an outdated stub of BPC installed, so they used to
work together, but this would now need to be added manually.

I would recommend running BPC in a VM so as to disturb OF's appliance
nature as little as possible.

Or, in an ideal world, run your filer standalone and BPC on a dedicated box.
Both projects run well on low-end/older hardware - but note this doesn't
mean I'm advocating consumer-grade hardware for mission-critical tasks. . .

On Wed, Jan 11, 2012 at 1:43 AM, Timothy J Massey tmas...@obscorp.comwrote:

 Hello!

 I'm in the middle of building a Super Backup server.  It will do the
 following:

 Run BackupPC for file-level backups
 Provide NFS share(s) for VMware snapshots
 Provide CIFS share(s) for Windows snapshots and Clonezilla
 Contains a removable SATA tray
 Manage all of this from a GUI

 I am currently doing each of these features on various different BackupPC
 servers already, but in each case it was done manually, by hand, and from
 the command line.  For this iteration, I would like to wrap a GUI around it.

 In the case of BackupPC, it has a GUI and I will continue to use it.
  However, *many* of the functions I would like to have the user perform do
 not:  NFS shares, CIFS shares, users, network settings, etc.  However,
 these are *EXACTLY* the standard function that a NAS does, and there are
 1E6 of these already built.

 So, my question:  is there a NAS GUI out there that can be added on top of
 standard Linux (preferably RHEL, but very willing to consider others)
 that will add most of these functions?  For example, something like the GUI
 for an Iomega NAS would be perfect.  (I thought about using them as the
 hardware and software base and adding BackupPC to them, but there's no
 built-in removable drive, and USB is awkward and slow.  Plus the Linux
 environment is... minimal.)

 I would prefer staying based on a generic Linux install, but I've also
 thought about using a NAS-based distro as the base (such as OpenFiler).  In
 the specific case of OpenFiler, the current version in a bit of a bad place
 at the moment.  There is much concern that the base OS, which is based on
 rPath, will not be available for free users for much longer;  in addition
 the current beta version (2.99) has some known critical bugs in iSCSI
 (which I use), and there have been no updates since April.  So, it's not my
 favorite base to build on...  (Reference:
 https://forums.openfiler.com/viewtopic.php?pid=26228)

 And I'd vastly prefer to stay with Linux, which eliminates FreeNAS and
 Nexenta.

 Many of the Linux-based NAS systems are designed as firmware for dedicated
 (and often vastly inadequate) hardware:  NSLU2 falls into this camp.  I am
 not running this on an embedded device:  It's a full-featured PC-based
 architecture.

 I'm also willing to consider generic Linux system management tools such as
 webmin, but I'd prefer something more focused on NAS-type functions if I
 can get it.  It's been years since I've looked at Webmin, but a quick
 glance seems to show that it hasn't changed much:  it's little more than
 textareas with chunks of the configuration files dumped into them.  I'm
 hoping for something more polished if I can get it.

 Like I said, I'm looking for the general interface provided by every NAS
 I've ever seen.  Of course, each of them is specific to their device.  I'm
 hoping there's a version out there for generic Linux.

 Does anyone have any thoughts or suggestions in this regard?

 Thank you very much for your help!

 Timothy J. Massey
 *Out of the Box Solutions, Inc.* *
 Creative IT Solutions Made Simple!**
 **http://www.OutOfTheBoxSolutions.com*http://www.outoftheboxsolutions.com/
 *
 **tmas...@obscorp.com* tmas...@obscorp.com   22108 Harper Ave.
 St. Clair Shores, MI 48080
 Office: (800)750-4OBS (4627)
 Cell: (586)945-8796





Re: [BackupPC-users] OT: NAS GUI for native Linux (preferably RHEL)

2012-01-10 Thread hansbkk
On Wed, Jan 11, 2012 at 8:18 AM, Chris Parsons 
chris.pars...@petrosys.com.au wrote:

  Id highly recommend Nexenta. It is much more feature complete than
 Openfiler and linux. Futher to this, with my experiences, BackupPC performs
 much better on Nexenta than it does on linux.

I'm sure it's solid for those with experience in it, and ZFS is great
tech, but since Oracle killed its kernel upstream I personally wouldn't
invest much in Nexenta; it hasn't had a new release for 15 months now.

Of course if OpenIndiana gains traction that's another story, but I tend to
prefer mainstream choices for key infrastructure myself. . .


Re: [BackupPC-users] The infamous backuppc_link got error -4

2012-01-06 Thread hansbkk
On Fri, Jan 6, 2012 at 10:59 PM, Jim Durand jdur...@hrsg.ca wrote:

 Hey guys! Doing my best to get up to speed with Backuppc, pretty
 impressive so far. Logs are filled with “backuppc_link got error -4”
 errors, and from the research I have done it seems that because my TopDir
 (/mnt/sdb1) and the cpool location (/home/users/backuppc/data/cpool) are on
 different filesystems, hard links will obviously fail. Is the answer to
 soft link “ln –s /mnt/sdb1 /home/users/backuppc/topdir” and change TopDir
 in config.pl to the “/home/users/backuppc/topdir”? It can’t be that easy,
 right?



My recommendation is not to change the TopDir spec, even with those versions
of BPC that allow it.

My approach is to have a dedicated filesystem for BPC's data, and ideally
this should be one that allows for live in-use maintenance - expansion,
disk swaps - LVM being a good example in low-end environments.

Just bind-mount, via fstab, the partition you want to use for storage onto
the place where BackupPC wants it (/var/lib/backuppc with the Debian-family
packages).

Or put a symlink there pointing to the location you want; that works just as
well, as long as you remember it's just a symlink for any other tools you
use that need to traverse it - IMO bind mounts are lower level and thus
more transparent to user-space tools.
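
A minimal sketch of the fstab approach (device and mount-point names are
illustrative):

    # /etc/fstab - the pool disk is mounted somewhere convenient...
    /dev/vg_bpc/lv_pool    /mnt/bpcpool        ext4   defaults   0 2
    # ...and its backuppc directory is bind-mounted onto BPC's default TopDir
    /mnt/bpcpool/backuppc  /var/lib/backuppc   none   bind       0 0

    # the symlink alternative (run once, with BackupPC stopped):
    #   ln -s /mnt/bpcpool/backuppc /var/lib/backuppc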

If you do this before the install, everything should land in the right
place and get the right permissions. As you've seen, the critical thing is
that the pool/cpool/pc directories must all be on the same filesystem so
that hardlinks can work, and with versions before 3.2 you can't change the
TopDir location after the initial setup.

Don't be afraid to wipe and start over a few times while you're getting to
know BPC; it's good practice. Note that once BPC is up and running you
won't have to pay much attention to it and you'll likely forget much of
what you learned at this stage, so document everything you do and why you
did it.



I've also taken this a step further and moved my config and log folders to
below TopDir, so everything related to BackupPC is self-contained on the one
filesystem. Note that as far as BPC is concerned everything still appears
to be in the expected filesystem locations; it's not aware of the
bind-mount and/or symlink magic.

This allows the whole shebang to be easily transferred to a new host for
maintenance/upgrade/disaster-recovery situations - the underlying OS
doesn't need to match, just (roughly) the same version of BPC. I've tested
moving from our production server (CentOS) to a temporary restore server
on a generic desktop running Ubuntu, and even to a Live-CD-only Grml boot
disc (Grml is a system-rescue CD that bundles BackupPC), and everything
just worked.

To make this even easier, my LVM-over-RAID filesystem is self-contained in
an external drive housing connected via eSata.


Note that the latter bit isn't common, but the earlier advice is AFAIK
normal practice. I'm a relative noob here, so if anything posted by the
more experienced users here contradicts what I say, follow their advice.


Re: [BackupPC-users] When do files get re-transferred?

2011-12-23 Thread hansbkk
On Sat, Dec 24, 2011 at 5:35 AM, Arnold Krille arn...@arnoldarts.de wrote:
 Well, actually the comparison is done against the last backup of a lower 
 level.

Actually actually <g>, from my understanding there isn't any difference
at all in BackupPC's filesystem between the two if the file hasn't been
modified. In fact you can't even say that one instance of the file is
in the last full backup as opposed to in the incremental sets. The
only thing in those sets is a hardlink, and even at the underlying OS
level hardlinks are identical structures, with no distinction of one
being the master as you would have, say, with symlinks.

Now in the case of a file having been modified since the last full, of
course the two are different, and of course it only makes sense to
compare against the newest one, since in BPC's storage model there isn't
any benefit to distinguishing between incremental vs differential
sets.

I seem to recall this may be an issue with non-rsync transports, however;
since I never use them I don't know.


Confirmation of the above would be appreciated; I think not
understanding these issues is a source of confusion for newcomers to
BPC used to thinking in terms defined by traditional backup software
regimes.



Re: [BackupPC-users] When do files get re-transferred?

2011-12-23 Thread hansbkk
On Sat, Dec 24, 2011 at 8:20 AM, Les Mikesell lesmikes...@gmail.com wrote:

 On Fri, Dec 23, 2011 at 6:03 PM,  hans...@gmail.com wrote:
  it only makes sense to compare to the newest one, since in BPCs storage 
 model there isn't any benefit to distinguish between incremental vs 
 differential sets.

 The distinction is between the contents of the file and the directory
 entries pointing to it.   The contents of hardlinked files are all the
 same, but rsync doesn't know anything about the hashed filenames for
 the pool links.   It strictly follows the directory tree established
 by the last full run (by default).   The concept of incremental vs.
 differential sort-of relates to the 'incremental level setting that
 permits the comparison to merge in previous incrementals back to the
 last full, finding the latest version of each file .   That involves a
 trade-off of more server side work traversing multiple directory trees
 vs. likely transferring less changed data.

Thanks Les. So my snip above does hold when trying to conserve
bandwidth (say over a WAN), but at the potential cost of increasing
the time the backup session requires. In a high-speed local
environment, processing time can be reduced by always using
differential between fulls (by not enabling the incremental
option).
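
For reference, the knobs involved live in config.pl; a hedged sketch
(option names are from the 3.x documentation, values are purely
illustrative):

    $Conf{FullPeriod} = 6.97;    # roughly weekly fulls
    $Conf{IncrPeriod} = 0.97;    # roughly daily incrementals
    $Conf{IncrLevels} = [1];     # flat incrementals, each compared against the last full
    # e.g. [1, 2, 3, 4, 5, 6] would give multi-level incrementals that merge
    # previous levels - more server-side work, less data transferred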

This only becomes a question if I got it wrong 8-)



Re: [BackupPC-users] When do files get re-transferred?

2011-12-23 Thread hansbkk
On Sat, Dec 24, 2011 at 9:34 AM, Les Mikesell lesmikes...@gmail.com wrote:
 Thanks Les. So my snip above does hold when trying to conserve
 bandwidth (say over a WAN), but at the potential cost of increasing
 the time the backup session requires. In a high-speed local
 environment, processing time can be reduced by always using
 differential between fulls (by not enabling the incremental
 option).

 This only becomes a question if I got it wrong 8-)

 The more significant difference may be the wall-clock time time for a
 full rsync run, which always does a full read of all the data on the
 remote side for a block checksum comparison, and may need to
 read/uncompress on the server side.   If that isn't an issue you can
 just do frequent fulls and not worry about doing rsyncs against
 incremental levels.   If it is an issue, or you want to use the least
 bandwidth possible, then you might use incremental levels and less
 frequent fulls.

Yes, in my current usage I've only been doing fulls, since figuring
out that it doesn't impact storage space usage. I just wanted to clarify
my understanding of the trade-offs between the other flavors for future
reference in other possible contexts.



Re: [BackupPC-users] BackupPC recovery from unreliable disk

2011-12-21 Thread hansbkk
I know this doesn't help for now, but next time make sure your storage
platform doesn't depend on hardware reliability - of which there is no
such thing, long term.

On the low end I recommend LVM over RAID1 for small systems and RAID6 for
bigger ones; obviously high-end environments have their SANs.

Just FFR. . .
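
A rough sketch of that layering for a small two-disk box (device names and
sizes are invented):

    # mirror two disks, then put LVM on top so the pool can later grow or move
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    pvcreate /dev/md0
    vgcreate vg_bpc /dev/md0
    lvcreate -n lv_pool -l 100%FREE vg_bpc
    mkfs.ext4 /dev/vg_bpc/lv_pool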

On Thu, Dec 22, 2011 at 9:50 AM, JP Vossen j...@jpsdomain.org wrote:
 I'm running Debian Squeeze stock backuppc-3.1.0-9 on a server and I'm
 getting kernel messages [1] and SMART errors [2] about the WD 2TB SATA
 disk.  Fine, I RMA'd it and have the new one...  Now what?  I know I can
 either 'dd' or start fresh.  But...


 If I start fresh, I know everything will be work and be valid, but I
 lose my historical backups when I wipe the bad disk and RMA it.


 If I 'ddrescue' BAD -- GOOD, I'll worry about the integity of the
 BackupPC store.  As I understand it, the incoming files are hashed and
 stored, but the store itself is never checked (true?).  So when I do
 backups, if an incoming file hash matches a file already in the store,
 the incoming file is de-duped and dropped.  But what if the file
 actually in the store is corrupt due to the bad disk?

 Am I correct?  If so, is there a way to have BackupPC validate that the
 files in the pool actually match their hash and weren't mangled by the disk?


 Any other solution I'm missing?

 Thanks,
 JP
 ___
 [1] Example kernel errors:

 Security Events for kernel
 =-=-=-=-=-=-=-=-=-=-=-=-=-
 kernel: [4020993.728571] end_request: I/O error, dev sda, sector 81203507
 kernel: [4021009.712952] end_request: I/O error, dev sda, sector 81203507

 System Events
 =-=-=-=-=-=-=
 kernel: [4020983.471256] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0
 action 0x0
 kernel: [4020983.471290] ata3.00: BMDMA stat 0x25
 kernel: [4020983.471315] ata3.00: failed command: READ DMA
 kernel: [4020983.471347] ata3.00: cmd
 c8/00:18:33:11:d7/00:00:00:00:00/e4 tag 0 dma 12288 in
 kernel: [4020983.471351]          res
 51/40:07:33:11:d7/40:00:28:00:00/e4 Emask 0x9 (media error)
 kernel: [4020983.471424] ata3.00: status: { DRDY ERR }
 kernel: [4020983.471446] ata3.00: error: { UNC }
 kernel: [4020983.501157] ata3.00: configured for UDMA/133


 [2] Example SMART error:

 Error 1704 occurred at disk power-on lifetime: 10149 hours (422 days +
 21 hours)
   When the command that caused the error occurred, the device was
 active or idle.

   After command completion occurred, registers were:
   ER ST SC SN CL CH DH
   -- -- -- -- -- -- --
   40 51 40 45 66 01 e0  Error: UNC 64 sectors at LBA = 0x00016645 = 91717

   Commands leading to the command that caused the error were:
   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
   -- -- -- -- -- -- -- --    
   c8 00 40 3f 66 01 e0 08  46d+13:36:50.242  READ DMA
   ec 00 00 00 00 00 a0 08  46d+13:36:50.233  IDENTIFY DEVICE
   ef 03 46 00 00 00 a0 08  46d+13:36:50.225  SET FEATURES [Set transfer
 mode]

 |:::==|---
 JP Vossen, CISSP            |:::==|      http://bashcookbook.com/
 My Account, My Opinions     |=|      http://www.jpsdomain.org/
 |=|---
 Microsoft Tax = the additional hardware  yearly fees for the add-on
 software required to protect Windows from its own poorly designed and
 implemented self, while the overhead incidentally flattens Moore's Law.




Re: [BackupPC-users] Scary problem with USB3...

2011-12-16 Thread hansbkk
I highly recommend **against** using any protocol conversion in the
mix - USB to eSATA or whatever.

True eSATA is fine - obviously the quality of the hardware is an
issue. Firewire is also OK but getting rarer these days.

Internal SATA to eSATA should also not be a problem; there's no real
conversion going on there.

Removable drive bays are really cheap and easy; just protect the drive
well for transport.

There are also eSATA docking stations that take a bare drive
standing up on the desktop.

Be careful about hot swapping: all the hardware in the chain, as well as
the drivers in the OS, needs to support it, and obviously make sure
nothing is accessing the drive and unmount it first.
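
For what it's worth, a typical detach sequence before pulling the drive
might look like this (sdX is a placeholder for the actual device):

    /etc/init.d/backuppc stop               # nothing should be writing to the pool
    sync
    umount /var/lib/backuppc
    echo 1 > /sys/block/sdX/device/delete   # tell the kernel the disk is going away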


Unless your data isn't important, then do whatever you like. . .

On Fri, Dec 16, 2011 at 4:18 PM, Tim Fletcher t...@night-shade.org.uk wrote:
 On Thu, 2011-12-15 at 16:38 -0500, Zach La Celle wrote:

 Regarding some other responses, I'll be sure to try eSATA next time
 instead of USB to see if that tends to be more stable.  I could also use
 internal drives, I suppose...the real reason we're using external disks
 is so that we can replace them and store them off-site every once in a
 while.  Using a pre-packaged drive was simply more convenient.

 I use one of these external usb/eSATA - SATA docks, they are about £20
 in the UK and let you hot swap a SATA drive a bit like a tape.

 http://www.amazon.co.uk/Startech-Esata-Sata-Dock-2-5-3-5/dp/B0026B8VR0/



 --
 Tim Fletcher t...@night-shade.org.uk





Re: [BackupPC-users] Fwd: Backing up from BackupPC to BackupPC

2011-12-12 Thread hansbkk
On Mon, Dec 12, 2011 at 2:03 AM, member horvath
mem...@thehorvaths.co.uk wrote:
 Thanks very much for the info.
 my backups are many terabytes in size so making local copies over and above 
 the onsite backup is not practical.
 To remind you I need a 30day/6month onsite and only the most recent offsite.
 Once the initial offsite is performed (This will be very large) The ongoing 
 incremental s will still be several GBs.
 I need the archive option in backuppc to rsync the most recent copy offsite 
 on a daily basis.

I think you need to research and understand a bit more about how BPC
works in order to plan your strategy effectively.

The archive option in BackupPC is the same thing as local copies
over and above the onsite backup; it was designed to allow people to
take full snapshots off-site, usually on tape media. In your case
the resulting huge monolithic full-set files are impractical to send
over the WAN.

The only way to get incremental functionality over the wire using
BackupPC is to run another BPC server at the remote location on a
separate schedule, and rsync is the way to go if your target hosts
allow.

Les is da man, follow his advice!

Another option is to physically rotate your backup filesystems off-site

  - Never underestimate the bandwidth of a station wagon full of tapes
hurtling down the highway. — Andrew Tanenbaum



Re: [BackupPC-users] Upgrade on Ubuntu

2011-12-10 Thread hansbkk
You also have the luxury of not worrying about migrating the past data over 8-)

Make sure that with your new setup you give TopDir its own dedicated
filesystem, in such a way that it's easy to expand without taking the
system offline for long periods, and ideally also easy to switch from
one host to another. If you don't have a SAN in-house I recommend LVM
over RAID6, but of course YMMV.
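
That layout is what makes painless growth possible later; a sketch,
assuming an ext4 filesystem on LVM (names are invented):

    # add a new disk/array to the volume group, then grow the pool filesystem online
    vgextend vg_bpc /dev/md1
    lvextend -L +500G /dev/vg_bpc/lv_pool
    resize2fs /dev/vg_bpc/lv_pool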

On Sat, Dec 10, 2011 at 3:18 PM, Thomas Nilsson
thomas.nils...@responsive.se wrote:
 Hello everyone!

 I've inherited a BackupPC installation which hadn't made any backups for
 504.3 days (gulp!) but have managed to set that right (I think).

 It is on an Ubuntu 8.04 server and now running BackupPC 3.0.0. As the wiki
 is very clear about not mixing package and by hand upgrades, I was
 wondering if there is a way to determine if my installation has been
 performed using apt-get or by hand?



Re: [BackupPC-users] Fwd: Backing up from BackupPC to BackupPC

2011-12-10 Thread hansbkk
On Sat, Dec 10, 2011 at 7:45 PM, member horvath
mem...@thehorvaths.co.uk wrote:
 I've have considered the archive function however I wasn't aware that
 the changes would be rsync'd.
 I thought it would create a tar archive of the most recent backup then
 xfer that to the archive host.
 Am I wrong in thinking this?
 I also need to ensure data integrity by check suming the remote copy
 with the onsite.

The archive function just creates the tar snapshot of the hosts you
specify. You can point that wherever you like, but making it local is
faster. Once it's complete, your script should then transfer it, however
and wherever you like.

Rsync just happens to be what most people would use for this, and that
protocol includes a checksum verification.

If you use a different transport mechanism, then you may need to
include a checksum routine in your script.
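
A sketch of the transfer step, assuming the archive host writes its
tarballs to a local staging directory (paths and hostnames are made up):

    # push the latest archive tarballs offsite; --checksum makes rsync compare
    # file contents rather than trusting size/mtime when deciding what to resend
    rsync -av --partial --checksum /var/lib/backuppc/archive/ offsite:/srv/bpc-archives/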

Regarding your scheduling requirements, the best procedure is to get
to know the BackupPC way and work within that, rather than imposing
rules that probably came from previous systems. Note that within the BPC
data store there actually is no physical difference between
incremental and full backup sets - the only difference is the
algorithm used to determine whether or not files have been updated, and
that depends on the transport protocol.

You can configure things so that you keep, for example, "at least this
many" backups, and be conservative.

Also note that the archive tar snapshots will not have the space savings
inherent in BPC's deduplicated data store, so using this method will
likely result in your offsite storage requirements being many times larger
than the onsite BPC server's. Not necessarily a problem, as disk space is
cheap these days, but something to plan for. .
.



Re: [BackupPC-users] cpool always empty?

2011-11-22 Thread hansbkk
On Tue, Nov 22, 2011 at 3:45 AM, Bob Proulx b...@proulx.com wrote:

 Obviously due to the advantages of a long term operating system release I 
 would prefer to remain on the Debian Stable release.

I would advise continuing with the Debian packaging system for all the
prerequisite infrastructure dependencies, but for BPC itself the
install-from-source process is both easy and IMO pretty
bullet-proof, including future upgrades. Or using the package
manager's ability to pull just that one package from the testing
repo would also work fine, as long as you are (or get) familiar with how to
guard against dependency-hell confusion. . .

A completely separate point, but related - rather than messing with
changing TopDir or other config options in your BPC setup, I recommend
leaving everything where it wants to be and just using symlinks (or
fstab mountpoints) to point to wherever you'd like.

Related because doing it this way would allow you to stick to your old
BPC version and still put your folders wherever you like. . .

-

Below is my own experimentation, not something supported by the BPC
developer team nor even recommended by me without all the normal YMMV
disclaimers. . .

I use this method to duplicate the old-school setup of having
everything, including the configuration and log folders, under my
TopDir on a self-contained LV, for maximum flexibility and portability.
I have used this setup to move my BPC data set from one host to another
completely seamlessly, actually using it to rotate BPC instances
off-site, and have successfully restored using a LiveCD running a
completely different distro from the production host, on a spare
user-desktop-class PC available at the off-site location.

BackupPC rocks. . .



Re: [BackupPC-users] BackupPC to NFS then to tape via DPM

2011-11-09 Thread hansbkk
 Whatever happens it has to go straight to the NFS as that's the only 
 storage I'll have that's big enough to take anything. Big disks are 
 cheap these days.  Or use one part of the NFS share for backuppc, another 
 for what you send to tape.
Echo Les, what he said.

You will find your BackupPC instance will come in very handy for
ad-hoc restores, and its deduping will allow you to keep versioned
archives long-term.

But IMO don't bother having them back up the whole TopDir - or, if you do
try that, treat it as a completely separate run of the tape system, with
separate physical tape sets etc.; it's just a "would be nice if it works",
i.e. "we might be able to reconstruct the BackupPC instance from tape
but won't count on it".

Set up a secondary mount point (let's call it DumpDir) for scheduled
dump-to-tape jobs per client/target host - those are the targets for
your tape jobs. This means the tape archive process won't benefit from
BPC's de-duplication, but speed/ease of recovery in a true disaster is
the goal there, not saving media space. These are temporary dumps; have
your process delete them when the next dump runs.

Note that this TopDir filesystem, although it will in effect contain
the contents of hundreds of DumpDir instances down the road, will
actually take up a very small fraction of your allocated disk
space, depending on the number of hosts and the degree of duplication
between them. Be very careful you don't allow your total space to fill
up - ideally you want TopDir on its own dedicated filesystem. Otherwise,
perhaps keep more than one instance per host in your DumpDir, so you can
quickly wipe them out when your monitoring system lets you know you're
hitting the, say, 80% threshold, giving your team time to fulfill your
request for more disk space.

As you seem to be aware, the lack of an overall centrally coordinated
plan by a BackupPC expert means you'll end up with a bit of a
square-peg-round-hole end result, but if the resources are there to
accommodate the resulting hodgepodge, the result can still be
reliable; in fact it will result in resource-inefficient redundancy
that might just come in handy down the road 8-)

If it turns out the filesystem you're allocated is too precious for
them to be able to accommodate the above, then make sure you put the
BackupPC TopDir on the most reliable and expandable filesystem
(probably the expensive one) and just use a big cheap drive for the
scratch DumpDir filesystem targeted by their tape system.

Disclaimer - above is based on background knowledge from general
experience, if it conflicts with what anyone else tells you here
regarding BackupPC, best to follow them rather than me.



Re: [BackupPC-users] What is the simplest solution for single PC and USB external HD ?

2011-10-21 Thread hansbkk
On Fri, Oct 21, 2011 at 4:03 AM, John Smith daisy1...@hotmail.com wrote:
 I have dual boot computer with two hard drives.  One drive has a
 Vista install and the other has Debian, Swap and two more partitions.

 I installed using aptitude.  The program now works and backs up /etc
 to /var/lib/backuppc.  I would like to be able to back up Vista using
 MS backup program and also be able to back up Debian with the
 same external HD using backuppc.

John,

I am doing this in certain situations, and as people point out it's
not what BackupPC is designed for, but it's doable and one way to
start the learning curve as you say.

I'm sure you're aware that BackupPC requires a *nix filesystem; I
believe current Debians default to ext4, which should be fine, as would
be ext3.

Windoze wants NTFS, and if you're using the built-in system imaging
solution, that will create its own directory in the root of whatever
partition you point it to.

I would advise using whatever drive partitioning tool you're
comfortable with (I use a bootable Parted Magic disc) to divide up
your external HDD something like (this is an example from something I
set up for a client):

(empty space A) [ extended: (30% logical NTFS) (empty space B) (30%
logical ext3/4) (empty space C) ]

A is for future primary partitions, perhaps bootable - in my case I've
got a GRUB2 standalone boot partition containing various utility ISOs
I can boot into, then a couple regular distro boot partitions I
backup-restore for testing purposes, one of which is now Ubuntu
containing a current BackupPC install - everything but the data.

All my BackupPC data is under a BPC_topdir folder in the root of the
ext3 partition, including the logs and config folders, which are the
targets of symlinks back in the Ubuntu install so that BackupPC thinks
everything is actually in its default locations, while actually
everything it needs is in one self-contained partition.

Windows 7 periodically backs up the C: drive to a cold-metal restore image
on the NTFS partition, while BPC does its magic (much more frequently),
keeping all the variable user data backed up with historic
version control. It doesn't need to be limited to the single W7 host
of course; backing up everything across the network is what it's
designed for.

The drive rotates among the various PCs at the client, allowing both
the system images and the data backups to be stored on one external
drive. That drive gets cloned occasionally and the copy stored
off-site. Works well for that situation - the client didn't want to
dedicate a PC to just doing backups, and the imaging side of things
works much faster locally than it would over the network - it's eSATA
rather than USB as well.

Note that an imaging program like StorageCraft's ShadowProtect (which
I recommend) or Acronis will give much more power and flexibility than
the built-in Windows backup programs, but at least with the W7 version
I've found the latter to be fine for basic usage.

Hope this helps. . .



Re: [BackupPC-users] Minimal cygwin install?

2011-10-05 Thread hansbkk
On Wed, Oct 5, 2011 at 5:01 AM, Les Mikesell lesmikes...@gmail.com wrote:
 I've always just done a full install of Cygwin where I needed it, but
 now I'm looking for an installer package that would be easier for
 others to use.  Is there anything like cwrsync that also includes
 sshd?

Les,

I've found that with recent versions of Cygwin there aren't any registry or
other issues preventing a fully portable (as in, runs from any
path/drive letter/external device) non-installation setup - IOW, get
it running on one machine and then distribute via a zip file or
however you like.

If you google portable cygwin you'll find a very confusing mishmash
of only occasionally accurate information, but I've spent a fair bit
of time testing and configuring things without anything special and
it just works.

My X:\Cygwin tree contains setup.exe, a download.bat to do a
download-only update into a local 0-setup-pkgs folder, and an
install.bat to update/add to the local apps from that setup folder
if needed.
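
As an illustration only - the flag names are Cygwin setup.exe command-line
options, but the mirror URL and package list here are just placeholders:

    rem download.bat: refresh the local package mirror
    setup.exe --download --quiet-mode --no-admin ^
      --site http://mirrors.kernel.org/sourceware/cygwin/ ^
      --local-package-dir X:\Cygwin\0-setup-pkgs ^
      --packages openssh,rsync,cygrunsrv

    rem install.bat: install/update from that local mirror only
    setup.exe --local-install --quiet-mode --no-admin ^
      --root X:\Cygwin ^
      --local-package-dir X:\Cygwin\0-setup-pkgs ^
      --packages openssh,rsync,cygrunsrv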

The sshd setup, cygrunsrv and all that still needs to be run on each
client of course, and this part is what is then non-portable; but all
the regular tools work without anything special needed in the
registry, provided you set up any path modifications and environment
variables with (again, centrally managed) batch files.

Let me know if you want any further specific details, directly
off-list would be fine.

Hans



Re: [BackupPC-users] Round-tripping config.pl between hand-coding and the web interface

2011-09-09 Thread hansbkk
On Fri, Sep 9, 2011 at 7:30 PM, Holger Parplies wb...@parplies.de wrote:

 A comma at the end of a Perl list without following elements is a *purely
 cosmetic* thing. If you want one there, fine, put it there.

If you are addressing me, I wasn't proposing that at all. I believe
Bowie was simply pointing out that it's tolerated, as a more
valid workaround (kludge if you prefer) for the issue I was
speculating was behind someone using that incorrect syntax.
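
For anyone following along, the construct in question is just a trailing
comma at the end of a Perl list, which is legal; a trivial illustration
(the option name is from the standard config.pl, the values are invented):

    $Conf{BackupFilesExclude} = {
        '*' => [
            '/proc',
            '/tmp',    # a trailing comma after the last element is harmless
        ],
    };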

 Your backup really did not finish without errors. It could not, because what
 you actually requested was unfulfillable.

Yes I realize that the problem was caused by my error. I wasn't
complaining, nor did I advocate that BackupPC should do anything
different than what it does. My intention was simply to post the cause
of my problem in the interest of helping others. I'm not sure where I
got the error-causing snippet, but since it's out there somewhere,
it could cause the same problem for someone else.

Now that you mention it, it would be nice if there were an error
logged with more information as to the cause, but personally I reckon
this is just one of those edge cases that the developer(s) are fully
justified in ignoring if they prefer.

 The *solution* would be to *patch the code* to retain the comma at the end of
 a list.

I agree, but wouldn't presume to request such an enhancement myself.


 You seem to prefer kludges over solutions. That's fine. Just don't advocate
 them on the list. In fact, don't even present them here without at least a
 prominent notice to the fact that they are no more than ugly kludges. *We*
 will suffer the consequences of people using kludges, not you. In fact, you
 just gave the best example yourself: someone else presented a kludge
 somewhere, you incorrectly copied it, had problems, and came to us for help.

 All of that said, the '' thing does *not even really qualify* as a kludge -
 the end of the list still has no comma. It's just plain nonsense.


Again, I never advocated using a comma at the end of the list - I was
simply responding to Bowie's (I'm sure well-intentioned) post.


 If you can come up with an implementation that can retain the comments within
 the block, I'm sure Craig will happily accept the patch. I'll just say I don't
 think it can be done (correctly, that is, not some kludge that only works half
 of the time - I shouldn't normally have to explicitly state that) without
 reimplementing the Perl parser.

Again, I wasn't complaining, nor advocating that BackupPC do anything
different from what it currently does. I was simply posting a factual
observation, sharing my experience in the interest of furthering
fellow noob's knowledge, helping them avoid making the same mistakes.

 I believe, using both the BackupPC web GUI and a text editor to modify your
 configuration files is not officially supported. You do that at your own risk
 and shouldn't complain about problems it causes.

I realize that, and thought my posting details on my precautionary
procedures would sufficiently demonstrate my awareness of the fact
that I'm living on the edge.

Thanks as always for your detailed and thoughtful feedback.

--
Why Cloud-Based Security and Archiving Make Sense
Osterman Research conducted this study that outlines how and why cloud
computing security and archiving is rapidly being adopted across the IT 
space for its ease of implementation, lower cost, and increased 
reliability. Learn more. http://www.accelacomm.com/jaw/sfnl/114/51425301/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] xxShareName = /cygwin ??

2011-09-09 Thread hansbkk
On Fri, Sep 9, 2011 at 12:55 PM, Les Mikesell lesmikes...@gmail.com wrote:

 my main question: can rsync be made to treat the meta-filesystem root   
 /cygwin as a ShareName?

 Seems like something you could test easier than explaining the question 
 here...  Commands like 'ls -R /cygdrive'  can recurse over the letters that 
 appear so it seems theoretically possible.

Perhaps I haven't been thorough enough in my testing, but I have tried
it and couldn't get it to work. And as you point out the underlying
protocols seem OK with the idea, which is what made me want to know
whether BackupPC refuses to do this by design, or whether there's
perhaps a variable switch to flick somewhere, or. . .

But I guess not 8-(


 Alternatively, you could just have the users rsync the contents regularly to 
 a matching place on a dedicated file server that backuppc backs up to keep a 
 history.

Although a good suggestion in general - I've been using Unison to keep
a large personal data set in sync across my multiple desktops for
years - in this particular situation it would be more difficult to
implement than I'd prefer.

Same goes for Adam's ideas; I'm grateful for the suggestions though,
we may well end up having to do something along those lines.


The key concept I'm shooting for here is to try to get BackupPC to grab:

 whatever data is currently mounted on a given client PC, without having to 
 know about it ahead of time.

Just in case there's any reticence based on privacy concerns, these
are 100% company-owned systems, and the staff are fully aware that
nothing personal is allowed near them.

--
Why Cloud-Based Security and Archiving Make Sense
Osterman Research conducted this study that outlines how and why cloud
computing security and archiving is rapidly being adopted across the IT 
space for its ease of implementation, lower cost, and increased 
reliability. Learn more. http://www.accelacomm.com/jaw/sfnl/114/51425301/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] xxShareName = /cygwin ??

2011-09-09 Thread hansbkk
On Sat, Sep 10, 2011 at 12:01 AM, Richard Shaw hobbes1...@gmail.com wrote:
 I've only skimmed this thread so I apologize if I miss something but I
 was thinking. Might it be a better idea to make the user at least
 somewhat responsible for the backup?


On Sat, Sep 10, 2011 at 12:00 AM, Les Mikesell lesmikes...@gmail.com wrote:
 In that case, I can't help thinking that the right solution would be a
 version control system designed for concurrent access with reasonable
 client interfaces for the conflict resolution operation.   This is

You may be right in an ideal world, but I'm still shooting for
something that requires a minimum of work, especially changes in user
behavior, training, etc.

AFAIC I'm still planning to try for my ideal - finding a way to just
backup everything currently mounted - putting in place the
restrictions I mentioned earlier to keep things as manageable as
possible. Since investigating that will be a bit of a capital-P
Project, I probably won't get a round tuit for a few weeks yet.

Plan B will probably be to keep the roaming disks the canonical
location, automate as much as possible their frequent rsync'ing up to
a central location and then using BackupPC to back it up from there.
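
Something along these lines per roaming drive, as a sketch (the drive letter,
destination host and path are only placeholders):

  rsync -a --delete /cygdrive/f/ filer:/srv/roaming/andys-drive/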

Now that I think about it, implementing the latter for the known
drives first would be a quick win, leaving my more out there idea
as a later-on belt and suspenders process. Once a drive gets
converted into the more stable group, it can be excluded from the
grab everything. Anyway, obviously thinking out loud here, don't
want to waste y'all's time any further.


Thanks both Richard and Les for your suggestions. . .


Very off-topic:

 I think Notes is still around, but it is at least as much of a toolkit

My career was very Notes-centric from around 91-Y2K, starting with v2;
IMO IBM's takeover and Ray Ozzie's departure was when Notes jumped the
shark, and given its history over the decade since, I *know* I'm not
going back 8-) The programmability came later on (v4+), but even out
of the box there was a lot of great functionality - pretty much
invented the concept of groupware back when the corporate world was
just starting to adjust (understatement!) to plain email and the
Internet still forbade any commercial activity. The replication model
was very kewl for intermittently-connected users, with a rare
combination of user-friendliness and central management I haven't come
across in the open-source world. . .

--
Malware Security Report: Protecting Your Business, Customers, and the 
Bottom Line. Protect your business and customers by understanding the 
threat from malware and how it can impact your online business. 
http://www.accelacomm.com/jaw/sfnl/114/51427462/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] xxShareName = /cygwin ??

2011-09-08 Thread hansbkk
Our users have a variety of storage media from ordinary flash drives
and SD cards to eSata or firewire HDDs, and even some swappable
internal HD's. Much of this data is as important as - or sometimes even
more important than - what's on the fixed drives.

Just as the notebook users are only intermittently attached to the
LAN, these various drives are only occasionally attached to the
notebooks.

It's obviously a management challenge to set policies that will try to
maximise the security of this data, but my question here is
specifically about setting up config.pl so as to avoid having to create
and maintain customized hostname.pl's.

I've tried to create an RsyncShareName = /cygwin  - note NOT
specifying a drive letter, the idea being that if a given user has
their F drive inserted one day and their H another, BackupPC will just
grab whatever's there and mounted.

Downside is that we'd also be backing up data from optical media that
happened to be in the DVD drive at the time, but that's a price we're
willing to pay, perhaps handled with strategic exceptional excludes if
it proves worth the headaches.

I haven't been able to get this to work so far. I've taken the
--one-filesystem out of my rsync args, tested with no excludes at all,
no dice.

Is there a way to accomplish this? even if it's a kludge workaround. . .

Or is this a truly idiotic idea that should indeed be prevented by design?

--
Doing More with Less: The Next Generation Virtual Desktop 
What are the key obstacles that have prevented many mid-market businesses
from deploying virtual desktops?   How do next-generation virtual desktops
provide companies an easier-to-deploy, easier-to-manage and more affordable
virtual desktop model.http://www.accelacomm.com/jaw/sfnl/114/51426474/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-08 Thread hansbkk
I managed to track down the source of my original problem, and decided
it was worth posting to the end of this ridiculous thread just in case
it's useful for future googlers. The cause was the empty value - two
quotes at the end of this:

$Conf{RsyncShareName} = [
  '/cygdrive/c',
   ''
];

So don't do this - it causes an otherwise fine backup (AFAICT) to show
on the main status screen as needing attention, but the only error
in the log is a

Backup aborted ()

at the end.

I believe I may have gotten the syntax from an example of another
rule, perhaps the files exclude, and I believe it might be useful when
you're working with long lists (perhaps externally generated/sorted)
and want to be sure to have a comma at the end of every line.

But not with the RsyncShareName variable. Here is a correct example:

$Conf{RsyncShareName} = [
  '/cygdrive/c'
];

Please don't reply to this unless you feel it's important, e.g.
correcting something for the record. If you want to give me meta
feedback - e.g. suggestions for improving the manner in which I
interact with the list - feel free to do so in response to the new
thread I've just started here:
http://blog.gmane.org/gmane.comp.sysutils.backup.backuppc.general

I just can't bear the thought of this one going on any further 8-)

--
Doing More with Less: The Next Generation Virtual Desktop 
What are the key obstacles that have prevented many mid-market businesses
from deploying virtual desktops?   How do next-generation virtual desktops
provide companies an easier-to-deploy, easier-to-manage and more affordable
virtual desktop model.http://www.accelacomm.com/jaw/sfnl/114/51426474/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] xxShareName = /cygwin ??

2011-09-08 Thread hansbkk
Thanks to all for answering, and particularly Holger for your
thoughtful response. Before approaching the social/concept side, I'd
like to be clear  - from a purely technical POV - about my main
question: can rsync be made to treat the meta-filesystem root
/cygwin as a ShareName?

--

Since the larger picture, although perhaps off-topic, seems to be of
interest I'll pursue it a bit.

Regarding your comments on structure - unfortunately I (as must we
all) deal with reality as it presents itself, and ultimately have
little say in day-to-day ICT usage policies. Believe it or not it's
safe to say I'm the most structure-minded person here.

Fortunately some of the social factors are supportive, perhaps
different from the norm in the western corporate world:

- the users are heavily invested in the value of the data; they will
personally suffer much more from its loss than their employer

- management oversight controls can be put in place, no problems with
enforcement; strong culture of obedience, draconian penalties the norm


 no, that will not work. Simple reason: your backup history will contain the 
 files backed up on one day, and the next day, when the drive isn't connected, 
 they will appear to have been deleted (or changed to what now happens to be 
 connected under the same path). Inevitably, the day the disk *is* connected 
 will end up being an incremental backup and will thus expire, whether or not 
 you have more recent backups of the data. Even full backups can expire while 
 older backups are still kept if you use an exponential scheme.


In order to highlight the technical problems I perhaps overstated them here.

When a given drive is visible to BackupPC, it *does* have a home on
one particular host and will usually be mapped to the same drive
letter. True, that drive is sometimes there and sometimes not, so
let's try to overcome the problems you've raised as simply as
possible.

Preserving ancient historical versions of files is not a priority, so
let's assume full backups only and a solid retention policy, always
discarding oldest first. Therefore when a restore is needed, we only
need to look for the most recent set that includes that drive.

But there is a possibility that another client machine got backed up
with the drive attached more recently than its usual host - I'm not
depending on this, but would like if possible to take advantage of it,
especially since backing it up from multiple locations will only cost
time and bandwidth, not disk space. In the restore scenario, say the
owner is Andy - he'll know that Betty and Charles have also been working
with that drive, so it'll be relatively easy to check their host
records as well.

Which brings me back to my original question - I'd really like to know
we're grabbing whatever data is currently mounted on a given client
PC, without having to know about it ahead of time.

 my question here is specifically to try to set up config.pl to avoid having 
 to create and maintain customized hostname.pl's.

Is that possible?

Thanks again for your (plural) help.

--
Why Cloud-Based Security and Archiving Make Sense
Osterman Research conducted this study that outlines how and why cloud
computing security and archiving is rapidly being adopted across the IT 
space for its ease of implementation, lower cost, and increased 
reliability. Learn more. http://www.accelacomm.com/jaw/sfnl/114/51425301/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Round-tripping config.pl between hand-coding and the web interface

2011-09-08 Thread hansbkk
Started a new thread, originally part of this too-long one:

http://adsm.org/lists/html/BackupPC-users/2011-09/threads.html#00026

On Fri, Sep 9, 2011 at 3:57 AM, Bowie Bailey bowie_bai...@buc.com wrote:
 Just a note that a comma at the end of the last element in an array or
 hash is perfectly fine in Perl.

 This is also a correct example:

 $Conf{RsyncShareName} = [
  '/cygdrive/c',
 ];

Perhaps someone implemented the

last item,
''
 ];

syntax because the web admin script strips that ending comma out.

It also removes

# comments

within a given variable's code block, but not those maintained between
variables.

Otherwise I haven't had problems using both the web admin and
hand-coding. I just use the web interface to start with and stick to
its preferred syntax/structure when editing manually.

However I do use diff tools to double-check against a master
occasionally, and keep dated backups of canonical snapshots in a
central location - next step revision control. . .

Note I'm working on 3.1, more recent behaviour may be different.

--
Why Cloud-Based Security and Archiving Make Sense
Osterman Research conducted this study that outlines how and why cloud
computing security and archiving is rapidly being adopted across the IT 
space for its ease of implementation, lower cost, and increased 
reliability. Learn more. http://www.accelacomm.com/jaw/sfnl/114/51425301/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-04 Thread hansbkk
On Mon, Sep 5, 2011 at 6:53 AM, Adam Goryachev
mailingli...@websitemanagers.com.au wrote:

 I think the answer to most of this might have been:
 apt-get --purge remove backuppc

 This should remove every trace of the package ever having been
 installed. I only mention this because it might come in handy in the
 future for you.

 In addition, you do realise that every distribution of linux can be
 installed with, or without, the graphical user interface (X Windows +
 window manager/etc). In fact, you could even install and setup your
 server with it installed and then un-install it later, or vice-versa


Adam,

Thanks so much for your helpful message and especially for your
forbearance and encouragement.

Yes, I do, and I have been playing around over the years with
Fedora/CentOS a bit, but have found the learning curve just a bit less
steep on the Debian-esque distros and therefore built up a bit more
experience there. Since my learning objectives are aimed at BackupPC
for now, I wanted to eliminate as many roadblocks as I could.

The fact that a post-disaster recovery scenario would most likely
involve relatively untrained people was also a factor. If a recovery
server could be provided via a customized BackupPC LiveCD, that would
greatly improve the resilience and time-to-recover of our DR plan, and
(again my perception is) that there is a great variety of
user-friendly tools for building LiveCD custom distros in
Debian/Ubuntu than Fedora/CentOS.

All of which I recognize is down-the-road pie-in-the-sky pipe-dreaming
from my current state of knowledge.


On Mon, Sep 5, 2011 at 12:03 AM, Timothy J Massey tmas...@obscorp.com wrote:
 +1.  This was the exact point I was trying to make.
 As an additional point to the original poster:  you said somethng like OK,
 I'll start over, but with my original config and pool.. NO!!!  Start with a
 100% clean setup.  Make it work, and document EVERY LITTLE THING you do to

I just meant I'd keep them when wiping the drive. In the case of the
config.pl, for later reference - I'm using diff tools to check against
the original, changing one parameter at a time and then testing.

Re the cpool, initial experiments start with it empty, testing
against a small test dirstruc, but once I start working again on the
real system-drive excludes, I'll pre-populate it to save
unnecessarily waiting for 18GB to come back over the wire.

On Mon, Sep 5, 2011 at 12:15 AM, Timothy J Massey tmas...@obscorp.com wrote:


 So, you understand that each distribution is going to set things up 
 differently, which is very likely to contribute to future problems, yet you 
 decide to voluntarily deal with such problems.  All of this after stating 
 that you do not have sufficient skills to even know when you *might* be 
 running into problems.

I plan on experimenting with my advanced goals only after the actual
backups are working successfully, leaving that setup alone and working
with a separate test system. And I do think I have (or am developing)
the skills to be able to see when things are going wrong. Such testing
is how I like to learn, pushing the envelope of what's possible.


You want to dangle a hard drive onto a production server, put BackupPC on that 
server and consider that a backup?  This is wrong on *SO* many levels.  It's 
the wrong configuration, it's the wrong tool and it's serving a purpose that 
makes almost no sense.

Sorry if I wasn't more clear. I didn't mean a production server in
the sense of adding BackupPC to a server fulfilling another function,
I meant the production BackupPC server, the one actually doing real
backups, as opposed to my testing environment.


 Why would you create a solution such  that when one system fails, you risk 
 losing both the production data AND THE BACKUP DATA all at the same time.  
 Imagine a power supply failure.  Couldn't it take out both hard drives?  Sure 
 can.  How about a malicious user that runs rm -rf /.  Gonna wipe out the 
 backup data too.  I can come up with a DOZEN scenarios with zero effort.


I don't understand how you get that, in fact I think the opposite.
That would be true if I were relying on RAID, leaving my multiple
drives in sync with each other, but in fact the three drives in
rotation will each be completely independent instances - here's the
link to my original post asking for feedback on that:


http://comments.gmane.org/gmane.comp.sysutils.backup.backuppc.general/27289



 If a 35% solution works for you, great.  But most people would usually prefer 
 a more useful one.

Of course if I'm given solid details on why my scheme shouldn't work I
won't implement it.

Or if I thought it necessary, we could implement this scheme *in
addition* to a traditional static instance of BackupPC, but at this
point I believe that would only be necessary once sufficient history
won't fit on a single large drive. In which case the offsite rotation
drives would only hold a more recent subset of that stored on the big
RAID array, but 

[BackupPC-users] Would a noob-oriented HowTo be useful on the wiki?

2011-09-04 Thread hansbkk
To all

I will do my best not to abuse the real generosity I have seen on the
list every day over the years, by not posting unnecessary questions or
ones not relevant to BackupPC.

I will also do the best I can to give back to the project to the
extent I am able to help - certainly more down the road than at the
moment.

For example - I have extremely detailed notes on my most recent
step-by-step process - which I'm happy to say is proceeding
successfully with a matched set of Ubuntu's Lucid 10.04 (current
LTS) server and its official BackupPC package, rather than mixing the
latest Natty with the older package.

These notes could easily be cleaned up into a Basic installation of
BackupPC on Ubuntu howto for Linux beginners, and I would be happy to
post that to the wiki, if TPTB on the list think it would be helpful
to the project.

On the other hand if more people feel like Tim (at least based on my
interpretation of what he wrote), perhaps you'd prefer to only
encourage people already advanced in Linux skills to implement
BackupPC? It's true having more noobs trying it out would increase the
need for support here.

In which case I won't try to encourage new adopters that don't fit that profile.


Here's a snippet of a contribution for future noob googlers - I don't
claim they're original but didn't keep the references in my notes, I
believe they did come from this list:

You can monitor the progress of a backup by opening two console
windows from the server before initiating:

watching the growth of the pc folder with du:

  - watch -n 10 -d 'sudo du -h --max-depth=4 /var/lib/backuppc/pc/{host_name} | sort -h -r'

and

watching the files opened by the backuppc process, matching on an
appropriate string

  -  watch -n 5 lsof -n -u backuppc | egrep 'cygdrive' | awk '{print }'

The latter assumes your client is windoze with the traditional
cygdrive path setup - using backuppc as your search string will give
more general results.


Let's take this opportunity to close that mega-thread, for my
excessive contributions to which I apologize.

Thanks again for your patience.

--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-04 Thread hansbkk
On Mon, Sep 5, 2011 at 9:28 AM, Les Mikesell lesmikes...@gmail.com wrote:
 On Sun, Sep 4, 2011 at 12:41 AM,  hans...@gmail.com wrote:

 Re packaging issues, I'm not trying to figure them out at all, AFAIC they're
 a black box that just works - I plan to just observe their results and
 stick to their policies (I didn't realize BPC permissions could vary from
 one distro to another).

 Thinking of it as a black box only works if you don't plan to make
 your own changes.   The disto packagers modify things to fit programs
 into the way each distribution works, so don't expect the components
 to be in the same places, have the same names, owners, permissions,
 etc. between .deb and .rpm packages.  It is up to the packager to make
 those decisions and they are fairly arbitrary.

I just meant I didn't want people to waste their time helping me
troubleshoot packaging-specific issues.

I also don't plan on making changes beyond the symlink/mount
redirection to a dedicated TOPDIR drive, which should be transparent.

 Re OS choices, I don't have the access, knowledge or desire to do my initial
 learning/experimentation in the production CentOS CLI environment; for many
 aspects it's so much easier to work with a distro like Ubuntu at this
 stage.

 That makes no sense at all to me.  CentOS will install and work just
 the same as ubuntu unless you have some unusual hardware and if you
 want a GUI snip

I meant easier *for me*, simply that I've personally climbed a little
higher up Ubuntu's learning curve, and yes I've found a GUI helpful
for certain things - although I'm working with the server edition,
I've installed ubuntu-desktop, but am manually bringing it up via
startx only when needed.

I admit it's a bit of a crutch, and I'm actively working toward
learning how to do everything from the text console, as that will of
course be my only option once I'm managing BackupPC in production -
that environment doesn't have X at all, and in fact I won't have
access to the physical console anymore.

Thanks for your help Les, and if it isn't out of line I'd like to ask
that we end this mega thread, I'm feeling very kreng jai toward the
list and don't want to take any more of your collective time. . .

--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-03 Thread hansbkk
On Sat, Sep 3, 2011 at 11:09 AM, Timothy J Massey tmas...@obscorp.comwrote:

 But would probably be a very good idea.  What would be an even better idea
 would be to grab a spare PC (or a virtual guest) and test it from a
 completely clean installation.  And document the *heck* out of what you do:
  you *will* be doing it again (and again and again).


Well the whole thing is a test system, and I'm not that concerned with
figuring out what went wrong vs moving forward, so I guess I'll just wipe
and restart with a clean OS.

Since I want to use the BackupPC 3.1 package (eventual production system
will be on CentOS5), while I'm at it I'll use the Ubuntu version it's
designed for, Lucid 10.04, rather than the latest Natty 11.04.

Hopefully that will eliminate the problems I'm seeing un-/re-installing
from the package system.

I plan to keep the pool folders and of course my long-tweaked config.pl, but
will start off from the clean install with as close to defaults as possible
with a small static target share to test with, then make the changes a
little at a time only after I've got the basics working right.

Which as you say I should've done from the start. . .

In the meantime there are a few unanswered questions in the thread above - if
anyone has the information to contribute more detailed responses, I'm sure it
will help others googling later on. . .
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Swapping drives for offsite - the whole shebang

2011-09-03 Thread hansbkk
Will a BackupPC 3.2 system just work with a conf/log/pool/pc filesystem
moved over from 3.1, or is there an upgrade process run on the data?

If the latter, I imagine that would make it difficult to move that data back
to 3.1?

Just thinking of disaster recovery scenarios, maybe building a custom
live-cd boot disc to store offsite with the data drive.
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-03 Thread hansbkk
On Sun, Sep 4, 2011 at 10:49 AM, Jeffrey J. Kosowsky
backu...@kosowsky.orgwrote:

 Just a piece of friendly advice... you seem to have posted dozens of
 posts in the past 24 hours or so... you keep making multiple, often
 non-standard or nonsensical changes to a standard
 configuration... and are asking multiple questions as you dig yourself
 deeper.


Thanks very much Jeffrey, and to everyone, both for specific answers to my
questions and for your valuable general advice and feedback.

About my messing things up by my lack of *nix knowledge and making things
too complicated, you're completely right and I apologize for wasting your
time with my scattered approach to the learning curve.

Re packaging issues, I'm not trying to figure them out at all, AFAIC they're
a black box that just works - I plan to just observe their results and
stick to their policies (I didn't realize BPC permissions could vary from
one distro to another). If necessary, I will now be able to just roll back
to a virgin state via CloneZilla, rather than un-installing.

Re OS choices, I don't have the access, knowledge or desire to do my initial
learning/experimentation in the production CentOS CLI environment; for many
aspects it's so much easier to work with a distro like Ubuntu at this
stage. Once I'm confident I've got the BPC side of things working just
right, the CentOS guy can set up the production server however he likes.

My ultimate goal is to have a self-contained BackupPC HDD - conf and log
physically under TOPDIR - which in the event of a disaster can be mounted
to a new host running an arbitrary distro, possibly needing to be created by
a staffer even more ignorant of Linux than myself supported by a
step-by-step howto. Ideally I'd like to figure out how to create a
customized BackupPC LiveCD that could be stored with the drive(s) offsite.

These goals also support my doing the learning/configuration work on an
alternative distro.

But for now, I am starting from scratch with 3.1 on Lucid, working step by
step in departing from the defaults, testing and keeping careful notes in
case I need to come back here with further issues, so as not to waste you
guys' time further.

Thanks again for your help.
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
Running 3.1.0, installed via synaptic on Ubuntu 11.04.

After spending a lot of time refining my excludes, thinking windows open
files were preventing a successful full backup completing, I tried making
the whole target one very small and static directory tree with the same
result.

There isn't anything indicating a problem in the logs, other than Backup
aborted at the end, just before the saved as a partial backup message and
after DeltaGet phase 0 and phase 1.

I've enabled rsync's log but it just lists the files, no errors. Similar
results when running the _dump script manually.

Any ideas? Happy to provide further details if it helps track things down, I
suspect everything's actually backing up, but would like to get the dot on
the i from the status page.

Another issue, I'm sure unrelated, is that I can only access the /backuppc
web admin interface from the BPC server's console (as localhost), not over
the network - I confirmed that the apache.conf contents were included in the
global conf.

Thanks in advance. . .
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
I just ran the _dump script manually again, this time fully deleting
everything under TOPDIR except the pool directories and with the -v verbose
option.

The ending of the process was the same, except for a link host_name just
before the end aborted message.

I'm thinking maybe a permissions issue?

I haven't been able to find in the docs a listing of what the permissions
are supposed to be, and as a *nix noob, I may very well have screwed things
up in that area messing around. I believe I set everything from TOPDIR down
as owned by user backuppc and group www-data.

I'd appreciate a pointer to how it's supposed to be, and in the meantime
will try a complete uninstall and re-install (moving my conf and pool data
elsewhere first) and see how that goes. . .





[BackupPC-users] Swapping drives for offsite - the whole shebang

2011-09-02 Thread hansbkk
Here's an idea I have completely unrelated to my problem posting, looking
for feedback.

Goal: using single large HDDs as backup media, rotating them offsite, in as
simple and bullet-proof a way as possible.

Strategy:

  two hard drives
one with the base server OS installed, install BackupPC 3.1 via package
management
the other set up as one big partition (probably ext3), top-level folder
BackupPC_TOPDIR

  under the latter: create conf and log

  move the contents of /etc/backuppc and /var/log/backuppc to the above
two new directories,
and the contents of /var/lib/backuppc to the TOPDIR location itself, so
everything related to BackupPC is there in one place

(here's where I need to get the proper permissions, ideally as the
specific chmod/chown/chgrp commands to use - any pointer appreciated)

leaving the original directories empty as mount points

  use fstab to first mount the second drive at say /mnt/sdb3, then (using
a bind mount) /mnt/sdb3/BackupPC_TOPDIR at /var/lib/backuppc,
then (more bind mounts) /var/lib/backuppc/log at /var/log/backuppc and
/var/lib/backuppc/conf at /etc/backuppc - see the fstab sketch below
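
A minimal sketch of what that fstab could look like, assuming the data
partition is /dev/sdb1 and the layout described above (device names and the
ext3 choice are placeholders, not a tested recipe):

  /dev/sdb1                  /mnt/sdb3          ext3  defaults  0  2
  /mnt/sdb3/BackupPC_TOPDIR  /var/lib/backuppc  none  bind      0  0
  /var/lib/backuppc/log      /var/log/backuppc  none  bind      0  0
  /var/lib/backuppc/conf     /etc/backuppc      none  bind      0  0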

I should be able to clone this second drive to another two, and
transparently swap them out, effectively creating three entirely independent
but functionally identical BackupPC servers. One is live, the second on the
shelf on site, the third stored at a secure offsite location. Replace A with
B, take A offsite and bring C back to sit on the shelf, next time replace B
with C etc, rinse and repeat.

Only downside is if the boss needs to restore a specific day's version of
the corporate vision statement RIGHT NOW and that happens to be offsite, but
that seems a small price to pay for the simplicity of the scheme.

Obviously only works as long as everything fits on one disk - in my case not
a problem.

I suppose another server with the full historical set on an LVM-over-RAID
array would handle both of those issues; still, as long as the single drive
held at least one full snapshot, I think the bullet-proof-ness of my scheme
is better than an off-site server mirrored over a WAN.

Feedback on any gotchas, and the permissions details would be greatly
appreciated. . .
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
On Sat, Sep 3, 2011 at 2:23 AM, Les Mikesell lesmikes...@gmail.com wrote:

 The ubuntu package should have set everything up correctly.   You
 didn't change TOPDIR or mount something underneath it after the
 install, did you?

 --
  Les Mikesell
   lesmikes...@gmail.com


8-)

Of course I did Les, precisely as outlined in the post crossed in the mail
just now.

I don't have anything else under the TOPDIR, other than keeping a few of my
own scripts in the conf directory.

A related factor, as I mentioned: I've been wiping and starting over with
different levels of log verbosity while troubleshooting, letting BPC
re-create the logs and pc folders, but leaving the pools in place so all the
stuff doesn't actually have to be transferred over the wire all over again.

I monitor the process with two watch consoles - one doing a du on the pc
folder, the other filtering lsof for the backuppc user - plus a CPU activity
monitor showing rsync kicking off on the win client, and everything goes
through just fine. For a while (while I was tweaking all my excludes on the
system drive) it was filling in some of the previously uncaptured files, but
the last half-dozen runs have come in with the same filecount and du size
every time.

It just never actually says the full was completed.
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Offline backuppc mirror with rotating external drives

2011-09-02 Thread hansbkk
Was this post in response to mine just now asking for feedback on pretty
much the same topic? If not, pretty amazing example of synchronicity (in
Jung's sense, not regarding data mirroring 8-)
I considered using RAID mirroring or other partition-cloning methods, but at
this point I'm thinking I prefer the simplicity of multiple BackupPC server
instances operating with the same config.pl to archive the same data, each
one thinking it's the only one (kind of like my various levels of
wife/mistress/girlfriends - I know completely gratuitous, just couldn't
resist 8-).

On Sat, Sep 3, 2011 at 2:43 AM, Pavel Hofman pavel.hof...@ivitera.comwrote:

 Hi,

 Since the $SUBJ topic appears in this mailing list on a regular basis, I
 wrote a short description of the solution we have been using for a few
 years, incl. the scripts. It is by no means a comprehensive writeup, nor
 is it a rocket science. Yet it may provide inspiration.


 http://blog.ivitera.com/pavel/it-infrastructure/company-backuppc-server-with-offline-copies

 There are a few other backup-related posts too, perhaps someone will
 find them useful for his work.

 Regards,

 Pavel
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Swapping drives for offsite - the whole shebang

2011-09-02 Thread hansbkk
On Sat, Sep 3, 2011 at 2:59 AM, Les Mikesell lesmikes...@gmail.com wrote:

 There has been a vast amount of discussion on this list covering this
 topic so you should probably wade through the archives.
 My approach is a 3-member software RAID1 where 2 drives are always in
 the server and the 3rd is a set rotated offsite.  This gives you an
 always-available current history plus a disaster-recovery copy and a
 spare or two.   My drives are 750gig (set up some time ago) and I just
 recently got a laptop-size drive to work reasonably well as the
 rotating member - which took some tweaking since it has 4k sectors and
 the partition had to be aligned right.   With this scheme you only
 have to unmount momentarily while breaking the raid, but realistically
 you can't do backups while the new member is syncing because the disk
 is too busy.   Others are doing something similar with LVM snapshots.
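
For context, the "breaking the raid" step here is presumably the usual mdadm
routine, something like this sketch (/dev/md0 and /dev/sdc1 are only example
names, not from Les's setup):

  mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1   # release the member headed offsite
  mdadm /dev/md0 --add /dev/sdc1                       # later, re-add the returning disk and let it resync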


Yes I've waded through many megs' worth of the archives researching this,
and discussed the topic with you in fact (maybe six months ago?), and my
post is the result of my thought process after digesting them.
Obviously my proclivity for simplicity is overriding the advantages of the
other methods. For my situation, one key consideration is for a non-geek
staffer to be able to get the data back if there's a fire/explosion whatever
while I'm away on holiday or otherwise unavailable - the company doesn't
have much depth in ICT support. I could walk them through getting Ubuntu+BPC
installed and maybe the fstab edited, but wouldn't want to add creating an
array from only one member etc. into the mix. . .

 If you have good network bandwidth you can also simply run another
 independent instance elsewhere hitting the same targets.



This last is exactly what I'm proposing, but the independent instances are
just getting swapped out sequentially rather than multiple machines running
concurrently.

So if at all possible I'd really appreciate feedback on the pros and cons
of my specific proposed method - can you (not necessarily you
specifically, Les; you = the list) see particular gotchas I haven't
taken into account?
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
On Sat, Sep 3, 2011 at 3:14 AM, Les Mikesell lesmikes...@gmail.com wrote:

 I don't see any other message yet, but the way to get it right is to
 just mount the partition you want to use for storage in the place
 where backuppc wants it (should be /var/lib/backuppc with the deb
 package).  Or put a symlink there pointing to the location you want.
 If you do this before the install, everything should land in the right
 place and get the right permissions.   The critical things are that
 the pool/cpool/pc directories must all be in the same filesystem so
 hardlinks can work and with versions before 3.2 you can't change the
 TOPDIR location after the initial setup which is already done in the
 deb and rpm packaged versions.   Usually if you get these wrong, you
 get an error when starting the service about not being able to make a
 hard link so I'm not quite sure what is happening


I meant that I'd done exactly as you suggested above, but with bind mounts
rather than symlinks, with the specific locations and steps as outlined in
this post
http://sourceforge.net/mailarchive/forum.php?thread_name=CAOAgVpwU13YitOFF%2BNSXH3rYJVUNu%2B2KjpGqt2STL6sVgpdQ1g%40mail.gmail.comforum_name=backuppc-users
which you've also responded to. Sorry to have multiple threads going at the
same time, but obviously they may be more related than I thought.

I've just uninstalled the backuppc package, then either manually deleted or
verified these were removed:

/etc/backuppc
/usr/share/backuppc
/var/lib/backuppc

and rebooted.

Re-installed, this time without anything extra in the fstab, just letting
the package go where it wants, but at the end of the post-install script I
get a message that the hard-link test failed.

Did I miss something in uninstalling that may have interfered with the
re-install?

Could I please get a pointer to what the permissions should be on these
locations and their contents? That's actually what I was trying to get by
going through the installation again. . .


Thanks much for your help. . .
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
On Sat, Sep 3, 2011 at 3:32 AM, hans...@gmail.com wrote:

 If you do this before the install, everything should land in the right
 place and get the right permissions.   The critical things are that
 the pool/cpool/pc directories must all be in the same filesystem so
 hardlinks can work



Just to confirm yes these are all in the same filesystem - in fact the
whole testing install is in one ext3 partition.

And I'm thinking of running through the install again, this time as you
suggest with my desired mounts in place rather than moving them after the
fact.

But that leads me to still want to know what permissions (chmod/chown/chgrp)
should be on the mount points and/or their targets?

You can tell I'm not fully au fait with this *nix stuff just yet. . .

And if I want to wipe the log and pc folders between test runs, should I
recreate the empty folders and then reset permissions, or just let BackupPC
do it - which I assume it would do correctly?
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Swapping drives for offsite - the whole shebang

2011-09-02 Thread hansbkk
On Sat, Sep 3, 2011 at 3:38 AM, Les Mikesell lesmikes...@gmail.com wrote:

 It turns out that a linux raid1 mirror looks just like the non-raid
 filesystem it contains - or enough that you can mount the single drive
 as if it were a normal partition.  So you can treat the rotated member
 just  the same as your single drive in a recovery scenario.




I'd prefer not to have to deal with the break a mirror/fail the drive/swap
it out/remirror routine. My idea is to simply bring the server down, swap
out the one disk, reboot and walk away.


 One is that a drive failure will mean missed backups.


That's true, but I need to set up an auto-notification of the server going
down to handle all the other possible failure causes anyway, and the B drive
is right there on the shelf ready to go. In fact having it NOT inside the
server eliminates the chance that it would be damaged by one of those other
causes - I've had a failed PSU fry almost all the components in a machine. .
.
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
On Sat, Sep 3, 2011 at 3:50 AM, Les Mikesell lesmikes...@gmail.com wrote:


 The ubuntu package should create a backuppc user and that should be
 the owner of everything under TOPDIR.  I think you need to diagnose
 why the link fails but trying the same operation from the shell (su -s
 /bin/bash backuppc if it doesn't have a shell configure for login).


What is the same operation? I'm not up on how to track down the
postinstall script in the install package - is it just doing an
/etc/init.d/backuppc start?
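
(For future googlers: the manual check Les suggests would presumably look
something like this sketch - paths assume the Ubuntu package's default
TOPDIR of /var/lib/backuppc:

  sudo su -s /bin/bash backuppc
  touch /var/lib/backuppc/pc/linktest
  ln /var/lib/backuppc/pc/linktest /var/lib/backuppc/cpool/linktest
  rm /var/lib/backuppc/pc/linktest /var/lib/backuppc/cpool/linktest

If the ln step fails, hardlinking isn't working between those directories.)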


 I think you are making things too complicate with a bunch of bind
 mounts.  Why not just mount the partition as /var/lib/backuppc and if
 you want it to be self-contained, symlink other stuff there?


The references I've seen, both here in the list and elsewhere - hang on a
sec - yes right in the BPC docs
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Change_archive_directory#Bind-mounting_TopDir

treat them as functionally equivalent.
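
In other words (a sketch only - the source path is just an example), either of
these ends up with the storage visible at the standard location:

  mount --bind /mnt/sdb3/BackupPC_TOPDIR /var/lib/backuppc
  # or, after removing the empty original directory:
  ln -s /mnt/sdb3/BackupPC_TOPDIR /var/lib/backuppc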

Personally I don't see how bind mounts are any more complex than symlinks;
my impression is that as developers are able to count on modern systems'
handling bind mounts, symlinks are getting deprecated. They also seem less
vulnerable somehow, I've heard of some software/systems being unable to
traverse them - in fact I've read they're pretty much transparent right down
to the kernel level.

If you're saying symlinks are to be preferred over bind mounts then I'd be
happy to switch, but would like to know why, and perhaps the FAQ ref'd above
should include those points. . .

In the meantime, my reinstall without ANY filesystem shenanigans didn't pass
the hardlinks test on startup. Any ideas as to what could be the cause of
that?

Maybe because backuppc user already exists? Should I be logged in as her
when re-installing?
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
Sorry, editing mangled the referents of my pronouns:

I've heard of some software/systems being unable to traverse them [SYMLINKS]
- in fact I've read they're [BIND MOUNTS] pretty much transparent right down
to the kernel level.
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free Love Thy Logs t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
On Sat, Sep 3, 2011 at 3:50 AM, Les Mikesell lesmikes...@gmail.com wrote:

 The ubuntu package should create a backuppc user and that should be
 the owner of everything under TOPDIR.  I think you need to diagnose
 why the link fails but trying the same operation from the shell (su -s
 /bin/bash backuppc if it doesn't have a shell configure for login).


OK, I wiped and re-installed again.

The four empty folders under TOPDIR are all root root, I chown'd to
backuppc and chgrp'd to www-data and the init start worked fine.

Now I'm just guessing that if I need to reset permissions in the future I
should do the same with -R - is that true for conf and log as well? I
haven't found anything on what these permissions should be and would
appreciate any pointers if the knowledge exists out there. . .

In the past I was running the init start/stop via sudo - would that mess
things up? I was chastised about using sudo when shell'd in as backuppc
before, so I've been mostly working from the sysadmin account created when
installing the OS, since backuppc's rights are so restricted. . .

Thanks again for your ongoing help and patience with my learning curve. . .


Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
On Sat, Sep 3, 2011 at 4:38 AM, Les Mikesell lesmikes...@gmail.com wrote:

 In general, backuppc needs rw permission on everything, and apache
 (www-data on debian/ubuntu) needs read access to some of it.


Sorry to need such hand-holding, but if I'm above my TOPDIR and execute

chown -R backuppc TOPDIR
chgrp -R www-data TOPDIR
chmod 644 TOPDIR

should that be OK?
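
(Thinking about it some more - a blanket chmod 644 would strip the execute bit
that directories need for traversal, so maybe something more like this? Just a
sketch, assuming TOPDIR is /var/lib/backuppc and apache runs as www-data:)

    chown -R backuppc:www-data /var/lib/backuppc
    find /var/lib/backuppc -type d -exec chmod 750 {} \;   # dirs keep the x bit
    find /var/lib/backuppc -type f -exec chmod 640 {} \;   # plain files don't need it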

Anyone with suggestions for improvement would be most welcome

And what if my CONF and LOG folders are included there as well?

The above is what I was doing when resetting log and pc during my testing.

 Should I recreate the empty folders and then reset permissions, or just
let BackupPC do it - which I assume it would do correctly?

  Does the message "link host-name" in my log when running _dump -v
manually indicate a hardlinking problem kicking in **after** the pc
filesystem's already been created?

In the past I was running the init start/stop via sudo - would that have
messed things up?

Should I run the start/stop with su backuppc instead?



I've now tried to uninstall/re-install twice without success.
Apparently the failure of the hardlink test during the postinstall also
prevented the creation of the default config.pl and hosts files.

This time I'm planning to delete the backuppc user as well as:

/var/lib/backuppc
/etc/backuppc
/var/log/backuppc
/usr/share/backuppc


I'm not going to do anything different with the filesystem until I get the
default install working first, but I'd really rather not have to re-install
the server platform OS itself. . .

is there **anything** else I should do to ensure a clean system state
before re-installing BackupPC?
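
In case it helps anyone else following along, the rough sequence I'm planning
to try is below (just a sketch - purging rather than merely removing the
package is my guess at what makes dpkg forget the old conffiles):

    /etc/init.d/backuppc stop
    apt-get purge backuppc     # purge, not remove, so old conffiles are forgotten
    deluser backuppc           # drop the backuppc user the package created
    rm -rf /var/lib/backuppc /etc/backuppc /var/log/backuppc /usr/share/backuppc
    apt-get install backuppc   # fresh install on a (hopefully) virgin system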


Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
On Sat, Sep 3, 2011 at 4:46 AM, Les Mikesell lesmikes...@gmail.com wrote:

 On Fri, Sep 2, 2011 at 4:38 PM,  hans...@gmail.com wrote:
 
  Or is the message link host-name in my log when running _dump
  -v manually indicate a hardlinkng problem kicking in **after** the pc
  filesystem's already been created?

 I think the fact that the link step isn't completing is your real
 problem, but I still don't know why.  If you get that far it should
 work or tell you why in the logs.


Sorry our posts crossed. By link step do you mean another stage that was
supposed to happen after:

 DeltaGet phase 0 and phase 1.

?

I tried setting the verbosity level to 99, then 3 and 2 and never got any
wheat I could understand out of that chaff.

The "link host-name" message only came from the manual _dump -v run; it never
(as far as I can recall) showed up in the backups initiated via the web
interface.

By the way, I just remembered I disabled nightly by renaming it .disabled,
and set backups to not run automatically while I was doing my testing (the
docs didn't mention setting it to zero - could that be the problem?). What's
the official way to accomplish those two things?


Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk
On Sat, Sep 3, 2011 at 5:03 AM, hans...@gmail.com wrote:


 This time I'm planning to delete the backuppc user


Is anything more than removing the line from /etc/passwd required for this?


 as well as:

 /var/lib/backuppc
 /etc/backuppc
 /var/log/backuppc
 /usr/share/backuppc


 I'm not going to do anything different with the filesystem until I get the
 default install working first, but I'd really rather not have to re-install
 the server platform OS itself. . .

 is there **anything** else I should do to ensure a clean system state
 before re-installing BackupPC?



Re: [BackupPC-users] first full never completes

2011-09-02 Thread hansbkk

 On Sat, Sep 3, 2011 at 5:03 AM, hans...@gmail.com wrote:


 This time I'm planning to delete the backuppc user


 Is anything more than removing the line from /etc/passwd required for this?


 as well as:

 /var/lib/backuppc
 /etc/backuppc
 /var/log/backuppc
 /usr/share/backuppc


 I'm not going to do anything different with the filesystem until I get the
 default install working first, but I'd really rather not have to re-install
 the server platform OS itself. . .

 is there **anything** else I should do to ensure a clean system state
 before re-installing BackupPC?



Somehow the package installer sees that there **used to be** a config.pl in
the /etc/backuppc folder (which didn't exist when it started) and therefore
doesn't install it, nor the hosts file.

Bringing them in from my zip'd archive AND doing the chown/chgrp -R on
TOPDIR allows the init start to work, but the web admin interface won't
load, even though the backuppc conf is in place under apache's conf folder.

Before I wipe the whole drive and start over, is there a relatively recent
howto on manually installing on Debian/Ubuntu? Or could someone help me
manually eliminate *ALL* traces of my previous BPC install so the
package-based routines will complete 100% as designed for a virgin system?

In the meantime it's 5:30am here and my two toddlers will be waking me up in
an hour or so, so I'm grabbing some shuteye.

I expect perfect answers to all my questions here when I return!

(just kidding 8-)

Thanks for your help so far Les, and in advance to anyone else willing to
chime in. . .


Re: [BackupPC-users] Block-level rsync-like hashing dd?

2011-04-11 Thread hansbkk
On Mon, Apr 11, 2011 at 12:43 PM, Saturn2888
backuppc-fo...@backupcentral.com wrote:
 But none of that solves the issue we're having now. How in the world do we 
 backup the current pool of data?

Sorry I haven't gone back to read the whole thread - have you tried
and failed already with rsync?

If you have too many hardlinks for rsync, you can use the script tool
made specifically for doing this (backupPC-tar? - sorry I don't have the
name handy from here), or a more recent enhanced similar tool that has
been mentioned/discussed here several times - search the archives.

Or clone below the filesystem level - if you're unable/unwilling to
take the BPC server offline, then DRBD to a second host (big project!)
or mdadm raid, possibly dd of an LVM snapshot.

If you are able to take it offline, then a partition cloning tool as
just discussed here.

By the way, you keep breaking the thread by not including the proper
subject header (with the Re). Maybe it would be better if you actually
subscribed to the list rather than using Backup Central?



Re: [BackupPC-users] Block-level rsync-like hashing dd?

2011-04-10 Thread hansbkk
On Sun, Apr 10, 2011 at 12:16 PM, Les Mikesell lesmikes...@gmail.com wrote:
 I've never heard of raid sync affecting the original disk(s).  I've been doing
 it for years, first with a set of firewire external drives (which also had USB
 but it was slower), then the sata bays.  There might be problems in adding 
 more
 members than originally created in the set, though.  In your situation I would
snip

FYI for all - not doubting Les' good experience doing this (using
mdraid mirroring as a user-land tool to mirror a partition to an
external drive for offsite backup rotation):

I had a pretty extensive discussion with the Linux-RAID list on
this, and the general conclusion was that the way mdraid does the
mirroring would just add unnecessary cruft to the resulting filesystem
and make recovery more difficult.

The bottom line from them was that rather than using RAID to do the
mirroring to a removable drive, it's better to just use a tool like dd.
When I expressed that I wanted a bit more assurance of a bit-perfect
mirror being made, I was directed to these enhanced versions:

http://dc3dd.sourceforge.net/
http://www.forensicswiki.org/wiki/Dc3dd

Of course there are many COTS partition cloning tools out there as
well if that's a better fit for a given situation.
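
For the plain-dd route, the kind of thing I mean looks roughly like this
(sketch only - the device names are examples, so triple-check them before
running anything):

    # clone the BPC partition/LV to the rotated external drive
    dd if=/dev/vg0/backuppc of=/dev/sdX1 bs=4M conv=fsync

    # verify - compare checksums over the length of the source
    SRC_BYTES=$(blockdev --getsize64 /dev/vg0/backuppc)
    sha256sum /dev/vg0/backuppc
    head -c "$SRC_BYTES" /dev/sdX1 | sha256sum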



Re: [BackupPC-users] Restrict machine to do full backups Friday night and incremental on weekdays?

2011-03-31 Thread hansbkk
Forgive me if I'm out of line, but wanted to let you know that your
HTML email is very hard to read, IMO better to just use plain text in
open lists. . .



Re: [BackupPC-users] mysql addon or plugin?

2011-03-26 Thread hansbkk
mysqldump is a good tool - there are others - but usually the whole
process is scripted to fit the local environment.

Just like many mail servers, databases should be quiescent (the server
stopped) while the dump takes place to ensure consistency. If you want
to really minimize the downtime, then using LVM snapshotting is a good
option.

1. shut down the service
2. take the snapshot
3. restart the service
4. run the dump from the snapshot
5. delete the snapshot
6. let BPC capture the dump
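
As a very rough sketch of what that might look like scripted (the names - vg0,
mysql, /mnt/mysqlsnap, /srv/dumps - are just examples, and note this grabs the
raw datadir from the snapshot rather than a SQL-level dump):

    #!/bin/sh
    set -e
    /etc/init.d/mysql stop                                   # 1. quiesce the service
    lvcreate --snapshot --size 2G --name mysqlsnap /dev/vg0/mysql   # 2. snapshot
    /etc/init.d/mysql start                                  # 3. bring it right back up
    mkdir -p /mnt/mysqlsnap
    mount -o ro /dev/vg0/mysqlsnap /mnt/mysqlsnap
    tar -czf /srv/dumps/mysql-datadir.tar.gz -C /mnt/mysqlsnap .   # 4. dump from the snapshot
    umount /mnt/mysqlsnap
    lvremove -f /dev/vg0/mysqlsnap                           # 5. delete the snapshot
    # 6. BPC then picks up /srv/dumps on its normal schedule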

IMO robustness comes from regular and thorough monitoring and
testing, ultimately by human sysadmins. Automation tech helps make the
tasks easier, but is not a substitute for human attention and test
recoveries from realistic simulations of possible disasters - even
monitoring and testing systems can break down.



On Sat, Mar 26, 2011 at 12:38 PM, Lord Sporkton lordspork...@gmail.com wrote:
 I have seen some examples of using a local backup script to dumpmysql
 to a local dir, then backup that local dir via BPC, but i was hoping
 for something a little more robust. I was looking into replacing the
 tar commands that pipe back to ssh with a mysqldump command so it
 pipes one big sqldump file into BPC, and ive been looking at the local
 script method but neither seems very robust. I was hoping maybe there
 was like a plugin or something that would create an extra backup
 method or something like that. preferrably i was hoping to be able to
 backup individual databases as individual files.

 Is there any current methods for this ?

 Thank you
 Lawrence




Re: [BackupPC-users] mysql addon or plugin?

2011-03-26 Thread hansbkk
I may be wrong here - more reading than experience in this particular
area - but my understanding is that it's still recommended to make
sure nothing's writing to the database while the dump is taken.

For a heavily used transactional system with complex relational
structures, IMO it's possible that otherwise there would be data
inconsistencies - e.g. this table's been updated but that one's update
hasn't yet completed.

Probably less important for mailstores.

On Sat, Mar 26, 2011 at 6:57 PM, Doug Lytle supp...@drdos.info wrote:
 hans...@gmail.com wrote:
 If you want
 to really minimize the downtime, then using LVM snapshotting is a good
 option.

 1. shut down the service
 2 take the snapshot


 Just an addition to this,

 If using the XFS file system, snapshotting will cause LVM to perform a
 file system freeze just before the snapshot.  No shutting down the
 service is needed.

 Doug



Re: [BackupPC-users] backend to nfs nas share

2011-03-23 Thread hansbkk
Best to use LVM (over RAID if you like) for future expansion flexibility.

I happen to use OpenFiler as a NAS host for the same reason - it can act
as an iSCSI host as well as serving all the mainstream filesharing protocols,
and it's relatively easy to set up compared to building your own from a
generic distro.

Note it is not recommended to run BPC on the OF host itself, best to
treat OF as a black box appliance.

On Tue, Mar 22, 2011 at 7:52 PM, Jeffrey J. Kosowsky
backu...@kosowsky.org wrote:
 Lord Sporkton wrote at about 02:56:59 -0700 on Tuesday, March 22, 2011:
   I was hoping to setup a backend storage system for this to allow me to
   use a much larger file system than is typically available locally. I
   was looking at using an NFS share and connecting backuppc to the NFS
   share. As I understand it NFS has no real limit on the filesystem size
   other than that of the hosting machines filesystem limits, but does
   backuppc have any filesystem requirements or filesystem size limits? I
   noticed the inode issue mentioned in documentation, im looking into
   that one currently for NFS.
  
   I saw the limits about individual file size in the faq but im more
   concerned with the overall filesystem backuppc is running on top of.
  

 BackupPC itself has no such limits. The limits are based on the
 filesystem where TopDir is located.

 Also, btw, the issue is not NFS but the filesystem that NFS is
 mounting from the remote machine. So you need to look on the remote
 machine filesystem partition and see how much space and inodes are
 available.





Re: [BackupPC-users] File size disappears after moving TopDir

2011-03-19 Thread hansbkk
On Sun, Mar 20, 2011 at 4:27 AM, Mark Edwards m...@antsclimbtree.com wrote:
 I moved my TopDir from a USB hard drive mounted at /var/lib/backuppc, to an
 NFS share mounted at /mnt/nfs/backuppc.  The share is mounted using autofs
 rather than fstab.

Please confirm so it's clear to us noobs: the clean solution would
have been to just mount the NFS share at the old standard location:
/var/lib/backuppc
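
i.e. something like this in /etc/fstab (the server name and export path are
just examples):

    nas01:/export/backuppc  /var/lib/backuppc  nfs  rw,hard,intr  0  0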

I can't see messing with source myself. . .



Re: [BackupPC-users] BackupPC Basics

2011-03-18 Thread hansbkk
On Thu, Mar 17, 2011 at 10:12 PM, Joe Konecny jkone...@rmtohio.com wrote:
 I'm not claiming I can explain the solution to his problem but this is a good 
 example of why
 microsoft is so successful.  MS has MVP's (most valuable professional's) (and 
 they aren't
 necessarily on ms's payroll) that hang out in forums and relentless answer 
 questions that
 have been asked thousands of times.  With the open source world... you are 
 usually responded
 to with rtfm or don't let the door hit your ass on the way out.


A very helpful classic howto:
http://www.catb.org/~esr/faqs/smart-questions.html

also explains why open-source geeks can give the impression you've formed.

People answering questions that have already been answered
thousands of times are doing nothing but wasting time and bandwidth -
not just theirs but that of everyone else who participates in that
list/forum.

When my five-year-old pulls a tantrum about changing the DVD right
now, sure it's faster to just do it, but is it the right thing to do?

Of course, reasonable questions should get answered reasonably
promptly, just as every project once past a certain size of user base
should have decent documentation. The question is who should provide
these services? IMO the answer is those users of the free software who
don't have the skills to actually help develop it. Free as in freedom
but not as in beer - if every BackupPC user that didn't contribute
otherwise put in an hour or two per month on a docs wiki, the quality
of the project (and therefore its popularity) would improve enormously. And
the developers wouldn't be pestered with as many questions.
end rant



Re: [BackupPC-users] BackupPC Basics

2011-03-18 Thread hansbkk
On Fri, Mar 18, 2011 at 8:22 PM, Joe Konecny jkone...@rmtohio.com wrote:
 On 3/18/2011 5:00 AM, hans...@gmail.com wrote:
 This isn't valid.  With today's pipes bandwidth isn't a concern as
 far as forums go.  I realize this is a mailing list and not a

I meant the mental bandwidth of the participants.

 forum but you can probably read the archives if it bothers you.  One
 could argue that the more (quality) traffic we have the better.  I stand
 by my assertion if you want a product to succeed than it is better
 to relentlessly answer questions.  If I had a product I was trying
 to sell, I assure you that is what I would do.  We don't want to
 chase away users.  Then development stops.

Key word is quality - which IMO means interesting discussions that
further the development efforts, as per the smart questions howto.
We aren't selling anything; open-source software isn't a product,
it's a project driven by its users - success is simply satisfying
their needs. If there aren't enough sufficiently skilled and motivated
users around to keep a project going, then it dies and so it should;
non-contributing users are not a support but a drain
on the community's resources if they don't follow basic netiquette.

If you want to become a user you need to pay the price - in the FOSS
world, that usually means higher knowledge/experience prerequisites
and a steeper learning curve, not to mention the implied obligation to
contribute back to the community.

No one is obliged to answer basic questions from people who haven't read
the docs or searched the FAQs. However, if you
would like to spend your time doing so, no one will object. Just don't
talk as if anything you get for free from the generous developers and
user community is yours by right; everything you get here is a gift
for which the only appropriate response is gratitude.

And of course all of this is just IMHO, feel free to disagree, that's
what makes the world so interesting.



Re: [BackupPC-users] BackupPC Basics

2011-03-18 Thread hansbkk
On Fri, Mar 18, 2011 at 11:05 PM, Timothy Murphy gayle...@eircom.net wrote:
 Many were basically useless, but I have to believe the post-XP ones
 may be a bit better.

 I actually said that there did not seem to be a standard Microsoft
 backup program that did incremental backups.
 I notice that nobody has actually told me there is one.

And what's your point? It's not appropriate for you to expect to get
an answer to that question here. Please ask it somewhere else. In my
experience, no one with decent computer knowledge would have
bothered wasting their time even looking at a Microsoft-authored
backup program, when the third-party alternatives are so widely
acknowledged to be superior. The exception would be people whose
career is wedded to Redmond and are seeking to establish themselves as
MS experts in MS-centric work environments. But they're not hanging
out here. . .

Since you've got such strong reasons to stay with MS-only solutions,
why haven't you spent a bit of time and energy investigating and
experimenting with the backup programs that come with your MS OS, and
answer your own question?

 It's like saying you would only put a Ford stereo in your car - or
 rather you want to buy a Michelin car so it'll be compatible with your
 tires, or - OK I'll shut up, be speechless. . .

 That seems a bizarre analogy.
 Most people buy a car with tyres already on it, I think.

My point is that I consider the OS and its built-in tools to be so
fundamental, almost a background part of my total computer
environment. MS itself has proven again and again over the decades to
barely be competent even in its core business (OSs), and flagrantly
inept in other areas its tried to compete in. Note I'm talking about
their technical competence, I have nothing but awe for their business
acumen, at least in past decades. Therefore making decisions based on
*that* brand is like (another out there analogy!) selecting your
stockbroker based on the fact that he works for the bank where you
opened your first savings account as a child.

 I do actually install the tyres recommended by Mitsubishi
 in my Mitsubishi car.
 Don't people usually do that?

No, most people realize that the car manufacturer's main motivation
behind its recommendations is their profitability, not the welfare of
its customers. Either the named tire manufacturer is financially tied
in with Mitsubishi, or they may even have paid a fee for the
recommendation (product placement). If tires were important to me, I'd
pay attention to more objective, knowledgeable sources, review
research reports etc.



Re: [BackupPC-users] Newbie setup questions

2011-03-13 Thread hansbkk
On Mon, Mar 14, 2011 at 12:16 AM, César Kawar kawa...@gmail.com wrote:
 Yes I'm sure. Without -H option it actually was impossible to sync the pools. 
 It worked without -H but didn't fit on the target USB drive.

Just to toss this out there as a possible explanation - if I've got
this wrong someone please jump in and correct me. The difficult
filesystem is the TOPDIR one. I believe the pools themselves have
never been a problem.



Re: [BackupPC-users] Newbie setup questions

2011-03-11 Thread hansbkk
I'm not qualified to disagree, Cesar, but my understanding is that the issue:

A - Has nothing to do with the size in TB of the filesystem, but with the
number of hardlinks - therefore the number of source files, the
frequency of backups and the number of clients.

B - Wasn't/isn't related to memory leaks, but to the sheer size of the
array that needs to be built in RAM to track all the files being
processed.

There have been extensive discussions on this quite recently,
including how recent versions of rsync improve the memory leaks
without fixing the core issue, which remains a non-trivial one once the
number of links reaches a critical point relative to the RAM resources
available. Your situation hasn't hit that point yet, and perhaps never
will.

I'm simply pointing out a potential gotcha to watch out for, so the OP
can plan and design with that in mind, and giving alternative
workarounds for if/when the issue does arise, or to be used to prevent
that possibility.

On Fri, Mar 11, 2011 at 4:08 PM, Cesar Kawar kawa...@gmail.com wrote:
 On Fri, Mar 11, 2011 at 10:56 AM, Rob Poe r...@poeweb.com wrote:
 I'm using RSYNC to do backups of 2 BPC servers.  It works swimmingly, you 
 plug the USB drive into the BPC server, it auto-mounts, emails that it's 
 starting, does an RSYNC dump (with delete), flushes the buffers, dismounts 
 and emails.

 Sounds great Rob, would you be willing to post the script?

 Rsync'ing is all fine and good until your hardlinked filesystem (I
 don't know the proper term for it, as opposed to the pool) gets too
 big. It's a RAM issue, and an unavoidable consequence of rsync's
 architecture - I'm not faulting rsync mind you the kind of filesystem
 that BPC (and rdiff/rsnapshot etc) build over time is a pretty extreme
 outlier case.

 That is not a problem anymore with the latest versions of rsync. I've been using 
 this technique for a year now with a cpool of almost 1Tb with no problems.

 Don't expect it to run on a celeron machine as it requires big processors. 
 Rsyncing 1Tb of compressed hardlinked data to a new filesystem is a very cpu 
 intensive task. But it does not leak memory as before. You can rely on rsync 
 to maintain a usb disk for off-site backups.


 Therefore an alternative solution when that time comes is adding RAM,
 and yet another is periodically switching to a new target filesystem
 and deleting the old one after the new one's had a chance to build up
 its history. In fact this last is the easiest way of all to migrate
 over for those that didn't design their disk infrastructure to handle
 future growth (e.g. expandable filesystems built on LVM.)


 On 3/10/2011 8:35 PM, hans...@gmail.com wrote:
 On Fri, Mar 11, 2011 at 3:46 AM, Michael Connermdc1...@gmail.com  wrote:
 That is good to know. Actually things are a little better than I thought, 
 the spare machine is Dell Dimension 2400 with a  Pentium 4, max 2 gb 
 memory. So I guess I could slap a new bigger drive into it and use it. My 
 basic plan is to get backups going to one machine and then dupe those to 
 an NAS elsewhere in the building. While we have a small staff, our 
 building is 62,000 sq ft with three floors, so I can get them physically 
 separated even if not really off site. For the web server, we have a two 
 drive raid set up with two spare drive bays. Besides backing up with BPC, 
 I would also dupe the drive on a schedule and take off site.

 To expand on Jeffrey's comment below - the idea of duping your
 backups is fraught with issues when the BPC filesystem gets past a
 certain size.

 To handle the creation of a redundant backup, I would advise one of
 the following:

 A - Periodically use BPC to run a full backup set to a different
 target filesystem - this is simplest and quite likely the fastest, and
 only becomes an issue if you have a limited time window - in which
 case LVM snapshotting can help as Jeffrey mentioned.

 B - use a block-level cloning process (like DD or its derivatives, or
 Ghost-like COTS programs if that's more comfortable for you, to do
 partition copying to a removable drive. Some use temporary RAID1
 mirrors, but I don't recommend it.

 C - a script included with BPC called BackupPC_tarPCCopy, designed to
 do exactly this process.

 Where you run into problems is trying to copy the hardlinked BPC
 filesystem over at the **file level** - even rsync will choke when
 you've got millions and millions of hardlinks to the same inodes to
 keep track of.

 BTW even if you don't do snapshots, you should use LVM from the
 beginning as the basis for your new BPC target filesystem, gives you
 future flexibility to avoid having to do the above any more than
 necessary.

 Hope this helps. . .

 On Fri, Mar 11, 2011 at 5:04 AM, Jeffrey J. Kosowsky
 backu...@kosowsky.org  wrote:
 Keep in mind the point that Les made regarding backing up BackupPC
 archives. Due to the hard link structure, the fastest way to back up
 any reasonably large backup is at the partition level. This also makes
 it 

Re: [BackupPC-users] Newbie setup questions

2011-03-11 Thread hansbkk
On Fri, Mar 11, 2011 at 9:05 PM, Les Mikesell lesmikes...@gmail.com wrote:
 It is the number of files with more than one link that matter, not so much the
 total size.  But the newer rsync that doesn't need the whole file tree loaded 
 at
 once besides the link table and lots of RAM may permit it to scale up more.

OK, so TOPDIR is the proper name for the hardlinked filesystem (as
opposed to the pool), and it's usually /var/lib/backuppc/, correct?

And it's this filesystem that is the one that can be a problem, correct?

It would be great if we could have a standard set of metrics to be
able to compare our filesystems, since what might be a huge number
to one person is likely to be a tiny fraction of someone else's.

So looking for advice from one more Linux-knowledgeable than I on what
stats to collect and how to best collect them.

  For example df -i /var/lib/backuppc/ will show the total number of
inodes, correct?

  And find /var/lib/backuppc/ -type f -links +1 would show the total
number of files that have more than one hard link, correct?

  Would these two metrics be sufficient to allow for objective
comparisons of the filesystems?

Other data points:

  Exact version of rsync - if between client and server then on both ends.

  Total RAM on the machine, and ideally a comparison of the amount in
use before the RSYNC process starts running, and then say ten minutes
after it's started.

Anything else?



Re: [BackupPC-users] Newbie setup questions

2011-03-11 Thread hansbkk
On Sat, Mar 12, 2011 at 5:07 AM, Jeffrey J. Kosowsky
backu...@kosowsky.org wrote:
 In particular with regard to metrics you seek, I don't know whether it is 
 better/worse to have one file with 2N links or N files with 2 links. Your 
 metrics don't distinguish that and depending on how the list of hard links is 
 constructed that may or may not be a big difference. Specifically, in the 1st 
 case, does the link list still have O(N) entries or just O(1) entries -- huge 
 difference potentially.

 More generally, I'm really wondering whether perhaps rsync could be 
 patched/modified to work better in edge cases like


You guys are definitely working at a deeper level than me 8-)

I'm not seeking a formula that will be able to predict how well rsync
will handle a given TOPDIR, but just a set of data points we can
collect from BPC users when discussing this issue.

So when a given BPC user says rsync is working fine for me to clone
my whole filesystem, and I've got a 'really big' TOPDIR, then I'm
proposing we have a standard set of questions to allow us to get the
relevant facts, so we can discuss the issues more meaningfully.

Here's what I've got so far (assuming TOPDIR is in the standard spot):

1. Exactly what version of rsync - singular if local copy, on both
ends if client/server.
2. Total number of inodes - df -i /var/lib/backuppc/
3. Total number of files that have more than one hard link - find
/var/lib/backuppc/ -type f -links +1 | wc -l
4. Total physical RAM in the machine
5. memfree stats from running free -m, before running rsync and
say 10 minutes into running the job.
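
If it helps, collecting the above could be as simple as something like this
(a sketch assuming the standard TopDir; the find|wc step can take a long time
on a big pool):

    rsync --version | head -1
    df -i /var/lib/backuppc/
    find /var/lib/backuppc/ -type f -links +1 | wc -l   # count rather than list
    free -m    # run once before starting rsync, and again ~10 minutes in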

Perhaps once we've collected enough of the above profiles, this may
well lead in the future to more or less rough guidelines/predictions
regarding rsync, e.g. how much RAM you'd need. The idea of improving
or assisting rsync to handle these edge case needs is far beyond my
own scope, and I'd personally just use one of the workarounds I
mentioned previously if it looked like rsync was struggling.

So what I'm looking for here (not only from you but from anyone who can help) is:

A. confirmation that each of the example commands above is a good one
to collect the given data point, or suggestions for better ones, and
B. confirmation/suggestion on the data points themselves - are any
completely irrelevant, or should any be added?

For example, I believe total disk space used by TOPDIR is basically
irrelevant, correct?



Re: [BackupPC-users] Newbie setup questions

2011-03-10 Thread hansbkk
On Thu, Mar 10, 2011 at 9:59 PM, Michael Conner mdc1...@gmail.com wrote:
 and a NAS (and may be adding another). Note that my Linux knowledge is still 
 limited but growing as I look at more open source stuff.

So here's another reason to set up that second NAS.

What I've done is set up a separate (bigger) NAS that also acts as my
backup server. It holds not only the backup sets, but also files
that are both very large and not important enough to back up - easily
retrieved or recreated media filesets, cold-metal restore clone
images, ISO and thin client boot images and virtual machine images,
temporary scratch space for LVM snapshots, testing and in-process file
conversions, exports from version control systems, sync targets for
very-frequently backed up file sets (via Time Machine, Rsnapshot,
Unison). Etc.

So all important data is either on individual hosts or the central
NAS, and only data that is either unimportant or already being
backed up elsewhere is stored on the big kahuna NAS, which is also the
backup server.

Handled yet again separately is the offsite rotation of especially
important data sets to protect against theft and the various possible
site-level disasters.



Re: [BackupPC-users] Newbie setup questions

2011-03-10 Thread hansbkk
On Fri, Mar 11, 2011 at 3:46 AM, Michael Conner mdc1...@gmail.com wrote:
 That is good to know. Actually things are a little better than I thought, the 
 spare machine is Dell Dimension 2400 with a  Pentium 4, max 2 gb memory. So I 
 guess I could slap a new bigger drive into it and use it. My basic plan is to 
 get backups going to one machine and then dupe those to an NAS elsewhere in 
 the building. While we have a small staff, our building is 62,000 sq ft with 
 three floors, so I can get them physically separated even if not really off 
 site. For the web server, we have a two drive raid set up with two spare 
 drive bays. Besides backing up with BPC, I would also dupe the drive on a 
 schedule and take off site.


To expand on Jeffrey's comment below - the idea of duping your
backups is fraught with issues when the BPC filesystem gets past a
certain size.

To handle the creation of a redundant backup, I would advise one of
the following:

A - Periodically use BPC to run a full backup set to a different
target filesystem - this is simplest and quite likely the fastest, and
only becomes an issue if you have a limited time window - in which
case LVM snapshotting can help as Jeffrey mentioned.

B - use a block-level cloning process (like dd or its derivatives, or
Ghost-like COTS programs if that's more comfortable for you) to do
partition copying to a removable drive. Some use temporary RAID1
mirrors, but I don't recommend it.

C - a script included with BPC called BackupPC_tarPCCopy, designed to
do exactly this process.
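
(If I remember right, the documented pattern for C is roughly: copy the
pool/cpool over with rsync or tar first, then use the script just for the
hardlink-heavy pc tree - something like the sketch below. The paths are from a
Debian-style install and may well differ on yours:)

    cd /mnt/newdisk
    rsync -aH /var/lib/backuppc/pool /var/lib/backuppc/cpool .
    mkdir pc && cd pc
    /usr/share/backuppc/bin/BackupPC_tarPCCopy /var/lib/backuppc/pc | tar xvPf -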

Where you run into problems is trying to copy the hardlinked BPC
filesystem over at the **file level** - even rsync will choke when
you've got millions and millions of hardlinks to the same inodes to
keep track of.

BTW even if you don't do snapshots, you should use LVM from the
beginning as the basis for your new BPC target filesystem, gives you
future flexibility to avoid having to do the above any more than
necessary.
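
(Roughly what I mean, as a sketch - the disk names and sizes are just
placeholders:)

    pvcreate /dev/sdb1
    vgcreate vg_backup /dev/sdb1
    lvcreate -n backuppc -L 500G vg_backup
    mkfs.ext4 /dev/vg_backup/backuppc
    mount /dev/vg_backup/backuppc /var/lib/backuppc

    # later, when you add a disk and need more room:
    pvcreate /dev/sdc1 && vgextend vg_backup /dev/sdc1
    lvextend -L +500G /dev/vg_backup/backuppc
    resize2fs /dev/vg_backup/backuppc    # ext4 can be grown while mounted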

Hope this helps. . .

On Fri, Mar 11, 2011 at 5:04 AM, Jeffrey J. Kosowsky
backu...@kosowsky.org wrote:
 Keep in mind the point that Les made regarding backing up BackupPC
 archives. Due to the hard link structure, the fastest way to back up
 any reasonably large backup is at the partition level. This also makes
 it hard to enlarge your archive space should you outgrow your
 disk. One good solution is to use lvm since you can
 enlarge/expand/move partitions across multiple disks. You can also use
 lvm to create partition snapshots that can then be replicated as backups.



Re: [BackupPC-users] Newbie setup questions

2011-03-10 Thread hansbkk
On Fri, Mar 11, 2011 at 10:33 AM, Jeffrey J. Kosowsky
backu...@kosowsky.org wrote:

 I wrote a script BackupPC_copyPcPool that I posted to the list that should be 
 a bit more efficient  faster than BackupPC_tarPCCopy

Noted, and thanks



Re: [BackupPC-users] Newbie setup questions

2011-03-10 Thread hansbkk
On Fri, Mar 11, 2011 at 10:56 AM, Rob Poe r...@poeweb.com wrote:
 I'm using RSYNC to do backups of 2 BPC servers.  It works swimmingly, you 
 plug the USB drive into the BPC server, it auto-mounts, emails that it's 
 starting, does an RSYNC dump (with delete), flushes the buffers, dismounts 
 and emails.

Sounds great Rob, would you be willing to post the script?

Rsync'ing is all fine and good until your hardlinked filesystem (I
don't know the proper term for it, as opposed to the pool) gets too
big. It's a RAM issue, and an unavoidable consequence of rsync's
architecture - I'm not faulting rsync, mind you; the kind of filesystem
that BPC (and rdiff/rsnapshot etc.) builds over time is a pretty extreme
outlier case.

Therefore an alternative solution when that time comes is adding RAM,
and yet another is periodically switching to a new target filesystem
and deleting the old one after the new one's had a chance to build up
its history. In fact this last is the easiest way of all to migrate
over for those that didn't design their disk infrastructure to handle
future growth (e.g. expandable filesystems built on LVM.)


 On 3/10/2011 8:35 PM, hans...@gmail.com wrote:
 On Fri, Mar 11, 2011 at 3:46 AM, Michael Connermdc1...@gmail.com  wrote:
 That is good to know. Actually things are a little better than I thought, 
 the spare machine is Dell Dimension 2400 with a  Pentium 4, max 2 gb 
 memory. So I guess I could slap a new bigger drive into it and use it. My 
 basic plan is to get backups going to one machine and then dupe those to an 
 NAS elsewhere in the building. While we have a small staff, our building is 
 62,000 sq ft with three floors, so I can get them physically separated even 
 if not really off site. For the web server, we have a two drive raid set up 
 with two spare drive bays. Besides backing up with BPC, I would also dupe 
 the drive on a schedule and take off site.

 To expand on Jeffrey's comment below - the idea of duping your
 backups is fraught with issues when the BPC filesystem gets past a
 certain size.

 To handle the creation of a redundant backup, I would advise one of
 the following:

 A - Periodically use BPC to run a full backup set to a different
 target filesystem - this is simplest and quite likely the fastest, and
 only becomes an issue if you have a limited time window - in which
 case LVM snapshotting can help as Jeffrey mentioned.

 B - use a block-level cloning process (like DD or its derivatives, or
 Ghost-like COTS programs if that's more comfortable for you, to do
 partition copying to a removable drive. Some use temporary RAID1
 mirrors, but I don't recommend it.

 C - a script included with BPC called BackupPC_tarPCCopy, designed to
 do exactly this process.

 Where you run into problems is trying to copy the hardlinked BPC
 filesystem over at the **file level** - even rsync will choke when
 you've got millions and millions of hardlinks to the same inodes to
 keep track of.

 BTW even if you don't do snapshots, you should use LVM from the
 beginning as the basis for your new BPC target filesystem, gives you
 future flexibility to avoid having to do the above any more than
 necessary.

 Hope this helps. . .

 On Fri, Mar 11, 2011 at 5:04 AM, Jeffrey J. Kosowsky
 backu...@kosowsky.org  wrote:
 Keep in mind the point that Les made regarding backing up BackupPC
 archives. Due to the hard link structure, the fastest way to back up
 any reasonably large backup is at the partition level. This also makes
 it hard to enlarge your archive space should you outgrow your
 disk. One good solution is to use lvm since you can
 enlarge/expand/move partitions across multiple disks. You can also use
 lvm to create partition snapshots that can then be replicated as backups.



Re: [BackupPC-users] BackupPC Basics

2011-03-08 Thread hansbkk
Of course it should be!

Every child should have enough to eat and a good education too, but
few people live in an environment where they'd have the cheek to
demand it as if it's an inherent *right* - someone's got to develop
the skills and put in the time and effort to earn the money to make it
happen.

So of course every FOSS project should have good docs - the question is:
written by whom? In most cases the answer is: by the user community.
The developer's time is better spent actually working on the code.

It would be great if the developer(s) could facilitate - set up a
wiki, put good notes in the source code, be available to review for
accuracy, answer the more technical questions etc.

But on a small un-sponsored project, the developer has no obligation
to anyone to do anything - he's been so incredibly generous to share
the code he wrote to scratch his own itch, that the least the
freeloading users can do is contribute back where they can.

If a user wants to be able to be demanding, then they should become
a donor/sponsor/paid customer of the project's development. Any
attitude of entitlement to free resources is not only selfish but
unrealistic.

All of this is just my opinion of course. . .



On Tue, Mar 8, 2011 at 8:30 PM, Timothy Murphy gayle...@eircom.net wrote:
 hans...@gmail.com wrote:

 if a user takes the
 attitude that a program should have well-written documentation
 designed for non-technical users to understand, and any program that
 doesn't is somehow deficient in his eyes, then perhaps he would be
 better served as a paying customer of a company he then has the right
 to complain to.

 I completely disagree.
 Any program offered to the public should be properly documented.
 This has nothing to do with open source.

 --
 Timothy Murphy
 e-mail: gayleard /at/ eircom.net
 tel: +353-86-2336090, +353-1-2842366
 s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland






Re: [BackupPC-users] BackupPC Basics

2011-03-07 Thread hansbkk
On Mon, Mar 7, 2011 at 10:30 PM, Cesar Kawar kawa...@gmail.com wrote:
 I'm sorry if I've been rude in my last mail.
 Again, I don't try to be rude, so if it sound a bit snarky, I'm really 
 sorry.

I for one didn't find your comment snarky at all.

Many people are so religious about open source that they interpret a
(IMO perfectly valid in certain circumstances) suggestion to use COTS
software as an insult to the recipient. Sometimes such a suggestion is
meant that way - e.g. "Maybe you should just stick with Windows" may be
intended to imply "what a wimp/idiot, isn't able/willing to learn
*nix".

But I took your suggestion as perfectly valid - if a user takes the
attitude that a program should have well-written documentation
designed for non-technical users to understand, and any program that
doesn't is somehow deficient in his eyes, then perhaps he would be
better served as a paying customer of a company he then has the right
to complain to.

My two cents to the OP - In the open-source world, if you think
something's missing don't complain - jump in there and do it yourself,
or sponsor someone else to do it.

 And, please, excuse my English, I'm from Spain.

Your English is excellent!



Re: [BackupPC-users] BackupPC Basics

2011-03-07 Thread hansbkk
On Mon, Mar 7, 2011 at 11:49 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On the other hand, if you have a specific question or run into a problem 
 while trying to follow the installation instructions I think you will find 
 that people are more than willing to help.

Absolutely - my point is the 'don't complain' part. Suggestions for
improvement are part of the development process, but IMO tone is
important.

I find it most effective to at the same time offer to help out in
whatever ways one can - updating a wiki-based doc set is a perfect
example of how a relative noob can do so.

And I feel when it comes to open source, it is important to maintain a
spirit of gratitude for the generosity of the developers and
supporting community.

So let me say it again - Thanks!



Re: [BackupPC-users] Does anybody have a live CD with backuppc installed?

2011-02-12 Thread hansbkk
Grml's persistence feature is the same as Debian LiveCDs - just create
an ext3 partition labeled live-rw and boot with the kernel option
(cheat code) of persistence, and everything's automatically
persistent. Can also use a loopback file if you don't want to dedicate
a partition (although this could just be a USB). Can have a separate
partition/USB/file for /home as well or instead, or take manual
fs-snapshots if you prefer.
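
e.g. (assuming the stick or partition is /dev/sdb1 - check with fdisk -l
first, as mkfs will destroy whatever is on it):

    mkfs.ext3 -L live-rw /dev/sdb1
    # or relabel an existing ext3 filesystem instead:
    e2label /dev/sdb1 live-rw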

It's also easy to setup GRUB2 to boot a Debian-based ISO from the HD
if you're booting it regularly, much faster, more durable and more
convenient than physical discs.

Tangential thinking out loud:

If you have a limited time window to back up a lot of clients, this
allows you to have multiple BPC hosts running concurrently without
having to dedicate hardware to the task - a well-segmented network
scheme and fast clustered SAN/NAS storage will help reduce the
bottlenecking from those factors. Or at the other extreme, have a BPC
boot disc/image on key hosts configured to automate backing up to its
own dedicated filesystem (external drive for off-site rotation) -
multi-platform Time Machine functionality.

On Sun, Feb 13, 2011 at 12:24 AM, Les Mikesell lesmikes...@gmail.com wrote:
 On 2/11/11 6:09 PM, John Rouillard wrote:
 Hi all:

 I guess the subject kind of says most of it. We keep our backuppc data
 in an external drive bay. If our server should die, we are looking at
 a mechanism to boot another system from CD and access the backup
 array.

 Before I start working on this I thought I'd ask and see if anybody
 had one they would be willing to share, or could provide tips on how
 you did it if you can't share.

 I can dual-boot my laptop into linux and connect the disk that I mirror for
 offsite storage via usb - and it really wouldn't take that long to install a
 system from scratch if I had to.  But now that you mention it, it would be 
 great
 to do it with one of those liveCD distros that let you install things
 permanently on a flash drive, especially if you are able to keep using newer
 versions to be more likely to work with current hardware without repeating a 
 lot
 of work.

 --
   Les Mikesell
    lesmikes...@gmail.com





Re: [BackupPC-users] BackupPC very slow backing up 25GB files

2011-02-02 Thread hansbkk
On Wed, Feb 2, 2011 at 10:58 PM, John Goerzen jgoer...@complete.org wrote:
 This *is* the smallest possible chunk, sadly ;-)

 You may want to consider a separate backup profile of the database dumps. So
 set up one backup for the rest of the machine; and another backup (using
 $Conf{ClientNameAlias} to point to the desired machine) just to back up the
 database. That way you can use rsync for one and tar for the other.

 An excellent idea as well.  I like that and will give it a shot.


Or even (forgive me, please, rabid BackupPC devotees) consider another tool to
handle those special-case files.

rdiff-backup is designed for exactly this type of scenario - very large files
that change only a little from one run to the next; not just database dumps,
mailstores are another classic example. It saves a huge amount of storage
space compared to programs that only store whole files: it does incremental
version linking within each file, essentially the rsync delta algorithm
applied at the on-disk storage layer.

It's easier to check it out and see for yourself than to study the theory -
rdiff-backup is included on SystemRescueCD, and it's a simple CLI program that
can just be run from a cron script.
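
Something like this is all it takes (untested sketch; the paths are just
placeholders):

#!/bin/sh
# nightly cron job covering only the big dump files
rdiff-backup /var/local/db-dumps /srv/rdiff/db-dumps      # stores reverse deltas per run
rdiff-backup --remove-older-than 8W /srv/rdiff/db-dumps   # prune increments older than 8 weeks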

This would allow you to either exclude those specific targets from
BackupPC, or include them for redundancy in the full-backup runs but
still have a fine granularity of incrementals in between.

BTW, if yours is a large database that is actively being written to while the
dumps are made, I hope you've accounted for the fact that there may be
inconsistencies due to the time taken to dump the various tables. I'd advise
either taking the database offline for the time it takes to do the dumps or,
if that's too long a window, using LVM snapshotting (or something similar) to
reduce the downtime. Same with Exchange; perhaps less critical with other
mailstores.
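
The snapshot approach is roughly this (untested sketch; the VG/LV names and
snapshot size are placeholders, and ideally you briefly quiesce/flush the
database right before taking the snapshot):

#!/bin/sh
# take a point-in-time snapshot of the LV holding the database files
lvcreate --snapshot --size 5G --name dbsnap /dev/vg0/dbdata
mount -o ro /dev/vg0/dbsnap /mnt/dbsnap
# ... dump or back up from /mnt/dbsnap while the live DB keeps running ...
umount /mnt/dbsnap
lvremove -f /dev/vg0/dbsnap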



[BackupPC-users] Basic usage questions (noob)

2010-12-20 Thread hansbkk
I have a large but still limited amount of total storage space available, and
am trying to figure out how to optimise dividing it up between the primary NAS
and the BUNAS that will be hosting BackupPC's volumes.

The least critical are of course media files, and in fact they don't even
really need to be stored on hard disk at all, since I've been archiving to DVD
as I collect/convert them. They **certainly** don't need to take up my
valuable NAS space more than once - in other words, if I do choose to store
the more valuable files there, they should not be backed up any further.

So here's my question - is it possible to set up a separate directory
media tree within the BackupPC pool's filesystem that is explicitly
intended to be exposed to regular users as a network share (could be
via samba, nfs or even offered up by a check-in/out web front end)?

The goal is to ensure that to the extent users have copies of these
files on their local filesystems, they aren't going to slow down the
backup sessions, as they'll already exist in the pool. If this isn't a
good idea, perhaps having such a folder tree off to the side from
BackupPC's pool but still in the same filesystem would work in
conjunction with periodically running one of the dedupe/hardlinking
scripts - any particular one recommended to use with BackupPC's pool?

--

I'm going to set up and train users to use a top-level "don't back up this
tree" folder on their local filesystems, where they should be placing
unimportant but humungous files like these. However, as we all know, users
will follow such a scheme inconsistently at best.

Which brings me to my next question - can anyone suggest a way to alert the
sysadmin (me) that a user did (mistakenly) have one of these ginormous files
in the filesystem being backed up? I'm guessing some sort of cron-triggered
script looking for new hardlinks to existing files within my media tree,
ideally indicating the user and the file's location in their filesystem.
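
Something along these lines is what I have in mind (completely untested
sketch; the paths are just placeholders for wherever the shared media tree and
BackupPC's pc/ tree live - they'd have to be on the same filesystem):

#!/bin/sh
MEDIA=/var/lib/backuppc/media
PC=/var/lib/backuppc/pc
# inodes of media files that have picked up extra hard links
find "$MEDIA" -type f -links +1 -printf '%i\n' | sort -u > /tmp/media-inodes
# report any file in the pc/ trees that shares one of those inodes,
# i.e. a user backed up a copy of a media file
find "$PC" -type f -printf '%i %p\n' |
  awk 'NR==FNR { want[$1]=1; next } ($1 in want) { print $2 }' /tmp/media-inodes -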

Pointers to implementation examples of anything similar to this would
be great, or otherwise perhaps some hints to get me started?

Thanks in advance. . .

PS I never got a direct answer to my question about backing up/transferring
BackupPC's filesystem (but thanks to Pavel for mentioning his RAID method)

=
If no one's worked with FSArchiver, then how about feedback on what
y'all would choose between my current top two choices?

A - rsync the pool over, then use BackupPC_tarPCCopy to dump the hardlinked
pc/ directory structure to a temp location and unpack it on the target (rough
sketch below). I need to work on my scripting skills anyway 8-(

B - full partition clone via something like dc3dd or Acronis True Image
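
For option A, I'm picturing something like this (untested; adapted from my
reading of the BackupPC_tarPCCopy docs - the paths are Debian-ish defaults and
just placeholders, and it should all run as the backuppc user):

#!/bin/sh
# copy the pools (no -H needed: the pool files themselves carry the data),
# plus conf/, log/ etc. as ordinary files
rsync -av /var/lib/backuppc/cpool/ /mnt/target/cpool/
rsync -av /var/lib/backuppc/pool/  /mnt/target/pool/
# then recreate the pc/ tree as hard links into the copied pool
mkdir -p /mnt/target/pc
cd /mnt/target/pc
/usr/share/backuppc/bin/BackupPC_tarPCCopy /var/lib/backuppc/pc | tar xvPf -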


Feature request: just MHO, but there really should be an option (both
Web and CLI) to just copy the whole cahuna over, as long as there's an
appropriate target device mounted. I can't imagine too many people are
dumping this stuff to tape??



Re: [BackupPC-users] Need help with automating Windows client deployments handling the file lock backup problem with open files

2010-12-20 Thread hansbkk
Nothing will properly handle backing up files currently being written
to during the backup session without inserting itself somewhere
between the OS and the disk/controller hardware.

COTS hot-imaging programs (recent versions of Symantec's Ghost and BackupExec
System Recovery, for example) do this pretty reliably now, but they create
large monolithic disk image files that don't take advantage of BackupPC's
magical file deduping.

I handle this by making sure that always-on apps like Outlook (and in your
case SQL servers) store their files on the OS/apps partition - and that
partition gets backed up separately from the regular data files, via a CD or
PXE boot that allows a proper cold imaging backup. With decent
network bandwidth and good automation, most clients can get backed up
in a 20-30 minute session - I do this weekly, and each user knows
which day of the week they're supposed to do it - most set the session
running and get their coffee first thing in the morning.

An alternative would be to have such apps store their data on filesystems with
snapshot capabilities, e.g. LVM volumes on a Linux server.

I realize that none of these approaches are easy or inexpensive, but
AFAIK that's the nature of the beast.

Whatever path you take, make sure to test it thoroughly with realistic
scenarios.


On Tue, Dec 21, 2010 at 2:59 AM, Ryan Blake rbla...@hotmail.com wrote:
 That sounds simple enough and something I found initially when doing my
 research.  However, I do not believe this also can handle open files, such
 as Outlook's .pst files or other documents open by the user, such as system
 files.  I want to be able to back up all of these files without the end-user
 having to actually do anything on their part and being able to remotely
 install the software.  End result: I want to be able to restore the entire
 OS just as it was the moment it was backed up.

 Maybe I'm mistaken, though.  Will this option backup open files as well
 (including the ubiquitous Outlook data files)?  If not, what options do I
 have to do this?   It seemed so much easier working with rsyncd on *nix...
 From: Rob Poe
 Sent: Monday, December 20, 2010 12:32 PM
 To: General list for user discussion,questions and support
 Subject: Re: [BackupPC-users] Need help with automating Windows client
 deployments  handling the file lock backup problem with open files
 Personally, I use a batch file to install the client.  You have to configure
 the rsyncd.conf and rsyncd.secrets

 Using this download:
 http://sourceforge.net/projects/backuppc/files/cygwin-rsyncd/2.6.8_0/
 and these instructions:
 http://gerwick.ucsd.edu/backuppc_manual/backuppc_winxp.html


 @echo off
 c:
 md c:\rsyncd
 net use x: \\server\utils
 xcopy x:\rsyncd\*.* c:\rsyncd\*.* /Y
 c:
 cd \rsyncd
 cygrunsrv.exe -I rsyncd -e CYGWIN=nontsec -p c:/rsyncd/rsync.exe -a --config=c:/rsyncd/rsyncd.conf --daemon --no-detach
 cygrunsrv.exe --start rsyncd

 rem Les Stott notes you can setup the WinXP firewall to allow
 rem port 873 TCP connections to rsync with the following script
 rem lines.  Remove the rem lines to run these three commands.
 rem
 rem netsh firewall set allowedprogram program = c:\rsyncd\rsync.exe name = rsync mode = enable scope = CUSTOM addresses = LocalSubnet
 rem netsh firewall set portopening protocol = TCP port = 873 name = rsyncd mode = enable scope = CUSTOM addresses = LocalSubnet
 rem netsh firewall set icmpsetting 8 enable




 Ultimately, I am trying to automate the update process to end-users as much
 as possible (which is why I used smb to start with).  I'm looking for
 something I can remotely install and have backup all directories, including
 those currently locked and also including live Microsoft SQL databases
 without the bandwidth cost of SMB and while backing up open files (I
 essentially want a backup good enough that I can use to restore the entire
 system state, which includes active system files).  I know from my
 previous experience, I was able to successfully backup a MySQL database on
 Linux and restore it when the entire database was accidentally deleted.  I
 just am not too confident with Windows and how to do this properly while the
 DB is still running.

 


[BackupPC-users] Cloning the BackupPC filestore to a removable drive

2010-12-11 Thread hansbkk
 If you consider using ZFS as BackupPC's filesystem

In relation to other projects, I've done a bit of research, and got
interested in Nexenta, however even before the Oracle debacle I wasn't
quite ready to invest the additional learning curve - much less likely
now. . .

And just a comment, but keeping time machine snapshots at the
filesystem level of BackupPC's time machine snapshots filestore, wow
would probably open up a wormhole big enough to drive a Tardis
through!  8-O

Back to serious - these are mission-critical data backups, so I'm
defaulting to conservative - note this part of my goal:

 What I'm really looking for is to be able to just mount the resulting 
 filesystem on any ol' livecd

Which your idea doesn't address. But I *did* say

 Any and all feedback/suggestions welcome.

so thanks!

---

If no one's worked with FSArchiver, then how about feedback on what
y'all would choose between my current top two (actually 2.5) choices?

A - rsync the pool over + BackupPC_tarPCCopy the hardlinks dirstruc to
a temp location and then unpack it. I need to work on my scripting
skills anyway   8-(

B - full partition clone via something like dc3dd  or  Acronis True Image


Feature request: just MHO, but there really should be an option (both
Web and CLI) to just copy the whole cahuna over, as long as there's an
appropriate target device mounted. I can't imagine too many people are
dumping this stuff to tape??

=
On Thu, Dec 9, 2010 at 3:35 PM, Jonathan Schaeffer
jonathan.schaef...@univ-brest.fr wrote:
 On 09/12/2010 06:55, hans...@gmail.com wrote:
 I've been investigating how to backup BackupPC's filesystem,
 specifically the tree with all the hard links (BTW what's the right
 name for it, the one that's not the pool?)

 The goal is to be able to bring a verified-good copy of the whole
 volume off-site via a big cahuna sata drive.

 I'm answering not exactly to your question, but you might be interested
 in this :

 If you consider using ZFS as BackupPC's filesystem, there is the awesome
 combo :

 zfs snapshot   # makes a snapshot of your filesystem, for instance on a
 daily basis

 zfs send snapshot | ssh backback zfs receive

 and your filesystem will be exported on host backback AND you will be
 able to travel in time by mounting the daily snapshots.

 Jonathan


 I don't have enough RAM (or time!) for rsync -H and cp -a

 I was originally looking at block-level partition imaging tools, from
 mdadm (RAID1'ing to a removable drive) to dd to Acronis.

 I'm also looking at BackupPC_tarPCCopy, which seems great, but

 What I'm really looking for is to be able to just mount the resulting
 filesystem on any ol' livecd, without having to restore anything,
 reconstruct LVM/RAID etc complexities just to get at the data - the
 source volume is an LV running on a RAID6 array, but I want the target
 partition to be a normal one.

 I've come across this tool: http://www.fsarchiver.org/Main_Page

 Does anyone have experience with it?

 Any and all feedback/suggestions welcome.



[BackupPC-users] FSArchiver?

2010-12-08 Thread hansbkk
I've been investigating how to backup BackupPC's filesystem,
specifically the tree with all the hard links (BTW what's the right
name for it, the one that's not the pool?)

The goal is to be able to bring a verified-good copy of the whole
volume off-site via a big cahuna sata drive.

I don't have enough RAM (or time!) for rsync -H and cp -a

I was originally looking at block-level partition imaging tools, from
mdadm (RAID1'ing to a removable drive) to dd to Acronis.

I'm also looking at BackupPC_tarPCCopy, which seems great, but

What I'm really looking for is to be able to just mount the resulting
filesystem on any ol' livecd, without having to restore anything,
reconstruct LVM/RAID etc complexities just to get at the data - the
source volume is an LV running on a RAID6 array, but I want the target
partition to be a normal one.

I've come across this tool: http://www.fsarchiver.org/Main_Page

Does anyone have experience with it?

Any and all feedback/suggestions welcome.
