Infra is not really a concern for me; it's more whether the software will handle it.
Network connectivity and IOPS aren't a huge concern (machines will be
spec'd to be "beasts", as will the network).
De-dupe for me is not a massive want. 99.9% of the files we will be backing
up are 100% unique. I'd rather
Yeah, I'm not super concerned personally with the UI, but the UI is for
pointy-haired types to be able to do things. (Yeah, I know.)
On 8 February 2017 at 21:39, Kenneth Porter wrote:
> Deduplication and rsync are the big reasons I went with BackupPC. UI is way
> down the
Individual files range from KB-sized text files to 100GB+ in size. Everything from
images to movie files, to text, to VM images. (We are a VFX studio.)
And mainly unique files, so deduplication is honestly not something I'm concerned about.
I just like the BackupPC UI and its ease of use, and I'm wondering if this is a
Has anyone used BackupPC in an enterprise environment?
I'm talking PBs of data, hundreds of servers, a hybrid environment: Mac,
Solaris, BSD, Linux, Windows.
Did it work well? Any gotchas? When you see PBs of data, does your gut
go "uhh, yeah, no"?
I'm just fact-finding and investigating.
I was reading about tar snapshots of the BackupPC server / data, or
similar. Where exactly can I do this? For example, if I go to my
BackupPC web admin, how can I, say, create a compressed tar export of
ALL the backup data?
I am ultimately looking for off-site, but if I can get a
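For what it's worth, the command-line shape I had in mind looks roughly like
this; the host name, share name, paths and init command are assumptions about
a typical packaged 3.x install, not something verified:

    # export one host's most recent backup as a compressed tar stream
    sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
        -h myhost -n -1 -s /home . | gzip > /offsite/myhost-latest.tar.gz

    # or, to capture literally all backup data, stop BackupPC and archive
    # $TopDir itself; tar keeps the pool/pc hardlinks within one archive
    /etc/init.d/backuppc stop
    tar -czf /offsite/backuppc-topdir.tar.gz -C /var/lib/backuppc .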
I noticed the backup data appears to be stored in the pc directory. Does
that mean the hardlinks are in the pool directory and not in the pc
directory?
So would it be possible to store the pc directory in a different directory
(in my flexraid pool) but store the pool directory in a normal ext
But does MooseFS basically duplicate the data, so if you have 2TB of
BackupPC data, you need a MooseFS with 2TB of storage to duplicate the whole
thing?
On Fri, May 20, 2011 at 8:05 AM, Mike ispbuil...@gmail.com wrote:
On 11-05-18 05:21 PM, Carl Wilhelm Soderstrom wrote:
On 05/17 01:25 ,
Is it possible to view details on a backup in progress (and if so, how) - for example, it
would be great to see what file it is backing up, how many/how big the
backup is so far. Totals would be nice, like 100 files totaling 200MB
backed up out of 500 files totaling 2GB.
I have a slow backup and I have no
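One rough way to peek from the command line, if the web UI doesn't show
enough (paths assume a 3.x layout under /var/lib/backuppc; <host> and <PID>
are placeholders):

    pgrep -fl BackupPC_dump                        # find the running dump
    watch du -sh /var/lib/backuppc/pc/<host>/new   # in-progress data lands here
    sudo lsof -p <PID>                             # files the dump currently has open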
I tried excluding files and it does not seem to be working:
In the web interface I added an entry for: *.MPG
Yet in the BackupPC pc folder I am seeing .MPG files!
Do I need to type anything more than just *.MPG? (Windows client)
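For reference, the shape that usually gets suggested for the per-host config
(the 'C$' share name is an assumption about an smb setup; patterns can be
case-sensitive, hence listing both cases):

    $Conf{BackupFilesExclude} = {
        'C$' => ['*.MPG', '*.mpg'],   # per-share list of exclude patterns
    };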
If there is a different place to submit features, I apologize.
It would be great if there were a field for machine description - for
example for location or similar.
Most of the machines I deal with are 'tagged' with a number. We usually
name the machine with this number, so in backuppc I see
Ok, so I think I am ALMOST there for having stats on the status page. I
fixed my $TopDir path in the config file, /etc/passwd and now the /lib file.
Now everything seems to be working - backups and my cpool looks good now
from the command line.
In fact, BackupPC_nightly now shows this:
So I know for sure it is a path issue somewhere - I can see errors in the
log that it is trying to do a linknewfile and referencing
/var/lib/backuppc and getting permission denied because that directory does
not exist anymore. I already defined $Conf{TopDir}; is there anything
else / any
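A rough way to hunt for leftover references to the old path, in case anyone
else hits this (install locations are assumptions for a packaged 3.x setup):

    grep -rl '/var/lib/backuppc' /etc/backuppc /usr/share/backuppc/lib 2>/dev/null
    # in 3.x the original TopDir also gets baked into the installed
    # BackupPC/Lib.pm at configure time, so editing config.pl alone may
    # not be enough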
I am trying to figure out why my BackupPC statistics are not generating
(everything is basically all 0's in the status page -- see below).
Notice the pool is at 0GB...
So looking at the code, it loops through the backuppc/cpool/ directory tree
and counts files and directories. Running a linux
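For comparison, a quick manual count of what is actually sitting in the
compressed pool (the path is an assumption; adjust to your $TopDir):

    find /var/lib/backuppc/cpool -type f | wc -l   # pooled file count
    du -sh /var/lib/backuppc/cpool                 # on-disk size of the pool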
What do these errors mean? I have LOTS of them - at first I thought
it was something to do with my eclipse directory, but I see the errors
on another machine with Adobe and some other directories:
2011-03-18 11:04:21 BackupPC_link got error -4 when calling
MakeFileLink(/mnt/backuppc/pc/2510c
So I am trying to figure out why my statistics are not updating, so I looked
at the code for BackupPC_nightly. Running it manually as perl
BackupPC_nightly 0 128, I get a list of BackupPC_stats x = pool, 0,0,0,0
. (all zeros) - except for a couple lines where I see a 1 in the
second
Not sure why, but I am not receiving messages from the list, not in my spam
box either. I am having to read my messages in gmane...
Ok, so ls -l shows numbers like 2, 3, and 4 in the second column. So that
means de-dup is working? But just stats are not updated? How do you
update?
This
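If it is just the stats pass that needs a kick, asking the running server to
do a nightly pass is one way (the bin path is an assumption for a packaged
install; it must run as the backuppc user):

    su -s /bin/sh backuppc -c \
        '/usr/share/backuppc/bin/BackupPC_serverMesg BackupPC_nightly run'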
How do we handle laptops or similar machines that are rarely (or never)
connected to the LAN or anywhere that I can open ports? Is it possible to
install something on the client laptop/machine to make an outgoing
connection to the server and initiate a backup, like an outgoing rsync to
the
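One workaround that gets suggested for roaming machines (purely a sketch;
the user, port and host names are made up) is a reverse ssh tunnel from the
laptop, so the server can reach the laptop through a port it already knows:

    # on the laptop, whenever it happens to be online:
    ssh -N -R 2222:localhost:22 tunnel@backuppc-server.example.com
    # the server-side host config then points its rsync-over-ssh command
    # at localhost port 2222 instead of the laptop's real address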
So I have backuppc working for basic backup of some smb (winxp) machines.
However, I cannot tell if there is any data de-duplication going on - is
there something I have to do to get that going?
My status screen shows:
- The server's PID is 977, on host backuppc, version 3.1.0, started at
For SMB machines (Windows clients), is it possible to back up other
drives in addition to C$ (D$, E$, etc.), and if so, how? However, I do not want
to back up D$ on ALL clients, only specific ones.
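The per-host override is the usual way to express that (share names are just
examples; the override goes in that host's own config file, e.g.
pc/somehost.pl, so it only applies there):

    # global default in config.pl:
    $Conf{SmbShareName} = ['C$'];
    # per-host file for the machines that also need D$:
    $Conf{SmbShareName} = ['C$', 'D$'];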
have on what could be done to
determine the cause here to find a solution. TIA
Scott
On 2/11/2011 1:15 AM, Stefan Peter wrote:
Hi Scott
On 11.02.2011 01:29, Scott Saunders wrote:
I let the most recent backup 'finish' on its own. It becomes
are running 2.6.9 protocol version 29 and both of the
clients are running 3.0.7 protocol version 30. AFAIK the newer version
would be backwards compatible, no? Is this setup confusing -- have I
explained the issue well enough?
Scott
On 2/7/2011 2:46 PM, Scott Saunders wrote:
I've got a couple
I've got a couple of servers running in a 2-node master/slave cluster
using pacemaker (corosync) / drbd. Like other servers, I've got them
configured to back up to a local BackupPC server as well as a remote (VPN
over T1) BackupPC server (rsync over ssh for both). However, with the
cluster, only
On 16 Aug 2009 at 14:11, Michael 'Moose' Dinn wrote:
another unit offsite. The extensive use of hardlinks prevents
rsync from being the right solution here.
rsync -H doesn't work for you?
From earlier discussions on the list:
Re: [BackupPC-users] backup of backup machine
Note
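For what it's worth, the plain form people usually mean (paths and the
offsite host are assumptions) is just:

    rsync -aH --delete /var/lib/backuppc/ offsite-host:/var/lib/backuppc/
    # -H preserves the hardlinks that make the pool work, but rsync has to
    # track every linked inode in memory, which is what falls over once the
    # pool holds many millions of links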
On 16 Aug 2009 at 9:07, Clint Alexander wrote:
One thing that has frustrated the heck out of me is the lack of an
organized system for feature requests.
A critical missing feature to me is the ability to mirror my BackupPC unit to
another unit offsite. The extensive use of
that would be and quite frankly it seems too easy
Any suggestions of variables are welcome :) thanks!
--
Lyle Scott, III
http://www.lylescott.ws
to choosing a
transport, tweaking the flags and deciding on what to include or exclude...
-Scott
I've noticed that with the default scheduling, a week's worth of backups for a
host consists of 1 full and 6 incrementals. Using the
default scheduling as an example, the full could happen on any day of the
week, and the schedule can even be reset if the backuppc daemon is
restarted.
On 01/09 09:19 , Scott Saunders wrote:
I know it's possible to set specific schedules for backups per host
using $Conf{FullPeriod}, $Conf{IncPeriod}, and $Conf{FullKeepCnt}, among
others, but I was wondering if there is a way to do scheduling specific
to a smaller subset of that - being
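For reference, the knobs under discussion look like this in a per-host file
(values are only examples; BlackoutPeriods is the closest built-in thing to
pinning backups to particular hours/days):

    $Conf{FullPeriod}  = 6.97;   # days between full backups
    $Conf{IncPeriod}   = 0.97;   # days between incrementals
    $Conf{FullKeepCnt} = 4;      # number of fulls to keep
    $Conf{BlackoutPeriods} = [
        { hourBegin => 7.0, hourEnd => 19.5, weekDays => [1, 2, 3, 4, 5] },
    ];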
Rich Rauenzahn wrote:
Multiple profiles? I'm not sure I understand. Would this consist of
creating multiple per-host configurations for the same host? If so,
Yes...
would there be a specific naming convention? Is there a way to have
No...
backuppc still automatically back them
/solutions?
-Scott
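As I understand the suggestion (the names here are made up), the trick is to
list the same physical machine more than once in the hosts file and point
every alias back at the real box:

    # hosts file:
    laptop-docs     0   backuppc
    laptop-media    0   backuppc
    # pc/laptop-docs.pl and pc/laptop-media.pl each carry their own
    # schedule/includes/excludes, plus:
    $Conf{ClientNameAlias} = 'laptop.example.com';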
scott wrote:
Craig Barratt wrote:
Scott writes:
TarClientCmd is
/usr/bin/sudo /bin/tar -c -v -f - -C $shareName+ --totals
and /bin/tar --version gives:
tar (GNU tar) 1.15.92
The backup fails with:
2007-12-04 22:09:50 full backup started for directory /data
2007-12-04 22:27:23
Craig Barratt wrote:
Scott writes:
TarClientCmd is
/usr/bin/sudo /bin/tar -c -v -f - -C $shareName+ --totals
and /bin/tar --version gives:
tar (GNU tar) 1.15.92
The backup fails with:
2007-12-04 22:09:50 full backup started for directory /data
2007-12-04 22:27:23 Got fatal error during
but couldn't find anything to match
this. Thanks if anyone can point me toward some diagnostics.
--
Scott
Is there a recommended partitioning scheme for BackupPC? What are
people here using?
/boot  100 MB
/      3 GB
/home  10 GB
/var   rest of the volume / disk / LVM
TIA
Angus
--
Angus Scott-Fleming
GeoApps, Tucson, Arizona
http://www.geoapps.com/
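Since the pool usually lives under /var/lib/backuppc on packaged installs
(the location varies), another option is a dedicated filesystem just for it
rather than all of /var; a sketch fstab line (device and filesystem type are
assumptions):

    /dev/vg0/backuppc   /var/lib/backuppc   ext3   noatime   0   2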
On Sep 28, 2007, at 6:13 PM, James Kyle wrote:
I've completed my script that automatically configures OSX Tiger
clients for backuppc. I can post it to the list if anyone thinks it'd
be of use.
Yes please.
--
Scott
AIM: BlueCame1
On May 2, 2007, at 5:59 AM, Jamie Lists wrote:
Hey Scott,
this may sound weird, but we have pretty much the same setup as you, and
we're finding the cause of our terrible backup speeds to be a problem
with ssh speeds on CentOS.
We don't know why this is happening yet but are working on it. Also
pooling with my current full
backup and future incremental? Is there any tuning that can be done
with pooling to allow for faster backup speeds?
Thanks for the help.
--
Scott
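The cipher tweak that comes up for the ssh-overhead side of this (very much
a "try it and measure" suggestion; arcfour availability depends on the
OpenSSH build on both ends) is along these lines:

    $Conf{RsyncClientCmd} = '$sshPath -c arcfour -q -x -l root $host'
                          . ' $rsyncPath $argList+';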
To my knowledge rsyncd is the rsync daemon process that runs in the
background. They are two parts of the whole, not two separate animals.
Rsync is by far the safest and the smartest way to go about it. But
you're right, ssh adds a considerable amount of overhead.
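A minimal sketch of what the two transfer methods look like in a host config
(the rsyncd module name and credentials are placeholders; rsyncd needs an
rsync daemon with a matching module running on the client):

    # rsync over ssh - encrypted, but pays the ssh CPU overhead:
    $Conf{XferMethod} = 'rsync';

    # rsyncd - talks straight to the client's rsync daemon, no ssh:
    $Conf{XferMethod}     = 'rsyncd';
    $Conf{RsyncShareName} = ['backup'];    # rsyncd module name
    $Conf{RsyncdUserName} = 'backuppc';
    $Conf{RsyncdPasswd}   = 'secret';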
Faster? Are you running
the backup-up. Not very
efficient for all the reasons you detailed.
Well, at least I'm not on crack. You might look at that DriveImage XML
thing. You can do bare-metal restores with it if you like. Beyond that
if you like.
Scott
have backuppc back itself up and catch it. This
is a far from ideal scenario.
But that's neither here nor there. The shadow copy function is the
ticket. If there's a way to leverage that, many doors open for us.
Scott Gamble
Bill Hudacek wrote:
I for one am very interested in this. All
Guus Houtzager wrote:
On Wed, 2005-12-21 at 22:53 +1100, Vincent Ho wrote:
On Wed, Dec 21, 2005 at 11:29:33AM +0100, Guus Houtzager wrote:
A colleague of mine wrote just that script. I had the problem of needing
to migrate my backuppc with all data to another server and ran into the
Is it possible to back up Novell servers with BackupPC?
Scott Gamble
In my config file of host 127.0.0.1 I got these lines:
$Conf{TarShareName} = '/home/e-smith/files/users';
#   $Conf{BackupFilesExclude} = '/temp