Toni Van Remortel wrote:
> Anyway, I'm preparing a separate test setup now, to be able to do
> correct tests (so both BackupPC and an rsync tree are using data from
> the same time).
> Test results will be here tomorrow.
>
So that is today.
BackupPC full dump, with patch which removed --ignore-
BackupPC 3.1.0 has been released on SF.net.
This release contains a few new features and bug fixes.
Some of the new features are:
* Added new script BackupPC_archiveStart that allows command-line
starting of archives (usage sketch after this list).
* Added sorting by column feature to host summary table in CGI
interface.
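(From what I recall of the docs, the new archive script is invoked along
these lines; the argument order is from memory, so verify against the
documentation:)
    __INSTALLDIR__/bin/BackupPC_archiveStart archiveHost userName host1 [host2 ...]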
I have two installs of BackupPC on CentOS 4.5 which all of a sudden stopped
working with "can't find Compress::Zlib" errors, which cause the backups to
fail.
Both systems have been running for almost a full year without issue until
just the last few days.
I was able to fix one install a few days ago
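(For anyone hitting the same error, a quick way to confirm Perl can find the
module, and two ways to (re)install it; the CentOS package name is from
memory, so double-check it:)
    perl -MCompress::Zlib -e 'print "$Compress::Zlib::VERSION\n"'
    yum install perl-Compress-Zlib
    # or, via CPAN:
    perl -MCPAN -e 'install Compress::Zlib'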
nexenta is alive and well. in fact, check this out.
http://www.nexenta.com/corp/
nexenta is not advancing at the pace of ubuntu though. i like the ubuntu
system so nexenta is great for me. if i were you and you were not tied to
ubuntu then you might consider opensolaris or solaris10. solaris10
Gene Horodecki wrote:
> Sounds reasonable... What did you do about the attrib file? I noticed
> there is a file called 'attrib' in each of the pool directories with
> some binary data in it.
>
Nothing... it just contains permissions, etc. That's why I did another
full after the move -- then
Sounds reasonable... What did you do about the attrib file? I noticed
there is a file called 'attrib' in each of the pool directories with some
binary data in it.
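(Side note: if I remember right, BackupPC 3.x ships a small helper for
inspecting those attrib files; the install path below is an assumption:)
    /usr/share/backuppc/bin/BackupPC_attribPrint .../pc/myhost/123/fsomedir/attrib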
"Rich Rauenzahn" <[EMAIL PROTECTED]> wrote:
> Gene Horodecki wrote:
>
>> I had that problem as well.. so I uhh..
Vlade wrote:
First of all backuppc rocks! :)
What is the best way to upgrade from backuppc 3.0.0 to the latest stable
3.1.0?
I installed backuppc 3.0.0 through aptitude.
If you installed the Debian/Ubuntu package, it's best to wait for
someone to package 3.1.0 (I don't know who the maintainer i
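(If you'd rather not wait for a package, a source upgrade is roughly the
sketch below; note it may not match the Debian package's file layout:)
    tar zxf BackupPC-3.1.0.tar.gz
    cd BackupPC-3.1.0
    perl configure.pl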
To whoever asked about upgrading (sorry, I deleted the email by accident):
You can check this URL:
http://us.archive.ubuntu.com/ubuntu/pool/main/b/backuppc/
and when 3.1.0 shows up, you can either grab a copy, or edit
/etc/apt/sources.list to include backports, which should give you access to
i
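(For illustration, a backports entry in sources.list would look something
like the line below; the release name is an assumption, adjust for yours:)
    deb http://us.archive.ubuntu.com/ubuntu gutsy-backports main universe
    apt-get update && apt-get install backuppc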
First of all backuppc rocks! :)
What is the best way to upgrade from backuppc 3.0.0 to the latest stable 3.1.0?
I installed backuppc 3.0.0 through aptitude. As far as I can remember,
there were some file location issues when upgrading from 2.x.x to 3.0.0.
regards,
Vlade
Gene Horodecki wrote:
I had that problem as well.. so I uhh.. well, I fiddled with the backup
directory on the backuppc server and moved them around so that backuppc
wouldn't see I had moved them remotely.. Not something I would exactly
recommend doing... although it worked.
Great suggesti
Mathis, Jim wrote:
> Was able to change the group to "apache" and the permission settings, as
> follows:
> -rwsr-x--- root apache BackupPC_Admin
> However, no display is observed when executing ./BackupPC_Admin (same
> as before). Did notice that there are two
> other CGI scripts within the cgi
Was able to change the group to "apache" and the permission settings, as
follows:
-rwsr-x--- root apache BackupPC_Admin
However, no display is observed when executing ./BackupPC_Admin (same
as before). Did notice that there are two
other CGI scripts within the cgi-bin directory, "htsearch" and "q
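(For reference, the ownership and mode described above can be set like this;
the path is assumed from the earlier message:)
    chown root:apache /var/www/cgi-bin/BackupPC_Admin
    chmod 4750 /var/www/cgi-bin/BackupPC_Admin    # 4750 = -rwsr-x---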
> Perhaps you could fiddle with them to make them exactly the same...
> At least if you have the 3.x version you will be able to stop and
> restart the initial full if you have to while getting the first complete
> copy.
> I had that problem as well.. so I uhh.. well, I fiddled with the backup
> d
Using version 2.1.3, I am running into an error.
I migrated my BackupPC onto a new file server, from Fedora to Ubuntu. The
move was easy and the backups are running fine again.
When I go to dump a gz file using an archive host, I get these in the logs:
Executing: /var/bin/BackupPC_archiveHost /
Gene Horodecki wrote:
> I fiddled with the paths of my biggest backup in order to simplify an
> offsite copy and now because the files aren't "exactly the same" it seems
> it's going to take as long as the very first backup which was 4x as long as
> subsequent fulls. Unfortunate, because all the
Jack wrote:
> I used a backup system before that had an "incremental forever" backup policy.
> The first "incremental" was really a full, and after that, like BackupPC
> with rsync, it scanned for changes, and only backed up what it needed to.
> Unless you forced it, you never did (or needed to) do a
I asked that very question a couple of weeks ago!
In my understanding, the answer was this:
An incremental backup relies on the modification datestamp to know what to back
up, and the method is not infallible. Depending on the transfer method, it
can miss the odd update. Also, it gets more and more
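(The same trade-off shows up in plain rsync; a sketch of the two modes:)
    rsync -a /src/ /dest/              # quick check: compares size + mtime only
    rsync -a --checksum /src/ /dest/   # reads and checksums every file; slow,
                                       # but catches changes that preserve mtime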
Nils Breunese (Lemonbit) schrieb:
> Holm Kapschitzki wrote:
>
>>
>> So my question is how to define different dir (at the different config
>> files) for each host where i can backup the data?
>
> I don't think this is possible, because BackupPC uses hardlinks and
> these hardlinks need to be al
I used a backup system before that had an "incremental forever" backup policy.
The first "incremental" was really a full, and after that, like BackupPC
with rsync, it scanned for changes, and only backed up what it needed to.
Unless you forced it, you never did (or needed to) do a full backup again.
T
I fiddled with the paths of my biggest backup in order to simplify an
offsite copy and now because the files aren't "exactly the same" it seems
it's going to take as long as the very first backup which was 4x as long as
subsequent fulls. Unfortunate, because all the files are there.. but they
nee
Les Mikesell wrote:
Gene Horodecki wrote:
Is this true? Why not just send the checksum/name/date/permissions of the
file first, see if it exists already, and link it in if it does? If the
file does not exist by name but there is a checksum for the file, then just
use the vital data to lin
Mathis, Jim wrote:
> Hello,
>
> I'm attempting to use the standard CGI interface but it does not appear
> to be working. Additional info as follows:
>
> user = root
> directory= /var/www/cgi-bin/BackupPC_Admin
> permission settings: --wsr-x--- root root BackupPC_Admin
> http= httpd service
Hi all -
Likely a stupid question, but is it safe to remove the --ignore-times
rsync option in the BackupPC config.pl file? When I've done rsync
backups before with simple scripts, I've never used that option, and
have yet to land in hot water because of it. While in an ideal world
we'd chec
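(For reference, if --ignore-times does appear in your $Conf{RsyncArgs} list in
config.pl, removing it is just deleting that entry. A sketch, with the other
arguments abbreviated; note that some versions add --ignore-times in the
transfer code for full backups rather than in config.pl:)
    $Conf{RsyncArgs} = [
        '--numeric-ids', '--perms', '--owner', '--group',
        '--times', '--links', '--recursive',
        # '--ignore-times',   # dropped: forces a full re-read on every full backup
    ];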
Hello,
I'm attempting to use the standard CGI interface but it does not appear
to be working. Additional info as follows:
user = root
directory= /var/www/cgi-bin/BackupPC_Admin
permission settings: --wsr-x--- root root BackupPC_Admin
http= httpd service is running
I've noticed that the per
Gene Horodecki wrote:
>> I'm not sure what you mean by 'pool' here. The only thing relevant to
>> what a backuppc rsync transfer will copy is the previous full of the
>> same machine. Files of the same name in the same location will use the
>> rsync algorithm to decide how much, if any, data ne
dan wrote:
> the ZFS machine is a nexenta (opensolaris+ubuntu) machine with an
> athlon64x2 3800+ and 1GB RAM with 2 240GB sata drives in the array. its
> a dell e521
Is nexenta still an active project? And would you recommend using it?
--
Les Mikesell
[EMAIL PROTECTED]
the ZFS machine is a nexenta (opensolaris+ubuntu) machine with an
athlon64x2 3800+ and 1GB RAM with 2 240GB sata drives in the array. it's a
dell e521
On Nov 27, 2007 9:33 AM, Les Mikesell <[EMAIL PROTECTED]> wrote:
> Toni Van Remortel wrote:
>
> > But I do know that BackupPC does use more band
> I'm not sure what you mean by 'pool' here. The only thing relevant to
> what a backuppc rsync transfer will copy is the previous full of the
> same machine. Files of the same name in the same location will use the
> rsync algorithm to decide how much, if any, data needs to be copied -
> any
I mount the 'target' directory directly on the default $TopDIR, though it
does work by changing $TopDIR. i have done this on debian and ubuntu.
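(For anyone doing the same, mounting over the default TopDir can be as simple
as an fstab entry; the device and the Debian-default path are assumptions:)
    /dev/backupvg/backuplv  /var/lib/backuppc  ext3  defaults  0  2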
On Nov 27, 2007 8:11 AM, Nils Breunese (Lemonbit) <[EMAIL PROTECTED]> wrote:
> dan wrote:
>
> > also, $TopDIR is in the config.pl file and not hardco
Toni Van Remortel wrote:
> But I do know that BackupPC does use more bandwidth.
> Besides, when dumping a full backup, the 'pool' should mean (I hope): if a
> file is already in the pool, use it. If not, then there is a problem, as
> those files are already in another backup set of the test host. But BackupPC
>
What kind of specs does your server have (besides running ZFS)? That is,
processor, memory, etc.
I've got a P-III 500MHz with 512MB RAM as my backup server. It also is my
file server (I want to split those into separate machines, but I can't right
now), with about 250GB of data. (Most of that i
dan wrote:
also, $TopDIR is in the config.pl file and not hardcoded on debian
or ubuntu (at least in version 3)
you can certainly change $TopDIR, but you will have to use LVM and
make a raid or jbod of some sort to get one filesystem.
I believe just changing $TopDIR isn't going to work as
With rsync, the time required to do a backup depends as much on the number
of files as the total size of the data. For example, backing up an email
server with 20GB in 2 million files will take much longer than backing up
ten 2GB ISOs. (*)
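(To put rough numbers on the file-count effect, assuming ~1 ms of per-file
overhead for the stat and metadata exchange, on top of the data transfer:
2,000,000 files x 1 ms = ~2,000 s, about 33 minutes of overhead before any
data moves; ten ISOs add about 10 ms.)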
So "I backed up X GB in Y minutes" is meaningless without
I back up about 6-7GB during a full backup of one of my SCO Unix servers
using rsync over ssh and it takes under an hour.
4-5GB on a very old Unix machine using rsync on an NFS mount takes just
over an hour.
Full backups of my laptop are about 8GB and take about 15 minutes, though it
is on gigabit
also, $TopDIR is in the config.pl file and not hardcoded on debian or
ubuntu (at least in version 3)
you can certainly change $TopDIR, but you will have to use LVM and make a
raid or jbod of some sort to get one filesystem.
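(A minimal LVM sketch of that idea; the device names are placeholders:)
    pvcreate /dev/sdb1 /dev/sdc1
    vgcreate backupvg /dev/sdb1 /dev/sdc1
    lvcreate -l 100%FREE -n backuplv backupvg
    mkfs.ext3 /dev/backupvg/backuplv
    mount /dev/backupvg/backuplv /var/lib/backuppc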
On Nov 27, 2007 12:43 AM, Nils Breunese (Lemonbit) <[EMAIL PROTECTED]> wrote: