dan wrote:
> I'd like to add that ZFS is not an experimental filesystem. It is
> deployed in production environments on Solaris and is very robust for
> its age. Also, since it did not start life as open source, it lived
> behind the scenes and was tested behind the curtain at Sun for
> ye
Bernhard Ott wrote:
> John Pettitt wrote:
>
>> In the server I just upgraded (Core 2 Quad 2GHz, 2GB, 1.5TB ufs on
>> RAID10 , FreeBSD 7.0) my backups run between 3.6 MB/sec for a remote
>> server (*) and 56 MB/sec for a volume full of digital media on a gig
Bruno Faria wrote:
> Hello to everyone,
>
> Lately for some reason, BackupPC has been running very slow on a
> server that we have configured to do backups. Just so that you guys
> can have an idea of how slow it really is going, it took BackupPC
> 10265.2 minutes to backup 1656103 files totali
David Rees wrote:
> On Wed, Feb 27, 2008 at 4:38 PM, Stephen Joyce <[EMAIL PROTECTED]> wrote:
>
>> (Mostly) agreed. If you can afford a hardware raid controller, raid 5 is a
>> good choice.
>>
>
> To clarify, a hardware raid controller with battery backed RAM is a
> good choice for RAID 5,
Nils Breunese (Lemonbit) wrote:
> Hello all,
>
> We're running a BackupPC 3.1.0 installation on CentOS 4 32-bit on a
> machine with the following specs:
>
> - Intel Celeron CPU 2.66 GHz
> - 512 MB RAM
> - BackupPC pool on a single 250 GB ATA 133 drive
>
> We're currently running one backup at a time
Damien Hull wrote:
> Here's my situation.
> 1. Workstation
> 2. Laptop
> 3. 3 - 4 Servers
>
> Backup Server
> 1. 750 GB of data storage ( - OS )
> 2. Software RAID 1
>
> Data
> 1. Currently have about 200 Gigs to backup
> 2. May have an extra 100 Gigs or more to backup in the future
>
> I like the
Alan Orlič Belšak wrote:
> Hello,
>
> is there a way to speed up transfer via rsyncd? The last backup ran at
> 0.9 MB/s. The network is 100Mb, the backup was made on a local disk (no
> USB involved), except the include list was involved and there are a lot
> of empty directories.
>
> Bye, Alan
>
>
Bryan Penney wrote:
> The pool consists of about 3.8 million files, so there are a lot of
> small files.
>
My experience when I had to do this on one of my servers was that it's
not the pool itself that kills you, it's linking all the directory trees
from the pc directories. The box I was
Bryan Penney wrote:
> We have a server running BackupPC that has filled up its 2TB partition
> (96% full anyway). We are planning on moving BackupPC to another server
> but would like bring the history of backups over without waiting the
> extended period of time (days?) for the entire pool t
Nelson Serafica wrote:
>
> Anyone knows how to disable incremental backup?
Set the incremental interval to a value larger than the full backup
interval.
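For the archives, the relevant config.pl lines (Perl syntax; the 30 is an
arbitrary choice, anything comfortably larger than FullPeriod works):

  $Conf{FullPeriod} = 6.97;   # default: a full roughly every 7 days
  $Conf{IncrPeriod} = 30;     # larger than FullPeriod, so a full is
                              # always due first and incrementals never run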
John
Brendan Simon wrote:
> David Rees wrote:
>
>> On Dec 18, 2007 5:05 PM, Brendan Simon <[EMAIL PROTECTED]> wrote:
>>
>>
>>> So is the bottleneck rsync or the number of files or memory ???
>>>
>>>
>> In this case, it's neither the number of files nor memory.
>>
>> If you look at
Brendan Simon wrote:
> I need to really speed up my backup of Linux boxes/directories !!!
> I'm using ssh/rsync to do Linux backups. As an example (see end for
> more details):
>
> * a backup of a 22.6GB Linux directory is taking 2037 minutes (34
> hours = 1.4 days).
> * a backup of
Rob Ogle wrote:
> Ok...so...since the beef of my backup is a sql database backup file that is
> 3.5GB. The reason I've got so much data is because I have about 8 copies
> of it. One for the full and one of the incremental of each day between
> fulls. Right?
>
> If so...how do I tweak my setting
There have been several threads lately about storage issues so I
figured a quick refresher on how unix-like systems store files would
shed some light.
When you make a file on a unix system (e.g. Linux, FreeBSD, Solaris)
what actually happens is the system allocates an inode (index node) on
last week's full
you're not going to see a big space drop when last week's goes away.
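Concretely: the pool and pc trees are hard links to the same inodes, and
the blocks only come back when the last link goes. A quick shell
illustration (any scratch directory):

  ls > full.0                # any scratch file (stands in for a pool file)
  ln full.0 full.1           # second name, same inode
  ls -li full.0 full.1       # same inode number, link count is 2
  rm full.0                  # "expire last week's full"
  ls -li full.1              # link count drops to 1; blocks still in use
  rm full.1                  # only now is the space actually freed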
John
> -Original Message-
> From: Les Mikesell [mailto:[EMAIL PROTECTED]
> Sent: Thursday, December 13, 2007 3:22 PM
> To: [EMAIL PROTECTED]
> Cc: backuppc-users@lists.sourceforge.net;
Rob Ogle wrote:
>
> Dan,
>
> I waited 4 days with no change in size on any directory. The data
> being backed up doesn’t change that much, but with the removal of old
> full backups, I would have expected it to come down. The data being
> backed up is about 3GB uncompressed. The directory ‘hostB
Rob Ogle wrote:
>
> I’m running Backuppc 3 on ubuntu gutsy. I deleted a host via the gui.
> The log file shows it as deleted. However, the machine’s directory
> still resides in the ‘pc’ directory and it’s still taking up disk space.
>
> In addition, I reduced my number of full backups to keep vi
Johan Ekh wrote:
> Hi all,
> I would like to use backuppc with a network disk (Qnap ts-209).
> What is the appropriate way to do this? I've tried to use a linux
> computer as server with the ts-209 nfs mounted but backuppc
> will not install as it cannot "chown" the mounted disk.
>
> Has anyone don
Rich Rauenzahn wrote:
> John Pettitt wrote:
>>>
>>>
>> What happens is the newly transferred file is compared against candidates
>> in the pool with the same hash value and if one exists it's just
>> linked. The new file is not compressed. I
Rich Rauenzahn wrote:
>
>
> I know backuppc will sometimes need to re-transfer a file (for instance,
> if it is a 2nd copy in another location.) I assume it then
> re-compresses it on the re-transfer, as my understanding is the
> compression happens as the file is written to disk(?).
>
> Woul
Craig Barratt wrote:
> Rich writes:
>
>
>> I don't think BackupPC will update the pool with the smaller file even
>> though it knows the source was identical, and some tests I just did
>> backing up /tmp seem to agree. Once compressed and copied into the
>> pool, the file is not updated with fu
Matthew Metzger wrote:
> Hello David,
>
> thanks for the response. I did exactly what you suggested with RAID 1
> and used LVM to join them into one large drive. It was fairly easy to
> accomplish with Ubuntu's installer.
>
> However, Les Stott brings up a great point about RAID 5. I would like
>
I'm getting an out of memory on large archive jobs - this in a box with
2GB of RAM which makes me think there is a memory leak someplace ...
Writing tar archive for host jpp-desktop-data, backup #150 to output
file /dumpdir/jpp-desktop-data.150.tar.gz
Out of memory during "large" request for
Has anybody tried BackupPC using a ZFS (RAIDZ) filesystem for the
pool? It's currently a Solaris thing (and Linux?) but it's going to
be in FreeBSD 7.0 and I've been playing with a VMware system with a ZFS
file system and it looks to be pretty fast. Does anybody have any
real world data?
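For reference, the pool layout I've been testing is roughly the
following (disk names are placeholders from the VMware box; untested
for production):

  zpool create tank raidz da1 da2 da3     # three-disk RAIDZ vdev
  zfs create tank/backuppc
  zfs set compression=off tank/backuppc   # BackupPC compresses its pool itself
  zfs set atime=off tank/backuppc         # skip atime writes on pool traversal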
I'm posting this to the list so people searching for FreeBSD
optimizations will find it in the archives.
I finally got around to looking at why my FreeBSD server was only
backing up at about 2.5MB/sec using tar on clients with lots of small
files.
Using my desktop (a Mac Pro) as the test
James Ward wrote:
> My understanding is that the current Tiger rsync with the -E flag
> will do everything needed to make useful backups with BackupPC? Am I
> wrong?
>
> Thanks in advance,
>
> James
>
>
>
>
The -E flag doesn't work with BackupPC - you can either use tar or
forgo the resou
Doug Smith wrote:
> I currently use BackupPC to backup 34 servers (two different backup
> servers). We have this one development machine that takes more than a
> day to backup (2100 minutes for 139 gigs on full backup). We have other
> servers with the same amount of data (some with more) that
David Relson wrote:
> Good Evening,
>
> The drive with my BackupPC files on it is generating "I/O error"
> messages and reiserfsck's recommendation is, once a drive is reporting
> errors, it's nearing the end of its life and ought to be replaced.
>
> As I'd like to preserve my old backups, I'm look
Les Mikesell wrote:
> Jason M. Kusar wrote:
>
>
>>> If you use rsync as the transport you never actually transfer unchanged
>>> files again - you only make a pass over the files comparing block
>>> checksums. This takes some time/cpu at each end but not a lot of bandwidth.
>>>
>>>
>
Jason M. Kusar wrote:
> Hi all,
>
> I'm currently using BackupPC to back up our network. However, I have
> one server that has over a terabyte of data. I am currently running a
> full backup that has been running since Friday morning. I have no idea
> how much more it has to go.
>
> My quest
Johan Ehnberg wrote:
> John Pettitt wrote:
>
>> Johan Ehnberg wrote:
>>
>>> VPNs are not a good idea in my case since they would cross over
>>> different organizations.
>>>
>> Huh? Just because it's a VPN it doesn't h
Johan Ehnberg wrote:
> VPNs are not a good idea in my case since they would cross over
> different organizations.
Huh? Just because it's a VPN it doesn't have to be wide open. A VPN
with firewall rules that only allow connections from the BackupPC server
to the rsyncd ports on the clients s
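On a Linux client that could be as simple as the iptables sketch below
(addresses and interface are invented examples):

  # allow only the BackupPC server (10.0.0.5 here) to reach rsyncd
  iptables -A INPUT -i tun0 -p tcp -s 10.0.0.5 --dport 873 -j ACCEPT
  iptables -A INPUT -i tun0 -p tcp --dport 873 -j DROP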
Craig Barratt wrote:
> Brendan writes:
>
>
>> I want to upgrade to BackupPC 3.x.y, but am not game to upgrade to
>> 3.0.0. I'd like to at least wait until the first bug fix release (3.0.1).
>>
>> Does anyone know when that is due for release?
>>
>
> When there are some bugs :).
>
> Seriou
Some stats using rsync vs using tar on a file system with big files:
Server is a FreeBSD 6.2 box, 2.93GHz Celeron with 768MB RAM, RAID10 on
a 3ware 9500S controller.
Client is a Mac Pro dual/dual Xeon 2.66GHz, 6GB RAM.
Source drive: 94GB of media files, average file size 10MB, on a 250GB
SATA-30
Evren Yurtesen wrote:
>
> Perhaps it could be a feature if checksum checks could be disabled
> altogether for situations where bandwidth is cheap but CPU time is
> expensive?
>
> Thanks,
> Evren
>
>
That option is called "tar" :-)
John
benjamin thielsen wrote:
> hi-
>
> i'm having what is probably a basic problem, but i'm not sure where
> to look next in troubleshooting. i've got a working installation,
> currently backing up 4 machines, and decided to add an archive host,
> but it's not showing up. the log file indicates
Evren Yurtesen wrote:
> BackupPC Manual mentions:
>
>
> Each file is examined by generating block checksums (default 2K blocks) on
> the receiving side (that's the BackupPC side), sending those checksums to the
> client, where the remote rsync
Evren Yurtesen wrote:
> John T. Yocum wrote:
>
>> According to the 3ware CLI, the cache is enabled.
>>
>
> I have the same problem with much slower speeds (since I don't use SATA
> or RAID, which makes things worse). My finding is that backuppc is doing a
> lot of work while checking the files.
Following the extended discussion of system benchmarks, here are some
actual numbers from a FreeBSD box - if anybody has the time to run
similar numbers on linux boxes I will happily collate the data.
John
2.93 GHz Celeron D, 768 MB RAM, FreeBSD 6.2
bonnie++ -f 0 -d . -s 3072 -n 10:10:10:1
Jason Hughes wrote:
> Evren Yurtesen wrote:
>> I am saying that it is slow. I am not complaining that it is crap. I
>> think when something is really slow, I should have the right to say it, right?
>>
>
> There is such a thing as tact. Many capable and friendly people have
> been patient with you,
Les Mikesell wrote:
> Evren Yurtesen wrote:
>
>>> Raid5 doesn't distribute disk activity - it puts the drives in
>>> lockstep and is slower than a single drive, especially on small writes
>>> where it has to do extra reads to re-compute parity on the existing data.
>>>
>> I am confused,
David Rees wrote:
> On 3/26/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
>
>> Let's hope this doesn't wrap around... as you can see load is in 0.1-0.01
>> range.
>>
>> 1 users   Load  0.12  0.05  0.01   Mar 27 07:30
>>
>> Mem:KB   REAL   VIRTUAL
Evren Yurtesen wrote:
>
>
> I know that the bottleneck is the disk. I am using a single IDE disk to
> take the backups, only 4 machines and 2 backups running at a time (if I
> am not remembering wrong).
>
> I see that it is possible to use raid to solve this problem to some
> extent but the real
Evren Yurtesen wrote:
> I am using backuppc but it is extremely slow. I narrowed it down to disk
> bottleneck. (ad2 being the backup disk). Also checked the archives of
> the mailing list and it is mentioned that this is happening because of
> too many hard links.
>
>
[snip]
The basic problem i
Have you checked that the 3ware actually has cache enabled - it has a
habit of disabling it if the battery backup is bad or missing and it
will make a *huge* difference
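From memory of the 3ware 9000-series CLI (check your tw_cli man page;
the controller/unit numbers are assumptions):

  tw_cli /c0 show              # unit list; look at the Cache column
  tw_cli /c0/u0 set cache=on   # re-enable write cache (wants a good BBU)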
John
John T. Yocum wrote:
> I'm seeing terrible backup performance on my backup servers, the speed
> has slowly degraded
It's time to build a new server. My old one (a re-purposed Celeron D
2.9GHz / 768MB FreeBSD box with a 1.5 TB RAID on a Highpoint card) has
hit a wall in both performance and capacity. gstat on FreeBSD shows
me that the Highpoint raid array is the main bottleneck (partly because
it's in
Jason B wrote:
> close to
> the same way as an incremental, except it's more useful, so to say?
>
> Incidentally, unrelated, but something that's been bugging me for a while:
> subsequent full backups hardlink to older ones that have the true copy of the
> file, correct? That means there is no
Nils Breunese (Lemonbit) wrote:
> I wrote:
>
>> I recently upgraded to BackupPC 3 and although the idea of being able
>> to run the nightly jobs, the trashClean job and dump jobs all at the
>> same time is nice it seems it's a bit too much for our server (load =
>> 10 at the moment). Can I make
Has anybody played with using wake-on-lan with BackupPC?
I'm thinking the approach would be to define $Conf{PingCmd} as a script
that sends a wake-on-LAN packet, waits a second, then does the regular ping.
Then use Pre and Post commands to disable and re-enable sleeping ...
(using pmset on OSX)
Do
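A rough sketch of such a PingCmd wrapper (untested; assumes the
wakeonlan utility and a simple host-to-MAC table you maintain yourself):

  #!/bin/sh
  # called as: $Conf{PingCmd} = '/usr/local/bin/wolping $host';
  host="$1"
  mac=$(awk -v h="$host" '$1 == h {print $2}' /etc/backuppc/wol-macs)
  [ -n "$mac" ] && wakeonlan "$mac" >/dev/null 2>&1
  sleep 2                    # give the box a moment to wake
  exec ping -c 1 "$host"     # BackupPC parses this ping's output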
Brien Dieterle wrote:
> The ".filename" stuff is called AppleDouble and I think it preserves
> metadata as well in there.
>
> Are you also getting a ton of xfer errors as below? (I did)
> /usr/bin/tar: /tmp/tar.md.Fif1QE: Cannot stat: No such file or directory
> /usr/bin/tar: /tmp/tar.md.b7XePQ: C
I'm running 3.0.0 and noticed that Mac OSX clients take way longer to
do an incremental backup than I would expect given how many files have
changed. I dug around in the logs and found that tar on the Mac
copies extended attributes as ._filename files and totally ignores the
--newer flag w
Phong Nguyen wrote:
> Hi all,
>
> I just would like to know if it is possible to make an incremental
> backup of a host every hour.
> I don't know how to set the value for $Conf{IncrPeriod} since it just
> takes a value counted in days.
> Thanks a lot
>
> Phong Nguyen
>
> Axone S.A.
> Geneva / Swi
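(Archive note: $Conf{IncrPeriod} accepts fractional days, so hourly
incrementals would look roughly like the config.pl lines below,
assuming the default hourly $Conf{WakeupSchedule} - untested:)

  $Conf{IncrPeriod} = 0.04;    # periods are in days; 0.04 day is ~1 hour
  $Conf{IncrKeepCnt} = 24;     # example: keep about a day of incrementals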
[EMAIL PROTECTED] wrote:
>
> Aloha,
>
> Is there any hope for adding bare-metal restore capabilities to
> BackupPC for Windows clients?
>
>
> Thanks,
>
> Richard
Bare metal restore is tricky. There are two things needed to make it
work: 1) a backup of all the files on the box, and 2) a toolset for
Does anybody happen to know if BackupPC copes with OSX extended
attributes?

    -E, --extended-attributes
        Apple specific option to copy extended attributes, resource
        forks, and ACLs. Requires at least Mac OS X 10.4 or suitably
        patched rsy
Paul Harmor wrote:
>
>
> I'm running a 1.3Gig Duron, 512 DDR, LVM and 2x160GB drives, on Ubuntu 6.10.
>
> I have only 2 machines (at the moment) being backed up, but every time
> the backups start, the server system slows to an UNUSABLE crawl, until
> I can, slowly, start top, and renice the 4 bac
Notes on migrating to bigger storage.
Two weeks ago I asked about migrating my BackupPC pool to bigger
storage. I got a number of responses and after some experimentation
reached the following conclusions:
Suggestions:
1) dd the filesystem then expand it on the new storage. It's fast
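In shell terms, suggestion 1 is roughly the following (untested sketch;
device names and mount point are placeholders, ext3 shown - UFS would
use growfs instead of resize2fs):

  umount /var/lib/backuppc
  dd if=/dev/sda1 of=/dev/sdb1 bs=1048576   # raw copy; hardlinks come free
  e2fsck -f /dev/sdb1                       # required before resizing
  resize2fs /dev/sdb1                       # grow to fill the larger volume
  mount /dev/sdb1 /var/lib/backuppc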
I'm about to migrate my BackupPC partition to a new raid controller
(more space and more spindles) - my current thinking is to use
dump/restore - has anybody done this - what issues did you encounter?
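For concreteness, the pipeline I have in mind is something like this
(untested, placeholder device names):

  newfs /dev/da1s1d                          # new, larger target
  mount /dev/da1s1d /newpool
  cd /newpool && dump -0a -f - /dev/da0s1d | restore -rf -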
John
Garith Dugmore wrote:
> Hi,
>
> I've found backing up any Linux server's data works using tar and ssh
> but when trying to back up a FreeBSD server it gives the following error:
>
>
[snip]
I've had good results with rsync on FreeBSD boxes - I chose rsync
because the box is remote and it minim
have any advice on best practices for larger deployments?
What do you backup on each desktop? (at home I backup everything)
We're thinking rsync over ssh for the OSX and Linux boxes and smb for
Windows - good plan?
Any reason not to start with 3.0.0 beta2?
What issues are going to bite us unexpect
Alex Schaft wrote:
> Hi,
>
> I've got a couple of laptop users that sometimes stay at the office
> the whole day. This causes BackupPC to only schedule them after hours,
> when they're obviously not around anymore
>
> Will their after-hours status be cancelled once they go home? I'd like
> the i
Paul Fox wrote:
> > Actually no - I specifically don't want to go though a tar/untar step
> > because for some reason on my FreeBSD 5.4 box BackupPC_tarCreate has a
> > memory leak that causes it to fail after a few thousand files. I was
> > looking for a way round that bug.
>
>you're probably
Carl Wilhelm Soderstrom wrote:
>On 07/19 03:02, John Pettitt wrote:
>
>
>>Has anybody written a script to take a backuppc file tree and un mangle,
>>un compress and write it as a plain file tree? I'm looking for a way to
>>restore to file system mounted on