I think you want to add -H to the rsync command. Or just dd if=/dev/sda
of=/dev/sdb. Or just use an archive host w/ parity. I recommend making
/dev/sdb an external drive and only connecting it when doing backups.
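For what it's worth, the two routes might look roughly like this; the pool
path and device names are just placeholders, so double-check them before
running anything destructive:

  # stop BackupPC first so the pool isn't changing underneath you
  /etc/init.d/backuppc stop
  # rsync route: -a plus -H preserves the pool's hard links
  rsync -aH /var/lib/backuppc/ /mnt/external/backuppc/
  # or the raw-device route (overwrites /dev/sdb entirely!)
  dd if=/dev/sda of=/dev/sdb bs=1M
  /etc/init.d/backuppc start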
brien
YOUK Sokvantha wrote:
Dear All,
I installed Backuppc 2.1.2-2ubuntu5 on Ubuntu 5
I've already been down this road, unfortunately. It's not scenic.
You can do something with predump to do "find . -iname '*.doc' >
/tmp/filelist"
and then modify your tar command to use tar -T /tmp/filelist.
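Concretely, something along these lines; the share is assumed to be /home,
the script path is made up, and I haven't tested this exact combination:

  #!/bin/sh
  # client-side script, e.g. /usr/local/bin/make-doc-filelist (name made up);
  # paths are written relative to the share so tar -C $shareName can find them
  cd /home && find . -iname '*.doc' -print > /tmp/filelist

  # per-host config on the server:
  $Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /usr/local/bin/make-doc-filelist';
  $Conf{TarFullArgs}    = '-T /tmp/filelist';
  $Conf{TarIncrArgs}    = '--newer=$incrDate+ -T /tmp/filelist';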
Be warned, this totally messes up backuppc's notions of how
incrementals work, and how
It sounds very much like a hardware problem, perhaps slightly toasted
IDE controllers? It sounds like a commodity box; can you move all the
disks to another machine and fire it up? Oh, and go get a decent UPS! :-)
brien
Klaas Vantournhout wrote:
Dear all,
The real questions are at the
To not answer your question: why don't you just let the new machine use
the existing configs? Assuming you keep a few fulls, you'll still have
access to the old files just the same. You could also archive it if you
really wanted to preserve it as-is. Basically, what I'm saying is OS
changes
Here are some benchmarks I ran last week: I think it's important
to balance the -s size with the -n numbers so that you are
dealing with the same amount of data, otherwise caching can bite you
and you can have misleading results. Therefore, I used 10k file-size,
and adjusted the number of files
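(With bonnie++, for example, a balanced pair might look like the line
below: roughly 100k files at 10KB each matches a 1GB -s run; the
directory and exact numbers are just an illustration.)

  bonnie++ -d /var/lib/backuppc/bench -u backuppc -s 1024 -n 98:10240:10240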
I don't see mine either; I think that's normal. It wouldn't make sense to
show that type of information (# of incrementals, etc.) for an archive
host, I don't think...
brien
benjamin thielsen wrote:
hi-
i'm having what is probably a basic problem, but i'm not sure where
to look next in
Evren Yurtesen wrote:
David Rees wrote:
On 3/27/07, Les Mikesell [EMAIL PROTECTED] wrote:
Evren Yurtesen wrote:
What is the wall clock time for a run, and is it
reasonable given that it has to read through both the client and server copies?
Jason Hughes wrote:
Evren Yurtesen wrote:
Jason Hughes wrote:
That drive should be more than adequate. Mine is a 5400 RPM, 2MB-buffer
clunker. Works fine.
Are you running anything else on the backup server, besides
BackupPC? What OS? What filesystem? How many files total?
It sounds like you are describing rsync building the file list before
transferring starts, which takes a long time, and there really isn't
much you can do about that. One thing you might try is splitting your
data up with multiple RsyncShareNames. I'm thinking that might help
avoid ALRM, but
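Splitting things up is just a per-host config change; the directories below
are placeholders for wherever your data actually lives:

  $Conf{RsyncShareName} = ['/home', '/var/www', '/srv/data'];

Each share then gets its own rsync run and its own file list, so no single
pass has to hold the whole tree at once.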
Ludovic Gele wrote:
Selon Brien Dieterle [EMAIL PROTECTED]:
I don't think another instance of backuppc would work very well for a
number of reasons. However, I think you could do well with copying the
raw block device over netcat or ssh. If you are using LVM for the
backuppc data you could take a snapshot and not affect regular backups,
otherwise you
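A bare-bones sketch of the netcat route (device name, port, and destination
path are all placeholders, and you'd want BackupPC stopped or the copy taken
from an LVM snapshot so the pool isn't changing mid-copy):

  # on the receiving machine:
  nc -l -p 3333 > /backups/backuppc-pool.img
  # on the BackupPC server:
  dd if=/dev/vg0/backuppc bs=1M | nc otherhost 3333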
Have you tried escaping the spaces with a \ ? Like:
'/Application\ Data/'
Not sure if that will work, but it sounds like it's worth a shot.
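If the rsyncd.conf side keeps fighting you, another angle (untested by me
against a cygwin rsyncd) is to do the exclude from the BackupPC side, keyed
by the module name; 'cDrive' below is just a stand-in for whatever your
module is called:

  $Conf{BackupFilesExclude} = {
      'cDrive' => ['/Documents and Settings'],
  };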
brien
Jim McNamara wrote:
Hello again list!
I'm running into some trouble with excluding directories in rsyncd.conf
on a Windows machine. It seemed to handle c:\Documents and Settings
without special regard to the whitespace in the path, but I figured it
was better safe than sorry.
Unfortunately, it still grabs the entire contents of Documents and
Settings.
Peace,
Jim
On 2/28/07, Brien Dieterle [EMAIL PROTECTED]
wrote:
Have you
It sounds a lot like you've hit some cygwin/rsync/smb bugs.
From the FAQ:
Smbclient is limited to 4GB file sizes. Moreover, a bug in
smbclient
(mixing signed and unsigned 32 bit values) causes it to incorrectly
do the tar octal conversion for file sizes from 2GB-4GB.
This is going to be unsupported, I know, but for my own amusement (and
possibly yours!) can someone help me understand the ramifications of
subverting some of the tar options during the backups?
Specifically, take this scenario:
#1 full backup of / (tar -cvf - --totals -C / ./)
so tar backs
How can I get this to work? I am storing the data inside lasttime.txt
(don't ask why) :-)
$Conf{TarIncrArgs} = '--newer=`cat lasttime.txt` $fileList+';
the shell command within ` ` does not get executed, so of course this
doesn't work at all. Any ideas?
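The only workaround I've come up with so far (untested) is to push the
expansion onto the client: a tiny tar wrapper that knows about a made-up
flag, with $Conf{TarClientPath} pointed at it. Fulls stay untouched since
they never pass the flag.

  #!/bin/sh
  # hypothetical /usr/local/bin/tar-wrapper on the client: swap our made-up
  # --newer-from-lasttime flag for a real --newer=<date> read from
  # lasttime.txt, then hand everything to the real tar
  for a in "$@"; do
      shift
      if [ "$a" = "--newer-from-lasttime" ]; then
          set -- "$@" "--newer=$(cat /path/to/lasttime.txt)"
      else
          set -- "$@" "$a"
      fi
  done
  exec /bin/tar "$@"

  # per-host config: point BackupPC at the wrapper and drop the backticks
  $Conf{TarClientPath} = '/usr/local/bin/tar-wrapper';
  $Conf{TarIncrArgs}   = '--newer-from-lasttime $fileList+';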
Thanks!
Brien
Has anyone tried using mDNS/Bonjour for clients? Macs have it enabled
by default, most Linux distros have it (though not enabled), and you
can download it for Windows... Then you wouldn't have to do anything
special; they'd be normal DNS lookups-- fred.local, etc.
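If the BackupPC server itself can resolve mDNS (nss-mdns/avahi on Linux),
I'd guess a per-host line like this is all it would take; 'fred' is just
the example name:

  $Conf{ClientNameAlias} = 'fred.local';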
brien
James Kyle wrote:
Most NFS servers are pitifully slow compared to a local filesystem,
particularly when dealing with many small files. It pains me to think
about how slow that might get-- is anyone else using a non-local
filesystem?
brien
Simon Köstlin wrote:
I think TCP is a safer connection or plays that
Are you using rsync -H to preserve hard links? You may find it
unbearably memory/time/resource intensive to use rsync for this.
Since you are using lvm (assuming you have some unused space), you could
create a snapshot (lvcreate -s) and then dump the raw block device over
ssh (or nc). (dd
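Roughly like this; volume group, snapshot size, and paths are placeholders,
so test on something unimportant first:

  # snapshot the pool LV so the copy is self-consistent
  lvcreate -s -L 5G -n pool-snap /dev/vg0/backuppc
  # stream the raw snapshot over ssh
  dd if=/dev/vg0/pool-snap bs=1M | ssh otherhost 'dd of=/backups/backuppc-pool.img bs=1M'
  # drop the snapshot when the copy is done
  lvremove -f /dev/vg0/pool-snap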
It looks like the only thing open when backuppc is running and idle is
the /backuppc/log/LOG file.
So, if you symlink that dir to somewhere on another filesystem, I don't
see why you can't use automount or
maybe pre/post scripts to achieve what you want, for whatever reason :-)
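In other words, something like this while BackupPC is stopped (paths are
just examples):

  /etc/init.d/backuppc stop
  mv /backuppc/log /otherfs/backuppc-log
  ln -s /otherfs/backuppc-log /backuppc/log
  /etc/init.d/backuppc start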
brien
Roger
I'd like to run full backups at night (say, 10pm-2am), but run
incrementals every 2 hours from 6am-6pm. There doesn't seem to be any
way to do this. Unless maybe I can use a predump script to test the
time and $type and abort fulls that try to run during the day? It would
be annoying to
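Something like this is what I have in mind; the script path and hours are
made up, and it assumes a version new enough to have
$Conf{UserCmdCheckStatus} so a non-zero exit actually aborts the dump:

  #!/bin/sh
  # hypothetical /usr/local/bin/no-daytime-fulls: refuse a full between 06:00 and 18:00
  if [ "$1" = "full" ]; then
      hour=$(date +%H)
      if [ "$hour" -ge 6 ] && [ "$hour" -lt 18 ]; then
          exit 1
      fi
  fi
  exit 0

  # and in the config:
  $Conf{DumpPreUserCmd}     = '/usr/local/bin/no-daytime-fulls $type';
  $Conf{UserCmdCheckStatus} = 1;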
We have a CMS that basically stores user data in a fs structure such as
/users/a/b/abraham/. Whenever a user edits one of their own files, the
webapp will touch a file in a specific location such as
/activeUsers/abraham. We use a predump script that quickly generates a
list of recently
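In case it's useful to anyone, a stripped-down sketch of the idea (the
paths, the marker file, and the username-to-directory mapping are
simplified from what we actually run):

  #!/bin/sh
  # collect users whose touch-file changed since the last run, turn each
  # username into its /users/<a>/<b>/<name>/ directory, then move the marker
  find /activeUsers -type f -newer /var/tmp/.backuppc-lastrun -printf '%f\n' \
      | sed 's|^\(.\)\(.\).*|/users/\1/\2/&/|' > /tmp/filelist
  touch /var/tmp/.backuppc-lastrun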
You might want to just use good ole' tar in 10.4. It will preserve
resource forks (unlike rsync) by creating AppleDouble files inside the
tar archive.
You might also want to disable ACLs if by some odd chance you have them
enabled. Here is what I use with modest success:
$Conf{TarClientCmd}
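For comparison, the stock default is roughly the following; on a 10.4
client the key part is pointing $Conf{TarClientPath} at Tiger's bundled
/usr/bin/tar (this is an illustration, not necessarily exactly what I run):

  $Conf{TarClientPath} = '/usr/bin/tar';
  $Conf{TarClientCmd}  = '$sshPath -q -x -n -l root $host'
                       . ' env LC_ALL=C $tarPath -c -v -f - -C $shareName+'
                       . ' --totals';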
I have been struggling with this for a few weeks now...
Server: debian sarge: backuppc 2.1.1-2
Client: OSX Tiger 10.4.2 Server
Using the new Tiger tar, or xtar, I get the same results:
Everything transfers along just fine until it hits my netboot images.
It transfers about 6.5 gigs of a 12 gig