I'm going to work for the Protein Data Bank, and we're seriously talking
about using 1 TB flash drives for backups in the not-too-distant
future. It may take a few years to get down to a reasonable price
point, however.
--jonathan
I'm beginning to investigate using external USB 2.0 / Firewire drives to
do backups of some systems. The idea is that we could keep 2-3 spare
PCs around, then if your computer is toast, we just ship it out for
repairs, plug your external drive into one of the spare PCs, and rebuild
the system
I sent this "From" the wrong address initially, apologies if you
actually get it twice :(
My experience is with amanda, but I will be taking over a Bacula
installation in about a month when I change jobs; see
http://www.bacula.org/. Has anybody used Bacula, and do you have any
comments on how it compares
Good points. For small backups, I think that removable firewire / USB
2.0 drives would be a more economical and convenient option, using
FILE-DRIVER to write vtapes onto the removable drives. I still think
DVDs are more suited to "archival" backups of a few GB where you plan to
store and dele
Jon LaBadie wrote:
On Wed, Sep 08, 2004 at 11:57:37AM -0400, Jonathan Dill wrote:
... With even more luck, you might figure out what
compression algorithm was used, for example "gzip" compressed data
almost always begins with the same character string,
but I forget what it is,
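For the record, gzip streams begin with the magic bytes 0x1f 0x8b (and bzip2 with "BZ"). A small hedged helper for guessing, where the function name and the "unknown" fallback are mine, not amanda's:

```shell
guess_compression() {   # $1 = dump image file; print a guess from the magic bytes
    case "$(head -c 2 "$1" | od -An -tx1 | tr -d ' ')" in
        1f8b) echo gzip ;;      # gzip magic bytes
        425a) echo bzip2 ;;     # "BZ" from the bzip2 magic
        *)    echo unknown ;;
    esac
}
```

For example, `guess_compression /tmp/somedump` on a gzip-compressed image should print "gzip".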
Martin Hepworth wrote:
dd the entire tape to disk, so you've got the binary to play with^W^W
inspect.
Exactly what I was thinking. Next, I'd probably use "split" to break it
down into manageable chunks and use "strings" and "od" commands such as
"od -c" and hex editors to try to make some sense
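Concretely, the sequence being described might look like the sketch below. The device name and chunk size are assumptions, and the function wrapper is just for illustration:

```shell
# First copy the tape itself, e.g.:  dd if=/dev/nst0 of=tape.img bs=32k
inspect_image() {      # $1 = raw tape image, $2 = chunk size (e.g. 100m)
    split -b "$2" "$1" chunk.     # break the image into manageable chunks
    strings chunk.aa | head       # readable text in the first chunk
    od -c chunk.aa | head         # byte-by-byte view of the same chunk
}
```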
Paul Bijnens wrote:
If split in chunks, just feed the first part; the rest is done
automatically (the name of the next part is in the header of
each holding chunk).
Hmm. That's what I thought, finding subsequent chunks might not be
working correctly then for whatever reason, but I'll have to sear
Hi folks,
I'm trying to help someone do a restore from a dump that is split into
multiple "chunks" in holding disk files. In this case, flushing to tape
first is not an option. I thought amrestore could do it, but then I
read the manpage and didn't see a way to do it.
The only way that I coul
setup that I run
that way.
--
Jonathan Dill <[EMAIL PROTECTED]>
to data that is already software compressed will use at
least as much, and probably more (and possibly a lot more) tape.
e disk devices for "dump" style backups (xfsdump, ufsdump, dump etc.)
and no users are members of that group.
t archives.
On Fri, 2004-05-14 at 17:53, Josh Welch wrote:
> I am thinking that things are dying as a result of this friggin huge file, I
> am able to restore just fine from a smaller backup on a different machine. I
ys? Try Mailman
for instance, available as an RPM on multiple Linux distros.
> --
>
> >>>> unsubscribe * [EMAIL PROTECTED]
> unsubscribe: unknown list 'RCS'.
> Your request to [EMAIL PROTECTED]:
o was magically re-subscribed yesterday. Is
> there someone on this list who can explain what has happened. Also,
> I'm not able to unsubscribe either. Getting an error message back
> from Majordomo.
Gavin Henry wrote:
Sorry, forgot to put in a subject.
Yes, Fedora. Is there anything else like it for Linux?
I have periodically searched for ufsrestore-compatible software for
Linux, but have been unable to find any yet. In my case, I wanted to be
able to index dumps from a Solaris amanda
Jon LaBadie wrote:
One of the concerns I have about disk-only based backup schemes is the
total loss of data. If you encounter a 2-disk failure you lose not only
your most recent, but all your backups. If a tape drive fails the data
can be read on another drive. If a single tape goes bad, that i
Which reminds me...If cost is a factor, now that FILE-DRIVER is an
option, RAID or removable hard drives may give you a better $/GB ratio
than tapes, and much more capacity than CD-R. I think this is a very
good option for a single computer or small network like Justin described
in his original e-
Justin Gombos wrote:
First of all, I did not know whether this was an issue, that's why I
posted here. It was certainly proper to raise an Amanda
issue/question to the amanda-users mailing list.
Asking, "Can amanda do X?" is one thing, but to complain of "an absurd
limitation" is, frankly, ins
Use FILE-DRIVER and wait until you have enough files to fill up the
CD-R. Or use CD-RW as your "tapes" and keep several that you can rotate
and re-use.
Oh yes, we have designed amanda specifically to satisfy your personal
whims, pretty please don't reject it, it will so much hurt my feelings.
nap in
the (hopefully rare) event of a 2-drive failure on the RAID-5.
On Thu, 2004-04-08 at 15:35, Galen Johnson wrote:
> Does/can amanda utilize star? This would be one of the most useful
> capabilities I can foresee for Amanda (quiet Windows users).
Here's a strategy that I implemented about a month ago that is working
pretty well so far:
1. run amdump every night to large RAID w/o tape, mix of full and incr
2. run script to separate images to amanda-incr and amanda-full
3. when amanda-full exceeds size of tape, run amflush
4. when RAID appr
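Step 2 of that strategy can be sketched in plain sh. This is my hedged reconstruction, not the author's actual script: the directory names are from the post, and it keys off the "lev N" field in the header Amanda writes at the front of each holding file:

```shell
sort_holding() {   # $1 = holding dir, $2 = dir for full dumps, $3 = dir for incrementals
    for f in "$1"/*/*; do
        [ -f "$f" ] || continue
        # the holding-file header looks like "AMANDA: FILE <date> <host> <disk> lev <n> ..."
        if head -c 512 "$f" | grep -q 'lev 0'; then
            mv "$f" "$2"/
        else
            mv "$f" "$3"/
        fi
    done
}
# e.g.:  sort_holding /snap/amanda-hd /snap/amanda-full /snap/amanda-incr
```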
the old problem.
> I checked the help in docs/FAQ but it doesn't work.
> Help please, and sorry if I'm tedious!!
> Thanks
y on NIS, I would suggest setting up some kind
of a "watchdog" to restart ypbind if it fails. In fact, I think I am
going to look into that option right now, it would fix some of the
problems/complaints that I have had, like occasional problems with
people not being able to login.
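A minimal watchdog along those lines, run from cron every few minutes; the function, the init-script path, and the use of pgrep are all assumptions on my part:

```shell
watchdog() {   # $1 = process name, $2 = restart command
    if ! pgrep -x "$1" > /dev/null; then
        $2    # process is gone; bring it back
    fi
}
# cron example (path is an assumption):
#   watchdog ypbind "/etc/init.d/ypbind restart"
```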
There are a few more things to try. First, there may be an mt command
to set the "default" compression for the drive--that will at least help
make sure if you use some new tapes, they will get started with the
correct compression. The tricky part is that some drives, such as 8mm,
use the "den
Stefan G. Weichinger wrote:
on Tuesday, 23 March 2004 at 09:28 you wrote to amanda-users:
PB> (Don't be afraid, you wont hear any noise during compilation, it's
PB> quiet and easy :-) )
Seems to get the joke-of-the-week in here.
Quiet funny ;)
[sound of crickets chirping]
I saw that in a c
Just a thought, but I would probably boot off a "rescue disk" and check
the filesystem consistency on the client with the failing estimates.
I'd probably use Knoppix just because it is user-friendly and provides a
comfortable, gui environment for poking around.
--jonathan
James Tappin wrote:
I would guess that the "ufsrestore" is making an "index" of one of the
dumps. If you don't care about interactive "amrecover" you could make a
dumptype that doesn't do "index" that should eliminate the ufsrestore
process. Running fewer dumps in parallel should help, too.
I don't know a lot ab
ther hackneyed
expression.
#!/bin/tcsh -f
set r=(/snap/amanda-hd)
set i=(/snap/amanda-incr)
cd $r
set dl=(`find . -depth -type d -print`)
cd $i
foreach d ($dl)
    if (! -d $d) then
        echo mkdir -p $d
    endif
end
cd $r
foreach d ($dl)
    set fl=(`find
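The fragment cuts off in the second loop, so I won't guess at the file-handling part. The directory-mirroring half, though, comes out like this as a plain-sh sketch (function wrapper is mine):

```shell
mirror_dirs() {   # $1 = source tree, $2 = destination tree
    # recreate every directory of $1 underneath $2
    (cd "$1" && find . -depth -type d -print) | while read -r d; do
        mkdir -p "$2/$d"
    done
}
# e.g.:  mirror_dirs /snap/amanda-hd /snap/amanda-incr
```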
On Thu, 2004-03-18 at 14:31, Joshua Baker-LePain wrote:
> It is true that it takes some wedging to make amanda work in a 'doze heavy
> environment -- that's simply not what it was designed for. As for advice
> on commercial solutions, this isn't exactly the best place to ask. ;)
If you're not
Has anyone tried the approach of only flushing full dumps and leaving
incremental dumps on disk? I think that this would have roughly the same
effect as doing full dumps out of cycle, but I have not had luck so far
with the out-of-cycle approach. I think that it should be fairly easy to
script,
Geoff Swavley wrote:
I was wondering if anyone could tell me why amanda seems to split
my filesystem into 2 "holding" files? An error has occurred so these files are
It sounds like you have other, unrelated problems, but check the setting
of "chunksize" in amanda.conf; that is usually what determines the split.
Have you taken a look around in /proc/scsi? /proc/scsi/scsi should give
you some basic information, and the subdir for your driver should give
more details, such as what transfer rate the drive is negotiated at,
for example /proc/scsi/aic7xxx/0 for an Adaptec 2940 series. Perhaps
there was a
locking, crazy as that sounds, but that was also on SGI IRIX with
Exabyte 820 "Eagle" drives. It was a pain, and there were loads of
errors, but I got most of the data off the tapes.
--jonathan
Joshua Baker-LePain wrote:
On Thu, 11 Mar 2004 at 2:21pm, Jonathan Dill wrote
I would
Joshua Baker-LePain wrote:
I'm reading some somewhat large (14-18GB) images off of AIT3 tapes, and
it's taking *forever*. Some crude calculations show it coming off the
tape at around 80 KB/s, whereas it was written out at 11701.6 KB/s. The
tapes were written in variable block size mode. Wha
Joshua Baker-LePain wrote:
I think what you mean is what files do you need in order to save the
complete current state and history of the backups, although I'm guessing
as your request was overly terse. If that's right, you need:
the config dirs (where your amanda.confs are)
the "infofile" dir
Hi Frank,
The documentation for gethostbyaddr and gethostbyname explained how each
call goes about looking up addresses. At least under Linux, there were
several opportunities to "override" the default behavior and make the
routines consult /etc/hosts first.
In my particular case, there are o
Here's a very simple solution:
1. "reserve 100" in amanda.conf (or comment out "reserve" line)
2. leave out the tape during the week (*or change dev to /no/such/tape)
3. run "amflush" before Friday backups
In this case, amanda should try to do "degraded mode" backups during the
week while there i
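The amanda.conf side of step 1 is just a one-liner; the values and the optional tapedev trick are illustrative, not a tested config:

```
reserve 100                 # keep the whole holding disk for degraded-mode dumps
# tapedev "/no/such/tape"   # optional: guarantee no tape is found mid-week
```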
I want to backup a client on a "private" network 10.160.32, but amanda
> I think that an IPv4 address has four address parts; you have only three.
heck bind first, but you can
override that by putting
order hosts, bind
in /etc/host.conf. At least that's how it works on some flavors of Linux.
Jonathan Dill wrote:
I want to backup a client on a "private" network 10.160.32, but amanda
seems to be looking for a DNS to re
I want to backup a client on a "private" network 10.160.32, but amanda
seems to be looking for a DNS to resolve the IP, and then do a reverse
lookup on the IP to get the hostname. Is there a way to do this without
setting up a DNS for 10.160.32? I wish amanda would just believe the
address th
Is anyone successfully using a mixed strategy of backups to both disk
and tape? In particular, I have a 1 TB Snap 4500 and an LTO tape
drive. I have thought about a few different ways to go about it, but
would appreciate suggestions from someone who has tried this approach.
The Snap also has
I read the last e-mail about this, but lost it, but I think I remember
the basic details.
First, I would try setting up some sort of nameservice caching on the
client and server as a work-around. Some flavors of Linux have a
caching-nameserver package that sets up the correct bind files for yo
Resolving IP address to a hostname (reverse lookup) is the part that
looks broken, check the reverse domain in the DNS i.e.
host 172.24.16.86
or
nslookup 172.24.16.86
The error says *hostname* lookup failed, not "address lookup failed."
Someone else reported a similar problem a few days ago, an
I have run into this problem before with NIS when "ypbind" crashed on
some of the clients--this has been a chronic problem for me with Linux
talking to IRIX NIS servers. Consequently, I put the amanda host IP,
amanda user and group IDs in the local files so that ypbind crashing
would not muck
If you're backing up more than one architecture, I find that it's nice
to set things up so that you can have the same path to gnutar on all of
the architectures. That way, you can run amrecover on any machine and
it will find a valid path to gnutar. Normally, I just create a symbolic
link to
Thanks, that worked:
make install prefix=/tmp/amanda-2.4.4p2/usr/local
cd /tmp/amanda-2.4.4p2
tar cvzf ../amanda-tarball.tgz .
Paul Bijnens wrote:
The prefix= can be specified to override the configure prefix
like:
make install prefix=/tmp/amanda-2.4.4p2
I vaguely recall that there is variable that you can pass to 'make' to
install to a different root, similar to what happens during building a
binary RPM, for example:
make install VARIABLE=/tmp/amanda-2.4.4p2
The result is that you end up with all of the amanda files in
/tmp/amanda-2.4.4p2/usr
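The variable being half-remembered there is most likely DESTDIR, the automake staging convention, which prepends a root without rewriting the configured paths. A self-contained demo with a stand-in Makefile (since I can't run Amanda's own tree here):

```shell
# With Amanda's tree it would just be:  make install DESTDIR=/tmp/stage
mkdir -p /tmp/destdir-demo && cd /tmp/destdir-demo
printf 'install:\n\tmkdir -p $(DESTDIR)/usr/local/sbin\n\ttouch $(DESTDIR)/usr/local/sbin/demo\n' > Makefile
make install DESTDIR=/tmp/destdir-stage
# files land under /tmp/destdir-stage/usr/local/... instead of /usr/local/...
```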
Hi Tom,
OK, I got it now: this is a charity.
If the backups aren't too big and you have a big enough holding disk,
you might consider a strategy where you do some dumps to the holding
disk. For example, I have a config where on Wednesdays, I flush the
holding disk and then do a dump with the tape
Tom Strickland wrote:
> We're on the verge of ordering a DDS drive (this week). It'll probably
> be an HP Surestore - but the question is DDS3 or DDS4? There's the
> obvious difference in capacity, but beyond that are there any other
> differences? Speed is an obvious one - any others?
Keep in mi
"Bryan S. Sampsel" wrote:
>
> That's the bitch of it...it IS resolved: via nslookup, via ping--ANYTHING but Amanda.
>
> It's bizarre. I'm getting ready to compile amanda from source to see if it's a
>problem with the rpm on the client. Rpm installs are OK sometimes--other times, I'd
>rather
Olivier Nicole wrote:
>
> >I am learning to use Amanda but it seems it has a problem, as everyone
> >knows, backing up a filesystem larger than the tape.
>
> Am I wrong? I understood that Amanda is supposed to ask for as many
> tapes during a single run, that are needed to complete the back-up.
Hello Howard,
There are two solutions that I have used in this situation:
1) If you can handle a few minutes of down time, like on a home PC or
personal workstation, you could use "ghosting" to a removable disk
drive. This is probably OK for a home PC or another situation where you
can handle l
Hi John, Mitch,
Does it matter that my dumpcycle is 6 weeks? I actually have 12 weeks
worth of tapes in the cycle, but want 2 full dumps of each disk in case
something happens to one of them.
The problem that I'm having is that the dumps are taking too long. It
looks like it would probably be
Hi guys,
Actually, as Joshua pointed out, "amadmin due" does exactly
what I want--It gives you a nice list saying which dumps are overdue,
what dumps are due today, and how many days until other dumps are due.
Forcing dumps that are overdue, due today, or that are due in a day or
two would be a
Does anyone have any scripts, or any tricks that I could use to try to
predict what full dumps are coming due?
I would like to be able to force a bunch of full dumps on the weekend,
when the total time of the run is not an issue, so that the load should
be a little lighter during the week.
Thank
Hello Nicki,
This is a "severe problem" with ext2fs that has not been patched for
large file sizes, it is not a problem for amanda. All you need to do is
define "chunksize" in your holding disk configuration to be less than 2
GB and the dump images will be split into chunks less than 2 GB for eg
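In amanda.conf that looks roughly like the fragment below; the directory path and the exact chunksize are illustrative:

```
holdingdisk hd1 {
    directory "/var/amanda/holding"   # path is an example
    chunksize 1900 mb                 # stay safely under the 2 GB ext2 limit
}
```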
I think I was the one who made the suggestion to remove ext2 dump, but I
was wrong, you don't have to do that. ./configure will find both
xfsdump and dump, and amanda will choose whichever program is
appropriate for the type of filesystem i.e. if it is XFS filesystem and
you have not specified GN
Hi Eric,
You may want to take a look through the list archives at:
http://groups.yahoo.com/group/amanda-users/
This subject has already been hashed and rehashed to death on just about
every mailing list that I subscribe to including this one.
I'm planning to migrate to SGI XFS on Linux--SGI ha
You can read the tapes without amanda using just "dd" and a restore
program. You can get some hints by looking at the first part of a dump
image.
For example I will do these commands with one of my backup tapes.
First, I have to "mt fsf" to skip over the tape header, and then I can
use just "dd
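Filling in the hedged details of that recipe: on the tape you would `mt -f /dev/nst0 fsf 1` past the header, then read with dd. On a raw image copied to disk, the same skip is a dd option. The function, the 32 KB header size (Amanda's default block size), and the assumption of a gzip-compressed GNU tar image are all mine:

```shell
read_amanda_image() {   # $1 = image of one tape file (copied off with dd)
    # skip the 32 KB Amanda header, then list the dump behind it;
    # a gzip-compressed GNU tar image is assumed here
    dd if="$1" bs=32k skip=1 2>/dev/null | tar tzf -
}
# on the tape itself:  mt -f /dev/nst0 fsf 1  first, then dd from /dev/nst0
```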
amrecover would work with xfsdump, I would be in heaven :-)
Original Message
Subject: Re: SGI's XFS: ready for production use?
Date: Fri, 11 May 2001 11:39:19 -0500
From: Eric Sandeen <[EMAIL PROTECTED]>
To: Jonathan Dill <[EMAIL PROTECTED]>
CC: "[EMAIL PROTE
"C. Chan" wrote:
> I'm experimenting with XFS now and if I run into any glitches
> I'll let the Amanda list know. I'd like know how to tell Amanda
> to use xfsdump rather than GNU tar on XFS partitions.
I think you would have to recompile amanda. I would install
xfsdump/xfsrestore and "rpm -e du
"Marcelo G. Narciso" wrote:
> | DUMP: Warning - cannot read sector 600384 of `/dev/rdsk/c0t6d0s0'
> | DUMP: Warning - cannot read sector 600400 of `/dev/rdsk/c0t6d0s0'
It looks to me like you have some bad sectors on your disk, or possibly
a disk drive that is on its way to failing, like the
Hi Ron,
I have a Brother P-touch 540 Extra that I use for making tape labels
which also does several types of barcodes. However, someone else will
have to verify whether the P-touch barcodes work with a changer or not
because my changers don't read barcodes.
I find using the P-touch a whole lot
Hi folks,
I just found out about these sg_utils which may be helpful for folks
running amanda on Linux systems, especially for debugging tapedrive and
changer problems...
http://gear.torque.net/sg/#Utilities: sg_utils and sg3_utils
--
"Jonathan F. Dill" ([EMAIL PROTECTED])
CARB Systems and Netw
This point is very important. You will have to do the equivalent of
exporting to the server with "root" enabled. In Unix this usually is an
option like "root=X" or on Linux "no_root_squash" otherwise you may not
have sufficient privileges to read the files. It may look like the
backups worked,
Alexandre Oliva wrote:
> I'd much rather use NFS than SMB. It's generally far more efficient.
> However, God only knows how much crap an NFS server running on
> MS-Windows would have to work against, so it might be that it actually
> takes longer to run.
I recommend running some I/O benchmarks e
Alexandre Oliva wrote:
> > Can amanda use who tape devices to perform a single backup?
>
> You can't use them concurrently (yet), but you can set up chg-multi to
> switch between tape drives automatically. That's what we do here.
Actually, there is a way that you can use them concurrently--You
"John R. Jackson" wrote:
>
> >I checked the /etc/passwd and /etc/group files which had user/group
> >nobody with the id of 4294967294. After I changed that ...
>
> Ummm, you changed the gid and uid of "nobody"? That was probably kind
> of rash. There are things floating around that know about
Hi Terry,
I have seen this problem with Unix computers running either Samba,
Appletalk sharing, or PCNFS. Something, possibly a misconfigured Samba,
is probably using that UID as the "nobody" UID. If you're not using
Samba for anything and it's just installed and "turned on" I think it
would be
Hi all,
What values are people using for bumpsize, bumpdays, bumpmult other than
the example values?
I have a lot of > 10 GB disks to back up and it doesn't seem efficient
to me to do e.g. an 11 GB "level 5" backup of a disk that has 13 GB of
data on it--In that situation, I think it would probabl
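For reference, a purely illustrative set of bump parameters; these numbers are assumptions to show the syntax, not recommendations:

```
bumpsize 500 mb    # bump to a higher level only if it saves at least this much
bumpdays 1         # stay at the higher level at least this many days
bumpmult 2         # double the savings threshold for each further bump
```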
I'm using a Sony TSL-11000 DDS4 autoloader under Linux without
problems. It works as an Ultra2 LVD device without problems. I think
I'm getting better throughput than 2.8 MB/s, but it's still well under
Fast/Narrow bandwidth due to the limitations of the drive. I use mtx
and random access works
Hi all,
If your real address is 210.54.131.226 on radionet.co.nz, please check
your system for w95.hybris infection as you are the source of the Snow
White message that was sent to amanda-users. I have also e-mailed the
postmaster at radionet.co.nz.
PS amanda-users server admin Todd is THE MAN
"Anthony A. D. Talltree" wrote:
> IMHO, anyone who insists on using the software that's vulnerable to such
> attacks deserves to lose.
OTOH the amanda-users list doesn't deserve to lose if someone is dumb
enough to open the attachment, gets infected, and sends all kinds of
crap back to the list.
Hi everybody,
In case you haven't heard, don't open that Snow White attachment! I'll
send more details shortly so you know this isn't a hoax...
Ryan Williams wrote:
> There are now daily spam messages about toner supplies going to the amanda
> mailing list. This is a big annoyance. Please do something to prevent such a
> thing from happening again. If needed I can provide headers and the emails
> that I received.
I'm mad about it too, bu
Original Message
Subject: enp3.unam.mx spam relay
Date: Mon, 19 Feb 2001 14:35:34 -0500
From: Jonathan Dill <[EMAIL PROTECTED]>
Organization: CARB
To: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED]
Dear administrators,
enp3.unam.mx has incorrect e-mail configuration which allo
Hi Joe,
Writing a script is a lot easier than you might think--I just did it to
label about 90 tapes using a DDS4 autoloader with 8 tape magazine, it
was just a 14 line tcsh script with almost no debugging involved. The
script is very site-specific since it was a one-off type of project, but
I a
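The author's script isn't shown, but a hedged sketch of such a labeling loop might look like the following. The changer device, config name, label scheme, and slot handling are all assumptions:

```shell
tape_label() { printf 'DAILY-%03d' "$1"; }   # label scheme is an assumption

label_magazine() {   # $1 = amanda config, $2 = first tape number, $3 = slot count
    n=$2
    s=1
    while [ "$s" -le "$3" ]; do
        mtx -f /dev/sg1 load "$s"            # changer device is an assumption
        amlabel "$1" "$(tape_label "$n")"
        mtx -f /dev/sg1 unload
        n=$((n + 1))
        s=$((s + 1))
    done
}
# e.g.:  label_magazine daily 1 8
```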
Hi Suman,
Did you check on the client to see what's in the debug files of the
/tmp/amanda directory?
Check the permissions on /usr/local/libexec/runtar--it must be owned by
root and SUID like this:
-rwsr-x---  1 root  amanda  78334 Nov 13 15:32 /usr/local/libexec/runtar
I had a proble
Mandrake Cooker has RPM and SRPM for tar 1.13.19 and you can get it on
rpmfind.net. I suggest building from the src.rpm unless you're running
Mandrake:
http://rpmfind.net/linux/RPM/cooker//cooker/SRPMS//tar-1.13.19-4mdk.src.html
I have no idea if this is "stable" but I'm going to test it out.