Depending on your platform and choice of dump/tar, you may be able to
just leave it unmounted.
>True. But one can work around that by backing up / uncompressed,
>and making sure it contains a (possibly statically linked) copy
>of gzip.
... or just write a couple of copies of a CD with gzip and whatever else
you might need.
>That's really funny after no sleep all night. ;] I think at this point
>I'd even enjoy Vogon poetry.
M-x miss-point
I can make no sense whatsoever of this statement in the context of
AMANDA (or in any other):
"An AMANDA ebuild for Gentoo will be available shortly in portage. Yay!"
>An AMANDA ebuild for Gentoo will be available shortly in portage. Yay!
You're one hoopy frood who really knows where his towel's at.
>if it is then you'll have to include that in the LD_LIBRARY_PATH. This is
>a runtime path (like $PATH, $MANPATH etc) that is used by Solaris to
>search for libs during execution.
The right way is to use -R during linking instead of LD_LIBRARY_PATH.
Amanda manages backups using whatever native tool it's configured to use.
You don't state what platform you're using, so I can't speculate as to an
ACL-aware tool for it.
Brandon makes some good points. I'll add a couple points of my own that
some may take for granted:
o For a user interface to be friendly, it has to in fact be runnable.
Window system interfaces tend not to work well on text consoles or in
environments where the user is running on a different machine.
I've had weird stuff happen to me when writing across
samba mounts -- the files would quietly disappear under the Nethood
pseudo-directory, eating up disk space while being inaccessible. Associating
the mounts with drive letters seems to avoid this.
I've never had to do this, but an approach that I'd probably pursue
would be to use Symantec/Norton Ghost to create occasional disk images
that would get xferred via Samba to a *ix filesystem. I'd try real
hard to keep important data off of the M$ machine's local disks.
>the speed of our hp surestore seems to be ok, but the amdump takes too
>much time (6 hours for 23 GB, we want to use now client fast
>compression).
How fast can you get data off of your disks? If that 23G is one
filesystem, you might consider splitting it up so that Amanda can
schedule it more flexibly.
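For example (hypothetical host, paths, and a GNU-tar dumptype along the lines of the sample comp-user-tar), splitting one big filesystem into separate disklist entries lets the planner level them across the dumpcycle instead of doing 23G in one go:

```
# disklist -- hypothetical entries: back up subtrees separately so
# Amanda can schedule each one independently
bigserver  /data/projects  comp-user-tar
bigserver  /data/home      comp-user-tar
bigserver  /data/archive   comp-user-tar
```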
>I'm just doing my home stuff here, but with a dumpcycle of 5 and a
>tapecycle of 28, I could move a considerable percentage of my tapes
>offsite, *if* I had an offsite. :)
Offsite could be as simple as your office (if you have one away from
home), your mom's house, a bank deposit box, etc.
>or adding 5 internal 36 GB drives in a RAID 0 configuration as a holding
>disk.
I see that IBM has SCSI disks available up to at least 146G. I'd
recommend plunking in two of those instead so you have room to grow.
>Is it just me, or do people not generally change that
>option and just live with it going into /usr/local???
I've always considered /opt a Sun aberration and never deliberately put
anything there. I want my systems to have as little as possible on
filesystems that an OS reinstall will wipe.
>On Thursday 08 August 2002 14:07, Adam D. Read wrote:
>>Inetd, Xinetd are not options for me to use on my solaris boxen.
>Sorry, I missed that solaris thing.
Huh? xinetd should run just fine on SunOS 5.
>What email agent are you using?
You've never heard of it, and it doesn't matter.
>The header didn't show any mimetype specs either
--cmJC7u66zC7hs+87
Content-Type: application/msword
Content-Disposition: attachment; filename="chg-scsi.doc"
Since it appeared to be encrypted, I hadn't tried to read it.
> I would love it if you all would tell me what you thought so far
Encrypting the document makes it kinda tough to read.
>> Where? I'm not even aware of a 100G disk being sold.
>Seagate and IBM sell 120 GB ATA drives. Street price is around
>$120-140. Seagate also has a 180 GB SCSI/FC-AL disk in their catalog,
With all due respect, those 120 and 180 aren't 100, and since ATA disks
are basically toys I wasn't even counting them.
>100 gigabyte hard disk is less than $200
Where? I'm not even aware of a 100G disk being sold.
>while the last check on high capacity tape drives turned up prices exceeding
>4 times that for maybe a quarter the capacity because advertised tape
>capacity is compressed capacity.
I think you're k
>What would the advantages/disadvantages be of using GNU tar
>for all your backup needs? Is it less efficient than vendor
>dump utilities? Unwanted side effects?
In general, it's significantly slower, and touches the read dates on all
your files. On the other hand, it's possible to break up a filesystem
into several smaller backups.
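If the atime-stomping is the dealbreaker, GNU tar has an --atime-preserve option that puts the access time back after reading each file (at the cost of a ctime update instead). A quick sketch on a scratch file:

```shell
set -e
d=$(mktemp -d)
echo data > "$d/f"
# --atime-preserve: restore each file's access time after tar reads it
tar --atime-preserve -cf "$d/a.tar" -C "$d" f
# List the archive to confirm the file went in
tar -tf "$d/a.tar"
```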
> Is this okay to do?
Yep, I've done it when eg. I was snowed in for a week and couldn't get
in to change the tape. It's a good argument for a large holding disk,
and this is a feature that I haven't found in other, rather expensive,
backup packages.
>It's interesting that I was unaware of this dilemma (the possible failure
>of DUMP) until it was posted on this list
It's mentioned in the second paragraph of Sun's ufsdump man page.
Despite all the FUD that's been parroted about dump over the years, by
and large it's worked just fine for most people.
>Just curious - anyone using mtx from sourceforge.net?
I couldn't get it to work on my Solaris 8 / SPARC system 8^(
>In case of a disaster I have to be able to restore a huge
>directory tree with more than 10,000 files within minutes or hours at
>most. With a paper list and tapes that I have to get a visa and fly a day
>in order to touch them this is not an option.
Is there a compelling reason why you can't print the catalogs?
There's a lot to be said for printing tape labels or case inserts that
document the contents of each tape -- or for printing each day's results
and keeping them in a binder.
> http://lwn.net/2001/0503/a/lt-dump.php3
>for what Linus has to say about it.
Which as always is handwaving to cover the fact that he's too lazy to fix it.
>But now that you mention it, I suspect that
>for Solaris we should add "-R$dir" in addition to "-L$dir". Right?
Yep. I *think* a space is necessary after the R, but could be wrong.
>Not long ago I discovered Amanda had a --with-libraries option that
>took a list of directories and stuck them in the library search path.
>Thank you all for the advice, I got through make successfully after setting
>LD_LIBRARY_PATH.
>However, after I've installed Amanda, there's a new problem. When I try
>to run any amanda executables, I get the following error:
>$ amlabel DailySet1 DailySet1-001
>ld.so.1: amlabel: fatal: libread
I suspect that you've got an ld.so path issue.
The best way to deal with that is to have the linking invocations of gcc
specify -R /where/ever/your/libs/are, with LD_LIBRARY_PATH unset.
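A quick way to see whether the runtime linker can actually resolve everything is ldd, which prints each shared object a binary needs and where it resolves to ("not found" for any it can't):

```shell
# ldd reports every shared object the runtime linker resolves for a
# binary; a missing library shows up as "not found" on its line.
ldd /bin/ls
```

Run that against the failing amlabel (rather than /bin/ls) and the library the error names should stand out immediately.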
Remember that storage vendors usually redefine meg/gig as powers of
1000 instead of the traditional powers of 1024. Taking that
into account, 96868 real-world meg would be 101573 storage-vendor meg.
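You can sanity-check that conversion in any POSIX shell:

```shell
# 1024^2-byte "real" megs -> 1000^2-byte storage-vendor megs
echo $(( 96868 * 1024 * 1024 / 1000 / 1000 ))   # prints 101573
```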
>get a cheap Adaptec PCI controller, since Adaptec is the standard in
>compatibility.
Back when I was forced to attempt to deliver services on x86 hardware, I
had various flakiness with 2940's. Less, to be sure, than with the
@#$@# Buslogic 946's that I was forced to use before, but still hassles.
>Dump bypasses the filesystem level to access the data and therefore only
>works reliably if all caches are flushed to disk. This is only guaranteed
>if the filesystem is unmounted or at least mounted read-only.
Yes, I know. I learned that 15 years ago.
>But this is not a problem of linux, it's
>Yikes! A troll!
Nope, just a naked emperor.
>Summary: "Dump was a stupid program in the first place. Leave it behind."
What it really means: "Linux is a toy system and rather than fix our
design flaws we'll play sour grapes."
>A quick search on google reveals that someone is working on this feature
>for Linux as well: http://lwn.net/2001/0308/a/snapfs.php3
It'll probably work about as well as anything else in Linux land.
>I've seen ads for the commercial and pricey backup packages from
>Syncsoft, Veritas and so on which claim no problems with live backups
>on *nix or NT. I suppose they have some way of write-locking files,
>copy to memory, then releasing the lock, but how could these utils
>work at the block rather than file level?
>As long as you don't mind altering the last access time of every file
>that is backed up.
... and as long as you don't mind waiting twice as long.
IMHO, anyone who insists on using the software that's vulnerable to such
attacks deserves to lose.
>> change the OS on my amanda server from RH Linux to Solaris 8 x86.
>Bad move :-) :-)
I've never had to set up a cron job on a SunOS 5 machine that runs every
minute, ifconfig'ing down and up the ethernet interface and re-adding
the default route. This is what I have to do on my laptop when running Linux.
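For the record, that workaround amounts to a one-line crontab entry (the script name and interface here are hypothetical):

```
# Run every minute: bounce the interface and re-add the default route
* * * * * /usr/local/sbin/bounce-iface >/dev/null 2>&1
```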
> Hopefully some backup hardware manufacturer may in future sell a
> system comprised of hot plugable disk mechanisms with little or no
> electronics and a drive bay with the supporting electronics. The
> economics of this look quite good at the moment. Any thoughts on this??
www.iomega.com
> It will get installed in (e.g.) /usr/local/bin.
This is important for two reasons:
o Other software that relies on vendor-supplied software's behavior won't break
o Vendor patches would otherwise overwrite the GNU util, breaking (in
this case) Amanda
I like to put GNU reimplementations in /u
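Keeping the GNU tools in their own prefix and pointing Amanda at them explicitly is cheap insurance. A sketch of the configure invocation (paths assumed; --with-gnutar is the option for naming the GNU tar binary outright, so a vendor patch can never swap it out from under Amanda):

```shell
./configure --prefix=/usr/local/amanda \
            --with-gnutar=/usr/local/bin/tar
```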