I do the same as you for amvault command-line invocation, i.e.
--latest-fulls --dest-storage. However I am vaulting from the vtl
directories only, not the holding disk. Without some details on your
amanda.conf I don't know if that's part of the problem, but you appear to
never get to loading
Hi all,
I have a setup where I have a holding disk and vtapes. Then I try to use
amvault to copy the latest full backups to tape.
This works if I use amvault's --fulls-only with --src-storage parameter
and use the vtapes as the source.
But if I try to vault backups that are still on the holding
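For reference, a vaulting run of the kind being discussed might be invoked like this (a sketch only: "daily" and the storage names stand in for whatever your amanda.conf actually defines):

```
amvault --fulls-only --src-storage my_vtapes --dest-storage my_lto daily
```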
...
>
> Sounds good, so I'll try it.
>
If the sda DLE(s) are small enough to go direct to "tape",
define all holdings, but run sda DLEs with "holdingdisk no".
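In amanda.conf terms, that suggestion is a dumptype override along these lines (the dumptype name here is invented; "global" is assumed to be your base dumptype, as elsewhere in this thread):

```
define dumptype direct-to-tape {
    global
    holdingdisk no
}
```

The sda DLEs would then reference this dumptype in the disklist.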
> Merry Christmas everybody.
mega-d
On Mon, Dec 23, 2019 at 23:51:11 -0500, Gene Heskett wrote:
> Sounds good, so I'll try it. Also, where is the best explanation
> for "bumpmult"? I don't seem to be getting the results I expect.
I'm only aware of these parameters being explained in the amanda.conf man
page
However, did you
On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> The first, /dev/sda contains the current operating system. This
> includes /usr/dumps as a holding disk area.
>
> The next box of rust, /dev/sdb, is the previous os, kept in case I need
> to go get something I forgot to copy over when I first made the present
> install. It also contains this /user/dumps di
Now an additional obstacle:

one DLE (a Veeam Backup Dir, so I don't want to split it via "exclude"
or so) is larger than (a) one tape and (b) the holding disk.

DLE = 2.9 TB
holding disk = 2 TB
one tape = 2.4 TB (LTO6)

It seems that the tape device doesn't support LEOM ...
Amdump dumps the DLE directly to tape, fills it and fails with
" lev 0 partial taper: No sp
On Fri, Nov 29, 2019 at 07:29:13PM +0100, Stefan G. Weichinger wrote:
> Am 27.11.19 um 21:22 schrieb Debra S Baddorf:
> >
> >
> >> On Nov 27, 2019, at 3:29 AM, Stefan G. Weichinger wrote:
> >>
> >> There could also be a separate cronjob with "amdump --no-taper" when I
> >> think about it.
> >>
On Tue, Dec 03, 2019 at 15:43:10 +0100, Stefan G. Weichinger wrote:
> I consider recreating that holding disk array (currently RAID1 of 2
> disks) as RAID0 ..
Just focusing on this one aspect of your question: assuming the
filesystem in question doesn't have anything other than the Amanda h
Am 03.12.19 um 15:43 schrieb Stefan G. Weichinger:
>
> Another naive question:
>
> Does the holdingdisk have to be bigger than the size of one tape?
As there were multiple replies to my original posting and as I am way
too busy right now: a quick "thanks" to all the people who replied.
So far
On Thursday 05 December 2019 10:50:34 Charles Curley wrote:
And I replied back on the list where this belongs, even if some of it is
me blowing my own horn.
> On Thu, 5 Dec 2019 00:00:24 -0500
>
> Gene Heskett wrote:
> > Lesson #2, I just learned today that the raspbian AND debian buster
> >
On Thu, 5 Dec 2019 04:43:15 -0500
Gene Heskett wrote:
> > # systemctl status amanda.socket
> pi@rpi4:/etc $ sudo systemctl status amanda.socket
> Unit amanda.socket could not be found.
Same on Debian 10.2. Also, it appears that no Debian 10.2 package
provides amanda.service:
charles@hawk:~$
On Thu, 5 Dec 2019 00:00:24 -0500
Gene Heskett wrote:
> Lesson #2, I just learned today that the raspbian AND debian buster
> 10.2 versions have NO inetd or xinetd. Ditto for RH.
I don't know where you get that idea, as far as Debian goes.
root@jhegaala:~# cat /etc/debian_version
10.2
On 2019-12-05 06:00, Gene Heskett wrote:
> Lesson #2, I just learned today that the raspbian AND debian buster 10.2
> versions have NO inetd or xinetd. Ditto for RH.
I think that's along with other stuff moving to systemd.
On Fedora 30, I have
# systemctl status amanda.socket
● amanda.socket -
"Stefan G. Weichinger" writes:
> So far it works but maybe not optimal. I consider recreating that
> holding disk array (currently RAID1 of 2 disks) as RAID0 ..
Unless your backups are super critical, you may not need RAID 1 for
holding disk. Also consider that holding
Stefan,
In order for the holding disk to be used it has to be bigger than the largest
DLE.
To get parallelism in dumping it has to be large enough to hold more than one
DLE at a time, ideally as many as the number of parallel dumps, plus some
more so that you can begin spooling to tape
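As a concrete sketch of that sizing advice (the path and all values below are examples only):

```
define holdingdisk hd1 {
    directory "/var/amanda/hold"  # example path
    use 300 gbytes                # should exceed the largest DLE, ideally several DLEs
    chunksize 1 gbyte
}
```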
Another naive question:
Does the holdingdisk have to be bigger than the size of one tape?
I know that it would be good, but what if not?
Right now I have ~2TB holding disk and "runtapes 2" with LTO6 tapetype.
That is 2.4 TB per tape.
So far it works but maybe not optimal.
Run the script earlier in the day than normal backups, allowing enough time
for it to finish.
Then, when the normal backup run is done, amanda will auto-magically
FLUSH that earlier run onto the vtape with the rest of them, and will then
delete it from the holding disk.
Or, run it immediately after LAST NIGHT’s backup run. The dump
would sit in h
On Wed, 27 Nov 2019 10:32:51 +0100
"Stefan G. Weichinger" wrote:
> > amvault might be worth looking at.
>
> I never understood that one ... :-(
Drat. I never did, either. I was hoping you'd figure it out and then I
could use it. :-)
--
Does anybody read signatures any more?
Am 26.11.19 um 19:58 schrieb Charles Curley:
> I wonder if this would capture complete backups? If you have all level
> 0 (total) backups, this should be fine. But if you have non-level-0
> backups, you need a way to capture and keep until the next level 0
> backup all the non-level-0 backups.
>
> I suspect amanda does not like to leave things in her holding disk. She
> puts them to tape (vtape) and then is done with them ... or she doesn't,
> and the taping is pending.
Right. AFAIK it could only be influenced indirectly by using the various
thresholds (keep stuff in holding disk until X tapes could be filled etc)
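Those thresholds are, in recent Amanda releases, the flush-threshold and taperflush parameters; an illustrative amanda.conf fragment (values are examples, expressed as a percentage of tape length):

```
flush-threshold-dumped 100     # don't start taping until a tape's worth is in holding
flush-threshold-scheduled 100  # ...counting dumps still expected to arrive
taperflush 100                 # at end of run, leave data in holding unless a full tape can be written
```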
I suspect amanda does not like to leave things in her holding disk. She puts
them to
tape (vtape) and then is done with them …. or she doesn’t, and the taping is
pending.
How about a cronjob to copy the tarballs from the vtape to another disk (NOT
t
special need again:
One DLE uses amsamba-application to dump a Windows-share, containing a
specific SQL-Server-export
That dump should go to (a) amanda's vtapes (done already) and (b)
rsynced to some remote server off-site
Now how do I keep the tarballs in the holdingdisk for syncing them
On Fri, 2018-11-16 at 09:42 -0800, Chris Miller wrote:
> >
> > bash-4.2$ amflush aequitas.tclc.org
> > Could not find any Amanda directories to flush.
>
>
>
> Does anybody have any advice?
>
# man amflush
amflush [-b] [-f] [--exact-match] [-s] [-D datestamp] [-o configoption...] config
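Given that synopsis, flushing only the run in question would look something like this (the config name "daily" is a placeholder; the datestamp matches the holding-disk directory name from the original report):

```
amflush -D 20181115124329 daily
```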
Hi Folks,
I have 194 files on my holding disk that were written as a result of "amdump
aequitas.tclc.org", but I can't manually flush them.
bash-4.2 $ ls -lv /var/amanda/hold/20181115124329/
:
-rw-------. 1 amandabackup disk 1073741824 Nov 15 15:11
aequitas.tclc.org.C__.0.1.t
Hi Folks,
I'm unclear on the timing of the flush from holding disk to vtape. Suppose I
run two backup jobs,and each uses the holding disk. When will the second job
start? Obviously, after the client has sent everything... Before the holding
disk flush starts, or after the holding disk flush
(how) can I define this behavior:
for DLEs kvm_host:/my/virt-backup/VM_*
amdump their content to amanda_server:/mnt/amhold
write it to tape but leave the files in the holdingdisk as well until
next amdump-run
I want that behavior for some DLEs only, not for the whole config.
Is there a trick?
daily`, I see a DLE indicating PARTIAL (i.e. it ran off the end of the tape), I see a few "waiting
for writing to tape", and I see a couple "waiting for holding disk space." So, in other words,
nothing can be done. If I didn't look, it would sit there forever, apparently.
Hello, Debra.
Thank you for responding to my question.
I have one more question.
If I remove partial dumps in the holding disk by running "rm -rf treename",
won't there be any problems running Amanda later?
I saw *.tmp files related the partial dumps in index directory when I
tested about th
Hello.
I have a question about cleaning up holding disk.
For cleaning up holding disk, do I have to run amflush command?
I don't want to waste a tape by flushing incomplete backup image remaining
in holding disk.
So, I searched on the internet and found this page in zmanda forum.
(
https
I wonder if one could somehow use the AMVAULT command to do this?
It serves to make a COPY of a dump tape (as I understand it).
At the very least, you could have a cron job copy the tape back *off* of the
tape, onto a spare corner of the holding disk that you had labeled
JM == Jean-Louis Martineau martin...@zmanda.com writes:
JM The main problem is that if you leave the dump in the holding disk,
JM amanda will automatically re-flush (autoflush) them on the next run.
JM There is no way to store the information about dumps that are already
JM flushed and dump
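The automatic re-flush Jean-Louis describes is governed by the autoflush parameter in amanda.conf (newer releases accept no, yes, or all):

```
autoflush yes   # flush leftover holding-disk dumps during the next amdump run
```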
a good chance that the backup image I need was just on the
holding disk, and if it hadn't been deleted then there would be no
reason to touch the tapes at all. In fact, even with LTO6 tapes, I
should still be able to fit several tapes worth of backups on the
holding disk.
Is there any way to force
My amanda server has a really large holding disk, because disk is cheap
and because lots of disk striping generally equals better write
performance.
The usual restore operation I have to do is pulling things off of last
night's backup, which involves waiting for the library to load things
line the routine to check
holding disk free space does some calculations that assume the frag size
is a multiple of 1024, and the 512 frag size found here caused it to
round everything down to zero. (That's why the amcheck message says
only 0 KB free, rather than some number that results from
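The rounding failure described above is ordinary integer-division truncation; a minimal illustration in shell (the numbers are made up and this is not Amanda's actual code):

```shell
frag_size=512          # bytes per fragment, as reported by statvfs()
free_frags=367001600   # free fragments (~175 GB at 512 bytes each)

# Buggy order of operations: 512 / 1024 truncates to 0,
# so the computed free space comes out as 0 KB.
buggy_kb=$(( (frag_size / 1024) * free_frags ))

# Multiplying before dividing preserves the value.
correct_kb=$(( frag_size * free_frags / 1024 ))

echo "buggy=${buggy_kb} KB correct=${correct_kb} KB"
```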
Thanks, Nathan,
According to Wietse Venema (with regard to compiling Postfix on Solaris with
ZFS):
There was a workaround involving setting parameters on the ZFS that didn't overload the
statvfs() call.
The fix was to build it using statvfs64().
I don't know if that is the answer
On Fri, Oct 17, 2014 at 01:08:42 -0400, Nathan Stratton Treadway wrote:
On Thu, Oct 16, 2014 at 15:58:58 -0400, Chris Hoogendyk wrote:
Is it possible that Amanda 2.5.1p3 is using some UFS specific system
level call that doesn't work for ZFS?
I had a copy of the amanda 2.6.1p1 source lying
I have an older Sun server (T5220, Solaris 10, J4200 SAS, LIB162-AIT5) that is still running but
close to being replaced.
I tried to add some holding disk space by allocating from a ZFS pool.
amcheck tells me that there is 0 KB free, but df -k tells me it has 179G free. amcheck debug makes
Subject: Amanda 2.5.1p3 does not recognize ZFS holding disk
What does /etc/fstab contain for the two partitions with the holding disks?
I've never used a zfs filesystem; does the amanda account have sufficient
permissions to create files/directories on the new holding disk directory?
Could the mount permissions be incorrect? (IE it's mounted for owner=root and
thus only root can read/write to it?) If mount has a uid=root option, it
wouldn't matter what the actual uid of the owner is inside the filesystem, as
the kernel overrides
like to hear what any
other LTO6 users are doing for a holding disk.
LTO6 isn't much faster than LTO4, AFAIR 160MB/s vs. 120 MB/s.
I am running five linux md raid5, each consisting of five 2 TB SAS
drives (7200 rpm). The five RAIDs give me five independent
spindles. This gives enough concurrency
for a holding disk. We're
currently using LTO4 drives so I can't do my own real
world benchmarking. Thanks in advance!
--Marcus
On 08/17/2011 03:35 PM, Jean-Francois Malouin wrote:
I think I got it now.
The amanda.conf used the following holding disk definition:
define holdingdisk holddisk {
directory /holddisk/charm
use -50Gb
chunksize 0
}
So I changed it to:
holdingdisk holddisk {
directory
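Piecing the truncated lines together, the fix was evidently to drop the define keyword so the holding disk is actually put to use (the body below is reconstructed from the quoted fragments):

```
holdingdisk holddisk {
    directory /holddisk/charm
    use -50Gb      # negative value: use all free space except 50 GB
    chunksize 0
}
```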
doesn't seem
to use the holding disk: it port-dumps directly to tape and if I
specify 'holdingdisk required' in the dumptype the run simply fails:
define holdingdisk holddisk {
directory /holddisk/charm
use -50Gb
chunksize 0
}
define dumptype app-amgtar-span {
global
program
* u...@3.am u...@3.am [20110817 11:54]:
It appears that you are telling amanda to use -50GB of space for your holding
disk ... why would you want a negative number?
from the man page:
use int
Default: 0 Gb. Amount of space that can be used in this holding disk
area
It appears that you are telling amanda to use -50GB of space for your holding
disk ... why would you want a negative number?
Mine is configured as:
use 17 Mb
Anyone on this?
Right now this is a show stopper for me :(
jf
* Jean-Francois Malouin jean-francois.malo
On Wednesday, August 17, 2011 12:58:46 PM u...@3.am did opine:
It appears that you are telling amanda to use -50GB of space for your
holding disk ... why would you want a negative number?
That _used_ to be (could have been changed in the last 2-3 years) how one
would specify the use of all
Am 17.08.2011 um 18:00 schrieb Jean-Francois Malouin:
* u...@3.am u...@3.am [20110817 11:54]:
It appears that you are telling amanda to use -50GB of space for your holding
disk ... why would you want a negative number?
driver: pid 9802 ruid 111 euid 111 version 3.3.0: start at Wed Aug
Hi,
I have this seemingly simple problem but I can't put my finger on
it :)
I just installed amanda-3.3.0 on a new server and amanda doesn't seem
to use the holding disk: it port-dumps directly to tape and if I
specify 'holdingdisk required' in the dumptype the run simply fails:
define
the problem re-occurs, I change the MTU to a different value: if it's
9000bytes, I change it to 1500bytes, and vice-versa.
Does anyone know why this happens? Why does Amanda become slow at reading
from the holding disk residing on an iSCSI volume?
I don't know much about iSCSI, but as you
On Wed, Sep 29, 2010 at 11:46 AM, Valeriu Mutu vm...@pcbi.upenn.edu wrote:
What do you mean by shoe-shining?
shoe-shining is when a tape drive must stop the tape repeatedly while
it buffers more data. It creates a lot of wear on the tape, and also
kills performance.
Dustin
--
Open Source
Hi,
I am using Amanda 2.6.1p2.
I'm currently using Equallogic iSCSI storage for Amanda's holding disk.
My current Amanda server has iptables disabled because this somehow affects
iSCSI multipathing, i.e. if iptables is enabled, only one path works. I have
yet to determine how to get iptables
Hi,
Sometimes, after I run a backup, amdump leaves dump images in the holding disk.
Next time 'amdump' runs, it will not be able to use the size of holding disk
specified and the warning will be printed:
NOTES:
driver: WARNING: /data3/amanda/holdingdisk/Daily1/: 880803840 KB requested
On Fri, Sep 3, 2010 at 5:57 PM, Jon LaBadie j...@jgcomp.com wrote:
Isn't a decision made before each DLE is dumped whether there is enough
holding disk for it? In that case, does the reported amount (i.e. 2.3GB
above) serve as an upper limit for the entire amdump run? If so, maybe
that part
Hi Amanda developers,
I would like to get a better understanding of how Amanda's holding disk and
dump splitting features work.
According to the documentation, to speed up backups, one could set up
holding disks where the data will be buffered before it is written to tape.
This method
well for
DLE's which can fit into the holding disk area.
Correct.
Nevertheless, for the DLE's that don't fit into the holding disk, Amanda
would use the second method known as PORT-WRITE [1]. With this method, Amanda
splits the DLE into chunks of a given size S, writes each chunk to disk one
Hi,
We're attempting to get the specifications for the holding disk of an AMANDA
server with 6GB SAS controllers that will be connected to an LTO5 drive.
We want to know whether 7200 RPM SATA drives would be sufficiently fast
for the holding disk (this will be a separate RAID unit), or do we need
On Sun, Jul 18, 2010 at 6:07 PM, Dustin J. Mitchell dus...@zmanda.com wrote:
I imagine a RAID5 SAS disk configuration with at least 3 disks would
be suitable for a holding disk with 2 LTO5 drives. Either that or a
fully-loaded MD1000 with SATA drives.
Keep in mind that your SAS and LTO5 won't talk to one another
directly, so all of that data will need to get into and out of RAM -
hopefully via DMA
:
hertz /gauss/export/users-q RESULTS MISSING
hertz /gauss/export/users-q lev 0 FAILED [can't dump required
holdingdisk]
I ran the same backup dumptype holdingdisk yes and it ran to completion but
it did not write to the holding disk but sent directly to tape.
Can you see any reason why the holding disk is not being used?
On Friday 02 July 2010, McGraw, Robert P wrote:
[...]
Any ideas why Amanda 3.1.1 will not use the holdingdisk?
Perms? It must be rw available to the user amanda is running as.
[...]
I checked and rechecked this before sending the email.
To triple check again after receiving your email. I
And amstatus shows
6 dumpers idle : no-diskspace
taper status: Idle
taper qlen: 0
network free kps: 100
On Fri, Jul 2, 2010 at 4:24 PM, Jon LaBadie j...@jgcomp.com wrote:
The status seems to say there is no holding disk space available.
You've asked to set aside 700GB, does /zvol/amanda/holdingdisk/daily
have that much space available?
Are the DLEs causing the problem larger than 700GB
On Fri, Jul 02, 2010 at 04:36:52PM -0500, Dustin J. Mitchell wrote:
On Fri, Jul 2, 2010 at 4:24 PM, Jon LaBadie j...@jgcomp.com wrote:
The status seems to say there is no holding disk space available.
You've asked to set aside 700GB, does /zvol/amanda/holdingdisk/daily
have that much
For forever, the following syntax has worked to specify
a holding disk:
holdingdisk foo {
directory /foo
# ...
}
But unlike tapetypes or other config subsections, there was no way to
define a holdingdisk that was not subsequently used -- and for some
unusually contorted Amanda configurations