Re: holding disk too small? -- holding disk RAID configuration

2019-12-25 Thread Gene Heskett
On Wednesday 25 December 2019 19:33:04 Jon LaBadie wrote:

> On Mon, Dec 23, 2019 at 11:51:11PM -0500, Gene Heskett wrote:
> > On Monday 23 December 2019 21:16:26 Nathan Stratton Treadway wrote:
> > > On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> > > > The first, /dev/sda contains the current operating system. This
> > > > includes /usr/dumps as a holding disk area.
>
> ...
>
> > Sounds good, so I'll try it.
>
> If the sda DLE(s) are small enough to go direct to "tape",
> define all holdings, but run sda DLEs with "holdingdisk no".
>

Some are rather gargantuan, with multiple ISOs etc., so moving the 
holding disk to an otherwise unused spindle makes the best situation by 
my reasoning.  And backup times the last 2 nights have been cut 
drastically. The question then is whether it will work that well for 2 
weeks, or a month.

> > Merry Christmas everybody.
>
> mega-dittos!

Same here, as I'm celebrating yet another instance of making the guy with 
the scythe blink. Twice now. But my non-OEM parts list is beginning to 
read like the six million dollar man's. But in the middle of all that work 
in the cath lab at Ruby in Morgantown, my driver's license expired, so I 
need to go get that fixed tomorrow. A week past a new aortic valve in my 
ticker, I feel like I ought to be good for another decade.  Great, IOW.

Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 


Re: holding disk too small? -- holding disk RAID configuration

2019-12-25 Thread Jon LaBadie
On Mon, Dec 23, 2019 at 11:51:11PM -0500, Gene Heskett wrote:
> On Monday 23 December 2019 21:16:26 Nathan Stratton Treadway wrote:
> 
> > On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> > > The first, /dev/sda contains the current operating system. This
> > > includes /usr/dumps as a holding disk area.
> > >
...
> 
> Sounds good, so I'll try it.
> 
If the sda DLE(s) are small enough to go direct to "tape",
define all holdings, but run sda DLEs with "holdingdisk no".
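
For reference, a minimal sketch of what that might look like (the dumptype
name and DLE entries here are hypothetical, for illustration only):

```
# amanda.conf: a dumptype that bypasses the holding disk
define dumptype comp-fast-direct {
    global
    compress client fast
    holdingdisk no      # this DLE streams straight to the tape device
}

# disklist: apply it only to the DLEs that live on sda
# coyote  /      comp-fast-direct
# coyote  /home  comp-user-tar     # other DLEs still use the holding disks
```

DLEs using the other dumptypes continue to spool through whichever holding
disks are defined.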


> Merry Christmas everybody.

mega-dittos!

-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)


Re: holding disk too small? -- bumpmult

2019-12-24 Thread Gene Heskett
On Tuesday 24 December 2019 10:40:39 Nathan Stratton Treadway wrote:

> On Mon, Dec 23, 2019 at 23:51:11 -0500, Gene Heskett wrote:
> > Sounds good, so I'll try it.  Also, where is the best explanation
> > for "bumpmult"? I don't seem to be getting the results I expect.
>
> I'm only aware of these parameters being explained in the amanda.conf
> man page
>
> However, did you find the "amadmin ... bumpsize" command?  You can use
> it to check the actual effect of the bump* parameters in the config
> file, which perhaps will help you get a sense of how they interrelate:
>
> =
> # su backup -c "amadmin TestBackup bumpsize"
> Current bump parameters:
>   bumppercent  20 % - minimum savings (threshold) to bump level 1 -> 2
>   bumpdays      1   - minimum days at each level
>   bumpmult      4   - threshold = disk_size * bumppercent * bumpmult**(level-1)
>
>   Bump -> To  Threshold
>      1 ->  2    20.00 %
>      2 ->  3    80.00 %
>      3 ->  4   100.00 %
>      4 ->  5   100.00 %
>      5 ->  6   100.00 %
>      6 ->  7   100.00 %
>      7 ->  8   100.00 %
>      8 ->  9   100.00 %
> =
>
>
>   Nathan

No, I wasn't aware of this tool, and it showed me why I wasn't getting 
the promotions I expected. Now I think I should see better results. Last 
night's run took something less than an hour, whereas the night before 
had many more compressed DLEs and took 4:45 to complete. The mix of 
compressed DLEs vs. straight copies (why waste the time?) is confusing the 
issue, but I don't recall a just-over-40-minute run in recent history 
either.  We'll let this "settle" for a couple of weeks and see.  Thanks, and 
have a Merry Christmas, Nathan.
>



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett


Re: holding disk too small? -- bumpmult

2019-12-24 Thread Nathan Stratton Treadway
On Mon, Dec 23, 2019 at 23:51:11 -0500, Gene Heskett wrote:
> Sounds good, so I'll try it.  Also, where is the best explanation 
> for "bumpmult"? I don't seem to be getting the results I expect.

I'm only aware of these parameters being explained in the amanda.conf man
page.

However, did you find the "amadmin ... bumpsize" command?  You can use
it to check the actual effect of the bump* parameters in the config
file, which perhaps will help you get a sense of how they interrelate:

=
# su backup -c "amadmin TestBackup bumpsize" 
Current bump parameters:
  bumppercent  20 % - minimum savings (threshold) to bump level 1 -> 2
  bumpdays      1   - minimum days at each level
  bumpmult      4   - threshold = disk_size * bumppercent * bumpmult**(level-1)

  Bump -> To  Threshold
     1 ->  2    20.00 %
     2 ->  3    80.00 %
     3 ->  4   100.00 %
     4 ->  5   100.00 %
     5 ->  6   100.00 %
     6 ->  7   100.00 %
     7 ->  8   100.00 %
     8 ->  9   100.00 %
=
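
For what it's worth, the threshold column can be reproduced from the
documented formula (a sketch, assuming the cap at 100 % that the output
implies; the parameter values are the ones shown above):

```python
# threshold(level) = bumppercent * bumpmult**(level - 1), capped at 100 %
# (with bumppercent > 0; when bumppercent is 0, Amanda uses the absolute
# bumpsize instead)
bumppercent = 20   # percent
bumpmult = 4

def bump_threshold(level: int) -> float:
    """Savings (as a % of the DLE) needed to bump from `level` to `level + 1`."""
    return min(100.0, float(bumppercent) * bumpmult ** (level - 1))

for level in range(1, 9):
    print(f"{level} -> {level + 1}  {bump_threshold(level):6.2f} %")
```

With bumpmult 4 the thresholds climb steeply: 20 %, 80 %, then pinned at
100 %, which is why promotions past level 3 rarely trigger with these
settings.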


Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: holding disk too small? -- holding disk RAID configuration

2019-12-23 Thread Gene Heskett
On Monday 23 December 2019 23:51:11 Gene Heskett wrote:

> On Monday 23 December 2019 21:16:26 Nathan Stratton Treadway wrote:
> > On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> > > The first, /dev/sda contains the current operating system. This
> > > includes /usr/dumps as a holding disk area.
> > >
> > > The next box of rust, /dev/sdb, is the previous os, kept in case I
> > > need to go get something I forgot to copy over when I first made
> > > the present install. It also contains this /user/dumps directory
> > > but currently unused as it normally isn't mounted.
> > >
> > > Wash, rinse and repeat for /dev/sdc. normally not mounted.
> >
> > [...]
> >
> > > What would be the effect of moving from a single holding area on
> > > /dev/sda as it is now operated, compared to mounting and using the
> > > holding directorys that already exist on /dev/sdb and /dev/sdc?
> > > Seems to me this
> >
> > Right... mount the sdb and sdc holding-disk filesystems, then add
> > additional holdingdisk{} definitions pointing to those directories
> > to your amanda.conf.
> >
> > > should result in less pounding on the /dev/sda seek mechanism
> > > while backing up /dev/sda as it would move those writes to a
> > > different spindle, with less total time spent seeking overall.
> > >
> > > Am I on the right track?  How does amanda determine which holding
> > > disk area to use for a given DLE in that case?
> >
> > Yes, I think that's the right track.
> >
> > I have not investigated this in depth, but as far as I know Amanda
> > doesn't have a way to notice that a particular DLE is on physical
> > device local-sda and that a particular holding-disk directory is
> > also on that same physical device, and thus choose to use a
> > different holding disk for that particular DLE.  (It does attempt to
> > spread out temporary files across the various holding-disk
> > directories -- it just presumably can't take into account the
> > physical device origin of a particular DLE when decided where to
> > send that DLEs temporary file.)
> >
> > So if you left your existing holding-disk definition as well as
> > adding the ones for sdb and sdc, about one third of the time
> > (theoretically) Amanda would end up using sda for the holding disk
> > for the
> > os-files-on-sda's DLE, and you'd end up with some thrashing.  As far
> > as I know, the only way to completely avoid that is to to remove the
> > holdingdisk section pointing to sda from the config and use only the
> > other two.
> >
> > However, as long as you are using more than two dumpers in your
> > config, I'm pretty sure that having more than two physical drives in
> > use for holding disks will still come out ahead, because there will
> > also be some thrashing between the holding-disk files for different
> > DLEs that are being backed up in parallel.  So unless the server's
> > sda DLE was a huge portion of the overall data being backed up
> > across your entire disklist, I'd guess that the occasional thrashing
> > on sda when backing up that DLE is a price worth paying to have the
> > holdingdisk activity spread across as many physical drives as
> > possible.
> >
> > (Of course it wouldn't be a bad idea to try it for a dumpcycle with
> > three holding-disk drives and then comment out the entry for the
> > holding disk on sda and try that for a few runs at least and see how
> > the performance compares in reality on your actual installation...)
> >
> >
> > Nathan
>
> Sounds good, so I'll try it.

Except when I mounted sdc, it turned out to be the old 1T 
for /amandatapes, and it's too close to launch time to go through all the 
formatting. So we'll try with one holding disk, removing /dev/sda.
 
> Also, where is the best explanation 
> for "bumpmult"? I don't seem to be getting the results I expect.
>
> Merry Christmas everybody.
>
>
> Copyright 2019 by Maurice E. Heskett
> Cheers, Gene Heskett



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett


Re: holding disk too small? -- holding disk RAID configuration

2019-12-23 Thread Gene Heskett
On Monday 23 December 2019 21:16:26 Nathan Stratton Treadway wrote:

> On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> > The first, /dev/sda contains the current operating system. This
> > includes /usr/dumps as a holding disk area.
> >
> > The next box of rust, /dev/sdb, is the previous os, kept in case I
> > need to go get something I forgot to copy over when I first made the
> > present install. It also contains this /user/dumps directory but
> > currently unused as it normally isn't mounted.
> >
> > Wash, rinse and repeat for /dev/sdc. normally not mounted.
>
> [...]
>
> > What would be the effect of moving from a single holding area on
> > /dev/sda as it is now operated, compared to mounting and using the
> > holding directorys that already exist on /dev/sdb and /dev/sdc?
> > Seems to me this
>
> Right... mount the sdb and sdc holding-disk filesystems, then add
> additional holdingdisk{} definitions pointing to those directories to
> your amanda.conf.
>
> > should result in less pounding on the /dev/sda seek mechanism while
> > backing up /dev/sda as it would move those writes to a different
> > spindle, with less total time spent seeking overall.
> >
> > Am I on the right track?  How does amanda determine which holding
> > disk area to use for a given DLE in that case?
>
> Yes, I think that's the right track.
>
> I have not investigated this in depth, but as far as I know Amanda
> doesn't have a way to notice that a particular DLE is on physical
> device local-sda and that a particular holding-disk directory is also
> on that same physical device, and thus choose to use a different
> holding disk for that particular DLE.  (It does attempt to spread out
> temporary files across the various holding-disk directories -- it just
> presumably can't take into account the physical device origin of a
> particular DLE when decided where to send that DLEs temporary file.)
>
> So if you left your existing holding-disk definition as well as adding
> the ones for sdb and sdc, about one third of the time (theoretically)
> Amanda would end up using sda for the holding disk for the
> os-files-on-sda's DLE, and you'd end up with some thrashing.  As far
> as I know, the only way to completely avoid that is to to remove the
> holdingdisk section pointing to sda from the config and use only the
> other two.
>
> However, as long as you are using more than two dumpers in your
> config, I'm pretty sure that having more than two physical drives in
> use for holding disks will still come out ahead, because there will
> also be some thrashing between the holding-disk files for different
> DLEs that are being backed up in parallel.  So unless the server's sda
> DLE was a huge portion of the overall data being backed up across your
> entire disklist, I'd guess that the occasional thrashing on sda when
> backing up that DLE is a price worth paying to have the holdingdisk
> activity spread across as many physical drives as possible.
>
> (Of course it wouldn't be a bad idea to try it for a dumpcycle with
> three holding-disk drives and then comment out the entry for the
> holding disk on sda and try that for a few runs at least and see how
> the performance compares in reality on your actual installation...)
>
>
>   Nathan

Sounds good, so I'll try it.  Also, where is the best explanation 
for "bumpmult"? I don't seem to be getting the results I expect.

Merry Christmas everybody.
>



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett


Re: holding disk too small? -- holding disk RAID configuration

2019-12-23 Thread Nathan Stratton Treadway
On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> The first, /dev/sda contains the current operating system. This 
> includes /usr/dumps as a holding disk area.
> 
> The next box of rust, /dev/sdb, is the previous os, kept in case I need 
> to go get something I forgot to copy over when I first made the present 
> install. It also contains this /user/dumps directory but currently 
> unused as it normally isn't mounted.
> 
> Wash, rinse and repeat for /dev/sdc. normally not mounted.

[...] 

> What would be the effect of moving from a single holding area on /dev/sda 
> as it is now operated, compared to mounting and using the holding 
> directorys that already exist on /dev/sdb and /dev/sdc? Seems to me this 

Right... mount the sdb and sdc holding-disk filesystems, then add
additional holdingdisk{} definitions pointing to those directories to
your amanda.conf.
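
A sketch of the corresponding amanda.conf additions (the mount points are
hypothetical; substitute wherever the sdb and sdc filesystems actually get
mounted):

```
holdingdisk hd_sdb {
    directory "/mnt/sdb/usr/dumps"
    use -100 mb        # use all free space except the last 100 MB
}

holdingdisk hd_sdc {
    directory "/mnt/sdc/usr/dumps"
    use -100 mb
}
```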

> should result in less pounding on the /dev/sda seek mechanism while 
> backing up /dev/sda as it would move those writes to a different 
> spindle, with less total time spent seeking overall.
> 
> Am I on the right track?  How does amanda determine which holding disk 
> area to use for a given DLE in that case?

Yes, I think that's the right track.

I have not investigated this in depth, but as far as I know Amanda
doesn't have a way to notice that a particular DLE is on physical device
local-sda and that a particular holding-disk directory is also on that
same physical device, and thus choose to use a different holding disk
for that particular DLE.  (It does attempt to spread out temporary files
across the various holding-disk directories -- it just presumably can't
take into account the physical device origin of a particular DLE when
deciding where to send that DLE's temporary file.)

So if you left your existing holding-disk definition in place as well as
adding the ones for sdb and sdc, about one third of the time (theoretically)
Amanda would end up using sda as the holding disk for the
os-files-on-sda DLE, and you'd end up with some thrashing.  As far as
I know, the only way to completely avoid that is to remove the
holdingdisk section pointing to sda from the config and use only the
other two.

However, as long as you are using more than two dumpers in your config,
I'm pretty sure that having more than two physical drives in use for
holding disks will still come out ahead, because there will also be some
thrashing between the holding-disk files for different DLEs that are
being backed up in parallel.  So unless the server's sda DLE was a huge
portion of the overall data being backed up across your entire disklist,
I'd guess that the occasional thrashing on sda when backing up that DLE
is a price worth paying to have the holdingdisk activity spread across
as many physical drives as possible.

(Of course it wouldn't be a bad idea to try it for a dumpcycle with
three holding-disk drives and then comment out the entry for the holding
disk on sda and try that for a few runs at least and see how the
performance compares in reality on your actual installation...)


Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: holding disk too small? -- holding disk RAID configuration

2019-12-22 Thread Gene Heskett
On Thursday 05 December 2019 16:16:58 Nathan Stratton Treadway wrote:

> On Tue, Dec 03, 2019 at 15:43:10 +0100, Stefan G. Weichinger wrote:
> > I consider recreating that holding disk array (currently RAID1 of 2
> > disks) as RAID0 ..
>
> Just focusing on this one aspect of your question: assuming the
> filesystem in question doesn't have anything other than the Amanda
> holding-disk area on it, I suspect you would be better off creating
> two separate filesystems, one on each underlying disk, rather than
> making them into a RAID0 array.
>
> Amanda can make use of two separate holding-disk directories in
> parallel, so you can still get twice the total holding disk size
> available in a run (compared to the current RAID1 setup), but Amanda's
> parallel accesses will probably cause less contention on the physical
> device since each filesystem is stored independently on one drive.
>
>
> (Also, if one of the drives fails the other holding disk filesystem
> will still be available, while if you are using RAID0 one drive
> failing will take out the whole array)
>
>   Nathan

I find this an interesting concept, Nathan, and would like to explore it 
further.

In my setup here, serving this machine and 4 others in my machine shop 
menagerie, I have 4 boxes of spinning rust.

The first, /dev/sda contains the current operating system. This 
includes /usr/dumps as a holding disk area.

The next box of rust, /dev/sdb, is the previous os, kept in case I need 
to go get something I forgot to copy over when I first made the present 
install. It also contains this /usr/dumps directory, but it's currently 
unused as it normally isn't mounted.

Wash, rinse, and repeat for /dev/sdc; normally not mounted.

/dev/sdd is /amandatapes, mounted full time.

(I find keeping a disk spinning results in disks that last 100,000+ hours 
with no increase in error rates. I have a 1T that had 25 bad, 
reallocated sectors the first time I checked it, at about 5k hours in 
2006; it still has the same 25 reallocated sectors today, at about 100,000 
head-flying hours.)

What would be the effect of moving from a single holding area on /dev/sda 
as it is now operated, compared to mounting and using the holding 
directories that already exist on /dev/sdb and /dev/sdc? Seems to me this 
should result in less pounding on the /dev/sda seek mechanism while 
backing up /dev/sda as it would move those writes to a different 
spindle, with less total time spent seeking overall.

Am I on the right track?  How does amanda determine which holding disk 
area to use for a given DLE in that case?

Thanks.




Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett


Re: holding disk too small?

2019-12-14 Thread Jon LaBadie
On Tue, Dec 10, 2019 at 03:27:13PM +0100, Stefan G. Weichinger wrote:
> Am 05.12.19 um 21:47 schrieb Stefan G. Weichinger:
> > Am 03.12.19 um 15:43 schrieb Stefan G. Weichinger:
> >>
> >> Another naive question:
> >>
> >> Does the holdingdisk have to be bigger than the size of one tape?
> > 
> > As there were multiple replies to my original posting and as I am way
> > too busy right now: a quick "thanks" to all the people who replied.
> > 
> > So far the setup works. Maybe not optimal, but it works.
> > 
> > ;-)
> > 
> > stay tuned ...
> 
> Now an additional obstacle:
> 
> one DLE (a Veeam Backup Dir, so I don't want to split it via "exclude"
> or so) is larger than (a) one tape and (b) the holding disk.
> 
> DLE = 2.9 TB
> holding disk = 2 TB
> one tape = 2.4 TB (LTO6)
> 
> It seems that the tape device doesn't support LEOM ...
> 
> Amdump dumps the DLE directly to tape, fills it and fails with
> 
> " lev 0  partial taper: No space left on device, splitting not enabled
> "
> 
> I am unsure how to set LEOM within:
> 
> define device lto6_drive {
> tapedev "tape:/dev/nst0"
> #device-property "BLOCK_SIZE" "2048K"
> device-property "LEOM" "false"
> }
> 
> define changer robot {
>   tpchanger "chg-robot:/dev/sg4"
>   #property "tape-device" "0=tape:/dev/nst0"
>   property "tape-device" "0=lto6_drive"
>   property "eject-before-unload" "yes"
>   property "use-slots" "1-8"
> }
> 
> 
> ... makes amcheck happy.
> 
> additional for your checking eyes:
> 
> define tapetype LTO6 {
> comment "Created by amtapetype; compression enabled"
> length 244352 kbytes
> filemark 868 kbytes
> speed 157758 kps
> blocksize 2048 kbytes
> 
>   part_size 100G
>   part_cache_type memory
>   part_cache_max_size 8G # use roughly the amount of free RAM on your 
> system
> }
> 
> 
> We have 32 GB RAM in there so this should work?

Perhaps my lack of using large devices is causing me to
miss something, but I don't see how.

You are writing 100GB "parts" directly to tape.  At some
point, the tape fills while writing one of these parts.
To repeat that part on a second tape, the 100GB of the
failed part must be saved somewhere.  Certainly not in
memory!  Can the holding disk be used to "cache" the
parts?
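
As far as I can tell from the amanda.conf man page (worth double-checking
for your version), the part cache can indeed live on disk rather than in
memory; a hedged sketch, with a hypothetical cache directory that must
have at least part_size free:

```
define tapetype LTO6 {
    # length / filemark / speed / blocksize as measured by amtapetype
    part_size 100G
    part_cache_type disk              # spool each part to disk, not RAM
    part_cache_dir "/mnt/partcache"   # hypothetical path; needs >= 100G free
}
```

That sidesteps the "certainly not in memory" problem, at the cost of disk
bandwidth while taping.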

Are you sure you can't just plug in another 2TB USB drive
as a second holding disk?

BTW you do have "runtapes" > 1 correct?

Jon
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)


Re: holding disk too small?

2019-12-10 Thread Debra S Baddorf



> On Dec 10, 2019, at 8:27 AM, Stefan G. Weichinger  wrote:
> 
> Am 05.12.19 um 21:47 schrieb Stefan G. Weichinger:
>> Am 03.12.19 um 15:43 schrieb Stefan G. Weichinger:
>>> 
>>> Another naive question:
>>> 
>>> Does the holdingdisk have to be bigger than the size of one tape?
>> 
>> As there were multiple replies to my original posting and as I am way
>> too busy right now: a quick "thanks" to all the people who replied.
>> 
>> So far the setup works. Maybe not optimal, but it works.
>> 
>> ;-)
>> 
>> stay tuned ...
> 
> Now an additional obstacle:
> 
> one DLE (a Veeam Backup Dir, so I don't want to split it via "exclude"
> or so) is larger than (a) one tape and (b) the holding disk.
> 
> DLE = 2.9 TB
> holding disk = 2 TB
> one tape = 2.4 TB (LTO6)
> 
> It seems that the tape device doesn't support LEOM ...
> 
> Amdump dumps the DLE directly to tape, fills it and fails with
> 
> " lev 0  partial taper: No space left on device, splitting not enabled
> "
> 
> I am unsure how to set LEOM within:
> 
> define device lto6_drive {
>tapedev "tape:/dev/nst0"
>#device-property "BLOCK_SIZE" "2048K"
>device-property "LEOM" "false"
> }
> 
> define changer robot {
>   tpchanger "chg-robot:/dev/sg4"
>   #property "tape-device" "0=tape:/dev/nst0"
>   property "tape-device" "0=lto6_drive"
>   property "eject-before-unload" "yes"
>   property "use-slots" "1-8"
> }
> 
> 
> ... makes amcheck happy.
> 
> additional for your checking eyes:
> 
> define tapetype LTO6 {
>comment "Created by amtapetype; compression enabled"
>length 244352 kbytes
>filemark 868 kbytes
>speed 157758 kps
>blocksize 2048 kbytes
> 
>   part_size 100G
>   part_cache_type memory
>   part_cache_max_size 8G # use roughly the amount of free RAM on your 
> system
> }
> 
> 
> We have 32 GB RAM in there so this should work?


Except that clearly (as you say) it ISN'T working. I was going to talk
about "splitsize" and "allow-split", but the comments in the config file
say these default to YES (allow) and not-used (splitsize). But either you
or Amanda needs to split this DLE, since it doesn't fit onto a tape.
Sounds like Amanda will split it by default. So...

I don't have LTO6 (LTO5 here), BUT if it isn't working as is, I would set
the tape length to 23... (all the rest) and see if that makes it work.
If yes, then gradually increase that param until it stops working.

However, I do seem to recall that Amanda will keep trying to go further
and further on the tape until it actually reaches the end, and fails. If
so, changing the tape length won't help, if Amanda is going to keep
testing the ice for itself. If there is only ONE DLE on the tape, then
failing and trying again is senseless. I wonder if there is some new
"force split" parameter(s) that might help? Or look up "chunking"?

Maybe this will stir up ideas in somebody else?

Deb Baddorf
Fermilab







Re: holding disk too small?

2019-12-10 Thread Stefan G. Weichinger
Am 05.12.19 um 21:47 schrieb Stefan G. Weichinger:
> Am 03.12.19 um 15:43 schrieb Stefan G. Weichinger:
>>
>> Another naive question:
>>
>> Does the holdingdisk have to be bigger than the size of one tape?
> 
> As there were multiple replies to my original posting and as I am way
> too busy right now: a quick "thanks" to all the people who replied.
> 
> So far the setup works. Maybe not optimal, but it works.
> 
> ;-)
> 
> stay tuned ...

Now an additional obstacle:

one DLE (a Veeam Backup Dir, so I don't want to split it via "exclude"
or so) is larger than (a) one tape and (b) the holding disk.

DLE = 2.9 TB
holding disk = 2 TB
one tape = 2.4 TB (LTO6)

It seems that the tape device doesn't support LEOM ...

Amdump dumps the DLE directly to tape, fills it and fails with

" lev 0  partial taper: No space left on device, splitting not enabled
"

I am unsure how to set LEOM within:

define device lto6_drive {
tapedev "tape:/dev/nst0"
#device-property "BLOCK_SIZE" "2048K"
device-property "LEOM" "false"
}

define changer robot {
tpchanger "chg-robot:/dev/sg4"
#property "tape-device" "0=tape:/dev/nst0"
property "tape-device" "0=lto6_drive"
property "eject-before-unload" "yes"
property "use-slots" "1-8"
}


... makes amcheck happy.

additional for your checking eyes:

define tapetype LTO6 {
comment "Created by amtapetype; compression enabled"
length 244352 kbytes
filemark 868 kbytes
speed 157758 kps
blocksize 2048 kbytes

part_size 100G
part_cache_type memory
part_cache_max_size 8G # use roughly the amount of free RAM on your 
system
}


We have 32 GB RAM in there so this should work?


Re: holding disk too small? -- holding disk RAID configuration

2019-12-05 Thread Nathan Stratton Treadway
On Tue, Dec 03, 2019 at 15:43:10 +0100, Stefan G. Weichinger wrote:
> I consider recreating that holding disk array (currently RAID1 of 2
> disks) as RAID0 ..

Just focusing on this one aspect of your question: assuming the
filesystem in question doesn't have anything other than the Amanda
holding-disk area on it, I suspect you would be better off creating two
separate filesystems, one on each underlying disk, rather than making
them into a RAID0 array.

Amanda can make use of two separate holding-disk directories in
parallel, so you can still get twice the total holding disk size
available in a run (compared to the current RAID1 setup), but Amanda's
parallel accesses will probably cause less contention on the physical
device since each filesystem is stored independently on one drive.
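
In amanda.conf that would just be two holdingdisk sections, one per drive
(directory names hypothetical):

```
holdingdisk hd1 {
    directory "/holding1"    # filesystem on the first drive
    use -100 mb              # leave a little headroom on each
}

holdingdisk hd2 {
    directory "/holding2"    # filesystem on the second drive
    use -100 mb
}
```

Amanda will then distribute the temporary dump files of concurrent dumpers
across both directories.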


(Also, if one of the drives fails the other holding disk filesystem will
still be available, while if you are using RAID0 one drive failing will
take out the whole array)

Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: holding disk too small?

2019-12-05 Thread Stefan G. Weichinger
Am 03.12.19 um 15:43 schrieb Stefan G. Weichinger:
> 
> Another naive question:
> 
> Does the holdingdisk have to be bigger than the size of one tape?

As there were multiple replies to my original posting and as I am way
too busy right now: a quick "thanks" to all the people who replied.

So far the setup works. Maybe not optimal, but it works.

;-)

stay tuned ...


Re: holding disk too small?

2019-12-05 Thread Gene Heskett
On Thursday 05 December 2019 10:50:34 Charles Curley wrote:
And I replied back on the list where this belongs, even if some of it is 
me blowing my own horn.

> On Thu, 5 Dec 2019 00:00:24 -0500
>
> Gene Heskett  wrote:
> > Lesson #2, I just learned today that the raspbian AND debian buster
> > 10.2 versions have NO inetd or xinetd. Ditto for RH.
>
> I don't know where you get that idea, as far as Debian goes.
>
That list is where I got that info, and it mentioned that RH was doing it 
too, so I checked my only buster install, which did not yet belong to my 
amanda setup, and discovered both were missing on my rpi4/buster 10.2 
raspbian install.  But as you saw from my previous post this morning, 
apt now calls in some BSD stuff which I assume 
installs /etc/xinetd.d/amanda, which itself has a new option I've not 
seen before. It was not there before I had apt install the client stuff.

Because I had played with debian's buster arm64 installs on both the pi3 
and the pi4, I know for a fact that touching those clients from the 
server crashes the arm64 installs, leaving nothing in the logs.  I 
liked the idea of debian's arm64 actually using grub to boot instead of 
the u-boot BS, but debian's amanda versions of the client stuff are 
instant crashers.  Between that and the relatively poor latency 
performance of the arm64 with its bigger stack frame, I reasoned that 
armhf was the install of choice; raspbian was still on armhf, and 
it's running beautifully, dead stable, moving that bigger lathe faster 
and sweeter than the pi3 ever did.  And building its own food on itself.  
The rpi4 has arrived, IOW.  The only thing I'd do differently is order the 
4GB model. A 2GB needs close to 3 gigs of swap to build LinuxCNC, but it 
does it just fine. Swap is not on the u-sd card, but on a 120-gig SSD 
plugged into a sata<->usb3 adapter, making it much faster than spinning 
rust...

Since I'm just barely doing email on a machine pulled out of the 
midden heap in the garage, and this is the boot drive I'll 
install in the new server when the rest of it arrives, I've not gone any 
further until the new system is up and running. With the realtime kernel 
pinned, uptime is now 13 days, and will probably run till the next power 
bump.

Anyway, that's the story and I'm sticking to it. You can download that 
bleeding-edge rpi4 stuff from my web page, but as that's on this drive, 
in this temp machine, it will be like watching paint dry and may die 
mid-download if the OOM killer gets it.

Time to go see what I'm fixing us for lunch.

> root@jhegaala:~# cat /etc/debian_version
> 10.2
> root@jhegaala:~# apt-cache search inetd | grep inetd
> inetutils-inetd - internet super server
> libnl-idiag-3-200 - library for dealing with netlink sockets -
> inetdiag interface openbsd-inetd - OpenBSD Internet Superserver
> puppet-module-puppetlabs-xinetd - Puppet module for xinetd
> reconf-inetd - maintainer script for programmatic updates of
> inetd.conf rinetd - Internet TCP redirection server
> rlinetd - gruesomely over-featured inetd replacement
> update-inetd - inetd configuration file updater
> xinetd - replacement for inetd with many enhancements
> root@jhegaala:~#
>
> Indeed, amanda depends on openbsd-inetd:
>
> root@jhegaala:~# apt show amanda-common | grep inetd
>
> WARNING: apt does not have a stable CLI interface. Use with caution in
> scripts.
>
> Depends: adduser, bsd-mailx | mailx, debconf (>= 0.5) | debconf-2.0,
> openbsd-inetd | inet-superserver, update-inetd, perl (>= 5.28.0-3),
> perlapi-5.28.0, libc6 (>= 2.27), libcurl4 (>= 7.16.2), libglib2.0-0
> (>= 2.41.1), libssl1.1 (>= 1.1.0) root@jhegaala:~#
>
> I believe that we should remove the dependencies on openbsd-inetd |
> inet-superserver and update-inetd, and make those suggested, and
> encourage amanda over SSH, but that's another can of lawyers.



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 


Re: holding disk too small?

2019-12-05 Thread Charles Curley
On Thu, 5 Dec 2019 04:43:15 -0500
Gene Heskett  wrote:

> > # systemctl status amanda.socket  
> pi@rpi4:/etc $ sudo systemctl status amanda.socket
> Unit amanda.socket could not be found.

Same on Debian 10.2. Also, it appears that no Debian 10.2 package
provides amanda.service:

charles@hawk:~$ apt-file search amanda.service
charles@hawk:~$ 

So I expect amanda.service is a Fedora-ism.
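
For what it's worth, the Fedora amanda.socket shown elsewhere in this
thread uses Accept=true, which means systemd spawns a templated,
per-connection service rather than a standalone amanda.service. A sketch
of what such a unit might look like (the unit name, paths, and options
here are my assumptions, not taken from any actual package):

```
# /usr/lib/systemd/system/amanda@.service (hypothetical sketch)
[Unit]
Description=Amanda Backup Daemon (per-connection instance)

[Service]
User=backup
Group=disk
ExecStart=/usr/lib/amanda/amandad -auth=bsdtcp amdump amindexd amidxtaped
StandardInput=socket
StandardError=journal
```

With Accept=true, systemd hands each incoming connection to a fresh
instance on its stdin, which is roughly what inetd did.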


-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/


Re: holding disk too small?

2019-12-05 Thread Charles Curley
On Thu, 5 Dec 2019 00:00:24 -0500
Gene Heskett  wrote:

> Lesson #2, I just learned today that the raspbian AND debian buster
> 10.2 versions have NO inetd or xinetd. Ditto for RH.

I don't know where you get that idea, as far as Debian goes.

root@jhegaala:~# cat /etc/debian_version 
10.2
root@jhegaala:~# apt-cache search inetd | grep inetd
inetutils-inetd - internet super server
libnl-idiag-3-200 - library for dealing with netlink sockets - inetdiag 
interface
openbsd-inetd - OpenBSD Internet Superserver
puppet-module-puppetlabs-xinetd - Puppet module for xinetd
reconf-inetd - maintainer script for programmatic updates of inetd.conf
rinetd - Internet TCP redirection server
rlinetd - gruesomely over-featured inetd replacement
update-inetd - inetd configuration file updater
xinetd - replacement for inetd with many enhancements
root@jhegaala:~# 

Indeed, amanda depends on openbsd-inetd:

root@jhegaala:~# apt show amanda-common | grep inetd

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Depends: adduser, bsd-mailx | mailx, debconf (>= 0.5) | debconf-2.0, 
openbsd-inetd | inet-superserver, update-inetd, perl (>= 5.28.0-3), 
perlapi-5.28.0, libc6 (>= 2.27), libcurl4 (>= 7.16.2), libglib2.0-0 (>= 
2.41.1), libssl1.1 (>= 1.1.0)
root@jhegaala:~# 

I believe that we should remove the dependencies on openbsd-inetd |
inet-superserver and update-inetd, and make those suggested, and
encourage amanda over SSH, but that's another can of lawyers.
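
For the SSH route mentioned above, the server-side change is mostly a
dumptype using auth "ssh"; a hedged sketch (the dumptype name and key
path are made up for illustration):

```
# amanda.conf fragment: SSH authentication (illustrative names/paths)
define dumptype ssh-tar {
    global
    program "GNUTAR"
    auth "ssh"
    ssh-keys "/var/backups/.ssh/id_rsa_amdump"  # key for the backup user
}
```

The matching public key goes into the client amanda user's
authorized_keys, and no inetd/xinetd listener is needed on the client
at all.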



Re: holding disk too small?

2019-12-05 Thread Gene Heskett
On Thursday 05 December 2019 02:12:52 Uwe Menges wrote:

> On 2019-12-05 06:00, Gene Heskett wrote:
> > Lesson #2, I just learned today that the raspbian AND debian buster
> > 10.2 versions have NO inetd or xinetd. Ditto for RH.
>
> I think that's along with other stuff moving to systemd.
> On Fedora 30, I have
>
> # systemctl status amanda.socket
pi@rpi4:/etc $ sudo systemctl status amanda.socket
Unit amanda.socket could not be found.

> ● amanda.socket - Amanda Activation Socket
>Loaded: loaded (/usr/lib/systemd/system/amanda.socket; enabled;
> vendor preset: disabled)
>Active: active (listening) since Sat 2019-11-30 14:46:46 CET; 4
> days ago Listen: [::]:10080 (Stream)
>  Accepted: 0; Connected: 0;
> Tasks: 0 (limit: 4915)
>Memory: 0B
>CGroup: /system.slice/amanda.socket
>
> Nov 30 14:46:46 lima systemd[1]: Listening on Amanda Activation
> Socket.
>
> # systemctl cat amanda.socket
> # /usr/lib/systemd/system/amanda.socket
> [Unit]
> Description=Amanda Activation Socket
>
> [Socket]
> ListenStream=10080
> Accept=true
>
> [Install]
> WantedBy=sockets.target

So I had apt install the usual suspects, since that command did nothing on 
this machine.

pi@rpi4:/etc $ sudo apt install amanda-common amanda-client
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  openbsd-inetd tcpd
Suggested packages:
  dump gnuplot smbclient
The following NEW packages will be installed:
  amanda-client amanda-common openbsd-inetd tcpd
0 upgraded, 4 newly installed, 0 to remove and 6 not upgraded.
Need to get 2,363 kB of archives.
After this operation, 9,161 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirror.pit.teraswitch.com/raspbian/raspbian buster/main 
armhf tcpd armhf 7.6.q-28 [21.5 kB]
Get:2 http://mirror.pit.teraswitch.com/raspbian/raspbian buster/main 
armhf openbsd-inetd armhf 0.20160825-4 [34.3 kB]
Get:3 http://mirror.pit.teraswitch.com/raspbian/raspbian buster/main 
armhf amanda-common armhf 1:3.5.1-2+b3 [1,889 kB]
Get:4 http://mirror.pit.teraswitch.com/raspbian/raspbian buster/main 
armhf amanda-client armhf 1:3.5.1-2+b3 [418 kB]
Fetched 2,363 kB in 3s (825 kB/s)
Preconfiguring packages ...
Selecting previously unselected package tcpd.
(Reading database ... 263218 files and directories currently installed.)
Preparing to unpack .../tcpd_7.6.q-28_armhf.deb ...
Unpacking tcpd (7.6.q-28) ...
Selecting previously unselected package openbsd-inetd.
Preparing to unpack .../openbsd-inetd_0.20160825-4_armhf.deb ...
Unpacking openbsd-inetd (0.20160825-4) ...
Selecting previously unselected package amanda-common.
Preparing to unpack .../amanda-common_1%3a3.5.1-2+b3_armhf.deb ...
Unpacking amanda-common (1:3.5.1-2+b3) ...
Selecting previously unselected package amanda-client.
Preparing to unpack .../amanda-client_1%3a3.5.1-2+b3_armhf.deb ...
Unpacking amanda-client (1:3.5.1-2+b3) ...
Setting up tcpd (7.6.q-28) ...
Setting up openbsd-inetd (0.20160825-4) ...
Created symlink /etc/systemd/system/multi-user.target.wants/inetd.service 
→ /lib/systemd/system/inetd.service.
Setting up amanda-common (1:3.5.1-2+b3) ...
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
Adding user `backup' to group `disk' ...
Adding user backup to group disk
Done.
Adding user `backup' to group `tape' ...
Adding user backup to group tape
Done.
Setting up amanda-client (1:3.5.1-2+b3) ...
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for systemd (241-7~deb10u2+rpi1) ...

Looks good, but:

pi@rpi4:/etc $ sudo systemctl status amanda.socket
Unit amanda.socket could not be found.

Clearly, the install is incomplete.  Or is it? There is now 
an /etc/xinetd.d/ directory with an amanda file that was not there before, 
and it has arguments I've not seen before:

service amanda
{
disable = no
flags   = IPv4
socket_type = stream
protocol= tcp
wait= no
user= backup
group   = disk
groups  = yes
server  = /usr/lib/amanda/amandad
server_args = -auth=bsdtcp amdump amindexd amidxtaped senddiscover
}

That last argument, senddiscover, I've not seen before. And /etc/amandahosts 
looks incomplete.
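
Since the client's amandahosts looks incomplete, here is a minimal sketch
of what a working one typically contains for bsdtcp auth; the hostname
and user names below are placeholders, not Gene's actual setup:

```
# .amandahosts on the client (host and user names are examples)
# allow the server's backup user to run dumps
server.example.com   backup   amdump
# allow amrecover restores initiated as root on the server
server.example.com   root     amindexd amidxtaped
```

Each line is "host user service ...", and the file should be owned by the
amanda user with restrictive permissions, or amandad may refuse it.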

The rest of the checking will have to wait till the new server is 
running, tomorrow (Friday, maybe). This one doesn't have the cojones to 
run amanda and kmail at the same time.


>
> Yours, Uwe

Looks like they've at least tried to fix it. But that will be quite a 
heavy load, since it has two aux SSD drives attached that contain stuff 
no one else has done (yet).  There's a reason it's called bleeding 
edge...  And I'll sure sleep better if it's backed up.

Thanks Uwe.


Re: holding disk too small?

2019-12-04 Thread Uwe Menges
On 2019-12-05 06:00, Gene Heskett wrote:
> Lesson #2, I just learned today that the raspbian AND debian buster 10.2
> versions have NO inetd or xinetd. Ditto for RH.

I think that's along with other stuff moving to systemd.
On Fedora 30, I have

# systemctl status amanda.socket
● amanda.socket - Amanda Activation Socket
   Loaded: loaded (/usr/lib/systemd/system/amanda.socket; enabled;
vendor preset: disabled)
   Active: active (listening) since Sat 2019-11-30 14:46:46 CET; 4 days ago
   Listen: [::]:10080 (Stream)
 Accepted: 0; Connected: 0;
Tasks: 0 (limit: 4915)
   Memory: 0B
   CGroup: /system.slice/amanda.socket

Nov 30 14:46:46 lima systemd[1]: Listening on Amanda Activation Socket.

# systemctl cat amanda.socket
# /usr/lib/systemd/system/amanda.socket
[Unit]
Description=Amanda Activation Socket

[Socket]
ListenStream=10080
Accept=true

[Install]
WantedBy=sockets.target


Yours, Uwe



Re: holding disk too small?

2019-12-04 Thread Gene Heskett
On Tuesday 03 December 2019 20:23:04 Olivier wrote:

> "Stefan G. Weichinger"  writes:
> > So far it works but maybe not optimal. I consider recreating that
> > holding disk array (currently RAID1 of 2 disks) as RAID0 ..
>
> Unless your backups are super critical, you may not need RAID 1 for
> holding disk. Also consider that the holding disk puts a lot of mechanical
> stress on the disk: I have seen at least one case where the disk did
> start failing and developing bad blocks in the holding disk space
> while the rest of the disk was OK.
>
> Best regards,
>
> Olivier

Lesson #1: Never ever put the holding disk area on the backup disk 
holding the vtapes; it just pounds the seek mechanism of that drive, 
leading to a potential early failure. I've always put it on the main 
drive, but that subjects the main drive to the same abuse when 
backing up the main drive itself, and I'm supposedly backing up 4 other 
machines too. TBT I will have a drive that is not in current use when 
the new machine is built, which may well be an even better solution. 
Set up that way, it should also be a measurable speed improvement.
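
Moving the holding area to an otherwise idle spindle is just a
holdingdisk block in amanda.conf pointed at that drive's mount point; a
minimal sketch (the path and sizes here are illustrative assumptions,
not Gene's actual layout):

```
# amanda.conf: holding area on a dedicated, otherwise unused drive
holdingdisk hd1 {
    directory "/mnt/holding/amanda"  # mount point of the spare spindle
    use -2 Gb       # use all free space, but keep 2 GB in reserve
    chunksize 1 Gb  # split dump images into 1 GB chunks
}
```

DLEs that should bypass it (e.g. ones living on the same physical drive)
can set "holdingdisk no" in their dumptype, as suggested earlier in the
thread.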

Lesson #2, I just learned today that the raspbian AND debian buster 10.2 
versions have NO inetd or xinetd. Ditto for RH.

So that probably explains why I can install the clients and configure 
them to be backed up, yet the clients crash about 3 seconds after the 
server first accesses them, leaving NO clues in the logs of the 
crashed machines. I've had it happen to an Armbian jessie install on a 
rock64 and an rpi3 with a 64-bit debian buster install, and I expect it 
would be the same should I try to back up the raspbian armhf install on 
the rpi4.

This machine, which I pulled out of the midden heap in the garage so I'd 
at least have email, is so crippled that I've turned amanda off until I 
can get a new machine built and this boot drive moved to it. Got the cpu 
today, might have the rest of it tomorrow.  The tower has smoke stains 
from the fire, but that won't hurt it a bit.

Anyway, we've got to figure out what to do about the missing inetd 
stuff, or it's all going to die as folks update. 



Re: holding disk too small?

2019-12-03 Thread Olivier
"Stefan G. Weichinger"  writes:

> So far it works but maybe not optimal. I consider recreating that
> holding disk array (currently RAID1 of 2 disks) as RAID0 ..

Unless your backups are super critical, you may not need RAID 1 for
holding disk. Also consider that the holding disk puts a lot of mechanical
stress on the disk: I have seen at least one case where the disk did
start failing and developing bad blocks in the holding disk space while
the rest of the disk was OK.

Best regards,

Olivier



RE: holding disk too small?

2019-12-03 Thread Cuttler, Brian R (HEALTH)
Stefan,

In order for the holding disk to be used, it has to be bigger than the largest 
DLE.
To get parallelism in dumping, it has to be large enough to hold more than one 
DLE at a time; ideally, I suppose, the number of in-parallel dumps, plus some 
more, so that you can begin spooling to tape while a new dump is being performed.

I think that a work area larger than a tape is probably overkill, but the tool 
I like to use to visualize where the bottleneck is, is amplot.

With a work area as large as yours, I think you will probably see that the work 
area is never fully utilized and that the dumping constraints are somewhere else, 
or that you can increase parallelism in dumping to shorten the overall 
amdump run time.

I don't know what the config looks like (number of clients, number and size of 
partitions being managed); at some point you will run out of CPU, or disk 
performance, or something else you can't overcome with Amanda tuning.

Best,
Brian
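
The parallelism mentioned above is governed by a handful of amanda.conf
knobs; a sketch with illustrative values (not tuned for any particular
site):

```
# amanda.conf: dump parallelism (values are examples)
inparallel 4       # run up to 4 dumpers at once, server-wide
maxdumps 2         # parallel dumps per client (also settable per dumptype)
dumporder "Ssss"   # dumper priorities, as in Stefan's config
```

amplot then shows whether the dumpers, the holding disk, or the taper is
the bottleneck during a run.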

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Stefan G. Weichinger
Sent: Tuesday, December 3, 2019 9:43 AM
To: amanda-users@amanda.org
Subject: holding disk too small?


Another naive question:

Does the holdingdisk have to be bigger than the size of one tape?

I know that it would be good, but what if not?

Right now I have a ~2TB holding disk and "runtapes 2" with an LTO6 tapetype.

That is 2.4 TB per tape.

So far it works, but maybe not optimally. I'm considering recreating that
holding disk array (currently RAID1 of 2 disks) as RAID0 ..

And sub-question:

how would you configure these parameters here:

autoflush   yes
flush-threshold-dumped  50
flush-threshold-scheduled 50
taperflush  50

I'd like to collect some files in the disk before writing to tape, but
can't collect a full tape's data ...

I assume here also "dumporder" plays a role:

dumporder "Ssss"

- thanks, Stefan



holding disk too small?

2019-12-03 Thread Stefan G. Weichinger


Another naive question:

Does the holdingdisk have to be bigger than the size of one tape?

I know that it would be good, but what if not?

Right now I have a ~2TB holding disk and "runtapes 2" with an LTO6 tapetype.

That is 2.4 TB per tape.

So far it works, but maybe not optimally. I'm considering recreating that
holding disk array (currently RAID1 of 2 disks) as RAID0 ..

And sub-question:

how would you configure these parameters here:

autoflush   yes
flush-threshold-dumped  50
flush-threshold-scheduled 50
taperflush  50

I'd like to collect some files in the disk before writing to tape, but
can't collect a full tape's data ...

I assume here also "dumporder" plays a role:

dumporder "Ssss"

- thanks, Stefan
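
For reference, the three flush parameters above are all percentages of
one tape's length, so with the ~2.4 TB LTO6 figure in this post, 50
means roughly 1.2 TB. An annotated reading of the documented semantics
(worth double-checking against amanda.conf(5)):

```
autoflush yes                  # flush leftover holding-disk dumps
                               # during regular runs
flush-threshold-dumped    50   # don't start writing a tape until
                               # dumped data reaches ~50% of a tape
flush-threshold-scheduled 50   # ... or dumped + still-expected data
                               # reaches ~50% of a tape
taperflush                50   # at end of run, leave data in the
                               # holding disk unless it exceeds ~50%
                               # of a tape (needs autoflush yes)
```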