Dealing with a dump too big

2004-06-21 Thread KEVIN ZEMBOWER
I'm trying to deal with a problem I've just noticed: I've completely overwritten 
my level 0 backup of a disk called admin://db/f$. This is a Samba share from an NT 
server. I think I have complete level 1 backups:
[EMAIL PROTECTED]:~ > amadmin DailySet1 find admin //db/ |fgrep //db/f$  
2004-05-20 admin //db/f$  1 DailySet109  8 OK
2004-05-21 admin //db/f$  1 DailySet111 13 OK
2004-05-24 admin //db/f$  1 DailySet112  7 OK
2004-05-25 admin //db/f$  1 DailySet114  8 OK
2004-05-26 admin //db/f$  1 DailySet115 10 OK
2004-05-27 admin //db/f$  1 DailySet116 10 OK
2004-05-28 admin //db/f$  1 DailySet117  8 OK
2004-05-31 admin //db/f$  1 DailySet118  7 OK
2004-06-01 admin //db/f$  1 DailySet119  8 OK
2004-06-02 admin //db/f$  1 DailySet121  8 OK
2004-06-04 admin //db/f$  1 DailySet123  9 OK
2004-06-07 admin //db/f$  1 DailySet124 11 OK
2004-06-08 admin //db/f$  1 DailySet125  9 OK
2004-06-09 admin //db/f$  1 DailySet126  8 OK
2004-06-10 admin //db/f$  1 DailySet127 12 OK
2004-06-11 admin //db/f$  1 DailySet128 10 OK
2004-06-14 admin //db/f$  1 DailySet101  9 OK
2004-06-15 admin //db/f$  1 DailySet102 12 OK
2004-06-16 admin //db/f$  1 DailySet104 10 OK
2004-06-17 admin //db/f$  1 DailySet105 10 OK
2004-06-18 admin //db/f$  1 --- 0 FAILED (planner) [dumps way too big, 5679359 KB, must skip incremental dumps]
2004-06-18 admin //db/f$/inetsrv  0 DailySet106 13 OK
[EMAIL PROTECTED]:~ > 

The last daily report I got, in the planner section, said:
  planner: admin //db/f$ 20040618 0 [dump larger than tape, 13855177 KB, full dump delayed]

The disk F: on the NT server is indeed 13G in size, but I didn't think that would be a 
problem, since I excluded //db/f$/inetsrv, which is 9.1G. That filesystem, which I 
backed up for the first time on the last run, backed up at level 0 just fine:
[EMAIL PROTECTED]:~ > amadmin DailySet1 info admin //db/   



Current info for admin //db/f$:
  Stats: dump rates (kps), Full:  4221.0, 4333.0, 4412.0
    Incremental:  4607.0, 4309.0, 4723.0
  compressed size, Full: -100.0%,-100.0%,-100.0%
    Incremental: -100.0%,-100.0%,-100.0%
  Dumps: lev datestmp  tape file   origK   compK secs
  0  20040514  DailySet105    15 4621248 4622030 1095
  1  20040617  DailySet105    10 5662420 5662530 1229

Current info for admin //db/f$/inetsrv:
  Stats: dump rates (kps), Full:  3642.0,  -1.0,  -1.0
    Incremental:   -1.0,  -1.0,  -1.0
  compressed size, Full: -100.0%,-100.0%,-100.0%
    Incremental: -100.0%,-100.0%,-100.0%
  Dumps: lev datestmp  tape file   origK   compK secs
  0  20040618  DailySet106    13 9174403 9193950 2524
[EMAIL PROTECTED]:~ > 

The pertinent sections of my disklist and amanda.conf files are:
[EMAIL PROTECTED]:~ > grep admin /etc/amanda/DailySet1/disklist   
admin   //db/f$ db-f-nocomp-medpri-tar-exclude-inetsrv  #DB server, Drive F: excluding \inetsrv
admin   //db/f$/inetsrv nocomp-medpri-tar   #DB server, Drive F:\inetsrv\
[EMAIL PROTECTED]:~ > 

From amanda.conf:
# Special dumptypes for excluding directories
define dumptype db-f-nocomp-medpri-tar-exclude-inetsrv {
   nocomp-medpri-tar
   comment "Special for admin//db/f$, excluding /inetsrv/"
   exclude "./inetsrv/"
}

My questions are:
1. What's the long-term solution to this problem? Have I done something wrong in the 
amanda.conf or disklist files?
2. Is there anything I can do right now, before the nightly run, to get a level 0 
backup of just this share?

Thanks for all your help and suggestions.

-Kevin Zembower

-
E. Kevin Zembower
Unix Administrator
Johns Hopkins University/Center for Communications Programs
111 Market Place, Suite 310
Baltimore, MD  21202
410-659-6139




RE: Dealing with a dump too big

2004-06-21 Thread Gavin Henry
In the UK, we would flush a dump down the toilet, but not if it's too big ;-)



Re: Dealing with a dump too big

2004-06-21 Thread Stefan G. Weichinger
Hi, Kevin,

on Monday, 21 June 2004 at 17:46 you wrote to amanda-users:

KZ> The disk F: on the NT server is indeed 13G in size, but I
KZ> didn't think that would be a problem, since I excluded
KZ> //db/f$/inetsrv, which is 9.1G. This filesystem, which I just
KZ> backup for the first time last run, backed up at level 0 just fine:

KZ> The pertinent sections of my disklist and amanda.conf files are:
[EMAIL PROTECTED]:~ >> grep admin /etc/amanda/DailySet1/disklist   
KZ> admin   //db/f$ db-f-nocomp-medpri-tar-exclude-inetsrv  #DB
KZ> server, Drive F: excluding \inetsrv
KZ> admin   //db/f$/inetsrv nocomp-medpri-tar   #DB server, Drive 
F:\inetsrv\
[EMAIL PROTECTED]:~ >> 

KZ> From amanda.conf:
KZ> # Special dumptypes for excluding directories
KZ> define dumptype db-f-nocomp-medpri-tar-exclude-inetsrv{
KZ>nocomp-medpri-tar
KZ>comment "Special for admin//db/f$, excluding /inetsrv/"
KZ>exclude "./inetsrv/"
KZ> }

KZ> My questions are:
KZ> 1. What's the long term solution to this problem? Have I done
KZ> something wrong in the amanda.conf or disklist files?

The usage of excludes is a bit different when backing up SMB shares.
Please give me the output of

amadmin DailySet1 disklist admin //db/f$

which should tell us more about how AMANDA interprets your cascading
of exclusions ...

Run this before and after you make the edit below.

You can only use ONE exclusion option with smbclient ...

AFAIK, in this case it should be:

exclude ".\inetsrv\*"

(Windows uses backslashes ...)
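
Putting that together, Kevin's dumptype with only the exclude pattern rewritten in the
backslash form would look something like this (a sketch based on his posted definition,
untested against his setup):

```
# Sketch: same dumptype as before, with the exclude rewritten in
# Windows backslash syntax -- smbclient accepts only one exclude.
define dumptype db-f-nocomp-medpri-tar-exclude-inetsrv {
   nocomp-medpri-tar
   comment "Special for admin //db/f$, excluding inetsrv"
   exclude ".\inetsrv\*"
}
```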

KZ> 2. Is there anything I can do right now, before the nightly
KZ> normal run, to get a level 0 backup of just this share?

You can do this (after editing your exclusion):

amadmin DailySet1 force admin //db/f$
amdump DailySet1 admin //db/f$

-- 
best regards,
Stefan

Stefan G. Weichinger
mailto:[EMAIL PROTECTED]






Re: Dealing with a dump too big

2004-06-21 Thread Paul Bijnens
KEVIN ZEMBOWER wrote:
> The disk F: on the NT server is indeed 13G in size, but I didn't
> think that would be a problem, since I excluded //db/f$/inetsrv,
> which is 9.1G. This filesystem, which I just backup for the first
> time last run, backed up at level 0 just fine:
There is a bug in the estimate for Samba clients.
It used to be that Amanda started smbclient -T and just flushed
the output to /dev/null for the estimate phase.
That was very, very slow.  The implementation was then changed
to use the smbclient builtin 'du' command to do the estimate.
That's very fast now.
The bug you're hitting is that the builtin 'du' command
does not know about excludes.  The estimate for the full dump
includes everything; the real run does not :-)

> My questions are:
> 1. What's the long term solution to this problem? Have I done
> something wrong in the amanda.conf or disklist files?
Jean-Louis is working on a generalized "quick-and-dirty" estimate
based on the statistics of the previous runs.  That would
help in your case.  I have no idea about the current status.
(You asked for 'long term' :-) )

> 2. Is there anything I can do right now, before the nightly normal
> run, to get a level 0 backup of just this share?
A dirty trick would be to change your tapetype and fake a larger
tape size.  As long as you don't overflow your real tape, that would
at least get around the "dump larger than tape" problem for this
particular backup image.  (But lying to Amanda is in general not
such a good idea.  Keep your eyes open for other unexpected results!)
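
For the record, that trick is just an inflated length in a tapetype definition; a
hypothetical sketch (the name and all numbers here are invented, not Kevin's real
tapetype -- substitute your own values):

```
# Hypothetical tapetype with an inflated length, so the planner
# accepts the over-estimated full dump.  Do NOT exceed what the
# real tape can actually hold once the excludes take effect.
define tapetype DLT-FAKED {
    comment "real tape is smaller; length inflated to beat the estimate bug"
    length 20000 mbytes
    filemark 2000 kbytes
    speed 1500 kbytes
}
```
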
--
Paul Bijnens, XplanationTel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  "Are you sure?"  ...   YES   ...   Phew ...   I'm out  *
***



Re: Dealing with a dump too big

2004-06-21 Thread KEVIN ZEMBOWER
Stefan, hi, thanks for your suggestions.

Here's the output before any changes are made:
[EMAIL PROTECTED]:~ > amadmin DailySet1 disklist admin //db/  


line 100:
host admin:
interface default
disk //db/f$:
program "GNUTAR"
exclude file "./inetsrv/"
priority 1
dumpcycle 3
maxdumps 1
maxpromoteday 1
strategy STANDARD
compress NONE
auth BSD
kencrypt NO
holdingdisk YES
record YES
index NO
skip-incr NO
skip-full NO

line 101:
host admin:
interface default
disk //db/f$/inetsrv:
program "GNUTAR"
priority 1
dumpcycle 3
maxdumps 1
maxpromoteday 1
strategy STANDARD
compress NONE
auth BSD
kencrypt NO
holdingdisk YES
record YES
index NO
skip-incr NO
skip-full NO

[EMAIL PROTECTED]:~ > 

[Why can't I enter 'amadmin DailySet1 disklist admin //db/f$'? I get "amadmin: no 
disk matched", unless I trim it back to "amadmin DailySet1 disklist admin //db/".]

As Paul points out, the problem is a flaw in the way estimated sizes are computed: it 
ignores the excluded files and directories.

Won't your suggestion to:
amadmin DailySet1 force admin //db/f$
amdump DailySet1 admin //db/f$

just force admin //db/f$ to do a level 0 dump, while all the rest of the backup targets 
do whatever they were scheduled to do? This normally takes 4-6 hours on my system, 
and wouldn't complete before the normally scheduled backup. I am doing 'amadmin 
DailySet1 force admin //db/f$' for tonight's backup.

Thanks, again, for your suggestions.

-Kevin









Re: Dealing with a dump too big

2004-06-21 Thread Stefan G. Weichinger
Hi, Kevin,

on Monday, 21 June 2004 at 19:19 you wrote to amanda-users:

KZ> Stefan, hi, thanks for your suggestions.

KZ> Here's the output before any changes are made:
[EMAIL PROTECTED]:~ >> amadmin DailySet1 disklist admin //db/  
KZ> 

KZ> line 100:
KZ> host admin:
KZ> interface default
KZ> disk //db/f$:
KZ> program "GNUTAR"
KZ> exclude file "./inetsrv/"

And now you modified it to the backslashes?

KZ> [Why can't I enter 'amadmin DailySet1 disklist admin
KZ> //db/f$'? I get "amadmin: no disk matched", unless I trim it back
KZ> to "amadmin DailySet1 disklist admin //db/".]

The character $ seems to be interpreted as a regex metacharacter.
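
A quick way to see the effect, using grep as a stand-in for AMANDA's disk-name
matching (grep here is only an illustration of the regex principle, not what
amadmin runs internally):

```shell
# In a regex, '$' anchors end-of-line, so 'f$' means "f at end of line"
# and never matches the literal string "//db/f$"; escaping it does.
echo '//db/f$' | grep -c 'f$'     # prints 0 (the line ends in '$', not 'f')
echo '//db/f$' | grep -c 'f\$'    # prints 1 (matches the literal "f$")
```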

KZ> As Paul points out, the problem is a flaw in the way
KZ> estimated sizes are computed: it ignores the excluded files and
KZ> directories.

This way the estimates should be bigger than the actual dumps.

KZ> Won't your suggestion to:
KZ> amadmin DailySet1 force admin //db/f$
KZ> amdump DailySet1 admin //db/f$

KZ> just force admin //db/f$ to do a level 0 dump, but all the
KZ> rest of the backup targets do whatever they were scheduled to do?
KZ> This normally takes  4-6 hours on my system, and wouldn't complete
KZ> before the normally scheduled backup. I am doing 'amadmin
KZ> DailySet1 force admin //db/f$' for tonight's backup.

The second line would dump just the DLE given, //db/f$.

SYNOPSIS
   amdump config [ host [ disk ]* ]*

Stefan.



Re: Dealing with a dump too big

2004-06-22 Thread KEVIN ZEMBOWER
Hi, Stefan,

Thanks for pointing out the option on amdump to back up just a single host or 
partition; I'd never used it, and overlooked what you were trying to tell me in 
your original response. I've been working since yesterday to implement your suggestions.

I'll modify the slashes to back-slashes today, when I'm able to make an individual run 
for just the admin //db/f$ share.

WRT the '$' metacharacter, what I actually had to run was:
amdump DailySet1 admin '//db/f\$'

Thanks, again, for your help and suggestions.

-Kevin






Re: Dealing with a dump too big

2004-06-22 Thread Stefan G. Weichinger
Hi, Kevin,

on Tuesday, 22 June 2004 at 18:17 you wrote to amanda-users:

KZ> Hi, Stefan,

KZ> Thanks for pointing out the option on amdump to just backup a
KZ> single host or partition; I never used that, and overlooked what
KZ> you were trying to tell me in my original response. I've been
KZ> working since yesterday to implement your suggestions.

Is it that much I suggested? ;-)

KZ> I'll modify the slashes to back-slashes today, when I'm able
KZ> to make an individual run for just the admin //db/f$ share.

The backslashes work out well in one of my clients' installations.

KZ> WRT the '$' metacharacter, what I actually had to run was:
KZ> amdump DailySet1 admin '//db/f\$'

Errm, yes, I could have told you that ... escaping meta-characters it
is ...

KZ> Thanks, again, for your help and suggestions.

You're welcome ...
-- 
best regards,
Stefan

Stefan G. Weichinger
mailto:[EMAIL PROTECTED]