Blanks in disklist entries

2001-05-29 Thread Martin Apel

Hi,

is it possible to put a path that contains blanks into the disklist?
I searched the FAQ and the mailing list, but had no luck.
Can I use either quotes or a backslash to escape the blanks,
or do I have to rename the directory so it doesn't contain
any blanks?

Greetings,

Martin

Martin Apel, Dipl.-Inform.          t e c m a t h  A G
Group Manager Software Development  Human Solutions Division
phone +49 (0)6301 606-300           Sauerwiesen 2, 67661 Kaiserslautern
fax   +49 (0)6301 606-309           Germany
[EMAIL PROTECTED]                   http://www.tecmath.com





Re: Advice: NFS vs SMB - looking for the voice of experience

2001-04-05 Thread Martin Apel

On Wed, 4 Apr 2001, Dave Hecht wrote:

> I have just a couple M$oft Win2K boxes that I would like my newly installed
> Amanda system to back up.  I do have NFS servers running on them and
> presently mount the data directories, via NFS, onto a Linux box.  Keeping in
> mind that I have < 50 GB of data (very little) and have a 12-hour window
> for backups (yes, small network in an office), and I really am not worried
> about preserving ACLs - My question is:
> 
> Which is easiest to maintain/setup - NFS mount the Win2k volumes onto the
> Linux box and use a gnu-tar-index backup, or install samba and use
> smbclient.  I lean towards NFS, is there any reason I should not?

I would prefer NFS-mounted volumes for the following reason:

smbtar currently supports incremental dumps only via a single
archive bit, i.e. you only get level 0 and level 1 dumps.
If you have a dumpcycle of, say, two weeks, the level 1 dump just before
a level 0 dump will collect all changes of the last two weeks, which
can be quite a lot. However, this depends on how often your data changes.
With NFS there is no such issue.
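The consequence of the single-archive-bit scheme can be sketched with a toy model (the numbers are hypothetical, purely for illustration):

```python
# Toy model, not Amanda code: with only levels 0 and 1, each level 1
# contains everything changed since the last full dump.
daily_change_mb = 500        # assumed average daily change
dumpcycle_days = 14          # two-week dumpcycle, full dump on day 0

# Single-archive-bit scheme: day d's incremental holds d days of changes.
level1_sizes = [day * daily_change_mb for day in range(1, dumpcycle_days)]

# A true multi-level scheme could dump only the changes since the
# previous incremental instead.
multilevel_sizes = [daily_change_mb] * (dumpcycle_days - 1)

print(max(level1_sizes))      # 6500 MB just before the next full dump
print(max(multilevel_sizes))  # 500 MB per day
```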

Hope this helps,

Martin





Re: using a DDS-4 Autoloader under linux

2001-03-08 Thread Martin Apel

On Thu, 8 Mar 2001, Werner Behnke wrote:

> > We are using a Seagate Scorpion DDS-4 Autoloader without problems.
> > The only issue was that, for an unknown reason, it doesn't like to be
> > connected to the SCSI controller as a 'wide' device. If you set the DIP
> > switches so that it registers itself as a 'narrow' SCSI device,
> > everything is fine. Performance is about 2.8 MB/s, so the narrow cable
> > should not slow down the device noticeably.
> 
> Does the autoloader support random access or 
> only sequential mode?

It supports random access.

> Did you set up chg-manual in amanda to change 
> tapes manually?
> 
> If yes: how do you control the media changer?
> With mtx (http://mtx.sourceforge.net/) or
> Autoloader Tape Library 
> (http://www.ee.ryerson.ca/~sblack/autoloader/)
> or something else (mt, SCSI UNLOAD command, 
> Amanda's chg-mtx...)?

I took chg-mtx and adapted it in one place to use mtx 1.2. If you use
Linux for the tape server, you have to configure the kernel to probe
multiple LUNs per SCSI device.
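For Linux 2.2 kernels of that era, this is presumably the "Probe all LUNs on each SCSI device" build option; a sketch of the corresponding .config line (the option name is taken from contemporary kernels, so verify it against your kernel version):

```
CONFIG_SCSI_MULTI_LUN=y
```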

Greetings,

Martin




Re: using a DDS-4 Autoloader under linux

2001-03-08 Thread Martin Apel

On Thu, 8 Mar 2001, Werner Behnke wrote:

> Hi,
> 
> we would like to buy a DDS-4 Autoloader
> (HP T5717A 40x6 or Seagate DAT 240 or 
> Sony TSL-S11000).
> 
We are using a Seagate Scorpion DDS-4 Autoloader without problems.
The only issue was that, for an unknown reason, it doesn't like to be
connected to the SCSI controller as a 'wide' device. If you set the DIP
switches so that it registers itself as a 'narrow' SCSI device,
everything is fine. Performance is about 2.8 MB/s, so the narrow cable
should not slow down the device noticeably.

Martin





Re: DDS4 parameters

2001-01-28 Thread Martin Apel

On Sat, 27 Jan 2001, Jason Winchell wrote:

> Does anyone know the parameters for DDS4, 150m, 20GB/40GB tapes?

I can offer the following:

define tapetype DDS4 {
    comment "DDS 4 tapes 150 m"
    length 19400 mbytes
    filemark 32 kbytes
    speed 2700 kbytes
}

This has been evaluated with a Seagate Scorpion 240.

Martin




Re: Weird illustration of peculiar interactions :-}

2001-01-18 Thread Martin Apel

On Fri, 19 Jan 2001, Chris Karakas wrote:

> Martin Apel wrote:
> > 
> > >
> > > So?  I was trying to point out that simply selecting the biggest
> > > dump may not give you the best packing.  Often, the few tapes contain
> > > four or five smaller dumps and can obtain a 99.8% usage rate.
> > >
> ...
> 
> > Yes, you are right. You might achieve a better packing by a more intelligent
> > algorithm. 
> 
> I dont know if you noticed it, but you are talking about the famous "bin
> packing problem" in combinatorics. Just search the web for "bin packing"
> and you will find quite a few algorithms and further literature on this
> vast subject (even AMANDA uses one, according to some old papers). 

Yes, I knew this is a well-known problem. But most of these algorithms
are designed to work with full knowledge, i.e. they assume all dump
sizes are known in advance. In Amanda's case this is not true: after
you have written out the first dump, you can make a new decision with
extended information. Anyway, since I have had rather good experience
with my simple approach, there's not really a need for a better algorithm.

Regards,

Martin







Re: Weird illustration of peculiar interactions :-}

2001-01-18 Thread Martin Apel

On Thu, 18 Jan 2001, Joi Ellis wrote:

> On Thu, 18 Jan 2001, Martin Apel wrote:
> 
> >On Wed, 17 Jan 2001, Joi Ellis wrote:
> >
> >> 
> >> I amflush these to tapes to send to offsite storage.
> >> I've already done two tapes from this batch, I have four
> >> left to do.
> >
> >That's a nice idea, but I have more data to back up than fits on the
> >holding disk, so I have to flush some dumps to tape in order to dump
> >all filesystems completely.
> >
> >Martin
> 
> So?  I was trying to point out that simply selecting the biggest
> dump may not give you the best packing.  Often, the few tapes contain
> four or five smaller dumps and can obtain a 99.8% usage rate.
> 
> The total amount of data to be backed up has no effect, since you can
> only flush what's on the holding disk.

Yes, you are right. You might achieve a better packing with a more
intelligent algorithm. But after you have written out the first dump,
things might already have changed, because another dump has finished in
the meantime. I have used this scheme for a few months now, and it gives
me usage ratios close to 100% for all but the last tape.
But this only works if you have a good mix of filesystem sizes.
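The scheme described here (always write the biggest dump that still fits) can be sketched as follows; this is a simplified illustration, not Amanda's actual driver code:

```python
def pack_tape(tape_capacity, dump_sizes):
    """Greedy sketch: repeatedly pick the biggest dump that still
    fits in the remaining tape capacity."""
    remaining = tape_capacity
    chosen = []
    for size in sorted(dump_sizes, reverse=True):
        if size <= remaining:
            chosen.append(size)
            remaining -= size
    return chosen, remaining

# Hypothetical dump sizes in GB against a 140 GB tape:
chosen, left = pack_tape(140, [90, 60, 40, 30, 10, 5])
print(chosen, left)   # [90, 40, 10] 0
```

In practice the set of candidate dumps keeps changing as dumps finish, which is why the packing decision has to be re-made after every write rather than computed once up front.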

Greetings,

Martin




Re: Weird illustration of peculiar interactions :-}

2001-01-17 Thread Martin Apel

On Wed, 17 Jan 2001, Joi Ellis wrote:

> I have a perl script which will go through my holding disk and spit
> out a list of backup sets to select to best pack tapes.
> 
> here's an example:
> 
> [amanda@joi amanda]$ pack -C OffSite
> 138530/140906 (   98%)
> /home/amanda/mnt/holdingdisk/OffSite/20010106
> /home/amanda/mnt/holdingdisk/OffSite/20010114
> 
> 116784/140906 (   82%)
> /home/amanda/mnt/holdingdisk/OffSite/20010107
> /home/amanda/mnt/holdingdisk/OffSite/20010108
> 
> 114088/140906 (   80%)
> /home/amanda/mnt/holdingdisk/OffSite/20010111
> 
> 99327/140906 (   70%)
> /home/amanda/mnt/holdingdisk/OffSite/20010112
> 
> Nothing left to pack!
> 
> I amflush these to tapes to send to offsite storage.
> I've already done two tapes from this batch, I have four
> left to do.

That's a nice idea, but I have more data to back up than fits on the
holding disk, so I have to flush some dumps to tape in order to dump
all filesystems completely.

Martin







Re: Weird illustration of peculiar interactions :-}

2001-01-14 Thread Martin Apel

On Sun, 14 Jan 2001, David Wolfskill wrote:

> Well, as it happened, one of the file systems I back up is hovering right
> around 12 GB.  And as luck would have it, amanda would typically get about
> 30 - 35 GB on the first tape before this file system was ready to be taped.
> Then taper would try to write this backup image to tape, and eventually
> fail, only to re-start on the next tape (all as expected).  But that would
> leave a bunch of wasted space on the end of that first tape, with the
> result that the Offsite backups would actually require all 3 tapes.

I implemented some changes in the driver that cause it to gather dumps
until a certain threshold is reached. Afterwards it always writes
the biggest dump still fitting on the tape. This works quite nicely for me
and improves tape utilization a lot. Unfortunately it also increases the
total dump time a bit if your tape is slow.
I haven't released it yet, because I implemented it in Amanda 2.4.1p1
and haven't gotten around to porting it to 2.4.2.
But if you like, I can post the patches for 2.4.1p1.

Greetings,

Martin





Re: Amanda vs. AIX

2001-01-11 Thread Martin Apel

On Thu, 11 Jan 2001, Bernhard R. Erdmann wrote:

> Hi,
> 
> anyone with experience "Amanda on IBM AIX"? Any troubles to be afraid
> of?

I have one Amanda client (among others :-)) which runs AIX,
without any problems ever.

Greetings,

Martin





Re: Datagram size for estimates revisited

2000-11-09 Thread Martin Apel

On 9 Nov 2000, Alexandre Oliva wrote:

> On Nov  9, 2000, Chris Karakas <[EMAIL PROTECTED]> wrote:
> 
> > How about increasing PIPE_BUF and recompiling the kernel?
> 
> It might be a good idea, for systems whose kernel sources are
> available.  But we'd better fix the actual bug instead of just
> papering over it :-)

Yes, I would also prefer fixing the bug in Amanda to working around it
in the kernel. However, I found a way to circumvent the problem, which
works at least for me. Most of the filesystem names to be backed up
start with /projekte/memo, so I simply created a symbolic link /pm pointing
to that directory. I had to replace /projekte/memo by /pm everywhere in the
log files as well as in the disklist, curinfo, index and the gnutar-listdir,
but everything works fine now. The only problem will be restoring something
during the next month or so, because the tapes still have the long names
on them :-)

Martin
--------

Martin Apel      phone: ++49.6301.606.300
Human Modeling   fax:   ++49.6301.606.309
TECMATH AG       email: [EMAIL PROTECTED]
Sauerwiesen 2
67661 Kaiserslautern, Germany






Datagram size for estimates revisited

2000-11-09 Thread Martin Apel

Hi,

I investigated the estimate problems on a host with many filesystems
a bit, and I think I have found out what goes wrong.
It's not the size of the datagram; it's the maximum size of a pipe's buffer.
The sendsize processes all seem to be stuck waiting for a lock when
trying to write their results to stdout.
By that time they have written a little more than 4000 bytes into the
pipe to the master sendsize process. That process reads the pipe only
after all children have terminated, so there is a deadlock.
I'm not sure whether it's really the master sendsize process on the other
end of the pipe, or even the amandad process.
Does anybody have an idea what to do about this?
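The suspected limit can be checked directly: a pipe only absorbs a bounded number of bytes before a writer blocks. Here is a small sketch that measures this capacity safely with a non-blocking write (Python used purely for illustration; sendsize itself is C):

```python
import fcntl
import os

# Fill a pipe with non-blocking writes and count how many bytes fit
# before the kernel would block the writer. If the reader never drains
# the pipe until the writers exit, any writer producing more than this
# many bytes deadlocks, which matches the behaviour described above.
r, w = os.pipe()
flags = fcntl.fcntl(w, fcntl.F_GETFL)
fcntl.fcntl(w, fcntl.F_SETFL, flags | os.O_NONBLOCK)

written = 0
chunk = b"x" * 512
try:
    while True:
        written += os.write(w, chunk)
except BlockingIOError:   # pipe buffer is full
    pass

print(written)            # a little over 4096 was typical around 2000
os.close(r)
os.close(w)
```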

Martin






Datagram size for estimates revisited

2000-11-09 Thread Martin Apel

Hi, all

there has been some discussion on this list about the size of the datagram
sent during the estimate phase. The original 2.4.1p1 size was 8 KB;
Alexandre and John both recommended increasing it to 64 KB if one machine
has very many filesystems to estimate.
One of my machines has 39 filesystems to back up, which seems to be too many,
because the path names of these filesystems are rather long.
If I reduce the number by a few, everything works fine.
Is there any problem with raising the maximum datagram size any further
(Linux 2.2 for client and server), or is there another recommended way
to deal with this problem?
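A back-of-the-envelope calculation shows why 39 filesystems with long paths can overflow an 8 KB datagram; the request-line shape below is a rough guess for illustration, not Amanda's exact wire protocol, and the path is invented:

```python
# Hypothetical estimate request: one line per disk and dump level.
n_filesystems = 39
path = "/projekte/memo/some/rather/long/subdirectory/name"  # invented example
levels_per_disk = 3                            # e.g. levels 0, 1 and 2
line = f"GNUTAR {path} 0 1970:1:1:0:0:0\n"     # guessed line format

request_size = n_filesystems * levels_per_disk * len(line)
print(request_size, request_size > 8 * 1024)   # just over the 8 KB limit
```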

Martin

