more re Amanda on Cobalt Raq-4i - 2.6.0 source tree configure fails

2008-04-25 Thread Craig Dewick


Ok well putting the problems with 2.5.2p1 aside for now, I decided to give 
the brand new 2.6.0 source tree a go, but it's failed during the configure 
phase with this:


---- start ----

checking for pkg-config... no
checking for GLIB - version >= 2.2.0... no
*** A new enough version of pkg-config was not found.
*** See http://www.freedesktop.org/software/pkgconfig/
configure: error: glib not found or too old; See 
http://wiki.zmanda.com/index.php/Installation for help


---- stop ----

I don't know if it's possible to upgrade gcc and GLib on the platform in 
question since the OS is pretty much a closed-book deal (and as you'd 
know, Sun stopped supporting the entire Cobalt line around 2 years ago 
now). Note that the configure failure above is about pkg-config and GLib 
2.x, not glibc. If anyone can confirm that they've been able to install 
newer versions of those, I'll give that a go. I don't think anyone has a 
free gcc upgrade package for the Raq servers, but zeffie.net or 
cobaltsupport.com may.
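
If no packages turn up, one possible route (untested on a Raq; the prefix 
and version numbers below are only placeholders, and the old gcc may well 
choke on a newer glib) is to build the two missing pieces from source into 
a private prefix and point Amanda's configure at them:

# hypothetical: build pkg-config and glib-2.x into a private prefix
cd pkg-config-0.23 && ./configure --prefix=/usr/local/glib2 && make && make install
cd ../glib-2.16.3 && PATH=/usr/local/glib2/bin:$PATH \
    ./configure --prefix=/usr/local/glib2 && make && make install

# then tell Amanda's configure where to look
PKG_CONFIG=/usr/local/glib2/bin/pkg-config \
PKG_CONFIG_PATH=/usr/local/glib2/lib/pkgconfig \
./configure ...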


I've considered the radical option of totally replacing the OS (with 
something like Strongbolt Linux) though without a spare system to try that 
out on first I'm not keen to do it with my 'production' web server. 8-)


Can anyone suggest what could be done with the 2.5.2p1 source tree to 
correct the compilation problem I outlined in a previous message? I'm not 
sure if the problem is due to something missing out of a header file, or 
an unforeseen issue with platform-specific parts of the dgram.c code file.


Regards,

Craig.

--
Post by Craig Dewick (tm). Web @ "http://lios.apana.org.au/~cdewick".
Email 2 "[EMAIL PROTECTED]". SunShack @ "http://www.sunshack.org".
Galleries @ "http://www.sunshack.org/gallery2". Also lots of tech data, etc.
Sun Microsystems webring at "http://n.webring.com/hub?ring=sunmicrosystemsu".


Re: Amanda and ZFS

2008-04-25 Thread Jon LaBadie
On Fri, Apr 25, 2008 at 01:36:35PM -0600, John E Hein wrote:
> Jon LaBadie wrote at 13:57 -0400 on Apr 25, 2008:
>  > Though I've not tried it, it should.
>  > 
>  > I base that on the description of the command
>  > 
>  > /usr/sbin/ufsdump [options] [arguments] files_to_dump
>  > 
>  > and the belief that the include directive merely provides the args
>  > corresponding to "files_to_dump".
> 
> Ah.  Okay.  That's a solaris ufsdump feature... linux, too, maybe
> others.  It won't work for the BSDs (filesystem only).
> 
> And that's _if_ amanda passes that on the dump invocation.
> I haven't tried it either or looked at the code yet.
> 
> But one limitation (with solaris' ufsdump and linux's dump) is that
> you can't do incrementals using that method.  Level 0 only.  I don't
> know if amanda adds support on top of that to kludge in incremental
> support - I doubt it, but I'm sure someone will speak up if you can.
> 

In the original query regarding ZFS the question was about file systems.
I don't know if dump/ufsdump would regard a relative pathname that
is a mount point as a file system.  If it did, then incrementals and
fulls could both be done.

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: Amanda and ZFS

2008-04-25 Thread Pieter Bowman

>> ...
>> The gtar devs finally accepted something to help with this problem:
>> --no-check-device.
>> ...

Thanks, I hadn't caught the addition of that option.  That also
reminds me that the problem wasn't the inode number, but the device
number.

Pieter


Re: Amanda and ZFS

2008-04-25 Thread John E Hein
Jon LaBadie wrote at 13:57 -0400 on Apr 25, 2008:
 > Though I've not tried it, it should.
 > 
 > I base that on the description of the command
 > 
 > /usr/sbin/ufsdump [options] [arguments] files_to_dump
 > 
 > and the belief that the include directive merely provides the args
 > corresponding to "files_to_dump".

Ah.  Okay.  That's a solaris ufsdump feature... linux, too, maybe
others.  It won't work for the BSDs (filesystem only).

And that's _if_ amanda passes that on the dump invocation.
I haven't tried it either or looked at the code yet.

But one limitation (with solaris' ufsdump and linux's dump) is that
you can't do incrementals using that method.  Level 0 only.  I don't
know if amanda adds support on top of that to kludge in incremental
support - I doubt it, but I'm sure someone will speak up if you can.


Re: Amanda and ZFS

2008-04-25 Thread Jon LaBadie
On Fri, Apr 25, 2008 at 11:46:34AM -0600, John E Hein wrote:
> Jon LaBadie wrote at 10:59 -0400 on Apr 25, 2008:
>  > Another way would be to use include directives.  For example, if the
>  > zfs pool was /pool and had file systems of a, b, c, and d, you could
>  > set up multiple DLEs that were rooted at /pool (different tag names)
>  > and had include directives of "include ./a ./c" and another with
>  > "include ./b ./d"  While traversing each of the included starting
>  > points (directories), tar would never cross a file system boundary.
> 
> Do those work when using 'dump' instead of 'tar'?

Though I've not tried it, it should.

I base that on the description of the command

/usr/sbin/ufsdump [options] [arguments] files_to_dump

and the belief that the include directive merely provides the args
corresponding to "files_to_dump".
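
If that is right, the level 0 command amanda builds from such an include
list would be something like this (the tape-size and blocking arguments
here are hypothetical):

/usr/sbin/ufsdump 0usf 1048576 - ./a ./c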

jl
-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: Amanda and ZFS

2008-04-25 Thread John E Hein
Pieter Bowman wrote at 11:41 -0600 on Apr 25, 2008:
 > The final issue I found was that the inode numbers in the snapshots
 > change each time a new snapshot is created.  This is a problem with
 > GNU tar's listed-incremental facility.  To work around this I ended up
 > hacking GNU tar to make it ignore the inodes stored in the listed
 > incremental files.  This was just a simple change, to have ZFS
 > filesystems treated the same as NFS.  The patch was submitted to the
 > GNU tar developers, but was rejected.  Here is the patch as applied to
 > GNU tar 1.16 (this patch also contains what I consider a fix for an
 > actual coding bug):

The gtar devs finally accepted something to help with this problem:
--no-check-device.

http://article.gmane.org/gmane.comp.archivers.amanda.user/32804/match=nfs+tar
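
To try the flag by hand outside amanda (paths and snapshot name are just
examples), a new enough gtar can be exercised against a snapshot like so:

gtar --create --file=/dev/null --listed-incremental=/tmp/test.snar \
     --no-check-device --directory=/local/.zfs/snapshot/AMANDA .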


Re: Amanda and ZFS

2008-04-25 Thread Pieter Bowman
I started using ZFS in a big way over a year ago on our main file
server.  Since there is no ufsdump replacement to use with ZFS, I
elected to use GNU tar.  I know this doesn't yet cover backing up
things like ACLs, but we don't use them in our very heterogeneous
environment.  The main idea I had was to take a snapshot and point tar
at the snapshot so it had a nice static, read-only copy of the
filesystem to work from.

I created a shell script to run as a cron job, just before amdump is
run, which cleans up the previous snapshots and takes new snapshots of
each of the pools (effectively):

zfs destroy -r $pool@AMANDA
zfs snapshot -r $pool@AMANDA
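
A minimal sketch of that cron script (the pool names are placeholders and
error handling is omitted):

#!/bin/sh
# refresh the per-pool AMANDA snapshots just before amdump runs
for pool in tank home; do
    zfs destroy -r "$pool@AMANDA" 2>/dev/null   # drop the old snapshot, if any
    zfs snapshot -r "$pool@AMANDA"              # take a fresh recursive one
done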

Fortunately, amanda has a nice way to specify that the filesystem name
is something like "/local", but the point to have tar start at is a
different location.  A disklist entry such as:

foo.math.utah.edu /local /local/.zfs/snapshot/AMANDA user-tar

The final issue I found was that the inode numbers in the snapshots
change each time a new snapshot is created.  This is a problem with
GNU tar's listed-incremental facility.  To work around this I ended up
hacking GNU tar to make it ignore the inodes stored in the listed
incremental files.  This was just a simple change, to have ZFS
filesystems treated the same as NFS.  The patch was submitted to the
GNU tar developers, but was rejected.  Here is the patch as applied to
GNU tar 1.16 (this patch also contains what I consider a fix for an
actual coding bug):

diff -r -c tar-1.16/src/incremen.c tar-1.16-local/src/incremen.c
*** tar-1.16/src/incremen.c	Fri Sep  8 10:42:18 2006
--- tar-1.16-local/src/incremen.c	Fri Dec  8 14:53:37 2006
***************
*** 71,77 ****
  
  #if HAVE_ST_FSTYPE_STRING
    static char const nfs_string[] = "nfs";
! # define NFS_FILE_STAT(st) (strcmp ((st).st_fstype, nfs_string) == 0)
  #else
  # define ST_DEV_MSB(st) (~ (dev_t) 0 << (sizeof (st).st_dev * CHAR_BIT - 1))
  # define NFS_FILE_STAT(st) (((st).st_dev & ST_DEV_MSB (st)) != 0)
--- 71,77 ----
  
  #if HAVE_ST_FSTYPE_STRING
    static char const nfs_string[] = "nfs";
! # define NFS_FILE_STAT(st) (strcmp ((st).st_fstype, nfs_string) == 0 || strcmp ((st).st_fstype, "zfs") == 0)
  #else
  # define ST_DEV_MSB(st) (~ (dev_t) 0 << (sizeof (st).st_dev * CHAR_BIT - 1))
  # define NFS_FILE_STAT(st) (((st).st_dev & ST_DEV_MSB (st)) != 0)
***************
*** 247,253 ****
  	 directories, consider all NFS devices as equal,
  	 relying on the i-node to establish differences.  */
  
!       if (! (((DIR_IS_NFS (directory) & nfs)
  	      || directory->device_number == stat_data->st_dev)
  	     && directory->inode_number == stat_data->st_ino))
  	{
--- 247,253 ----
  	 directories, consider all NFS devices as equal,
  	 relying on the i-node to establish differences.  */
  
!       if (! (((DIR_IS_NFS (directory) && nfs)
  	      || directory->device_number == stat_data->st_dev)
  	     && directory->inode_number == stat_data->st_ino))
  	{


I hope this helps other people with using amanda and ZFS.

I'm happy to clear up any unclear issues.

Pieter


Re: Amanda and ZFS

2008-04-25 Thread John E Hein
Jon LaBadie wrote at 10:59 -0400 on Apr 25, 2008:
 > Another way would be to use include directives.  For example, if the
>  > zfs pool was /pool and had file systems of a, b, c, and d, you could
 > set up multiple DLEs that were rooted at /pool (different tag names)
 > and had include directives of "include ./a ./c" and another with
 > "include ./b ./d"  While traversing each of the included starting
 > points (directories), tar would never cross a file system boundary.

Do those work when using 'dump' instead of 'tar'?


RE: Amanda and ZFS

2008-04-25 Thread Anthony Worrall

neat

> -----Original Message-----
> From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
> On Behalf Of Jon LaBadie
> Sent: 25 April 2008 16:00
> To: amanda-users@amanda.org
> Subject: Re: Amanda and ZFS
> 
> On Fri, Apr 25, 2008 at 02:32:27PM +0100, Anthony Worrall wrote:
> > Hi
> >
> > unfortunately zfsdump, or "zfs send" as it is now, does not relate to
> > ufsdump in any way :-(
> >
> >
>   [ big snip ]
> >
> > One of the properties of zfs is that it encourages the use of a
> > filesystem for a logical set of files, e.g. a user home directory,
> > a software package, etc.
> > This means that every time you create a new filesystem you need to
> > create a new DLE for amanda. In fact creating the amanda DLE takes
> > longer than creating the zfs filesystem.
> >
> > You can not just use tar to dump multiple zfs filesystems because amanda
> > tells tar not to cross filesystem boundaries.
> >
> > You could probably write a wrapper around tar that removes the
> > --one-file-system option to get around this limitation.
> 
> Another way would be to use include directives.  For example, if the
> zfs pool was /pool and had file systems of a, b, c, and d, you could
> set up multiple DLEs that were rooted at /pool (different tag names)
> and had include directives of "include ./a ./c" and another with
> "include ./b ./d"  While traversing each of the included starting
> points (directories), tar would never cross a file system boundary.
> 
> --
> Jon H. LaBadie  [EMAIL PROTECTED]
>  JG Computing
>  12027 Creekbend Drive  (703) 787-0884
>  Reston, VA  20194  (703) 787-0922 (fax)


RE: Amanda and ZFS

2008-04-25 Thread John E Hein
Anthony Worrall wrote at 14:32 +0100 on Apr 25, 2008:
 > unfortunately zfsdump, or "zfs send" as it is now, does not relate to
 > ufsdump in any way :-(

Sorry to hijack this thread, but...

Can Solaris and/or ZFS snapshots support partial filesystem dumps (and
restores)?  If not, how do people using dump for backups support large
filesystems (that may be bigger than a tape)?  Are split dumps and
dump/restore or tar with excludes the only way in amanda right now?


Re: Amanda and ZFS

2008-04-25 Thread Chris Hoogendyk



Anthony Worrall wrote:

Hi

unfortunately zfsdump, or "zfs send" as it is now, does not relate to
ufsdump in any way :-(



hmm. I guess I was being a bit naive.

I had assumed zfs development was more mature.

After reading the comments on this thread, I went searching for 
references to zfsdump (which doesn't exist, but nevertheless is a good 
search term for discussions of the missing capability). There are a 
variety of discussions on Sun's web site and others regarding 
difficulties with figuring out how to back up zfs, most particularly with 
respect to disaster recovery, where file system structure and 
information have been lost.


This gives me a bit more insight into comments from a Sun engineer we 
met with a couple of weeks ago (ok, he was an engineer specializing in 
Sun storage systems who works for the vendor who has the Sun contract 
for our state). Anyway, when I asked him if ZFS was ready for prime 
time, he hedged. I asked why people weren't adopting it more. He said 
that it hadn't really panned out, and that UFS had developed more during 
the time of ZFS development. So, most people were sticking with UFS. Of 
particular note was that ZFS isn't really supported for your boot drive.


Based on our discussions with this engineer, the new servers and storage 
systems we are getting will be set up entirely without ZFS.


However, given my earlier naive assumptions, I'm not going to assume 
that this is the complete story. Just enough to get my skepticism and 
sysadmin conservatism into full gear. ;-)




---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology Department
 (*) \(*) -- 140 Morrill Science Center
~~ - University of Massachusetts, Amherst

<[EMAIL PROTECTED]>

---


Re: Amanda and ZFS

2008-04-25 Thread Jon LaBadie
On Fri, Apr 25, 2008 at 02:32:27PM +0100, Anthony Worrall wrote:
> Hi
> 
> unfortunately zfsdump, or "zfs send" as it is now, does not relate to
> ufsdump in any way :-(
> 
> 
  [ big snip ]
> 
> One of the properties of zfs is that it encourages the use of a
> filesystem for a logical set of files, e.g. a user home directory,
> a software package, etc.
> This means that every time you create a new filesystem you need to
> create a new DLE for amanda. In fact creating the amanda DLE takes
> longer than creating the zfs filesystem.
> 
> You can not just use tar to dump multiple zfs filesystems because amanda
> tells tar not to cross filesystem boundaries.
> 
> You could probably write a wrapper around tar that removes the
> --one-file-system option to get around this limitation.

Another way would be to use include directives.  For example, if the
zfs pool was /pool and had file systems of a, b, c, and d, you could
set up multiple DLEs that were rooted at /pool (different tag names)
and had include directives of "include ./a ./c" and another with
"include ./b ./d"  While traversing each of the included starting
points (directories), tar would never cross a file system boundary.
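
Concretely, a pair of such DLEs might look like this (hostname and
dumptype names are invented; this assumes a GNU-tar based dumptype):

define dumptype pool-tar-ac {
    user-tar
    include list "./a" "./c"
}
define dumptype pool-tar-bd {
    user-tar
    include list "./b" "./d"
}

# disklist: same device, distinct disknames, so amanda sees two DLEs
foo.example.com /pool-ac /pool pool-tar-ac
foo.example.com /pool-bd /pool pool-tar-bd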

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


RE: Amanda and ZFS

2008-04-25 Thread Anthony Worrall
Hi

unfortunately zfsdump, or "zfs send" as it is now, does not relate to
ufsdump in any way :-(


From "man zfs":

zfs send [-i snapshot1] snapshot2
     Creates a stream representation of snapshot2, which is
     written to standard output. The output can be redirected
     to a file or to a different machine (for example, using
     ssh(1)). By default, a full stream is generated.

     -i snapshot1    Generate an incremental stream from
                     snapshot1 to snapshot2. The incremental
                     source snapshot1 can be specified as the
                     last component of the snapshot name (for
                     example, the part after the "@"), and it
                     will be assumed to be from the same file
                     system as snapshot2.

     The format of the stream is evolving. No backwards compati-
     bility is guaranteed. You may not be able to receive your
     streams on future versions of ZFS.
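
For example, an incremental stream between two nightly snapshots (the
names here are hypothetical) can be generated and saved with:

zfs send -i tank/home@monday tank/home@tuesday > /backup/home.tue.zfsinc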

I wrote a script to use this but I had a problem getting estimates for
the incremental snapshots.

I can not see how amrecover would be able to restore from the
snapshot, as it does not know the format used. In fact there is no way
that I know of to extract a single file from the stream short of
recovering the whole snapshot. This is probably not too much of a big
issue, as the tape backup is only needed for disaster recovery and
snapshots can be used for file recovery.

amrestore could be used with "zfs receive" to recover the snapshot.
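
Something like this hypothetical pipeline (the tape device, hostname and
DLE name are invented) would bring a dumped stream back as a filesystem:

amrestore -p /dev/rmt/0n fileserver /pool | zfs receive -F pool/restored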

One of the properties of zfs is that it encourages the use of a
filesystem for a logical set of files, e.g. a user home directory,
a software package, etc.
This means that every time you create a new filesystem you need to
create a new DLE for amanda. In fact creating the amanda DLE takes
longer than creating the zfs filesystem.

You can not just use tar to dump multiple zfs filesystems because amanda
tells tar not to cross filesystem boundaries.

You could probably write a wrapper around tar that removes the
--one-file-system option to get around this limitation.

  

Anthony Worrall

 
> -----Original Message-----
> From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
> On Behalf Of Chris Hoogendyk
> Sent: 25 April 2008 13:39
> To: Nick Smith
> Cc: amanda-users@amanda.org
> Subject: Re: Amanda and ZFS
> 
> 
> 
> Nick Smith wrote:
> > Dear Amanda Administrators.
> >
> > What dump configuration would you suggest for backing up a ZFS pool of
> > about 300GB? Within the pool there are several smaller 'filesystems'.
> >
> > Would you :
> >
> > 1.  Use a script to implement ZFS snapshots and send these to the server
> >     as the DLE?
> > 2.  Use tar to back up the filesystems? We do not make much use of ACLs
> >     so tar's lack of ACL support shouldn't be an issue?
> > 3.  Something else?
> >
> > Question : If I use 2, can I still use 'amrecover'? Which AFAIK would be
> >            the case if I went with 1?
> >
> > The host is a Sun Solaris 10 x86 box, if that is pertinent.
> 
> 
> I'm not on Solaris 10 yet, and haven't used ZFS, but . . .
> 
> I understand that with ZFS you have zfsdump (just as with ufs I have
> ufsdump). So you could use zfsdump with snapshots. I'm guessing it
> wouldn't be too hard to modify the wrapper I wrote for Solaris 9 that
> uses ufsdump with snapshots and is documented here
> http://wiki.zmanda.com/index.php/Backup_client#Chris_Hoogendyk.27s_Example
> 
> If you have that pool logically broken up into a number of smaller
> pieces that can be snapshotted and dumped, it will make it smoother for
> Amanda's planner to distribute the load over the dump cycle.
> 
> Shouldn't have any problems with amrecover.
> 
> 
> 
> ---
> 
> Chris Hoogendyk
> 
> -
> O__   Systems Administrator
>c/ /'_ --- Biology Department
>   (*) \(*) -- 140 Morrill Science Center
> ~~ - University of Massachusetts, Amherst
> 
> <[EMAIL PROTECTED]>
> 
> ---


Re: amanda 2.6.0 planner segfaulting when the estimate for a pending backup is larger than the tape size

2008-04-25 Thread Telsin

Jean-Louis-

It looks like that did the trick. I'll test a little more and let you
know for sure, but after I applied the patch it correctly produced a
"dumps way too big, must skip incremental dumps" error and ran the rest
of the dumps last night.


Thanks!

 -Darrell

On Apr 23, 2008, at 8:19 AM, Jean-Louis Martineau wrote:


Darrell,

Thanks for reporting the problem.
Can you try the attached patch? It adds the missing NULL terminator to a
varargs vstrextend() call, which otherwise reads past the end of the
argument list.

Jean-Louis

Telsin wrote:
I noticed this problem after upgrading to 2.6.0 from 2.5.something.  
All of a sudden it wouldn't complete a backup run and I was getting  
messages from cron about the planner segfaulting. I culled some  
items from my disklist, and all of a sudden it starting working  
again. After a little experimenting, I've come to the conclusion  
that it's happening whenever a backup estimate generates a backup  
size that's bigger than the current tape size. The debug logs don't  
help, they just end after closing a connection for an estimate. I  
do not currently have tape spanning enabled, although it will  
probably be something to try soon.


Anyone else seen this or have a fix? I haven't had a chance to look  
into the source yet; figured I'd ask here before I started poking  
around at something I wasn't familiar with :)


Thanks!

 -Darrell


Index: server-src/planner.c
===================================================================
--- server-src/planner.c	(revision 11038)
+++ server-src/planner.c	(working copy)
@@ -2664,7 +2664,7 @@
 	}
 	strappend(errstr, "]");
 	qerrstr = quote_string(errstr);
-	vstrextend(&bi->errstr, " ", qerrstr);
+	vstrextend(&bi->errstr, " ", qerrstr, NULL);
 	amfree(errstr);
 	amfree(qerrstr);
 	arglist_end(argp);




Re: Invalid Service?

2008-04-25 Thread Tony van der Hoff
On 25 Apr at 14:02 "Stefan G. Weichinger" <[EMAIL PROTECTED]> wrote in message
<[EMAIL PROTECTED]>

> Tony van der Hoff schrieb:
> > Thanks for taking an interest, Stefan; that's what I thought, too. I
> > wish it were that simple. Maybe I'm missing something, but what is wrong
> > with this (3 separate files, each with the same name as the service):
>
> > #default: on
> > # description: The amanda index service
> > service amandaidx
> > {
> >         disable         = no
> >         socket_type     = stream
> >         protocol        = tcp
> >         wait            = no
> >         user            = backup
> >         group           = backup
> >         groups          = yes
> >         server          = /usr/lib/amanda/amindexd
> > }
> >
> > Oh, and all those paths are valid.
>
> And (x)inetd was restarted?
>
yes (frequently).

> Show "ls -l /usr/lib/amanda/amindexd", is it executable, who is the owner?
>
[EMAIL PROTECTED]:~$ ls -l /usr/lib/amanda/amindexd
-rwxr-xr-x 1 backup backup 43428 2008-02-04 14:57 /usr/lib/amanda/amindexd

I'm away for the weekend now; I'll catch up on Monday.

Cheers, Tony.

-- 
Tony van der Hoff| mailto:[EMAIL PROTECTED]
Buckinghamshire, England 


Re: Invalid Service?

2008-04-25 Thread Stefan G. Weichinger

Tony van der Hoff schrieb:

Thanks for taking an interest, Stefan; that's what I thought, too. I wish it
were that simple. Maybe I'm missing something, but what is wrong with this
(3 separate files, each with the same name as the service):



#default: on
# description: The amanda index service
service amandaidx
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = backup
        group           = backup
        groups          = yes
        server          = /usr/lib/amanda/amindexd
}


Oh, and all those paths are valid.


And (x)inetd was restarted?

Show "ls -l /usr/lib/amanda/amindexd", is it executable, who is the owner?

Stefan





Re: Invalid Service?

2008-04-25 Thread Marc Muehlfeld

Tony van der Hoff schrieb:

I wish it were that simple.


What authentication do you use? Have you checked out the link
http://wiki.zmanda.com/index.php/Configuring_bsd/bsdudp/bsdtcp_authentication
at the "Configuring xinetd" section, too?


--
Marc Muehlfeld (Leitung Systemadministration)
Zentrum fuer Humangenetik und Laboratoriumsmedizin Dr. Klein und Dr. Rost
Lochhamer Str. 29 - D-82152 Martinsried
Telefon: +49(0)89/895578-0 - Fax: +49(0)89/895578-78
http://www.medizinische-genetik.de



Re: Invalid Service?

2008-04-25 Thread Tony van der Hoff
On 25 Apr at 13:22 "Stefan G. Weichinger" <[EMAIL PROTECTED]> wrote in message
<[EMAIL PROTECTED]>

> Tony van der Hoff schrieb:
> > Using Amanda 2.5.1p1 under Debian Etch, my backups work fine, and I can
> > recover partitions from tape using dd, etc.
>> 
> > However: [EMAIL PROTECTED]:~$ sudo amrecover HomeDumps AMRECOVER Version
> > 2.5.1p1. Contacting server on localhost ... NAK: amindexd: invalid
> > service
>> 
> > What does this message mean, and how to fix this?
>
> I think it means that you haven't read the docs on how to configure
> (x)inetd correctly.
>
> http://www.amanda.org/docs/install.html#id325457
>
>
http://wiki.zmanda.com/index.php/Quick_start#Configuring_xinetd_on_the_server
>
> Stefan
>
Thanks for taking an interest, Stefan; that's what I thought, too. I wish it
were that simple. Maybe I'm missing something, but what is wrong with this
(3 separate files, each with the same name as the service):

# default: on
# description: The amanda service
service amanda
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = backup
        group           = backup
        groups          = yes
        server          = /usr/lib/amanda/amandad
}


#default: on
# description: The amanda index service
service amandaidx
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = backup
        group           = backup
        groups          = yes
        server          = /usr/lib/amanda/amindexd
}

#default: on
# description: The amanda tape service
service amidxtape
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = backup
        group           = backup
        groups          = yes
        server          = /usr/lib/amanda/amidxtaped
}


Oh, and all those paths are valid.

-- 
Tony van der Hoff| mailto:[EMAIL PROTECTED]
Buckinghamshire, England 


Re: Amanda and ZFS

2008-04-25 Thread Chris Hoogendyk



Nick Smith wrote:

Dear Amanda Administrators.

What dump configuration would you suggest for backing up a ZFS pool of 
about 300GB? Within the pool there are several smaller 'filesystems'.


Would you :

1.  Use a script to implement ZFS snapshots and send these to the server
as the DLE?
2.  Use tar to back up the filesystems? We do not make much use of ACLs
so tar's lack of ACL support shouldn't be an issue?
3.  Something else?

Question : If I use 2, can I still use 'amrecover'? Which AFAIK would be
           the case if I went with 1?

The host is a Sun Solaris 10 x86 box, if that is pertinent.



I'm not on Solaris 10 yet, and haven't used ZFS, but . . .

I understand that with ZFS you have zfsdump (just as with ufs I have 
ufsdump). So you could use zfsdump with snapshots. I'm guessing it 
wouldn't be too hard to modify the wrapper I wrote for Solaris 9 that 
uses ufsdump with snapshots and is documented here 
http://wiki.zmanda.com/index.php/Backup_client#Chris_Hoogendyk.27s_Example


If you have that pool logically broken up into a number of smaller 
pieces that can be snapshotted and dumped, it will make it smoother for 
Amanda's planner to distribute the load over the dump cycle.


Shouldn't have any problems with amrecover.



---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology Department
 (*) \(*) -- 140 Morrill Science Center
~~ - University of Massachusetts, Amherst

<[EMAIL PROTECTED]>

---


Re: Amanda on a Cobalt Raq-4 web server?

2008-04-25 Thread Craig Dewick


Hi everyone,

Ok I've had another go at compiling Amanda (2.5.2p1 is the source version 
I have in use on other systems at present) on my Cobalt Raq-4i system, and 
it's aborted when compiling dgram.c like this:


---- start ----

source='dgram.c' object='dgram.lo' libtool=yes \
DEPDIR=.deps depmode=gcc /bin/sh ../config/depcomp \
/bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I. 
-I../config -I../gnulib  -Wall -W -Wparentheses -Wmissing-prototypes 
-Wstrict-prototypes -Wmissing-declarations -Wformat -Wsign-compare 
-D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE  -c -o dgram.lo dgram.c
gcc -DHAVE_CONFIG_H -I. -I. -I../config -I../gnulib -Wall -W 
-Wparentheses -Wmissing-prototypes -Wstrict-prototypes 
-Wmissing-declarations -Wformat -Wsign-compare -D_FILE_OFFSET_BITS=64 
-D_GNU_SOURCE -c dgram.c -Wp,-MD,.deps/dgram.TPlo  -fPIC -DPIC -o 
.libs/dgram.o


dgram.c: In function `dgram_bind':
dgram.c:83: structure has no member named `ss_family'
dgram.c: In function `dgram_send_addr':
dgram.c:167: structure has no member named `ss_family'
gmake[1]: *** [dgram.lo] Error 1
gmake[1]: Leaving directory `/home/root/azwan/amanda-2.5.2p1/common-src'
gmake: *** [all-recursive] Error 1

---- stop ----

Looks like it could be a coding error, but it might be something triggered 
by the specific version of Linux that the Cobalt server is running, or the 
version of gcc provided with it.
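
One possibility (speculation on my part, not verified on a Raq): very old
glibc headers declared struct sockaddr_storage with the member named
__ss_family rather than the POSIX name ss_family. If that is what the
Cobalt's headers do, a small compatibility shim along these lines,
included before dgram.c's socket headers, might get past it:

/* hypothetical shim for pre-POSIX glibc headers; only enable this
 * after checking that <sys/socket.h> on the Raq really declares
 * __ss_family instead of ss_family */
#include <sys/socket.h>
#ifdef HAVE_OLD_SOCKADDR_STORAGE   /* define by hand after inspecting the header */
# define ss_family __ss_family
#endif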


If it helps, this is the configure script I'm using on the system to set 
up things prior to running 'make':


sh ./configure --with-user=amanda --with-group=sys --with-amandahosts \
--with-tmpdir=/var/amanda/tmp --with-config=ORBnet --without-server \
--with-debugging=/var/amanda/debug --with-shared --with-db=text \
--with-debug-days=28 --with-gnutar=/bin/tar --without-ipv6

Regards,

Craig.

--
Post by Craig Dewick (tm). Web @ "http://lios.apana.org.au/~cdewick".
Email 2 "[EMAIL PROTECTED]". SunShack @ "http://www.sunshack.org".
Galleries @ "http://www.sunshack.org/gallery2". Also lots of tech data, etc.
Sun Microsystems webring at "http://n.webring.com/hub?ring=sunmicrosystemsu".


Re: Invalid Service?

2008-04-25 Thread Stefan G. Weichinger

Tony van der Hoff schrieb:

Using Amanda 2.5.1p1 under Debian Etch, my backups work fine, and I can
recover partitions from tape using dd, etc.

However:
[EMAIL PROTECTED]:~$ sudo amrecover HomeDumps
AMRECOVER Version 2.5.1p1. Contacting server on localhost ...
NAK: amindexd: invalid service

What does this message mean, and how to fix this?


I think it means that you haven't read the docs on how to configure 
(x)inetd correctly.


http://www.amanda.org/docs/install.html#id325457

http://wiki.zmanda.com/index.php/Quick_start#Configuring_xinetd_on_the_server

Stefan


Amanda and eeePC

2008-04-25 Thread Charles Stroom

Hi all,

has anyone tried to have an amanda client running on an ASUS eeePC?

Regards,

Charles



-- 
Charles Stroom
email: charles at no-spam.stremen.xs4all.nl (remove the "no-spam.")


Invalid Service?

2008-04-25 Thread Tony van der Hoff
Using Amanda 2.5.1p1 under Debian Etch, my backups work fine, and I can
recover partitions from tape using dd, etc.

However:
[EMAIL PROTECTED]:~$ sudo amrecover HomeDumps
AMRECOVER Version 2.5.1p1. Contacting server on localhost ...
NAK: amindexd: invalid service

What does this message mean, and how to fix this?

Cheers, Tony

-- 
Tony van der Hoff| mailto:[EMAIL PROTECTED]
Buckinghamshire, England 


Amanda and ZFS

2008-04-25 Thread Nick Smith

Dear Amanda Administrators.

What dump configuration would you suggest for backing up a ZFS pool of 
about 300GB? Within the pool there are several smaller 'filesystems'.


Would you :

1.  Use a script to implement ZFS snapshots and send these to the server
as the DLE?
2.  Use tar to back up the filesystems? We do not make much use of ACLs
so tar's lack of ACL support shouldn't be an issue?
3.  Something else?

Question : If I use 2, can I still use 'amrecover'? Which AFAIK would be
           the case if I went with 1?

The host is a Sun Solaris 10 x86 box, if that is pertinent.

Many Thanks for any assistance!!

Nick Smith

Lead Software Engineer (and reluctant system administrator)
TECH OP AG
Switzerland