Re: restored amanda client errors out

2019-06-21 Thread C. Scheeder

Hi,
As far as I recall, this is expected behavior for this version of amanda.

short explanation:

amcheck does all its work as a normal user, so if that user doesn't have 
access to the full tree up to the root dir, it will complain that it can't 
access those dirs.


amdump does the backups with elevated rights, so it can access these 
dirs without a problem.


That isn't nice, but I have learned to live with it.

Cu Christoph

Am 21.06.19 um 16:00 schrieb Gene Heskett:

I sent this yesterday around noon, but it never came back.  I've also
appended additional info at the bottom.

resent msg=

I just installed the stretch version of amanda-common and amanda-client,
which should have restored one of my machines to full
backup status.

Unforch, it didn't, and this situation existed on the previous jessie
install, where a pi scatters its stuff over a large number
of mount points all attached to /, so basically nothing has actually
changed.

But here is the emailed report from last night's run:
STRANGE DUMP DETAILS:
   /-- picnc / lev 0 STRANGE
   sendbackup: start [picnc:/ level 0]
   sendbackup: info BACKUP=/bin/tar
   sendbackup: info RECOVER_CMD=/bin/tar -xpGf - ...
   sendbackup: info end
   ? /bin/tar: ./dev: directory is on a different filesystem; not dumped
   ? /bin/tar: ./proc: directory is on a different filesystem; not dumped
   ? /bin/tar: ./run: directory is on a different filesystem; not dumped
   ? /bin/tar: ./sys: directory is on a different filesystem; not dumped
   ? /bin/tar: ./media/pi/backuppi: directory is on a different filesystem; not dumped
   ? /bin/tar: ./media/pi/bootpi: directory is on a different filesystem; not dumped
   ? /bin/tar: ./media/pi/workpi1: directory is on a different filesystem; not dumped
   ? /bin/tar: ./media/pi/workpi120: directory is on a different filesystem; not dumped
   | /bin/tar: ./tmp/.X11-unix/X0: socket ignored
   | /bin/tar: ./tmp/ssh-5i6ERjqwMXjX/agent.705: socket ignored
   | /bin/tar: ./tmp/ssh-VqldgesfylC3/agent.611: socket ignored
   | Total bytes written: 8424396800 (7.9GiB, 5.2MiB/s)
   sendbackup: size 8226950
   sendbackup: end
   \
==
then in the actual report:
==
picnc   /       0  80343685  45.9  25:48  5314.9   0:25  150937.9
picnc   /boot   1        36    29  79.8   0:05   7168.4  0:00  295570.0

That level 0 should have been around 7 or 8 GB. The first 4 should have been
in the excludes file, and I can fix that easily enough, as should the last 3.
But it looks as if I'll have to make disklist entries for the pair of
SSDs plugged into /media.
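For reference, those "different filesystem" lines come from GNU tar's --one-file-system behavior, and an exclude file just keeps tar from mentioning those paths at all. A minimal sketch of how tar consumes an exclude file (the temp-dir tree below is a made-up stand-in, not the real client layout):

```shell
#!/bin/sh
# Sketch: an exclude file silences tar's per-directory messages.
# All paths here are examples created in a temp dir, not real Amanda paths.
d=$(mktemp -d)
mkdir -p "$d/root/dev" "$d/root/proc" "$d/root/home"
echo data > "$d/root/home/file"
printf '%s\n' './dev' './proc' > "$d/excludes"
# Amanda's GNUTAR program invokes tar roughly like this (it also adds
# --one-file-system, which is what produced the "not dumped" lines):
tar -C "$d/root" --exclude-from="$d/excludes" -cf "$d/backup.tar" .
tar -tf "$d/backup.tar"        # ./dev and ./proc no longer appear
rm -rf "$d"
```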

But that's not legal either: amcheck says:
ERROR: picnc: Could not access /media/pi/workpi120 (/media/pi/workpi120): Permission denied
ERROR: picnc: Could not access /media/pi/bootpi (/media/pi/bootpi): Permission denied
ERROR: picnc: Could not access /media/pi/backuppi (/media/pi/backuppi): Permission denied
ERROR: picnc: Could not access /media/pi/workpi1 (/media/pi/workpi1): Permission denied

/media is owned by root:root, but everything beyond that is pi:pi, pi being
the first user on picnc.coyote.den.
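To see which component is blocking, one can walk down the path and print each directory's mode, since the unprivileged user amcheck runs as needs execute (x) permission on every component. A sketch using a stand-in temp tree (on the real client you would run the stat loop over /media, /media/pi, ... as the amanda user):

```shell
#!/bin/sh
# Sketch: print the mode of every directory on the way down to a DLE.
# A temp tree stands in for /media/pi/workpi1.
root=$(mktemp -d)
mkdir -p "$root/media/pi/workpi1"
chmod 755 "$root/media" "$root/media/pi/workpi1"
chmod 700 "$root/media/pi"      # any mode-700 dir not owned by the amanda
                                # user yields amcheck's "Permission denied"
p=$root
for comp in media pi workpi1; do
    p="$p/$comp"
    stat -c '%a %U %n' "$p"
done
rm -rf "$root"
```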

What changed to cause this sudden fussiness? Better yet, how do I fix
this?
=end resent

There seems to be a marked disagreement between amanda and amanda.

After it has logged the no-perms errors above, then at the end of the
body of that same message (and this is after I had added those 4
locations as separate DLEs), I see this in the same run report:
picnc   /media/pi/backuppi    0     0    0    --   0:01     8.9  0:00       0.0
picnc   /media/pi/bootpi      0     0    0    --   0:00    72.9  0:00       0.0
picnc   /media/pi/workpi1     0  2603  817  31.4   5:45  7734.3  0:08  104628.4
picnc   /media/pi/workpi120   0     0    0    --   0:01     8.3  0:00       0.0

Which looks perfectly normal, since there is not in fact anything on 3 of
those partitions/directories; it's two separate SSDs on USB adaptors
plugged into that pi-3b.


'Twould be nice if the stories matched.  Helpful, even.
   
Cheers, Gene Heskett




Re: exact-match

2019-06-11 Thread C. Scheeder

Errm,
sorry to interrupt this all,
but isn't the option
 "exclude list "
or
 "exclude file "?
man amanda.conf doesn't know the option "exclude" without it being followed by 
the word "file" or "list".
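For reference, a dumptype using the documented forms might look like this sketch (the dumptype name and file path are examples, not taken from anyone's real config; check man amanda.conf for the exact parameters):

```
define dumptype comp-root-tar {
    global
    program "GNUTAR"
    # a file ON THE CLIENT holding one tar pattern per line:
    exclude list "/etc/amanda/exclude.gtar"
    # or an inline pattern:
    # exclude file "./proc"
}
```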


Christoph

Am 11.06.19 um 15:54 schrieb Nuno Dias:

On Tue, 2019-06-11 at 07:04 -0400, Nathan Stratton Treadway wrote:

On Fri, Jun 07, 2019 at 11:04:48 +0100, Nuno Dias wrote:

  I'm trying to use amanda to back up only one dir from a list of dirs
that are in the disklist file.

I run amdump like this

$ /usr/sbin/amdump  -o reserve=0 --no-taper MACHINE ^/dir/subdir/name/2019$

and with ps I can see the amdump running

/usr/bin/perl /usr/sbin/amdump  -o reserve=0 --no-taper MACHINE ^/dir/subdir/name/2019$

The problem is that instead of only one dir I have two dirs in the backup:

MACHINE:/dir/subdir/name/2019          20190606153859  0  486g  dumping (18g done (3.74%)) (15:39:26)

MACHINE:/dir/subdir/name/2019/another  20190606153859  1  244g  wait for dumping
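For comparison, what the trailing "$" is expected to do can be shown with plain grep (which is not Amanda's matcher, just an illustration of regex anchoring):

```shell
#!/bin/sh
# Illustration only: an anchored pattern selects the parent dir but not its
# sub-dir, which is what the amdump match expression was expected to do.
printf '%s\n' /dir/subdir/name/2019 /dir/subdir/name/2019/another \
    | grep -E '^/dir/subdir/name/2019$'
# prints only: /dir/subdir/name/2019
```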


Am I correct that you have many DLEs in your disklist file, and that if you
don't put the "MACHINE ^/dir/subdir/name/2019$" on the amdump line they all
get dumped?  (That is, the "match" is mostly working to restrict the
DLEs dumped; only the trailing "$" is not working?)


  Yes, to all the questions.


What version of Amanda is this?


amanda-3.5.1-16.fc29.x86_64



Nathan

----
Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -  http://www.ontko.com/
  GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
  Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Amanda Performance

2013-03-15 Thread C. Scheeder

Hi,
Summing up:
your clients have 100 Mbit NICs,
your server has a 1000 Mbit NIC,
and you are not using a holdingdisk, so as far as I recall
you are getting the maximum possible performance out of your setup.
Why?
Without a holdingdisk, amanda will fetch all your dumps one after the other,
no matter what you set inparallel to in amanda.conf.

Or has that behavior changed for newer versions of amanda?

You are limited by the speed of your client NICs: 100 Mbit/sec means max ~11 
MByte/sec,
and a short calculation leads to roughly 3 to 4 days of backup time.
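The arithmetic behind that estimate, as a quick sketch (the ~3.5 TB total is my assumption, a midpoint of the 200G-800G-per-client figures for ten clients, not a number from the original post):

```shell
#!/bin/sh
# Back-of-the-envelope: sequential dumps over 100 Mbit/s vs. total volume.
# total_gb is an assumed midpoint; ~11 MByte/s is what 100 Mbit/s yields
# in practice after protocol overhead.
total_gb=3500
mb_per_sec=11
secs=$((total_gb * 1024 / mb_per_sec))       # GB -> MB, divided by MB/s
echo "about $((secs / 86400)) days"
```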

If your NAS has a 1000 Mbit NIC, and if the systems are connected together by a
1 GBit/sec switch, then do yourself a favor and put a holdingdisk into your 
server;
I would suggest a SATA disk with around 2 times the capacity of the largest DLE 
you have.
It will cut backup time dramatically, as amanda will start dumping many hosts in 
parallel.
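A holdingdisk section in amanda.conf might look like this sketch (the path and sizes are examples; check man amanda.conf for the exact parameters):

```
holdingdisk hd1 {
    directory "/holding"     # a dedicated disk, ~2x the largest DLE
    use -100 Mb              # use all free space, minus 100 MB headroom
    chunksize 1 Gb
}
```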

But if your NAS only has a 100 Mbit NIC, or you don't have a GBit switch, you'll 
never get amanda (nor any other backup solution) any faster than it is now.
Hope that helps,
Christoph

Am 15.03.2013 07:41, schrieb Amit Karpe:

I am sharing here more info:

cpu usage

On server (Intel® Xeon® series Quad core processors @ 2.66GHz)
# ps -eo pcpu,pid,user,args | sort -r -k1 | head
%CPU   PID USER COMMAND
  6.0 26873 33   /usr/bin/gzip --fast
  4.3 26906 33   /usr/bin/gzip --fast
27.7 30002 ntop ntop
  2.1 26517 33   dumper3 DailySet2
  2.1 26515 33   dumper1 DailySet2
  1.4  1851 root [nfsiod]
  1.2  1685 nobody   /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-borneo -i
/var/run/dirsrv/slapd-borneo.pid -w /var/run/dirsrv/slapd-borneo.startpid
  1.0 27603 root ps -eo pcpu,pid,user,args
  1.0  2135 root [nfsd]

But on the client there is always 80%-90% cpu usage. So I am planning to use
"compress server fast".


parallel:
Though I am using the inparallel option in the config file, I am not sure whether
multiple dumpers or other processes are running in parallel or not!
  inparallel 30   #performance
  maxdumps 5  #performance


netusage:
I read on a forum that netusage is an obsolete option, but I have still tried to
play around with it from 8m to 8000m, with no great success. What should the
value of netusage be if my server's NIC supports 1000 Mbps?

maxdumps:
I have changed it from one to five. How do I make sure whether it's working or
not?
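One way to see whether multiple dumpers really run at once is to count them with ps while amdump is active (or simply run amstatus). The sketch below uses sleep processes as stand-ins so it can be run anywhere; on the real server you would grep for '^dumper' instead:

```shell
#!/bin/sh
# Stand-in demonstration: count identical processes running in parallel.
# During a real run:  ps -eo args | grep -c '^dumper'   (or use amstatus)
sleep 2 & sleep 2 & sleep 2 &
count=$(ps -eo args | grep -c '^sleep 2')
echo "running in parallel: $count"
wait
```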

I have tested a 15GB backup, changing the above parameters, 50+ times. I see
an improvement in performance of only 5%, i.e. I reduced backup time from
18 min to 15 min. Can someone guide me to improve it further?


Client systems: these are ten normal workstations with 4GB RAM, dual-core Xeon
2.5GHz, 100 Mbps NIC.
They have 200G to 800G of data, but the number of files is far larger.
Just to give an idea:
# find /disk1 | wc -l
647139
# df -h /disk1
FilesystemSize  Used Avail Use% Mounted on
/dev/cciss/c0d2   1.8T  634G  1.1T  37% /disk1

or
# du -sh .
202G .
# find | wc -l
707172

I have tried amplot and found these outputs:
amdump.1: https://www.dropbox.com/sh/qhh16izq5z43iqj/hx6uplXRUp/20130315094305.ps
amdump.2: https://www.dropbox.com/sh/qhh16izq5z43iqj/7IecwXLIUp/20130315105836.ps
Sorry, but I could not understand these plots. I think they just cover the first
minute of information.

Thank you to all those who are helping and answering my dumb questions.