Re: changer for more than one usb disk device
Olivier writes:
> Many many years ago, I wrote a changer that could use several vtape disks

My configuration file mentions 2008!
Re: changer for more than one usb disk device
Hi,

> Nowadays the people changing the disks frequently work at home. So it
> would be nice to plug in more than one disk and have a tape changer scan
> for more than one disk. I understand there is no changer included in
> amanda which is directly capable of that.

Many many years ago, I wrote a changer that could use several vtape disks: each disk has a "disk label" and the configuration file lists the slots associated with each disk. It is written in Perl and still works fine with my Amanda 3.3.9. I use it with disks mounted in the server, but I designed it to work with USB disks at first.

It also has a feature that will send an email to the Amanda operators if the needed disk cannot be found (but I would not rely on that part still working). It maintains a database of the various disks mounted in the system to avoid re-scanning all the disks each time; it only does a re-scan when it finds a discrepancy. A more in-depth description is available at:

https://www.cs.ait.ac.th/~on/technotes/archives/2020/09/17/managing_multiple_disks_with_virtual_tapes_for_amanda_ttchg-vdisktt/index.html

I think that Amanda has evolved a lot since I wrote that and can now use many disks, but I never really explored the need to change my configuration, as my script is doing what I need. I run my Amanda server on FreeBSD (13.2-p5, I did the security update this afternoon, yeah!) but there is nothing specific to FreeBSD as far as I know (apart from the disk mount-points).

As noted on the web page, it only uses one disk at a time (I wrote it a long time ago, when vtapes were pretty new and multi-changer was not even a consideration). I can share if you want.

Best regards,

Olivier
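For illustration only, the slot-to-disk mapping described above might look something like this. This is a hypothetical sketch: the file name, keywords, and labels are all made up and are not the actual format of the changer's configuration file:

```
# hypothetical chg-vdisk style mapping: one labeled disk per group of slots
disk  USB-A   slots 1-27
disk  USB-B   slots 28-54
disk  USB-C   slots 55-81
```

The idea is simply that the changer can tell, from the slot number Amanda asks for, which labeled disk has to be mounted.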
Re: Error with amrecover
So, I did some more digging: the problem came from a bad log file that contains the line:

DONE taper WARNING driver Taper protocol error

This line caused the Perl module Catalog.pm to die at line 764, because $str is empty and Perl cannot apply a regex replacement to it. After I removed that specific log file, amrecover runs fine.

The next question is what that warning message means in the log. I can provide the faulty log file and any other needed information. I can also file a bug report.

Thank you

Olivier
--
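For anyone hitting the same crash, a hedged sketch of how to locate the trace log(s) carrying the offending line before moving them aside. The directory below is a throw-away stand-in populated with fake log lines; on a real server you would point grep at your configuration's logdir:

```shell
# Stand-in log directory with one bad and one good trace log
logdir=$(mktemp -d)
printf 'DONE taper WARNING driver Taper protocol error\n' > "$logdir/log.20230321170230.0"
printf 'SUCCESS taper guppy1000 /home 20230322 1 [ok]\n'  > "$logdir/log.20230322170230.0"

# List only the files that would make Catalog.pm die
grep -l 'Taper protocol error' "$logdir"/log.*
```

Moving the listed files out of the log directory (rather than deleting them) keeps the evidence around for a bug report.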
Error with amrecover
Hi,

Lately, when trying to do an amrecover, I get the following error message:

Got no header and data from server, check in amidxtaped.*.debug and amandad.*.debug files on server

I could not see anything in amandad.*.debug. In amidxtaped.*.debug I found a number of lines saying:

Warning: no log files found for tape CSIM-set-372 written 2023-03-21 17:02:30
Warning: no log files found for tape CSIM-set-371 written 2023-03-21 17:02:30

Most of the lines correspond to tapes that have not been used for some time or are marked no-reuse. A couple of months ago, I had a disk-full error on the disk holding /var/amanda. I may have tried to free some space by deleting older files, hence deleting the so-called log file for tape CSIM-set-372 and the like. But I could not find any log file corresponding to newer tapes either.

What did I do wrong?

Best regards,

Olivier
--
Re: Amanda is crashing on [runtar invalid option: -]
Nathan Stratton Treadway writes:

> I have had Amanda running for over a decade, yesterday I had no issue at
> all but last night, my backups for Ubuntu machines started crashing
> consistently with the error:
> strange(?): runtar: error [runtar invalid option: -]
>
> The just-released amanda package upgrade seems to have a regression for
> GNUTAR DLEs; see:
> https://bugs.launchpad.net/debian/+source/amanda/+bug/2012536/

Thank you Nathan, that did the trick. Now I have to understand what performed the upgrade, as these servers have no automatic upgrades. But that is not an Amanda problem.

Olivier
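For the "what performed the upgrade" question: on Debian/Ubuntu, dpkg keeps its own history, so the exact moment the amanda packages changed is recorded even when nobody ran apt by hand. A hedged sketch, using a made-up log line as a stand-in for /var/log/dpkg.log:

```shell
# Stand-in for /var/log/dpkg.log (one fabricated upgrade record)
log=$(mktemp)
cat > "$log" <<'EOF'
2023-03-22 06:41:02 upgrade amanda-client:amd64 1:3.5.1-8 1:3.5.1-8ubuntu0.1
EOF

grep ' upgrade amanda' "$log"
# on the real machine:  grep ' upgrade amanda' /var/log/dpkg.log*
# and, until the fixed package lands, the working version can be pinned:
#   sudo apt-mark hold amanda-client amanda-common
```

If unattended-upgrades is installed, its own logs under /var/log/unattended-upgrades/ usually tell the same story.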
Amanda is crashing on [runtar invalid option: -]
Hello,

I have had Amanda running for over a decade; yesterday I had no issue at all, but last night my backups for Ubuntu machines started crashing consistently with the error:

strange(?): runtar: error [runtar invalid option: -]

That even happens on a machine that is currently unused. The sendbackup.debug:

Thu Mar 23 12:42:52.540023056 2023: pid 176238: thd-0x55a338876200: sendbackup: pid 176238 ruid 34 euid 34 version 3.5.1: start at Thu Mar 23 12:42:52 2023
Thu Mar 23 12:42:52.540038593 2023: pid 176238: thd-0x55a338876200: sendbackup: Version 3.5.1
Thu Mar 23 12:42:52.540203704 2023: pid 176238: thd-0x55a338876200: sendbackup: pid 176238 ruid 34 euid 34 version 3.5.1: rename at Thu Mar 23 12:42:52 2023
Thu Mar 23 12:42:52.540260443 2023: pid 176238: thd-0x55a338876200: sendbackup: Parsed request as: program `GNUTAR'
Thu Mar 23 12:42:52.540265622 2023: pid 176238: thd-0x55a338876200: sendbackup: disk `/home2/docker/volumes'
Thu Mar 23 12:42:52.540268838 2023: pid 176238: thd-0x55a338876200: sendbackup: device `/home2/docker/volumes'
Thu Mar 23 12:42:52.540271771 2023: pid 176238: thd-0x55a338876200: sendbackup: level 1
Thu Mar 23 12:42:52.540274728 2023: pid 176238: thd-0x55a338876200: sendbackup: since NODATE
Thu Mar 23 12:42:52.540277637 2023: pid 176238: thd-0x55a338876200: sendbackup: options `'
Thu Mar 23 12:42:52.540280674 2023: pid 176238: thd-0x55a338876200: sendbackup: datapath `AMANDA'
Thu Mar 23 12:42:52.540314487 2023: pid 176238: thd-0x55a338876200: sendbackup: start: guppy1000:/home2/docker/volumes lev 1
Thu Mar 23 12:42:52.540326715 2023: pid 176238: thd-0x55a338876200: sendbackup: Spawning "/bin/gzip /bin/gzip --best" in pipeline
Thu Mar 23 12:42:52.540482259 2023: pid 176238: thd-0x55a338876200: sendbackup: gnutar: pid 176240: /bin/gzip
Thu Mar 23 12:42:52.540495845 2023: pid 176238: thd-0x55a338876200: sendbackup: pid 176240: /bin/gzip --best
Thu Mar 23 12:42:52.540532571 2023: pid 176238: thd-0x55a338876200: sendbackup: doing level 1 dump as listed-incremental from '/var/lib/amanda/gnutar-lists/guppy1000_home2_docker_volumes_0' to '/var/lib/amanda/gnutar-lists/guppy1000_home2_docker_volumes_1.new'
Thu Mar 23 12:42:52.541308063 2023: pid 176238: thd-0x55a338876200: sendbackup: Spawning "/usr/lib/amanda/runtar runtar normal /bin/tar --create --file - --directory /home2/docker/volumes --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/guppy1000_home2_docker_volumes_1.new --sparse --ignore-failed-read --totals ." in pipeline
Thu Mar 23 12:42:52.541356931 2023: pid 176241: thd-0x55a338876200: sendbackup: Dupped file descriptor 3 to 11
Thu Mar 23 12:42:52.541474547 2023: pid 176238: thd-0x55a338876200: sendbackup: gnutar: /usr/lib/amanda/runtar: pid 176242
Thu Mar 23 12:42:52.541498736 2023: pid 176238: thd-0x55a338876200: sendbackup: shm_ring_link /amanda_shm_control-176237-0
Thu Mar 23 12:42:52.541535955 2023: pid 176238: thd-0x55a338876200: sendbackup: am_sem_open 0x7f67cb7e6000 1
Thu Mar 23 12:42:52.541546592 2023: pid 176238: thd-0x55a338876200: sendbackup: am_sem_open 0x7f67cb7e5000 1
Thu Mar 23 12:42:52.541553997 2023: pid 176238: thd-0x55a338876200: sendbackup: am_sem_open 0x7f67cb7e4000 1
Thu Mar 23 12:42:52.541561135 2023: pid 176238: thd-0x55a338876200: sendbackup: am_sem_open 0x7f67cb7e3000 1
Thu Mar 23 12:42:52.541565219 2023: pid 176238: thd-0x55a338876200: sendbackup: shm_ring_producer_set_size
Thu Mar 23 12:42:52.542263918 2023: pid 176241: thd-0x55a338876200: sendbackup: Started index creator: "/bin/tar -tf - 2>/dev/null | sed -e 's/^\.//'"
Thu Mar 23 12:42:52.544787802 2023: pid 176238: thd-0x55a338876200: sendbackup: Started backup
Thu Mar 23 12:42:52.544826375 2023: pid 176238: thd-0x55a338887060: sendbackup: fd_to_shm_ring
Thu Mar 23 12:42:52.548988463 2023: pid 176238: thd-0x55a338876200: sendbackup: 119: strange(?): runtar: error [runtar invalid option: -]
Thu Mar 23 12:42:52.549723621 2023: pid 176241: thd-0x55a338876200: sendbackup: Index created successfully
Thu Mar 23 12:42:53.550073702 2023: pid 176238: thd-0x55a338876200: sendbackup: critical (fatal): error [no backup size line]
/usr/lib/x86_64-linux-gnu/amanda/libamanda-3.5.1.so(+0x39067)[0x7f67cb820067]
/lib/x86_64-linux-gnu/libglib-2.0.so.0(g_logv+0x21c)[0x7f67cb70355c]
/lib/x86_64-linux-gnu/libglib-2.0.so.0(g_log+0x93)[0x7f67cb703743]
/usr/lib/amanda/sendbackup(parse_backup_messages+0x46f)[0x55a33859273f]
/usr/lib/amanda/sendbackup(main+0x1332)[0x55a33858f512]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7f67cb4b9083]
/usr/lib/amanda/sendbackup(_start+0x2e)[0x55a338590d8e]

Note that the sendsize is fine. Thanks for any idea on what I should look for.

Best regards,

Olivier
Re: garbage collect: free orphaned chunks from disks?
Hi,

> Given a series of dumps like
>
> a-host a-disk 1 1 1 0 1 1
>
> the left most level 1 dumps are worthless, because we already dropped
> the "base" level 0.

While the level 1 dumps for a-disk on that tape/slot are indeed dead weight, there could be DLEs for other disks/hosts on the same tape/slot that are still valid. I don't think Amanda has any way to free a specific DLE from a tape/slot; a tape/slot can only be erased and reused as a whole.

Best regards,

Olivier
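Before recycling a slot, 'amadmin CONF find' can show everything the slot still holds: it prints one line per dump image, with the tape label in its own column. A hedged sketch using made-up sample output (the real column layout may differ slightly between versions):

```shell
# Fabricated 'amadmin CONF find' output: date host disk level tape file part status
find_out=$(mktemp)
cat > "$find_out" <<'EOF'
2023-03-21 alpha /home 1 CSIM-set-372 3 1/1 OK
2023-03-21 beta  /var  0 CSIM-set-372 4 1/1 OK
2023-03-22 alpha /home 1 CSIM-set-373 1 1/1 OK
EOF

# Count the dump images still living on the slot you want to recycle
grep -c CSIM-set-372 "$find_out"
```

Here two different DLEs are still on CSIM-set-372, which is exactly why the slot cannot be partially freed.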
(OT) What backup to use with Windows
Hi,

I know this is not about Amanda, but it is about backup. I am trying to find a suitable solution for Windows/Mac/Linux desktops.

Amanda has a major flaw: it works on a pull schedule, so the client must be available when the server starts the backup. But clients tend to have unpredictable schedules, so the backup must be initiated from the client side.

A few years ago, I investigated CrashPlan; it had a free server (and the client is always free), but since then they wanted to make a profit by selling cloud space, so there is no local server anymore. Most of the solutions I can see are limited to cloud storage, or a pull schedule, or do not support Linux/FreeBSD, or do not have any kind of scheduling...

I am looking at restic, but it is really dry for a secretary, and it needs a lot of manual installation (so the user cannot be independent). (It has a huge pro though: the server side is SSH, so no work to be done on the server.)

So, what are you using? What would you recommend?

TIA

Olivier
--
Re: Problem to locate method Taper::Splitter
> I upgraded to Amanda 3.3.9 on FreeBSD 13.1 and since, the amdump
> terminates with the following diagnostic in the mail summary:
>
> FAILURE DUMP SUMMARY:
> taper: FATAL Can't locate object method "new" via package
> "Amanda::Xfer::Dest::Taper::Splitter" (perhaps you forgot to load
> "Amanda::Xfer::Dest::Taper::Splitter"?) at
> /usr/local/lib/perl5/site_perl/Amanda/Taper/Scribe.pm line 764.
>
> Before I try digging in the Perl scripts, I want to know if there could
> be something I overlooked.

So... I tried to remove Amanda and rebuild it from the FreeBSD ports, but it came up missing the Perl module Config.pm. At that stage, I suspected something fishy with the FreeBSD ports, so I ended up copying all the Perl libraries for Amanda from another known working server, and now everything is fine.

I think that the other server works because it was not built as a 3.3.9 server, but had been upgraded from many prior versions, and the missing Perl bits were installed by those previous versions. I strongly suspect that the port for Amanda on FreeBSD has some issues. Also, 3.3.9 is the latest version available as a port for FreeBSD.

Best regards,

Olivier
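Before resorting to copying libraries from another server, a quick loadability check from the shell can tell whether the Perl pieces are installed at all. A hedged sketch; the module names are taken from the error message, and the perl invocation itself is generic:

```shell
# Try to load each module the taper needs; a clean install should
# print "... loads" for both, a broken port prints "... missing or broken".
for m in Amanda::Xfer Amanda::Taper::Scribe; do
    perl -e "use $m;" 2>/dev/null \
        && echo "$m loads" \
        || echo "$m missing or broken"
done
```

Running this as the amanda user also catches permission problems on the installed .pm files, which a root-only check would miss.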
Problem to locate method Taper::Splitter
Hi,

I upgraded to Amanda 3.3.9 on FreeBSD 13.1 and since then, amdump terminates with the following diagnostic in the mail summary:

FAILURE DUMP SUMMARY:
taper: FATAL Can't locate object method "new" via package "Amanda::Xfer::Dest::Taper::Splitter" (perhaps you forgot to load "Amanda::Xfer::Dest::Taper::Splitter"?) at /usr/local/lib/perl5/site_perl/Amanda/Taper/Scribe.pm line 764.

Before I try digging in the Perl scripts, I want to know if there could be something I overlooked.

TIA,

Olivier
--
Re: Problem to back up docker volumes
Diego Zuccato writes:

> Try setting an explicit ACL for 'amanda' user instead of just giving
> o+x. And check permissions on the full path: too often the problem is in
> the permissions of an ancestor folder but you keep looking at the
> file...

Been there, done that. Too many times :( And you were spot on.

Thank you,

Olivier

> On 18/11/2021 05:06, Olivier wrote:
>> Hi,
>>
>> I know it is a bit off topic, but I have a problem to back up my docker
>> volumes (.../docker/volumes):
>>
>> sudo -u amanda amcheck -c normal puffer1000
>>
>> Amanda Backup Client Hosts Check
>>
>> ERROR: puffer1000: [puffer1000: Could not access /home2/docker/volumes
>> (/home2/docker/volumes): Permission denied]
>>
>> I suspect it is something to do with some access control list on Linux
>> (I am more used to FreeBSD); I checked but saw nothing glaring to my
>> face:
>>
>> on@puffer:~$ sudo getfacl /home2/docker/volumes
>> [sudo] password for on:
>> getfacl: Removing leading '/' from absolute path names
>> # file: home2/docker/volumes
>> # owner: root
>> # group: root
>> user::rwx
>> group::---
>> other::--x
>>
>> I even tried to create a test directory, gave it the same ACL, and
>> could amcheck and amdump that test directory.
>>
>> I must say that I am at a loss here on what could be the main cause for
>> this issue. Any idea or pointer will be very welcome.
>>
>> TIA,
>>
>> Olivier
>> --
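The "check every ancestor" advice can be scripted: walk up from the DLE and show the permissions at each level, since a missing +x anywhere on the path blocks the amanda user even when the leaf looks fine. A minimal sketch using a throw-away tree (the real path in the thread was /home2/docker/volumes; on a real system `namei -l` from util-linux does the same walk):

```shell
# Build a stand-in tree with an ancestor that deliberately lacks o+x
base=$(mktemp -d)
mkdir -p "$base/docker/volumes"
chmod 710 "$base/docker"          # drwx--x--- : no x for "other"

# Print permissions at every level from the DLE up to the top
p="$base/docker/volumes"
while [ "$p" != "$base" ]; do
    ls -ld "$p"
    p=$(dirname "$p")
done

# A targeted fix in the spirit of Diego's suggestion (user name assumed):
#   setfacl -m u:amanda:x "$base/docker"
```

The blocked ancestor shows up immediately in the listing, whereas `getfacl` on the leaf alone never would.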
Problem to back up docker volumes
Hi,

I know it is a bit off topic, but I have a problem to back up my docker volumes (.../docker/volumes):

sudo -u amanda amcheck -c normal puffer1000

Amanda Backup Client Hosts Check

ERROR: puffer1000: [puffer1000: Could not access /home2/docker/volumes (/home2/docker/volumes): Permission denied]

I suspect it has something to do with some access control list on Linux (I am more used to FreeBSD); I checked but saw nothing glaring to my face:

on@puffer:~$ sudo getfacl /home2/docker/volumes
[sudo] password for on:
getfacl: Removing leading '/' from absolute path names
# file: home2/docker/volumes
# owner: root
# group: root
user::rwx
group::---
other::--x

I even tried to create a test directory, gave it the same ACL, and could amcheck and amdump that test directory.

I must say that I am at a loss here on what could be the main cause for this issue. Any idea or pointer will be very welcome.

TIA,

Olivier
--
Re: amanda fails
Gene Heskett writes:

> I should probably point out that ALL my problems with bad crc's of the
> holding disk files went away when I replaced the holding disk, which was
> apparently an SMR disk, with an SSD of 1/4 the size of the spinning rust
> disk. And my backup times to get all 6 machines were sped up from
> around two hours to under 30 minutes. All by the non-SMR, SSD holding
> disk. So at the very least, I can sure recommend an SSD as a holding
> disk.

My 2 cents about using an SSD for the holding disk: over 15 years of running Amanda on hard disks, the only disk problem I ever had was with the holding disk, and the disk got damaged precisely in the holding-disk area. It seems that the holding-disk function puts tremendous wear on the disk (especially with compression on the server, I presume), to the point of damaging a hard disk. I am wondering how an SSD will cope.

Best regards,

Olivier
Ordering the dumps?
Hello,

My backups have not been running for some time due to a combination of work from home and a general power failure. Now that I have restarted them, every DLE fails with an error like the ones below:

localhost / lev 0 FAILED [dumps too big, 171153 KB, but cannot incremental dump skip-incr disk]
localhost /var lev 1 FAILED [dumps way too big, 124813 KB, must skip incremental dumps]

I have 1.6 TB of holding disk and no DLE is bigger than 1.1 TB, so any DLE should fit inside the holding disk and on the tapes of one run.

I know there is a way for Amanda to prioritize the DLEs, but I am not sure what configuration I should use.

TIA,

Olivier
--
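For reference, the prioritization knob alluded to above is the standard `priority` dumptype parameter (low/medium/high or an integer), which tells the planner which DLEs to favour when it cannot fit everything. Whether it resolves this particular "dumps too big" planner message is a separate question; a sketch, with the dumptype names made up:

```
# amanda.conf fragment: give the critical DLEs precedence in the schedule
define dumptype comp-user-high {
    comp-user           # assumed existing base dumptype
    priority high
}
```

The `skip-incr` mentioned in the first error also matters: with skip-incr set, the planner has no incremental fallback when a level 0 does not fit, so dropping that option from the dumptype is worth considering too.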
Re: amrecover prompting for vtapes
Neil Richardson writes:

> tl;dr: How to prevent amrecover from prompting for the next vtape?
>
> I feel like I'm on the edge of getting this but am still missing a piece:
> when Amanda is using folders as vtapes on a connected storage device (so all
> the tapes are always available), amdump doesn't prompt for the operator to
> change to the next tape when one gets filled. So how do I get amrecover to
> also not prompt me to change disks?
>
> I thought it would be related to which changer was configured, but it isn't
> working in my current configuration, and when I tried changing it the values I
> used didn't work. I tried reviewing the documentation and searching the list
> archives but haven't found anything helpful yet. Can someone please help me
> understand what I'm missing?

If you use the same configuration for amrecover that you used for amdump, and if the index and tape servers are the proper ones (-s and -t), amrecover will use the same tape changer that amdump used and will not prompt you for new tapes. At the beginning of the recovery, it will ask you to mount the first tape, but normally you only have to answer yes and the rest is automatic.

For example, here a DLE is split across 11 tapes, but amrecover never asks for any tape beyond the first one:

amrecover> extract
Extracting files using tape drive chg-compat:chg-vdisk on host localhost.
The following tapes are needed: CSIM-set-198 CSIM-set-199 CSIM-set-200 CSIM-set-202 CSIM-set-203 CSIM-set-204 CSIM-set-206 CSIM-set-207 CSIM-set-208 CSIM-set-246 CSIM-set-255
Extracting files using tape drive chg-compat:chg-vdisk on host localhost.
Load tape CSIM-set-198 now
Continue [?/Y/n/s/d]? y
Restoring files into directory /holding/recover
All existing files in /holding/recover can be deleted
Continue [?/Y/n]? y

Best regards,

Olivier
--
Automatically split large DLE
Hi, I have a DLE that has grown larger (1.3TB) than the total vtape space allocated per day (1.1TB). Is there a way for Amanda to automatically split a DLE when it gets too large for a single run? Best regards, Olivier --
Re: LVM for vtapes
Jon,

> Interesting discussion in other threads got me wondering
> whether I should have made some other choices when setting
> up my vtape environment. Particularly whether I should
> have used LVM (Logical Volume Management) to create one
> large filesystem covering my multiple dedicated disks.
>
> Its a topic I do not recall being discussed, pros & cons.

I am using 7 disks of 3 (or is that 4) and 6 TB (I should upgrade them all to 6 TB soon), almost dedicated to vtapes (the last disk also has a copy of the deleted accounts). I have them configured as individual disks. The size of my vtapes is about 100 GB and I am using a small chunk size, so my disks end up being at least 80% full.

When I designed my vtape architecture, I decided to keep each disk individual so that it can be put offline after use. My idea was to have a system that could prompt an operator to "mount a disk" before the backup, and the disk could be manually unmounted and safely stored each day. It takes advantage of the automount service on FreeBSD. Mounting could be a USB disk, or hot-swap. I never went very far in the implementation.

I wrote all that many years ago, when vtapes were new and limited to a single directory; that is why I wrote my own tape changer. I knew about the risk of losing a disk holding a good portion of consecutive backups. But what I had in mind was:

- have the system as simple and as portable as possible, so I can shove a disk into another machine and extract contents manually (during the great flood of Bangkok in 2011, I moved all the servers and also took all my hard disks from the Amanda backup, but I did not need to move the rack-mounted server itself);

- a side advantage of my own tape changer is that I can keep the older disks (each disk has an individual label, like any vtape has a label) (I have upgraded them from 500 GB to 1 TB to 3 TB and soon to 6 TB) and the vtapes are still known in the tapelist (they are marked no-reuse). If the need arises, I can still remount an old disk.

So far (10+ years) the only disk I had failing was the disk holding the holding partition; I guess it was because of excessive usage. I understand that vtapes have evolved since I started using them, but my system works for me, so I never took the time to search any further.

Best regards,

Olivier
Re: DLE splitting
Hi,

> My GenesAmandaHelper does not do anything to the vtape, all it does is
> run amanda as a child so it knows when amanda is done. at which point it
> does its thing copying the now up-to-date indices and configs to that
> same vtape amanda just made. so they are valid for the vtape just
> remade, as opposed to always being a day out of date.
>
> No one else has publicly discussed how they've handled that so I have no
> clue what others are doing, if anything to prevent a days indexes loss
> if the drive or the amandatapes drive pukes and you have to recover to
> bare metal. I also delete from the index database I keep, those files

At the end of each dump, I rsync the index and related files to another machine; I also send that information to myself by mail (and my mail is automatically replicated in at least 3 places). The reason is the one you mentioned: loss of the main Amanda index disk.

Olivier

> that represent a vtape that has been re-used. So I have a duplicate of
> amandas index stuff that takes up a few gigabytes on the main drive in
> addition to a backup copy on the vtape. Belt and suspenders approach.
> Recovering the latest indice and putting it where amanda keeps them,
> enables access to the whole /amandatapes drive in one swell foop.
>
> Copyright 2019 by Maurice E. Heskett
> Cheers, Gene Heskett
--
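The rsync-plus-mail scheme above can be sketched roughly as follows. All paths, host names, and the mail step are assumptions for one site; the runnable part below snapshots a throw-away directory instead of a real Amanda config:

```shell
set -eu
CONF=daily

# Stand-in for the real config/index tree, e.g. /usr/local/etc/amanda/$CONF
SRC=$(mktemp -d)
echo "CSIM-set-372 20230321 reuse" > "$SRC/tapelist"

# Bundle the tree into a dated tarball that is easy to ship or mail
SNAP="$(mktemp -d)/amanda-$CONF-$(date +%Y%m%d).tar.gz"
tar czf "$SNAP" -C "$SRC" .
tar tzf "$SNAP" | grep -q tapelist && echo "snapshot OK"

# On a real server one would then replicate it off-machine, e.g.:
#   rsync -a "$SNAP" otherhost:/srv/amanda-mirror/
#   uuencode "$SNAP" "$(basename "$SNAP")" | mail -s "amanda snapshot" operator
```

Running this from amdump's post-run hook (or simply after amdump in cron) keeps the copy as fresh as the vtapes themselves.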
Problem with directory
Hi,

While upgrading my Amanda server to 3.3.9, I am facing the following problem. All the Amanda server configuration files, tapelist, etc. are in /usr/local/etc/amanda, and that directory must belong to amanda:amanda so that the server can store its data. The client file amanda-security.conf is also in the same directory, and the Amanda client needs that directory to be owned by root for security reasons.

How can I solve that conflict? It would be better to have the client file somewhere else.

I am running Amanda on FreeBSD, hence /usr/local/etc.

Thanks in advance,

Olivier
--
Compression time and amplot margin
Hi, I have a couple of questions.

In the Amanda mail report, I cannot find the time it takes to compress the data. I am trying to see where Amanda spends most of its time (dump across the network, compression, taper).

When I use amplot for PostScript output, the top and right captions fall mostly outside the page frame. I "think" I once solved that problem, but it would have been years ago and the solution is lost. How can I reframe the output of amplot so that the captions stay on the page?

Best regards,

Olivier
--
Re: How's amanda feeling these days?
Dave Sherohman writes:

> On Tue, Sep 29, 2020 at 10:55:51AM +0700, Olivier wrote:
>> I think that independent disks are better than any RAID thing that
>> would either waste some storage, or render the array unusable if any
>> one disk gets faulty.
>
> If we go disk-based/vtape for storage, I'd be looking at RAID 1 for
> protection against disk failure (which we've had trouble with in
> production systems in the past). Definitely not going to use anything
> that would increase exposure to data loss, of course.

But if you don't use RAID, you can double the number of backups you are keeping, which includes older versions. That is a risk calculation between having secure backups and losing the backup data at the same time as the primary data.

>> I have 8 disk bays, one disk for system, holding disk and a copy of
>> the final state of the accounts about to be deleted. Another disk for
>> some additional holding space and a second copy of the deleted
>> accounts and the 6 other disks for vtapes.
>
> Are those hot-swappable? I've made a couple half-hearted attempts at
> hot-swap disk in the past, but never managed to make it work myself.

No, but I never dug into that very much. I need to replace a disk or mount an older disk about twice a year, so I can reboot at that time.

>> I have been looking at some solution that would run on the user's
>> workstation and that would push a backup on some server and use Amanda
>> to keep a backup of that server.
>
> I've got a couple developers who are backing up their workstations via
> rsync to the FreeNAS server, which sounds like basically the same
> concept. They seem happy with it, although we've never needed to do a
> restore on any of that data.

Developers can also be told to leave their workstations on all the time; they can understand the reason why, and then you can use Amanda. Administrative staff do not always understand these details.

Best regards,

Olivier
--
Re: How's amanda feeling these days?
Dave,

> So let's see what the current users have to say. Is a new amanda
> installation still a sane choice in 2020?

I cannot speak for a new installation, as I have been using Amanda for over two decades. But for backing up servers, a mix of unixes, I find it very reliable and dependable. I have not been using the latest bells and whistles; I have developed some of my own to fit my needs (a nice thing is that part of Amanda is written in Perl and integrates well with some admin scripts here and there). I am far from dealing with your amount of servers and data, having about 15 machines and about 10 TB. On a 4-core, 4 GB RAM machine, backups take only a few hours every night, compression being done on the client or on the Amanda server, about half and half.

> And what kind of hardware specs should I be looking at? Is tape still
> king, or is everyone backing up to hard drives now?

One of my concerns would be the network bandwidth: you'd need at least a 10 Gbps network to push that amount of data through in one day. That is where Amanda trying to balance the size of the backups across a cycle may come in handy.

I had been using QIC-type tapes for years and changed to vtapes on disk about 12 years ago. As I was not satisfied that it could only use one disk at that time (or did I misunderstand something?), I developed my own tape changer that can work with multiple independent disks. By choice, I think that independent disks are better than any RAID thing that would either waste some storage, or render the array unusable if any one disk gets faulty. And since then, I have been installing bigger disks in the same server when I need more space, from 500 GB, to 1.5 TB, 3 TB and lately one 6 TB (I tried with only one because I was not sure the motherboard would recognize it). The old disks are stored so I can retrieve the data from them if needed; I just have to plug the disk into the Amanda server.

I have 8 disk bays: one disk for the system, holding disk and a copy of the final state of the accounts about to be deleted; another disk for some additional holding space and a second copy of the deleted accounts; and the 6 other disks for vtapes. By choice too, I am using smallish vtapes so they are almost all full; that limits the fragmentation (most of the disks end up with less than 1 MB of empty space).

I also use Amanda to back up one Windows machine, through the Samba method. Amanda is not very well suited to backing up workstations when most of the users will turn off their machines after work. I don't know if that is part of your plan, but I have been looking at some solution that would run on the user's workstation and push a backup to some server, and then use Amanda to keep a backup of that server.

One thing I implemented a long time ago is a replication of the Amanda configuration and indexes: at the end of each dump, I push a copy of that data with rsync and also send it to myself by email (email being replicated automatically on the email server). With the disks and the indexes, I can manually extract about any backup from a machine that does not even have Amanda running. I chose to use tar rather than dump for that compatibility advantage.

Best regards,

Olivier
Is amanda busy
Hello,

Is there a command that can be used to check whether Amanda is busy or not? For example, do not launch the daily backup if the previous one is still running, or do not reboot the Amanda server (network stability issue) while a backup is being done.

TIA. Best regards,

Olivier
--
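A hedged sketch of one common answer: amstatus can give a detailed view, but for gating a cron job or a reboot script, simply checking for a running amdump process is often enough (this treats "an amdump process exists" as busy, which ignores in-flight amflush/amrecover):

```shell
# Guard: refuse to proceed while an amdump process is running.
if pgrep -x amdump > /dev/null 2>&1; then
    echo "amanda busy"
else
    echo "amanda idle"
fi
```

For a per-configuration check, parsing `amstatus CONF --summary` would be more precise, at the cost of depending on its output format.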
Re: How to "unlabel" a tape
Nathan Stratton Treadway writes:

> On Fri, Sep 18, 2020 at 10:53:44 +0700, Olivier wrote:
>> I know there is amadmin no-reuse, but suppose the tape had been
>> completely destroyed and is not readable anymore, there should be a way
>> to tell Amanda it should completely forget about that tape, remove any
>> index it can have about the tape.
>>
>> What would be the command then?
>
> You're looking for "amrmtape".

Thank you, that was of course what I was looking for.

Best regards,

Olivier

> Nathan
>
> Nathan Stratton Treadway - natha...@ontko.com - Mid-Atlantic region
> Ray Ontko & Co. - Software consulting services - http://www.ontko.com/
> GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt ID: 1023D/ECFB6239
> Key fingerprint = 6AD8 485E 20B9 5C71 231C 0C32 15F3 ADCD ECFB 6239
> --
How to "unlabel" a tape
Hi,

I have been trying to find the command opposite to amlabel: completely remove a tape from Amanda. I know there is amadmin no-reuse, but suppose the tape had been completely destroyed and is not readable anymore; there should be a way to tell Amanda to completely forget about that tape and remove any index it may have about the tape.

What would be the command then?

TIA,

Olivier
--
Timeout during estimate
Hi,

I have an Amanda client that takes more than 4 hours to do the estimate. The estimate is computed correctly, but when amandad on the client tries to send the estimate back to the server, the packet times out.

I kind of remember that there is a timeout parameter that I need to tweak before recompiling Amanda, but I can't remember if it is on the client or on the server. I tend to think it is on the server, but a definitive answer is welcome.

Thanks in advance,

Olivier
--
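For what it's worth, in current Amanda versions the estimate timeout is a server-side runtime setting, `etimeout` in amanda.conf, so a recompile should not be needed. A sketch (the value is only an example sized for a 4+ hour estimate):

```
# amanda.conf on the server: seconds allowed per DLE for estimates;
# a negative value means a total budget for all DLEs on one client.
etimeout 18000        # 5 hours
```

The related `dtimeout` (data timeout during the dump itself) and `ctimeout` (amcheck connect timeout) live in the same file, which is an easy way to confirm which side owns the knob.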
Re: holding disk too small?
"Stefan G. Weichinger" writes: > So far it works but maybe not optimal. I consider recreating that > holding disk array (currently RAID1 of 2 disks) as RAID0 .. Unless your backups are super critical, you may not need RAID 1 for holding disk. Also consider that holding dick puts a lot of mechanical stress on the disk: I have seen at least one case where the disk did start failing and developing bad blocks in the holding disk space while the rest of the disk was OK. Best regards, Olivier
Re: Big data
"Stefan G. Weichinger" writes: > Am 21.11.19 um 08:59 schrieb Olivier: >> Hi, >> >> One of the machine I back-up with Amanda is used by our students to run >> AI, machine learning, etc. Each coming with huge set of data. When I >> know that it will be coming, I may exclude the student's directory, but >> I am not always aware. > > Isn't it possible to configure a specific directory for this generated > data and exclude that? Yes, I do that, but unless I check every day, I can come back ine day and find out that one new big data exists and I have to configure it again manually. I would like some more automated tools. Best regards, Olivier --
Big data
Hi,

One of the machines I back up with Amanda is used by our students to run AI, machine learning, etc., each coming with huge sets of data. When I know that it will be coming, I may exclude the student's directory, but I am not always aware.

When the data size gets greater than the maximum 770 GB allocated for one single Amanda run, the dump will fail. Is there a way to tell Amanda to disregard a DLE if the estimated size is too big? Is there a way to automatically split one DLE into two when it is detected that it will be too large?

And what about deduplication on the client side? How should the backup be implemented? If two DLEs contain the same file, that file exists in a single location on the physical hard disk, but a backup of each DLE will have a copy of it, so the size of the backup will be bigger than the original disk. Am I right?

Best regards,

Olivier
--
Re: Best size for vtapes
Jon LaBadie writes: > Do you really expect all (or most) of your vtapes to be 100% full? If so, > I do not think you have allocated enough total space. > > Amanda has one provision for dealing with such situations, the holding > disk. Mine is dedicated, and about the size of four vtapes. > > Another is "runtapes". Oh, or do you plan to run exactly the number of > vtapes that you need for your chosen dumpcycle? Of course not. runtapes can be as big as needed, provided that vtapes are small enough. >> So I prefer to stick with the amount of vtapes equal to the real amount >> of disk space. > > Then, from my experience, you will be leaving about 1/3 of your disk empty.
/dev/ada3p1  2.6T  2.2T  234G   91%  /automnt/ada3
/dev/ada4p1  2.6T  2.3T  121G   95%  /automnt/ada4
/dev/ada2p1  2.6T  2.4T  8.5G  100%  /automnt/ada2
/dev/ada5p1  2.6T  2.4T   60G   98%  /automnt/ada5
/dev/ada1p1  2.6T  2.0T  458G   82%  /automnt/ada1
/dev/ada6p1  2.6T  2.3T   86G   97%  /automnt/ada6
Disks are 3TB with 27 vtapes in each disk and 10GB per chunk. I am surprised at the 100% utilisation reported for ada2, even if it is "system utilisation" with some decent amount of space left. > Is your backup size really even pseudo random? Mine, over 40+ years, > at many sites, have never been. As I said, this is a theoretical exercise. The largest chunk of data that I back up every day is the users' files, which can vary a lot when you have 200+ users. > That is based on the assumption that your tapes match the available space > and your runtapes is 1. Neither of which we/I recommend. BTW I just > peeked, my disks dedicated to vtapes, even though substantially over- > subscribed are between 79% and 89% full. Runtapes could be bigger than needed, Amanda will use only what it needs. > First, though an unused inode would be allocated, no inodes would be > wasted. When you create your file system (assuming extX, ???) space > for a set number of inodes is created. OK, my mistake for mentioning inodes. I would have to review how filesystems (ufs, FreeBSD) work.
If creating a directory does not consume any disk space, then there is no penalty for having a multitude of small vtapes. > Second, disks have many millions, even billions of data blocks. Are > you really worried about using another 1 or 3 for a directory? You > must have more important things with which to be concerned. > > One last thing, when you create your file system(s) for vtapes you may > be able to control how many inodes are created. Remember each file > takes only one inode. A 3TB disk of vtapes on my system only has a > total of 947 files. I have the same number of files per disk. > Yet there were 350,000 inodes created even though > I changed the mkfs options to greatly reduce them. Another disk where > I forgot to reduce the number of inodes created has 190,000,000 inodes. > > So I'm "wasting" about 20,000,000 data blocks as inodes. Not enough > for a 100GB vtape, but enough for four 5GB chunks. I will have to remember tweaking the filesystem the next time I change the disks, because with the defaults, I am wasting over 350 million inodes per disk. Thank you, Olivier --
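The inode arithmetic above can be sketched as follows. The inode and block sizes are assumptions (UFS2 and current ext4 typically use 256-byte inodes; older extX used 128), so treat the output as an order-of-magnitude estimate only:

```python
# Back-of-the-envelope estimate of the space tied up by unused inodes
# on a vtape disk.  Sizes below are assumptions, not measured values.

INODE_SIZE = 256   # bytes per inode (UFS2/ext4 assumption)
BLOCK_SIZE = 4096  # bytes per data block (assumption)

def wasted_blocks(inodes_created, files_needed,
                  inode_size=INODE_SIZE, block_size=BLOCK_SIZE):
    """Data blocks consumed by inodes that will never hold a file."""
    unused = inodes_created - files_needed
    return unused * inode_size // block_size

# A vtape disk with ~950 files but 190,000,000 inodes created (figures
# from the thread):
blocks = wasted_blocks(190_000_000, 947)
print(f"{blocks:,} blocks ~ {blocks * BLOCK_SIZE / 1024**3:.1f} GiB")
```

With these assumed sizes the waste comes out in the tens of gigabytes per 3TB disk, which is why reducing the inode count at mkfs time (e.g. `newfs -i` on FreeBSD, `mke2fs -i`/`-N` on Linux) is worthwhile for vtape filesystems.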
Re: Best size for vtapes
Charles Curley writes: > So in theory you could allocate more vtape space than there is room on > the partition. Just make sure you never use more vtape space than you > actually have. In theory. But you are accepting the risk that from time to time all your vtapes will be 100% full and you will not have enough space on your disk. And I don't think there is any provision in Amanda to prevent that. So I prefer to stick with the amount of vtapes equal to the real amount of disk space. > The answer to that is, "that depends". I have tried to have a vtape > size a bit larger than my typical daily backup, and then allow amanda > to use enough extra tapes to cover the largest likely backup. So most > days I use one vtape, 40-90% filled. Some days I use three or four > vtapes. All but the last are almost 100% filled. You can also play > with your split size. I don't think that depends at all unless you have a very deterministic usage pattern. When the size of the daily backup is truly random, it becomes a purely mathematical problem: each day, you are wasting on average half a vtape's worth of disk. So you could have vtapes half the size, wasting 1/4 of the initial amount, but then you are spending some blocks of overhead on declaring new directories and using more inodes. Where do the two functions cross? Best regards, Olivier --
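The mathematical problem above can be checked with a small simulation. This is purely illustrative, under the stated assumption that the daily backup size is uniformly random; real backup sizes are rarely uniform:

```python
# Simulate the waste on the last (partially filled) vtape of each run
# when the daily backup size is uniformly random.  The claim to verify:
# mean waste is about half a vtape, whatever the vtape size.
import random

def expected_waste(vtape_size, runs=100_000, max_backup=1000.0, seed=1):
    """Average unused space on the final vtape of a run (same units
    as vtape_size; here think GB)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        backup = rng.uniform(0, max_backup)
        used_on_last = backup % vtape_size
        if used_on_last:                 # last tape only partially filled
            total += vtape_size - used_on_last
    return total / runs

for size in (500, 250, 100, 50):
    print(f"vtape {size:>4} GB -> mean waste {expected_waste(size):6.1f} GB")
```

Mean waste tracks half the vtape size, so halving the vtape size halves the expected waste; the crossover point is where that saving stops exceeding the per-vtape directory and inode overhead.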
Best size for vtapes
Hello, I apologize in advance if the question has already been answered or has become obsolete. I am wondering what is the best size to declare for the length of the vtapes. Too long a vtape could lead to disk space being unused; with too many small vtapes, one or two directories are created for each new vtape, and with many vtapes the main directory containing them would not fit in a single inode, but the space left unused would be smaller. My understanding is that splitting a backup does not consume more space, so it is not the backup data that gets bigger, only what is surrounding it. If we are not considering the overhead of computing the tape splitting, nor the possible extra I/O, and we only focus on the disk space, what would be the optimum size for the vtapes? Best regards, Olivier --
Re: bad tape?
Jens Berg writes: > On 8/28/2019 9:14 AM, Olivier wrote: >> I have a dedicated server that runs Amanda, with 8 bays, I never >> disconnect the disks unless it is time to replace them with a newer >> and bigger one. > > Don't you have Off-Site backups? No. > Since I switched from LTO to vtapes, I'm using USB drives for that which > I exchange weekly with a set I store at home. Never had issues with that > in the last years. My Amanda server is an old machine, no USB 3, so USB drives are not realistic. > I'd love to use a cloud service instead but the price for a reasonable > fast Internet connection together with a sufficiently large cloud > storage is way too high. Depending on the amount of cloud storage you need, how long you want to keep it, etc. I was considering Backblaze B2 storage or Amazon Glacier (is it Amazon?) as it is storage that you hopefully do not need to access much, so in the cloud it can drift to cheaper 3rd-tier storage. From what I looked at, their prices would be similar. I'd favor Backblaze for their no-nonsense, very simple approach. If I remember well, I needed a round robin of about 10TB (a new backup added every week, but older backups removed at the same rate, so the total storage does not grow much) and it adds up to 5$/month/TB. My current limitation is the network bandwidth: until there is a cloud installed in Thailand, I would use all my current weekly b/w just to upload my weekly level 0 backup. I had been considering other solutions that would offer "unlimited" cloud storage for personal use, but my use is not personal and it is often "backup storage", so you have to conform to their file system, use their application to do the storage (not an API, but backup software), etc. And you never have a guarantee as to when they will start limiting their storage capacity.
One last solution is my own colocated server at a datacenter we have a good connection with, but so far, the monthly running cost would not be cheaper than storage space in the cloud, and you must factor in the procurement and maintenance costs. Maybe if I replace the current Amanda server with a brand-new system, the current one could go to coloc. Best regards, Olivier --
Re: bad tape?
Diego Zuccato writes: > On 28/08/19 03:34, Olivier wrote: > >> Or write another device for Amanda to use, it would not be vtape, it >> would be ... something. > Could be 'rawdisk'. :) > > But better plan for some redundancy to compensate for silent corruption. > > And consider that SATA connectors have a limited life (about 500 mating > cycles, IIRC): for frequently-removed disks it's better to use a > dedicated caddy, maybe an eSata one. I have a dedicated server that runs Amanda, with 8 bays; I never disconnect the disks unless it is time to replace them with newer and bigger ones. Olivier --
Re: bad tape?
Gene Heskett writes: > On Monday 26 August 2019 23:55:31 Olivier wrote: > >> Gene Heskett writes: >> > Generally speaking, only because the disc is random access. >> >> But a disk dedicated to vtapes should be doing a lot of sequential >> accesses: once it has been formatted and the slots have been assigned, >> it is writing files the size of one of Amanda's chunks. In fact, that >> would be worth a study: the disk usage for vtapes vs. normal disk >> usage. >> >> Those are just gross figures, but: >> >> Users' home directories:
>> Filesystem  Size  Used  Avail  Capacity  Mounted on
>> /dev/da1p1  2.9T  851G  1.8T   31%       /home
>> 2565312 files, 223129681 used, 556890331 free
>> (564355 frags, 69540747 blocks, 0.1% fragmentation)
> My Mail dir is on /home, dates back to about my 2nd install in 2001, with > probably north of 20 GB of maildirs, so I'd expect that is more than 1% > fragmented, but except for tde's kmail occasionally bucking about it, > has not been a major problem. Copying it to a new Maildir usually fixes > it for that particular list. Reducing the keep time for those lists > deemed not so important has also helped. About 3 lists I keep forever, > but 50 more are expired every 3 or 6 months. That was given as an example only, to show that fragmentation grows faster on a file system supporting users' home directories than on a file system supporting Amanda's vtapes. I had rebuilt the users' home directory on a new RAID array recently; that is why it is so little fragmented, but still much more than an Amanda disk that is much older.
>> Amanda vtape disk:
>> Filesystem   Size  Used  Avail  Capacity  Mounted on
>> /dev/ada5p1  2.6T  2.2T  269G   89%      /automnt/ada5
>> 475 files, 582393950 used, 127171372 free
>> (84 frags, 15896411 blocks, 0.0% fragmentation)
>> The vtape disk is slightly older than the users' home, definitely >> fuller and less fragmented, so I would guess big sequential files with >> little head movement.
>> > I wondered about fragmentation myself, but it has never reared its head > in well over 15 years. Best regards, Olivier --
Re: bad tape?
Diego Zuccato writes: > On 27/08/19 05:55, Olivier wrote: > >> But a disk dedicated to vtapes should be doing a lot of sequential >> accesses: once it has been formatted and the slots have been assigned, >> it is writing files the size of one of Amanda's chunks. In fact, that would >> be worth a study: the disk usage for vtapes vs. normal disk usage. > That's the *perfect* use-case for SMR drives. But they'd need either no > filesystem (like tapes :) ) or a dedicated one. > Is it possible to use raw devices as vtapes? vtapes do rely on a Unix file system; they need a directory and a couple of sub-directories per vtape. Olivier --
Re: bad tape?
Jon LaBadie writes: > On Tue, Aug 27, 2019 at 11:44:02AM +0200, Diego Zuccato wrote: >> On 27/08/19 05:55, Olivier wrote: >> >> > But a disk dedicated to vtapes should be doing a lot of sequential >> > accesses: once it has been formatted and the slots have been assigned, >> > it is writing files the size of one of Amanda's chunks. In fact, that would >> > be worth a study: the disk usage for vtapes vs. normal disk usage. >> That's the *perfect* use-case for SMR drives. But they'd need either no >> filesystem (like tapes :) ) or a dedicated one. >> Is it possible to use raw devices as vtapes? > > My guess is that vtapes are expecting normal *nix file > names and system/library calls. They do, indeed. > I'm pretty sure it would be possible to write a FUSE (file system > in user space extension) to implement what you imagine. Or write another device for Amanda to use; it would not be vtape, it would be ... something. Olivier --
Re: bad tape?
Gene Heskett writes: > Generally speaking, only because the disc is random access. But a disk dedicated to vtapes should be doing a lot of sequential accesses: once it has been formatted and the slots have been assigned, it is writing files the size of one of Amanda's chunks. In fact, that would be worth a study: the disk usage for vtapes vs. normal disk usage. Those are just gross figures, but:
Users' home directories:
Filesystem  Size  Used  Avail  Capacity  Mounted on
/dev/da1p1  2.9T  851G  1.8T   31%       /home
2565312 files, 223129681 used, 556890331 free (564355 frags, 69540747 blocks, 0.1% fragmentation)
Amanda vtape disk:
Filesystem   Size  Used  Avail  Capacity  Mounted on
/dev/ada5p1  2.6T  2.2T  269G   89%      /automnt/ada5
475 files, 582393950 used, 127171372 free (84 frags, 15896411 blocks, 0.0% fragmentation)
The vtape disk is slightly older than the users' home, definitely fuller and less fragmented, so I would guess big sequential files with little head movement. Good luck with your health. Olivier
Re: bad tape?
ghe writes: > Stan and Debra have convinced me to bite the bullet and buy a new tape. > I've never been in this situation before (the DLT drive used to fail > every once in a while, but a couple hours with a jeweler's screwdriver > got it going again). > > Looks like I'm going to have a spare, mildly flaky, tape around. You mean that all your backups were on a single tape? That is beyond daring IMHO. Like Gene mentioned, I have moved to disk and vtapes. I have never had a problem with the disks holding the vtapes; I did have a problem with the holding disk though: it started developing bad blocks in the holding disk partition. But vtapes? They are hardly used; if you unmount/stop them after the backups are written, they can last a long time. And when you need more space, you just upgrade to newer disks with more capacity and store the old disks, so you even have the old stuff offline. Your slots need to be numbered starting from 1, but the tape numbers can start from the next number in sequence in your tape naming scheme, so keep the old disks. You will change them because you need more backup storage, not because they fail (I never had a failure since switching to disks, while I had to replace tapes a number of times). I started with 7x512GB disks, then 7x1.5TB, and am now at 7x3TB, and never had the slightest problem. Admittedly, the size of the vtapes chosen at first looks a bit tiny nowadays, but with vtapes it really adds very little overhead, largely compensated by avoiding half-full tapes. When we were hit by the great flood and we had to move the datacenter, I took the disks, but not the Amanda server, as I knew I could connect the disks directly inside any server to do a restore. And no fiddling with mounting and dismounting tapes every day: I have a system with all my vtapes available all the time. In a similar way to what Gene described, after the dump, I get a copy of the indexes, etc.
and rsync that to a database server and mail myself a copy; I end up with 5 or 6 copies of that critical information (because database and mail are backed up too). And remember, tapes rub on the R/W head of the tape drive, so doing an amcheckdump doubles the wear and tear on the tape. There is no physical contact between the head and the platter on a disk. Not to mention that disks are way faster... Best regards, Olivier
Re: How does amanda invoke gnutar on clients?
Winston Sorfleet writes: > I'll try that, thanks! Out of curiosity, would using "include" get > around that? My guess is that it won't. But you can explore the manual of gtar :) Best regards, Olivier > --
Re: How does amanda invoke gnutar on clients?
Winston Sorfleet writes: > Greetings all. I was wondering if other eyes can spot something obvious I'm > missing, with respect > to how amanda invokes GNUTAR on client machines (the amdump logfile doesn't > show anything > unusual). It may be in the guts of /usr/lib/amanda/application/amgtar or > runtar but short of looking > at code I am not familiar with, I can't tell. > > I have amanda running on a server, with client machine forum.romanus.ca, and > it's started giving > me "STRANGE dump details": gnutar uses the --one-file-system option, so it will not save anything mounted from a different filesystem. You must have a DLE for each of your filesystems: / /bin /etc /... Olivier > >/-- forum.romanus.ca / lev 1 STRANGE > sendbackup: start [forum.romanus.ca:/ level 1] > sendbackup: info BACKUP=/bin/tar > sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -xpGf - ... > sendbackup: info COMPRESS_SUFFIX=.gz > sendbackup: info end > ? /bin/tar: ./bin: directory is on a different filesystem; not dumped > ? /bin/tar: ./dev: directory is on a different filesystem; not dumped > ? /bin/tar: ./etc: directory is on a different filesystem; not dumped > ? /bin/tar: ./lib: directory is on a different filesystem; not dumped > ? /bin/tar: ./opt: directory is on a different filesystem; not dumped > ? /bin/tar: ./proc: directory is on a different filesystem; not dumped > ? /bin/tar: ./run: directory is on a different filesystem; not dumped > ? /bin/tar: ./sbin: directory is on a different filesystem; not dumped > ? /bin/tar: ./sys: directory is on a different filesystem; not dumped > ? /bin/tar: ./usr: directory is on a different filesystem; not dumped > ? /bin/tar: ./mnt/nvme0n1: directory is on a different filesystem; not > dumped > ? /bin/tar: ./mnt/vhosts: directory is on a different filesystem; not dumped > ? 
/bin/tar: ./var/spool/fax/incoming: directory is on a different > filesystem; not dumped > > Now recently, I moved these directories off a standard hard disk, to an nvme, > using btrfs. > Because, reasons (trying to do this with a live machine, not from bare-bones > install) I did these as > bind mounts, e.g. in /etc/fstab > > /dev/nvme0n1 /mnt/nvme0n1 btrfs defaults 0 2 > /mnt/nvme0n1/bin /bin none bind > /mnt/nvme0n1/etc /etc none bind > /mnt/nvme0n1/lib /lib none bind > /mnt/nvme0n1/opt /opt none bind > /mnt/nvme0n1/sbin /sbin none bind > /mnt/nvme0n1/usr /usr none bind > > Now if I'd being doing some sort of one-file-system exclusion, I could > understand why tar (as > demanded by amanda) would drop them, but as far as I can tell, I'm not: > > The disklist (/home being an nfs mount from the server machine, it gets > backed up with the > server-as-amanda-client). > > forum.romanus.ca / { > comp-root-tar > # auth "bsdtcp" > exclude append "./home" > } > > (Note that the server's amdump log file does show the intentional exclude): > > driver: send-cmd time 263.809 to dumper2: SHM-DUMP 02-3 44577 NULL 1 > forum.romanus.ca 9efefbfff3fffbf71f / N > ODEVICE 1 2019:5:22:6:47:2 GNUTAR "" "" "" "" "" "" "" 1 "" "" bsdtcp AMANDA > /amanda_shm_control-16837-0 20 |" bsdtcp\ > n FAST\n YES\n YES\n > AMANDA\n \n ./home\n > \n""" > > And the dumptype definition: > > define dumptype root-tar { > global > program "GNUTAR" > comment "root partitions dumped with tar" > compress none > index > priority low > } > > define dumptype comp-root-tar { > root-tar > comment "Root partitions with compression dumped with tar" > compress client fast > }
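The advice in the reply ("a DLE for each of your filesystems") could look roughly like this in the disklist. This is a hedged sketch: the host and the comp-root-tar dumptype come from the message above, but the exact set of paths must match the bind mounts in /etc/fstab, and an alternative would be a single DLE on /mnt/nvme0n1 itself:

```
# One DLE per filesystem, since gnutar's --one-file-system stops at
# each bind-mount boundary (sketch; verify paths against your fstab).
forum.romanus.ca /     {
    comp-root-tar
    exclude append "./home"
}
forum.romanus.ca /bin  comp-root-tar
forum.romanus.ca /etc  comp-root-tar
forum.romanus.ca /lib  comp-root-tar
forum.romanus.ca /opt  comp-root-tar
forum.romanus.ca /sbin comp-root-tar
forum.romanus.ca /usr  comp-root-tar
```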
Re: Only increasing incrementals
Nathan Stratton Treadway writes: > On Thu, Nov 22, 2018 at 11:18:25 +0700, Olivier wrote: >> Hello, >> >> I am wondering if there is a way to define a DLE that would allow >> incrementals but only with increasing levels: >> >> - full (0) >> - incremental 1 >> - incremental 2 >> - incremental 3 >> - etc. >> >> But never: 0, 1, 1, 1, 2 >> >> Each back-up level must be above the previous one or be a full back-up. > > I am not sure what you are trying to accomplish, I am trying to back up something that can only take incrementals with increasing levels: it cannot do two level 1 dumps in a row; levels must be 1, then 2, then 3, etc. (think of successive snapshots). According to the amanda.conf(5) man page: bumpdays int Default: 2 days. To insure redundancy in the dumps, Amanda keeps filesystems at the same incremental level for at least bumpdays days, even if the other bump threshold criteria are met. I want to absolutely cancel that feature: each incremental must have a level greater than the previous dump, and an incremental level must never be repeated (only level 0 can recur). Thanks, Olivier
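For reference, a sketch of the amanda.conf bump parameters involved in this behaviour. These parameters do exist in amanda.conf, but tuning them only makes a bump likely on every run; as far as I can tell there is no option that hard-guarantees strictly increasing levels, so treat this as the closest available approximation:

```
# amanda.conf sketch: make the planner as eager as possible to raise
# the incremental level each run (does NOT guarantee it).
bumpdays 1        # allow a bump after a single day at a level
bumpsize 0        # any size saving justifies a bump
bumpmult 1        # don't demand bigger savings at higher levels
```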
Re: which user builds amanda?
Debra S Baddorf writes: >>>> >> Normally you build amanda in the /home/amanda directory, sudo -i, >> then su amanda. Then when its built, ctrl-D out of amanda and make >> install as root, then your perms are all as they should be. I have a >> configure driver file to configure and build, and It won't do a thing if >> I'm root. Thats a bad dog, no biscuit if not the user amanda. > > Group people (not just Gene): > Does it still matter which account builds and which account installs amanda? Yes, it does matter. You never know what could go wrong when building a package, so you had better do the build in a lower-privileged account, like your normal user account. There is no reason why you would need higher privileges to run a compile, so the safe way is to use the least privileges all the time. Then you do a test, and Unix is good in that you can usually do the test while still logged in to the lower-privileged account. Then you install with the root account: you need root privileges to install because you need to create directories where other users can't, and you may need to create new users too (like user amanda). Best regards, Olivier > I do “ksu root” (kerberos su) > and build and then install. All as root. In a /tmp/amanda-builds > directory. > Sometimes multiple times, if I’m comparing or testing something. > > It always works for me. I never switch back to my amanda user to do the > build or the install. > > My amanda user is operator:root, in case that makes a difference. > amanda versions up to (now) 3.3.8. > We use SLF, Scientific Linux Fermilab (locally modified for science; > available everywhere, I think) > > Deb Baddorf > Fermilab > > --
Only increasing incrementals
Hello, I am wondering if there is a way to define a DLE that would allow incrementals but only with increasing levels: - full (0) - incremental 1 - incremental 2 - incremental 3 - etc. But never: 0, 1, 1, 1, 2 Each back-up level must be above the previous one or be a full back-up. Best regards, Olivier --
Re: Breaking DLEs up
Jon LaBadie writes: > On Thu, Nov 08, 2018 at 09:06:47PM +, Debra S Baddorf wrote: >> Ah, good! What does “file” do in your include line? >> Deb >> > Include (and exclude) can take a first argument of "file" or "list". > With "file" the following string is a "glob" expression. > "file" is the default but I like to specify it anyway. > > With "include list" the string that follows is the name of a file > containing "globs", one per line. > > You can have multiple "include file" globs if all but the first > are "include file append". So > > include file "./[a-gA-G]* > > could also be specified as > > include file "./[a-g]* > include file append "./[A-G]* > > Jon A long, long time ago, I wrote a script to test the exclude files. I believe it could be adapted to test the include files too. I noticed the script was not online anymore, so I put it back: http://www.cs.ait.ac.th/~on/testgtar Best regards, Olivier >> >> >> > On Nov 8, 2018, at 2:54 PM, Jon LaBadie wrote: >> > >> > On Thu, Nov 08, 2018 at 08:43:49PM +, Debra S Baddorf wrote: >> >> Yeah, I do use includes, but I only do a single letter at a time >> >> include "./a*” >> >> >> >> Perhaps the problem is with the syntax of doing more than one letter. >> >> I only do [a-f] on my excludes. Weird! >> >> >> >> Deb Baddorf >> > >> > I have a working entry that matches the OP. >> > >> >include file "./201[7-9]*" >> > >> > Jon >> >> --
Re: Can AMANDA be configured "per client"?
> In your case, though (i.e. with a fixed set of always-available slot > directories), it certainly makes sense to set up an obvious 1-1 mapping > between labels and slots, so that you don't have to think about or go > hunting to figure out which vtape holds a particular label. Amanda > won't care about that (it will always look on the vtape and check the > label found there before doing anything else), but it'll make your life > slightly easier in those situations where you need to go look at the raw > dump files or whatever. Although it seems a good policy to map slots and labels one to one, in the long run it may not be. For example, I have retired some old disks and replaced them with new ones. The new disks would end up with the same slot numbers as the disks they replace, but I have created new labels and marked the previous labels as "noreuse". That way, I still have the data on the old disks, and Amanda still keeps track of what is on those old labels, and I could remount a disk and access the files if needed. (BTW, I am using a homemade changer that splits the virtual tapes across multiple physical disks; I came to that solution many years ago, when vtapes could only use one disk/directory.) Best regards, Olivier
Re: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development
Gene, > And beat me with whatever. Ouch, egg on face. Shtoopidity, must come with > a large birthday count. 83->84 today. > > In posting that script I suddenly noticed the \ after the previous last > line was missing. Its rebuilding now, with that file moved to /etc. My > apologies. > > And amcheck is happy. That's a kind of weird present, but Amanda is building; happy birthday :) I hope to reach 84 one day and still be as bright as you are by that time. Olivier
Re: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development
Gene, > May I be so rude as to point out > that --with-security-file=/path/to/amanda-security.conf doesn't work > according to the config output. I just downloaded amanda 3.5.1 and tried it; I do not have any problem with the configure option --with-security-file. It is there in "configure -h", and "configure --with-security-file=/something" does not throw an error message. > I moved it > to /usr/local/etc/amanda/Daily. It is there, and owned by amanda:disk, > but configure reports: > ./gh.cf: > 25: ./gh.cf: > --with-security_file=/usr/local/etc/amanda/Daily/amanda-security.conf: > not found But I could not find the file gh.cf that you mention; it does not appear to be part of the Amanda distribution. Regards, Olivier
Re: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development
> I don't have direct experience with 3.3.9, but as far as I can tell from > the Amanda source repo [*], the 3.3.9 release does the same checks on > the security-file path directories as 3.5, so off hand I'd still expect > /usr/local/etc/amanda/ to cause an error on your system... > > But perhaps FreeBSD's version is different from upstream in regards to > these checks, or something? I did not go into the details, but FreeBSD ports are built by downloading the official source from the official repository and possibly applying some specific patches. Some options can be applied at configure time, but those are options that must exist in the original configure script. I can see no patch related to common-src/security-file.c, so I expect that file to be kept original. I confirm that the patching process does not change security-file.c. The only security options set at configure time are: --with-bsdtcp-security --with-bsdudp-security --with-ssh-security --with-security-file=${ETCDIR}/amanda-security.conf Provided that security-file.c is the same in 3.3.9 and 3.5, the only reason I can see would be the following code, which skips the recursive checking of the path:

#ifdef SINGLE_USERID
    uid_t ruid = getuid();
    uid_t euid = geteuid();

    if (ruid != 0 && euid != 0 && ruid == euid) {
        amfree(quoted);
        return TRUE;
    }
#endif

Olivier
Re: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development
Nathan Stratton Treadway writes: > (You are also running Amanda 3.5, right?) No, only 3.3.9 (that's the latest port on FreeBSD). I wanted to confirm before sending my mail, but I forgot. Olivier --
Re: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development
Nathan, > While trying to figure out the error messages Gene was reporting I took > a look at the source code that performs this security check [*] and Maybe my error was not the same, but it did concern amanda-security.conf permissions. It was back in July, so my memory may be confused :) >> To do that, I modified the Makefile in the FreeBSD port to include the >> option: >> >> --with-security-file=/usr/local/etc/amanda/amanda-security.conf >> >> [ In the case of FreeBSD, it was: >> >> --with-security-file=${ETCDIR}/amanda/amanda-security.conf >> > > Have you completed the build process with this configure parameter in > place? (I'm curious to hear if it did work as expected for you.) Yes, and I just checked the permissions and ownership: /, /usr, /usr/local, /usr/local/etc belong to 0:0 with mode 755; /usr/local/etc/amanda belongs to amanda:amanda with mode 755. The parameter --with-security-file (used for ./configure, I guess) is in the default FreeBSD port Makefile; I just moved the file to /usr/local/etc/amanda. Regards, Olivier
Re: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development
Gene, Sorry, I missed your message yesterday. > ERROR: picnc: selfcheck request failed: file/dir '/usr/local/etc' > (/usr/local/etc/amanda-security.conf) is writable by the group > Client check: 5 hosts checked in 11.356 seconds. 5 problems found. > > ... > > The man page says its to be in /etc/amanda, but since this is a local > build, its in /usr/local/etc/amanda. First, I see a discrepancy: the error message places the file amanda-security.conf in /usr/local/etc, while according to what you say later about the man page, you expect it to be in /usr/local/etc/amanda. The error message is complaining about the mode of the directory, not about the file. But the modes on /usr/local/etc are not for Amanda only, they are system stuff, so it is not really realistic to change them. So the solution was to move the file to /usr/local/etc/amanda, as suggested by the man page, where you can adjust the mode more to Amanda's liking. To do that, I modified the Makefile in the FreeBSD port to include the option: --with-security-file=/usr/local/etc/amanda/amanda-security.conf [ In the case of FreeBSD, it was: --with-security-file=${ETCDIR}/amanda/amanda-security.conf I also informed the port maintainer that a change may be needed] I hope that helps. Olivier
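The check Amanda complains about walks every directory on the path to the security file. A rough hand-run equivalent, useful for seeing which component has the offending group-writable mode (the path is the one discussed in the thread; adjust as needed):

```shell
# Show owner and mode of each directory leading to the security file,
# roughly what Amanda's recursive permission check inspects.
f=/usr/local/etc/amanda/amanda-security.conf
d=$(dirname "$f")
while [ "$d" != "/" ]; do
    ls -ld "$d" 2>/dev/null   # suppress errors for missing components
    d=$(dirname "$d")
done
ls -ld /
```

Any directory in that listing showing `w` in the group or other position is the one amcheck will reject.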
Re: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development?
Jean-Louis Martineau writes: > Don't fork. > > I can give write access to the SVN repository to people who ask for it. As a long-time user of Amanda (not 20 years, only 18 :) I would be gutted to see it die too, though I am not sure I can do much to contribute. Thank you, Olivier > Jean-Louis > > On Wed., Sep. 26, 2018, at 1:39 PM, Chris Nighswonger > wrote: > > On Wed, Sep 26, 2018 at 1:31 PM Gene Heskett wrote: > > Fork it if someone has a long term interest in seeing a good, long term > backup solution keep sucking air regularly. > > Said by a 20 year user of amanda. > > +1 to forking. > > Chris > --
Re: out of holding space in degraded mode
Ryan, > Using Amanda 3.4.5 on Centos 7.4 to backup Linux clients onto 11TB of local > disk. Holding area and vtapes are all in the same 11TB filesystem, which is > now entirely full! > > The config started out with: > dumpcycle = 14days > tapecycle = 30days > num-slot = 30 > under tapetype: length=5000GB and part-size=1GB > > I have now cut the dumpcycle down to 10, and the tapecycle down to 20, and > was hoping Amanda could recover. > But I'm still getting "out of holding space in degraded mode" in the logs, > and Amanda seems totally stuck. > > How would I start over, throw away all the backups made so far, and begin > with fresh/empty tapes? > Also, after changing the config file, do I have to restart Amanda to get it > to reread the config? A few things you might want to try: - what size did you configure with "use" for your holding disk? - do you have enough space on your disk for Amanda to copy the files from holding disk to tape? It is not just moving files around, it is a copy, so you need free space at least as big as one file. - have you tried amflush? Best regards, Olivier
Re: Another problem with smbclient for Samba 4
Olivier writes: > Hi, > > I noticed that the command >smbclient olivier\\drivec -U on\%password -E -W samba -d0 -Tqca - > does not behave the same with Samba 3 and Samba 4. The smbclient tar > ends with giving the size of the data transfered, but the message is > slightly different and Amanda cannot parse the size returned by Samba 4, > resulting in an error >amanda //olivier/drivec lev 0 FAILED [no backup size line] > > Below is the end of the smbclient command for Samba 4: > > ./WINDOWS/Zapotec.bmp > ./WINDOWS/zipgenius.xml > tar:717 Total bytes received: 84969332671 > > and for Samba 3 > > ./ECsamples/TQM - Baldrige competition.ahp > ./ECsamples/VoiceOfCustomer Part I Prioritizing Market Segments.ahp > ./ECsamples/VoiceOfCustomer Part II Prioritizing Customer Requirements .ahp > ./ECsamples/VP of Development Selection.ahp > tar: dumped 51410 files and directories > Total bytes written: 27800975872 > > Thank you, > > Olivier I think I found the patch, for some reason, it did not make it to FreeBSD ports tree. Best regards, Olivier --
Another problem with smbclient for Samba 4
Hi,

I noticed that the command

smbclient olivier\\drivec -U on\%password -E -W samba -d0 -Tqca -

does not behave the same with Samba 3 and Samba 4. The smbclient tar ends by giving the size of the data transferred, but the message is slightly different and Amanda cannot parse the size returned by Samba 4, resulting in an error:

amanda //olivier/drivec lev 0 FAILED [no backup size line]

Below is the end of the smbclient command for Samba 4:

./WINDOWS/Zapotec.bmp
./WINDOWS/zipgenius.xml
tar:717 Total bytes received: 84969332671

and for Samba 3:

./ECsamples/TQM - Baldrige competition.ahp
./ECsamples/VoiceOfCustomer Part I Prioritizing Market Segments.ahp
./ECsamples/VoiceOfCustomer Part II Prioritizing Customer Requirements .ahp
./ECsamples/VP of Development Selection.ahp
tar: dumped 51410 files and directories
Total bytes written: 27800975872

Thank you,

Olivier

--
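The parsing difference comes down to the final summary line: Samba 3 prints "Total bytes written: N" while Samba 4 prints "tar:NNN Total bytes received: N". A pattern that accepts both can be sketched like this (an illustration of the mismatch, not the actual patch that later went into the ports tree):

```shell
# Extract the byte count from either style of smbclient -T summary line.
parse_size() {
  sed -nE 's/.*Total bytes (written|received): *([0-9]+).*/\2/p' | tail -n 1
}

echo "tar:717 Total bytes received: 84969332671" | parse_size   # Samba 4 style
echo "Total bytes written: 27800975872" | parse_size            # Samba 3 style
```

sendsize only matches the Samba 3 wording, which is why Samba 4 output yields "no backup size line".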
Re: Problem with smbclient for Samba 4
"Stefan G. Weichinger" writes:

> Am 2018-07-02 um 09:22 schrieb Olivier:
>> "Stefan G. Weichinger" writes:
>>> Try the command yourself on the shell and use "-d3" or higher to see the
>>> debug details.
>
> See the difference ->
>
>> resolve_lmhosts: Attempting lmhosts lookup for name <0x20>
>> resolve_wins: WINS server resolution selected and no WINS servers listed.
>> resolve_hosts: Attempting host lookup for name <0x20>
>
>> resolve_lmhosts: Attempting lmhosts lookup for name olivier<0x20>
>> resolve_wins: WINS server resolution selected and no WINS servers listed.
>> resolve_hosts: Attempting host lookup for name olivier<0x20>
>
> The failing command seems to look up an empty string ... (no "olivier"
> in first call).

I agree, but in both cases I use the same smbclient command and version (the one on the Amanda server). The only difference is that I once specify the share as "olivier\\drivec" and the other time I use "\\olivier\drivec". The first form is not working; this is the form cut and pasted from the Amanda log file. The second form is working.

So to me, it looks like the syntax used by Amanda ("olivier\\drivec"), which was working fine with Samba 3.x, is not working anymore with Samba 4.x.

> But both of your calls say:
>
> "Client started (version 4.6.15)"
>
> So the picture is a bit fuzzy.
>
> You want to backup using the 4.x-smbclient and/or a share on a 4.x-server?

I want to use the 4.x smbclient to back up a WinXP machine (if it was a 4.x server, it would be a Unix system, it would be easy :)

Thank you,

Olivier

--
Re: Problem with smbclient for Samba 4
"Stefan G. Weichinger" writes: > Am 2018-07-02 um 08:28 schrieb Olivier: >> Hi, >> >> Since I upgraded to Samba 4, I cannot backyp my Windows machine. >> >> Poking around, the command used by Amanda sendsize is: >> >> Mon Jul 2 00:14:39 2018: thd-0x805422c00: sendsize: Spawning >> "/usr/bin/smbclient smbclient "olivier\\drivec" -d 0 -U backup -E -W >> olivier -c "archive 1;recurse;du"" in pipeline >> >> When I try the following command in Samba 3, it works, but not in Samba >> 4: >> >> smbclient "olivier\\drivec" -d 0 -U backup -E -W olivier -c "archive >> 1;recurse;du" >> >> If I use single back-slash or no double-quote in Samba 4, it works. >> >> The error is NT_STATUS_UNSUCCESSFUL > > Try the command yourself on the shell and use "-d3" or higher to see the > debug details. $ smbclient "olivier\\drivec" -d 3 -U backup -E -W olivier -c "archive 1;recurse;du" lp_load_ex: refreshing parameters Initialising global parameters Processing section "[global]" added interface nfe0 ip=192.41.170.11 bcast=192.41.170.255 netmask=255.255.255.0 added interface nfe1 ip=10.41.170.11 bcast=10.41.170.255 netmask=255.255.255.0 Client started (version 4.6.15). resolve_lmhosts: Attempting lmhosts lookup for name <0x20> resolve_wins: WINS server resolution selected and no WINS servers listed. resolve_hosts: Attempting host lookup for name <0x20> resolve_hosts: getaddrinfo failed for name [hostname nor servname provided, or not known] name_resolve_bcast: Attempting broadcast lookup for name <0x20> Connection to failed (Error NT_STATUS_UNSUCCESSFUL) And the working one: $ smbclient "\\olivier\drivec" -d 3 -U backup -E -W olivier -c "archive 1;recurse;du" lp_load_ex: refreshing parameters Initialising global parameters Processing section "[global]" added interface nfe0 ip=192.41.170.11 bcast=192.41.170.255 netmask=255.255.255.0 added interface nfe1 ip=10.41.170.11 bcast=10.41.170.255 netmask=255.255.255.0 Client started (version 4.6.15). 
resolve_lmhosts: Attempting lmhosts lookup for name olivier<0x20>
resolve_wins: WINS server resolution selected and no WINS servers listed.
resolve_hosts: Attempting host lookup for name olivier<0x20>
Connecting to 192.41.170.57 at port 445

Thank you,

Olivier

--
Problem with smbclient for Samba 4
Hi,

Since I upgraded to Samba 4, I cannot back up my Windows machine.

Poking around, the command used by Amanda sendsize is:

Mon Jul 2 00:14:39 2018: thd-0x805422c00: sendsize: Spawning "/usr/bin/smbclient smbclient "\\\\olivier\\drivec" -d 0 -U backup -E -W olivier -c "archive 1;recurse;du"" in pipeline

When I try the following command in Samba 3, it works, but not in Samba 4:

smbclient "olivier\\drivec" -d 0 -U backup -E -W olivier -c "archive 1;recurse;du"

If I use a single back-slash or no double-quotes in Samba 4, it works.

The error is NT_STATUS_UNSUCCESSFUL

Is there a way to configure the smbclient command without patching Amanda?

Best regards,

Olivier

--
Re: amanda-security.conf
Thank you, David,

> I have:
> root@jenny:~ # uname -sr
> FreeBSD 11.2-STABLE
> root@jenny:~ # ls -al /usr/local/etc/amanda
> total 40
> drwxr-xr-x   4 root    amanda     7 May 26 00:54 .
> drwxr-xr-x  53 root    wheel    120 Jun 24 08:28 ..
> -rw---       1 amanda  amanda   102 Apr 28 08:05 .amandahosts
> drwxrwx---   2 root    amanda     9 Jun 25 01:27 GipNetDaily01
> -rw-r--r--   1 root    wheel   2045 May 26 00:54 amanda-security.conf
> -rw-r--r--   1 root    wheel   2044 May 29  2017 security.conf
> drwxrwx---   2 root    amanda     4 Oct 18  2014 template.d
> root@jenny:~ #

I have tried that, but it did not work for me, because the directory /usr/local/etc/amanda also contains all the tapelist files and it has to be owned by the user amanda in order for amdump/amtaper to update these files.

My .amandahosts file is located in ~amanda, which is different from /usr/local/etc/amanda.

> It's been running this way for years.

Idem. But my installation may not completely follow the FreeBSD conventions, since I had installed it by hand a long time before trying to use the ports.

Best regards,

Olivier

> Quoting Olivier :
>
>> Hi,
>>
>> Running amanda 3.3.9 on FreeBSD, I just came to an issue.
>>
>> FreeBSD packaging system places all the configuration files into
>> /usr/local/etc instead of /etc. In the case of Amanda, it is
>> /usr/local/etc/amanda.
>>
>> Amanda client expects the file amanda-security.conf in
>> /usr/local/etc/amanda, with the directory owned by root.
>>
>> Amanda server expects that the directory /usr/local/etc/amanda is owned
>> by amanda so the process can do various file manipulations like updating
>> tapelist files.
>>
>> How can I reconcile both? Or rather have a different directory used
>> by Amanda client?
>>
>> Thanks in advance,
>>
>> Olivier
>> --

--
amanda-security.conf
Hi,

Running amanda 3.3.9 on FreeBSD, I just came across an issue.

The FreeBSD packaging system places all the configuration files into /usr/local/etc instead of /etc. In the case of Amanda, it is /usr/local/etc/amanda.

The Amanda client expects the file amanda-security.conf in /usr/local/etc/amanda, with the directory owned by root.

The Amanda server expects the directory /usr/local/etc/amanda to be owned by amanda so the process can do various file manipulations like updating tapelist files.

How can I reconcile both? Or rather, have a different directory used by the Amanda client?

Thanks in advance,

Olivier

--
Incremental backup with Zmanda Windows Client
Hi,

After years of happy backups with smbclient, I decided to give ZWC a try. But I am frustrated because incremental backups seem not to work: whatever the level, all the data are being pulled.

Here's a snippet of the reports of 6 days, while I was on leave, no one using the PC:

olivier. c:/temp 1 13306536 12085577 90.8 36:10 6132.1 2:26 82777.9
olivier. c:/temp 1 13306536 12085581 90.8 58:01 3823.0 2:47 72368.7
olivier. c:/temp 0 13306535 12085592 90.8 46:20 4787.0 4:05 49328.9
olivier. c:/temp 1 13306536 12085576 90.8 49:44 4459.6 4:25 45605.9
olivier. c:/temp 1 13306536 12085577 90.8 41:32 5339.0 3:26 58667.8
olivier. c:/temp 0 13306535 12085590 90.8 49:04 4520.2 3:12 62945.8

And the backup type is defined as:

define dumptype zwc-normal {
    comment "Incremental dump of Zmanda Windows Client"
    index yes
    auth "bsdtcp"
    program "DUMP"
    maxdumps 1
    compress server best
    priority high
    allow-split true
}

It obviously has no skip-incr (as shown by the levels 0 and 1 in the dump reports). Why is it always backing up the full directory?

Best regards,

Olivier
Script to change the user for Zmanda Windows Client
Hi,

As explained at http://wiki.zmanda.com/index.php/Zmanda_Windows_Client#Running_ZWC_as_some_other_user the Zmanda client for Windows installs with the default username amandauser, and this cannot be configured by an option. That username must reflect the username that is running Amanda on the server. It can be changed using a few Windows tools.

The script http://cs.ait.ac.th/~on/rename_amanda.vbs automatically performs all the steps required to change the Zmanda Windows Client user. I have tested this script on Windows XP Pro SP3 (32 bits) and on Windows 7 Pro SP1 (32 bits).

To use the script: download it into a temporary directory, then run it with:

cscript //nologo rename_amanda.vbs username [/delete]

where username is the new user name for ZWC and the option /delete completely removes the old user from the computer.

The script will leave 3 files in the directory; secedit_backup.sdb is an important one: it is a backup of the system security settings before they were modified by the script.

I am pretty sure this script will not work properly if some system users (Administrator, etc.) have UTF8 names.

Best regards,

Olivier

--
Re: amsamba and NTFS permissions
Stefan G. Weichinger s...@amanda.org writes:

> Am 25.09.2014 um 17:21 schrieb Stefan G. Weichinger:
>> Additional info: they can't use the A/ZManda-Windows-Client because the
>> NTFS-share is shared from a storage/SAN and not from a dedicated MS
>> Windows Server. So we have to solve that on the side of the linux
>> server, I assume.
>
> Does nobody here successfully do that with amanda?

You supposedly could run the A/ZManda-Windows-Client on a Windows client (one that remotely mounts the NTFS share). It will have a cost because the files travel the network twice (from NTFS server to Windows client and from Windows client to Amanda server).

Your Windows client could be some virtual machine that is started only for back-up purposes (a virtualbox running on the Amanda server, so you cut off one network transfer). That is ugly.

It depends on what the NAS/storage is made of.

Best regards,

Olivier

--
Re: What timeout controls the scripts
>>> There is no parameter for that timeout. It is a constant in the program:
>>> CONNECT_TIMEOUT in the server-src/chunker.c file
>>>
>>> Running the script on the client might fix the issue:
>>> execute-where client
>>
>> Even running on the client it will take more than 300 seconds; would the
>> chunker wait for the script to finish on the client?
>
> If run on the client, the socket will be opened before the scripts are run.
> So it should not timeout on 'accepting header stream'. But it might timeout
> on 'accepting data stream', I'm not sure, you should try.

For some reason, the script was not running on the client. I changed the timeout in chunker.c and it does the trick. But I am wondering why the chunker starts so early and not at the same time as the tar/dump (when the script is finished).

Best regards,

Olivier

> Jean-Louis

>> Thank you,
>> Olivier

> Jean-Louis
>
> On 09/24/2014 12:04 AM, Olivier Nicole wrote:
>> Hi,
>>
>> For one specific DLE, I need to do a snapshot, through a pre-dle-backup
>> script. But the script takes quite some time to complete (up to a couple
>> of hours).
>>
>> What timeout should I define to make Amanda wait for the end of the
>> script before it can chunk the result?
>>
>> Example for the DLE amanda /virtual/mybackups/st106808
>>
>> chunker.20140924001916.debug times out after 300 seconds:
>>
>> Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: pid 86096 ruid 10014 euid 10014 version 3.3.2: start at Wed Sep 24 00:19:16 2014
>> Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: pid 86096 ruid 10014 euid 10014 version 3.3.2: rename at Wed Sep 24 00:19:16 2014
>> Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: getcmd: START 20140924000501
>> Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: getcmd: PORT-WRITE 11-00015 /holding/20140924000501/amanda._virtual_mybackups_st106808.0 amanda 9efefbff1f /virtual/mybackups/st106808 0 1970:1:1:0:0:0 10485760 GNUTAR 500064 |;auth=bsd;srvcomp-best;index;
>> ...
>> Wed Sep 24 00:24:17 2014: thd-0x804c6ce00: chunker: stream_accept: timeout after 300 seconds
>> Wed Sep 24 00:24:17 2014: thd-0x804c6ce00: chunker: putresult: 11 TRY-AGAIN
>> Wed Sep 24 00:24:17 2014: thd-0x804c6ce00: chunker: critical (fatal): startup_chunker failed: error accepting header stream: Operation timed out
>>
>> and driver.20140924000501.debug, where the script takes 45 minutes to complete:
>>
>> Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: driver: Spawning /usr/local/libexec/amanda/application/vmware vmware PRE-DLE-BACKUP --execute-where server --config normal --host amanda --disk /virtual/mybackups/st106808 --level 0 in pipeline
>> Wed Sep 24 00:19:17 2014: thd-0x804c6ce00: driver: script: zorglub
>> Wed Sep 24 00:19:17 2014: thd-0x804c6ce00: driver: script: PRE-DLE-BACKUP st106808
>> Wed Sep 24 00:19:18 2014: thd-0x804c6ce00: driver: script: We need to backup that one st106808
>> ...
>> Wed Sep 24 01:05:54 2014: thd-0x804c6ce00: driver: script: 2014-09-23 18:05:52 -- info: == ghettoVCB LOG END
>> Wed Sep 24 01:05:54 2014: thd-0x804c6ce00: driver: script:
>> Wed Sep 24 01:05:54 2014: thd-0x804c6ce00: driver: script: VMware snapshot OK for st106808
>>
>> Thank you,
>>
>> Olivier

--
Re: What timeout controls the scripts
Jean-Louis, There is no parameter for that timeout. It is a constant in the program. CONNECT_TIMEOUT in the server-src/chunker.c file Running the script on the client might fix the issue execute-where client Even runing on the client it will take more than 300 seconds, would the chunker wait on the script to finish on the client? Thank you, Olivier Jean-Louis On 09/24/2014 12:04 AM, Olivier Nicole wrote: Hi, For one specific DLE, I need to do a snapshot, through a pre-del-backup script. But the script takes quite some time to complete (up to a couple of hours). What timeout should I define to make Amanda wait for the end of the script before it can chunk the result? Example for the DLE amanda /virtual/mybackups/st106808 chunker.20140924001916.debug times out after 300 seconds: Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: pid 86096 ruid 10014 euid 10014 version 3.3.2: start at Wed Sep 24 00:19:16 2014 Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: pid 86096 ruid 10014 euid 10014 version 3.3.2: rename at Wed Sep 24 00:19:16 2014 Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: getcmd: START 20140924000501 Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: getcmd: PORT-WRITE 11-00015 /holding/20140924000501/amanda._virtual_mybackups_st106808.0 amanda 9efefbff1f /virtual/mybackups/st106808 0 1970:1:1:0:0:0 10485760 GNUTAR 500064 |;auth=bsd;srvcomp-best;index; ... 
Wed Sep 24 00:24:17 2014: thd-0x804c6ce00: chunker: stream_accept: timeout after 300 seconds Wed Sep 24 00:24:17 2014: thd-0x804c6ce00: chunker: putresult: 11 TRY-AGAIN Wed Sep 24 00:24:17 2014: thd-0x804c6ce00: chunker: critical (fatal): startup_chunker failed: error accepting header stream: Operation timed out and driver.20140924000501.debug, the script takes 45 minutes to complete: Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: driver: Spawning /usr/local/libexec/amanda/application/vmware vmware PRE-DLE-BACKUP --execute-where server --config normal --host amanda --disk /virtual/mybackups/st106808 --level 0 in pipeline Wed Sep 24 00:19:17 2014: thd-0x804c6ce00: driver: script: zorglub Wed Sep 24 00:19:17 2014: thd-0x804c6ce00: driver: script: PRE-DLE-BACKUP st106808 Wed Sep 24 00:19:18 2014: thd-0x804c6ce00: driver: script: We need to backup that one st106808 ... Wed Sep 24 01:05:54 2014: thd-0x804c6ce00: driver: script: 2014-09-23 18:05:52 -- info: == ghettoVCB LOG END Wed Sep 24 01:05:54 2014: thd-0x804c6ce00: driver: script: Wed Sep 24 01:05:54 2014: thd-0x804c6ce00: driver: script: VMware snapshot OK for st106808 Thank you, Olivier --
What timeout controls the scripts
Hi,

For one specific DLE, I need to do a snapshot, through a pre-dle-backup script. But the script takes quite some time to complete (up to a couple of hours).

What timeout should I define to make Amanda wait for the end of the script before it can chunk the result?

Example for the DLE amanda /virtual/mybackups/st106808

chunker.20140924001916.debug times out after 300 seconds:

Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: pid 86096 ruid 10014 euid 10014 version 3.3.2: start at Wed Sep 24 00:19:16 2014
Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: pid 86096 ruid 10014 euid 10014 version 3.3.2: rename at Wed Sep 24 00:19:16 2014
Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: getcmd: START 20140924000501
Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: chunker: getcmd: PORT-WRITE 11-00015 /holding/20140924000501/amanda._virtual_mybackups_st106808.0 amanda 9efefbff1f /virtual/mybackups/st106808 0 1970:1:1:0:0:0 10485760 GNUTAR 500064 |;auth=bsd;srvcomp-best;index;
...
Wed Sep 24 00:24:17 2014: thd-0x804c6ce00: chunker: stream_accept: timeout after 300 seconds
Wed Sep 24 00:24:17 2014: thd-0x804c6ce00: chunker: putresult: 11 TRY-AGAIN
Wed Sep 24 00:24:17 2014: thd-0x804c6ce00: chunker: critical (fatal): startup_chunker failed: error accepting header stream: Operation timed out

and driver.20140924000501.debug, where the script takes 45 minutes to complete:

Wed Sep 24 00:19:16 2014: thd-0x804c6ce00: driver: Spawning /usr/local/libexec/amanda/application/vmware vmware PRE-DLE-BACKUP --execute-where server --config normal --host amanda --disk /virtual/mybackups/st106808 --level 0 in pipeline
Wed Sep 24 00:19:17 2014: thd-0x804c6ce00: driver: script: zorglub
Wed Sep 24 00:19:17 2014: thd-0x804c6ce00: driver: script: PRE-DLE-BACKUP st106808
Wed Sep 24 00:19:18 2014: thd-0x804c6ce00: driver: script: We need to backup that one st106808
...
Wed Sep 24 01:05:54 2014: thd-0x804c6ce00: driver: script: 2014-09-23 18:05:52 -- info: == ghettoVCB LOG END
Wed Sep 24 01:05:54 2014: thd-0x804c6ce00: driver: script:
Wed Sep 24 01:05:54 2014: thd-0x804c6ce00: driver: script: VMware snapshot OK for st106808

Thank you,

Olivier

--
sendsize without gnutar
Hi,

I am looking at a poor man's way to backup VMware guest machines. I have a script that can take a snapshot, then move that snapshot to some disk where Amanda can save it (then remove the snapshot).

It's ugly because it does not support any kind of granularity and only allows full back-up. But it is still better than nothing.

Taking the snapshot will be run through Amanda pre and post scripts. But I am wondering what to do for the size estimate: my script can return the estimated size (the size of the allocated virtual disk).

Usually the size is computed by a gnutar --total... but I don't need to run it, I have no data to give it to read; in fact the snapshot will only be generated at dump time, if and only if the planner decides there should be a dump.

How do I arrange my configuration for a pre-script to return the estimated size, so that gtar --total is avoided?

Best regards,

Olivier

--
Re: sendsize without gnutar
Jean-Louis,

Thank you.

> Set estimate server in the dle.

But that means there is no estimate (at least the first time), while I could provide an estimate through a pre script, not through tar.

Best regards,

Olivier

> Jean-Louis
>
> On 09/15/2014 06:49 AM, Olivier Nicole wrote:
>> Hi,
>>
>> I am looking at the poorman way to backup VMware guest machines. I have
>> a script that can do a snapshot, then move that snapshot to some disk
>> where Amanda can save it (then remove the snapshot).
>>
>> It's ugly because it does not support any kind of granularity and only
>> allows full abck-up. But it is still better than nothing.
>>
>> Doing the snapshot will be run through Amanda pre and post scripts. But
>> I am wondering how to do for the size estimate: my script can return the
>> estimate size (the size of the allocated virtual disk).
>>
>> Usually the size if computed by a gnutar --total... but I don't need to
>> run it, I have no data to give it to read, in fact the snapshot will
>> only be generated at the dump time, if and only if the planner decides
>> there should be a dump.
>>
>> How to arrange my configuration for a pre-script to return the estimated
>> size and gtar --total be avoided?
>>
>> Best regards,
>>
>> Olivier
>> --
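For reference, "estimate server" is set per dumptype; a minimal sketch (the dumptype name is hypothetical, only the estimate keyword comes from the thread):

```
define dumptype vmware-snapshot {
    program "GNUTAR"
    estimate server    # planner uses recorded dump history instead of running gnutar
    compress server best
    index yes
}
```

With "estimate server" the planner guesses from previous runs, which is why the very first run has no real figure to work from, as noted above.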
Re: reconstruct tapelist from logs ?
Stefan,

I know it does not answer your question, but after each run of amdump, I run a script that rsyncs all the amanda files/config/indexes/etc. to a different server. I also keep a rotation of 10 backward copies of tapelist. Plus the whole information is emailed to myself (with automatic forward of my email to the email backup machine and my gmail account). That way, I am pretty sure I won't lose anything :)

The script is at http://www.cs.ait.ac.th/laboratory/amanda/amdatabase

I recently faced a mangled tapelist, but luckily, I only had to reconstruct the 4 or 5 last days. That is when I added the rotated copies of tapelist.

Best regards,

Olivier

--
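A post-amdump safeguard along those lines can be sketched as below (paths, host names, and the retention count are illustrative; the real script lives at the URL above):

```shell
#!/bin/sh
# Keep N rotated copies of the tapelist file; tapelist.1 is the newest.
rotate_tapelist() {
    dir=$1; keep=$2
    i=$((keep - 1))
    while [ "$i" -ge 1 ]; do
        [ -f "$dir/tapelist.$i" ] && mv "$dir/tapelist.$i" "$dir/tapelist.$((i + 1))"
        i=$((i - 1))
    done
    cp "$dir/tapelist" "$dir/tapelist.1"
}

# Example usage after an amdump run (config path and host are examples):
# rotate_tapelist /etc/amanda/daily 10
# rsync -a /etc/amanda/ backuphost:/srv/amanda-config/   # off-box copy
```

Rotating before the off-box rsync means the remote copy also carries the history, so a mangled tapelist can be rebuilt from any recent generation.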
Re: Question on the one-filesystem option
Gene,

> So in a roundabout way, tar's mouthyness coaxed me into buying a better
> printer. And if the excludes work tonight, I'm happy, camping or...

If I may... I think the solution suggested by Nathan does not lean toward the excludes.

Solving your problem by using a crafted exclude list that makes tar avoid all mount point directories and symlinks pointing to other filesystems has several drawbacks:

- most of the time, when you upgrade your system, you may have to update your exclude list, wondering once more why tar is spitting an error at you;
- in the future, a system upgrade may result in merging two file systems into one; your exclude will still apply, and that part will silently be ignored in your backup.

My understanding is that you should not play with the exclude list: let tar complain, let the complaints be logged (useful for later debugging), but ignore these complaints in the final report (now, I have no clue how to do that). This looks like a more sustainable solution to me.

Best regards,

Olivier

--
Re: amrecover works, normal amanda backup, logging connection refused
Gene, On Fri, Jul 18, 2014 at 9:26 PM, Gene Heskett ghesk...@wdtv.com wrote: Greeting Jean-Louis; Trying to figure out why amanda can't backup this machine, one of the things I noticed in /etc, is that on the shop box, which works, there is not an /etc/xinetd.d but it has an old-xinetd.d with a single stanza amanda file in it. An ls -lau shows that file, /etc/old-xinetd.d/amanda was apparently accessed a few minutes ago by my amcheck from the server. However, on the new install on the machine that is failing to allow the connection, there is an /etc/xinet.d, with an amanda file in it with an old last access date/time, was not 'touched' when I ran the amcheck. Its last access date/time is I believe, the date/time of the installation itself. That amanda-common is 2.6.1p1 IIRC. amcheck says: WARNING: lathe: selfcheck request failed: Connection refused There has been enough configuration done that amrecover on this machine works. There is a /var/backups/.amandahosts file, its a link to /etc/amandahosts BUT, in /etc/.amandahosts. I'll mv it to /etc/amandahosts. Ran amcheck, no change and that file was not accessed. What do I check next? netstat -na |grep 10080 You should see an UDP open on that port, else it means xinetd is not running/not listening for amanda. Olivier Thank you. Cheers, Gene Heskett -- There are four boxes to be used in defense of liberty: soap, ballot, jury, and ammo. Please use in that order. -Ed Howdershelt (Author) Genes Web page http://geneslinuxbox.net:6309/gene US V Castleman, SCOTUS, Mar 2014 is grounds for Impeaching SCOTUS
Re: amrecover works, normal amanda backup, logging connection refused
: DEBUG: 3859 {remove_disabled_services} removing echo 14/7/18@12:09:37: DEBUG: 3859 {remove_disabled_services} removing time 14/7/18@12:09:37: DEBUG: 3859 {remove_disabled_services} removing time Service defaults Bind = All addresses. Only from: All sites No access: No blocked sites No logging Service configuration: amanda id = amanda flags = IPv4 socket_type = stream Protocol (name,number) = (tcp,6) port = 10080 wait = no user = 34 group = 34 Groups = yes PER_SOURCE = -1 Bind = All addresses. Server = /usr/lib/amanda/amandad Server argv = amandad -auth=bsdtcp amdump amindexd amidxtaped Only from: All sites No access: No blocked sites No logging 14/7/18@12:09:37: ERROR: 3859 {activate_normal} bind failed (Address already in use (errno = 98)). service = amanda 14/7/18@12:09:37: ERROR: 3859 {cnf_start_services} Service amanda failed to start and is deactivated. 14/7/18@12:09:37: DEBUG: 3859 {cnf_start_services} mask_max = 0, services_started = 0 14/7/18@12:09:37: CRITICAL: 3859 {init_services} no services. Exiting... Reverted the only_from name change, then noted its commented out? But an amcheck now says connection reset by peer, and returns quickly. And the xinetd.d/amanda file was not touched by the amcheck run. I never used xinetd but once, but the amanda config file should not be touched at every run of amanda, it should only be loaded once when xinetd starts. So what you see here is normal. Olivier
Re: restore what, to undo amrmtape ?
What I do at the end of each dump, I run a script that: sends me the contents of tapelist by mail rsync /etc/amanda (the config directory) rsycn /var/amanda (where I have all the logs, curinfo, indexes, etc.) sends me the result of amadmin config export by mail I think I am pretty safe and could reconstruct my amanda server if anything happen. Olivier On Thu, Jun 5, 2014 at 12:12 AM, Debra S Baddorf badd...@fnal.gov wrote: Per Jean-Louis’ suggesting, I looked in oldlogs, and nothing had been moved there. I restored the whole log directory from the day before (in a scratch area), and compared things. This gave me the complete line to put back into the tapelist file. Upon careful perusal of files, I decided to also restore the curinfo/nodename/diskname/info for each DLE on the removed tape (since I had the table of contents, and knew which DLEs were affected.) The index/nodename/diskname/ files were not removed or affected. I’m doing a test restore (into a scratch area). Amanda DOES recognize that the files I asked for are on that tape, so I think it’s all good now! Restore looks good too. Yay! Deb Baddorf Fermilab On Jun 4, 2014, at 6:03 AM, Jean-Louis Martineau martin...@zmanda.com wrote: Debra, grep the label in logs/* and logs/oldlog/* If it is in logs/oldlog, mv it back to logs. Add the entry in the tapelist file, Jean-Louis On 06/04/2014 12:56 AM, Debra S Baddorf wrote: I accidentally amrmtaped a tape today. The tape is still intact, and I actually have a text file of the contents of it (TOC file.) So I can use DD and get the files back if I need them. But hey — I’m in charge of backup, doggonit — so what files can I restore so amanda will re-remember this tape? FWIW - it’s my monthly archive config, so it won’t be run again until this Saturday. (or later, if I have to delay it!)And my daily config should have the needed files to fix this. I think? Deb Baddorf Fermilab
Re: Problem with Amanda and perl
Thanks John,

> If you run your amcleanup with ktrace and look at the resulting kdump,
> you might be able to find where the loading process goes wrong (e.g.,
> perhaps something is still trying to load perl 5.14 modules).

It was something dirty in the FreeBSD installer for Amanda: apparently, it would not remove all previous files. Some stuff relating to the previous version of Perl was left in /usr/local/libexec/amanda, even after fully deinstalling Amanda, recompiling with the new version of Perl and fully reinstalling: the old libs were still there.

I resolved it by removing everything by hand.

Thanks again,

Olivier

> Olivier Nicole wrote at 21:37 +0700 on Mar 31, 2014:
>> Jean-Louis,
>>
>> On Mon, Mar 31, 2014 at 6:39 PM, Jean-Louis Martineau martin...@zmanda.com wrote:
>>> Olivier,
>>> Amanda must use the same perl it was compiled for. You must recompile
>>> amanda for the perl you installed or re-install the old perl.
>>
>> I tried that already. Updated perl, updated all perl modules,
>> updated/recompiled amanda; that's when the problem arose.
>>
>> Best regards,
>>
>> Olivier
>>
>>> Jean-Louis
>>>
>>> On 03/31/2014 07:25 AM, Olivier Nicole wrote:
>>>> Hi,
>>>>
>>>> After upgrading my system, perl and Amanda won't work together anymore.
>>>> The symptom is:
>>>>
>>>> $ amcleanup
>>>> Can't load '/usr/local/lib/perl5/site_perl/5.18/auto/Amanda/Debug/libDebug.so' for module Amanda::Debug: /usr/local/lib/amanda/libamglue-3.3.2.so: Undefined symbol PL_stack_sp at /usr/local/lib/perl5/5.18/mach/DynaLoader.pm line 190.
>>>> at /usr/local/lib/perl5/site_perl/5.18/Amanda/Debug.pm line 11.
>>>> Compilation failed in require at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config/FoldingHash.pm line 5.
>>>> BEGIN failed--compilation aborted at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config/FoldingHash.pm line 5.
>>>> Compilation failed in require at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config.pm line 750.
>>>> BEGIN failed--compilation aborted at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config.pm line 750.
>>>> Compilation failed in require at /usr/local/sbin/amcleanup line 25.
>>>> BEGIN failed--compilation aborted at /usr/local/sbin/amcleanup line 25.
>>>>
>>>> I have reinstalled perl as new, making sure that any local perl module
>>>> was properly reinstalled; with the previous 5.14 I had the same problem:
>>>>
>>>> $ perl -v
>>>> This is perl 5, version 18, subversion 2 (v5.18.2) built for amd64-freebsd-thread-multi
>>>> Copyright 1987-2013, Larry Wall
>>>> Perl may be copied only under the terms of either the Artistic License
>>>> or the GNU General Public License, which may be found in the Perl 5
>>>> source kit.
>>>> Complete documentation for Perl, including FAQ lists, should be found
>>>> on this system using man perl or perldoc perl. If you have access to
>>>> the Internet, point your browser at http://www.perl.org/, the Perl
>>>> Home Page.
>>>>
>>>> Amanda is 3.3.2
>>>>
>>>> System is:
>>>>
>>>> $ uname -a
>>>> FreeBSD amanda.cs.ait.ac.th 9.2-RELEASE-p3 FreeBSD 9.2-RELEASE-p3 #9 r263415: Thu Mar 27 12:11:23 ICT 2014 r...@amanda.cs.ait.ac.th:/usr/obj/usr/src/sys/GENERIC amd64
>>>>
>>>> Right now, I am at a loss about what I should be doing next; any help
>>>> is gladly welcome.
>>>>
>>>> TIA,
>>>>
>>>> Olivier
Problem with Amanda and perl
Hi, After upgrading my system, perl and Amanda won't work together anymore. The symptom is: $ amcleanup Can't load '/usr/local/lib/perl5/site_perl/5.18/auto/Amanda/Debug/libDebug.so' for module Amanda::Debug: /usr/local/lib/amanda/libamglue-3.3.2.so: Undefined symbol PL_stack_sp at /usr/local/lib/perl5/5.18/mach/DynaLoader.pm line 190. at /usr/local/lib/perl5/site_perl/5.18/Amanda/Debug.pm line 11. Compilation failed in require at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config/FoldingHash.pm line 5. BEGIN failed--compilation aborted at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config/FoldingHash.pm line 5. Compilation failed in require at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config.pm line 750. BEGIN failed--compilation aborted at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config.pm line 750. Compilation failed in require at /usr/local/sbin/amcleanup line 25. BEGIN failed--compilation aborted at /usr/local/sbin/amcleanup line 25. I have reinstalled perl from scratch, making sure that every local perl module was properly reinstalled; with the previous 5.14 I had the same problem: $ perl -v This is perl 5, version 18, subversion 2 (v5.18.2) built for amd64-freebsd-thread-multi Copyright 1987-2013, Larry Wall Perl may be copied only under the terms of either the Artistic License or the GNU General Public License, which may be found in the Perl 5 source kit. Complete documentation for Perl, including FAQ lists, should be found on this system using man perl or perldoc perl. If you have access to the Internet, point your browser at http://www.perl.org/, the Perl Home Page. Amanda is 3.3.2 System is: $ uname -a FreeBSD amanda.cs.ait.ac.th 9.2-RELEASE-p3 FreeBSD 9.2-RELEASE-p3 #9 r263415: Thu Mar 27 12:11:23 ICT 2014 r...@amanda.cs.ait.ac.th:/usr/obj/usr/src/sys/GENERIC amd64 Right now, I am at a loss about what I should be doing next; any help is gladly welcome. TIA, Olivier --
Re: Problem with Amanda and perl
Jean-Louis, On Mon, Mar 31, 2014 at 6:39 PM, Jean-Louis Martineau martin...@zmanda.com wrote: Olivier, Amanda must use the same perl it was compiled for. You must recompile amanda for the perl you installed or re-install the old perl. I tried that already. Updated perl, updated all perl modules, updated/recompiled amanda; that's when the problem arose. Best regards, Olivier Jean-Louis On 03/31/2014 07:25 AM, Olivier Nicole wrote: Hi, After upgrading my system, perl and Amanda won't work together anymore. The symptom is: $ amcleanup Can't load '/usr/local/lib/perl5/site_perl/5.18/auto/Amanda/Debug/libDebug.so' for module Amanda::Debug: /usr/local/lib/amanda/libamglue-3.3.2.so: Undefined symbol PL_stack_sp at /usr/local/lib/perl5/5.18/mach/DynaLoader.pm line 190. at /usr/local/lib/perl5/site_perl/5.18/Amanda/Debug.pm line 11. Compilation failed in require at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config/FoldingHash.pm line 5. BEGIN failed--compilation aborted at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config/FoldingHash.pm line 5. Compilation failed in require at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config.pm line 750. BEGIN failed--compilation aborted at /usr/local/lib/perl5/site_perl/5.18/Amanda/Config.pm line 750. Compilation failed in require at /usr/local/sbin/amcleanup line 25. BEGIN failed--compilation aborted at /usr/local/sbin/amcleanup line 25. I have reinstalled perl from scratch, making sure that every local perl module was properly reinstalled; with the previous 5.14 I had the same problem: $ perl -v This is perl 5, version 18, subversion 2 (v5.18.2) built for amd64-freebsd-thread-multi Copyright 1987-2013, Larry Wall Perl may be copied only under the terms of either the Artistic License or the GNU General Public License, which may be found in the Perl 5 source kit. Complete documentation for Perl, including FAQ lists, should be found on this system using man perl or perldoc perl. 
If you have access to the Internet, point your browser at http://www.perl.org/, the Perl Home Page. Amanda is 3.3.2 System is: $ uname -a FreeBSD amanda.cs.ait.ac.th 9.2-RELEASE-p3 FreeBSD 9.2-RELEASE-p3 #9 r263415: Thu Mar 27 12:11:23 ICT 2014 r...@amanda.cs.ait.ac.th:/usr/obj/usr/src/sys/GENERIC amd64 Right now, I am at a loss about what I should be doing next; any help is gladly welcome. TIA, Olivier
Re: can amanda auto-size DLE's?
sure that you cannot restore from it anymore (you could with amrestore, but it is tough). (And of course if you keep both the old DLE and the 2 split ones, you will have double backups.) Bests, Olivier Thanks for reading this long post! -M --
Re: Potential user - more questions
Michael, You may have noticed my answer to question 2a was not complete. I answered for a non-parted DLE. On second thought, as each part is exactly the size you configure (exact to the byte), I assume that each file cannot be an independent tar, so the DLE must be tar'ed into a big file first, which is then cut into chunks. I never had to manually restore a parted DLE (I would manually restore / and maybe /usr, which would give me a running Amanda, and then use Amanda to restore the data; my / and /usr are smaller than the size of parts); manually restoring a DLE is easy, even with a live CD, you really only need dd and tar (or dump/restore). It takes some precautions, but is pretty doable. Best regards, Olivier On Fri, Oct 11, 2013 at 4:16 AM, Michael Stauffer mgsta...@gmail.com wrote: Olivier and Jon, thanks for the helpful answers. I'm going to set up my redeployed backup system with Amanda. It seems enough easier than Bacula to make it worthwhile to make the switch, and I especially like the simple format of the dump files and the simple text indices for cataloging backups. I'm sure you'll hear from me more while I get things going! -M On Thu, Oct 10, 2013 at 12:45 AM, Jon LaBadie j...@jgcomp.com wrote: On Wed, Oct 09, 2013 at 06:27:48PM -0400, Michael Stauffer wrote: Hi again, I've got another batch of questions while I consider switching to Amanda: 1) catalog (indices) It seems the main catalog/database is stored in the index files. Is it straightforward to back these up? This doc (http://www.zmanda.com/protecting-amanda-server.html) suggests backing up these dirs/files to be able to restore an Amanda configuration (and presumably the backup catalog): /etc/amandates, /etc/dumpdates, /etc/amanda, /var/lib/amanda. There is no built-in way to do this in Amanda. The problem is that they are not complete, and still changing, until the backup is done. Several members of this list have described their home-grown techniques. 
2) Spanning and parts Say I split my 32TB of data into DLE's of 2-3TB. a) If I set a 'part' size of 150GB (10% of native tape capacity is what I saw recommended), what is the format of each part as it's written? Is each part its own tarfile? Seems that would make it easier to restore things manually. Traditional Amanda tape files, holding the complete tar or dump archive, are a 32KB header followed by the archive. Manual restoration is done with dd to skip the header and pipe the rest to the appropriate command line to restore the data. The header contains information identifying the contents, how they were created, and when. Parts alter this scheme only slightly. Each part still has a header. The header now includes info on which sequential part it is. The part name also identifies its location in the sequence. The data is simply a chunk of the complete archive. Manual restoration again is to strip the headers and pipe to the restore command. b) If a part spans two volumes, what's the format of that? Is it a single tarfile that's split in two? A part will NOT span two volumes. If the end of the media is reached, the part is restarted on the next volume. c) What's the manual restore process for such a spanned part? cat the two parts together and pipe to tar for extraction? 3) Restoring w/out Amanda I thought data was written to tape as tar files. But this page suggests a dumpfile is only readable by Amanda apps. Is a dumpfile something else? http://wiki.zmanda.com/index.php/Dumpfile I think the author meant there are no standard unix/linux commands that know the header + data layout. The dumpfiles can be handled with Amanda commands or, as described above, the operator can use standard commands when armed with knowledge of the layout. 4) holding disk and flushing I see how flushing can be forced when the holding disk has a certain % of tape size. Can a flush be forced every N days? 
The idea here would be to get data to tape at a minimum of every week or so, should successive incrementals be small. Dumping to holding disk without taping can be done. Then have a crontab entry to flush when you want. This can be done with a separate amflush command, or by varying amdump options. 5) alerting Is there a provision for email and/or other alerts on job completion or error, etc? Most Amanda admins have an amreport emailed to them at amdump or amflush completion. As the cron entry can be a shell script, you could customize greatly. Jon -- Jon H. LaBadie j...@jgcomp.com 11226 South Shore Rd. (703) 787-0688 (H) Reston, VA 20190 (609) 477-8330 (C)
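The manual-restore recipe Jon describes (a 32 KB Amanda header followed by the archive, stripped with dd and piped to tar) can be sketched end to end. Everything below is an illustration with made-up file names, simulating a tape file rather than using a real Amanda vtape:

```shell
#!/bin/sh
# Sketch of manual restoration: build a fake Amanda-style tape file
# (32 KB header + tar archive), then restore it with dd and tar.
# All paths and names here are illustrative, not real Amanda files.
set -e
work=$(mktemp -d) && cd "$work"
mkdir data && echo "hello" > data/file.txt
tar -cf archive.tar data
# An Amanda tape file is a 32 KB header followed by the archive:
dd if=/dev/zero of=tapefile bs=32k count=1 2>/dev/null
cat archive.tar >> tapefile
# Manual restore: skip the 32 KB header and pipe the rest to tar.
mkdir restore
dd if=tapefile bs=32k skip=1 2>/dev/null | tar -xf - -C restore
cat restore/data/file.txt
```

For a parted DLE the same idea applies per part: strip each part's 32 KB header and concatenate the results before piping the whole stream to tar, as question 2c above suggests.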
Connection reset by peer
Hi, I have a strange combination of errors: [data read: Connection reset by peer] [index read: Connection reset by peer] [missing size line from sendbackup] I googled for these, but they never show up combined like that. Below is the report and amdump. Thank you if you can tell me what is going on. Best regards, Olivier FAILURE DUMP SUMMARY: ufo1000 /web lev 0 FAILED [data read: Connection reset by peer] ufo1000 /web lev 0 FAILED [index read: Connection reset by peer] FAILED DUMP DETAILS: /-- ufo1000 /web lev 0 FAILED [data read: Connection reset by peer] sendbackup: start [ufo1000:/web level 0] sendbackup: info BACKUP=/usr/local/bin/gtar sendbackup: info RECOVER_CMD=/usr/local/bin/gtar -xpGf - ... sendbackup: info end ? dumper: strange [missing size line from sendbackup] \ /-- ufo1000 /web lev 0 FAILED [index read: Connection reset by peer] sendbackup: start [ufo1000:/web level 0] sendbackup: info BACKUP=/usr/local/bin/gtar sendbackup: info RECOVER_CMD=/usr/local/bin/gtar -xpGf - ... sendbackup: info end ? 
dumper: strange [missing size line from sendbackup] \ - amdump: start at Thu Nov 29 15:17:16 ICT 2012 amdump: datestamp 20121129 amdump: starttime 20121129151716 amdump: starttime-locale-independent 2012-11-29 15:17:16 ICT driver: pid 5309 executable /usr/local/libexec/amanda/driver version 3.3.2 planner: pid 5308 executable /usr/local/libexec/amanda/planner version 3.3.2 planner: build: VERSION=Amanda-3.3.2 planner:BUILT_DATE=Tue Nov 20 15:38:37 ICT 2012 BUILT_MACH= planner:BUILT_REV=4847 BUILT_BRANCH=community_3_3_2 CC=gcc planner: paths: bindir=/usr/local/bin sbindir=/usr/local/sbin planner:libexecdir=/usr/local/libexec/amanda planner:amlibexecdir=/usr/local/libexec/amanda planner:mandir=/usr/local/man AMANDA_TMPDIR=/tmp/amanda planner:AMANDA_DBGDIR=/tmp/amanda planner:CONFIG_DIR=/usr/local/etc/amanda DEV_PREFIX=/dev/ planner:RDEV_PREFIX=/dev/ DUMP=/sbin/dump planner:RESTORE=/sbin/restore VDUMP=UNDEF VRESTORE=UNDEF planner:XFSDUMP=UNDEF XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF planner:SAMBA_CLIENT=/usr/local/bin/smbclient planner:GNUTAR=/usr/local/bin/gtar COMPRESS_PATH=/usr/bin/gzip planner:UNCOMPRESS_PATH=/usr/bin/gzip LPRCMD=UNDEF MAILER=UNDEF planner:listed_incr_dir=/usr/local/var/amanda/gnutar-lists planner: defs: DEFAULT_SERVER=amanda.cs.ait.ac.th planner:DEFAULT_CONFIG=DailySet1 planner:DEFAULT_TAPE_SERVER=amanda.cs.ait.ac.th planner:DEFAULT_TAPE_DEVICE= NEED_STRSTR AMFLOCK_POSIX planner:AMFLOCK_FLOCK AMFLOCK_LOCKF AMFLOCK_LNLOCK AMANDA_DEBUG_DAYS=4 planner:BSD_SECURITY USE_AMANDAHOSTS CLIENT_LOGIN=amanda planner:CHECK_USERID HAVE_GZIP COMPRESS_SUFFIX=.gz planner:COMPRESS_FAST_OPT=--fast COMPRESS_BEST_OPT=--best planner:UNCOMPRESS_OPT=-dc READING CONF INFO... planner: timestamp 20121129151716 planner: tape_length is set from tape length (115343360 KB) * runtapes (5) == 576716800 KB planner: time 0.001: startup took 0.001 secs SENDING FLUSHES... ENDFLUSH SETTING UP FOR ESTIMATES... 
planner: time 0.001: setting up estimates for ufo1000:/web ufo1000:/web overdue 15666 days for level 0 setup_estimate: ufo1000:/web: command 0, options: nonelast_level 0 next_level0 -15666 level_days 1getting estimates 0 (-3) 1 (-3) -1 (-3) planner: time 0.001: setting up estimates took 0.000 secs GETTING ESTIMATES... driver: tape size 115343360 driver: adding holding disk 0 dir /holding size 467164160 chunksize 10485760 reserving 0 out of 467164160 for degraded-mode dumps driver: started dumper0 pid 5310 driver: send-cmd time 0.023 to dumper0: START 20121129151716 driver: started dumper1 pid 5311 driver: send-cmd time 0.023 to dumper1: START 20121129151716 driver: started dumper2 pid 5312 driver: send-cmd time 0.024 to dumper2: START 20121129151716 driver: started dumper3 pid 5313 driver: send-cmd time 0.025 to dumper3: START 20121129151716 driver: started dumper4 pid 5314 driver: send-cmd time 0.025 to dumper4: START 20121129151716 driver: started dumper5 pid 5315 driver: send-cmd time 0.026 to dumper5: START 20121129151716 driver: started dumper6 pid 5316 driver: send-cmd time 0.026 to dumper6: START 20121129151716 driver: started dumper7 pid 5317 driver: send-cmd time 0.027 to dumper7: START 20121129151716 driver: started dumper8 pid 5318 driver: send-cmd time 0.027 to dumper8: START 20121129151716 driver: started dumper9 pid 5319 driver: send-cmd time 0.028 to dumper9: START 20121129151716 driver: started dumper10 pid 5320 driver: send-cmd time 0.028 to dumper10: START 20121129151716 driver: started dumper11 pid 5321 driver: send-cmd time 0.035 to dumper11: START 20121129151716 driver: started dumper12 pid 5322 driver: send-cmd time 0.037 to dumper12: START 20121129151716 driver: started dumper13 pid 5323 driver: send-cmd time 0.037
Re: [Amanda-users] Help in installing Amanda Client in FreeBSD 8.0/9.0
Jose, I'm new to Amanda. I have installed Amanda Server on a CentOS but on trying to install it on the FreeBSD clients I have I am experiencing hitches. Any help you can accord will be highly appreciated. I am not sure what your problems are in installing the Amanda client on FreeBSD. Go to /usr/ports/misc/amanda-client, then run make, then make install. That should do it. Otherwise you need to be specific about the type of problems you are facing. Best regards, Olivier
Re: FreeBSD 8.3 killed my Amanda
: since NODATE 1353407966.281734: sendbackup: options `' 1353407966.281843: sendbackup: start: oak1000:/home/java lev 0 1353407966.281895: sendbackup: pipespawnv: stdoutfd is 50 1353407966.281915: sendbackup: Spawning /usr/bin/gzip /usr/bin/gzip --best in pipeline 1353407966.282331: sendbackup: gnutar: pid 35324: /usr/bin/gzip1353407966.282390: sendbackup: pid 35324: /usr/bin/gzip --best 1353407966.282655: sendbackup: doing level 0 dump as listed-incremental to '/usr/local/var/amanda/gnutar-lists/oak1000_home_java_0.new' 1353407966.283941: sendbackup: pipespawnv: stdoutfd is 6 1353407966.284178: sendbackup: Spawning /usr/local/libexec/amanda/runtar runtar normal /usr/local/bin/gtar --create --file - --directory /home/java --one-file-system --listed-incremental /usr/local/var/amanda/gnutar-lists/oak1000_home_java_0.new --sparse --ignore-failed-read --totals . in pipeline 1353407966.284885: sendbackup: Started index creator: /usr/local/bin/gtar -tf - 2>/dev/null | sed -e 's/^\.//' 1353407966.284996: sendbackup: gnutar: /usr/local/libexec/amanda/runtar: pid 35327 1353407966.285135: sendbackup: Started backup 1353408441.084565: sendbackup: critical (fatal): index tee cannot write [Broken pipe] I can put all the log files in some common place if needed. Best regards, Olivier On Mon, Nov 19, 2012 at 04:30:17PM +0700, Olivier Nicole wrote: Hi, I apologize for coming crying here, but since I updated my Amanda server to FreeBSD 8.3 (from 7.4), any big DLE will fail. I tried many versions of Amanda (2.5, 2.6, 3.3), with no success. Before I start sending debug, maybe there is an obvious action I have forgotten. I have tried, from the client side, to tar|gzip|ssh cat /dev/null the big DLE, and it went on with no problem.
Re: FreeBSD 8.3 killed my Amanda
Jon, I apologize for coming crying here, but since I updated my Amanda server to FreeBSD 8.3 (from 7.4), any big DLE will fail. I tried many versions of Amanda (2.5, 2.6, 3.3), with no success. Before I start sending debug, maybe there is an obvious action I have forgotten. I have tried, from the client side, to tar|gzip|ssh cat /dev/null the big DLE, and it went on with no problem. Best regards, Might there have been a udp/tcp change in the settings? In the settings of the operating system? It could be; that is why I tried to manually tar and send through SSH between the 2 machines, and it went through. Best regards, Olivier
Re: FreeBSD 8.3 killed my Amanda
Dear Jean-Louis, You should start with the beginning, what is the error message you get in the email report? Hostname: amanda.cs.ait.ac.th Org : CSIM Normal Config : normal Date: November 20, 2012 There are 3587383k of dumps left in the holding disk. They will be flushed on the next run. The next 5 tapes Amanda expects to use are: CSIM-set-041, CSIM-set-056, CSIM-set-007, CSIM-set-008, CSIM-set-009. FAILURE DUMP SUMMARY: oak1000 /home/java lev 0 FAILED [data timeout] oak1000 /home/java lev 0 FAILED [data timeout] 'data timeout', it is a good starting point. Can you tell me exactly which amanda version you are using on the server? Is it the released 3.3.2 or something more recent? I installed it from FreeBSD ports and it reports amanda-server-3.3.2,1. The port was dated October 2nd, 2012, the tarball being called amanda-3.3.2.tar.gz with an SHA256 of cb40e8aa601e3d106e7d78338b745e5b9c0cd41daaab7937fc23d1b4cf585424 amcheck reports the version 3.3.2 Best regards, Olivier
FreeBSD 8.3 killed my Amanda
Hi, I apologize for coming crying here, but since I updated my Amanda server to FreeBSD 8.3 (from 7.4), any big DLE will fail. I tried many versions of Amanda (2.5, 2.6, 3.3), with no success. Before I start sending debug, maybe there is an obvious action I have forgotten. I have tried, from the client side, to tar|gzip|ssh cat /dev/null the big DLE, and it went on with no problem. Best regards, Olivier
re: Need help with new architecture for NAS/NFS setup
Hi Brendon, I'm having trouble figuring out what type of new architecture to go with for an NFS share dir on a NAS. You could run two dumps in parallel, one with a tapecycle of 7 on your local disk and one with a tapecycle of 30 or 60 on your NAS. Best regards, Olivier I'm running Amanda version 2.6.0p2-14 on Fedora 10. My current architecture for Amanda is as follows: I'm running a set called DailySet1. My holding disk/vtapes are: /amanda/day0x/data/ where x is 1-8. So I have 8 vtapes in my holding disk. Here's my changer.conf: multieject 0 gravity 0 needeject 0 ejectdelay 0 statefile /var/lib/amanda/DailySet1/changer-status firstslot 1 lastslot 8 slot 1 file:/amanda/day01 slot 2 file:/amanda/day02 slot 3 file:/amanda/day03 slot 4 file:/amanda/day04 slot 5 file:/amanda/day05 slot 6 file:/amanda/day06 slot 7 file:/amanda/day07 slot 8 file:/amanda/day08 If I do a dump each night, I get a backup for each client in my disklist per day. That gives me about a week's worth of backups. Here's some info from the amanda.conf file: dumpcycle 7 days # the number of days in the normal dump cycle runspercycle 0 # the number of amdump runs in dumpcycle days tapecycle 8 tapes # the number of tapes in rotation runtapes 1 # number of tapes to be used in a single run of amdump tpchanger chg-multi # the tape-changer glue script changerfile /etc/amanda/DailySet1/changer.conf This works fine, however, it only does a week of backups. If I wanted to do a month of backups, I would have to multiply my current system X 4. So I would need a tapecycle of 30 (or so). My dumpcycle would remain 7 since I still want to do full dumps weekly. But in this case, I would need to expand my array of vtapes from /amanda/day01/ to /amanda/day30/. This is not possible since my holding disk area (the size of the system's usable hard disk space) is not big enough for all of those vtapes (and the data they would end up having in them) as it only has about 100 GB free and my backups are about 50GB per week. 
Thus, it was proposed that I mount some NFS space from another system (a NAS, for all intents and purposes, though it has a WORM architecture and is only used for archiving data with retention periods that delete old archives). So now I have 1TB of space mounted under /nfsbackup. How do I implement my architecture to only keep about a week (or even a day) of backups in the holding disk (locally on the system) but use the nfs storage space for archiving the rest of the old backups? The idea is that we keep 30 to 60 days worth of old backups on the NAS, but only the last day or few days locally on the backup server. How do I do that? Is it possible? What would be the general idea/layout? What directives would I need to change? Would I need to use multiple DailySets?? I'm totally stumped. Any advice would be greatly appreciated. Thanks -Brendon Martino
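The two-configs-in-parallel idea suggested at the top of this message could look roughly like the fragment below. This is only a hedged sketch: the config name, paths, and slot counts are invented, and it mirrors the chg-multi style already shown rather than prescribing it.

```
# Hypothetical second config, e.g. /etc/amanda/ArchiveSet/amanda.conf
dumpcycle 7 days      # still one full dump per week
runspercycle 0
tapecycle 30 tapes    # a month of vtapes, all living on the NAS
runtapes 1
tpchanger chg-multi
changerfile "/etc/amanda/ArchiveSet/changer.conf"
# changer.conf would then list slot 1 file:/nfsbackup/day01
# through slot 30 file:/nfsbackup/day30
```

The local DailySet1 keeps its short tapecycle on the holding disk, while the second config's vtapes all sit under the NFS mount, so only current dumps consume local space.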
Live recovery CD
Hi, I wonder if anyone ever put together an Amanda live recovery CD. There are many Linux distros that have a live CD; it would only need amrecover added to such a live CD to be up and running. Booting the live CD, one can repartition/reformat the disk, and amrecover will allow restoring the data. Best regards, Olivier
Fix the order of the dump in Amanda
Hi, I am using amanda 2.5.1 on the server; is there a way to tell Amanda that dumps of certain DLEs should be started as soon as possible? I am backing up some Unix machines and some Windows machines via Samba. The backup via Samba takes a very long time, so I would like to have those started very early (so they are finished before the users come back to work the next morning). Best regards, Olivier
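One knob worth checking for this is the dumptype priority setting, which lets the planner favor some DLEs when scheduling. The fragment below is only a hedged sketch (the dumptype name is invented, and the exact behavior on 2.5.1 should be verified against its amanda.conf man page):

```
# In amanda.conf: give the slow Samba DLEs a high-priority dumptype
# so they are scheduled early (dumptype name "samba-early" is illustrative).
define dumptype samba-early {
    comp-user-tar
    priority high
}
```

Note that priority mainly affects what gets dumped when resources are tight; it does not strictly guarantee start order.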
Re: how large must the data volume be...
Hi, ...so that tape drives become more cost-effective than storing everything on HDs? From my past experience, a 50GB SLR100 tape costs $100, while for that price I can have a 500GB disk... Olivier
Re: Web interface for Amanda
Hi Marc, This interface is written in PHP (with a small part in Perl) and should run on the Amanda server, under the Amanda user and group. Is it a good idea to let a web based application run with the access rights of the user that collects the data of all my servers? Wouldn't it be better, at least, to set the rights of the files you edit so that the webserver user can only change them? I think, if the webserver can edit .amandahosts, a security hole in your script may allow someone to add his computer and restore all data. It's bad enough if only the passwords of the shares are stolen. If only you knew how it was done when it was written in Perl :) The problem is that amandapass must be mode 600 and belong to amanda:amanda. If the web server does not run as amanda:amanda it needs some mechanism (home made?) to allow editing the file. That mechanism could introduce other security threats. The web server running the interface runs only that interface and is not accessible/visible from outside. OK, that is not full security, but then the Amanda server is not physically secured either; any local user can access the machine and could steal the hard disk. Or steal one of the disks holding the virtual tapes. Also one point about your documentation: Step 4: Turn off the firewall In some cases, Windows XP will not let you access your PC with this interface. You must turn off your firewall. Why have a firewall, if users are (told to) always deactivate it on every problem? Isn't a firewall totally useless if it is deactivated (even if only for a short time)? I think a better hint would be to ensure that the subnet of the server, or better the backup host itself, is the only one allowed to access the share. The default for the file shares rule is the current subnet. This is normally the problem, if the server is in a different subnet and can't access the client. 
I was too lazy to dig it out, plus it contains a lot of Microsoft uncertainty, but the first time I tried to access a new machine (new XP install, SP3), smbclient would not connect. Without the firewall it did connect. Later on, it would connect even with the firewall activated. My chest of magic spells being currently empty (debugging of XP), I chose to leave it like that. It happened once, so it may happen to others... Thanks for the remarks, Olivier
Re: Web interface for Amanda
Write the entries to amandapass~ and before your cronjob starts amdump, cp amandapass~ amandapass Good idea. any local user can access the machine and could steal the hard disk. Or steal one of the disk holding the virtual tapes. Aren't the doors locked? Or the servers secured in a different way? No :) But we are only a computer science department in a University, we are not manipulating any sensitive information. Bests, Olivier
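The staging idea above (the web app writes amandapass~, and cron installs it with strict permissions just before amdump) might look like the crontab fragment below. This is only an illustration: the paths, times, and config name are invented, and the real amandapass location varies by install.

```
# crontab for the amanda user (all paths/times illustrative)
# 00:55 - install the staged password file with the mode amandapass requires
55 0 * * * cp /usr/local/etc/amandapass~ /usr/local/etc/amandapass && chmod 600 /usr/local/etc/amandapass
# 01:00 - run the nightly dump
0 1 * * * /usr/local/sbin/amdump DailySet1
```

This keeps the web server able to write only the staging copy, while the file Amanda actually reads stays mode 600 and owned by the Amanda user.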
Web interface for Amanda
Hi, This web interface (http://www.cs.ait.ac.th/laboratory/amanda/index.shtml) can be used to let your users register their Microsoft Windows PC for Amanda backup. First the user has to set up a special backup user and to define shares to be accessible for backup. Then, using the interface, the user can add the shares to Amanda, as incremental backup or as full backup only. The interface script will modify the DLE and amandapass files accordingly. This interface is written in PHP (with a small part in Perl) and should run on the Amanda server, under the Amanda user and group. So far the interface has been tested for Windows 2000 and Windows XP only. It should work with Windows NT; the part about old Windows 98 exists only as a phantom. I will add Windows Vista as soon as I have the need for it. I once wrote this interface in Perl and I recently ported it to PHP. I have no distribution package, it was written for our internal usage only, but you can contact me, I will be more than happy to share my work with you. Bests, Olivier
Re: amandad args in inetd.conf
On Tue, Sep 23, 2008 at 10:27:26AM -0600, [EMAIL PROTECTED] wrote: Where are the docs for what args need to be added to amandad in inetd.conf? I added amindexd and amidxtaped on the backup server in order to do amrecover, but then amcheck failed (needed noop, then selfcheck). Then amdump failed (needed sendsize, ...). I think you have to populate your ~amandauser/.amandahosts with something like that (amandauser = operator for me): yourHost operator amdump From amanda(8): amdump is a shortcut for noop selfcheck sendsize sendbackup I see the full list in amandad.c, and I think I understand why clients don't need the addition. It defaults to all services except amindexd and amidxtaped being active. But when you activate those two on the server, it seems it _de_activates the others. I myself configured inetd.conf like that: $ grep amandad /etc/inetd.conf amandad dgram udp wait operator /usr/local/libexec/amanda/amandad amandad amindexd amidxtaped Later, -- Olivier Cherrier - Symacx.com mailto:[EMAIL PROTECTED]
Syntax error in chg-disk
Hi, I am using chg-disk on Amanda 2.5.1p3 on FreeBSD, and while I was trying it by hand, I found a Bourne shell syntax error: around line 83, it should read: if test X$TAPE = X; then with a single = instead of the == operator found in the script. I am not sure whether it is an error in the Amanda distribution or in the port made for FreeBSD. Best regards, Olivier
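The reason the fix matters: POSIX test(1) defines only = for string comparison; == is a bash/ksh extension that a strict Bourne shell rejects. A minimal sketch of the corrected comparison (the variable value is made up for illustration):

```shell
#!/bin/sh
# Portable string comparison, as in the corrected chg-disk line.
# "test X$TAPE = X" is the classic idiom for "is $TAPE empty?":
# the X prefix guards against empty or dash-leading values.
TAPE=""
if test "X$TAPE" = "X"; then
    result="no tape loaded"
else
    result="tape: $TAPE"
fi
echo "$result"
```

With == instead of =, shells such as FreeBSD's /bin/sh report an error like "test: ==: unexpected operator", which matches the symptom described above.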
Changer policy
Hi, I have a brand new Amanda server that uses virtual tapes. My virtual tapes are set on external hard disks, but I can only physically connect one disk at a time. I use runtapes 5. So I can end up in a situation where I have 2 vtapes left on a disk, and where I must change disks in the middle of a dump. I believe this is close to the situation of a tape carousel, where one wants to reload the carousel in the middle of a dump. I'd like to know if the model below makes sense with Amanda; does it break the general idea or not? Will it work if I write a changer that would wait for a manual change of the disk? The dumps would finish to the holding disks and the taper would wait for a manual change of the disk before finishing to write on the tape (the wait could be several hours). I see that amcheck physically tries to access the tapes, so in a situation with 2 tapes left on a disk and 3 tapes from the new disk, amcheck would request the user to change disks, which is not really necessary. Is there any other command that is likely to access the changer (besides amdump/amflush and restore)? Is there a command to get the label of the tape loaded in the tape device? Does there exist a changer that allows the manual change of a carousel, so I don't have to redesign everything from scratch? (I mean the Amanda philosophy, not the exact way to load/eject a carousel or mount/unmount a disk.) Best regards, Olivier
Re: Changer policy
Hi Gerrit, I use tpchanger chg-disk # VTAPES the tape-changer glue script So do I, for the first n vtapes on the first disk, but chg-disk cannot access the vtapes n+1 to 2n that are on the second disk: this second disk should be manually loaded first. Olivier
Re: Problem Backing Up NFS SmartStor
Hi, I have a Promise SmartStor, a network drive. It's NFS mounted on a Red Hat server. Sometimes it gets backed up, other times I get: whimsy.med.utah.edu/sstore/9gb lev 0 FAILED [dumper returned FAILED] whimsy.med.utah.edu/sstore/9gb lev 0 FAILED [data timeout] whimsy.med.utah.edu/sstore/9gb lev 0 FAILED [cannot read header: got 0 instead of 32768] whimsy.med.utah.edu/sstore/9gb lev 0 FAILED [too many dumper retry: [request failed: timeout waiting for REP]] Just a wild guess, but a Google search yesterday on cannot read header: got 0 instead of led me to problems with IPv6/IPv4. I recompiled Amanda to use only IPv4 (./configure --without-ipv6) and the problem is gone. To further diagnose the problem, look at the debug files; you would see that one side of Amanda opened a socket on IPv6: amandad.20080821100912.debug:amandad: time 2.679: stream_server: waiting for connection: ::.51629 while the other side tries to connect on IPv4: dumper.20080821100912.debug:dumper: connected to 10.41.170.14.51629 I used information from http://archive.netbsd.se/?ml=pkgsrc-usersa=2008-02t=6414321 As you don't mention anything about the operating system of the server and the client, I don't know if that helps. I am running FreeBSD 6.3 on both sides. Best regards, Olivier
What tapedev is used by Amanda
Hi, Does Amanda use the tapedev defined in amanda.conf or the tapedev returned by the changer? If Amanda uses the tapedev returned by the changer, I think that chg-disk could be rewritten to avoid using symlinks, returning the directory of the slot each time, and so could work with file systems that do not implement symlinks (NTFS/FAT32 USB disks). Best regards, Olivier
amanda-2.5.1p3 ignores no-reuse
Hello, I am setting up a new Amanda server. Using virtual tapes on USB disks, it seems to be fast enough (20 to 40 MBps) depending on the hardware, the speed of the holding disks, etc. Faster than the dump, so no problem. I am currently hit by the IPv6-only problem on the client side, but that one I identified and hope to solve by reinstalling Amanda for IPv4 only. The problem is the following: I have 3 USB disks for virtual tapes, and only one is connected at a given time, so in tapelist I marked the tapes that are not accessible with no-reuse. They cannot be used because the disk is not online. Despite this, the report at the end of a dump mentions that Amanda will try to use them: These dumps were to tape CSIM-set-1-06. The next 5 tapes Amanda expects to use are: 5 new tapes. The next 5 new tapes already labelled are: CSIM-set-1-07, CSIM-set-1-08, CSIM-set-1-09, CSIM-set-2-01, CSIM-set-2-02. The tapes CSIM-set-2-01 and CSIM-set-2-02 are on a disk that is not online and they are marked no-reuse: 0 CSIM-set-2-02 no-reuse 0 CSIM-set-2-01 no-reuse 0 CSIM-set-1-09 reuse 0 CSIM-set-1-08 reuse 0 CSIM-set-1-07 reuse Is that a feature? Best regards, Olivier
Server to client connection refused
Hi, Since I upgraded the Amanda client to 2.5.2p1, I cannot back up that specific client. The server is running 2.4.2p2 and working well with many different versions of clients. The sendsize is OK, but the sendbackup fails. On the server side I see:

  driver: send-cmd time 1268.504 to dumper2: FILE-DUMP 00-00029 /holding1/20071030/ufo1000._usr.0 ufo1000 /usr 0 1970:1:1:0:0:0 1048576 GNUTAR 5057184 |;bsd-auth;srvcomp-best;index;
  driver: state time 1268.506 free kps: 2532277 space: 28302702 taper: DOWN idle-dumpers: 17 qlen tapeq: 0 runq: 12 roomq: 0 wakeup: 86400 driver-idle: client-constrained
  driver: interface-state time 1268.506 if : free 110 if DISK: free 32 if E100S: free 8 if E10S: free 79868 if E1G: free 796013 if E100M: free 76396 if E10M: free 8
  driver: hdisk-state time 1268.506 hdisk 0: free 10602966 dumpers 6 hdisk 1: free 0 dumpers 0 hdisk 2: free 8853920 dumpers 2 hdisk 3: free 8845816 dumpers 5
  driver: result time 1268.506 from dumper13: RQ-MORE-DISK 13-00014
  driver: send-cmd time 1268.506 to dumper13: CONTINUE /holding3/20071030/cluster._state_partition1_home.2 1048576 2752
  driver: state time 1268.508 free kps: 2532277 space: 28299950 taper: DOWN idle-dumpers: 17 qlen tapeq: 0 runq: 12 roomq: 0 wakeup: 86400 driver-idle: client-constrained
  driver: interface-state time 1268.508 if : free 110 if DISK: free 32 if E100S: free 8 if E10S: free 79868 if E1G: free 796013 if E100M: free 76396 if E10M: free 8
  driver: hdisk-state time 1268.508 hdisk 0: free 10600214 dumpers 6 hdisk 1: free 0 dumpers 0 hdisk 2: free 8853920 dumpers 2 hdisk 3: free 8845816 dumpers 5
  dumper: stream_client: connect(64324) failed: Connection refused
  driver: result time 1268.530 from dumper2: TRY-AGAIN 00-00029 [could not connect to data port: Connection refused]

And on the client:

  sendbackup: debug 1 pid 91922 ruid 14 euid 14: start at Tue Oct 30 01:11:10 2007
  sendbackup: version 2.5.2p1
  Could not open conf file /usr/local/etc/amanda/amanda-client.conf: No such file or directory
  sendbackup req: GNUTAR /usr 0 1970:1:1:0:0:0 OPTIONS |;bsd-auth;srvcomp-best;index;
  parsed request as: program `GNUTAR' disk `/usr' device `/usr' level 0 since 1970:1:1:0:0:0 options `|;bsd-auth;srvcomp-best;index;'
  sendbackup: start: ufo1000:/usr lev 0
  sendbackup-gnutar: time 0.000: doing level 0 dump as listed-incremental to '/usr/local/var/amanda/gnutar-lists/ufo1000_usr_0.new'
  sendbackup-gnutar: time 0.002: doing level 0 dump from date: 1970-01-01 0:00:00 GMT
  sendbackup: time 0.002: spawning /usr/local/libexec/amanda/runtar in pipeline
  sendbackup: time 0.002: argument list: runtar NOCONFIG gtar --create --file - --directory /usr --one-file-system --listed-incremental /usr/local/var/amanda/gnutar-lists/ufo1000_usr_0.new --sparse --ignore-failed-read --totals .
  sendbackup-gnutar: time 0.003: /usr/local/libexec/amanda/runtar: pid 91925
  sendbackup: time 0.003: started backup
  sendbackup: time 0.009: started index creator: /usr/local/bin/mygtar -tf - 2>/dev/null | sed -e 's/^\.//'
  sendbackup: time 50.112: index tee cannot write [Broken pipe]
  sendbackup: time 50.112: pid 91924 finish time Tue Oct 30 01:12:00 2007

I have a hard time figuring out why the connection breaks after only 50 seconds. There is no firewall between the client and the server. TIA, Olivier
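[Editor's note] A quick way to separate a blocked or unreachable data port from an Amanda-level problem is to probe the port the dumper tried (64324 in the log above) from the other machine. This is a bash-only sketch: `/dev/tcp/host/port` is a bash redirection feature, not a real file, and the host name below is just the one from the log.

```shell
#!/bin/bash
# Probe a TCP port; prints "open" or "refused". Uses bash's /dev/tcp.
check_port() {
    host=$1
    port=$2
    # The subshell opens (and on exit closes) fd 3 to host:port.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
        echo open
    else
        echo refused
    fi
}

# e.g. from the server, against the client data port from the log:
# check_port ufo1000 64324
```

If this reports "refused" while no firewall is in place, the next things to check are the port ranges the two sides were built with (the 2.4 server and 2.5.2p1 client may have been compiled with different --with-portrange/--with-tcpportrange settings, which is an assumption worth verifying, not a confirmed diagnosis).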