RE: amflush

2017-04-11 Thread Ochressandro Rettinger

In fact, I am now sure that it's not doing anything.  I ran amstatus 
this morning and it looks exactly the same as it did yesterday afternoon.

If I can't get amflush to work, is there a way to clear out the stuff 
that needs flushing in a way that won't mess Amanda up?  I need to be able to 
run backups tonight.
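
(The amadmin "holding" subcommand looks like the closest thing I can find; a
sketch, assuming its man page description applies to 3.4, and noting that
"holding delete" discards those dumps permanently:

amadmin NMHPVPR holding list
amadmin NMHPVPR holding delete fileserver2
)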

-Sandro


-Original Message-
From: owner-amanda-us...@amanda.org [mailto:owner-amanda-us...@amanda.org] On 
Behalf Of Ochressandro Rettinger
Sent: Monday, April 10, 2017 4:31 PM
To: Nathan Stratton Treadway ; amanda-users@amanda.org
Subject: Re: amflush


I'm not sure it's doing anything.

[amandabackup@archivist NMHPVPR]$ amstatus NMHPVPR
Using: /var/lib/amanda/NMHPVPR/state/log/amdump
From Mon Apr 10 15:21:05 MDT 2017

fileserver2:/Hope_IT   20170408010017 1  1252k flushing (0k done 
(0.00%)) (15:21:10)
fileserver2:/Hope_Secure   20170408010017 1  2006k wait for flushing
fileserver2:/Hope_Shared   20170408010017 1   876k wait for flushing
fileserver2:/Hope_Students 20170408010017 0  17825086k wait for flushing
fileserver2:/slash 20170408010017 0   1192436k wait for flushing
pr-db2:/slash  20170408010017 0 179616325k wait for flushing
pr-db2test:/slash  20170408010017 0  11823469k wait for flushing

SUMMARY           dle       real  estimated
                            size       size
---------------   ---  ---------  ---------
disk            :   0
estimated       :   0         0k
flush           :   7  210461452k
dump failed     :   0         0k   (  0.00%)
wait for dumping:   0         0k   (  0.00%)
dumping to tape :   0 0k 0k (  0.00%) (  0.00%)
dumping :   0 0k 0k (  0.00%) (  0.00%)
dumped  :   0 0k 0k (  0.00%) (  0.00%)
wait for writing
wait to flush   :   6  210460199k  210460199k (100.00%) (  0.00%)
writing to tape :   1  1252k  1252k (100.00%) (  0.00%)
dumping to tape
failed to tape
taped

10 dumpers idle : no-dumpers
NMHPVPR qlen: 6
   0: flushing (fileserver2:/Hope_IT)

network free kps: 8
holding space   : 2097152k (100.00%)
 0 dumpers busy :  0:00:17  (100.00%)  no-dumpers:  0:00:12  ( 70.32%)
 not-idle:  0:00:05  ( 29.68%)


-Sandro


From: Nathan Stratton Treadway 
Sent: Monday, April 10, 2017 4:26 PM
To: amanda-users@amanda.org
Cc: Ochressandro Rettinger
Subject: Re: amflush

On Mon, Apr 10, 2017 at 21:56:40 +, Ochressandro Rettinger wrote:
> Is there a way to check to see how far along amflush is?

the "amstatus" command.

Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239




Re: amflush

2017-04-11 Thread Jean-Louis Martineau
It looks like the taper process is hung.

Can you post the taper debug file?
Can you get a gdb stacktrace of all threads?

Jean-Louis

On 11/04/17 10:12 AM, Ochressandro Rettinger wrote:
>   In fact, I am now sure that it's not doing anything.  I ran amstatus
> this morning and it looks exactly the same as it did yesterday afternoon.
>
> [...]


RE: amflush

2017-04-11 Thread Ochressandro Rettinger
Taper debug file:

Mon Apr 10 15:21:13.388038789 2017: pid 16607: thd-0x1dcae00: taper: pid 16607 
ruid 9000 euid 9000 version 3.4: start at Mon Apr 10 15:21:13 2017
Mon Apr 10 15:21:13.388111083 2017: pid 16607: thd-0x1dcae00: taper: Arguments: 
NMHPVPR --storage NMHPVPR --log-filename 
/var/lib/amanda/NMHPVPR/state/log/log.20170410152105.0
Mon Apr 10 15:21:13.388421435 2017: pid 16607: thd-0x1dcae00: taper: reading 
config file /etc/amanda/NMHPVPR/amanda.conf
Mon Apr 10 15:21:13.389776453 2017: pid 16607: thd-0x1dcae00: taper: pid 16607 
ruid 9000 euid 9000 version 3.4: rename at Mon Apr 10 15:21:13 2017
Mon Apr 10 15:21:13.397097849 2017: pid 16607: thd-0x1dcae00: taper: 
Amanda::Taper::Scan::traditional stage 1: search for oldest reusable volume
Mon Apr 10 15:21:13.397251012 2017: pid 16607: thd-0x1dcae00: taper: 
Amanda::Taper::Scan::traditional oldest reusable volume is 'NMHPVPR0002'
Mon Apr 10 15:21:13.397341803 2017: pid 16607: thd-0x1dcae00: taper: 
Amanda::Taper::Scan::traditional changer is not fast-searchable; skipping to 
stage 2
Mon Apr 10 15:21:13.397421969 2017: pid 16607: thd-0x1dcae00: taper: 
Amanda::Taper::Scan::traditional stage 2: scan for any reusable volume
Mon Apr 10 15:21:13.401516306 2017: pid 16607: thd-0x1dcae00: taper: Device is 
in variable block size
Mon Apr 10 15:21:18.258290267 2017: pid 16607: thd-0x1dcae00: taper: Slot 1 
with label NMHPVPR0002 is usable
Mon Apr 10 15:21:18.258422019 2017: pid 16607: thd-0x1dcae00: taper: 
Amanda::Taper::Scan::traditional result: 'NMHPVPR0002' on tape:/dev/nst0 slot 
1, mode 2
Mon Apr 10 15:21:18.260327601 2017: pid 16607: thd-0x1dcae00: taper: 
Amanda::Taper::Scribe preparing to write, part size 0, using no cache (PEOM 
will be fatal) (splitter)  (no LEOM)
Mon Apr 10 15:21:18.260781324 2017: pid 16607: thd-0x1dcae00: taper: Starting 
 -> 
)>
Mon Apr 10 15:21:18.260819175 2017: pid 16607: thd-0x1dcae00: taper: Final 
linkage:  -(MEM_RING)-> 

Mon Apr 10 15:21:18.261417598 2017: pid 16607: thd-0x1dcae00: taper: header 
native_crc: 48f5f7aa:3983360
Mon Apr 10 15:21:18.261445517 2017: pid 16607: thd-0x1dcae00: taper: header 
client_crc: 48f5f7aa:3983360
Mon Apr 10 15:21:18.261456690 2017: pid 16607: thd-0x1dcae00: taper: header 
server_crc: 823a1732:1282787
Mon Apr 10 15:21:18.261557266 2017: pid 16607: thd-0x1dcae00: taper: 
start_recovery called
Mon Apr 10 15:21:18.274176799 2017: pid 16607: thd-0x1dcae00: taper: Building 
type TAPESTART header of 262144-262144 bytes with name='NMHPVPR0002' disk='' 
dumplevel=0 and blocksize=262144
Mon Apr 10 15:21:30.273418789 2017: pid 16607: thd-0x338a800: taper: Building 
type SPLIT_FILE header of 262144-262144 bytes with name='fileserver2' 
disk='/Hope_IT' dumplevel=1 and blocksize=262144


/usr/bin/perl amflush stack trace:

#0  0x7f478b5a9ecc in waitpid () from /lib64/libpthread.so.0
#1  0x7f478c5c2c1f in Perl_wait4pid () from /usr/lib64/perl5/CORE/libperl.so
#2  0x7f478c630e86 in Perl_pp_wait () from /usr/lib64/perl5/CORE/libperl.so
#3  0x7f478c5deba6 in Perl_runops_standard () from 
/usr/lib64/perl5/CORE/libperl.so
#4  0x7f478c57b9a5 in perl_run () from /usr/lib64/perl5/CORE/libperl.so
#5  0x00400d99 in main ()

/usr/libexec/amanda/driver stack trace:

#0  0x7fed83286de0 in __poll_nocancel () from /lib64/libc.so.6
#1  0x7fed837c104c in g_main_context_iterate.isra.24 () from 
/lib64/libglib-2.0.so.0
#2  0x7fed837c116c in g_main_context_iteration () from 
/lib64/libglib-2.0.so.0
#3  0x7fed83ae6aed in event_loop_wait () from 
/usr/lib64/amanda/libamanda-3.4.so
#4  0x00404da8 in main ()

/usr/bin/perl taper stack trace:

#0  0x7fcbe7187dfd in poll () from /lib64/libc.so.6
#1  0x7fcbe5a5904c in g_main_context_iterate.isra.24 () from 
/lib64/libglib-2.0.so.0
#2  0x7fcbe5a5916c in g_main_context_iteration () from 
/lib64/libglib-2.0.so.0
#3  0x7fcbe65dcb45 in event_loop_wait () from 
/usr/lib64/amanda/libamanda-3.4.so
#4  0x7fcbdfc698b0 in _wrap_run_c () from 
/usr/local/share/perl5/auto/Amanda/MainLoop/libMainLoop.so
#5  0x7fcbe84a742f in Perl_pp_entersub () from 
/usr/lib64/perl5/CORE/libperl.so
#6  0x7fcbe849fba6 in Perl_runops_standard () from 
/usr/lib64/perl5/CORE/libperl.so
#7  0x7fcbe843c9a5 in perl_run () from /usr/lib64/perl5/CORE/libperl.so
#8  0x00400d99 in main ()


-Sandro

From: Jean-Louis Martineau [mailto:jmartin...@carbonite.com]
Sent: Tuesday, April 11, 2017 8:37 AM
To: Ochressandro Rettinger ; Nathan Stratton Treadway 
; amanda-users@amanda.org
Subject: Re: amflush

It looks like the taper process is hung.

Can you post the taper debug file?
Can you get a gdb stacktrace of all threads?

Jean-Louis

On 11/04/17 10:12 AM, Ochressandro Rettinger wrote:
> In fact, I am now sure that it's not doing anything. I ran amstatus this 
> morning and it looks exactly the same as it did yesterday afternoon.
>
> If I can't get amflush to work, is there a way to clear out the stuff that
> needs flushing in a way that won't mess Amanda up?  I need to be able to
> run backups tonight.

Re: amflush

2017-04-11 Thread Jean-Louis Martineau

I want the stack trace of all threads, not only the main thread.
In gdb, for the taper process, do: thread apply all bt
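
For example, non-interactively (pid 16607 is the taper pid from your debug
file; assuming gdb is installed on the server):

gdb -p 16607 -batch -ex 'thread apply all bt'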

Jean-Louis


RE: amflush

2017-04-11 Thread Ochressandro Rettinger
Sorry, I’m definitely not a gdb power user.  I had to google to 
get that much.  :)

Thread 3 (Thread 0x7fcbdea14700 (LWP 16608)):
#0  0x7fcbe718cbf9 in syscall () from /lib64/libc.so.6
#1  0x7fcbe5a9b94f in g_cond_wait () from /lib64/libglib-2.0.so.0
#2  0x7fcbe07427ba in device_thread () from 
/usr/lib64/amanda/libamdevice-3.4.so
#3  0x7fcbe5a7e0f5 in g_thread_proxy () from /lib64/libglib-2.0.so.0
#4  0x7fcbe7463dc5 in start_thread () from /lib64/libpthread.so.0
#5  0x7fcbe719273d in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7fcbde213700 (LWP 16609)):
#0  0x7fcbe718cbf9 in syscall () from /lib64/libc.so.6
#1  0x7fcbe5a9b94f in g_cond_wait () from /lib64/libglib-2.0.so.0
#2  0x7fcbe099e3da in holding_thread () from 
/usr/lib64/amanda/libamserver-3.4.so
#3  0x7fcbe5a7e0f5 in g_thread_proxy () from /lib64/libglib-2.0.so.0
#4  0x7fcbe7463dc5 in start_thread () from /lib64/libpthread.so.0
#5  0x7fcbe719273d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7fcbe8984740 (LWP 16607)):
#0  0x7fcbe7187dfd in poll () from /lib64/libc.so.6
#1  0x7fcbe5a5904c in g_main_context_iterate.isra.24 () from 
/lib64/libglib-2.0.so.0
#2  0x7fcbe5a5916c in g_main_context_iteration () from 
/lib64/libglib-2.0.so.0
#3  0x7fcbe65dcb45 in event_loop_wait () from 
/usr/lib64/amanda/libamanda-3.4.so
#4  0x7fcbdfc698b0 in _wrap_run_c () from 
/usr/local/share/perl5/auto/Amanda/MainLoop/libMainLoop.so
#5  0x7fcbe84a742f in Perl_pp_entersub () from 
/usr/lib64/perl5/CORE/libperl.so
#6  0x7fcbe849fba6 in Perl_runops_standard () from 
/usr/lib64/perl5/CORE/libperl.so
#7  0x7fcbe843c9a5 in perl_run () from /usr/lib64/perl5/CORE/libperl.so
#8  0x00400d99 in main ()

-Sandro

From: Jean-Louis Martineau [mailto:jmartin...@carbonite.com]
Sent: Tuesday, April 11, 2017 9:00 AM
To: Ochressandro Rettinger ; Nathan Stratton Treadway 
; amanda-users@amanda.org
Subject: Re: amflush

I want the stack trace of all threads, not only the main thread.
In gdb, for the taper process, do: thread apply all bt

Jean-Louis




Re: amflush

2017-04-11 Thread Jean-Louis Martineau

Two threads in g_cond_wait is a bug.

Can you post your storage and changer section?

Jean-Louis

On 11/04/17 11:03 AM, Ochressandro Rettinger wrote:


Sorry, I’m definitely not a gdb power user.  I had to google to get
that much.  :)

[...]


Re: lots of level-0 backups

2017-04-11 Thread hymie
Jon LaBadie writes:
>On Mon, Apr 10, 2017 at 10:45:52PM -0400, hymie! wrote:
>> On Mon, Apr 10, 2017 at 04:14:25PM -0400, Jon LaBadie wrote:
>> > Do you mean you do not understand the mechanics of splitting a DLE
>> > or that you do not know what pieces to split off.  The former we
>> > can give some direction.  For the latter the "du -s" command can
>> > be used to find the size of subdirs.  For example, "du -sh /home/*"
>> > would tell you the size of each homedir.
>> 
>> I mean that I don't know, for a fact, that
>> 
>> michelle-laptop michelle-A /cygdrive/c/Users {
>> simple-gnutar-remote
>> exclude list "/usr/local/etc/amanda/MyConfig/exclude/michelle-laptop"
>> include "./michelle/[A-CE-Za-ce-z]*"
>> estimate calcsize
>> }
>>
>> will do what I want it to do.

FAILURE DUMP SUMMARY:
  michelle-laptop michelle-A lev 0  FAILED [missing result for michelle-A
  in michelle-laptop response]

>>newlaptop.local.net /home/hymie/2 /home {
>>   simple-gnutar-remote
>>   include "./hymie/[m-z]*"
>>}

  ? /usr/bin/tar: ./hymie/[m-z]*: Warning: Cannot stat: No such file
  or directory

So neither of my backups worked.  I haven't done any investigation yet
because I'm busy at work.   But to quote a movie, this was the kind of
thing I would have liked to know yesterday.  It would have been nice
to ask Amanda to generate a list of files to be backed up, and have
Amanda give me an error message in return.

>I've got two DLEs being backed up over wifi.  Their sizes are 11 & 4GB.
>After client compression they are 4 and 3GB, that is the size of data
>transfered over the wifi.  Most recently their dumps took 38 and 29 min,
>basically 10 min per GB transferred.  YMMV.

Right now, my two DLEs are 57GB and 173GB.  The compressed backups
are 35GB and 85GB.  So my compression is already working.  The doc says
"client fast" is the default compression, so it should be compressed
before it hits the wifi.  But I'd rather not have my wifi clogged
for 12-16 hours doing backups, even once a month.
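
(Running the arithmetic on my numbers: 35GB + 85GB is roughly 120GB
compressed, and at the ~10 min per transferred GB above that works out to
about 1200 minutes, i.e. around 20 hours.)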

--hymie!  http://lactose.homelinux.net/~hymie  hy...@lactose.homelinux.net


Re: lots of level-0 backups

2017-04-11 Thread Jean-Louis Martineau
On 11/04/17 11:41 AM, hy...@lactose.homelinux.net wrote:
> Jon LaBadie writes:
>> On Mon, Apr 10, 2017 at 10:45:52PM -0400, hymie! wrote:
>>> On Mon, Apr 10, 2017 at 04:14:25PM -0400, Jon LaBadie wrote:
>>>> Do you mean you do not understand the mechanics of splitting a DLE
>>>> or that you do not know what pieces to split off.  The former we
>>>> can give some direction.  For the latter the "du -s" command can
>>>> be used to find the size of subdirs.  For example, "du -sh /home/*"
>>>> would tell you the size of each homedir.
>>> I mean that I don't know, for a fact, that
>>>
>>> michelle-laptop michelle-A /cygdrive/c/Users {
>>>  simple-gnutar-remote
>>>  exclude list "/usr/local/etc/amanda/MyConfig/exclude/michelle-laptop"
>>>  include "./michelle/[A-CE-Za-ce-z]*"
>>>  estimate calcsize
>>>  }
>>>
>>> will do what I want it to do.
> FAILURE DUMP SUMMARY:
>michelle-laptop michelle-A lev 0  FAILED [missing result for michelle-A
>in michelle-laptop response]
>
>>> newlaptop.local.net /home/hymie/2 /home {
>>>simple-gnutar-remote
>>>include "./hymie/[m-z]*"
>>> }
>? /usr/bin/tar: ./hymie/[m-z]*: Warning: Cannot stat: No such file
>or directory

$ man amgtar
INCLUDE AND EXCLUDE LISTS
    Similarly, include expressions are supplied to GNU-tar's --files-from
    option. This option ordinarily does not accept any sort of wildcards,
    but amgtar "manually" applies glob pattern matching to include
    expressions with only one slash. The expressions must still begin with
    "./", so this effectively only allows expressions like "./[abc]*" or
    "./*.txt".

./hymie/[m-z]* has more than one slash, so it is used as a literal path with
no glob matching.
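
For example, of these two include expressions, only the first gets glob
matching; the second goes to tar as a literal path:

include "./[m-z]*"           # one slash: amgtar expands the glob
include "./hymie/[m-z]*"     # two slashes: no glob matching, literal path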


Jean-Louis
>
> [...]


Re: lots of level-0 backups

2017-04-11 Thread hymie
Jean-Louis Martineau writes:
>On 11/04/17 11:41 AM, hy...@lactose.homelinux.net wrote:

>>>> newlaptop.local.net /home/hymie/2 /home {
>>>>    simple-gnutar-remote
>>>>    include "./hymie/[m-z]*"
>>>> }
>>? /usr/bin/tar: ./hymie/[m-z]*: Warning: Cannot stat: No such file
>>or directory
>
>$ man amgtar

I'm not using amgtar, I'm using gnutar.  Not sure if that matters.

>INCLUDE AND EXCLUDE LISTS
>   Similarly, include expressions are supplied to GNU-tar's --files-from
>   option. This option ordinarily does not accept any sort of wildcards,
>   but amgtar "manually" applies glob pattern matching to include
>   expressions with only one slash. The expressions must still begin with
>   "./", so this effectively only allows expressions like "./[abc]*" or
>   "./*.txt".
>
>
>./hymie/[m-z]* has more than one slash, so it is used as a literal path
>with no glob matching.

Interesting.

So this seems to have worked for the backup (only one slash):

newlaptop.local.net /home/hymie/2 /home/hymie {
    simple-gnutar-remote
    include "./z*"
}

despite the documentation saying that the "diskdevice" must be a mount
point (it isn't anymore).

Recovery appears to work too, although I have to make sure that I use the
correct "setdisk" command.

Thank you -- you may have solved my problem.

--hymie!  http://lactose.homelinux.net/~hymie  hy...@lactose.homelinux.net


RE: amflush

2017-04-11 Thread Ochressandro Rettinger

From my amanda.conf?

tapetype "LTO6"
define tapetype LTO6 {
comment "Created by amtapetype; compression disabled"
length 2459879424 kbytes
   filemark 2684 kbytes
speed 154767 kps
blocksize 256 kbytes
}


define changer "tape_drive" {
tpchanger "chg-single:tape:/dev/nst0"
device-property "BLOCK_SIZE" "256 kbytes"
}

tpchanger "tape_drive"

What's weird is that this was working fine for nearly a month and a
half before it stopped.

Which log files should I look at to try to figure out why the
original backup failed?

-Sandro


From: Jean-Louis Martineau [mailto:jmartin...@carbonite.com]
Sent: Tuesday, April 11, 2017 9:27 AM
To: Ochressandro Rettinger ; Nathan Stratton Treadway 
; amanda-users@amanda.org
Subject: Re: amflush

Two threads in g_cond_wait is a bug.

Can you post your storage and changer section?

Jean-Louis

On 11/04/17 11:03 AM, Ochressandro Rettinger wrote:
Sorry, I’m definitely not a gdb power user.  I had to google to
get that much.  :)

[...]


Re: lots of level-0 backups

2017-04-11 Thread Jon LaBadie
On Tue, Apr 11, 2017 at 12:03:02PM -0400, hy...@lactose.homelinux.net wrote:
> Jean-Louis Martineau writes:
> >On 11/04/17 11:41 AM, hy...@lactose.homelinux.net wrote:
> 
> 
> despite the documentation saying that the "diskdevice" must be a mount
> point (it isn't anymore).
> 
Where was that?  It should be corrected.

jl
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)


Re: lots of level-0 backups

2017-04-11 Thread hymie
Jon LaBadie writes:
>On Tue, Apr 11, 2017 at 12:03:02PM -0400, hy...@lactose.homelinux.net wrote:
>> despite the documentation saying that the "diskdevice" must be a mount
>> point (it isn't anymore).
>> 
>where was that?  it should be corrected.

man disklist

   A DLE usually contains one line per disk:
   hostname diskname [diskdevice] dumptype [spindle [interface] ]

[...]

   diskdevice
   Default: same as diskname. The name of the disk device to be backed
   up. It may be a full device name, a device name without the /dev/
   prefix, e.g.  sd0a, or a mount point such as /usr.

--hymie!  http://lactose.homelinux.net/~hymie  hy...@lactose.homelinux.net


Error when attempting to write to multiple volumes: is already in use by drive

2017-04-11 Thread Oscar Ricardo Silva
I'm running Amanda 3.3.9 with vtapes and am trying to enable writing to
multiple volumes with:


taper-parallel-write 2


Sometimes, but not always, I'll get an error like this during the dump:

dump to tape failed: taper: Slot 5, containing 'daily-5', is already in 
use by drive '/amandatapes/vtapes/daily/drive1'




I have a filesystem /amandatapes mounted for vtapes, and amanda.conf
contains:



taper-parallel-write 2
tpchanger   "chg-disk:/amandatapes/vtapes/daily"
changerfile "/usr/local/amanda/etc/daily/chg-disk-status"
maxdumpsize -1
tapetype vtape
labelstr    "^daily-[0-9][0-9]*$"
amrecover_changer "chg-disk"

define tapetype vtape {
length 307000 mbytes
}



Looking around I found only one reference to it, in the Zmanda community
forums from November 10, 2011, but there was no resolution.  Anyone know
what might be causing this?  Do I have something configured improperly?



--
Oscar


Re: lots of level-0 backups

2017-04-11 Thread Jon LaBadie
On Tue, Apr 11, 2017 at 05:01:32PM -0400, hy...@lactose.homelinux.net wrote:
> Jon LaBadie writes:
> >On Tue, Apr 11, 2017 at 12:03:02PM -0400, hy...@lactose.homelinux.net wrote:
> >> despite the documentation saying that the "diskdevice" must be a mount
> >> point (it isn't anymore).
> >> 
> >where was that?  it should be corrected.
> 
> man disklist
> 
>A DLE usually contains one line per disk:
>hostname diskname [diskdevice] dumptype [spindle [interface] ]
> 
> [...]
> 
>diskdevice
>Default: same as diskname. The name of the disk device to be backed
>up. It may be a full device name, a device name without the /dev/
>prefix, e.g.  sd0a, or a mount point such as /usr.
> 

thanks,

That should be something like "or a directory pathname such as /usr,
typically a mount point."

jl
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)