backup/recover using tar and hard links
Hi All,

I'm using amanda 2.4.5 and a supported gnutar on Fedora. One of the servers we're backing up is a Cyrus IMAP server with a lot of mailboxes. Cyrus makes hard links when the same message is sent to multiple people (when they're on the same cyrus partition).

I'm trying to recover the contents of a mailbox, but it contains some hard links to mail messages in other mailboxes. Some of those other mailboxes (that contained the actual file) have been removed (deleted) in the past few months. The amrecover (tar, actually) program complains that it cannot hard link to the file because the other mailbox/file doesn't exist:

    tar: ./user/student5/626.: Cannot hard link to `./user/student2/555.': No such file or directory

I'm now facing two issues:
- how can I (easily) recover the rest (I could make dummy directories/files, but is there an easier way)?
- how can I make sure amanda will back up the actual file instead of only the hard link to it? Is there a flag/option I can add somewhere?

thanks in advance,
Br, Dennis
Re: hitting EOT early?
On 2007-10-17 14:05, Nick Brockner wrote:

  Thanks, I just did a "mt -f /dev/st0 compression 0"; "defcompression 0" didn't seem to be valid for this device (or nst0), so I guess I'll just reboot to be sure compression is off on the drive. I guess we'll see what happens tonight.

You did read, and understand, http://tech.groups.yahoo.com/group/amanda-users/message/60030 (the link mentioned in the wiki)? Because just those DAT drives have that problem (at least, that is where I first encountered it). I mean: it is NOT enough to disable compression with that command, because it sets the mode only for the next write to the drive. Amanda always first READS the inserted tape to verify the label, and by doing that, the tape drive is set to whatever mode the tape was written in. If that tape was written in compressed mode, the drive is in compressed mode again. A reboot will not help at all: it just resets the tape drive to its default mode, which is still subject to the automatic mode change when reading a tape.

To really verify that compression is off, you can use the command:

    amtapetype -c -f /dev/nst0

which takes only a few minutes.

  Nick

  Paul Bijnens wrote: On 2007-10-16 16:18, Nick Brockner wrote:

    Hi All, I am using amanda 2.5.1p1, and I have just started seeing this (with the failure of the DLE it is currently on):

        planner: Last full dump of HOSTNAME1:/home on tape overwritten in 1 run.
        taper: tape weekly-2/tape-3 kb 30827264 fm 28 writing file: No space left on device
        taper: retrying HOSTNAME2:/.0 on new tape due to: [writing file: No space left on device]
        taper: tape weekly-2/tape-6 kb 0 fm 0 [OK]

    I am using DAT72, so I should be getting 36 GB of usable space. Please help, as I can't get the spanning to work either, and when it tries to write to the next tape, it just dies for some reason...

  I bet you are using the tape drive in hardware compression mode and also have Amanda compress the data with software compression. The compression algorithm in DAT drives is stupid enough not to detect uncompressible data, and blindly applying the compression algorithm to such data actually expands it by about 20-30%. That would account for getting 30 GB instead of 36 GB of data on the tape. Solution: disable the hardware compression on the tape drive. http://wiki.zmanda.com/index.php/Hardware_compression

--
Paul Bijnens, Xplanation Technology Services -- Tel +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUM -- Fax +32 16 397.512
http://www.xplanation.com/ -- email: [EMAIL PROTECTED]
driver failed
Hi all,

I'm seeing an error I've never seen before, and Google didn't turn up anything useful. amstatus is reporting this for several file systems on different machines:

    ra:/var 0 driver: (aborted:[request failed: error sending REQ: send REQ to resource-assembly.permabit.com failed: Transport endpoint is already connected])(too many dumper retry)

Could this be a result of too many dumpers running? I've got maxdumps at 25 and inparallel set to 32. The UDP port range is only 840-860, which means I have at least 5 too few ports to bind to, right? Could the cause of these errors be that I'm simply trying to do too many dumps with too few ports? If so, then either increasing the number of ports or decreasing the number of simultaneous dumps ought to solve the problem, right?

--
Thanks,
Paul
Re: hitting EOT early?
Thanks, I just did a "mt -f /dev/st0 compression 0"; "defcompression 0" didn't seem to be valid for this device (or nst0), so I guess I'll just reboot to be sure compression is off on the drive. I guess we'll see what happens tonight.

Nick

Paul Bijnens wrote: On 2007-10-16 16:18, Nick Brockner wrote:

  Hi All, I am using amanda 2.5.1p1, and I have just started seeing this (with the failure of the DLE it is currently on):

      planner: Last full dump of HOSTNAME1:/home on tape overwritten in 1 run.
      taper: tape weekly-2/tape-3 kb 30827264 fm 28 writing file: No space left on device
      taper: retrying HOSTNAME2:/.0 on new tape due to: [writing file: No space left on device]
      taper: tape weekly-2/tape-6 kb 0 fm 0 [OK]

  I am using DAT72, so I should be getting 36 GB of usable space. Please help, as I can't get the spanning to work either, and when it tries to write to the next tape, it just dies for some reason...

I bet you are using the tape drive in hardware compression mode and also have Amanda compress the data with software compression. The compression algorithm in DAT drives is stupid enough not to detect uncompressible data, and blindly applying the compression algorithm to such data actually expands it by about 20-30%. That would account for getting 30 GB instead of 36 GB of data on the tape. Solution: disable the hardware compression on the tape drive. http://wiki.zmanda.com/index.php/Hardware_compression
Re: Hung processes w/ Amanda over SSH
Cameron:

I had a similar problem. I am backing up database servers with amanda, and because the database servers need to be backed up on different schedules, I have a configuration for each one, plus a master script that shuts down each database before amanda performs the backup and starts it again afterwards. My script was hanging the whole process even when the backup itself went fine; every morning I had to check on the console whether amanda had finished the database backups, because the amanda process was hung trying to send the email report. I decided to rewrite my script and took out many of the session switches it did: the start/shutdown part was done with SSH auth, but the backup is made with a regular amdump. After rewriting the script and removing a lot of su sessions, it is no longer hanging the amanda sessions.

I hope this can help you, Cameron. Have a great day!

mario

--
Mario Silva, Systems Administrator
Supreme Court of New Mexico, Judicial Information Division
2905 Rodeo Park Dr. East, Bldg. #5, Santa Fe, NM 87505
Phone: (505) 476-6959 / Mobile: (505) 660-1026 / Fax: (505) 476-6952
Website: http://www.nmcourts.gov
mailto: [EMAIL PROTECTED]

Cameron Matheson wrote:

  Hi Guys,

  I've installed Amanda v2.5.2p1 on my servers using SSH auth. The backups are working fine (dumps come in good, and I can restore without any trouble), but I'm seeing a whole bunch of hung ssh/amandad/tar processes on my clients. I'm not really clear on what's causing this (maybe it's the estimates, since the backups are coming in fine?). The only thing I've been able to find in my logs that looks odd is the following (taken from one of my clients):

      amandad: time 44.823: security_close(handle=0x9666220, driver=0xd20e60 (SSH))
      amandad: time 44.823: security_stream_close(0x9696b80)
      amandad: time 59524.148: security_stream_seterr(0x967ead8, write error to : Broken pipe)
      amandad: time 59524.163: sending NAK pkt: ERROR write error on stream 49: write error to : Broken pipe
      amandad: time 59524.163: security_stream_close(0x967ead8)
      amandad: time 59524.163: security_stream_seterr(0x967ead8, write error to : Broken pipe)
      amandad: time 59524.163: security_stream_close(0x9686b10)
      amandad: time 59524.163: security_stream_seterr(0x9686b10, write error to : Broken pipe)
      amandad: time 59524.163: security_stream_close(0x968eb48)
      amandad: time 59524.163: security_stream_seterr(0x968eb48, write error to : Broken pipe)
      amandad: time 59524.163: pid 19860 finish time Tue Oct 16 18:30:27 2007

  So I can see how that might cause a hung process (strace'ing the processes generally shows that they're read()ing on something indefinitely) -- but is there any way to avoid this?

  Thanks,
  Cameron
guntar-lists
Does anyone know how to change the location of the gnutar-lists directory on the clients? By default it is asking me to create it here:

    [can not read/write /usr/local/var/amanda/gnutar-lists/.: No such file or directory]

But I'd rather put it somewhere else. Not a biggie, I guess I can live with it :-). Thanks
Re: guntar-lists
Krahn, Anderson wrote at 11:56 -0500 on Oct 17, 2007:

  Does anyone know how to change the location of the gnutar-lists directory on the clients? By default it is asking me to create it in [can not read/write /usr/local/var/amanda/gnutar-lists/.: No such file or directory]. But I'd rather put it somewhere else. Not a biggie, I guess I can live with it :-).

configure --localstatedir=/some/where
Re: hitting EOT early?
I had to be root in order to change the defcompression setting using mt. I thought that the automatic setting of the compression mode applied only to reading data from a tape, not writing (and that the drive would revert to the default setting on a write operation)?

amtapetype -c -o -f /dev/nst0 reports compression as off (compressible and incompressible data take the same time). Thanks for your help thus far. We'll see tomorrow morning what amanda does.

Nick

Paul Bijnens wrote: On 2007-10-17 14:05, Nick Brockner wrote:

  Thanks, I just did a "mt -f /dev/st0 compression 0"; "defcompression 0" didn't seem to be valid for this device (or nst0), so I guess I'll just reboot to be sure compression is off on the drive. I guess we'll see what happens tonight.

You did read, and understand, http://tech.groups.yahoo.com/group/amanda-users/message/60030 (the link mentioned in the wiki)? Because just those DAT drives have that problem (at least, that is where I first encountered it). I mean: it is NOT enough to disable compression with that command, because it sets the mode only for the next write to the drive. Amanda always first READS the inserted tape to verify the label, and by doing that, the tape drive is set to whatever mode the tape was written in. If that tape was written in compressed mode, the drive is in compressed mode again. A reboot will not help at all: it just resets the tape drive to its default mode, which is still subject to the automatic mode change when reading a tape.

To really verify that compression is off, you can use the command:

    amtapetype -c -f /dev/nst0

which takes only a few minutes.

  Nick

  Paul Bijnens wrote: On 2007-10-16 16:18, Nick Brockner wrote:

    Hi All, I am using amanda 2.5.1p1, and I have just started seeing this (with the failure of the DLE it is currently on):

        planner: Last full dump of HOSTNAME1:/home on tape overwritten in 1 run.
        taper: tape weekly-2/tape-3 kb 30827264 fm 28 writing file: No space left on device
        taper: retrying HOSTNAME2:/.0 on new tape due to: [writing file: No space left on device]
        taper: tape weekly-2/tape-6 kb 0 fm 0 [OK]

    I am using DAT72, so I should be getting 36 GB of usable space. Please help, as I can't get the spanning to work either, and when it tries to write to the next tape, it just dies for some reason...

  I bet you are using the tape drive in hardware compression mode and also have Amanda compress the data with software compression. The compression algorithm in DAT drives is stupid enough not to detect uncompressible data, and blindly applying the compression algorithm to such data actually expands it by about 20-30%. That would account for getting 30 GB instead of 36 GB of data on the tape. Solution: disable the hardware compression on the tape drive. http://wiki.zmanda.com/index.php/Hardware_compression
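Paul's advice above boils down to a fixed sequence: reading the label can silently re-enable compression, so it must be disabled again after every read and before the next write. A dry-run sketch (the `run` stub only prints each command so nothing touches a real drive; the device path is an assumption, and `amtapetype -c` is the verification step named in the thread):

```shell
# Dry-run sketch of the sequence implied above: reading the label can
# re-enable compression, so turn it off again before the next write.
DEV=/dev/nst0                      # non-rewinding tape device (assumption)
run() { echo "+ $*"; }             # stub: print instead of execute

run mt -f "$DEV" rewind
run dd if="$DEV" bs=32k count=1 of=/dev/null   # reading the label may flip the mode
run mt -f "$DEV" compression 0                 # re-disable compression
run amtapetype -c -f "$DEV"                    # verify compression is really off
```

Replace the `run` stub with `"$@"` to execute the commands against a real drive.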
Re: guntar-lists
On 10/17/07, John E Hein [EMAIL PROTECTED] wrote:

  configure --localstatedir=/some/where

or, better yet:

  ./configure --with-gnutar-listdir=/some/where

but John's right -- you do have to recompile for this.

Dustin

--
Storage Software Engineer
http://www.zmanda.com
Re: guntar-lists
I managed to mangle the Subject: line, so here it is again...

* Dustin J. Mitchell [EMAIL PROTECTED] [20071017 14:03]:

  On 10/17/07, John E Hein [EMAIL PROTECTED] wrote: configure --localstatedir=/some/where

  or, better yet: ./configure --with-gnutar-listdir=/some/where

  but John's right -- you do have to recompile for this.

I thought that gnutar_list_dir in amanda-client.conf could be used to bypass the compile-time option...

jf
Re: guntar-lists
On 10/17/07, Jean-Francois Malouin [EMAIL PROTECTED] wrote:

  or, better yet: ./configure --with-gnutar-listdir=/some/where -- but John's right, you do have to recompile for this.

  I thought that gnutar_list_dir in amanda-client.conf could be used to bypass the compile-time option...

You're absolutely right -- thanks! I must have typo'd my 'grep' while looking for that...

Dustin

--
Storage Software Engineer
http://www.zmanda.com
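The client-side override confirmed in this exchange would look something like the sketch below in amanda-client.conf (the path is only an example; check your version's amanda-client.conf documentation for the exact quoting rules):

```
# amanda-client.conf -- override the compiled-in gnutar-lists location
# (example path; the directory must exist and be writable by the amanda user)
gnutar_list_dir "/var/lib/amanda/gnutar-lists"
```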
Re: driver failed
Paul Lussier wrote:

  Hi all, I'm seeing an error I've never seen before, and Google didn't turn up anything useful. amstatus is reporting this for several file systems on different machines:

      ra:/var 0 driver: (aborted:[request failed: error sending REQ: send REQ to resource-assembly.permabit.com failed: Transport endpoint is already connected])(too many dumper retry)

  Could this be a result of too many dumpers running? I've got maxdumps at 25 and inparallel set to 32. The UDP port range is only 840-860, which means I have at least 5 too few ports to bind to, right? Could the cause of these errors be that I'm simply trying to do too many dumps with too few ports? If so, then either increasing the number of ports or decreasing the number of simultaneous dumps ought to solve the problem, right?

Yes: increase reserved-udp-port or reserved-tcp-port in amanda.conf, or decrease maxdumps.
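The arithmetic behind Paul's question, as a quick shell check (numbers taken from his post; whether inparallel or maxdumps is the binding limit depends on how your Amanda version allocates ports, so treat the shortfall as an estimate):

```shell
# How many reserved UDP ports are available vs. dumpers configured?
first=840
last=860
inparallel=32

ports=$((last - first + 1))
echo "reserved ports: $ports"              # 840-860 inclusive = 21 ports
echo "dumpers (inparallel): $inparallel"
echo "shortfall: $((inparallel - ports))"  # 11 short if every dumper needs a port
```

With maxdumps at 25 instead, the shortfall is 4, which matches the "at least 5 too few" ballpark in the post.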
RE: amrestore
After creating an amanda-client.conf in /etc/amanda I get the following error:

    /opt/amanda/server/sbin/amrecover Full
    AMRECOVER Version 2.5.2p1. Contacting server on prdapp16-qa-master ...
    NAK: amindexd: invalid service, add 'amindexd' as argument to amandad

    # cat /etc/amanda/amanda-client.conf
    conf Full
    index_server prdapp16-qa-master
    tape_server prdapp16-qa-master

Client inetd.conf:

    amanda dgram udp wait amanda /opt/amanda/client/libexec/amandad amandad

Server:

    cat /home/amanda/.amandahosts
    prdapp16.transora.com amanda amdump
    prdapp16.transora.com root amindexd amidxtaped
    qaapp01-bkup.1sync.org root amindexd amidxtaped
    qaapp01-bkup.1sync.org root amandaidx

Server inetd.conf:

    amanda dgram udp wait amanda /opt/amanda/client/libexec/amandad amandad # amanda
    amandaidx stream tcp nowait amanda /opt/amanda/client/libexec/amindexd amindexd # amanda
    amidxtape stream tcp nowait amanda /opt/amanda/client/libexec/amidxtaped amidxtaped # Amanda

Any thoughts?

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Krahn, Anderson
Sent: Wednesday, October 17, 2007 2:07 PM
To: amanda-users@amanda.org
Subject: amrestore

After running an amdump, I was trying to get amrecover to work on one of the clients. On the server prdapp16 I modified the .amandahosts file:

    cat /home/amanda/.amandahosts
    prdapp16.transora.com root amindexd amidxtaped
    qaapp01-bkup.1sync.org root amindexd amidxtaped

On the client I ran amrecover inside the /var directory:

    [/var]# /opt/amanda/server/sbin/amrecover Full
    AMRECOVER Version 2.5.2p1. Contacting server on egvmgmt5001 ...
    [request failed: timeout waiting for ACK]

The strange thing is that it is attempting to contact another client and not the master backup server. Is there a variable that needs to be set on the client side to change the server that amrecover will connect to?

Thanks
Re: amrestore
Krahn, Anderson wrote:

  After creating an amanda-client.conf in /etc/amanda I get the following error:

      /opt/amanda/server/sbin/amrecover Full
      AMRECOVER Version 2.5.2p1. Contacting server on prdapp16-qa-master ...
      NAK: amindexd: invalid service, add 'amindexd' as argument to amandad

      # cat /etc/amanda/amanda-client.conf
      conf Full
      index_server prdapp16-qa-master
      tape_server prdapp16-qa-master

  Client inetd.conf:

      amanda dgram udp wait amanda /opt/amanda/client/libexec/amandad amandad

  Server:

      cat /home/amanda/.amandahosts
      prdapp16.transora.com amanda amdump
      prdapp16.transora.com root amindexd amidxtaped
      qaapp01-bkup.1sync.org root amindexd amidxtaped
      qaapp01-bkup.1sync.org root amandaidx

  Server inetd.conf:

      amanda dgram udp wait amanda /opt/amanda/client/libexec/amandad amandad # amanda
      amandaidx stream tcp nowait amanda /opt/amanda/client/libexec/amindexd amindexd # amanda
      amidxtape stream tcp nowait amanda /opt/amanda/client/libexec/amidxtaped amidxtaped # Amanda

  Any thoughts?

Change the server inetd.conf to only one line:

    amanda dgram udp wait amanda /opt/amanda/client/libexec/amandad amandad amdump amindexd amidxtaped

The amandaidx and amidxtape lines are used by older amrecover only.

Jean-Louis

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Krahn, Anderson
Sent: Wednesday, October 17, 2007 2:07 PM
To: amanda-users@amanda.org
Subject: amrestore

  After running an amdump, I was trying to get amrecover to work on one of the clients. On the server prdapp16 I modified the .amandahosts file:

      cat /home/amanda/.amandahosts
      prdapp16.transora.com root amindexd amidxtaped
      qaapp01-bkup.1sync.org root amindexd amidxtaped

  On the client I ran amrecover inside the /var directory:

      [/var]# /opt/amanda/server/sbin/amrecover Full
      AMRECOVER Version 2.5.2p1. Contacting server on egvmgmt5001 ...
      [request failed: timeout waiting for ACK]

  The strange thing is that it is attempting to contact another client and not the master backup server. Is there a variable that needs to be set on the client side to change the server that amrecover will connect to?

  Thanks
Re: backup/recover using tar and hard links
Hi,

  I'm using amanda 2.4.5 and a supported gnutar on Fedora. One of the servers we're backing up is a Cyrus IMAP server with a lot of mailboxes. Cyrus makes hard links when the same message is sent to multiple people (when they're on the same cyrus partition).

That sounds like strange behaviour to me: every time a user reads the message, the file is modified (a "Status: R" header is added), so the hard link would be broken for that user.

  I'm trying to recover the contents of a mailbox, but it contains some hard links to mail messages in other mailboxes. Some of those other mailboxes (that contained the actual file) have been removed (deleted) in the past few months. The amrecover (tar actually) program complains that it cannot hard link to the file because the other mailbox/file doesn't exist: tar: ./user/student5/626.: Cannot hard link to `./user/student2/555.': No such file or directory.

That also sounds weird to me: a hard link is a single file shared among multiple directories. The file does not reside inside one directory and get linked from the others; it is the same file under different names.

  I'm now facing two issues: - how can I (easily) recover the rest (I could make dummy directories/files, but is there an easier way)?

Short answer: if you have enough temporary disk space, restore all the mailboxes and you should have no more missing-link problems. Then you should be able to move over only the mailbox of that specific user.

Best regards,
Olivier
Re: backup/recover using tar and hard links
On 10/17/07, Olivier Nicole [EMAIL PROTECTED] wrote:

  That sounds like strange behaviour to me: every time a user reads the message, the file is modified (Status: R added), so the hard link is broken for that user.

Olivier -- that change is in the index, which Cyrus imapd keeps separate from the messages themselves. It really does do this duplicate elimination. IIRC, it also does duplicate elimination by lazily hashing files to look for duplicates that the MTA didn't know about (e.g., the same spam delivered from different bots in a botherd).

  I'm trying to recover the contents of a mailbox, but it contains some hard links to mail messages in other mailboxes. Some of those other mailboxes (that contained the actual file) have been removed (deleted) in the past few months. The amrecover (tar actually) program complains that it cannot hard link to the file because the other mailbox/file doesn't exist: tar: ./user/student5/626.: Cannot hard link to `./user/student2/555.': No such file or directory.

  That also sounds weird to me: a hard link is a single file shared among multiple directories; the file does not reside inside one directory and get linked from the others, it is the same file under different names.

This sounds weird to me, too, and may be a bug in GNU tar.
Here's a test case (on my Mac desktop with tar 1.13.25):

    erdos:~/tmp dustin$ mkdir files
    erdos:~/tmp dustin$ echo file1 > files/file1
    erdos:~/tmp dustin$ ln files/file1 files/file2
    erdos:~/tmp dustin$ ls -li files/   # verify both have the same inode
    total 16
    36261200 -rw-r--r--  2 dustin  dustin  6 17 Oct 21:38 file1
    36261200 -rw-r--r--  2 dustin  dustin  6 17 Oct 21:38 file2
    erdos:~/tmp dustin$ tar -cf files.tar files
    erdos:~/tmp dustin$ rm files/file*
    erdos:~/tmp dustin$ tar -xf files.tar files/file2
    tar: files/file2: Cannot hard link to `files/file1': (null)
    tar: Error exit delayed from previous errors

The same test on one of my Gentoo Linux boxes, with tar 1.18, gives:

    tar: files/file2: Cannot hard link to `files/file1': No such file or directory

I'm guessing that tar notices inodes it has seen before and stores them, effectively, as symlinks in the tarfile. When tar finds the information about file2, it tries to make a hard link, but isn't smart enough to realize it hasn't extracted the target file. Dennis, do you want to follow up on bug-tar?

  Short answer: if you have enough temporary disk space, restore all the mailboxes and you should have no more missing-link problems. Then you should be able to move only the mailbox of that specific user.

Precisely. The other option is to repeat the recovery, adding the file that isn't found at each step; in this case, you'd add ./user/student2/555. and retry the extraction. Obviously, this isn't ideal.

I'm surprised nobody else has been snagged by this before. I've been using Amanda on Cyrus mailboxes for years, with lots of recoveries. I guess I've just gotten lucky.

Dustin

--
Storage Software Engineer
http://www.zmanda.com
Re: backup/recover using tar and hard links
On Wed, 17 Oct 2007, Dustin J. Mitchell wrote:

  Obviously, this isn't ideal. I'm surprised nobody else has been snagged by this before. I've been using Amanda on Cyrus mailboxes for years, with lots of recoveries. I guess I've just gotten lucky.

We have no experience with cyrus yet, but have been talking about it for a while. After reading this I forwarded it to a co-worker, who wrote back this: Don't let cyrus do it. From http://cyrusimap.web.cmu.edu/imapd/overview.html#singleinstance --

  Single Instance Store

  If a delivery attempt mentions several recipients (only possible if the MTA is speaking LMTP to lmtpd), the server attempts to store as few copies of a message as possible. It will store one copy of the message per partition, and create hard links for all other recipients of the message. Single instance store can be turned off by using the singleinstancestore flag.

--

Obviously this won't help the person with the failing restore, but it's something to consider for everyone else. Who really is short enough on disk space in this day and age that they think storing duplicate e-mail with hard links is still a good idea?

(And just btw, we had lately been losing the battle against spam using spamassassin, even though it was pretty effective when first deployed. We recently added graylisting and are seeing very few spams make it through now.)

-Mitch
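For anyone wanting to act on Mitch's suggestion, the flag named in the quoted Cyrus documentation goes in imapd.conf; a sketch (the option name comes from the docs quoted above, but the exact boolean syntax may vary by Cyrus version, so check your imapd.conf documentation before deploying):

```
# /etc/imapd.conf -- disable single instance store so each mailbox gets
# its own copy of every message (no cross-mailbox hard links to trip tar)
singleinstancestore: 0
```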