Re: Making TSM twin-center compliant

2006-09-01 Thread Kelly Martin

On 9/1/06, Loon, E.J. van - SPLXM <[EMAIL PROTECTED]> wrote:

Hi Allen!
Thank you very much for your reaction!
Correct me if I'm wrong, but as far as I know, you cannot back up to a
copypool. Am I wrong here?
I cannot wait for the restore storagepool to finish... This will take
days and I will have to be able to make client backups immediately.
If I was able to convert a copypool into a primary pool, I would be able
to back up clients right away. OK, I would lose my copypool, but in case
of a disaster I can live with that for several days. In the meantime I
can have my vendor bring in (rental) stuff to rebuild the copypool.


You can back up to new scratch volumes in the primary storage pool before
you have fully restored the (destroyed) old volumes in it.  All you need is
available scratch tapes.
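
As a rough sketch of that sequence from the admin command line (the pool
name TAPEPOOL is only an example):

   /* mark the lost volumes so TSM knows their primary copies are gone */
   update volume * access=destroyed wherestgpool=TAPEPOOL
   /* start recreating the primary copies from the copy pool; this can run for days */
   restore stgpool TAPEPOOL

New client backups will simply take scratch volumes in TAPEPOOL while the
restore is still running.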

Kelly


Re: Processes calling for scratch instead of an existing FILLING tape

2006-02-07 Thread Kelly Martin
Is the copy storage pool in question collocated?
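
If you need to check, something like this shows the setting (the pool name
is only a placeholder); look for the "Collocate?" line in the output:

   query stgpool OFFSITE_COPY format=detailed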

Kelly

On 2/7/06, William Boyer <[EMAIL PROTECTED]> wrote:
> I've noticed that processes will fill a tape and then instead of calling for 
> another FILLING tape in the storage pool, it will mount
> a scratch tape. I just saw this again this morning...I had a BA STG running 
> from onsite tape to offsite tape. There were 4 tapes in
> the offsite pool...2 filling.. 1 full..and a filling tape being written to. 
> The current tape filled up and instead of mounting one
> of the other 2 filling tapes, a new scratch tape was called for. And one of 
> the filling tapes was only 1.4% used!
>
> I searched the TSM site and the archives, but I must not have the correct 
> search words, 'cause I can't find any hits. My TSM server is
> 5.3.2.1 on AIX 5.3 ML2.
>
> Bill Boyer
> "Growing old is mandatory, growing up is optional" - ??
>


Re: Backup larger than data on server

2006-01-18 Thread Kelly Martin
On 1/18/06, Paul Dudley <[EMAIL PROTECTED]> wrote:
> We have TSM version 5.2. The client server I am referring to here is
> Redhat linux.
>
> In the /var/log/tsm/dsmsched.log it lists
>
> Total number of bytes transferred:       79.18 GB
>
> However there is only around 60 GB of data on the server (see df below)
>
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda1             9.7G  2.1G  7.1G  23% /
> /dev/sdb2             104G   57G   43G  58% /opt
>
> Can someone explain why the number of bytes transferred exceeds the
> amount of data on the server?

I think retransmits caused by files changing during the backup (a common
problem when backing up Linux boxes) are counted each time, even though
only the last such transfer is actually kept by the server.  It's been a
while since I ran the TSM client on a Linux box, though, so I can't verify
this, at least not yet.
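
If that turns out to be the cause, one thing worth reviewing is the
client's CHANGINGRETRIES option in dsm.sys, which limits how many times a
file that changes during backup is retried (a sketch only; the stanza name
and value here are arbitrary):

   * dsm.sys server stanza (excerpt)
   SErvername  tsmserver
      CHANGINGRetries  2

The copy group's SERIALIZATION setting (e.g. SHRSTATIC vs. SHRDYNAMIC)
controls whether a file that is still changing on the last retry is
accepted at all.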

Kelly


Re: IBM 3581 Autoloader question

2006-01-09 Thread Kelly Martin
On 1/9/06, Tomáš Hrouda <[EMAIL PROTECTED]> wrote:
> I need advice on using an IBM 3581 7-slot Autoloader. As far as I know it has
> no barcode reader (in my case), though I think one is optional. Is it possible
> to use it as a "library" in TSM (I mean, mount any volume I need) or does it
> only work in sequential (cycle) mode? In general, does this autoloader present
> a changer device to the operating system or not?

I used a 3581 with TSM 5.1.5 on Windows 2000 during a DR drill a
couple of years ago.  It worked adequately well once we got past setting
up the element configuration.  It works just like any other automated
library, although if you do not have a barcode reader it has to load
each tape into the drive to read the label, which makes it rather slow
to use.  I would not recommend it for anything other than low-volume,
specialized, or emergency use.
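
For reference, the definitions look roughly like this (a sketch from
memory; the server name, device special files, and the drive element
address are examples only and must come from your own configuration):

   define library 3581lib libtype=scsi
   define path server1 3581lib srctype=server desttype=library device=lb0.1.0.2
   define drive 3581lib drv1 element=116
   define path server1 drv1 srctype=server desttype=drive library=3581lib device=mt0.0.0.2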

Kelly


Re: 3584 logical library

2005-12-28 Thread Kelly Martin
On 12/28/05, Allen S. Rout <[EMAIL PROTECTED]> wrote:
> I think a checkin with search=bulk will do this.
>
> Remember you'll have to REPLY [req number]

Specifying WAITTIME=0 will obviate the need to reply to the request.
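
For example (the library name is a placeholder):

   checkin libvolume 3584LIB search=bulk status=scratch checklabel=barcode waittime=0

With WAITTIME=0 the process proceeds without waiting for a REPLY.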

Kelly


Re: Odd error using move nodedata

2005-12-01 Thread Kelly Martin
Thank you, I didn't think about the possibility of a restartable
restore.  That fixed it.
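
For anyone hitting the same ANR1171W, the sequence that clears it looks
roughly like this (the session number reported by QUERY RESTORE is
negative; -1 and the pool name below are examples only):

   query restore
   cancel restore -1
   move nodedata washington fromstgpool=tapepool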

Kelly

On 12/1/05, Thorneycroft, Doug <[EMAIL PROTECTED]> wrote:
> You might have a restartable restore session sitting out there.
> Issue q restore and see if there is a restartable restore session
> (the session number will be negative).
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
> Kelly Martin
> Sent: Thursday, December 01, 2005 1:08 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Odd error using move nodedata
>
>
> I'm attempting to run a MOVE NODEDATA to coalesce a node's data onto
> as few tapes as possible.  However, the command fails:
> 12/01/2005 15:04:49  ANR1171W Unable to move files associated with node
>   WASHINGTON, filespace \\washington\d$ fsId 3 on volume
>   A5L2 due to restore in progress. (SESSION: 1,
>   PROCESS: 12)
>
> No restore is in process.  TSM 5.3.2 on Windows 2003.
>
> Kelly
>


Odd error using move nodedata

2005-12-01 Thread Kelly Martin
I'm attempting to run a MOVE NODEDATA to coalesce a node's data onto
as few tapes as possible.  However, the command fails:
12/01/2005 15:04:49  ANR1171W Unable to move files associated with node
  WASHINGTON, filespace \\washington\d$ fsId 3 on volume
  A5L2 due to restore in progress. (SESSION: 1,
  PROCESS: 12)

No restore is in process.  TSM 5.3.2 on Windows 2003.

Kelly


Re: Tape Housecleaning

2005-07-05 Thread Kelly Martin
On 7/5/05, Richard Mochnaczewski <[EMAIL PROTECTED]> wrote:
> Hi Everybody,
> 
> Does anyone have any thoughts on reducing the number of tapes TSM uses?
> After auditing our backups and finding that they are "clean" (i.e. backing
> up only what is needed, the correct number of versions is being applied to
> files, and reclamation is in use), if there is still a need to constantly
> buy tapes, where would one look to save on tape usage?

Carefully consider the reuse delays on your sequential storage pools. 
The larger the reuse delay, the more tapes you will have in "pending"
state.

Reclaims of offsite copy pools consume tapes as well, because the
reclaim has to occur to a scratch tape rather than to another tape in
the same tape pool.  Manage these reclaims carefully.
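
To review and adjust both (the pool name and values below are only examples):

   /* check the Reuse Delay and Reclamation Threshold values */
   query stgpool OFFSITE_COPY format=detailed
   /* days a volume sits in pending before returning to scratch */
   update stgpool OFFSITE_COPY reusedelay=3
   /* lowering the threshold starts reclaims, which need scratch tapes for an offsite pool */
   update stgpool OFFSITE_COPY reclaim=60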

Kelly


Re: Restore Requesting Copypool tape first?

2005-04-18 Thread Kelly Martin
On 4/18/05, Andy Carlson <[EMAIL PROTECTED]> wrote:
> 
> If you delete the primary volume, doesn't the copy pool data get
> released also?


Not in my experience. To get that to happen you have to delete the node or 
the filespace. There is no direct association between a primary volume and a 
copy pool volume.

Kelly


Re: Archive performance problem from TSM Server itself

2005-04-12 Thread Kelly Martin
On Apr 12, 2005 1:54 AM, Eric Winters <[EMAIL PROTECTED]> wrote:
> 
> Dear All,
> 
> TSM Server 5.2.4 on AIX 5.2 with 6 GB memory. (p690 lpar)
> 3592 tape drives fibre attached.
> TSM Backup/Archive client 5.2
> 
> I have a large number of 2 GB files to archive directly to tape - the 
> files
> are within filesystems on a SAN storage device. If I mount the filesystems
> locally on my TSM Server and archive them, I get a rather disappointing 22
> MB/s when archiving to a single tape drive.
> 
> If I archive the same data in the same filesystems but by mounting them
> onto a remote AIX server and running the 'dsmc archive' command and send
> them over the Gb lan, then I get a far more satisfactory 37 MB/s (again,
> archiving to a single tape drive).
> 
> Why should performance archiving over the network be better than archiving
> locally? My remote server is similarly configured to my TSM Server - same
> version of AIX, backup/archive client, similar memory etc.


Is the tape drive on the same HBA as the SAN?
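
A quick way to check on AIX (a sketch; the device names here are examples):

   lsdev -Cc tape
   lsdev -Cl rmt0 -F parent
   lsdev -Cl fscsi0 -F parent

Walking the parent chain of the rmt device up to the fcs adapter and
comparing it with the adapters your SAN disks use will show whether they
share an HBA.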

Kelly


Re: Help me convincing IBM please!!!!

2005-04-07 Thread Kelly Martin
On Apr 7, 2005 4:57 AM, Loon, E.J. van - SPLXM <[EMAIL PROTECTED]> 
wrote: 
> 
> I'm begging for your help with this. I would like to ask you to issue the
> following SQL statements on your TSM server(s) and send the output 
> directly
> to me: [EMAIL PROTECTED]:
> 
> select activity, cast ((end_time) as date) as "Date" ,(examined/cast
> ((end_time-start_time) seconds as decimal(18,13)) *3600) "Objects Examined
> Up/Hr" from summary where activity='EXPIRATION' and days (end_time)
> -days(start_time)=0
> 
> select pct_utilized, avail_space_mb from db
> 
> The first statement calculates your expiration performance (objects/hour)
> and the second one lists your database size and utilization.
> Thank you VERY much for sending me your statistics in advance, I REALLY
> appreciate it!!!

ACTIVITY   Date       Objects Examined Up/Hr
---------- ---------- ----------------------
EXPIRATION 2005-03-08 1285200
EXPIRATION 2005-03-09 1227600
EXPIRATION 2005-03-10 3394800
EXPIRATION 2005-03-11 3394800
EXPIRATION 2005-03-13 2041200
EXPIRATION 2005-03-14 3978000
EXPIRATION 2005-03-15 2592000
EXPIRATION 2005-03-16 2530800
EXPIRATION 2005-03-17 2037600
EXPIRATION 2005-03-18 2710800
EXPIRATION 2005-03-20 2030400
EXPIRATION 2005-03-21 3898800
EXPIRATION 2005-03-22 3258000
EXPIRATION 2005-03-23 3362400
EXPIRATION 2005-03-23 4957200
EXPIRATION 2005-03-24 3524400
EXPIRATION 2005-03-25 3171600
EXPIRATION 2005-03-27 2098800
EXPIRATION 2005-03-28 4212000
EXPIRATION 2005-03-29 2217600
EXPIRATION 2005-03-30 3204000
EXPIRATION 2005-03-31 3088800
EXPIRATION 2005-04-01 4039200
EXPIRATION 2005-04-03 2692800
EXPIRATION 2005-04-04 4024800
EXPIRATION 2005-04-05 2260800
EXPIRATION 2005-04-06 3225600
EXPIRATION 2005-04-07 3405600
PCT_UTILIZED AVAIL_SPACE_MB
------------ --------------
        68.5          18716

I use a database trigger to keep the utilization percentage below 70. This
is a relatively young installation at a relatively small site.
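
The trigger itself is just the standard database space trigger; a sketch
(the percentages and expansion prefix are examples only):

   define spacetrigger db fullpct=70 spaceexpansion=10 expansionprefix=/tsm/db/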


Re: Preparing a tape storage pool volume for reuse

2005-03-30 Thread Kelly Martin
On Mon, 28 Mar 2005 12:11:38 -0500, Rob Berendt <[EMAIL PROTECTED]> wrote:
> Judging by one of your other emails, is reclamation a process to merge
> partial tapes together?  If so, that really doesn't interest us.  It
> doesn't fit into our backup strategy.

May I suggest that perhaps your backup strategy doesn't fit the
operational model of TSM?  You really can't force TSM into an
old-fashioned full-and-incremental strategy, at least not well, and if
you're dead-set on trying to do that, then perhaps you should
reconsider your choice to use TSM.

Kelly


Re: ANR8963E - AGAIN

2005-03-25 Thread Kelly Martin
On Thu, 24 Mar 2005 08:05:58 -0500, William Boyer <[EMAIL PROTECTED]> wrote:
> I've had issues with replacing LTO drives in a 3583 library. Unless I
> shut down the Windows server, power cycled the library, and brought Windows
> back up, I would get an error trying to define the drive. It would tell me
> that the ELEMENT= was not found.
>
> Have you verified that the MTx.x.x.x device is still the correct one? I've
> had Winders recognize the replaced drive as a NEW drive and create a
> different name.
>
> Also verify that the correct device driver is loading for the drive. If this
> is Windows 2000, make sure that the drive is being controlled by the TSM
> device driver. For Windows 2003, make sure that the tape drive is using the
> TSM device driver. I've also seen where Windows thinks it's a new or replaced
> device, assigning the same device name, but not retaining or reloading the
> device driver for it.
>
> Bill Boyer
> "Some days you're the bug, some days you're the windshield" - ??

I had one instance where I swapped a drive on a 3583 and had Windows
renumber *all* the drives, not just the one swapped; this resulted in
TSM requesting tape mounts into drive 2 but then reading from drive 1
(for example) with all sorts of bizarre consequences.  I had to delete
all the drives from TSM, delete all the devices from Windows, reboot
Windows, let Windows rediscover the drives, and then add them back to
TSM before it would work correctly.
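
In TSM terms the cleanup was roughly the following (library, drive, device,
and element values are examples only):

   delete path server1 drv1 srctype=server desttype=drive library=3583lib
   delete drive 3583lib drv1
   /* reboot Windows, let it rediscover the drives, verify the mtX.X.X.X names */
   define drive 3583lib drv1 element=256
   define path server1 drv1 srctype=server desttype=drive library=3583lib device=mt0.0.0.3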

Kelly


Re: Sql query for file listings.

2005-03-18 Thread Kelly Martin
On Fri, 18 Mar 2005 16:05:23 -0700, Ben Bullock <[EMAIL PROTECTED]> wrote:
> Hmm,
> Couldn't find anything in the archives that would help me on
> this. SQL scripting is my weakness.
>
> What I need to do is get a listing of all the files from a node
> that are bound to a certain management class. This is not a problem,
> however I'm not getting it in a format I can use easily.
>
> The simple SQL query looks something like this:
>
> select NODE_NAME, FILESPACE_NAME, HL_NAME ,LL_NAME from archives where
> class_name like 'DBDUMP_8DAY_MC' and NODE_NAME like 'NODE1'
>
> The problem is that on path names longer than 18 characters, it
> wraps the output and makes it difficult to use/import/etc.
>
> I've put "set sqldisplaymode wide" on the front, but then the
> output is in paragraph form, which also makes it difficult to use in an
> Excel spreadsheet (the way the customer would like to use/sort the
> listings).
>
> What I would like is output with the whole path to the files
> listed, like all together as a path. Like this:
>
> NODE1   /DUMP/p05r_1/RDB_BOSQLPROD05R.ISQL
> NODE1   /DUMP/p05r_1/master.dump
> NODE1   /DUMP/p05r_1/sa.dump
> NODE1   /DUMP/p05r_2/model.dump
>
> Anyone a SQL genius out there?

Just as a thought: have you considered using the TSM ODBC driver that
is available as a component with the Windows client?  You could then
use Excel's "get external data" function to pull data directly out of
the TSM SQL engine.  I use this frequently for reporting purposes.
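
Alternatively, if your server's SQL level supports the || concatenation
operator (I have not verified that on 5.x), you may be able to glue the
path together on the server side, e.g.:

   select node_name, filespace_name || hl_name || ll_name as file_path
     from archives
    where class_name='DBDUMP_8DAY_MC' and node_name='NODE1'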

Kelly


Re: Exchange question - for future reference

2005-03-17 Thread Kelly Martin
> These are conservative stats. The customer I'm adminning TSM for right
> now got one of last night's information store backups (of 82.8GB) in
> 8440 seconds--almost 10MB/sec--on a 10/100 network.

My Exchange backup last night was 65.9GB in 1537 seconds (that's about
40 MB/s over a 1Gb/s network with LTO G2 drives); that's written
directly to a sequential tape pool without an intervening random access
pool.  We back up Exchange direct-to-tape because the backup objects are
very large (our largest store is 57GB) and I don't want them clobbering
my random access pool.
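
The direct-to-tape part is nothing more exotic than pointing the backup
copy group for the Exchange node's management class at the tape pool; a
sketch (every name here is a placeholder):

   define mgmtclass EXCH_DOM STANDARD EXCH_TAPE
   define copygroup EXCH_DOM STANDARD EXCH_TAPE type=backup destination=LTOPOOL
   validate policyset EXCH_DOM STANDARD
   activate policyset EXCH_DOM STANDARD
   /* bind the TDP node's backups to EXCH_TAPE with an INCLUDE statement on the client */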

> I would not bother with differential backups. One full backup per day,
> with perhaps an incremental backup 12 hours after the full, would
> guarantee a maximum of 12 hours of data loss in the event of a
> catastrophe and subsequent restore. (Of course, you could do a daily
> full, and incrementals at 8 and 16 hours after the backup, for even
> greater minimization of data loss.)

I would agree with that as well.  We do a single full backup once a
day.  The only reason I would use differentials is if I were backing
up over a slow link.

Kelly