Re: Intermittent TSM Server - Database stops

2003-12-16 Thread Pole, Stephen
Richard, my sincerest apologies to you and the group. I consider myself
scolded! Twice!!

TSM Version 5.1.0 is the server level.

Taking the server up to, say, 5.1.6 may well fix it, Richard. Unfortunately, we
live in a far from perfect world, and we have SAN agents running that need to
go up at the same time. These agents are spread across a great many nodes, so
those issues also need to be considered.

UNLOADDB failed. This is an undocumented feature in 5.1.0, so an unload and
reload is not on.

It also seems, on closer inspection, that simultaneous writes to different
storage pools may be causing an issue. (We expect this is fixed in 5.2.) Once
again we need to consider the SAN agents on the other machines. These need to
come up to version 5.2 at the same time, and the nodes need a reboot if I'm
not mistaken. This is not a simple task in our environment and will need to be
phased in over time. The upside may well be the phase-in process itself: we
could dedicate a new 5.2 server and move the nodes across as we upgrade each
node. What we need to consider is the method of eventually marrying the two
machines after all is done!

Meantime, we'll keep a close eye on database and recovery log consumption
until we can move up to Version 5.2.
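
For what it's worth, the watching can be as simple as a cron job that captures
the numbers with the administrative client. A minimal sketch (the admin ID,
password, and log file location are made up; adjust for your own setup):

   #!/bin/sh
   # Append database and recovery log utilisation to a log file so a
   # runaway consumer can be spotted around the time of the next crash.
   DSMADMC="dsmadmc -id=monitor -password=secret"
   date                      >> /var/log/tsm_usage.log
   $DSMADMC "query db f=d"   >> /var/log/tsm_usage.log
   $DSMADMC "query log f=d"  >> /var/log/tsm_usage.log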

We hope there are no gotchas there. If there are gotchas, then I'd sure
appreciate a posting.

Thanks again for the scolding.

Stephen :-)





-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]
Sent: 16 December, 2003 9:33 PM
To: [EMAIL PROTECTED]
Subject: Re: Intermittent TSM Server - Database stops


...
Since then, TSM Server stops at random times. (no messages in the actlog)

The *SM server has traditionally been programmed to produce an error log in
its server directory when it crashes.  Have a look for such.

It could well be that simply boosting your maintenance level will resolve the
issue.

A server with such issues needs to be carefully monitored.  In particular,
watch Database and Recovery Log consumption as the server operates through
the day, and watch for space constraints and anomalies.  Your surviving
Activity Log from the crash vicinity may record the start of some
highly-consumptive process which may need investigation.

  Richard Sims, BU


Re: Intermittent TSM Server - Database stops

2003-12-15 Thread Pole, Stephen
Hi all,

Sorry to trouble all of you.

Here is an event that happened last week during normal operations, while a
large query was being run on the TSM database.

12/11/03 01:08:13 ANR2561I Schedule prompter contacting ICMC02RPA (session
                  4174) to start a scheduled operation.
12/11/03 01:08:32 ANR2017I Administrator OPS issued command: FETCH NEXT 50
12/11/03 01:08:37 ANR2958E SQL temporary table storage has been exhausted.

The explanation seems pretty self-explanatory.

The server was restarted and all seemed OK for about 12 hours.

We have extended the database by adding another database volume and refrained
from performing any major SQL queries.

Since then, the TSM server stops at random times (no messages in the actlog).
The server just falls over and has to be restarted.

The only clue that something has happened is in errpt -a:

LABEL:  CORE_DUMP
IDENTIFIER: 1F0B7B49

Date/Time:   Mon Dec 15 23:33:11 WAUS
Sequence Number: 40038
Machine Id:  D62A4C00
Node Id: rsfm014
Class:   S
Type:PERM
Resource Name:   SYSPROC

Description
SOFTWARE PROGRAM ABNORMALLY TERMINATED

Probable Causes
SOFTWARE PROGRAM

User Causes
USER GENERATED SIGNAL

Recommended Actions
CORRECT THEN RETRY

Failure Causes
SOFTWARE PROGRAM

Recommended Actions
RERUN THE APPLICATION PROGRAM
IF PROBLEM PERSISTS THEN DO THE FOLLOWING
CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
SIGNAL NUMBER
   6
USER'S PROCESS ID:
   44974
FILE SYSTEM SERIAL NUMBER
   2
INODE NUMBER
  215060
PROCESSOR ID
   0
PROGRAM NAME
dsmserv
ADDITIONAL INFORMATION
pthread_k A8
??
_p_raise 64
raise 34
abort B8
AbortServ 80
TrapHandl 13C
??
??

Symptom Data
REPORTABLE
1
INTERNAL ERROR
0
SYMPTOM CODE
PCSS/SPI2 FLDS/dsmserv SIG/6 FLDS/AbortServ VALU/80

The end result is that TSM has gone very flaky and falls over at odd times.

Has anyone encountered this before? If so, what are my options?

We are looking at doing an UNLOADDB and then a reload, etc. (if I can get my
head around the procedure).

Any help would be greatly appreciated

Thanks in advance TSM'ers!

Cheers



Stephen Pole
WA Dept of Health

[EMAIL PROTECTED]


Intermittent TSM Server - Database stops

2003-12-15 Thread Pole, Stephen
TSM'ers

This being the first time attempting such a process, here is the plan:

1. Perform a database backup.
2. Copy the following to another location (another machine):
a) dsmserv.opt
b) dsmserv.dsk
c) volhist (volume history file)
d) devconfig (device config file)

3. Set the server to start in quiet mode by modifying the server's dsmserv.opt
(a sample of the changed options is sketched after this plan):
a) change the client port to 15000 (stops clients from logging on)
b) set EXPINTERVAL to 0, as this prevents inventory expiration from
starting immediately after a server startup
c) add NOMIGRRECL to prevent TSM from starting space reclamation or
migration
d) set DISABLESCHEDS to YES (this prevents any TSM schedules from running)

4. Edit the devconfig file to contain a FILE device class on which to store
the unloaded database:
define devclass fileclass devtype=file mountlimit=5 maxcap=5G
dir=/tsmtemp

5. From the server directory, run DSMSERV UNLOADDB DEVclass=fileclass.

We will then perform the following steps after a successful UNLOADDB. We must
ensure that the files to be created for the database and recovery log volumes
do not already exist on the system; otherwise the LOADFORMAT process will fail.

6. Create a file called /usr/temp/LOGVOL.TXT. Edit the file to contain the
following:
/var/tsm/tsmlog/log01.dsm 512
/var/tsm/tsmlog/log02.dsm 512
/var/tsm/tsmlog/log03.dsm 512
/var/tsm/tsmlog/log04.dsm 512

7. Create a file called /usr/temp/DBVOL.TXT. Edit the file to contain the
following:
/var/tsm/tsmdb1/db01.dsm 5000
/var/tsm/tsmdb1/db02.dsm 5000
/var/tsm/tsmdb1/db03.dsm 5000
/var/tsm/tsmdb1/db04.dsm 5000

8. From the server directory, issue DSMSERV LOADFORMAT 4
FILE:/usr/temp/LOGVOL.TXT 4 FILE:/usr/temp/DBVOL.TXT

9. DSMSERV LOADDB DEVclass=fileclass
VOLumenames=/tsmtemp/volname1,/tsmtemp/volname2,
/tsmtemp/volname3,/tsmtemp/volname4

If LOADDB throws up any errors, then run AUDITDB,
e.g. DSMSERV AUDITDB FIX=YES DETAIL=YES FILE=/usr/temp/AUDITDB.TXT

10. After finishing the above, restore the original devconfig file to its
original settings.

11. Start the TSM server from the command line.

12. Perform a full DB backup.
13. HALT the TSM server.
14. Restore the original DSMSERV.OPT file.
15. Define mirror volumes (db and log vols, using the dsmfmt command).

Define the group of mirrored volumes

16. Restart the TSM server.
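
For reference, the step 3 changes to dsmserv.opt boil down to something like
the following (a sketch only - the option names assume a standard 5.1 server
options file, and the original values should be noted before editing):

   * --- temporary settings for the unload/reload window ---
   TCPPORT        15000    * non-standard port, so clients cannot log on
   EXPINTERVAL    0        * no automatic inventory expiration at startup
   NOMIGRRECL              * no migration or space reclamation
   DISABLESCHEDS  YES      * no central scheduler activity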

Any comments or gotchas would be greatly appreciated.

Thanks in advance


Stephen


-
- Original Message -
From: Pole, Stephen [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, December 16, 2003 11:36 AM
Subject: Re: Intermittent TSM Server - Database stops



Re: ODBC driver

2003-12-11 Thread Pole, Stephen
Hi Peter,

Try this. NOTE it is for V5R1


The installation package is available from the FTP server
  ftp.software.ibm.com.

Directory:


/storage/tivoli-storage-management/maintenance/client/v5r1/Windows/WinNT/v516

Files:

  IP22660_ODBC.EXE          ODBC driver install image
  IP22660_ODBC_README.TXT   ODBC driver README file
  IP22660_ODBC_README.FTP   This README file

  When downloading the files, be sure to download the *.EXE file in BINARY
  mode.

  Before installing the TSM ODBC driver, it is highly recommended that you
  read the ODBC driver README file, which includes install instructions and
  other important information.

  After downloading the files, see the section "Installing the ODBC Driver"
  in the README file for instructions on how to install the driver.
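
For anyone pulling this down from a Unix command line, the session is roughly
as follows (the host, directory, and file names are taken from the listing
above; a browser or FTP GUI works just as well):

   $ ftp ftp.software.ibm.com
   Name: anonymous
   ftp> cd /storage/tivoli-storage-management/maintenance/client/v5r1/Windows/WinNT/v516
   ftp> binary
   ftp> get IP22660_ODBC_README.TXT
   ftp> get IP22660_ODBC.EXE
   ftp> quit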

Hope this has helped you. BTW it works a treat!

Cheers

Stephen Pole


-Original Message-
From: Peter Schrijvers [mailto:[EMAIL PROTECTED]
Sent: 11 December, 2003 4:40 PM
To: [EMAIL PROTECTED]
Subject: ODBC driver


Hello,

I want to install an ODBC driver so I can prepare some reports.

I want to use excel for it (W2K).

Is there anybody who can tell me where I can find this driver?

Many thanks,

Peter Schrijvers
BASF-IT-Services NV


Re: Exchange server backup

2003-12-08 Thread Pole, Stephen
Hi,

I guess what I wanted to say was: YES, you can back up Exchange using TSM,
but you need TDP to do it effectively in a 24 x 365 environment.

As mentioned, the Redbook explains in detail how to implement TDP.
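
From memory, once TDP for Exchange is installed and configured, a full backup
of everything comes down to a one-liner from its command-line interface -
something like the line below (this is recalled from the v2.2 CLI against
Exchange 2000, so please check the redbook for the exact syntax at your level):

   tdpexcc backup * full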

I am sure my learned colleagues here will be more than happy to assist you
further if required.

Best regards


Stephen




-Original Message-
From: Pan, Sanjoy [mailto:[EMAIL PROTECTED]
Sent: 08 December, 2003 12:49 PM
To: [EMAIL PROTECTED]
Subject: Re: Exchange server backup


Hi,
We are not using TDP. Looks like TDP is an excellent solution.
Thanks for the link though.
Best regards,

-Original Message-
From: Pole, Stephen [mailto:[EMAIL PROTECTED]
Sent: Monday, December 08, 2003 1:15 PM
To: [EMAIL PROTECTED]
Subject: Re: Exchange server backup

Hello ,

Looks like you are trying to back up without using TDP, or TDP is
incorrectly configured.

Are you using TDP for Exchange? If not, then read this redbook:


http://www.redbooks.ibm.com/redbooks/pdfs/sg246147.pdf

This IBM Redbook explains how to use Tivoli Data Protection (TDP) for
Microsoft Exchange v.2.2 to perform backups and restores in your Exchange
environment. Tivoli Data Protection for Microsoft Exchange server performs
online backups of Microsoft Exchange server databases to Tivoli Storage
Manager (TSM) storage. We demonstrate how to back up and recover data on
Exchange 5.5 and Exchange 2000 on a single server installation and a
clustered environment. Windows 2000 (Service Pack 1) is used as the
operating system and Exchange 5.5 as well as Exchange 2000. However, we do
not cover backing up the operating system itself.

Version 2.2 provides new functionality as well as support for Exchange 2000.
The new version of Tivoli Data Protection for Exchange supports a very
important TSM feature: automatic expiration and version control by policy.
This frees users from having to explicitly delete backup objects in the
Tivoli Storage Manager server. TDP for Exchange supports LAN-free data
movement. We use TDP for Exchange to perform backups across a traditional
LAN, as well as utilizing TSM LAN-free to support backups across Storage
Area Networks (SANs).

This document is written for Exchange administrators as well as TSM
administrators with a need to understand the issues and considerations
pertinent to utilizing TSM and TDP to back up and restore Microsoft
Exchange


Best Regards


Stephen

-Original Message-
From: Pan, Sanjoy [mailto:[EMAIL PROTECTED]
Sent: 08 December, 2003 11:13 AM
To: [EMAIL PROTECTED]
Subject: Re: Exchange server backup


How do you take an Exchange server backup? I am getting the errors below
every day.
It's a 2K Exchange server and the TSM version is 4.2.4.1 running on
Solaris 2.6.

12/07/03   10:06:14  ANE4987E (Session: 16578, Node: TKYOEXCH1-IPP)
 Error processing
'\\tkyoexch1\e$\Program
Files\Exchsrvr\MDBDATA\E00.log': the object is in
use by another process
12/07/03   09:59:19  ANE4987E (Session: 16578, Node: TKYOEXCH1-IPP)
Error processing '\\tkyoexch1\c$\Documents and
Settings\All Users\Application
Data\Microsoft\Network\Downloader
 \qmgr1.dat': the object is in use by another
process


Best regards,
Sanjoy K Pan


Re: Exchange server backup

2003-12-07 Thread Pole, Stephen
Hello ,

Looks like you are trying to back up without using TDP, or TDP is
incorrectly configured.

Are you using TDP for Exchange? If not, then read this redbook:


http://www.redbooks.ibm.com/redbooks/pdfs/sg246147.pdf

This IBM Redbook explains how to use Tivoli Data Protection (TDP) for
Microsoft Exchange v.2.2 to perform backups and restores in your Exchange
environment. Tivoli Data Protection for Microsoft Exchange server performs
online backups of Microsoft Exchange server databases to Tivoli Storage
Manager (TSM) storage. We demonstrate how to back up and recover data on
Exchange 5.5 and Exchange 2000 on a single server installation and a
clustered environment. Windows 2000 (Service Pack 1) is used as the
operating system and Exchange 5.5 as well as Exchange 2000. However, we do
not cover backing up the operating system itself.

Version 2.2 provides new functionality as well as support for Exchange 2000.
The new version of Tivoli Data Protection for Exchange supports a very
important TSM feature: automatic expiration and version control by policy.
This frees users from having to explicitly delete backup objects in the
Tivoli Storage Manager server. TDP for Exchange supports LAN-free data
movement. We use TDP for Exchange to perform backups across a traditional
LAN, as well as utilizing TSM LAN-free to support backups across Storage
Area Networks (SANs).

This document is written for Exchange administrators as well as TSM
administrators with a need to understand the issues and considerations
pertinent to utilizing TSM and TDP to back up and restore Microsoft Exchange


Best Regards


Stephen

-Original Message-
From: Pan, Sanjoy [mailto:[EMAIL PROTECTED]
Sent: 08 December, 2003 11:13 AM
To: [EMAIL PROTECTED]
Subject: Re: Exchange server backup


How do you take an Exchange server backup? I am getting the errors below
every day.
It's a 2K Exchange server and the TSM version is 4.2.4.1 running on
Solaris 2.6.

12/07/03   10:06:14  ANE4987E (Session: 16578, Node: TKYOEXCH1-IPP)
 Error processing
'\\tkyoexch1\e$\Program
Files\Exchsrvr\MDBDATA\E00.log': the object is in
use by another process
12/07/03   09:59:19  ANE4987E (Session: 16578, Node: TKYOEXCH1-IPP)
Error processing '\\tkyoexch1\c$\Documents and
Settings\All Users\Application
Data\Microsoft\Network\Downloader
 \qmgr1.dat': the object is in use by another
process


Best regards,
Sanjoy K Pan


Re: TDPSQL Restore Problem

2003-10-08 Thread Pole, Stephen
Thank you fellow worker!!

-Original Message-
From: Bock, Kris [mailto:[EMAIL PROTECTED]
Sent: 09 October, 2003 1:11 PM
To: [EMAIL PROTECTED]
Subject: FW: TDPSQL Restore Problem


Komrades,
I have resolved the issue.  I was required to restore from
the MSSQL full backup, then apply the TDP diff.  The ability to restore a
TDP diff without the corresponding TDP full was not obvious due to a slight
nuance with the TDP GUI.  If, in the Restore window, you select a diff
from the Tree view - the TDP automatically selects the previous full.
However, to select only the diff, you must select the Database from the tree
view, and select the diff from inside the right-hand side panel window.
Confused?  I am just writing this.  Thanks for the help.

Cheers,
  K.B.

Kris Bock
Storage Consultant
Information Services Division
QR
mailto:[EMAIL PROTECTED]
Phone: 07 3235-3610
Fax:   07 3235-3620


**
This email and any files transmitted with it are confidential and intended
solely for the use of the individual or entity to whom they are addressed.
If you have received this email in error please notify the system manager of
QR.

This message has been scanned for the presence of computer viruses. No
warranty is given that this message upon its receipt is
virus free and no liability is accepted by the sender in this respect.

This email is a message only; does not constitute advice and should not be
relied upon as such.
**


Re: How many volumes can I delete at once?

2003-10-07 Thread Pole, Stephen
I agree with Ray and Wira.

Either write a script or use a macro to delete volumes in bulk:
delete volume <volume_name> discarddata=yes wait=yes
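
If it helps, a quick way to build such a macro is a few lines of shell - a
sketch only, assuming the volume names sit one per line in vols.txt and that
an administrative ID with the necessary authority is available:

   #!/bin/sh
   # Turn a list of volume names into a TSM macro that deletes them
   # one at a time, waiting for each deletion to complete.
   while read vol; do
       echo "delete volume $vol discarddata=yes wait=yes"
   done < vols.txt > delvols.mac

   # Run the macro through the administrative client
   dsmadmc -id=admin -password=secret macro delvols.mac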

Regards



-Original Message-
From: Wira Chinwong [mailto:[EMAIL PROTECTED]
Sent: 08 October, 2003 10:15 AM
To: [EMAIL PROTECTED]
Subject: Re: How many volumes can I delete at once?


I agree with Ray. It's fast and there's no risk. I once deleted more than 4
volumes concurrently; it caused the log to fill and the TSM server came down.



Wira Chinwong
Professional Service Consultant


-Original Message-
From: Ray Baughman [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 07, 2003 9:50 PM
To: [EMAIL PROTECTED]
Subject: Re: How many volumes can I delete at once?

I never delete more than one at a time.  If I have multiple volumes to
delete, I write a shell script to delete them one at a time, using delete
volume <volume_name> discarddata=yes wait=yes.  I've found this is the fastest
way to delete multiple volumes, even if the volumes are from different storage
pools.

Ray Baughman
National Machinery LLC

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Joe Howell
 Sent: Tuesday, October 07, 2003 9:38 AM
 To: [EMAIL PROTECTED]
 Subject: How many volumes can I delete at once?


 I have to delete some volumes from a copy storage pool.  I have
 attempted to delete as many as four volumes at once (del vol <volume_name>
 discarddata=yes) and had deadlock problems.  How many volumes
 should I expect to be able to process at once?  TSM 5.1.5 on OS/390.


 Joe Howell
 Shelter Insurance Companies
 Columbia, MO

 -
 Do you Yahoo!?
 The New Yahoo! Shopping - with improved product search



Re: Tape Technology Comparison

2003-10-02 Thread Pole, Stephen
Here's another one.

Try this link it may help you.

ftp://ftp.software.ibm.com/software/tivoli/analystreports/tsm.pdf

or
http://www-3.ibm.com/software/tivoli/library/analystreports/

As for STK vs. 34XX: I've used the 3494 for years. A few of the sites I have
had have a mixture of 3494s and STK PowderHorns, all with 3590s. All are
reliable and rarely give trouble; some are more than 6-7 years old. ATLs will
more than likely outlast your servers and tape drives! After all, what is
there to wear out?

On the drive side, tape access speeds are good. But where we have long tape
retention times, read/write speed and reliability are all important. On the
other hand, we also use the TSM API for cutting pieces out of the data stored
within TSM, so there is a lot of start/stop activity. Hence the 3590 and 3590E.


Guess it depends on the size of your boss's piggy bank :-)
Cheers


-Original Message-
From: Leonard Lauria [mailto:[EMAIL PROTECTED]
Sent: 30 September, 2003 9:30 PM
To: [EMAIL PROTECTED]
Subject: Re: Tape Technology Comparison


Our experience has been with 9840 (A's) and LTO gen 1 drives.  The 9840s have
been rock solid for years, while the LTO has had more failures in the past 14
months or so than all of our STK drives over the past 8 or so years.

The LTOs started off rough, but I must admit that lately things have
stabilized, though we still have some oddities occur (may be SAN backup
related).  I wouldn't be concerned with the proprietary drives from STK.

Our libraries are an STK 4410, an STK 9310 (11,000 slots total) and an IBM
3584 single frame.
No problems with the robotics in any of them.

leonard

At 01:09 PM 9/30/2003 +0100, you wrote:
Hello

Anybody out there able to share their thoughts and experiences?

We're doing a technology refresh on some of our libraries and have a number
of products in the frame:
IBM 3584 / LTO-2
ADIC Scalar i2000 / LTO-2
HP ESL9322 / LTO-2
StorageTek L700e / T9940B

The interesting one from my point of view is the L700e with its T9940B
drives. The performance/capacity comparisons between LTO-2 and T9940B seem
close enough to make no difference which leaves cost and reliability as the
differentiators.

Reliability will be an important factor in our decision and the T9940B
seems to be marketed as a high duty cycle, 24 x 7 drive. Does anybody have
any real world experiences in a TSM environment which suggest the T9940B is
more (or less) reliable than LTO-2? Should I be concerned about going for a
'proprietary' technology like 9x40 instead of an 'open' standard like LTO?

Also, any thoughts on the libraries we are considering would be gratefully
appreciated.


Re: Tape Technology Comparison

2003-10-01 Thread Pole, Stephen
Well, that's a pretty broad question. I am sure someone will get around to
an answer soon.

Here's my piece for what it is worth.

Surprised no one offered 3590 technology!! Price could be it. But the second
hand market is pretty OK. Mind you, the sales guy would not make as much then,
would he? Never stopped me in the past though. But it depends on how big your
enterprise is and what data you're trying to get at. Think about the data you
are backing up. Think about how you are going to go about restoring it. Think
about the mechanics of the drive. Are the data a mix of big files and small
files? On restore, will the tape perform? How will it handle skipping past
data it does not require? For example, the 3584: this tape technology offers
high performance and reliability and can connect to multiple operating
systems for library sharing.

3590s are very strong in stop/start operations (small files), and just as
good in streaming operations with huge data files. They are good in big-system
operations as well, such as mainframes.

If you have the chance, speak to as many people as you can in operations and
to Customer Service Engineers.

Get with someone from each company who understands the demands you have.

Andy Raibeck from this group has written some really good stuff on tape
drive technologies. You can gain access to his writings on the subject from
the adsm.org website, as I recall.

Also, the choice of library may have a bit to do with it as well. Some offer
both 3584 and DLT in the same cabinet.

Ok, each to his own.. my afternoon tea is over ...

Happy researching.

Regards

Stephen


-Original Message-
From: Leonard Lauria [mailto:[EMAIL PROTECTED]
Sent: 30 September, 2003 9:25 PM
To: [EMAIL PROTECTED]
Subject: Re: Tape Technology Comparison


At 01:09 PM 9/30/2003 +0100, you wrote:
Hello

Anybody out there able to share their thoughts and experiences?

We're doing a technology refresh on some of our libraries and have a number
of products in the frame:
IBM 3584 / LTO-2
ADIC Scalar i2000 / LTO-2
HP ESL9322 / LTO-2
StorageTek L700e / T9940B

The interesting one from my point of view is the L700e with its T9940B
drives. The performance/capacity comparisons between LTO-2 and T9940B seem
close enough to make no difference which leaves cost and reliability as the
differentiators.

Reliability will be an important factor in our decision and the T9940B
seems to be marketed as a high duty cycle, 24 x 7 drive. Does anybody have
any real world experiences in a TSM environment which suggest the T9940B is
more (or less) reliable than LTO-2? Should I be concerned about going for a
'proprietary' technology like 9x40 instead of an 'open' standard like LTO?

Also, any thoughts on the libraries we are considering would be gratefully
appreciated.

For info, no suppliers offered 3590 or 3592. This was probably on grounds
of cost but I'm guessing the first is looking obsolete now while the second
is a bit too new. Also, we're upgrading from DLT7000 libraries so chances
are we'll be ecstatic with whatever we get.

Thanks
Neil Schofield
Yorkshire Water Services





Re: restore/retrieve optimisation

2003-09-30 Thread Pole, Stephen
Maybe this will help.

Try this

Redbook SG246844

Disaster Recovery Strategies using TSM

Section 8.3.3 page 179

Tape storage pools

Tape storage pools in most TSM installations store the majority of the data
volume. Tape storage pools can hold more data than disk storage pools. Unlike
disk, the tape medium provides sequential access. TSM maintains and optimizes
the utilization of tape media by the space reclamation process. Space
reclamation does not directly influence the DR process; however, if tape
volumes are sparsely utilized due to expiring and deleted files, data
recoveries will take much longer.

The TSM server has the ability to collocate client data on tape volumes. When
files are moved to a collocated storage pool, TSM ensures that the files for a
specific client are written to the same set of tapes. This can limit the
number of tapes that must be mounted when restoring that client's system.
Collocation can be done at the client level or by individual filespace. When
you are deciding whether or not to enable collocation, keep in mind:

* Non-collocation increases performance on backups because TSM does not
have to select specific tapes.

* Collocation increases performance on restores because client data is
confined to its own dedicated set of tapes. Therefore there is less necessity
to skip data not needed for the restore.

Collocation is a parameter of any given sequential access storage pool. Each
client whose data is placed in that storage pool will have its files
collocated. From a DR perspective, collocation is recommended for storage
pools containing data which has the shortest RTO. Backup data of common
workstations can be held in non-collocated tape storage pools. Consider
carefully whether to use collocation on copy storage pools: it will
dramatically increase the number of tapes used each day to create the offsite
copies.

A good exercise would be to have the good professor draw a diagram of how
this works.
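
For what it's worth, collocation itself is just a storage pool parameter, so
turning it on is a one-liner - a sketch, with a made-up pool name (on 5.x the
COLLOCATE parameter accepts NO, YES, or FILESPACE):

   update stgpool TAPEPOOL collocate=yes
   query stgpool TAPEPOOL f=d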

Regards




-Original Message-
From: Remco Post [mailto:[EMAIL PROTECTED]
Sent: 30 September, 2003 9:05 PM
To: [EMAIL PROTECTED]
Subject: restore/retrieve optimisation


Hi all,

last week I received a question from a retired professor who was digging
into back-up solutions. One of his major concerns was the number of
tape mounts required in a large restore or retrieve operation. I told him
that TSM optimises for a minimum number of tape mounts based on the list of
files to be restored or retrieved (with the possible state of collocation as
a given fact at that point). He then asked me to provide documents stating
exactly that. Now we all know it's true, and Andy has been heard making
this exact statement last week in Oxford, but where is this written down?
I've been reading through several TSM documents, including the TSM Concepts
redbook, but I've not come across any document that explains exactly this.

Could anyone point out a document to me that I could give to this professor?

--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 8008Fax. +31 20 668 3167

I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer industry
didn't even foresee that the century was going to end. -- Douglas Adams


Re: Saving data on a defective cartridge

2003-09-18 Thread Pole, Stephen
Well said Steve,

I concur, backups to /dev/null run ever so much faster, and really save on
tapes!! I've tried it.

Now watch and see if someone actually takes the advice!! Hahahahahah

I wonder if these so-called managers are prepared to stand up and be
counted when the site burns down and there are no off-site tapes to recall.
Doh!!

You forgot one thing though. Never have a disaster recovery plan! Instead
have your CV ready! Too easy!

Thanks so much for the humour; it's Friday, and it's made a good start to my
weekend.

:-))

Stephen

Quote of the day
Backups to /dev/null run so much faster, and you save on tapes.



-Original Message-
From: Steve Harris [mailto:[EMAIL PROTECTED]
Sent: 19 September, 2003 7:52 AM
To: [EMAIL PROTECTED]
Subject: Re: Saving data on a defective cartridge


Don't you hate it when management thinks that *any* cost is too much?

How about figuring out what a GFS system would cost in tapes to back up the
same data as you do now?
Say you have a 90 day retention; that would cost say 10 daily copies + 3
weekly ones + 4 monthlies, or 17 full copies of the active data.

You should find that TSM is more economical with two copies than GFS is
with one.

Point this out to the Pointy Haired manager.

If that doesn't work, then point out how backups to /dev/null run so much
faster and use even less tape :)

Regards

Steve Harris
AIX and TSM Admin
Queensland Health, Brisbane Australia


 [EMAIL PROTECTED] 19/09/2003 3:05:06 
A little off the subject, and I have already heard Richard Sims's view on not
having a second copy...  but what are most shops doing with respect to a
second copy?
I'm in a pretty large shop and upper management, in a cost savings effort,
wants us to turn off the creation of a second tape copy.  I'm not too
comfortable with the idea.  What are your thoughts?

-Original Message-
From: Coats, Jack [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 18, 2003 12:38 PM
To: [EMAIL PROTECTED]
Subject: Re: Saving data on a defective cartridge


Try doing a move data to get the data off of the tape.  If I find I am
starting to have problems I usually do a:

update vol VOLUMENAME acc=reado
move data VOLUMENAME

This should move all data that is recoverable from the volume to another
volume in the same storage pool.  I then eject the offending volume and
check it for apparent physical issues.  Then the part I hate:

delete vol VOLUMENAME discarddata=yes

Sometimes I am able to re-label the volume and use it again.  But
typically it gets moved to a less critical use, returned to the vendor
for a new tape [my preferred method], or degaussed and destroyed by
a certified vendor [least preferred, paying to have it thrown away].

If anyone has a better method, please let me know! ... JC

-Original Message-
From: Gerhard Rentschler [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 18, 2003 11:24 AM
To: [EMAIL PROTECTED]
Subject: Re: Saving data on a defective cartridge


Hello,
I forgot to mention that because of lack of resources I can't afford a
copypool for the backup files. I have one for the archives.
Best regards
Gerhard

---
Gerhard Rentschler         email: [EMAIL PROTECTED]
Regional Computing Center tel.   ++49/711/685 5806
University of Stuttgart   fax:   ++49/711/682357
Allmandring 30a
D 70550
Stuttgart
Germany



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Juan Manuel Lopez Azanon
 Sent: Thursday, September 18, 2003 6:02 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Saving data on a defective cartridge


 Disaster recovery management: Restore it from outside volumes from copy
 stgpool





***
This email, including any attachments sent with it, is confidential and for
the sole use of the intended recipients(s).  This confidentiality is not
waived or lost, if you receive it and you are not the intended recipient(s),
or if it is transmitted/received in error.

Any unauthorised use, alteration, disclosure, distribution or review of this
email is prohibited.  It may be subject to a statutory duty of
confidentiality if it relates to health service matters.

If you are not the intended recipients(s), or if you have received this
e-mail in error, you are asked to immediately notify the sender by telephone
or by return e-mail.  You should also delete this e-mail message and destroy
any hard copies produced.

***


Re: status=private

2003-08-14 Thread Pole, Stephen
Hi Hector,

I used to have the same questions as you, except I was the manager, not the
TSM guru.

Here is what I recall.

Generally speaking a newly labeled tape gets assigned as a scratch tape.

For example, as I recall at my old site...

To tell ADSM/TSM of a tape's existence you would:
checkin libvol 3494 search=yes status=scratch devtype=3590

STATus (Required)

  Specifies the volume status. Possible values are:

  PRIvate
    Specifies that the volume is a private volume that is mounted only when
    it is requested by name.

  SCRatch
    Specifies that the volume is a new scratch volume. This volume can be
    mounted to satisfy scratch mount requests during either data storage
    operations or export operations.

Once TSM grabs a scratch tape and uses it, the category then becomes private.

If the tape is removed or dropped in the library, becomes homeless, or
requires checking in again, then the status is private. It's been a while, but
I am sure someone (with everyday experience) could help.

On assigning the tape to storage pools:

This is done using DEFINE VOLUME, along the following lines.

Syntax:

  DEFine Volume pool_name volume_name
         [ACCess = READWrite (default) | READOnly | UNAVailable | OFfsite]

  (OFfsite is valid only for volumes in copy storage pools.)


As for assigning tape labels to storage pools, you can, for example, assign
all 'A' series tapes to a storage pool that contains only certain client data,
for example backups from servers. 'B' series could be backups from PCs, and so
forth. It could also be that 'C' series tapes are controlled by another
server, 'D' series are for disaster recovery tapes, etc.

We did this for a few years, but it became tedious and was unnecessary. Our
business also changed considerably and this was no longer required. But if we
had to pull out all the tapes and put them in another 3494, then at least we
had an idea of which tapes contained what data.
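
For example, to hand a labelled tape straight to a particular pool, the two
steps look roughly like this (the library, volume, and pool names are made up):

   checkin libvolume 3494LIB A00001 status=private
   define volume TAPEPOOL A00001 access=readwrite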

I am sure someone with more day to day operational experience will be more
than happy to help you on this.

Cheers


Stephen






-Original Message-
From: Hector Chan [mailto:[EMAIL PROTECTED]
Sent: 14 August, 2003 6:50 AM
To: [EMAIL PROTECTED]
Subject: status=private


Hi there,

A generic question. I cannot find a satisfactory answer from the manuals.

Under what circumstances do we want to check in a newly labelled tape with
status=private? Do we have to assign it to a storage pool as a subsequent
step, or will it otherwise not be available?


Hector Chan


Re: Large file backup strategies

2003-08-14 Thread Pole, Stephen
Ever thought about storing the bulk items outside of the database, and
creating within the database links to the files contained in TSM via the API?
This will have a twofold effect:

1) Keep your database doing what it does best: referencing data, not storing
bulk objects.

2) Speed up access, depending on how you store the data: on tape, in a
dedicated storage pool, HSM, or whatever you decide on given your
circumstances.

I.e., store the object's name in the database and reference the TSM path name
in the database. These items would have to be loaded into TSM first, then
referenced in the database for later retrieval.
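
As a crude illustration of the idea - using the backup-archive client's
archive command rather than the API proper, and with made-up paths and
table names:

   # push the bulk object into TSM, tagged with a description
   dsmc archive /gis/orthos/tile_0472.tif -description="ortho tile 0472"

   # the relational database then stores only the reference, e.g.
   # INSERT INTO ortho_tiles (tile_id, tsm_path, tsm_desc)
   #   VALUES (472, '/gis/orthos/tile_0472.tif', 'ortho tile 0472');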

Of course, it depends on how far you are into the project, I guess, and
whether you have already explored this and discounted it.

IMHO..

regards

Stephen



-Original Message-
From: Zlatko Krastev [mailto:[EMAIL PROTECTED]
Sent: 10 August, 2003 11:08 PM
To: [EMAIL PROTECTED]
Subject: Re: Large file backup strategies


Many comments:
1. A multi-gigabyte backup file needs time to be created. Afterwards you need
to back up that file using the backup product (be it TSM or not). Hope you are
not also adding to the sum a backup to the disk pool and migration to tape.
Use TDP for SQL as already suggested. It is not that expensive compared to the
*restore* you will have to perform - restore the file and just after that
restore the database. And applying transaction logs might be very useful,
mightn't it?

2. Using a single backup file forces a single-threaded backup, and thus a
single-threaded restore. As a result, even if the database data files are
spread across several spindles, the speed is bottlenecked by the performance
of the filesystem the backup file is created in. Also, that same filesystem
must accommodate the whole database, therefore be big, and is probably
therefore on RAID-5 - again a backup performance decrease.
The answer is again TDP for SQL, with its multi-stripe backup/restore
capabilities (see the sketch after these comments).

3. What is the business requirement justifying those 60 versions of a file?
It is hard to understand for both data and OS files. Maybe the actual
requirement is to keep the last 60 days, but usage of some other product
transformed the requirement into 60 versions.
Find the root cause! Ask for requirements to be defined in *business* terms
and not in IT terms (from your signature it seems you ought to know the
ArcView requirements, but maybe again in IT terms).

4. Your administrator is wrong. TSM is a mature product and can definitely
distinguish between different files and their corresponding requirements.
Ask your administrator to become familiar with the redbook TSM Concepts
(SG24-4877) or recommend that he/she take some TSM education.
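
For completeness, the striping mentioned in comment 2 is driven from the TDP
for SQL command line - roughly like the line below (recalled from the v2.2
CLI, with a made-up stripe count; check the TDP for SQL documentation for the
exact syntax at your level):

   tdpsqlc backup * full /stripes=4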

Zlatko Krastev
IT Consultant






Randy Kreuziger [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
08.08.2003 17:58
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Large file backup strategies


We recently purchased a couple of SQL Server boxes for use in storing ESRI
GIS layers. This includes storing orthophotography images covering
our entire state. The orthos are stored in SQL Server, with the initial
load creating a backup file 55 GB in size. As we load more TIFs this
backup file will only grow.

According to the administrator of our backups, the policy governing this
machine is set to maintain 60 versions of a file. I thought this was
overkill before, but with SQL Server backups that will approach 100 GB our
backup server will probably drown in orthophoto backups.

My administrator states that the system cannot be configured to retain x
number of versions of one file type and y number of versions for all
others. Are there any workarounds so that I don't have 60 versions of
.bak files while retaining that number for all other files?

Thank you,
Randy Kreuziger
ArcSDE Administrator


Re: help:server recovery

2003-07-24 Thread Pole, Stephen
I've just rethought my post of the other day; it was generally unhelpful.
Hope you have all this sorted out by the time you get this.

Unfortunately (fortunately), I have never had your problem.

Let me know how you get on.

Good luck

Stephen

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: 23 July, 2003 11:24 AM
To: [EMAIL PROTECTED]
Subject: help:server recovery


Hi all, I deleted my logvol from my OS. My logmode is normal and I don't
back up my db either. How can I recover the server?
Thanx!
liming
[EMAIL PROTECTED]
2003-07-23


Re: help:server recovery

2003-07-22 Thread Pole, Stephen
Is your resume up to date? :-)

I am sure someone else here will help you.

Cheers

Stephen

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: 23 July, 2003 11:24 AM
To: [EMAIL PROTECTED]
Subject: help:server recovery


Hi all, I deleted my logvol from my OS. My logmode is normal and I don't
back up my db either. How can I recover the server?
Thanx!
liming
[EMAIL PROTECTED]
2003-07-23