Re: schedule of SQL LOG backup

2004-09-22 Thread P Baines
Hello Luc,

I think the only way to do this from the TSM scheduler is to define
three schedules, one with a start time of 00:00, one at 00:20 and one at
00:40. Then set the periodicity to one hour for each of the three
schedules.
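
For what it's worth, that staggered setup would look roughly like the
administrative macro below (domain, schedule and node names, and the command
being scheduled, are only placeholders for whatever actually drives the SQL
log backup):

  /* three hourly schedules, offset by 20 minutes */
  define schedule sqldom sqllog_00 action=command objects="c:\tsm\sqllogbackup.cmd" starttime=00:00 period=1 perunits=hours
  define schedule sqldom sqllog_20 action=command objects="c:\tsm\sqllogbackup.cmd" starttime=00:20 period=1 perunits=hours
  define schedule sqldom sqllog_40 action=command objects="c:\tsm\sqllogbackup.cmd" starttime=00:40 period=1 perunits=hours
  define association sqldom sqllog_00 sqlnode
  define association sqldom sqllog_20 sqlnode
  define association sqldom sqllog_40 sqlnode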

Cheers,
Paul.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Luc Beaudoin
Sent: Tuesday 21 September 2004 22:33
To: [EMAIL PROTECTED]
Subject: Re: schedule of SQL LOG backup


Hi Mark ...
So with that setup ... worst case ... they will lose 4 hours of work
???

I'm working in a hospital ... so even 1 hour of lost lab results or
patient appointments can be kind of hell

anyway ... if there is no way of specifying minutes ... I will put the
minimum ... 1 hour ...

thanks a lot Mark

Luc





Stapleton, Mark [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2004-09-21 04:28 PM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: schedule of SQL LOG backup


From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
Behalf Of Luc Beaudoin
I thought of doing Full backup every 8 hours and LOG backup
every 20 minutes ...
Is there a best practice for SQL backup ??

What works best is whatever meets your business needs. Most of my
customers do a full backup of databases once a day, and periodic log
backups (every 4 hours, for example) throughout the day.

YMMV.

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627





Re: Duel tape write to LTO's

2004-09-22 Thread Daniel Sparrman
Hi Milton

Yes, we are successfully using the COPYSTGPOOLS feature. We have large
Oracle and DB2 database backups going directly to tape. The primary tape
stgpool has COPYSTGPOOLS pointing to a copy pool that resides in a different
library, at a remote location (we're using a 9310 with 9840B drives as the
primary tape stgpool and a 9310 with 9840C drives as the copy pool, located
20 km away from the main office).

During the backup process, if a mount request is denied or there aren't
any idle drives in the copy pool, the backup will continue, temporarily
removing the copy pool from the primary pool's COPYSTGPOOLS list. This all
depends on the parameter CONTINUECOPYONERROR, which is also set on the
primary stgpool. If you set this parameter to YES, the backup will
continue without requesting any more mount points in your copy storage pool.
If you set this parameter to NO, however, the backup will either go into
mount wait status or fail, depending on why it can't get a mount point in
your copy storage pool.
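
For reference, a minimal sketch of the definitions being described (pool and
device class names are made up; as far as I can tell, the copy-continue
behaviour referred to above appears as the COPYCONTINUE parameter on
DEFINE/UPDATE STGPOOL in the 5.1/5.2 command reference):

  /* copy pool in the remote library, primary pool writing to both */
  define stgpool copypool copyclass pooltype=copy maxscratch=100
  define stgpool tapepool tapeclass maxscratch=200 copystgpools=copypool copycontinue=yes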

Remember, this feature is only available for LAN-based backups. It is still
not available for LAN-free backups, which I find strange, as the
largest backups are the ones using the LAN-free functionality.

We managed to reduce the amount of data being backed up by the BACKUP
STGPOOL process to 40%. If you don't use any LAN-free clients, this feature
should be able to completely remove the need for BACKUP STGPOOL. I do,
however, recommend still running the process, even if it doesn't back up any
data. It's always good to know that ALL your data resides in the copy pool.

I do not recommend using this feature on your primary disk pools. It would
only mean that every backup session started from your clients would
need a mount point in your copy storage pool. That is a good way to
put all your clients into mount wait status, or a good way to end up needing
more drives.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 TÄBY
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51



Johnson, Milton [EMAIL PROTECTED] 
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2004-09-21 17:44
Please respond to
ADSM: Dist Stor Manager [EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc

Subject
Re: Duel tape write to LTO's






 It depends upon where you define the copypool to reside.  If it is
contained in the 2nd library then yes.  Has anyone out there in TSM
land actually used this feature?  What happens to the back-up when one
of the tape volumes fills up?  Does it go into a media wait state until
the next volume is mounted?  What happens if there isn't a volume
available in the copypool?  Any other gotcha's?

H. Milton Johnson

 
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Timothy Hughes
Sent: Tuesday, September 21, 2004 10:05 AM
To: [EMAIL PROTECTED]
Subject: Re: Duel tape write to LTO's

Hi Milton,

When TSM writes simultaneously to the copypool, would this be on the 2nd
library for dual tape backup?

Johnson, Milton wrote:

  You should be able to create a PRIMARY STGPOOL named TAPEPOOL and a 
 COPY STGPOOL named COPYPOOL with both of them having a sequential 
 access
 (tape) DEVICE CLASS such as DLT or LTO.  Both stgpools can be in the 
 same library.  On the stgpool TAPEPOOL definition you set the 
 COPYSTGPOOLS parameter to COPYPOOL.  Then when your client backs up to

 TAPEPOOL, TSM will simultaneously write to COPYPOOL.  Of course having

 an adequate number of tape drives is required.

 H. Milton Johnson


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf 
 Of Timothy Hughes
 Sent: Friday, September 17, 2004 8:01 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Duel tape write to LTO's

 Hello,

 I was told that this could work if I have 2 backup disk pools.

 Like I have backup diskpool, then I can have like say a DB2 backup 
 diskpool then I can have the next storage pool setting for the db 
 backup pool so I can migrate to the one Library then for the other 
 backup disk pool I can have it migrate to the other Library.

 I think I can have simultaneous write to two different libraries this 
 way. Still not sure if this would work.

TSM Library setup

   TSM  SERVER  LTO_LIB  LIBRARY

   TSM  SERVER  RMT1  DRIVE  LTO_LIB
   TSM  SERVER  RMT2  DRIVE  LTO_LIB
   TSM  SERVER  RMT3  DRIVE  LTO_LIB
   TSM  SERVER  RMT4  DRIVE  LTO_LIB

   TSM  SERVER  RMT_LTO  LIBRARY

   TSM  SERVER  RMT5  DRIVE  RMT_LTO
   TSM  SERVER  RMT6  DRIVE  RMT_LTO
   TSM  SERVER  RMT7  DRIVE  RMT_LTO
   TSM  SERVER  RMT8  DRIVE  RMT_LTO

 Any other ideas comments are welcome!

 Thanks

 Johnson, Milton wrote:

  It is the basic philosophy of TSM to have only one copy of a file in

  a

  PRIMARY STORAGE POOL. With TSM 5.x you can simultaneously write to a

  PRIMARY STORAGE POOL and COPY STORAGE POOL (see HELP DEFINE
STGPOOL).
 
  H. Milton Johnson
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager 

Excessive SCSI media errors

2004-09-22 Thread Gordon Woodward
Been having some problems with our Dell PowerEdge 136T library with LTO drives lately;
we've been getting a lot of bad media errors (ANR8944E) and sporadic I/O errors
(ANR8302E). About six of our LTO tapes at the moment have been
experiencing in excess of 180 read errors each, and occasionally they have hung
reclamation or storage pool backup processes, forcing a reboot of the server.
Several of our other tapes have one or two read errors attributed to them as well, but
not to the level of some of the others.

Looking back at the activity log, the errors primarily occur on DRIVE0 and DRIVE1 in
the library (4 LTO drives in total installed). I've run an AUDIT VOLUME on a couple of
the tapes and damaged files were encountered. Drives 0 & 1 do not have their cleaning
lights illuminated and they get cleaned every two weeks or when required. The
library's and LTO drives' firmware is at the latest release supplied by Dell.

The Eventlog for the TSM servers has some of these errors reported in it:

The description for Event ID ( 3 ) in Source ( AdsmScsi ) cannot be found. The local 
computer may not have the necessary registry information or message DLL files to 
display messages from a remote computer. The following information is part of the 
event: \Device\mt1.2.0.3, Locate Block ID, DD_DRIVE_OR_MEDIA_FAIL.

The device, \Device\mt1.2.0.3, has a bad block.

I'm unsure whether this is a symptom of our drives needing a service or perhaps our 
tapes are getting a bit worn as most have been stored in the library for the past 3 
years and used regularly. Anyone had similar problems in the past?

Environment:

TSM for Windows v5.2.3
TSM Device Driver v5.2.3
Windows 2000 Server w/sp4
Dell PowerEdge 136T Library

Thanks,

Gordon Woodward
Wintel Server Support





AW: disk storage pools and windows compressed file systems

2004-09-22 Thread Stefan Holzwarth
Hi Steve,
I think it's better to use the client CPU for that task, to save disk space
and CPU at the TSM server.
That also reduces network load for backup and restore.
We have used compression at the client on all TSM clients, including TDP for
SQL, for a long time.
The load during backup time is typically low, so we can use the CPU for that
task without problems.
We are quite satisfied with the performance of that solution.

Regards 
Stefan Holzwarth

-Original Message-
From: Steve Bennett [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 21 September 2004 19:11
To: [EMAIL PROTECTED]
Subject: disk storage pools and windows compressed file systems


Have any of you used disk primary storage pools which use windows
compressed file systems? Comments on performance, etc?

We are investigating use of a multi TB raid5 array to use as a buffer
between our local primary disk pool and the tapepool. Have seen the
posts regarding file vs disk device classes but what about compression?
Good, bad, etc.

Win 2000 sp4 with TSM server 5.2.3.2

--

Steve Bennett, (907) 465-5783
State of Alaska, Enterprise Technology Services, Technical Services Section


Re: D2D on AIX

2004-09-22 Thread Robert HECKO
Hello

Can you send this presentation also to me ?

thank you.

best regards

Robert Hecko

- Original Message -
From: Johnson, Milton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, September 20, 2004 7:57 PM
Subject: Re: D2D on AIX


 It depends upon how you configure things.  For dynamic allocation of
 volumes, then yes you are limited to the size of the file system that
 you mount on that mount point.  However if you define the stgpool
 volumes explicitly using the DEFINE VOLUME command, you can place the
 volumes across as many file systems as you want.  I will email you a PDF
 presentation IBM has on Disk Only backups.
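
For reference, the explicit-volume approach described above might look roughly
like this (device class, pool and directory names and the sizes are only
examples; on 5.2 servers DEFINE VOLUME can pre-format the volume with
FORMATSIZE, while older levels would use the dsmfmt utility instead):

  define devclass filedev devtype=file maxcapacity=10g mountlimit=20 directory=/tsmfile1
  define stgpool filepool filedev maxscratch=0
  /* volumes spread across several JFS2 file systems */
  define volume filepool /tsmfile1/vol001.dsm formatsize=10240
  define volume filepool /tsmfile2/vol002.dsm formatsize=10240
  define volume filepool /tsmfile3/vol003.dsm formatsize=10240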


 H. Milton Johnson

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 Eliza Lau
 Sent: Monday, September 20, 2004 12:11 PM
 To: [EMAIL PROTECTED]
 Subject: D2D on AIX

 Our 3494 with 3590K tapes in 3 frames is getting full.  Instead of
 adding another frame or upgrading to 3590H or 3592 tapes we are looking
 into setting up a bunch of cheap ATA disks as primary storage.

 The FILE devclass defines a directory as its destination and JFS2 has a
 max file system size of 1TB.  Does it mean the largest stgpool I can
 define is 1TB?

 My Exchange stgpool alone has 8TB of data.  Do I have to split it up
 into 8 pieces?

 server: TSM 5.2.2.5 on AIX 5.2
 database 90GB at 70%
 Total backup data - 22TB

 Eliza Lau
 Virginia Tech Computing Center
 [EMAIL PROTECTED]



SWAP !

2004-09-22 Thread goc
Is there another way to install this miracle? I mean, I'll never see the
input boxes in X Windows ... and that means of course I cannot install this
... help.

thanks

goran


Re: D2D on AIX

2004-09-22 Thread TSM_User
Good questions. Our real-world example: we went from around 8 - 12 GB/hr restore off of
tape to over 40 GB/hr from the file device classes.  Our test was a file server with a
little over 300 GB of data.  The file server and the TSM server both had 1 Gb NICs.
Resource utilization was set to 10 in both cases.  The data had been fragmented on tape
for a little over a year for the first test.  The data had been fragmented over disk for
nearly 8 months.

Steve Harris [EMAIL PROTECTED] wrote: How does TSM access the data on file volumes? 
Does it keep an offset of the start of every file or aggregate?

If it does, then yes we could skip to the start of each file or aggregate. If it does 
not, then we need to read through the volume to find the file we are going to restore. 
Where we have a large number of concurrent restores happening, this could cause 
performance issues on the array.

Now TSM has some smarts on later technology tape drives that have block addressability 
and on-cartridge memory and can find a spot on the tape quickly, but does this 
translate to file volumes?

Regards

Steve.

 [EMAIL PROTECTED] 22/09/2004 4:49:55 
True. Seek time is tiny compared to tape mounts. I am just concerned that
the TSM db has to keep track of thousands of volumes. How much will it increase
the size of the db? Ours is already 90G at 70% utilized.

Eliza


 == In article [EMAIL PROTECTED], Eliza Lau writes:

  What is the recommended volume size. I have seen someone mentioned 5G, but
  then the number of volumes will explode from about 800 (current # of 3590
  primary tapes) to thousands.

 Consider, this doesn't really cost you much. Seek time in a directory of
 thousands of files is still tiny compared to tape behavior.

 I probably wouldn't go as low as 5G, but 10G (much less than the average size
 of my 3590 vols) seems pretty reasonable to me. 20G is getting big, from my
 perspective.



  How about keeping the staging space so clients backup to staging then
  migrate to FILE volumes. Then every volume will be filled up.


 I like this, too.


 - Allen S. Rout









Re: D2D on AIX

2004-09-22 Thread Daniel Sparrman
Hi

Comparing these types of numbers is a bit unfair. We have customers
running 9840 and LTO-2. They have a lot higher throughput than 8-12GB/hour
over a Gb NIC.

For example, we have a customer running NetWare. The TSM server is an AIX
server (pSeries 615) connected to a 3584-L32 library with 3 LTO-2 drives.
The NetWare server has about 200GB of data. The AIX server has three
100Mb/s NICs, bundled together in an EtherChannel interface (theoretical speed
is 300Mb/s, or 30MB/s). The NetWare server is connected through 100Mb/s
Ethernet (single adapter). The server has a restore time of about 5½ hours,
which means we have an hourly throughput of almost 40GB/hour. Average
network speed is 11MB/s. The NetWare server utilizes multi-session restore,
which means it can mount multiple volumes at once for restores.

We have another customer running a pSeries 650 cluster. The cluster is
attached to a 3584-L32 library with 9 LTO-2 drives. The pSeries server is
equipped with an EtherChannel interface which consists of 2 Gb NICs.
During testing of a restore scenario on one of their Lotus Domino
servers (300GB of data), they reached about 50MB/s restoring directly from
tape. In this case, we didn't utilize multi-session restore, which meant
that the single LTO-2 drive could deliver 180GB/hour.

Today, the new tape technologies can easily outrun disks. To match LTO-2
drives against disks, you'll need large, fiber-attached disk subsystems,
with no other load than the TSM server load. Internal SCSI disks can never
outrun fiber-attached LTO-2 drives. The LTO-2 drive has a native speed of
35MB/s, compressed around 50-70MB/s depending on the type of data. They
also have dynamic speed, which means you don't get the back-hitch as long
as you keep writing data at at least 15MB/s. We've seen these drives
push up to 90MB/s on database backups and restores. During the testing
phase of the implementation, we had up to 380MB/s from the disks (two
mirrored FAStT900s connected through 4 FC HBAs with 34 15K 36.4GB fiber
disks per FAStT system) and almost 650MB/s from the drives (9 LTO-2 drives
connected through 4 FC HBAs).

The speed of the drives is all about design. If you attach a large number
of drives to a single FC HBA, you'll easily get back-hitch. With the LTO-2
drives, a fair number of drives per adapter is around 3-4.

Designing disk to match the tape drives is all about cost. S-ATA drives
can never outrun LTO-2 drives, at least not when it comes to large files
or database backups and restores. Designing FC disks to match the drives
will mean the cost is 10 times the cost of the tape drives.

This is all my opinion, and I'm sure that there are others out there who
don't agree.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 TÄBY
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51



TSM_User [EMAIL PROTECTED] 
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2004-09-22 04:27
Please respond to
ADSM: Dist Stor Manager [EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc

Subject
Re: D2D on AIX






Good questions. Our real world example:We went from around 8 - 12 GB/hr 
restore off of tape to over 40 GB/hr from the file device classes.  Our 
test was a file server with a little over 300 GB of data.  The File server 
and the TSM server both had 1 GB NIC's.  Resource utilization was set to 
10 in both cases.  The data was fragemented on tape for a little over a 
year for the first test.  The data was fragmented over disk for nearly 8 
months.

Steve Harris [EMAIL PROTECTED] wrote: How does TSM access 
the data on file volumes? Does it keep an offset of the start of every 
file or aggregate?

If it does, then yes we could skip to the start of each file or aggregate. 
If it does not, then we need to read through the volume to find the file 
we are going to restore. Where we have a large number of concurrent 
restores happening, this could cause performance issues on the array.

Now TSM has some smarts on later technology tape drives that have block 
addressability and on-cartridge memory and can find a spot on the tape 
quickly, but does this translate to file volumes?

Regards

Steve.

 [EMAIL PROTECTED] 22/09/2004 4:49:55 
True. Seek time is tiny compared to tape mounts. I am just concerned that
the TSM db has to keep track of thousands of volume. How much will it 
increase
the size of the db. Ours is already 90G at 70% utilized.

Eliza


 == In article [EMAIL PROTECTED], Eliza Lau 
writes:

  What is the recommended volume size. I have seen someone mentioned 5G, 
but
  then the number of volumes will explode from about 800 (current # of 
3590
  primary tapes) to thousands.

 Consider, this doesn't really cost you much. Seek time in a directory of
 thousands of files is still tiny compared to tape behavior.

 I probably wouldn't go as low as 5G, but 10G (much less than the average 
size
 of my 3590 vols) seems pretty reasonable to me. 

Re: Estimate of backup files

2004-09-22 Thread RAMNAWAZ NAVEEN
Hi,

I am posting this mail again as I have not received any response so far. Can anybody
help, please?

I would like to use the command line interface to run a command (or select statement)
to get an estimate of the total size of objects backed up under a particular
filesystem, by a particular node, for a particular date. In fact I want to achieve the
same result as provided by the TSM client GUI, where we have the functionality to
select the ESTIMATE option after having selected the individual files or filesystem
before starting a restore process.

I would very much appreciate it if someone could help me, please.

Thanks & warm regards







Re: Estimate of backup files

2004-09-22 Thread Mike
Hi RAMNAWAZ!

On Wed, 22 Sep 2004, RAMNAWAZ NAVEEN wrote:

  H i,

 I am posting this mail again as I have not received any response so far. Can anybody 
 help, please.

 I would like to use the command line interface to run a command (or select 
 statement) to get an estimate of the total size of objects backed up under a 
 particular filesystem by a particular node for a particular date. In fact I want to 
 achive the same result as provided by the TSM client GUI where we have the 
 functionailty to select the ESTIMATES option after having selected the individual 
 files or filesystem before starting a restore process.

 Would very much appreciaet if someone could help me please.

How close does a 'query files' get you through dsmc or dsmadmc?

Mike


Re: D2D on AIX

2004-09-22 Thread asr
== In article [EMAIL PROTECTED], Eliza Lau [EMAIL PROTECTED] writes:


 True. Seek time is tiny compared to tape mounts.  I am just concerned that
 the TSM db has to keep track of thousands of volume.  How much will it increase
 the size of the db.  Ours is already 90G at 70% utilized.

It's always a good idea to keep the DB size in the back of your mind; but my
take is that you probably don't need to think too hard about adding something
with size 'thousands' to it.

I started to feel that I had too many volumes at one point when I had remote
server volumes at 2G, and I had several hundred thousand of them.


To help yourself feel more comfortable with this, I suggest that you take a 'Q
occ' of a few nodes you consider small, moderate, and large.  Count the
total number of objects, and compare.  I find that even my small nodes have
hundreds of thousands of objects.
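
A quick way to make that comparison from an administrative session (NODE1 is a
placeholder node name; the second line sums objects per node from the
occupancy table):

  query occupancy NODE1
  select node_name, sum(num_files) from occupancy group by node_name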


I don't think that the db size per object is particularly close to the size
per volume (i.e. the per-volume overhead is probably much much less) , but you
can get a taste of the general order of how big is big.


- Allen S. Rout


Re: D2D on AIX

2004-09-22 Thread Thach, Kevin G
Could someone please email me the presentation as well?  Or send me the
link?  I just spent 20 minutes on IBM's website and couldn't find it.
Thanks!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Robert HECKO
Sent: Wednesday, September 22, 2004 4:14 AM
To: [EMAIL PROTECTED]
Subject: Re: D2D on AIX


Hello

Can you send this presentation also to me ?

thank you.

best regards

Robert Hecko

- Original Message -
From: Johnson, Milton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, September 20, 2004 7:57 PM
Subject: Re: D2D on AIX


 It depends upon how you configure things.  For dynamic allocation of 
 volumes, then yes you are limited to the size of the file system that 
 you mount on that mount point.  However if you define the stgpool 
 volumes explicitly using the DEFINE VOLUME command, you can place the 
 volumes across as many file systems as you want.  I will email you a 
 PDF presentation IBM has on Disk Only backups.


 H. Milton Johnson

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf 
 Of Eliza Lau
 Sent: Monday, September 20, 2004 12:11 PM
 To: [EMAIL PROTECTED]
 Subject: D2D on AIX

 Our 3494 with 3590K tapes in 3 frames is getting full.  Instead of 
 adding another frame or upgrading to 3590H or 3592 tapes we are 
 looking into setting up a bunch of cheap ATA disks as primary storage.

 The FILE devclass defines a directory as its destination and JFS2 has 
 a max file system size of 1TB.  Does it mean the largest stgpool I can

 define is 1TB?

 My Exchange stgpool alone has 8TB of data.  Do I have to split it up 
 into 8 pieces?

 server: TSM 5.2.2.5 on AIX 5.2
 database 90GB at 70%
 Total backup data - 22TB

 Eliza Lau
 Virginia Tech Computing Center
 [EMAIL PROTECTED]



latest tdp for Oracle

2004-09-22 Thread Jim Kirkman
My bookmarks are a mess! Can someone please point me to the latest
documentation on this product (for AIX).
Thanks,
--
Jim Kirkman
AIS - Systems
UNC-Chapel Hill
919-698-8615


Digest

2004-09-22 Thread Jonathan Kaufman
Jonathan Kaufman

Foot Locker Corporate Services, Inc.
E-Mail: [EMAIL PROTECTED]
Tel:414-357-4062
Fax:717-972-3700
Tie Line:89-221-4062


Re: Upgrade time...

2004-09-22 Thread Joe Crnjanski
Check the article "TSM server upgrade best practices":

http://www-1.ibm.com/support/docview.wss?rs=663&context=SSGSG7&q1=migrate+5.2&uid=swg21177863&loc=en_US&cs=utf-8&lang=en

and check the Quick Start guide for instructions on upgrading from different versions of TSM.

Joe Crnjanski
Infinity Network Solutions Inc.
Phone: 416-235-0931 x26
Fax: 416-235-0265
Web:  www.infinitynetwork.com



-Original Message-
From: Coats, Jack [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 21, 2004 5:55 PM
To: [EMAIL PROTECTED]
Subject: Upgrade time...


I could use a little help.  I am at TSM Win 4.2.3.1 on my server and need to
upgrade to 5.2 or so.  Probably to 5.2.2.3.

I have a 3583 with LTO1's, all freshly upgraded to current microcode a week
or two ago.



Any special gotcha's I should look for?



From what I have skimmed (I would say read, but I know I miss stuff),  it
looks like:



Do an un-install of the TSM device driver.

Do an un-install of the TSM server.

Install the new versions of both.



This seems too simple.  Are there any conversion utilities I should use?
Special commands to clean up the database?  Etc., etc.?



Just not wanting to miss anything.


DSMC ARCHIVE doesn't set %errorlevel%?

2004-09-22 Thread Thomas Rupp, Vorarlberger Illwerke AG
Hi TSM-ers,

I'm running a DOS script (archive.cmd) on a Windows XP Professional
computer with a TSM 5.2.0.1 client to archive some files:

DSMC Archive D:\* -deletefiles > C:\temp\tsm.log
If errorlevel 1 Goto Error

Unfortunately the errorlevel variable always returns 0, even though
tsm.log contains the following lines:

IBM Tivoli Storage Manager 
*** Fixtest, Please see README file for more information *** 
Command Line Backup/Archive Client Interface - Version 5, Release 2,
Level 0.1 
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights
Reserved.

Archive function invoked.

Node Name: PCS506
Session established with server NSM: AIX-RS/6000
  Server Version 4, Release 2, Level 2.8
  Server date/time: 22.09.2004 16:02:18  Last access: 22.09.2004
16:00:50

ANS1016I No eligible files were found.
ANS1803E Archive processing of '\\pcs506\d$\Messwertarchivierung\*'
finished with failures.

I have searched Richard Sims' QuickFacts, ADSM-L and the Internet but
could not find a solution.

Could this be a bug or is it just me not seeing the obvious?

Thanks in advance
Thomas Rupp
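
For anyone hitting the same thing: once the client is at a level that sets the
documented return codes (0 success, 4 skipped files, 8 warnings, 12 errors),
the batch check can look roughly like this; the paths and labels are only
illustrative:

  @echo off
  rem archive.cmd - illustrative only; adjust the archive path and log location
  dsmc archive D:\* -deletefiles > C:\temp\tsm.log 2>&1
  rem IF ERRORLEVEL is true for "greater than or equal", so test the highest code first
  if errorlevel 12 goto Error
  if errorlevel 8 goto Warning
  if errorlevel 4 goto Skipped
  echo Archive completed successfully.
  goto End
  :Error
  echo Archive failed - see C:\temp\tsm.log
  goto End
  :Warning
  echo Archive completed with warnings.
  goto End
  :Skipped
  echo Some files were skipped.
  :End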


Volume history on my WINDOWS 2000 TSM server

2004-09-22 Thread Luc Beaudoin
Hi all

A little question about the command BACKUP VOLHIST:

- Is there a best practice saying how often I should run it?
- My file VOLHIST.OUT is updated every day, but I can't figure out why, or
which other command or task is running it.

thanks

Luc


Re: TSM TDP Domino.

2004-09-22 Thread William Rosette
Hello TSM dr's,

I have a question that I cannot ask support about.  Here goes.  I have TSM
TDP for Domino 1.1 (the higher-ups will not splurge for the current license) with a
TSM 5.1.6 client, and when I run dsminc.cmd it seems to get stuck
querying databases.  I did see one 27 GB McAfee Quarantine.nsf.  I put an
exclude for McAfee\...\* in the dsm.opt file.  I just recently uninstalled everything
and reinstalled.  I don't know of anything else to do.  Any help is
always a bonus.  Thanks in advance.

PS.  This is a WinNT 4.0 SP6 box.

Thank You,
Bill Rosette
Data Center/IS/Papa Johns International
WWJD


DSMC ARCHIVE doesn't set %errorlevel%?

2004-09-22 Thread Thomas Rupp, Vorarlberger Illwerke AG
Hi TSM-ers,

please forget my last posting ...
I was allowed to install the latest TSM client on this very special
machine, and now everything runs just fine.

BTW: Does anyone know of a freeware program which lets me delete all
files older than x days, which can cope with long filenames, and which can
be called from the Windows command line?

Thanks a lot and greetings from Austria
Thomas Rupp


Netware 6.5 upgrade with sans

2004-09-22 Thread Mark Hayden
Hi All, we have upgraded our NetWare servers to 6.5. I will try to make
this as clear as I can: we have a Novell cluster with 4 servers, with
SAN storage attached to them. TSM does not treat them as a cluster. I have
installed TSM on each Novell server and treat them as any normal TSM
client. In the inclexcl.dsm file, I just exclude volumes that reside on
the other 3 servers. So if these volumes get pushed over to any other
server within the cluster, they will NOT get backed up. This is the way
we want it, due to Novell Cluster Services and TSM not getting along. Up
until this last upgrade to NetWare 6.5 this worked fine; we ran incremental
backups daily and TSM would see all the SAN volumes assigned to each
server within the cluster. What has happened since the upgrade is a loss of
backup SPEED, and as you will see below we now have to tell the
TSM schedule to include the SAN volumes explicitly. Has anyone run into this
problem? We are running 5.2.2 on the client with TSM server code of
5.2.2.1. Adding the volumes to the schedule is one thing, but take a
look at how long our backups are now taking. This is also very slow
when doing restores. We have just added NetWare SP2 as well, but it did
not help.

09/21/2004 19:59:38 --- SCHEDULEREC QUERY BEGIN
09/21/2004 19:59:38 --- SCHEDULEREC QUERY END
09/21/2004 19:59:38 Next operation scheduled:
09/21/2004 19:59:38

09/21/2004 19:59:38 Schedule Name: INC_S_IEPA_03
09/21/2004 19:59:38 Action:Incremental
09/21/2004 19:59:38 Objects:   v004:\ v005:\ v009:\ v011:\
v012:\ v013:\ v016:\ v018:\
09/21/2004 19:59:38 Options:
09/21/2004 19:59:38 Server Window Start:   20:00:00 on 09/21/2004
09/21/2004 19:59:38

09/21/2004 19:59:38
Executing scheduled command now.
09/21/2004 19:59:38 --- SCHEDULEREC OBJECT BEGIN INC_S_IEPA_03
09/21/2004 20:00:00
09/21/2004 20:51:54 ANS1228E Sending of object
'V005:/Users/EPA8806/Archive/Software/Favorites/Software/dtSearch --
Text Retrieval - Full Text Search Engine [0226].url' failed
09/21/2004 20:51:55 ANS4005E Error processing
'V005:/Users/EPA8806/Archive/Software/Favorites/Software/dtSearch --
Text Retrieval - Full Text Search Engine [0226].url': file not found
09/21/2004 21:31:02 ANS1802E Incremental backup of 'V005:/*' finished
with 1 failure

09/22/2004 02:27:38 --- SCHEDULEREC STATUS BEGIN
09/22/2004 02:27:39 Total number of objects inspected: 1,544,740
09/22/2004 02:27:39 Total number of objects backed up:   10,643
09/22/2004 02:27:39 Total number of objects updated:  0
09/22/2004 02:27:39 Total number of objects rebound:  0
09/22/2004 02:27:39 Total number of objects deleted:  0
09/22/2004 02:27:39 Total number of objects expired:105
09/22/2004 02:27:40 Total number of objects failed:   1
09/22/2004 02:27:40 Total number of bytes transferred: 2.82 GB
09/22/2004 02:27:40 Data transfer time:  221.26 sec
09/22/2004 02:27:40 Network data transfer rate:13,386.33
KB/sec
09/22/2004 02:27:40 Aggregate data transfer rate:127.22 KB/sec
09/22/2004 02:27:40 Objects compressed by:0%
09/22/2004 02:27:41 Elapsed processing time:   06:28:00
09/22/2004 02:27:41 --- SCHEDULEREC STATUS END
09/22/2004 02:27:41 --- SCHEDULEREC OBJECT END INC_S_IEPA_03 09/21/2004
20:00:00
09/22/2004 02:27:41 Scheduled event 'INC_S_IEPA_03' completed
successfully.
09/22/2004 02:27:41 Sending results for scheduled event
'INC_S_IEPA_03'.
09/22/2004 02:27:41 Results sent to server for scheduled event
'INC_S_IEPA_03'.


Re: Volume history on my WINDOWS 2000 TSM server

2004-09-22 Thread Richard Sims
With VOLUMEHistory in your server options file, there is no need to
execute
that command - the history backup file is updated automatically.
(See notes in http://people.bu.edu/rbs/ADSM.QuickFacts)
   Richard Sims.
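
A minimal illustration of the two mechanisms (the file names are only
examples): the server options file carries a line such as

  VOLUMEHISTORY d:\tsmdata\volhist.out

which the server then updates automatically, while an extra copy can be
written on demand from an administrative session with

  backup volhistory filenames=d:\tsmdata\volhist.manual
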
On Sep 22, 2004, at 11:13 AM, Luc Beaudoin wrote:
Hi all
Little question about the command BACKUP VOLHIST
- Is there a best practice saying how often shoul I run it
- My file VOLHIST.OUT is updating every day but I dont figure why ...
or
witch other command or task is running it ..
thanks
Luc


Re: Upgrade time...

2004-09-22 Thread Bill Boyer
First, going up to TSM 5.2 will require you to upgrade the Ultrium
drivers, assuming you're using IBM LTO drives. In previous releases the
Ultrium drives were controlled via the native driver, but the library was
controlled via the ADSMSCSI driver. This changes: the library is now
controlled via the native device driver. You need to make sure that you
update the driver for the medium changer or TSM won't initialize the
library.

As with all Windows TSM versions, you must first uninstall before installing
the new version. There is a problem with the uninstall for the TSM Server
Licenses. The license file DLL is not deleted. This prevents the install
from working. Just delete or rename the ADSMLICN.DLL in the C:\Program
Files\Tivoli\TSM\server directory after the uninstall.

During the install it should(!) recognize your instance and perform the
UPGRADE DB. Be patient! Depending on your platform and DB size it could take
a while.

After the upgrade (and you should probably go to the latest 5.2.3.x release)
you will have to re-register your licenses. Upgrades do not preserve this.
You can get this from your DRM recovery plan file or Autovault whichever you
use. Or just QUERY LICENSE and write it all down.
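
A rough sketch of that re-registration step from an administrative session
(the exact .lic file names depend on which 5.2 licenses you own, so treat
these as examples):

  register license file=tsmbasic.lic
  register license file=tsmee.lic
  query license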

As always before you start this...BACKUP YOUR DATABASE!

Bill Boyer
Some days you are the bug, some days you are the windshield. - ??


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Joe Crnjanski
Sent: Wednesday, September 22, 2004 10:23 AM
To: [EMAIL PROTECTED]
Subject: Re: Upgrade time...


Check the article TSM server upgrade best practices:

http://www-1.ibm.com/support/docview.wss?rs=663&context=SSGSG7&q1=migrate+5.2&uid=swg21177863&loc=en_US&cs=utf-8&lang=en

and check Quick start guide for instruction to upgrade from different
versions of TSM

Joe Crnjanski
Infinity Network Solutions Inc.
Phone: 416-235-0931 x26
Fax: 416-235-0265
Web:  www.infinitynetwork.com



-Original Message-
From: Coats, Jack [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 21, 2004 5:55 PM
To: [EMAIL PROTECTED]
Subject: Upgrade time...


I could use a little help.  I am at TSM Win 4.2.3.1 on my server and need to
upgrade to 5.2 or so.  Probably to 5.2.2.3.

I have a 3583 with LTO1's, all freshly upgraded to current microcode a week
or two ago.



Any special gotcha's I should look for?



From what I have skimmed (I would say read, but I know I miss stuff),  it
looks like:



Do an un-install of the TSM device driver.

Do an un-install of the TSM server.

Install the new versions of both.



This seems to simple.  Are there any conversion utilities I should use?
Special commands to clean up the database? Etc? etc?



Just not wanting to miss anything.


Re: D2D on AIX

2004-09-22 Thread Richard Rhodes
Tim, we recently ran a bunch of tests on client-side compression.
In every test the backup ran 2 to 3 times longer.  In some cases
this wouldn't be a big deal when you look at the backup alone, it being
incremental and all.  However, we also believed that it would
cause the restore to run 2 to 3 times as long to uncompress the data.
As a result of these tests and thoughts we decided not to
implement client-side compression.

Decompression should be much faster and less CPU-intensive than compression:
in compression you are searching for redundant tokens, while in decompression
you are basically performing token substitution.

Rick




dsmerror.log configuration

2004-09-22 Thread Greg
Hi,
I am having a little trouble getting the client error log set up
correctly for non-administrators on Mac OS X servers. I have defined
ERRORLOGName in the /Library/Preferences/Tivoli Storage Manager/TSM System
Preferences file. As it should, this definition overrides any
environmental export of DSM_LOG by the non-administrator user. However,
when dsmerror.log is written by the TSM scheduler, the mode on the
file is 644 and it is owned by root:wheel. So when the non-administrator user
invokes dsmc they get the infamous 'ANS0110E Unable to open error log
file '/path/to/dsmerror.log' for output' problem.
If I leave the path undefined in dsm.sys, the export of DSM_LOG by
each user works correctly. But I want all system-invoked TSM logs
written to the host's central log path, and any user TSM logs written to
the path of their choice. I just can't seem to find a way to do it.
Can someone shed some light on this for me?
Thanks,
Greg


Re: Netware 6.5 upgrade with sans

2004-09-22 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of Mark Hayden
Up until this last upgrade of NetWare 
6.5 this worked fine, we ran INC backups daily and TSM would 
see all the Sans volumes assigned to each Server within the 
cluster. What has happened since the upgrade is SPEED (or lack 
of) of backups , and as you will see below we have to tell the 
Tsm schedule to include the sans volumes. has anyone ran into 
this problem? We are running 5.2.2 on the client with TSM 
server code of 5.2.2.1. Adding the volumes to the schedule is 
one thing, but take a look at how long our backups are now 
taking.This is also very slow when doing restores.We 
have just added NetWare SP2 as well, but did not help.

Executing scheduled command now.
09/21/2004 19:59:38 --- SCHEDULEREC OBJECT BEGIN INC_S_IEPA_03
09/22/2004 02:27:38 --- SCHEDULEREC STATUS BEGIN
09/22/2004 02:27:39 Total number of objects inspected: 1,544,740
09/22/2004 02:27:39 Total number of objects backed up:   10,643
09/22/2004 02:27:39 Total number of objects updated:  0
09/22/2004 02:27:39 Total number of objects rebound:  0
09/22/2004 02:27:39 Total number of objects deleted:  0
09/22/2004 02:27:39 Total number of objects expired:105
09/22/2004 02:27:40 Total number of objects failed:   1
09/22/2004 02:27:40 Total number of bytes transferred: 2.82 GB
09/22/2004 02:27:40 Data transfer time:  221.26 sec
09/22/2004 02:27:40 Network data transfer rate:13,386.33
KB/sec
09/22/2004 02:27:40 Aggregate data transfer rate:127.22 KB/sec
09/22/2004 02:27:40 Objects compressed by:0%
09/22/2004 02:27:41 Elapsed processing time:   06:28:00

Look carefully. 
1. While elapsed time from beginning to end of the backup is 6+ hours,
the actual amount of time transferring data is 221 seconds.
2. Look at the number of objects getting inspected--1.5+ million.
3. Look at transfer speed rate at the time the backup
completed--13MB/sec

What appears to be happening is that the time it takes to scan the
(large) directory structure is eating up most of the 6.5 hours. There is
not much to be done for this as is--there is no TSM journaling service
for NetWare as there is for Windows.

You may want to consider the idea of breaking up the backup by using
multiple TSM node definitions, so that the backup (and directory scan)
are multithreaded. Multiple discussions on how to do this exist in the
adsm-l mailing list archive at http://search.adsm.org.
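
As a rough sketch of that multi-node approach (node names and the volume split
are hypothetical; each option file would drive its own scheduler instance):

  * dsm_a.opt - first node backs up half of the SAN volumes
  NODENAME  IEPA03_A
  DOMAIN    V004: V005: V009: V011:

  * dsm_b.opt - second node backs up the rest
  NODENAME  IEPA03_B
  DOMAIN    V012: V013: V016: V018: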

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627


Re: TSM TDP Domino.

2004-09-22 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of William Rosette
I have a question that I cannot ask support with.  Here goes.  
I have a TSM TDP Domino 1.1 (uppers will not splurge for 
current License) with TSM 5.1.6 client, and when I am running 
the dsminc.cmd, it seems to get stuck querying Databases.  I 
did see 1 27 GB McAfee Quarentine.nsf.  I put an exclude 
McAfee\...\*  in the dsm.opt file.  I just recently 
uninstalled all and reinstalled.  I don't know of anything 
else to do.  Any helps are always a bonus.  Thanks in advance.

What level of Notes/Domino are you running?

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627  


Re: D2D on AIX

2004-09-22 Thread TSM_User
These numbers are from STK 9840B drives.  I am not talking about a backup and then a
restore; I am talking about a daily backup for 8 months and then a restore.  File
fragmentation dramatically affects your throughput over time.  Sure, we can spin 9840
drives to 100 GB/hr for large files.  I have even backed up a file server at 38 GB/hr
(through 1 Gb NICs) with millions of small files.  But over time the speed is
affected.  It's not being unfair; it is what it is.

For that restore test, was it right after the backup was done?  What is the change
rate of the data?  If it was after 8 months and you still got 40 GB/hr throughput, then
good deal.  I haven't seen that.

Daniel Sparrman [EMAIL PROTECTED] wrote:
Hi

Comparing these types of numbers are abit unfair. We have customers
running 9840 and LTO-2. They have alot higher throughput than 8-12GB/hour
over a GB nic.

For example, we have a customer running Netware. The TSM server is an AIX
server(pSeries 615) connected to a 3584-L32 library with 3 LTO-2 drives.
The Netware server has about 200GB of data. The AIX server has three
100Mbs nic, bundled togheter in an Etherchannel interface(theoretic speed
is 300Mbs or 30MB/s). The netware server is connected through 100Mbs
ethernet(single adapter). The server have a restore time of about 5½ hours
which means we have an hourly throughput of almost 40GB/hour. Average
networkspeed is 11MB/s. The Netware server utilizes multi-session restore,
which means it can mount multiple volumes at once for restores.

We have another customer running a pSeries 650 clustrer. The cluster is
attached to a 3584-L32 library with 9 LTO-2 drives. The pSeries server is
equipped with an Etherchannel interface which consists of 2 GB nics.
During testing of a restore scenario on one of their Lotus Domino
servers(300GB of data), they reached about 50MB/s restoring directly from
tape. In this case, we didnt utilize multi-session restore, which meant
that the single LTO-2 drive could deliver 180GB/hour.

Today, the new tape technologies can easily outrun disks. To match LTO-2
drives against disks, you'll ned large, fiber-attached disk subsystems,
with no other load than the TSM server load. Internal SCSI-disks can never
outrun fiber-attached LTO-2 drives. The LTO-2 drive has a native speed of
35MB/s, compressed around 50-70MB/s depending on the type of data. They
also have dynamic speed, which means you dont get the back-hitch as long
as you keep writing data with at least 15MB/s. We've seen theese drives
push up to 90MB/s on database backups and restores. During the testing
phase of the implementation, we had up to 380MB/s from the disks(two
mirrored FAStT900 connected through 4 FC HBA:s with 34 15K 36.4GB fiber
disks per FAStT system) and almost 650MB/s from the drives(9 LTO-2 drives
connected through 4 FC HBA:s).

The speed of the drives is all about design. If you attach a large number
of drives to a single FC HBA, you'll easily get back-hitch. With the LTO-2
drives, a fair number of drives/adapter is around 3-4 / adapter.

Designing disk to match the tape drives is all about cost. S-ATA drives
can never outrun LTO-2 drives, at least not when it comes to large files
or database backups and restores. Designing FC disks to match the drives
will mean the cost is 10 times the cost of the tape drives.

This is all my opinion, and I'm sure that there are others out there that
dont agree.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 TÄBY
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51



TSM_User
Sent by: ADSM: Dist Stor Manager
2004-09-22 04:27
Please respond to
ADSM: Dist Stor Manager


To
[EMAIL PROTECTED]
cc

Subject
Re: D2D on AIX






Good questions. Our real world example:We went from around 8 - 12 GB/hr
restore off of tape to over 40 GB/hr from the file device classes. Our
test was a file server with a little over 300 GB of data. The File server
and the TSM server both had 1 GB NIC's. Resource utilization was set to
10 in both cases. The data was fragemented on tape for a little over a
year for the first test. The data was fragmented over disk for nearly 8
months.

Steve Harris wrote: How does TSM access
the data on file volumes? Does it keep an offset of the start of every
file or aggregate?

If it does, then yes we could skip to the start of each file or aggregate.
If it does not, then we need to read through the volume to find the file
we are going to restore. Where we have a large number of concurrent
restores happening, this could cause performance issues on the array.

Now TSM has some smarts on later technology tape drives that have block
addressability and on-cartridge memory and can find a spot on the tape
quickly, but does this translate to file volumes?

Regards

Steve.

 [EMAIL PROTECTED] 22/09/2004 4:49:55 
True. Seek time is tiny compared to tape mounts. I am just concerned that
the TSM db has to keep track of thousands of volume. How much 

Re: D2D on AIX

2004-09-22 Thread TSM_User
This is true; I was just wondering if others were seeing the same thing.  We expected
it to take longer, but not two to three times as long.  In the end, after compression,
there would be less data to transfer, so we thought there would be some gain there.



Richard Rhodes [EMAIL PROTECTED] wrote:
Tim, we recently ran a bunch of tests on client side compression.
In every test the backup ran for 2 to 3 times longer. In some cases
this wouldn't be a big deal when you look at the backup alone being
incremental and all. However, we also believed that it would also
cause the restore to run 2 to 3 times as long to uncompress the data.
As a result of these tests and thoughts we decided not to
implement client side compression.

Uncompress should be much faster and less cpu intensive than compression.
In compression you are searching for redundant tokens. In uncompression
you are basically performing token substitution.

Rick





Re: DSMC ARCHIVE doesn't set %errorlevel%?

2004-09-22 Thread John Monahan
ADSM: Dist Stor Manager [EMAIL PROTECTED] wrote on 09/22/2004
10:42:48 AM:
 BTW: Does anyone know of a freeware program which lets me delete all
 files
 older x days and which can cope with long filenames and which can be
 called from
 the windows command line?

http://www.michna.com/software.htm#DelOld

It's kind of old, but very effective and simple to use from the command
line.
delold.exe filepattern  #days


__
John Monahan
Senior Consultant Enterprise Solutions
Computech Resources, Inc.
Office: 952-833-0930 ext 109
Cell: 952-221-6938
http://www.computechresources.com


Re: dsmerror.log configuration

2004-09-22 Thread Richard Sims
Greg - It's unusual and unhealthy, from both logical and physical standpoints,
to mingle the error logging from all sessions - which may involve
simultaneous sessions.  In such an error log, you want a clear-cut sequence
of operations and consequences reflected.
What I would recommend is the creation of a wrapper script for dsmc, named
the same or differently, which will put all error logs into a single,
all-writable directory, with an error log path spec that appends the
username, for uniqueness and singularity.  The wrapper script would invoke
dsmc with the -ERRORLOGname= option spec.
   Richard Sims   http://people.bu.edu/rbs
On Sep 22, 2004, at 12:38 PM, Greg wrote:
Hi,
I am having a little trouble getting the client error log set up
correctly for non-administrators on MacOS X servers. I have defined
ERRORLOGName in  /Library/Preferences/Tivoli Storage Manager/TSM System
Preferences file. As it should this definition overrides any
environmental export of DSM_LOG by the non-administrator user. However
when the dsmerror.log is written by the TSM scheduler the mode on the
file is 644 and owned by root:wheel. So when the non-administrator user
invokes DSMC they get the infamous 'ANS0110E Unable to open error log
file '/path/to/dsmerror.log' for output problem.
If leave the path undefined in the dsm.sys the export of DSM_LOG for
each user works correctly. But I want all system invoked TSM logs
written to the hosts central log path, and any user TSM logs written to
the path of there choice. I just can't seem to find a way to do it.
Can someone shed some light on this for me?
Thanks,
Greg


Re: TSM TDP Domino.

2004-09-22 Thread William Rosette
Lotus Notes Domino 5.0.12

Thank You,
Bill Rosette
Data Center/IS/Papa Johns International
WWJD



Stapleton, Mark [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
09/22/2004 01:02 PM
Please respond to ADSM: Dist Stor Manager

To: [EMAIL PROTECTED]
cc:
Subject: Re: TSM TDP Domino.






From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
Behalf Of William Rosette
I have a question that I cannot ask support with.  Here goes.
I have a TSM TDP Domino 1.1 (uppers will not splurge for
current License) with TSM 5.1.6 client, and when I am running
the dsminc.cmd, it seems to get stuck querying Databases.  I
did see 1 27 GB McAfee Quarentine.nsf.  I put an exclude
McAfee\...\*  in the dsm.opt file.  I just recently
uninstalled all and reinstalled.  I don't know of anything
else to do.  Any helps are always a bonus.  Thanks in advance.

What level of Notes/Domino are you running?

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
 Office 262.521.5627


Re: TSM TDP Domino.

2004-09-22 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of William Rosette
Lotus Notes Domino 5.0.12

IIRC, TDP for Notes 1.1 was built to try to cope with older versions of
the Notes backup API, which was poorly written. One of the symptoms that
this TDP would display while backing up versions 3 and 4 of Notes was to
hang while running backups.

I suspect your only real alternative is to upgrade to a later version of
TSM for Mail (Domino). If management says there is no money for such an
upgrade, ask them to compare the cost of a Domino license vs. the worth
of the data it would back up. If they still refuse, I would insist on
something in writing on letterhead stating that you're not responsible
for proper restores. To do the right job, you need the right tools.

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627  


Hsm for Novell

2004-09-22 Thread Timothy Hughes
How can one tell if the HSM component is installed on
a Novell 5.2.2 client?

TSM version 5.2.3.1

Thanks in advance!


Re: Hsm for Novell

2004-09-22 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of Timothy Hughes
How can one tell if the HSM component is installed on a Novell 
5.2.2 client.

Sorry. There is no TSM-based HSM client for NetWare; it only exists for
AIX, Solaris, and (I think) HP-UX.

There have been various discussions on this list about an HSM client for
NetWare. Search the archives at http://search.adsm.org.

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627  


Re: Hsm for Novell

2004-09-22 Thread Timothy Hughes
Thanks Mark!

Stapleton, Mark wrote:

 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
 Behalf Of Timothy Hughes
 How can one tell if the HSM component is installed on a Novell
 5.2.2 client.

 Sorry. There is no TSM-based HSM client for NetWare; it only exists for
 AIX, Solaris, and (I think) HP-UX.

 There have been various discussion in this list as to an HSM client for
 NetWare. Search the archives at http://search.adsm.org.

 --
 Mark Stapleton ([EMAIL PROTECTED])
 Berbee Information Networks
 Office 262.521.5627


Re: Client Compression (was D2D on AIX)

2004-09-22 Thread Rushforth, Tim
I've done some tests in the past (but I have to search for my results ...).

Note that these are with 100 Mbs Ethernet.

Recent ones:

Exchange TDP
Exchange DB compressed 50.3 GB - 1:19:30 elapsed time, 10.81 MB/sec (Backup)
Exchange DB uncompressed 63.1 GB - 1:36:30 elapsed time, 11.15 MB/sec
(Backup)

We back up Oracle directly with the B/A client (no TDP) and always get huge
compression rates (88% compression).

This was a multi-session backup test a while ago:

(Again on 100 Mbs Ethernet), 9.9GB of source data (compressed to 1.19GB)
elapsed time of 152 seconds - this translates to 65 MB/sec which we could
not achieve with 100 Mbs Ethernet without the compression.  (We tend to max
out on CPU on these types of backups).

With client compression on backups, you have to make sure no files are
growing during compression and being resent - this will add time to your
backups.  We exclude files like .zip etc. from compression and also scan the
client logs for any files that grow during compression (you can also use
compressalways to avoid the resends).
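
For anyone wanting to try the same thing, a minimal client option sketch of
what is described above (the exclude pattern is only an example):

  * dsm.opt excerpt
  COMPRESSION          YES
  * resend uncompressed if a file grows; set YES instead to avoid the resend
  COMPRESSALWAYS       NO
  * skip compression for data that is already compressed
  EXCLUDE.COMPRESSION  "*:\...\*.zip"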

I'm going to search for other tests I've done (or do some again) and I'll
post those results.

Tim Rushforth
City of Winnipeg

-Original Message-
From: TSM_User [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 22, 2004 12:19 PM
To: [EMAIL PROTECTED]
Subject: Re: D2D on AIX

This is true, I was just wondering if others were seeing the same thing.  We
expected it to take longer but not two to three times as long.  In the end
after compression there would be less data to transfer so we thought there
would be some gain there.



Richard Rhodes [EMAIL PROTECTED] wrote:
Tim, we recently ran a bunch of tests on client side compression.
In every test the backup ran for 2 to 3 times longer. In some cases
this wouldn't be a big deal when you look at the backup alone being
incremental and all. However, we also believed that it would also
cause the restore to run 2 to 3 times as long to uncompress the data.
As a result of these tests and thoughts we decided not to
implement client side compression.

Decompression should be much faster and less CPU-intensive than compression:
in compression you are searching for redundant tokens, while in decompression
you are basically performing token substitution.

Rick


-
The information contained in this message is intended only for the personal
and confidential use of the recipient(s) named above. If the reader of this
message is not the intended recipient or an agent responsible for
delivering it to the intended recipient, you are hereby notified that you
have received this document in error and that any review, dissemination,
distribution, or copying of this message is strictly prohibited. If you
have received this communication in error, please notify us immediately,
and delete the original message.



Off-Topic: Question regarding IBM vs. EMC storage

2004-09-22 Thread Thach, Kevin G
My organization currently has three IBM ESS's in place with about 37TB
of disk total.  For storage and midrange servers, we are almost
exclusively an IBM shop.  We are evaluating purchasing a large amount of
EMC storage, and from everything I can tell and from the customer references
we have talked to, they really seem to have their act together.

I know there are numerous people on this forum using EMC storage, and
I'd appreciate it if a few of you wouldn't mind giving me your opinion
of their products and customer service. Hearing from anyone with experience with
both vendors would be especially helpful.

Thank you!


Re: Off-Topic: Question regarding IBM vs. EMC storage

2004-09-22 Thread John Benik
It depends on what type of EMC storage you are going with. We went with some EMC
storage in our open-systems environment, but I have not heard much good about it.
We have gone with IBM on the mainframe and really like it. I am primarily on the
mainframe, so I don't know exactly what was disliked about the EMC gear other than
that the support was bad when we had problems.



John Benik




Thach, Kevin G [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
09/22/2004 02:29 PM
Please respond to
ADSM: Dist Stor Manager [EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc

Subject
Off-Topic:  Question regarding IBM vs. EMC storage






My organization currently has three IBM ESS's in place with about 37TB
of disk total.  For storage and midrange servers, we are almost
exclusively an IBM shop.  We are evaluating purchasing a large amount of
EMC storage, and from everything I can tell, and the customer references
we have talked to, they really seem to have their act together.

I know there are numerous people on this forum using EMC storage, and
I'd appreciate it if a few of you wouldn't mind giving me your opinion
of their products and customer service.  Anyone having experience with
both vendors would really be helpful.

Thank you!



The information contained in this communication may be confidential,
and is intended only for the use of the recipient(s) named above.
If the reader of this message is not the intended recipient, you
are hereby notified that any dissemination, distribution, or
copying of this communication, or any of its contents, is strictly
prohibited. If you have received this communication in error,
please return it to the sender immediately and delete the original
message and any copy of it from your computer system. If you have
any questions concerning this message, please contact the sender.

Unencrypted, unauthenticated Internet e-mail is inherently insecure.
Internet messages may be corrupted or incomplete, or may incorrectly
identify the sender.


storage media inaccessible?

2004-09-22 Thread Nancy Reeves
I have 3 tape volumes that I cannot access. Anything that accesses the
tape (including Space Reclamation & Move Data) gets errors (different for
each command) that include "terminated for volume 99 - storage
media inaccessible".

I can't figure out why! It doesn't seem to matter whether the volumes are
checked into the library or not. (If not, I should get a mount request.)
Here is the result from a q vol 9 f=d for one of them.

Volume Name: 075D8D
 Storage Pool Name: WSUTAPEBACKUP
 Device Class Name: MAGSTARTAPE
   Estimated Capacity (MB): 10,240.0
  Pct Util: 6.3
 Volume Status: Filling
Access: Read/Write
Pct. Reclaimable Space: 35.5
   Scratch Volume?: Yes
   In Error State?: No
  Number of Writable Sides: 1
   Number of Times Mounted: 1
 Write Pass Number: 1
 Approx. Date Last Written: 09/17/04 02:43:24
Approx. Date Last Read: 09/17/04 01:00:44
   Date Became Pending:
Number of Write Errors: 0
 Number of Read Errors: 0
   Volume Location:
Last Update by (administrator): REEVES
 Last Update Date/Time: 09/22/04 15:00:41


Thanks for any ideas. (TSM server is 4.1.0.0 running on AIX with a 3575
tape library.)

Nancy Reeves
Technical Support, Wichita State University
[EMAIL PROTECTED]  316-978-3860


Re: storage media inaccessible?

2004-09-22 Thread Doug Thorneycroft
Do the tapes show up in your library inventory
and are they in the same slots that the q library output shows?
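
For example, from an admin command line (the library name below is just a
placeholder; 075D8D is one of your volumes):

   query libvolume 3575LIB
       (what TSM thinks is checked in, with each volume's home element/slot)
   query volume 075D8D format=detail
       (what the storage pool knows about the volume - you already have this)
   audit library 3575LIB checklabel=barcode
       (resynchronizes TSM's library inventory with what is physically in the slots)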


-Original Message-
From: Nancy Reeves [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 22, 2004 1:45 PM
To: [EMAIL PROTECTED]
Subject: storage media inaccessible?


I have 3 tape volumes that I cannot access. Anything that accesses the
tape (including Space Reclamation & Move Data) gets errors (different for
each command) that include "terminated for volume 99 - storage
media inaccessible".

I can't figure out why! It doesn't seem to matter whether the volumes are
checked into the library or not. (If not, I should get a mount request.)
Here is the result from a q vol 9 f=d for one of them.

Volume Name: 075D8D
 Storage Pool Name: WSUTAPEBACKUP
 Device Class Name: MAGSTARTAPE
   Estimated Capacity (MB): 10,240.0
  Pct Util: 6.3
 Volume Status: Filling
Access: Read/Write
Pct. Reclaimable Space: 35.5
   Scratch Volume?: Yes
   In Error State?: No
  Number of Writable Sides: 1
   Number of Times Mounted: 1
 Write Pass Number: 1
 Approx. Date Last Written: 09/17/04 02:43:24
Approx. Date Last Read: 09/17/04 01:00:44
   Date Became Pending:
Number of Write Errors: 0
 Number of Read Errors: 0
   Volume Location:
Last Update by (administrator): REEVES
 Last Update Date/Time: 09/22/04 15:00:41


Thanks for any ideas. (TSM server is 4.1.0.0 running on AIX with a 3575
tape library.)

Nancy Reeves
Technical Support, Wichita State University
[EMAIL PROTECTED]  316-978-3860


Re: storage media inaccessible?

2004-09-22 Thread Nancy Reeves
Two of the 3 are not listed in the library inventory. The third is listed
in Home Element 61. I'll have to dig up the manual to see which slot that is.

I am running an AUDIT LIBR CHECKLABEL=YES right now. I had previously
opened the library so it would check the elements, then ran an AUDIT LIBR
CHECKLABEL=BARCODE, which I was hoping would straighten things out, but it
didn't.
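
If the audit still doesn't bring the missing two back, my next step will be a
check-in along these lines (library name is a placeholder; STATUS=PRIVATE because
the volumes already hold storage pool data):

   checkin libvolume 3575LIB search=yes status=private checklabel=barcode
       (or checklabel=yes if the barcode labels can't be read)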

Nancy Reeves
Technical Support, Wichita State University
[EMAIL PROTECTED]  316-978-3860



Doug Thorneycroft [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
09/22/2004 03:50 PM
Please respond to
ADSM: Dist Stor Manager [EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc

Subject
Re: storage media inaccessible?






Do the tapes show up in your library inventory
and are they in the same slots that the q library output shows?


-Original Message-
From: Nancy Reeves [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 22, 2004 1:45 PM
To: [EMAIL PROTECTED]
Subject: storage media inaccessible?


I have 3 tape volumes that I cannot access. Anything that accesses the
tape (including Space Reclamation & Move Data) gets errors (different for
each command) that include "terminated for volume 99 - storage
media inaccessible".

I can't figure out why! It doesn't seem to matter whether the volumes are
checked into the library or not. (If not, I should get a mount request.)
Here is the result from a q vol 9 f=d for one of them.

Volume Name: 075D8D
 Storage Pool Name: WSUTAPEBACKUP
 Device Class Name: MAGSTARTAPE
   Estimated Capacity (MB): 10,240.0
  Pct Util: 6.3
 Volume Status: Filling
Access: Read/Write
Pct. Reclaimable Space: 35.5
   Scratch Volume?: Yes
   In Error State?: No
  Number of Writable Sides: 1
   Number of Times Mounted: 1
 Write Pass Number: 1
 Approx. Date Last Written: 09/17/04 02:43:24
Approx. Date Last Read: 09/17/04 01:00:44
   Date Became Pending:
Number of Write Errors: 0
 Number of Read Errors: 0
   Volume Location:
Last Update by (administrator): REEVES
 Last Update Date/Time: 09/22/04 15:00:41


Thanks for any ideas. (TSM server is 4.1.0.0 running on AIX with a 3575
tape library.)

Nancy Reeves
Technical Support, Wichita State University
[EMAIL PROTECTED]  316-978-3860


Re: client schedule failed to start after a win client cluster failback

2004-09-22 Thread Yuhico, Alexandra
We have a TSM server on AIX backing up a Windows clustered server, C1 & C2.

C2 failed and all the services went over to C1. When we fixed C2 we wanted
all the resources and services to go back to the way they were.

When C2 failed, the shared drive went over to C1, the client scheduler was
running, and the drive could be backed up. But when C2 was rebooted and the
shared drive was moved back onto that server, the TSM scheduler service failed
to start. We restarted C2 and it still would not start.

The shared drive was moved back to C1, and the TSM scheduler services started
successfully on each server, giving us the ability to back up the entire
environment again while leaving the cluster balanced.

The Event logs did not indicate any specific error relating to the TSM
service not starting. Investigation of the scheduler configuration showed
that the security settings and configuration were correct and that the TSM
client could contact the TSM server successfully.

Do you guys have any idea what's wrong? Why can't TSM back up the shared
drive when we fail it back to the original setup?
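
For reference, the scheduler service for the shared drive was set up on both nodes
roughly like the sketch below (node, cluster and path names are placeholders, and
the dsmcutil switches are from memory - dsmcutil /help shows the exact syntax):

   * dsm.opt kept on the shared drive, one per cluster group
   NODENAME        CLUSTER_GRP1
   CLUSTERNODE     YES
   PASSWORDACCESS  GENERATE
   DOMAIN          s:

   dsmcutil install /name:"TSM Scheduler CLUSTER_GRP1" /node:CLUSTER_GRP1
       /optfile:s:\tsm\dsm.opt /clusternode:yes /clustername:OURCLUSTER
       /password:xxxxx /autostart:no
   (a single command, broken across lines here for readability)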

Sandra

This e-mail is privileged and may contain confidential information intended only for 
the person(s) named above. If you receive this e-mail
in error, please notify the addressee immediately by telephone or return e-mail. 
Although the sender endeavours to maintain a computer
virus-free network, the sender does not warrant that this transmission is virus-free 
and will not be liable for any damages resulting from
any virus transmitted.