TDP for Oracle monitoring

2003-03-10 Thread Wholey, Joseph (TGA\\MLOL)
What is the easiest way to determine the number of bytes transferred for a TDP for 
Oracle backup (full or incremental)?  The backups are client driven with multiple 
sessions.  The activity log shows
bytes sent (ANE499I) for each session started, but I'm sure there's an easier way to 
compile this info than manually adding the bytes sent (ANE499I) for the duration of 
the backup.  thx.
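One way to avoid adding the per-session figures by hand is to pipe the activity-log query through awk. A hedged sketch: the sample lines below are stand-ins for real activity-log output (the message number, fields, and units on your server may differ), and against a live server you would feed the function from something like `dsmadmc -id=admin -password=xxx "q actlog begindate=today search='bytes transferred'"`.

```shell
# sum_mb: add up the "NNN.NN MB" figures from activity-log lines.
# Assumes every matching line reports its byte count just before
# the literal token "MB"; adjust the pattern for your output.
sum_mb() {
  awk '
    /bytes transferred/ {
      for (i = 2; i <= NF; i++)
        if ($i == "MB") { gsub(",", "", $(i-1)); total += $(i-1) }
    }
    END { printf "%.2f MB\n", total }
  '
}

# Demo on sample lines (format is illustrative, not verbatim):
sum_mb <<'EOF'
ANE4961I (Session: 101, Node: ORA1) Total number of bytes transferred: 1,024.50 MB
ANE4961I (Session: 102, Node: ORA1) Total number of bytes transferred: 512.25 MB
EOF
# prints: 1536.75 MB
```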


client code download

2003-03-05 Thread Wholey, Joseph (TGA\\MLOL)
Can anyone tell me where on the IBM web site I can download the TSM backup-archive 
client, without having to log in as a registered user?


Re: remote agent

2003-02-20 Thread Wholey, Joseph (TGA\\MLOL)
Andy,

Thanks.  What would cause a "connection refused"?

I'm defined on the server with the proper passwords, etc.

Here's what I supply: http://xxx.xx.xxx.xxx:1581/

Of course, the x's are a real IP address.

Any ideas?

Regards, Joe

-Original Message-
From: Andrew Raibeck [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 19, 2003 12:35 PM
To: [EMAIL PROTECTED]
Subject: Re: remote agent


You're not supposed to manually start this. Just start the CAD. The first
time you try the Web GUI, the CAD will automatically start the remote
agent for you.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.




Wholey, Joseph (TGA\\MLOL) [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
02/19/2003 10:24
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject: remote agent



Unable to start the remote agent.

Running XP Professional
central scheduler installed and started successfully
CAD installed and started successfully
Remote agent installed, but it gives the error below when I try to start it.

Could not start the TSM AGENT services on local computer
Error 0x: 0x

Any suggestions???

Regards, Joe



TDP for oracle set up

2003-02-20 Thread Wholey, Joseph (TGA\\MLOL)
Client info
Oracle v9.2
TDP v 2.2.0  (64 bit api)
TSM v 4.2.2
Solaris v 2.8

Server
zOS running TSM Server 4.2.2.9

I'm trying to run TDPOCONF PASSWORD to authenticate the client to the server.
I get prompted for the password, enter it... and then get the following error message:
ANS0263E (RC2230) Either the dsm.sys file was not found, or the Inclexcl file 
specified in the dsm.sys was not found
Both files exist and the permissions are correct.
Any help?

Regards, Joe
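For what it's worth, ANS0263E from an API-based tool often means the API can't locate the option files even though they exist: the API looks where its DSMI_* environment variables point, for the user actually running the command. A hedged sketch (paths are examples only, not real install locations):

```shell
# Set for the user running tdpoconf (e.g. the oracle user).
# DSMI_DIR must contain dsm.sys; any Inclexcl file named inside
# dsm.sys must also be readable by that same user.
export DSMI_DIR=/opt/tivoli/tsm/client/api/bin64
export DSMI_CONFIG=/opt/tivoli/tsm/client/api/bin64/dsm.opt
export DSMI_LOG=/tmp
```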



remote agent

2003-02-19 Thread Wholey, Joseph (TGA\\MLOL)
Unable to start the remote agent.

Running XP Professional
central scheduler installed and started successfully
CAD installed and started successfully
Remote agent installed, but it gives the error below when I try to start it.

Could not start the TSM AGENT services on local computer
Error 0x: 0x

Any suggestions???

Regards, Joe



TDP monitoring

2003-02-13 Thread Wholey, Joseph (TGA\\MLOL)
Trying to get a general survey of how TSMers are managing large-scale deployments of 
TDP for Oracle.

Since it's a client driven process (in our environment), the success or failure is 
difficult to monitor.
Also, TDPOSYNC requires information only the DBAs would know, e.g.:
    Catalog user name
    Catalog Password
    Catalog Connect String


How do most people manage that?  How do you keep a DBA honest and make sure he runs it?
Any info would be greatly appreciated.

Regards, Joe



Re: TDP monitoring

2003-02-13 Thread Wholey, Joseph (TGA\\MLOL)
Bill,

How would you pass this info on the command line, as it prompts you for it when 
you execute TDPOSYNC?

Catalog user name
Catalog Password
Catalog Connect String

-Original Message-
From: Bill Boyer [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 13, 2003 9:52 AM
To: [EMAIL PROTECTED]
Subject: Re: TDP monitoring


Let your DBA create the shell scripts and RMAN scripts for the backups and
TDPOSYNC, then YOU schedule them via TSM. Create a command schedule that
su's to the Oracle user and runs the shell script(s). The result is then
posted as the completion code for the schedule event. The DBA (or whoever
creates the shell scripts) needs to make sure to return from the script
with a meaningful code.

Like su - ORACLE -c /path/to/shell/script.sh
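A minimal sketch of such a wrapper, written as a function so it can be exercised with any command (the name run_backup and the error text are hypothetical; in practice the DBA's script would invoke rman or sqlplus at that point):

```shell
# run_backup: run the real backup command and propagate its exit
# status, so the TSM command schedule records a meaningful result.
run_backup() {
  "$@"
  rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "backup failed with rc=$rc" >&2
  fi
  return "$rc"
}

# Scheduled via a TSM command schedule roughly as:
#   su - oracle -c "/path/to/script.sh"
```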

Bill Boyer
My problem was caused by a loose screw at the keyboard - ??

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Wholey, Joseph (TGA\MLOL)
Sent: Thursday, February 13, 2003 9:45 AM
To: [EMAIL PROTECTED]
Subject: TDP monitoring


Trying to get a general survey of how TSMers are managing large-scale
deployments of TDP for Oracle.

Since it's a client driven process (in our environment), the success or
failure is difficult to monitor.
Also, TDPOSYNC requires information only the DBAs would know, e.g.:
    Catalog user name
    Catalog Password
    Catalog Connect String


How do most people manage that?  How do you keep a DBA honest and make sure
he runs it?
Any info would be greatly appreciated.

Regards, Joe



RPM packages for TSM v 5.1.5 i386

2003-02-04 Thread Wholey, Joseph (TGA\\MLOL)
I'm trying to get a copy of the TSM server for Linux to test.  Is there any way I can 
procure it on a trial basis?  My company has an enterprise license for IBM products, 
but it seems Tivoli is not included in that.



Re: TSM server on Linux

2003-02-01 Thread Wholey, Joseph (TGA\\MLOL)
Thanks Christian, 

Anyone else have experience w/ this?  

Regards, Joe

-Original Message-
From: Christian Svensson [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 31, 2003 3:43 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM server on Linux




Hi Joseph!
I have tried to get it to work on Red Hat 8, but I cannot install TSM on it.
However, if I install TSM on Red Hat 7.3 and then upgrade the kernel and X
Windows to version 8, everything works perfectly. I got better performance
with Red Hat 8.

Good luck

Med Vänliga Hälsningar/Best Regards
Christian Svensson

---

Cristie Nordic AB
Box 2
SE-131 06 Nacka
Sweden

Phone : +46-(0)8-641 96 30
Mobil : +46-(0)70-325 15 77
eMail : [EMAIL PROTECTED]
MSN : [EMAIL PROTECTED]


Wholey, Joseph (TGA\\MLOL) JWholey@EXCHANGE.ML.COM
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2003-01-30 21:59
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject: TSM server on Linux



Does TSM server run on RedHat v8.0, intel platform?  If not, what versions
of Linux does it run on (other than the mainframe)?



TSM server on Linux

2003-01-30 Thread Wholey, Joseph (TGA\\MLOL)
Does TSM server run on RedHat v8.0, intel platform?  If not, what versions of Linux 
does it run on (other than the mainframe)?



Re: TSM server on Linux

2003-01-30 Thread Wholey, Joseph (TGA\\MLOL)
Yes, it runs under VM.

-Original Message-
From: Zlatko Krastev/ACIT [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 30, 2003 5:27 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM server on Linux


It does but is not supported. And for sure there is no TSM server for
Linux on mainframe.

Zlatko Krastev
IT Consultant






Wholey, Joseph (TGA\\MLOL) [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.01.2003 22:59
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject: TSM server on Linux


Does TSM server run on RedHat v8.0, intel platform?  If not, what versions
of Linux does it run on (other than the mainframe)?



set adsm-l nomail

2002-12-19 Thread Wholey, Joseph (TGA\\MLOL)
 set adsm-l nomail



Re: TSM in a MS Cluster environment

2002-12-11 Thread Wholey, Joseph (TGA\\MLOL)
Bruce,

Was it an active/passive or active/active cluster environment?

Regards, Joe

-Original Message-
From: Bruce Kamp [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 9:48 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM in a MS Cluster environment


I set it up the same way: three schedulers and three opt files.  One thing to
make sure of is that you build both sides of the cluster before adding the
registry key replication in the cluster resource.  Documentation isn't
specific on this.  It cost me about 3 days on the phone with level 1 and
level 2 support; I ended up figuring it out on my own by dumb luck!
Hope this helps!

---
Bruce Kamp
Midrange Systems Analyst II
Memorial Healthcare System
E-mail: [EMAIL PROTECTED]
Phone: (954) 987-2020 x4597
Fax: (954) 985-1404
---


-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 4:52 PM
To: [EMAIL PROTECTED]
Subject: TSM in a MS Cluster environment


I'm about to install TSM in a MS cluster environment.  I've done this with
success in the past in an active/active environment where each server has 3
TSM scheduler services.  e.g.  local scheduler,
scheduler_service_for_group_A, scheduler_service_for_group_B.

I've not done it in an active/passive environment where all resources
normally belong to the active server.  The documentation is very vague.
Here are my questions.

How many nodes will I have?
How many scheduler services will I have on each server?
Do both nodes share one cluster dsm.opt file that resides in the same
directory?

Does anyone have step by step doc for an active/passive TSM cluster server
install?

thx.


Regards, Joe



Re: TSM in a MS Cluster environment

2002-12-11 Thread Wholey, Joseph (TGA\\MLOL)
Manuel,

This does help.  I have only one group.  Can I, for example, put the dsm.opt file on 
an H drive and have both client 1 and 2 point to that dsm.opt file, so that if server 1 
fails, server 2 will start up the group scheduler and manage the backups?

Regards, Joe

-Original Message-
From: Manuel Schweiger [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 5:19 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM in a MS Cluster environment


Joe,

You will need to set up 1 TSM client for each cluster node and 1 TSM client
for each cluster group.
Best to do a simple example on this:

Cluster: 2 Nodes, 3 Groups

Installation Node 1:

TSM Client Local1
TSM Client Group1
TSM Client Group2
TSM Client Group3
Every group has its own schedulers, client acceptors and remote agents
(those of Group1-3 have to switch with the cluster).

Installation Node 2:

TSM Client Local2
TSM Client Group1
TSM Client Group2
TSM Client Group3
(Group 1-3 have to be set up exactly like on Node 1!!)
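On Windows the per-group scheduler services are usually created with dsmcutil on each node, pointing at an option file on a shared disk. A hedged sketch (service name, paths, node name, and password are placeholders, and the exact option spellings should be checked against dsmcutil help for your client level):

```
dsmcutil install /name:"TSM Scheduler GROUP1" ^
  /clientdir:"c:\program files\tivoli\tsm\baclient" ^
  /optfile:q:\tsm\group1\dsm.opt ^
  /node:GROUP1 /password:secret ^
  /clusternode:yes /autostart:no
```

Run the equivalent command on both nodes so the service definitions match, as noted above.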

Hope that helps a little.

regards, Manuel


- Original Message -
From: Wholey, Joseph (TGA\MLOL) [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, December 10, 2002 10:52 PM
Subject: TSM in a MS Cluster environment


 I'm about to install TSM in a MS cluster environment.  I've done this with
success in the past in an active/active environment where each server has 3
TSM scheduler services.  e.g.  local scheduler,
 scheduler_service_for_group_A, scheduler_service_for_group_B.

 I've not done it in an active/passive environment where all resources
normally belong to the active server.  The documentation is very vague.
Here are my questions.

 How many nodes will I have?
 How many scheduler services will I have on each server?
 Do both nodes share one cluster dsm.opt file that resides in the same
directory?

 Does anyone have step by step doc for an active/passive TSM cluster server
install?

 thx.


 Regards, Joe



Oracle v9.2 and TDP for Oracle

2002-11-18 Thread Wholey, Joseph (TGA\\MLOL)
Anyone know when or if there are plans for TDP to support Oracle v9.2?



Re: Oracle v9.2 and TDP for Oracle

2002-11-18 Thread Wholey, Joseph (TGA\\MLOL)




Wholey, Joseph (TGA\\MLOL) [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
11/18/2002 12:24 PM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject: Oracle v9.2 and TDP for Oracle


Anyone know when or if there are plans for TDP to support Oracle v9.2?



TDP for Oracle

2002-11-11 Thread Wholey, Joseph (TGA\\MLOL)
Can the 32-bit version of TDP co-exist with the 64-bit version, i.e. can they run 
concurrently on the same client?



client configuration for DB2

2002-11-01 Thread Wholey, Joseph (TGA\\MLOL)
How would you prevent the following problem/scenario from occurring?

One client
Two separate instances
Each instance has a DB by the same name.

e.g. Client1
    InstX has a DBa
    InstY has a DBa

If you issue a q fi of InstX's DBa, you can see InstY's filespace.  How can you prevent 
this?
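A common way to enforce the separation (hedged sketch; server names, node names, and the address are hypothetical): give each instance its own TSM node via its own dsm.sys stanza, and point each instance's DSMI_CONFIG at a dsm.opt that selects the matching stanza. Filespaces belong to a node, so InstX's node can then no longer see InstY's filespace.

```
* dsm.sys -- one server stanza per DB2 instance
SErvername        tsm_instx
  NODename        client1_instx
  COMMMethod      tcpip
  TCPServeraddress tsm.example.com

SErvername        tsm_insty
  NODename        client1_insty
  COMMMethod      tcpip
  TCPServeraddress tsm.example.com
```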

Regards, Joe





TDP for Oracle and backing up DB2 with TSM

2002-10-30 Thread Wholey, Joseph (TGA\\MLOL)
Here's an easy one...

How do most people set up their TDP nodes/filesystems for the following...

First TDPO (this is how I was going to set it up)
One client with multiple Databases (13)
one unique node name per database
one unique filesystem per database.
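That layout translates to one tdpo.opt per database. A hedged sketch of one of them (paths, node name, and filespace name are illustrative, and the option keywords should be checked against the TDPO manual for your level):

```
* /oracle/admin/DB1/tdpo.opt  (hypothetical path)
DSMI_ORC_CONFIG  /opt/tivoli/tsm/client/api/bin64/dsm.opt
DSMI_LOG         /oracle/admin/DB1/log
TDPO_NODE        servera_db1
TDPO_FS          /adsmorc_db1
```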

Backing up DB2 with TSM
If you have multiple DBs on the same client, do you want these DBs to have unique node 
names and filespace names?

Also, do most people create separate domains for different databases, or can they all 
reside in the same DATABASE domain?

Any recommendations as to how others are doing this would be greatly appreciated.  
Don't want to set up a whole environment incorrectly.

Regards, Joe



TDPO for Oracle v 9.2

2002-10-30 Thread Wholey, Joseph (TGA\\MLOL)
Is TDPO for Oracle v 9.2 supported?



TDP for MS SQL

2002-10-29 Thread Wholey, Joseph (TGA\\MLOL)
What are the copygroup settings for MS SQL?  Do they work similarly to TDP for 
Oracle, i.e. vde(1), vdd(0), rev(0), rov(0)?  Again, the documentation is very light.

Any simple explanation would be appreciated.

Regards, Joe



TDP for Oracle and Multiple DBs

2002-10-28 Thread Wholey, Joseph (TGA\\MLOL)
The TDP for Oracle documentation is a little light... need a little assistance with the 
following questions.

1.  If multiple Oracle databases reside on the same client, is it recommended to write 
them to their own filespace on the TSM server?
2.  If so, do I need multiple tdpo.opt files, one for each database instance?
3.  (this is kind of broad) What are the ramifications of multiple instances of Oracle 
(different versions) running on the same client?  What are the ramifications of 
multiple instances of RMAN (different versions) running on the same client?
4.  How do you tell what version of the Tivoli Storage Manager API is running?
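On question 4, the installed API level can usually be read from the OS package database. A hedged sketch (the package names vary by platform and level, so treat these as examples):

```
# AIX:
lslpp -l "tivoli.tsm.client.api*"

# Solaris:
pkginfo -l TIVsmCapi | grep -i version
```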

any help would be greatly appreciated.

Regards, Joe



TDP for Oracle Configuration

2002-10-28 Thread Wholey, Joseph (TGA\\MLOL)
If I'm going to have multiple Oracle databases on the same client, each with its own 
node_name and filespace (specified in the tdpo.opt file), all pointing to a domain 
called DATABASE, does the nodename in the dsm.opt file for the TSM client need to be 
registered to the DATABASE domain?  Can the TSM client (non-TDP) be registered to a 
domain other than DATABASE?  Will the TDPO backups work?



Regards, Joe



Re: adsm.org unusable

2002-10-25 Thread Wholey, Joseph (TGA\\MLOL)
Paul,

I haven't had any problems with the site, however, it is not very intuitive.

Regards, Joe

-Original Message-
From: Seay, Paul [mailto:seay_pd;NAPTHEON.COM]
Sent: Thursday, October 24, 2002 10:08 PM
To: [EMAIL PROTECTED]
Subject: Re: adsm.org unusable


I posted the information on this the other day.

You do not have to login.  Just select the ADSM-L archives in that left bar
area.  It will take you to the search engine area.

I have found that Netscape 4.76 does not work with this site.

However, search.adsm.org does look like a better choice.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Nelson, Doug [mailto:DNelson;CHITTENDEN.COM]
Sent: Thursday, October 24, 2002 8:15 PM
To: [EMAIL PROTECTED]
Subject: Re: adsm.org unusable


Try http://search.adsm.org/   it works for me. I agree about the home page
(adsm.org).

Douglas C. Nelson
Distributed Computing Consultant
Alltel Information Services
Chittenden Data Center
2 Burlington Square
Burlington, Vt. 05401
802-660-2336



-Original Message-
From: Kai Hintze [mailto:kai.hintze;ALBERTSONS.COM]
Sent: Thursday, October 24, 2002 7:16 PM
To: [EMAIL PROTECTED]
Subject: adsm.org unusable


What happened to adsm.org? I tried to go and look for something in the
archives but they weren't there! Instead the site was some funky discussion
board with several columns.

The meat of the board was the middle column. But the column was too narrow
so I couldn't see an entire message without scrolling left and right, but
the scroll bar was two screens below the message so I couldn't ever read an
entire message.

The right hand column was a poll that didn't let me participate.

The left hand column invited me to log in, and had numerous resource
lists--one of which was the archives, but I STILL CAN'T READ THEM BECAUSE
THE COLUMN IS TOO NARROW! And I DON'T want yet another place to log in.

Please, _please_, PLEASE give me back the archives.

- Kai



TDPO.OPT/filespace Question

2002-10-24 Thread Wholey, Joseph (TGA\\MLOL)
I have 3 Oracle databases (DB1, DB2, DB3) that reside on SERVERA.  I'm going to call 
the client SERVERA_ORA.  Should I point to 3 distinct tdpo.opt files so I can create 3 
distinct file spaces?

'ENV=(TDPO_OPTFILE=.../DB1/tdpo.opt)'
'ENV=(TDPO_OPTFILE=.../DB2/tdpo.opt)'
'ENV=(TDPO_OPTFILE=.../DB3/tdpo.opt)'
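For reference, each tdpo.opt would be selected from the corresponding RMAN script's channel allocation. A hedged sketch of one channel (the path is an example, not the elided real one):

```
run {
  allocate channel t1 type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=/oracle/admin/DB1/tdpo.opt)';
  backup database;
  release channel t1;
}
```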

Any suggestions would be appreciated.  thx.

-jgw-



Re: TDPO.OPT/filespace Question

2002-10-24 Thread Wholey, Joseph (TGA\\MLOL)
Then how do I create distinct filespaces for each database as is suggested?


-Original Message-
From: David Longo [mailto:David.Longo;HEALTH-FIRST.ORG]
Sent: Thursday, October 24, 2002 3:46 PM
To: [EMAIL PROTECTED]
Subject: Re: TDPO.OPT/filespace Question


No, not unless you have a reason for doing so, like if these DBs
are for different applications or something like that.  It depends
on your DBA's requirements and how Oracle and RMAN are set up.

More complicated to do it with 3, so don't unless necessary.

David Longo

 [EMAIL PROTECTED] 10/24/02 02:05PM 
I have 3 Oracle databases (DB1, DB2, DB3) that reside on SERVERA.  I'm
going to call the client SERVERA_ORA.  Should I point to 3 distinct
tdpo.opt files so I can create 3 distinct file spaces?

'ENV=(TDPO_OPTFILE=.../DB1/tdpo.opt)'
'ENV=(TDPO_OPTFILE=.../DB2/tdpo.opt)'
'ENV=(TDPO_OPTFILE=.../DB3/tdpo.opt)'

Any suggestions would be appreciated.  thx.

-jgw-





AIX client version recommendations

2002-08-28 Thread Wholey, Joseph (TGA\\MLOL)

I'm about to upgrade TSM clients (v3.1.0.6 and v4.1.3.0) on AIX and Sun platforms 
backing up to TSM server 4.2.2.9 on z/OS.  Does anyone have any recommendations as to 
what client version I should go with, and any caveats I should be aware of?  Any 
suggestions would be greatly appreciated.

Regards, Joe




recovery log filling

2002-07-26 Thread Wholey, Joseph (TGA\\MLOL)

Environment:
TSM SERVER: 4.1.3.0
S/390

We're currently in the process of consolidating 4 TSM servers into 1.  Over the course 
of about 3 weeks we've been redirecting clients to the new server.  Originally this 
new TSM server was in logmode rollforward.  As the volume to this server increased 
(incrementals and base incrementals running nightly), our recovery log could not 
handle the volume, so we changed the logmode to normal.  Last night this server 
processed 1.4 million files, and backups started to fail due to a recovery-log-full 
condition.  On prior nights this server has processed well over 2 million files and 
the recovery log did not fill.  What's the determining factor?
Also, in normal mode, what determines when the recovery log clears?

Regards, Joe



Re: command file execution

2002-07-23 Thread Wholey, Joseph (TGA\\MLOL)

Don,

Veritas, in conjunction with Tivoli, wrote an API for Backup Exec-to-TSM 
communication.  It's installed on the Backup Exec side with the IBM ADSM option that 
Backup Exec provides (it's an add-on).  I don't know exactly how it works under the 
covers; that's difficult info to obtain from either IBM or Veritas.
As for the strange output you see from the show session command, that's correct.

Platform: LA4701S001: pulled from a Veritas registry entry (that we hack), is the name 
of the server.
ID: LA4701S001BE: also pulled from a Veritas registry entry (we also hack), is to 
differentiate the Backup Exec session from a plain old vanilla TSM node.
Also, the last event was definitely not a restore in this instance.

Regards, Joe

-Original Message-
From: Don France [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 22, 2002 5:37 PM
To: [EMAIL PROTECTED]
Subject: Re: command file execution


Joe,

Are you using Win2K's RSM (rather than TSM driver) for library manager?
(How did you connect the BackupExec service to TSM library manager?)

The output from the show session looks strange -- it indicates the last
event was restore, and that it ended, but Platform ID should be WinNT;
rather it looks like (maybe) the service-id?

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Wholey, Joseph (TGA\MLOL)
Sent: Monday, July 22, 2002 7:19 AM
To: [EMAIL PROTECTED]
Subject: Re: command file execution


Don,

I agree with all that you state below... and that is how I thought it worked
as well.
Here's what's really happening in my case though.
I execute a command schedule to recycle Backup Exec services on NT servers
(we use Backup Exec to backup 100s of Exchange servers to TSM).  We have
Backup Exec set up to use TSM as its robotic library
(virtual device).  Once the Backup Exec services come back up, Backup Exec
creates a session with TSM to confirm the robotic library that it's
connecting to.  This connection hangs out on the TSM
server until the idletimeout parm in TSM kicks it out.

NOTE:  Backup Exec is a Veritas backup product that we use to backup only
Exchange data.

Below is the output of a show session command of 1 of the sessions that I'm
referring to.

THE QUESTION: Does Last Verb ( EndTxn ), Last Verb State ( Sent ) mean
that TSM sent a message back to the client to end the session?  Is this a
problem with my Veritas Backup Exec software?  Why
does this session stay in the system?

Session 24806:  Type=Node,  Id=LA4701S001BE
   Platform=LA4701S001, NodeId=119, Owner=LA4701S001
   SessType=4, Index=1, TermReason=0
   RecvWaitTime=0.000 (samples=0)
   Backup  Objects ( bytes )  Inserted: 0 ( 0.0 )
   Backup  Objects ( bytes )  Restored: 1 ( 0.1035 )
   Archive Objects ( bytes )  Inserted: 0 ( 0.0 )
   Archive Objects ( bytes ) Retrieved: 0 ( 0.0 )
   Last Verb ( EndTxn ), Last Verb State ( Sent )

Any help would be greatly appreciated.

Regards, Joe


-Original Message-
From: Don France [mailto:[EMAIL PROTECTED]]
Sent: Saturday, July 13, 2002 3:25 AM
To: [EMAIL PROTECTED]
Subject: Re: command file execution


The command file is launched, that's all;  there is no connection to
break, per se -- all that happens is the dsmc schedule daemon gets the
command (from the server's schedule arguments, of course), closes the
session (if prompted scheduling is used) and executes the command.   (The
client already has the command args, in polling-type schedules, so there
would be no session at all, unless/until the command script initiates a dsmc
command.)

The connection between server and client for launching a command file ends
as soon as the data gets passed to the client-scheduler daemon... BEFORE the
command file even runs (essentially).  To see this in action, match the
actlog entries with the dsmsched.log info;  unless you are using
server-prompted scheduling, there is no session/connection between TSM
client & TSM server until/unless the command script contains a
session-creating command (like dsmc).

Most folks will likely have a dsmc args (or similar) in the command file,
which creates a session with TSM server to run whatever the args say.  Upon
completion of dsmc command, that completion terminates associated sessions.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Wholey, Joseph (TGA\MLOL)
Sent: Wednesday, July 10, 2002 6:39 AM
To: [EMAIL PROTECTED]
Subject: command file execution


An easy one...

When initiating the execution of a command file from the server to the
client via schedule (the command file resides on client), what breaks the
connection between the server and the client after the command file completes
execution?  Is it the Idletimeout parm?  Is there another way to break the
connection after the cmd sched executes?

Re: command file execution

2002-07-22 Thread Wholey, Joseph (TGA\\MLOL)

Don,

I agree with all that you state below... and that is how I thought it worked as well.
Here's what's really happening in my case though.
I execute a command schedule to recycle Backup Exec services on NT servers (we use 
Backup Exec to backup 100s of Exchange servers to TSM).  We have Backup Exec set up to 
use TSM as its robotic library
(virtual device).  Once the Backup Exec services come back up, Backup Exec creates a 
session with TSM to confirm the robotic library that it's connecting to.  This 
connection hangs out on the TSM
server until the idletimeout parm in TSM kicks it out.

NOTE:  Backup Exec is a Veritas backup product that we use to backup only Exchange 
data.

Below is the output of a show session command of 1 of the sessions that I'm referring 
to.

THE QUESTION: Does Last Verb ( EndTxn ), Last Verb State ( Sent ) mean that TSM sent 
a message back to the client to end the session?  Is this a problem with my Veritas 
Backup Exec software?  Why
does this session stay in the system?

Session 24806:  Type=Node,  Id=LA4701S001BE
   Platform=LA4701S001, NodeId=119, Owner=LA4701S001
   SessType=4, Index=1, TermReason=0
   RecvWaitTime=0.000 (samples=0)
   Backup  Objects ( bytes )  Inserted: 0 ( 0.0 )
   Backup  Objects ( bytes )  Restored: 1 ( 0.1035 )
   Archive Objects ( bytes )  Inserted: 0 ( 0.0 )
   Archive Objects ( bytes ) Retrieved: 0 ( 0.0 )
   Last Verb ( EndTxn ), Last Verb State ( Sent )

Any help would be greatly appreciated.

Regards, Joe


-Original Message-
From: Don France [mailto:[EMAIL PROTECTED]]
Sent: Saturday, July 13, 2002 3:25 AM
To: [EMAIL PROTECTED]
Subject: Re: command file execution


The command file is launched, that's all;  there is no connection to
break, per se -- all that happens is the dsmc schedule daemon gets the
command (from the server's schedule arguments, of course), closes the
session (if prompted scheduling is used) and executes the command.   (The
client already has the command args, in polling-type schedules, so there
would be no session at all, unless/until the command script initiates a dsmc
command.)

The connection between server and client for launching a command file ends
as soon as the data gets passed to the client-scheduler daemon... BEFORE the
command file even runs (essentially).  To see this in action, match the
actlog entries with the dsmsched.log info;  unless you are using
server-prompted scheduling, there is no session/connection between TSM
client & TSM server until/unless the command script contains a
session-creating command (like dsmc).

Most folks will likely have a dsmc args (or similar) in the command file,
which creates a session with TSM server to run whatever the args say.  Upon
completion of dsmc command, that completion terminates associated sessions.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Wholey, Joseph (TGA\MLOL)
Sent: Wednesday, July 10, 2002 6:39 AM
To: [EMAIL PROTECTED]
Subject: command file execution


An easy one...

When initiating the execution of a command file from the server to the
client via schedule (the command file resides on client), what breaks the
connection between the server and the client after the
command file completes execution?  Is it the Idletimeout parm?  Is there
another way to break the connection after the cmd sched executes?

Regards, Joe



Re: Testing v4.2

2002-07-17 Thread Wholey, Joseph (TGA\\MLOL)

Michael,

Check your dsm.opt file.  You probably have most stuff excluded.  What you're probably 
seeing backed up is directory structures (without the actual files).  Take a look.

Regards, Joe

-Original Message-
From: Michael Moore [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 16, 2002 9:58 AM
To: [EMAIL PROTECTED]
Subject: Testing v4.2


I am testing v4.2 of the TSM server for OS/390.

I have found the following problems, or are they problems:




1)  I defined a new node MANAGE_ME to the TSM 4.2 server.  This is the
first backup for this node on this server.  No filespaces exist on this
server for this node.  When I run the backup (either via scheduler or
command), I only back up 429 MB.  This node has a 3.2 GB disk drive, with
796 MB used.  It does not look like everything is being backed up.
Missing about 300 MB.


2)  When the backup completes, no information is relayed back to the
server.  In the previous version we would see the backup statistics for
each node in the server JESLOG.  In this version, it seems as though the
information is not there.  For instance, the following information (this
information is from the dsmsched.log on the node):

07/16/2002 08:59:16 --- SCHEDULEREC STATUS BEGIN
07/16/2002 08:59:16 Total number of objects inspected:8,761
07/16/2002 08:59:16 Total number of objects backed up:8,720
07/16/2002 08:59:16 Total number of objects updated:  0
07/16/2002 08:59:16 Total number of objects rebound:  0
07/16/2002 08:59:16 Total number of objects deleted:  0
07/16/2002 08:59:16 Total number of objects expired:  0
07/16/2002 08:59:16 Total number of objects failed:   0
07/16/2002 08:59:16 Total number of bytes transferred:   429.02 MB
07/16/2002 08:59:16 Data transfer time:   90.46 sec
07/16/2002 08:59:16 Network data transfer rate:4,856.42 KB/sec
07/16/2002 08:59:16 Aggregate data transfer rate:393.52 KB/sec
07/16/2002 08:59:16 Objects compressed by:   37%
07/16/2002 08:59:16 Elapsed processing time:   00:18:36
07/16/2002 08:59:16 --- SCHEDULEREC STATUS END
07/16/2002 08:59:16 --- SCHEDULEREC OBJECT END TEST 07/16/2002 08:33:00
07/16/2002 08:59:16 Scheduled event 'TEST' completed successfully.
07/16/2002 08:59:16 Sending results for scheduled event 'TEST'.
07/16/2002 08:59:21 Results sent to server for scheduled event 'TEST'

3)  Getting the following messages when trying to run a storage pool
backup:

09.02.48 STC12888  IKJ56241I DATA SET TSMT.BFS.V1E NOT ALLOCATED+

09.02.48 STC12888  IKJ56241I REQUEST REQUIRES MORE NON-SMS MANAGED
VOLUMES THAN ARE ELIGIBLE
09.02.48 STC12888 *ANR5035E Dynamic allocation of tape unit 3590-1
failed, return code 4, error
09.02.48 STC12888 *code 624, info code 0.

09.02.48 STC12888  ANR1401W Mount request denied for volume 306500 -
mount failed.

The tape 306500 is not in one of our libraries.


The test and production servers are running on the same processor, and
everything worked on the test server prior to the upgrade.

Thanks for your assistance!




Michael Moore
VF Services Inc.
121 Smith Street
Greensboro,  NC  27420-1488

Voice: 336-332-4423
Fax: 336-332-4544



dsm.opt / dsmerror.log / incl/excl oddity

2002-07-11 Thread Wholey, Joseph (TGA\\MLOL)

Environment A:
Client OS: NT 4
Client Version: Version 4, Release 2, Level 1.20
Server: Storage Management Server for MVS - Version 4, Release 2, Level 1.11

Environment B:
Client OS: NT 4
Client Version: Version 4, Release 2, Level 1.20
Server: Storage Management Server for MVS - Version 4, Release 1, Level 3.0

These clients are cookie-cutter builds, identical from the OS down to the dsm.opt
(with the exception of the TSM server they point to).

Question:  Why is my dsmerror.log filling up with the following messages in environment
B and not in A, when the directory structure and dsm.opt are identical for both servers?

ANS1115W File '\\l06101s001\e$\Services\BUEXECV7\NT\reports\saved\empty' excluded by 
Include/Exclude list

Any help would be appreciated.

Regards, Joe



command file execution

2002-07-10 Thread Wholey, Joseph (TGA\\MLOL)

An easy one...

When initiating the execution of a command file from the server to the client via a
schedule (the command file resides on the client), what breaks the connection between the
server and the client after the
command file completes execution?  Is it the Idletimeout parm?  Is there another way
to break the connection after the command schedule executes?

Regards, Joe



Re: Server-to-Server

2002-07-08 Thread Wholey, Joseph (TGA\\MLOL)

Are you sure it's registered as type=server?

-Original Message-
From: Remeta, Mark [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 05, 2002 11:22 AM
To: [EMAIL PROTECTED]
Subject: Server-to-Server


Does anyone know if there are any known problems doing server-to-server with
the source server being a 4.2.x server and the target server being a 5.1.x
server? I'm trying to do a database backup and the 5.1 server keeps coming
up saying 'node (Windows) refused - node name not registered' even though
the node is registered on the 5.1 server.

Thanks in advance,

Mark Remeta
Seligman Data Corp.
100 Park Avenue
New York, NY 10017


Confidentiality Note: The information transmitted is intended only for the
person or entity to whom or which it is addressed and may contain
confidential and/or privileged material. Any review, retransmission,
dissemination or other use of this information by persons or entities other
than the intended recipient is prohibited. If you receive this in error,
please delete this material immediately.



dsm.sys / dsm.opt

2002-06-29 Thread Wholey, Joseph (TGA\\MLOL)

Are dsm.opt and dsm.sys compatible across releases?  We've just copied these files
from a 2.6 client to a 4.2 client.  Were there any enhancements or omissions?



TDP for Oracle session

2002-06-22 Thread Wholey, Joseph (TGA\\MLOL)

Probably an easy one for most...

What controls the number of TDP for Oracle sessions that get spawned during an Oracle
backup?  How can you throttle it back or increase it?  Where can I read about
recommendations with respect to the number of
sessions and the amount of memory used on the client?  I'm relatively new to TDP for Oracle...
any help would be appreciated.

Regards, Joe



Re: PST Files and Backup Times Revisited

2002-06-13 Thread Wholey, Joseph (TGA\\MLOL)

Paul,

When you say "compact it," what exactly do you mean?

Regards, Joe

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 12, 2002 10:21 PM
To: [EMAIL PROTECTED]
Subject: PST Files and Backup Times Revisited


This discussion has been circling for the last week and I had not had time
to checkout what I am going to point out here.

A PST is continually appended to.  As stuff is added it just grows.
When you delete the stuff, IT DOES NOT SHRINK by itself.  You have to
COMPACT it.  Take the example of a PST whose properties FOLDER SIZE
function said only 53KB was used.  That looks great, so why would I need to
compact it?  But take a look at the .PST file from the OS point of view.  This
one was over 3MB.  When I compacted it, it went to 169KB.  Guess what: not
even subfile backup can beat that improvement.

So, we need to encourage our users to compact their PSTs regularly.  The
savings in space to them and backup times could be significant.  However, if
it is a .PST that is only added to and never anything deleted, subfile is
the way to go, and do not compact.

I have resolved myself that Bill Gates makes nothing on software.  He must
make all his money investing in storage companies.

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180



Re: pst files

2002-06-13 Thread Wholey, Joseph (TGA\\MLOL)

You're absolutely correct...  that just occurred to me as I was about to test.

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 12, 2002 11:44 PM
To: [EMAIL PROTECTED]
Subject: Re: pst files


Yeah, but the problem is the PST files are not on the client, they are on
the file server.

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Peter Pijpelink - P.L.C.S. BV Storage Consultants [mailto:[EMAIL PROTECTED]]

Sent: Tuesday, June 11, 2002 3:40 AM
To: [EMAIL PROTECTED]
Subject: Re: pst files


Hi people.

look at www.tsmmediamanager.nl under support/tips; there you can download
the Outlook killer program. (smile: I was wrong, we did not write this,
Microsoft did)
Seems Bill also makes suicide software for himself, haha

Greetings

Peter

At 12:11 10-06-2002 +0200, you wrote:
Hi

We had this problem before. Our Outlook/Exchange users left their 
client running over night, which first of all produced a lot of errors, 
and second of all made backups both take a long time, and also not 
backing up open PST-files.

The solution is to install St Bernard Open File Manager. We have this 
software running on two large fileservers containing both PST, and 
PAB-files. Today, all files are backed up, and the backup time is a lot 
faster.

The installation of St Bernard is very easy, and requires a minimum of 
time.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 HÄGERNÄS
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51




Wholey, Joseph (TGA\\MLOL) [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED] 2002-06-07 
16:38 Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:pst files


Why do *.pst files take so long to back up?   Even those that are not in
use.  Does anyone have a strategy to back them up in a most efficient 
manner?  We have users that create multiple, large (gigs) *.pst files 
and it's a requirement that we back them up.



pst files

2002-06-07 Thread Wholey, Joseph (TGA\\MLOL)

Why do *.pst files take so long to back up?   Even those that are not in use.  Does
anyone have a strategy to back them up in the most efficient manner?  We have users who
create multiple, large (gigs)
*.pst files, and it's a requirement that we back them up.



trace options/info

2002-06-07 Thread Wholey, Joseph (TGA\\MLOL)

What manual would have information with regard to tracing the ba client?  i.e. syntax, 
what parms to turn on, what to look for in the output etc...



Microsoft cluster servers yet again

2002-06-04 Thread Wholey, Joseph (TGA\\MLOL)

Is anyone aware of known degraded performance when backing up Microsoft Cluster
Servers?  This data is going to TSM v4.1.3 on S/390, where there is no other activity
during the time of the backup.  Why so long?
There is nothing to indicate where the hold-up is in any of our logs (both client and
server).  Any help would be appreciated.

Executing scheduled command now.
06/03/2002 20:21:46 --- SCHEDULEREC OBJECT BEGIN DAILY_INCR_BACKUP_8PM 06/03/2002 
20:21:45
06/03/2002 23:36:25 --- SCHEDULEREC STATUS BEGIN
06/03/2002 23:36:25 Total number of objects inspected:   13,115
06/03/2002 23:36:25 Total number of objects backed up:  103
06/03/2002 23:36:25 Total number of objects updated:  0
06/03/2002 23:36:25 Total number of objects rebound:  0
06/03/2002 23:36:25 Total number of objects deleted:  0
06/03/2002 23:36:25 Total number of objects expired:  1
06/03/2002 23:36:25 Total number of objects failed:   0
06/03/2002 23:36:25 Total number of bytes transferred: 5.05 GB
06/03/2002 23:36:25 Data transfer time:  401.17 sec
06/03/2002 23:36:25 Network data transfer rate:13,215.33 KB/sec
06/03/2002 23:36:25 Aggregate data transfer rate:453.94 KB/sec
06/03/2002 23:36:25 Objects compressed by:0%
06/03/2002 23:36:25 Elapsed processing time:   03:14:38
06/03/2002 23:36:25 --- SCHEDULEREC STATUS END



data transfer time ???

2002-05-31 Thread Wholey, Joseph (TGA\\MLOL)

Can someone expound on what exactly DATA TRANSFER TIME specifies?  I've read the
definition in the manual and it's still not clear to me.  In the example below,
exactly what took 47.07 seconds?  thx.


05/30/2002 21:45:36 --- SCHEDULEREC STATUS BEGIN
05/30/2002 21:45:36 Total number of objects inspected:   12,987
05/30/2002 21:45:36 Total number of objects backed up:   29
05/30/2002 21:45:36 Total number of objects updated:  0
05/30/2002 21:45:36 Total number of objects rebound:  0
05/30/2002 21:45:36 Total number of objects deleted:  0
05/30/2002 21:45:36 Total number of objects expired:  0
05/30/2002 21:45:36 Total number of objects failed:   1
05/30/2002 21:45:36 Total number of bytes transferred: 1.70 GB
05/30/2002 21:45:36 Data transfer time:   47.07 sec
05/30/2002 21:45:36 Network data transfer rate:38,081.71 KB/sec
05/30/2002 21:45:36 Aggregate data transfer rate:283.08 KB/sec
05/30/2002 21:45:36 Objects compressed by:   37%
05/30/2002 21:45:36 Elapsed processing time:   01:45:32
05/30/2002 21:45:36 --- SCHEDULEREC STATUS END
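The relationship between the three numbers can be sanity-checked with a short script. This is just a sketch of how the client derives the two rates: "Data transfer time" is the cumulative time spent actually moving data over the network, while "Elapsed processing time" also covers scanning, compressing, and waiting on the server. The 1.70 GB figure in the log is rounded, so the computed rates only approximate the logged 38,081.71 and 283.08 KB/sec.

```python
# Back-of-the-envelope check of the two rates in the schedule log above.
bytes_sent_gb = 1.70          # "Total number of bytes transferred"
transfer_secs = 47.07         # "Data transfer time"
elapsed_secs = 1 * 3600 + 45 * 60 + 32   # "Elapsed processing time" 01:45:32

# TSM reports in binary units: GB -> KB
kb_sent = bytes_sent_gb * 1024 * 1024

network_rate = kb_sent / transfer_secs    # ~= "Network data transfer rate"
aggregate_rate = kb_sent / elapsed_secs   # ~= "Aggregate data transfer rate"

print(round(network_rate), round(aggregate_rate))
```

So the 47.07 seconds is only the time the client spent pushing bytes down the wire; the other hour and three quarters went to everything else the backup does.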



Microsoft cluster servers and TSM?

2002-05-31 Thread Wholey, Joseph (TGA\\MLOL)

Maybe someone can help me out here.  Here are the stats from two cluster groups going
to the same S/390 server.  The H: drive cluster belongs to Server3; the I: drive cluster
belongs to Server4.  There is no
other activity (client or admin) on the TSM server while these backups are in
progress.  These two servers are the local servers of the MS cluster and are identical
with respect to
hardware/disk/network (according to the OS folks).  There is nothing in the event logs
that indicates anything would be slowing down the backup of Server3's H drive.  Could it
be an MS clustering thing?
Has anyone ever seen anything like this?  I know it's a bit to look at, but any help
would be greatly appreciated.  Look at the throughputs for each... they speak for
themselves.  thx

Server3
H DRIVE
05/30/2002 21:45:36 --- SCHEDULEREC STATUS BEGIN
05/30/2002 21:45:36 Total number of objects inspected:   12,987
05/30/2002 21:45:36 Total number of objects backed up:   29
05/30/2002 21:45:36 Total number of objects updated:  0
05/30/2002 21:45:36 Total number of objects rebound:  0
05/30/2002 21:45:36 Total number of objects deleted:  0
05/30/2002 21:45:36 Total number of objects expired:  0
05/30/2002 21:45:36 Total number of objects failed:   1
05/30/2002 21:45:36 Total number of bytes transferred: 1.70 GB
05/30/2002 21:45:36 Data transfer time:   47.07 sec
05/30/2002 21:45:36 Network data transfer rate:38,081.71 KB/sec
05/30/2002 21:45:36 Aggregate data transfer rate:283.08 KB/sec
05/30/2002 21:45:36 Objects compressed by:   37%
05/30/2002 21:45:36 Elapsed processing time:   01:45:32
05/30/2002 21:45:36 --- SCHEDULEREC STATUS END

Server4
I DRIVE
05/30/2002 20:03:55 --- SCHEDULEREC STATUS BEGIN
05/30/2002 20:03:55 Total number of objects inspected:   12,680
05/30/2002 20:03:55 Total number of objects backed up:   10
05/30/2002 20:03:55 Total number of objects updated:  0
05/30/2002 20:03:55 Total number of objects rebound:  0
05/30/2002 20:03:55 Total number of objects deleted:  0
05/30/2002 20:03:55 Total number of objects expired:  0
05/30/2002 20:03:55 Total number of objects failed:   0
05/30/2002 20:03:55 Total number of bytes transferred:   408.34 MB
05/30/2002 20:03:55 Data transfer time:   23.58 sec
05/30/2002 20:03:55 Network data transfer rate:17,732.21 KB/sec
05/30/2002 20:03:55 Aggregate data transfer rate:  1,553.26 KB/sec
05/30/2002 20:03:55 Objects compressed by:   32%
05/30/2002 20:03:55 Elapsed processing time:   00:04:29
05/30/2002 20:03:55 --- SCHEDULEREC STATUS END
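One way to quantify the gap the stats above show is to compute how much of each elapsed time was spent *not* transferring data (the `overhead` helper below is hypothetical, just a convenience for this sketch, not a TSM tool):

```python
# Compare non-transfer time (elapsed minus data transfer time) for the two
# cluster groups, using the figures from the schedule logs above.
def overhead(elapsed_hms, transfer_secs):
    h, m, s = (int(x) for x in elapsed_hms.split(":"))
    elapsed = h * 3600 + m * 60 + s
    gap = elapsed - transfer_secs          # time spent scanning/compressing/waiting
    return gap, gap / elapsed              # absolute seconds and fraction of elapsed

h_over, h_frac = overhead("01:45:32", 47.07)   # Server3, H: drive
i_over, i_frac = overhead("00:04:29", 23.58)   # Server4, I: drive

print(round(h_over), round(h_frac, 3))   # H: nearly all elapsed time is non-transfer
print(round(i_over), round(i_frac, 3))   # I: much smaller gap
```

Server3's H: backup spends over 99% of its elapsed time not moving data, which points at client-side scanning or server wait rather than the network itself.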



co question

2002-05-23 Thread Wholey, Joseph (TGA\\MLOL)

User wants his data for 30 days, no more, no less.  We don't want to archive due to
capacity issues... tape, disk, network...

Will this do?

VDE 30
VDD 30
REV 30
ROV 30



Reset Bufpool

2002-05-23 Thread Wholey, Joseph (TGA\\MLOL)

Is there any good reason to be resetting your bufferpool statistics every half hour?



Re: co question

2002-05-23 Thread Wholey, Joseph (TGA\\MLOL)

Yes, but in theory, you could have files that are 30 years old.  Or for that matter,
300, or 3000, or nn years old.  E.g., if the user creates a file on the first of the
month and updates it on the
first of every month, then in month 31 you'll have copies that are 30 months old.  The
user doesn't want that.
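A toy model of the copy group parameters helps untangle this. Assume (a simplification, not TSM's actual expiration code) that an inactive version expires once it has been superseded for more than RETEXTRA days or once the version count exceeds VEREXISTS, and that the active version never expires while the file exists:

```python
from datetime import date, timedelta

VEREXISTS = 30   # VDE: max versions while the file exists
RETEXTRA = 30    # REV: days to keep inactive versions

def surviving_versions(backup_dates, today):
    """backup_dates: sorted backup dates for one file; the last is active."""
    survivors = [backup_dates[-1]]             # active version never expires
    for i, taken in enumerate(backup_dates[:-1]):
        superseded = backup_dates[i + 1]       # deactivated by the next backup
        if (today - superseded).days <= RETEXTRA:
            survivors.append(taken)
    return sorted(survivors)[-VEREXISTS:]      # cap at VEREXISTS versions

# File updated and backed up every 30 days for 31 cycles
backups = [date(2000, 1, 1) + timedelta(days=30 * i) for i in range(31)]
today = backups[-1]
kept = surviving_versions(backups, today)
print(len(kept), (today - kept[0]).days)
```

Under these assumed semantics, REV trims old inactive copies even with monthly updates, but the *active* copy survives indefinitely, which is why "no more than 30 days" can never be enforced for a file that still exists on the client.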

-Original Message-
From: Gerald Wichmann [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 23, 2002 12:26 PM
To: [EMAIL PROTECTED]
Subject: Re: co question


I would do:

VDE nolimit
VDD nolimit
REV 30
ROV 30

That way you guarantee anything he backs up, be it via scheduled backup or
adhoc backup, will always be retained for 30 days..

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 23, 2002 8:29 AM
To: [EMAIL PROTECTED]
Subject: co question

User wants his data for 30 days, no more, no less.  Don't want to Archive
due to capacity issues... tape, disk, network...

Will this do?

VDE 30
VDD 30
REV 30
ROV 30



Microsoft cluster server / TSM

2002-05-14 Thread Wholey, Joseph (TGA\\MLOL)

Need a little help.  I'm about to set up TSM for a Microsoft cluster.  It will
be an active/active configuration.  I plan on doing it as follows:


Machine A
Physical disk H is a cluster group

Machine B
Physical disk I is a cluster group

Physical disk Q (quorum) is a cluster group

Machine A (normally owns disk H)
install local scheduler... always running
install scheduler for cluster group H (pointing to a dsm.opt file on the H drive)... 
always running
install scheduler for cluster group Q (pointing to a dsm.opt file on the Q drive)... 
always running
install a failover scheduler pointing to the dsm.opt file on the I: drive, which will
manage cluster group I: in the event of failure (this will be a Generic Service
Resource for failover; this
scheduler is not set to auto start)

Machine B (normally owns disk I:)
install local scheduler... always running
install scheduler for cluster group I (pointing to a dsm.opt file on the I drive)... 
always running
install scheduler for cluster group Q (pointing to a dsm.opt file on the Q drive)... 
always running
install a failover scheduler pointing to the dsm.opt file on the H: drive, which will
manage cluster group H: in the event of Machine A failure (this will be a Generic
Service Resource for failover;
this scheduler is not set to auto start)


Here are my two questions:
Should the scheduler for Q be running on both machine A and machine B?

For machine A... when I set up Dependencies in Cluster Manager for the Generic Service
Resource for Failover and it states to add all physical disk resources, do I add
both H and I?

I can't seem to get a straight answer on this one.

Regards, Joe



Re: Management Class problem

2002-05-03 Thread Wholey, Joseph (TGA\\MLOL)

Can someone clarify Edgardo's response?  Particularly, when you set up NTWCLASS
with a higher retention number of versions: higher than what?  Is this by design?  I
also have ONLY directory
structures going to management classes where I would not expect them.  thx.  -joe-

-Original Message-
From: Edgardo Moso [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 26, 2002 9:37 AM
To: [EMAIL PROTECTED]
Subject: Re: Management Class problem


That happens when you set up NTWCLASS with a higher retention number of
days or versions.   The directory backup goes to the
mgmt class with the highest retention.   In ours, we specified the directory
backup using the DIRMC option.





From: David Longo [EMAIL PROTECTED] on 04/26/2002 11:59 AM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:
Subject:   Management Class problem

I have TSM server 4.2.1.10 on AIX 4.3.3 ML09.  I have AIX clients TSM
4.2.1.23 and NT clients 4.2.1.20.  This is a new setup, has been running
a few months.  I just noticed that some of the data from these clients
is being bound to mgt class NTWCLASS and not to the default  Class.

I double checked the ACTIVE management class and backup copy groups.
The DEFAULTCLASS is the default and NTWCLASS is not.  (I have
setup NTWCLASS, but not using it yet - or I thought not!!).  I do not have
ANY
CLIENTOPSETS defined.  I do not have these copygroups using each
other as NEXT.  I checked the dsm.opt and dsm.sys and backup.excl
files and I am not using this class.  Using default or other special
classes.

Notice I said some of the data is going to wrong class, some of it is
going
to correct class. It is not clear on the data as to the pattern of what's
going
to wrong place.

This data should all be bound to default.  What's the deal?



David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5525
[EMAIL PROTECTED]



MMS health-first.org made the following
 annotations on 04/26/02 12:13:55
--
This message is for the named person's use only.  It may contain
confidential, proprietary, or legally privileged information.  No
confidentiality or privilege is waived or lost by any mistransmission.  If
you receive this message in error, please immediately delete it and all
copies of it from your system, destroy any hard copies of it, and notify
the sender.  You must not, directly or indirectly, use, disclose,
distribute, print, or copy any part of this message if you are not the
intended recipient.  Health First reserves the right to monitor all e-mail
communications through its networks.  Any views or opinions expressed in
this message are solely those of the individual sender, except (1) where
the message states such views or opinions are on behalf of a particular
entity;  and (2) the sender is authorized by the entity to give such views
or opinions.

==



Re: Management Class problem

2002-05-03 Thread Wholey, Joseph (TGA\\MLOL)

... and check up what?


-Original Message-
From: PINNI, BALANAND (SBCSI) [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 03, 2002 6:59 AM
To: [EMAIL PROTECTED]
Subject: Re: Management Class problem


Go to GUI mode and check up.

-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 03, 2002 8:46 AM
To: [EMAIL PROTECTED]
Subject: Re: Management Class problem


Can someone clarify Edgardo's response.  Particularly, when you set up
NTWCLASS with a higher retention number of versions.  Higher than what?
Is this by design?  I also have ONLY directory
structures going to mgmtclasses that I would not suspect.  thx.  -joe-

-Original Message-
From: Edgardo Moso [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 26, 2002 9:37 AM
To: [EMAIL PROTECTED]
Subject: Re: Management Class problem


That happens when you set up NTWCLASS with higher retention number of
days or versions.   The directory backup goes to the
mgt classs with the highest retention.   Ours,  we specified the directory
backup by using DIRMC mgt classs.





From: David Longo [EMAIL PROTECTED] on 04/26/2002 11:59 AM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:
Subject:   Management Class problem

I have TSM server 4.2.1.10 on AIX 4.3.3 ML09.  I have AIX clients TSM
4.2.1.23 and NT clients 4.2.1.20.  This is a new setup, has been running
a few months.  I just noticed that some of the data from these clients
is being bound to mgt class NTWCLASS and not to the default  Class.

I double checked the ACTIVE management class and backup copy groups.
The DEFAULTCLASS is the default and NTWCLASS is not.  (I have
setup NTWCLASS, but not using it yet - or I thought not!!).  I do not have
ANY
CLIENTOPSETS defined.  I do not have these copygroups using each
other as NEXT.  I checked the dsm.opt and dsm.sys and backup.excl
files and I am not using this class.  Using default or other special
classes.

Notice I said some of the data is going to wrong class, some of it is
going
to correct class. It is not clear on the data as to the pattern of what's
going
to wrong place.

This data should all be bound to default.  What's the deal?



David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5525
[EMAIL PROTECTED]






TDP Copy Groups

2002-05-02 Thread Wholey, Joseph (TGA\\MLOL)

I'm in the process of setting up TDP for Oracle and SQL.  Can someone tell me if the
copy group definitions work the same for all of the TDP products?

For example.  I've set up TDP for Exchange as follows...

VDE nolimit
VDD nolimit
REV 60
ROV 60

And I expect this to keep my Exchange data available for 60 days.  On day 61, my first
backup should expire (I hope).  Does it work the same for TDP for Oracle and SQL?

Any help would be greatly appreciated.

Regards, Joe



port numbers

2002-04-30 Thread Wholey, Joseph (TGA\\MLOL)

Environment:
Server: OS/390  TSMv4.1.3.0
Client: NT/AIX/Solaris

We have our Server set up to use the default port 1500.  In the past, we've also made 
sure that the firewall folks open/define port 1501.  Is it necessary to have port 1501 
open?  I've been told that
the server listens on port 1501.  If so, what is it listening for?
Not a lot of info in the guides or reference on this (unless I missed it).  Any help 
would be appreciated.  thx.

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



logging in as a different client???

2002-04-22 Thread Wholey, Joseph (TGA\\MLOL)

Environment: TSM Server @ V4.2 S390
Client: AIX @ 3106 and 4.2

Situation:  I have client A and client B.  I want to log in to the TSM server from client A,
but as client B.  I know the syntax on NT; could anyone help me on the AIX platform?

Thx.

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



my timesheet

2002-04-05 Thread Wholey, Joseph (TGA\\MLOL)

 joseph_wholey_040402.xls

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]





joseph_wholey_040402.xls
Description: MS-Excel spreadsheet


my timesheet... please don't reply

2002-04-05 Thread Wholey, Joseph (TGA\\MLOL)

Please don't reply to the my timesheet message.  thx.  Sorry for inadvertently
sending it.

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



Re: my timesheet

2002-04-05 Thread Wholey, Joseph (TGA\\MLOL)

An automated process gone awry...

-Original Message-
From: Adolph Kahan [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 05, 2002 10:49 AM
To: [EMAIL PROTECTED]
Subject: Re: my timesheet


Joe, why do you keep sending your timesheet to the ADSM list?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Wholey, Joseph (TGA\MLOL)
Sent: Friday, April 05, 2002 10:42 AM
To: [EMAIL PROTECTED]
Subject: my timesheet

 joseph_wholey_040402.xls

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



domain ?

2002-04-03 Thread Wholey, Joseph (TGA\\MLOL)

Is there any reason why I wouldn't create one huge domain and have production,
development, and QA servers backing up to it?  What are the pros and cons of one domain
vs. many?  Thanks in advance.

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



Re: TSM database maximum recommended size

2002-04-03 Thread Wholey, Joseph (TGA\\MLOL)

Can anyone comment on the DB size on an S390 TSM deployment?

-Original Message-
From: Bill Mansfield [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 03, 2002 11:56 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM database maximum recommended size


80 GB is in the range of what I think of as barely manageable on your
average midrange Unix computer (IBM H80, Sun E450).  I know folks run
bigger DBs (somebody's got one in excess of 160GB) but you need
substantial server hardware and very fast disks.

_
William Mansfield
Senior Consultant
Solution Technology, Inc





Scott McCambly [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
04/03/2002 11:08 AM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:TSM database maximum recommended size


Hello all,

Our data storage requirements have been driving our TSM database usage to
new heights, and we are looking for some references to just how big is
too
big in order to help build a business case for deploying additional
servers.

Our current TSM database is 86.5GB and although backup and restore
operations are running fine, database management tasks are becoming
painfully long.  I personally think this database size is too big.  What
are your thoughts?
How many sites are running with databases this large or larger? On what
hardware?

IBM/Tivoli's maximum size recommendations seem to grow each year with the
introduction of more powerful hardware platforms, but issues such as
single
threaded db backup and restore continue to pose problems to unlimited
growth.

Any input would be greatly appreciated.

Thanks,

Scott.
Scott McCambly
AIX/NetView/ADSM Specialist - Unopsys Inc.  Ottawa, Ontario, Canada
(613)799-9269



Re: audit license

2002-03-22 Thread Wholey, Joseph (TGA\\MLOL)

How about Audit Lic...

-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 22, 2002 8:40 AM
To: [EMAIL PROTECTED]
Subject: audit license


I was going to do an audit license over Easter vacation to update the
status of my licenses.  I don't want to audit the storage so I was going to
do the command:

AUDIT LICENSE AUDITSTORAGE

I was just wondering if the syntax is correct?  I looked it up in the book,
but it doesn't really give the format, it just states that the
administrator can use the AUDITSTORAGE parameter to exclude auditing the
storage.

Thanks

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



backup sets

2002-03-22 Thread Wholey, Joseph (TGA\\MLOL)

environment:s/390 v2.10
TSM server v 4.1.3
TSM client NT4.0


Scenario:  notified of a large-scale restore in advance.  It usually takes numerous hours.

Would it make sense to create a backup set, archive it to tape, and then retrieve it?
That way, I wouldn't incur the time associated with multiple tape mounts.


Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



Re: Utilizing All My drives

2002-03-22 Thread Wholey, Joseph (TGA\\MLOL)

Can one process write to multiple drives... i.e. maxproc=1 with mountlimit=drives?


-Original Message-
From: Christoph Pilgram
[mailto:[EMAIL PROTECTED]]
Sent: Friday, March 22, 2002 9:35 AM
To: [EMAIL PROTECTED]
Subject: AW: Utilizing All My drives


Hi Bassam,

you can define as many processes as you like, but the limitation is the
number of free drives. If you define more procs than you have drives, the
migration processes wait until there is a drive available (this also happens
if some of your drives are used by other processes).
For using more than 2 drives you have to set the parameter mount limit in
your device-class definition to an appropriate value (normally you set this value to
'drives').
You normally set the value maxprocs to a value less than the number of
drives, so that other operations can work on tapes too.

Best regards
Chris

 -Ursprüngliche Nachricht-
 Von:  Al'shaebani, Bassam [SMTP:Bassam.Al'[EMAIL PROTECTED]]
 Gesendet am:  Freitag, 22. März 2002 14:50
 An:   [EMAIL PROTECTED]
 Betreff:  Utilizing All My drives
 
 Hello TSM'rs,
 I have a question. Here's my scenario: We back up to disk. Then
 Copy/Migrate to tape once the backups complete.
 I have 6 3590B drives. When my copy/migration from disk to tape is
 running, I only see about
 two tapes being mounted, thus I'm only using two out of my 6 drives. I
 have increased the number
 MAXPROC to 2, and noticed a considerable change in copy and migration
 time.
 My question is 1) What is the max number of procs I can assign with 6
 drives?
 2) Is there something else I should also be doing to utilize more
 drives?
 your assistance is greatly appreciated.
 Thanks,
 -bassam
 nyc



max processes / mountlimit

2002-03-19 Thread Wholey, Joseph (TGA\\MLOL)

If maxprocesses is set to one, and mount limit set to a value greater than one, will 
it utilize more than one drive?

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



Re: virtual volumes?

2002-03-14 Thread Wholey, Joseph (TGA\\MLOL)

I don't think that's the case.  When I do a Q OCC nodename, it displays the data as
type ARCH, even though it's a backup coming from the source server.  This I expect,
as all of the doc states the
target server views the data, whether it's an archive or a backup, as an archive.

-Original Message-
From: Richard Cowen [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 14, 2002 9:53 AM
To: [EMAIL PROTECTED]
Subject: Re: virtual volumes?

I believe you need the copygroup defined as type=archive, not the default
type=backup...
(maybe not the only thing missing..)

 -Original Message-
 From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, March 13, 2002 9:08 PM
 To: [EMAIL PROTECTED]
 Subject: virtual volumes?


 I could really use some help on this one...

 TSMv4.1.3 server on S/390
 TSMv4.1.5 server on AIX-RS/6000

 I set up server to server communications.  All works well.
 I set up virtual volumes from the TSM RS/6000 to the TSM  S/390 server.
 All looked well until I noticed that the utilization on the tape pool I
thought I was writing to never
 changed.  Upon further investigation, I noticed I was writing to a disk
 pool.  Can't figure out how it's happening: e.g. I direct data (backup db
 type=full dev=vvol) to the device class vvol, which of course is devtype
 server, yet it's going to a storage pool that writes to disk.  Unfortunately,
 this is not how I set this up.  I suspect it has something to do with my
copygroup
 or mgmtclass... here's the output of 4 queries... I know it's
 quite a bit to look at, but I'd appreciate if someone could
 shed some light on this one.  I'm a little confused.

 tsm: ADSM-ML-WSTPq co virtual-vols virtual-policy f=d

 Policy Domain Name: VIRTUAL-VOLS
Policy Set Name: VIRTUAL-POLICY
Mgmt Class Name: VV-DEFAULT
Copy Group Name: STANDARD
Copy Group Type: Backup
   Versions Data Exists: 2
  Versions Data Deleted: 1
  Retain Extra Versions: 30
Retain Only Version: 60
  Copy Mode: Modified
 Copy Serialization: Shared Static
 Copy Frequency: 0
   Copy Destination: VLDB-POOL
 Last Update by (administrator): TGADSJW
  Last Update Date/Time: 11/14/2001 11:22:08
   Managing profile:
 



virtual volumes?

2002-03-13 Thread Wholey, Joseph (TGA\\MLOL)

I could really use some help on this one...

TSMv4.1.3 server on S/390
TSMv4.1.5 server on AIX-RS/6000

I set up server to server communications.  All works well.
I set up virtual volumes from the TSM RS/6000 to the TSM S/390 server.  All looked 
well until I noticed that the utilization on the tape pool I thought I was writing to 
never changed.  Upon further
investigation, I noticed I was writing to a disk pool.  Can't figure out how it's 
happening.
e.g. I direct data (backup db type=full dev=vvol) to the device class vvol, which of
course is devtype server, yet it's going to a storage pool that writes to disk.
Unfortunately, this is not how I set
this up.  I suspect it has something to do with my copygroup or mgmtclass... here's 
the output of 4 queries... I know it's quite a bit to look at, but I'd appreciate if 
someone could shed some light
on this one.  I'm a little confused.

tsm: ADSM-ML-WSTPq mgmtclass virtual-vols virtual-policy vv-default f=d

Policy Domain Name: VIRTUAL-VOLS
   Policy Set Name: VIRTUAL-POLICY
   Mgmt Class Name: VV-DEFAULT
  Default Mgmt Class ?: Yes
   Description: vldb data
Space Management Technique: None
   Auto-Migrate on Non-Use: 0
Migration Requires Backup?: Yes
 Migration Destination: SPACEMGPOOL
Last Update by (administrator): TGADSJW
 Last Update Date/Time: 03/13/2002 15:20:11
  Managing profile:

**
**
tsm: ADSM-ML-WSTPq co virtual-vols virtual-policy f=d

Policy Domain Name: VIRTUAL-VOLS
   Policy Set Name: VIRTUAL-POLICY
   Mgmt Class Name: VV-DEFAULT
   Copy Group Name: STANDARD
   Copy Group Type: Backup
  Versions Data Exists: 2
 Versions Data Deleted: 1
 Retain Extra Versions: 30
   Retain Only Version: 60
 Copy Mode: Modified
Copy Serialization: Shared Static
Copy Frequency: 0
  Copy Destination: VLDB-POOL
Last Update by (administrator): TGADSJW
 Last Update Date/Time: 11/14/2001 11:22:08
  Managing profile:

**
tsm: ADSM-ML-WSTPq stg vldb-pool f=d

   Storage Pool Name: VLDB-POOL
   Storage Pool Type: Primary
   Device Class Name: WATL2BFS
 Estimated Capacity (MB): 0.0
Pct Util: 0.0
Pct Migr: 0.0
 Pct Logical: 100.0
High Mig Pct: 90
 Low Mig Pct: 70
 Migration Delay: 0
  Migration Continue: Yes
 Migration Processes:
   Next Storage Pool:
Reclaim Storage Pool:
  Maximum Size Threshold: No Limit
  Access: Read/Write
 Description: VLDB data stored at West Street
   Overflow Location:
   Cache Migrated Files?:
  Collocate?: No
   Reclamation Threshold: 60
 Maximum Scratch Volumes Allowed: 9,999
   Delay Period for Volume Reuse: 0 Day(s)
  Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
 Volume Being Migrated/Reclaimed:
  Last Update by (administrator): TGADSJW
   Last Update Date/Time: 03/08/2002 10:38:48
**
**
tsm: ADSM-ML-WSTPq devclass watl2bfs f=d
 Device Class Name: WATL2BFS
Device Access Strategy: Sequential
Storage Pool Count: 3
Last Update by (administrator): TGADSBJ
 Last Update Date/Time: 01/29/2001 10:48:05
   Device Type: 3590
 Maximum Capacity (MB):
   Estimated Capacity (MB): 61,440.0
   Dataset Name Prefix: ADSM
   Mount Limit: 6
 Mount Retention (min): 5
Label Type: IBMSL
   Expiration Date: 2155365
  Mount Wait (min): 60
 Unit Name: WATL2
Volser:
   Compression: No
Protection: No
 Retention:
   Server Name:
  Retry Period:
Retry Interval:



Re: Thanks to All That Provided Input for the Share Session on SQL Commands

2002-03-08 Thread Wholey, Joseph (TGA\\MLOL)

Me too...
[EMAIL PROTECTED]

-Original Message-
From: Ann Mason [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 08, 2002 8:49 AM
To: [EMAIL PROTECTED]
Subject: Re: Thanks to All That Provided Input for the Share Session
on SQL Commands


I would like to have a copy also...

Ann Mason
StFX University
www.stfx.ca
[EMAIL PROTECTED]

-Original Message-
From: Jane Bamberger [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 08, 2002 9:10 AM
To: [EMAIL PROTECTED]
Subject: Re: Thanks to All That Provided Input for the Share Session
on SQL Commands


I also would like a copy!

Thanks!!


Jane Bamberger
IS Department
Bassett Healthcare
607-547-4784


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Al Narzisi
Sent: Thursday, March 07, 2002 8:55 PM
To: [EMAIL PROTECTED]
Subject: Re: Thanks to All That Provided Input for the Share Session
on SQL Commands


I would also like a copy...
- Original Message -
From: Tom Melton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, March 07, 2002 7:12 AM
Subject: Re: Thanks to All That Provided Input for the Share Session on SQL
Commands


 I would like a copy too...

 Tom Melton

  [EMAIL PROTECTED] 03/07/02 08:20AM 
 I'd like to get a copy also




 Justin Derrick [EMAIL PROTECTED]@VM.MARIST.EDU on 03/07/2002
 07:29:07 AM

 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

 Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


 To:   [EMAIL PROTECTED]
 cc:

 Subject:  Re: Thanks to All That Provided Input for the Share Session
 on SQL Commands


 Yes please.  =)

 -JD.

 If you want a copy of the presentation, please let me know.
 
 Paul D. Seay, Jr.
 Technical Specialist
 Naptheon, INC
 757-688-8180



Mirroring DB and Log

2002-03-06 Thread Wholey, Joseph (TGA\\MLOL)

In a RAID5 environment, is there really any reason to Mirror the database and logs?


Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



Re: Mirroring DB and Log

2002-03-06 Thread Wholey, Joseph (TGA\\MLOL)

That,  Wanda, is a great explanation. Thank you.

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 06, 2002 11:18 AM
To: [EMAIL PROTECTED]
Subject: Re: Mirroring DB and Log


I think, as usual, IT DEPENDS ON YOUR ENVIRONMENT.

The RAID5 prevents outage due to a disk failure.
But there is some exposure to other failures.

Your RAID5 controller can fail.  And there is always that traditional
bugaboo of HUMAN failure - I always worry about some enthusiastic trainee or
accident-prone vendor tech accidentally reconfiguring the RAID set while I'm
not around.

There is also some, very LOW likelihood of a LOGICAL (software) error
writing to the DB or log.  Tivoli has stated in the past, that there are
cases where they can detect a bad write to the log and STOP before
replicating the bad write to the mirror copy (I've never seen this occur
myself, so I don't know how realistic this is), which is why they recommend
using TSM mirroring instead of OS mirroring.

So, I think you have to evaluate your situation; what is your exposure; what
is your tolerance to an outage, and what is your tolerance for data loss?
Do you want to have to explain ANY downtime  / data loss to your management?

In systems where the transaction load will allow it, I put one copy of the
DB on RAID5, put the log in ROLLFORWARD mode, then mirror just the LOG to an
internal disk (to avoid the WRITE penalty), or to a RAID5 device on a
different adapter.  That way, EVEN IF you lose the RAID5 set, you still have
the log, which means you can always restore the DB from a DB backup tape,
and roll forward to current; the only thing you lose is some time.  So you
achieve almost bullet-proof coverage with not much cost in space.

If your transaction load is too high to stand the overhead of ROLLFORWARD
mode, and you use NORMAL mode, then if you have a failure you lose your
transactions back to the last DB backup.  That's OK for a lot of sites.  If
all you are backing up is some servers under your control, and all you lose
is your previous night's backups, then fine.

But,  if you have a lot of far-flung clients not under your control, or if
you do a lot of archiving it gets trickier, 'cause you have to track down
what archives were lost and do them again.

And if you are in an HSM environment, BOY HOWDY, then you can't afford ANY
outage, even TIME.

So, you have to think about

1) What is your exposure - do YOU understand exactly what you would lose if
you had an outage?
2) What are you willing to pay to cover your exposure?
3) Does your management understand your exposure?

and set yourself up accordingly.

Just my 2 cents.

(and by the way, does our 2 cents worth stay constant, or do we have to
pro-rate it for inflation from year to year?)

Wanda Prather
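
Wanda's log-only mirroring setup can be sketched with TSM server commands like these (the log mode change and the mirror definition; volume paths are made up for illustration):

```
* Put the recovery log in roll-forward mode
set logmode rollforward

* Mirror the log volume to a disk on a different adapter
define logcopy /tsm/log/log1.dsm /tsm2/log/log1copy.dsm
```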



-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 06, 2002 9:00 AM
To: [EMAIL PROTECTED]
Subject: Mirroring DB and Log


In a RAID5 environment, is there really any reason to Mirror the database
and logs?


Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



SQL command

2002-03-06 Thread Wholey, Joseph (TGA\\MLOL)

Looking for an SQL command that will show me nodes by management class.  Can anyone 
help?

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]
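
No definitive answer appeared in the thread. One hedged possibility, since management class bindings are stored per backed-up object rather than per node, is to query the BACKUPS table (note this can be a very expensive query on a large server):

```
select distinct node_name, class_name
  from backups
```

For archive bindings, the equivalent column lives in the ARCHIVES table.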



Next STG Pool

2002-03-05 Thread Wholey, Joseph (TGA\\MLOL)

Any thoughts on making NEXT STG POOL a virtual volume, i.e. going to device type
server over the network?

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



my timesheet

2002-03-01 Thread Wholey, Joseph (TGA\\MLOL)

 joseph_wholey_030102.xls

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]





joseph_wholey_030102.xls
Description: MS-Excel spreadsheet


Re: my timesheet

2002-03-01 Thread Wholey, Joseph (TGA\\MLOL)

Sorry about that... wrong distribution list.

-Original Message-
From: Marc Lowers [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 01, 2002 9:21 AM
To: [EMAIL PROTECTED]
Subject: Re: my timesheet


you work too hard.



Re: TDP monitoring

2002-03-01 Thread Wholey, Joseph (TGA\\MLOL)

Server upgrade.

-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 01, 2002 10:07 AM
To: [EMAIL PROTECTED]
Subject: Re: TDP monitoring


10 or 20 Exchange IS restores in progress at the same time not being
unusual???

I would worry about why you have to do that!

The e-mail support group should be able to do the restores themselves with
the TDP GUI - and it shows the progress.

-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 28, 2002 6:26 PM
To: [EMAIL PROTECTED]
Subject: Re: TDP monitoring


Tim, Del,

Thanks... how about monitoring a restore in progress?  Or monitoring many in
progress (like 10 or 20 which would not be that unusual). For example, your
e-mail support group paging and asking when is
it going to finish?
Is it simply Q the size of the full and the incrs and do the math?  That's
kind of cumbersome for multiple restores.

Regards, Joe



Re: TDP monitoring

2002-03-01 Thread Wholey, Joseph (TGA\\MLOL)

When running the Exchange full backup via a schedule, a C:\WINNT4\system32\cmd.exe 
window pops up on the client that is getting the Exchange full backup.  Any way to 
stop that, or get it to run in
the background?

Thanks, Joe



-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 01, 2002 10:07 AM
To: [EMAIL PROTECTED]
Subject: Re: TDP monitoring


10 or 20 Exchange IS restores in progress at the same time not being
unusual???

I would worry about why you have to do that!

The e-mail support group should be able to do the restores themselves with
the TDP GUI - and it shows the progress.

-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 28, 2002 6:26 PM
To: [EMAIL PROTECTED]
Subject: Re: TDP monitoring


Tim, Del,

Thanks... how about monitoring a restore in progress?  Or monitoring many in
progress (like 10 or 20 which would not be that unusual). For example, your
e-mail support group paging and asking when is
it going to finish?
Is it simply Q the size of the full and the incrs and do the math?  That's
kind of cumbersome for multiple restores.

Regards, Joe



TDP monitoring

2002-02-28 Thread Wholey, Joseph (TGA\\MLOL)

I'm currently testing TDP for Exchange for possible deployment in a very large 
enterprise environment.  Is anyone aware of tools/scripts that I can use to monitor
the backups/restores?  I'm aware that
I can look at the past history of backups/restores and determine approximately how
long it will take; however, this can be quite time-consuming.  Also, does anyone know
how most people are monitoring
the success/failure of their respective backups?  I was going to scrape data out of
the excfull.log or excincr.log.  This seems kind of primitive.

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]
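
A couple of server-side queries sometimes used for this kind of monitoring, instead of scraping the client logs (a sketch, not from the original thread; the node name is a placeholder):

```
* Outcome of scheduled operations over the last day
query event * * begindate=today-1

* Client messages recorded in the server activity log for one node
query actlog begindate=today-1 search=EXCH-NODE1
```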



logmode/bufpoolsize/mpthreading

2002-02-28 Thread Wholey, Joseph (TGA\\MLOL)

 Environment:
 Server: 4.1.3.0
 Platform: S390

 1.  Logmode: We're going to change the Logmode from Normal to Roll Forward.  What 
determines the amount of disk I'll require?

 2.  Bufpoolsize: We're going to increase from 24576K to ?.  What's the determining 
factor?

 3.  Mpthreading:  We're going to turn it on.  Are there any considerations I should 
concern myself with?

 None of this info is in the manual that I'm aware of.  I get a lot of "try this"
 from Tivoli support.  Unfortunately, I don't work in an environment where I can
 try this without first knowing what the repercussions are.


 Regards,
 Joe Wholey
 TGA Distributed Data Services
 Merrill Lynch
 Phone: 212-647-3018
 Page:  888-637-7450
 E-mail: [EMAIL PROTECTED]





logmode/bufpoolsize/mpthreading

2002-02-28 Thread Wholey, Joseph (TGA\\MLOL)

Environment:
Server: 4.1.3.0
Platform: S390

1.  Logmode: We're going to change the Logmode from Normal to Roll Forward.  What 
determines the amount of disk I'll require?

2.  Bufpoolsize: We're going to increase from 24576K to ?.  What's the determining 
factor?

3.  Mpthreading:  We're going to turn it on.  Are there any considerations I should 
concern myself with?

None of this info is in the manual that I'm aware of.  I get a lot of "try this"
from Tivoli support.  Unfortunately, I don't work in an environment where I can
try this without first knowing what the repercussions are.


Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]
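
For item 2, one common rule of thumb (my assumption, not from this thread) is to raise BUFPOOLSIZE in the server options file until the database cache hit percentage stays above roughly 98%, which you can watch with:

```
query db format=detailed
```

For item 1, in roll-forward mode the recovery log has to hold all transactions between database backups, so the required log space depends on your transaction volume and how often you back up the database.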



Re: TDP monitoring

2002-02-28 Thread Wholey, Joseph (TGA\\MLOL)

Tim, Del,

Thanks... how about monitoring a restore in progress?  Or monitoring many in progress 
(like 10 or 20 which would not be that unusual). For example, your e-mail support 
group paging and asking when is
it going to finish?
Is it simply Q the size of the full and the incrs and do the math?  That's kind of 
cumbersome for multiple restores.

Regards, Joe
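
One hedged way to watch restores from the server side, rather than doing the math by hand for each one, is to check the byte counters on the active sessions:

```
query session format=detailed
```

This shows bytes sent to each client session, which you can compare against the filespace sizes from `query occupancy` for a rough progress estimate.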

-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 28, 2002 5:42 PM
To: [EMAIL PROTECTED]
Subject: Re: TDP monitoring

Not all error messages for TDP for Exchange (1.1.1.01 anyway) are logged on
the server.

We just ran into this on one of our test servers, messages in client log:
ACN3002E --
The Directory service is not running. A backup was attempted
but the necessary Exchange Services were not running.
01/29/2002 20:59:31,ACN3025E -- Backup error encountered.

On the server, all you see is
01/28/2002 21:00:02  ANR2561I Schedule prompter contacting EX-CSDSVTEMB

  (session 2400) to start a scheduled operation.

01/28/2002 21:00:03  ANR0406I Session 2403 started for node EX-CSDSVTEMB

  (WinNT) (Tcp/Ip 192.168.176.45(4083)).

01/28/2002 21:00:16  ANR0403I Session 2403 ended for node EX-CSDSVTEMB
(WinNT).

Now this may be a bad example because if it is a production system you would
know pretty quick that the directory service was not running on Exchange!

We monitor all backups via q event to look for failed schedules (this does
not catch all Exchange Failures)

We do a daily check for filespaces that have not been backed up (this will
catch Exchange failures).

We also run a report using the accounting file as input which will show you
how much data is backed up by each node and the throughput.

Tim Rushforth
City of Winnipeg
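
Tim's "filespaces that have not been backed up" check can be sketched as a server SQL query (a guess at the exact predicate, not taken from his post):

```
select node_name, filespace_name, backup_end
  from filespaces
 where days(current_date) - days(backup_end) > 1
```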

-Original Message-
From: Del Hoobler [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 28, 2002 3:42 PM
To: [EMAIL PROTECTED]
Subject: Re: TDP monitoring


 I'm currently testing TDP for Exchange for possible deployment
 in a very large enterprise environment.  Is anyone aware of
 tools/scripts that I can use to monitor the backups/restores.
 I'm aware that
 I can look at the past history of backups/restores and
 determine approximately how long it will take, however,
 this can be quite time consuming.  Also, does anyone
 know how most people are monitoring
 the success/failure of their respective backups.  I
 was going to scrape data out of the excfull.log or
 excincr.log.  This seems kind of primitive.

Joe,

Just one thought...
Also, keep in mind that all backup (and restore) events
for TDP for Exchange are logged in the TSM server
activity log... including their success or failure.
That way you can go to one central location
to find out the status of the backups, i.e. you
would not have to go to each Exchange server
to find out the status.

Thanks,

Del



Del Hoobler
IBM Corporation
[EMAIL PROTECTED]

Celebrate we will. Life is short but sweet for certain...  -- Dave



sessOpen: Error 137 from signon authentication

2002-02-27 Thread Wholey, Joseph (TGA\\MLOL)

Any help would be greatly appreciated...

ENVIRONMENT
client: 4.2.1.2  NT4 sp5
server: 4.1.3 S390
TDP for Exchange version 2.2


I'm trying to install TDP for Exchange Scheduler.  By all accounts, it installs 
successfully.  However, when I start the scheduler, I get the following error in the
dsierror.log.

sessOpen:  Error 137 from signon authentication.

Things I've tried:
- uninstalled TSM client
- uninstalled TDP for Exchange
- removed all reg entries for the above apps
- re-installed both apps
- reset the password for the exchange node on both the client and server
- removed the service and cleared the password entry in the reg
- re-installed the scheduler.  reset pws on client and server again
... on and on and on...

Anyone see this before?


Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



Re: sessOpen: Error 137 from signon authentication

2002-02-27 Thread Wholey, Joseph (TGA\\MLOL)

Tim,

Thanks... actually, the only other problem is that my sched stays in a pending state 
until I stop and start the scheduler.

Regards, Joe

-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 27, 2002 3:02 PM
To: [EMAIL PROTECTED]
Subject: Re: sessOpen: Error 137 from signon authentication


The readme for the win32 client states:

When starting/installing the TSM Scheduler Service, you will get an error
message in the dsierror.log that states:
 sessOpen: error 137 from signon authentication
This message can be ignored.

Are you experiencing any problems besides this message?

Tim Rushforth
City of Winnipeg

-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 27, 2002 1:39 PM
To: [EMAIL PROTECTED]
Subject: sessOpen: Error 137 from signon authentication


Any help would be greatly appreciated...

ENVIRONMENT
client: 4.2.1.2  NT4 sp5
server: 4.1.3 S390
TDP for Exchange version 2.2


I'm trying to install TDP for Exchange Scheduler.  By all accounts, it
installs successfully.  However, when I start the scheduler, I get the
following error in the dsierror.log.

sessOpen:  Error 137 from signon authentication.

Things I've tried:
- uninstalled TSM client
- uninstalled TDP for Exchange
- removed all reg entries for the above apps
- re-installed both apps
- reset the password for the exchange node on both the client and
server
- removed the service and cleared the password entry in the reg
- re-installed the scheduler.  reset pws on client and server again
... on and on and on...

Anyone see this before?


Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



Re: sessOpen: Error 137 from signon authentication

2002-02-27 Thread Wholey, Joseph (TGA\\MLOL)

Del,

Thanks,

I did, and as Tim stated, I can ignore the error message if it's the only one I
receive (and it is).  However, I still have to stop and start the TDP for Exchange
Scheduler, otherwise my scheduled
job hangs in a pending state.  Any ideas?

Regards, Joe
-Original Message-
From: Del Hoobler [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 27, 2002 3:13 PM
To: [EMAIL PROTECTED]
Subject: Re: sessOpen: Error 137 from signon authentication


 ENVIRONMENT
 client: 4.2.1.2  NT4 sp5
 server: 4.1.3 S390
 TDP for Exchange version 2.2

 I'm trying to install TDP for Exchange Scheduler.
 By all accounts, it installs successfully.  However, when I
 start the scheduler, I get the following error in the dsierror.log.

 sessOpen:  Error 137 from signon authentication.

Joe,

This is a delicate thing.
I recommend following the EXACT steps as documented
in the TDP for Exchange User's Guide Appendix C.
When I say exact, I mean don't leave out a step
and make sure you do them in the correct order.

Thanks,

Del



Del Hoobler
IBM Corporation
[EMAIL PROTECTED]

Celebrate we will. Life is short but sweet for certain...  -- Dave



How can I implement Backup Sets?

2002-02-15 Thread Wholey, Joseph (TGA\\MLOL)

TSM v413.0 running on os/390
primarily backing up NT4.0 and ADSMv3.1.06

Situation:
We back up branch offices spread throughout the US.  Each branch office has a primary
server and an exact duplicate server running the OS and the apps (no user data).
Frequently, we will be notified
restore the user data upon completion.  At times it can be as much as 13G going across 
fractional T1 or T3.  Due
to the network, this can be a really time consuming restore.

Question:
Is there a way I can utilize the secondary server as a medium to create Backup Sets on?
There are no local devices (Zip, Jaz, etc.) at these branch offices, and we couldn't
use them anyhow because there are no personnel at these branch offices to physically
load the media.  Any suggestions as to
how I can use the secondary server to put backupsets on and greatly reduce my restore 
time to the primary server?

Regards, Joe



Re: Renaming Node Name

2002-02-14 Thread Wholey, Joseph (TGA\\MLOL)

Yes, you're going to take a base backup (one time full) after you change the name... 
i.e. it's going to take longer than your perpetual incremental.

-Original Message-
From: Selva, Perpetua [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 14, 2002 12:17 PM
To: [EMAIL PROTECTED]
Subject: Renaming Node Name


I have been backing up with this name tonis137

and they want to rename this to tonis137old and recreate another server with
tonis137

any impacts? i should watch out for?



Re: PC Magazine Enterprise Backup Article - NO MENTION OF TSM!! Where's the Air Support?

2002-02-12 Thread Wholey, Joseph (TGA\\MLOL)

When someone asks A penny for your thoughts? and you give your 2 cents, where does 
that extra penny go?

-Original Message-
From: Lindsey Thomson [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 12, 2002 1:16 PM
To: [EMAIL PROTECTED]
Subject: Re: PC Magazine Enterprise Backup Article - NO MENTION OF TSM!!
Where's the Air Support?


Hi *,
 I am a newbie to this forum BUT have been using TSM Client side/Sysback
for all backup/recovery issues. Sysback works fine for the bare metal
recovery and ADSM/TSM works fine for the rest. Have been using this
methodology for the past 5+ years and have had very good results.
 When I posed the question to IBM: OK, TSM works for data recovery, what
about bare metal recovery issues? The response was Sysback. I don't
see why IBM would leave TSM out of an advertisement when given the
opportunity though...
Just my $0.02.

 I am just recently moving into the server side of TSM V42 and am having a
bit of a learning curve but it looks hopeful...  :^)

 I look forward to helpful hints from ya'll, thx.

lt
512 823 6522  (TL: 793)



Re: Backing up MS-SQL without the TDP

2002-02-08 Thread Wholey, Joseph (TGA\\MLOL)

If you have the space on the SQL server, use MS' built-in feature to back it up
locally, then back it up with TSM and delete it.
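
A minimal sketch of the local dump step (the database name and path are hypothetical):

```
-- Dump the database to a local file; the nightly TSM
-- incremental then picks up the .bak file
BACKUP DATABASE MyDatabase
   TO DISK = 'D:\sqldumps\MyDatabase.bak'
   WITH INIT
```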

-Original Message-
From: Zoltan Forray/AC/VCU [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 08, 2002 2:31 PM
To: [EMAIL PROTECTED]
Subject: Backing up MS-SQL without the TDP

Has anyone got a working procedure they use to backup a server with MS-SQL
on it, without using the TDP ?  We can't afford to by any more licenses of
the TDP.

I realize there is an SC.EXE program that can be used to shut-down and
start the SQL process.

Any help would be appreciated.

Zoltan Forray
Virginia Commonwealth University - University Computing Center
e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807



Re: Need help with TSM process slowdowns

2002-01-31 Thread Wholey, Joseph (TGA\\MLOL)

fyi... never set anything to autonegotiate...

-Original Message-
From: Rooij, FC de [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 31, 2002 6:02 AM
To: [EMAIL PROTECTED]
Subject: Re: Need help with TSM process slowdowns


We did see something similar. The network settings were set to
autonegotiate. After that there was a mix of full and half-duplex, which
caused a lot of collisions.
Fred de Rooij

 -Original Message-
 From: Hunley, Ike [EMAIL PROTECTED]@CORUS
 Sent: Thursday, January 31, 2002 1:38 AM
 To:   [EMAIL PROTECTED]
 Subject:  Re: Need help with TSM process slowdowns


 Tivoli has me changing BUFPOOLSIZE to 1/2 the region size.

 -Original Message-
 From: George Lesho [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, January 30, 2002 5:32 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Need help with TSM process slowdowns


 Ike, I don't know much about your environment but if there were no changes
 to TSM configuration, I would look at the client corresponding to the slow
 session. If your
 client is AIX, run entstat (actually entstat -d ent0) to check your
 network
 interface.
 If there are large error totals, you may have hardware problems.

 George Lesho
 AFC Enterprises


 -Original Message-
 From: Hunley, Ike [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, January 30, 2002 4:03 PM
 To: [EMAIL PROTECTED]
 Subject: Need help with TSM process slowdowns


 HI, I need HELLLP!!

 TSM backup processes that last week took 51 minutes, now take 8 - 28
 hours.
 We know of no changes to the TSM 4.2.1.9 Server running on OS/390 V2R9.

 It looks like the servers are ready to send data and TSM is ready to
 receive.  The routers indicate very low activity.

 We have a sniffer on now.

 If anyone out there has recovered from a similar experience please help?

 Thanks



 Blue Cross Blue Shield of Florida, Inc., and its subsidiary and
 affiliate companies are not responsible for errors or omissions in this
 e-mail message. Any personal comments made in this e-mail do not reflect
 the views of Blue Cross Blue Shield of Florida, Inc.



Re: low bandwitdth and big files

2002-01-31 Thread Wholey, Joseph (TGA\\MLOL)

Paul, 

I agree...
But what possible effect does shutting Devclass compression on the TSM server have?  

Regards, Joe 

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 9:08 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


Compression is set at the OS level in the mainframe.  So, whatever your MVS
folks say is what it is.

-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 4:07 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


Matt, 

Don't be so sure...  my MVS folks are telling me that there is no way TSM is
overriding their microcode hardware-level compression (but I will double-check
with them again, as well as with Tivoli).

With regard to compressing data twice, I disagree.  There's something very
wrong with it; that's why it is strongly recommended not to do it (not
just with TSM, but with all data).  Some data that
goes through multiple compression routines can blow up to 2x its
original size.

And finally, the reason we turn compression on at the client is to compress
it before it rides our very slow network.  Otherwise, I wouldn't.

Regards, Joe


  

-Original Message-
From: Matthew Glanville [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 3:42 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


From: Matthew Glanville

Joe,

I do not believe there is such a thing as 'server'-level compression.  My
understanding is that the device class compression settings reflect
the hardware-level compression settings; they can override what the
microcode may have set the 'default' to.

We have no problems at all with clients that compress with the TSM client
and then compress again on the tape drive. You lose just a little bit of
space, and yes, the occupancy information does not know that the data has
already been compressed.  There is really nothing wrong with compressing
data more than once; the files get a bit bigger, but it could be worth the
time and bandwidth saved.  Also don't forget that lots of data is stored
already compressed, in zipped files or compressed images like JPEG and MPEG.

I would not touch the compression settings on the device class; keep
them on at the highest level and just turn the TSM client compression
on or off as needed.  Check and see if that helps your low-bandwidth
backups.

Matthew Glanville
[EMAIL PROTECTED]





Wholey, Joseph (TGA\\MLOL) [EMAIL PROTECTED]@VM.MARIST.EDU on
01/30/2002 01:39:10 PM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


To:   [EMAIL PROTECTED]
cc:
Subject:  Re: low bandwitdth and big files


Matt,

Are you running two, maybe three, compression routines?  That is: once at the
client, once at the server level (you'll see it if you q devclass f=d on a
sequential storage device), and once at the hardware level
(microcode).

If so, have you kept a check on the amount of data in your stgpool?  I ask
this because a q occ on a filespace is not going to give you an accurate
indication of the amount of data that node/filespace
has dumped into your stgpool.
Although the IBMers and the manuals say don't run multiple compression
routines, they've yet to advise on what to do if you have to run client-side
compression due to a slow network.  I can shut off
server-side/devclass compression, but what about hardware compression?  Can
you shut off compression on a 3590 tape device, or is that a microcode
issue, i.e., you can't shut it off?

Regards, Joe

-Original Message-
From: Matthew Glanville [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 12:43 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


From: Matthew Glanville

You might want to turn on TSM client-side compression...
In my experience, Notes databases compress by at least 50%.
Your backups will most likely go down to 2 hours, or even less.

TSM: update node node_name compress=yes

Give it a try.  For low-bandwidth lines I always prefer to let TSM compress
the data first.

Of course, we no longer back up Notes as normal files but use the TDP for
Domino agent.
(but still use TSM client compression).

Matthew Glanville
[EMAIL PROTECTED]






Burak Demircan [EMAIL PROTECTED]@VM.MARIST.EDU on
01/30/2002 10:48:46 AM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


To:   [EMAIL PROTECTED]
cc:
Subject:  Re: low bandwitdth and big files


I have one full backup. What could be the solution? The files change
with minor changes every day, but the 1.8GB file is backed up from scratch
every day.
Regards,
Burak




[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]

30.01.2002 16:39
Please respond to ADSM-L

        To:        [EMAIL PROTECTED]
        cc:

Re: Antwort: Re: low bandwitdth and big files

2002-01-31 Thread Wholey, Joseph (TGA\\MLOL)

Peter, 

If I'm running compression on all of my clients, why would I attempt to turn it on at
the devclass level?  The only thing I can see it doing is preventing me from streaming,
because it's first going to
try to compress anything I send to it.  Hence, an overall performance hit.

Regards, Joe

-Original Message-
From: Peter Sattler [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 31, 2002 12:17 PM
To: [EMAIL PROTECTED]
Subject: Antwort: Re: low bandwitdth and big files


Hi Joe,

I strongly agree with Matt. I've done the same thing, that is, client-side
compression plus hardware compression on tape - with TSM and with Networker,
and I've never seen problems. On the server side, all you lose is a bit of tape
capacity. The compression is hardware (sic!), not software. The well-known
advice applies to software compression - there it is necessary to avoid
double compression because it takes away resources.

So in your case, try client-side compression - it might well be that you
gain more than a doubled data rate.

Cheers Peter






Wholey, Joseph (TGA\\MLOL) [EMAIL PROTECTED]@VM.MARIST.EDU am
30.01.2002 22:07:19

Bitte antworten an ADSM: Dist Stor Manager [EMAIL PROTECTED]

Gesendet von:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


An:[EMAIL PROTECTED]
Kopie:
Thema: Re: low bandwitdth and big files

Matt,

Don't be so sure...  my MVS folks are telling me that there is no way TSM
is overriding their microcode hardware-level compression (but I will
double-check with them, as well as with Tivoli).

With regard to compressing data twice, I disagree; there is something very
wrong with it.  That's why it is strongly recommended not to do it (not
just with TSM, but with all data).  Some data that
goes through multiple compression routines can blow up to twice the size the
file started out as.

And finally, the reason we turn compression on at the client is to
compress the data before it rides our very slow network.  Otherwise, I wouldn't.


Regards, Joe




-Original Message-
From: Matthew Glanville [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 3:42 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


From: Matthew Glanville

Joe,

I do not believe there is such a thing as 'server'-level compression.  My
understanding is that the device class compression settings reflect
the hardware-level compression settings; they can override whatever
default the microcode may have set.

We have no problems at all with clients that compress with the TSM client
and then compress again on the tape drive.  You lose just a little bit of
space, and yes, the occupancy information does not know that the data has
already been compressed.  There is really nothing wrong with compressing
data more than once: the files get a bit bigger, but it can be worth the
time and bandwidth saved.  Also, don't forget that lots of data is already
stored compressed, in zipped files or compressed images like JPEG and MPEG.

I would not touch the compression settings on the device class; keep
them on at the highest level, and just turn TSM client compression
on or off as needed.  Check and see if that helps your low-bandwidth
backups.

Matthew Glanville
[EMAIL PROTECTED]





Wholey, Joseph (TGA\\MLOL) [EMAIL PROTECTED]@VM.MARIST.EDU on
01/30/2002 01:39:10 PM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


To:   [EMAIL PROTECTED]
cc:
Subject:  Re: low bandwitdth and big files


Matt,

Are you running two, maybe three, compression routines?  That is: once at the
client, once at the server level (you'll see it if you q devclass f=d on a
sequential storage device), and once at the hardware level
(microcode).

If so, have you kept a check on the amount of data in your stgpool?  I ask
this because a q occ on a filespace is not going to give you an accurate
indication of the amount of data that node/filespace
has dumped into your stgpool.
Although the IBMers and the manuals say don't run multiple compression
routines, they've yet to advise on what to do if you have to run client-side
compression due to a slow network.  I can shut off
server-side/devclass compression, but what about hardware compression?  Can
you shut off compression on a 3590 tape device, or is that a microcode
issue, i.e., you can't shut it off?

Regards, Joe

-Original Message-
From: Matthew Glanville [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 12:43 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


From: Matthew Glanville

You might want to turn on TSM client-side compression...
In my experience, Notes databases compress by at least 50%.
Your backups will most likely go down to 2 hours, or even less.

TSM: update node node_name compress=yes

Give it a try.  For low-bandwidth lines I always prefer to let TSM compress
the data first.

Of course, we no longer back up Notes as normal files but use the TDP for
Domino agent

Re: Antwort: Re: low bandwitdth and big files

2002-01-31 Thread Wholey, Joseph (TGA\\MLOL)

Nick,

I have client compression turned on, also due to a slow network (I have no choice).
But no one has been able to answer the following questions definitively:

1. Am I potentially doubling the size of certain files in the stg pool by running
multiple compression algorithms?

2. By turning off DEVCLASS compression, am I effectively disabling the hardware
compression performed by my tape device (IBM 3590 tape device/cartridge)?

3. If client compression and hardware compression are both turned on, and hardware
compression isn't really buying me anything... won't the attempt at hardware
compression prevent streaming?  I think it
will.

I'm looking for YES/NO answers with a valid explanation.  Anyone?

Regards, Joe


-Original Message-
From: Nicholas Cassimatis [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 31, 2002 1:32 PM
To: [EMAIL PROTECTED]
Subject: Re: Antwort: Re: low bandwitdth and big files


A long while back, I had 36 boxes of the following config: Pentium 100's,
128MB RAM, 16Mbit Token Ring, running OS/2 2.11 with Lan Server 4, Notes
4.1, backing up mail files as flat files.  Turned client side compression
on, backup window went from 4 hours to 1.25 hours.  I cut the data sent
over the wire down by 66%, and got a corresponding reduction in the backup
time.  My machines were effectively offline for the backup window, due to
the network being saturated, so the fact they were also CPU bound really
didn't matter.

It all depends on the config.  The worst you can do is test a little and
see what happens.

Oh, my library was a 3494 with 2xB11 drives in it.  I kept utilizing the
same number of tapes, but the capacity at full went from around 28GB to
11GB, as would be expected.

Nick Cassimatis
Technical Team Lead
e-Business Backup/Recovery Services
919-363-8894   T/L 223-8965
[EMAIL PROTECTED]

Today is the tomorrow of yesterday.
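Nick's numbers check out for a network-bound backup, where elapsed time scales roughly with bytes on the wire. A quick sanity check (the 66% reduction and the 4-hour window are from the post above; the linear scaling is a back-of-envelope assumption):

```python
# On a saturated link, backup time scales roughly with bytes sent.
original_window_hours = 4.0
data_reduction = 0.66          # client compression cut wire traffic by 66%

predicted_window = original_window_hours * (1 - data_reduction)
print(round(predicted_window, 2))   # 1.36 hours -- close to the observed 1.25
```

The small gap between the predicted 1.36 hours and the observed 1.25 plausibly reflects per-session overhead that also shrank with the data.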



Re: Antwort: Re: low bandwitdth and big files

2002-01-31 Thread Wholey, Joseph (TGA\\MLOL)

HOO---RAAY... finally.  Thank you.

-Original Message-
From: Slag, Jerry B. [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 31, 2002 2:37 PM
To: [EMAIL PROTECTED]
Subject: Re: Antwort: Re: low bandwitdth and big files


1. Yes. Trying to compress an already compressed file can cause the file to
grow larger.
2. No. We run a 3494 Magstar - hardware compression is ALWAYS on; you can't
override it. It is controlled within the hardware and our z/OS system.
3. Prevent - no. Degrade or hinder - yes, it can.
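Answer 1 above is easy to demonstrate. A minimal sketch using Python's zlib (illustrative only: TSM client compression is its own LZ-family algorithm, not zlib, and the growth on a second pass is typically slight rather than 2x):

```python
import os
import zlib

# Typical redundant data (mail text, documents) compresses well on the first pass.
text = b"Lotus Notes mail file contents tend to be repetitive. " * 2000
once = zlib.compress(text)
print(len(once) < len(text))     # True: big win on the first pass

# Already-compressed data is effectively random; a second pass cannot shrink
# it, and the container overhead makes the output slightly LARGER.
already_compressed = os.urandom(200_000)   # stands in for JPEG/ZIP/compressed data
twice = zlib.compress(already_compressed)
print(len(twice) > len(already_compressed))   # True: the file grew
```

This matches both sides of the thread: recompression does grow the file (Jerry's answer 1), but only by the framing overhead, not dramatically (Matthew's "just a little bit of space").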

-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 31, 2002 1:05 PM
To: [EMAIL PROTECTED]
Subject: Re: Antwort: Re: low bandwitdth and big files


Nick,

I have client compression turned on, also due to a slow network (I have no
choice).
But no one has been able to answer the following questions definitively:

1. Am I potentially doubling the size of certain files in the stg pool by
running multiple compression algorithms?

2. By turning off DEVCLASS compression, am I effectively disabling the
hardware compression performed by my tape device (IBM 3590 tape device/
cartridge)?

3. If client compression and hardware compression are both turned on, and
hardware compression isn't really buying me anything... won't the attempt at
hardware compression prevent streaming?  I think it
will.

I'm looking for YES/NO answers with a valid explanation.  Anyone?

Regards, Joe


-Original Message-
From: Nicholas Cassimatis [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 31, 2002 1:32 PM
To: [EMAIL PROTECTED]
Subject: Re: Antwort: Re: low bandwitdth and big files


A long while back, I had 36 boxes of the following config: Pentium 100's,
128MB RAM, 16Mbit Token Ring, running OS/2 2.11 with Lan Server 4, Notes
4.1, backing up mail files as flat files.  Turned client side compression
on, backup window went from 4 hours to 1.25 hours.  I cut the data sent
over the wire down by 66%, and got a corresponding reduction in the backup
time.  My machines were effectively offline for the backup window, due to
the network being saturated, so the fact they were also CPU bound really
didn't matter.

It all depends on the config.  The worst you can do is test a little and
see what happens.

Oh, my library was a 3494 with 2xB11 drives in it.  I kept utilizing the
same number of tapes, but the capacity at full went from around 28GB to
11GB, as would be expected.

Nick Cassimatis
Technical Team Lead
e-Business Backup/Recovery Services
919-363-8894   T/L 223-8965
[EMAIL PROTECTED]

Today is the tomorrow of yesterday.



Re: low bandwitdth and big files

2002-01-30 Thread Wholey, Joseph (TGA\\MLOL)

Matt, 

Are you running two, maybe three, compression routines?  That is: once at the client, once at
the server level (you'll see it if you q devclass f=d on a sequential storage device), and
once at the hardware level
(microcode).

If so, have you kept a check on the amount of data in your stgpool?  I ask this because
a q occ on a filespace is not going to give you an accurate indication of the amount
of data that node/filespace
has dumped into your stgpool.
Although the IBMers and the manuals say don't run multiple compression routines,
they've yet to advise on what to do if you have to run client-side compression due to
a slow network.  I can shut off
server-side/devclass compression, but what about hardware compression.  Can you shut
off compression on a 3590 tape device, or is that a microcode issue, i.e., you
can't shut it off?

Regards, Joe

-Original Message-
From: Matthew Glanville [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 12:43 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


From: Matthew Glanville

You might want to turn on TSM client-side compression...
In my experience, Notes databases compress by at least 50%.
Your backups will most likely go down to 2 hours, or even less.

TSM: update node node_name compress=yes

Give it a try.  For low-bandwidth lines I always prefer to let TSM compress
the data first.

Of course, we no longer back up Notes as normal files but use the TDP for
Domino agent.
(but still use TSM client compression).

Matthew Glanville
[EMAIL PROTECTED]






Burak Demircan [EMAIL PROTECTED]@VM.MARIST.EDU on
01/30/2002 10:48:46 AM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


To:   [EMAIL PROTECTED]
cc:
Subject:  Re: low bandwitdth and big files


I have one full backup. What could be the solution? The files change
with minor changes every day, but the 1.8GB file is backed up from scratch
every day.
Regards,
Burak




[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]

30.01.2002 16:39
Please respond to ADSM-L

        To:        [EMAIL PROTECTED]
        cc:
        Subject:        Re: low bandwitdth and big files

Depending on circumstances, this might be a candidate for adaptive
differencing, TSM's version of a block-level incremental.  You will still
have to do a complete backup of the big files at least once, though.



_
William Mansfield
Senior Consultant
Solution Technology, Inc
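The adaptive-differencing idea mentioned above can be sketched as a toy block-level incremental (illustrative only; TSM's adaptive subfile backup uses its own delta format and a client-side reference cache, and the block size and hashing choices here are assumptions):

```python
import hashlib

BLOCK = 4096  # toy block size; TSM chooses its own granularity

def block_digests(data: bytes) -> list:
    """Hash each fixed-size block so changed regions can be located."""
    return [hashlib.sha1(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old_digests: list, data: bytes) -> dict:
    """Return only the blocks whose digest differs from the reference copy."""
    delta = {}
    for idx, digest in enumerate(block_digests(data)):
        if idx >= len(old_digests) or old_digests[idx] != digest:
            delta[idx] = data[idx * BLOCK:(idx + 1) * BLOCK]
    return delta

# A 1.8GB file with minor daily changes would send only the changed blocks.
yesterday = b"A" * (BLOCK * 4)
today = bytearray(yesterday)
today[BLOCK:BLOCK + 5] = b"BBBBB"   # small change inside block 1 only

delta = changed_blocks(block_digests(yesterday), bytes(today))
print(sorted(delta))                # [1] -- one block to send, not the whole file
```

The point is the trade: the client spends CPU hashing blocks so the wire only carries the blocks that changed.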





Burak Demircan burak.demircan@DAIMLERCHRYSLER.COM
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
01/30/2002 12:01 AM
Please respond to ADSM: Dist Stor Manager

To: [EMAIL PROTECTED]
cc:
Subject: low bandwitdth and big files

Hi,
I am trying to backup up big files ~1GB from a low bandwidth line.
Some of the clients fail the schedule very frequently. Could you recommend
me
some options to increase timeout (any kind of timeout). I pasted the
schedlog
of
one of the clients below which starts at 19:00.
Regards
Burak



29-01-2002 10:26:47 --- SCHEDULEREC QUERY BEGIN
29-01-2002 10:26:47 --- SCHEDULEREC QUERY END
29-01-2002 10:26:47 Next operation scheduled:
29-01-2002 10:26:47
29-01-2002 10:26:47 Schedule Name:        NTDAILY
29-01-2002 10:26:47 Action:               Incremental
29-01-2002 10:26:47 Objects:
29-01-2002 10:26:47 Options:
29-01-2002 10:26:47 Server Window Start:  19:00:00 on 29-01-2002
29-01-2002 10:26:47
29-01-2002 10:26:47 Waiting to be contacted by the server.
29-01-2002 19:03:59 TCP/IP accepted connection from server.
29-01-2002 19:03:59 Querying server for next scheduled event.
29-01-2002 19:03:59 Node Name: DCS_NOTESSRV1
29-01-2002 19:04:04 Session established with server TSM01.MBT: AIX-RS/6000
29-01-2002 19:04:04   Server Version 4, Release 2, Level 1.9
29-01-2002 19:04:04   Server date/time: 29-01-2002 19:00:10  Last access: 29-01-2002 10:22:53
29-01-2002 19:04:04 --- SCHEDULEREC QUERY BEGIN
29-01-2002 19:04:04 --- SCHEDULEREC QUERY END
29-01-2002 19:04:04 Next operation scheduled:
29-01-2002 19:04:04
29-01-2002 19:04:04 Schedule Name:        NTDAILY
29-01-2002 19:04:04 Action:               Incremental
29-01-2002 19:04:04 Objects:
29-01-2002 19:04:04 Options:
29-01-2002 19:04:04 Server Window Start:  19:00:00 on 29-01-2002
29-01-2002 19:04:04
29-01-2002 19:04:04 Executing scheduled command now.
29-01-2002 19:04:04 --- SCHEDULEREC OBJECT BEGIN NTDAILY
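For schedules like this that fail over slow links, the server-side timeout options are the usual first stop. A sketch of the relevant dsmserv.opt entries (the values are illustrative starting points, not recommendations):

```
* dsmserv.opt -- server options file
* COMMTIMEOUT is in seconds (default 60); IDLETIMEOUT is in minutes (default 15).
COMMTIMEOUT   3600
IDLETIMEOUT   60
```

On the client side, the COMMRESTARTDURATION and COMMRESTARTINTERVAL options, which control how long a client keeps retrying a dropped session, may also be worth a look.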

Re: low bandwitdth and big files

2002-01-30 Thread Wholey, Joseph (TGA\\MLOL)

Matt, 

Don't be so sure...  my MVS folks are telling me that there is no way TSM is
overriding their microcode hardware-level compression (but I will double-check with
them, as well as with
Tivoli).

With regard to compressing data twice, I disagree; there is something very wrong with
it.  That's why it is strongly recommended not to do it (not just with TSM, but with
all data).  Some data that
goes through multiple compression routines can blow up to twice the size the file started
out as.

And finally, the reason we turn compression on at the client is to compress the data before
it rides our very slow network.  Otherwise, I wouldn't.

Regards, Joe


  

-Original Message-
From: Matthew Glanville [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 3:42 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


From: Matthew Glanville

Joe,

I do not believe there is such a thing as 'server'-level compression.  My
understanding is that the device class compression settings reflect
the hardware-level compression settings; they can override whatever
default the microcode may have set.

We have no problems at all with clients that compress with the TSM client
and then compress again on the tape drive.  You lose just a little bit of
space, and yes, the occupancy information does not know that the data has
already been compressed.  There is really nothing wrong with compressing
data more than once: the files get a bit bigger, but it can be worth the
time and bandwidth saved.  Also, don't forget that lots of data is already
stored compressed, in zipped files or compressed images like JPEG and MPEG.

I would not touch the compression settings on the device class; keep
them on at the highest level, and just turn TSM client compression
on or off as needed.  Check and see if that helps your low-bandwidth
backups.

Matthew Glanville
[EMAIL PROTECTED]





Wholey, Joseph (TGA\\MLOL) [EMAIL PROTECTED]@VM.MARIST.EDU on
01/30/2002 01:39:10 PM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


To:   [EMAIL PROTECTED]
cc:
Subject:  Re: low bandwitdth and big files


Matt,

Are you running two, maybe three, compression routines?  That is: once at the
client, once at the server level (you'll see it if you q devclass f=d on a
sequential storage device), and once at the hardware level
(microcode).

If so, have you kept a check on the amount of data in your stgpool?  I ask
this because a q occ on a filespace is not going to give you an accurate
indication of the amount of data that node/filespace
has dumped into your stgpool.
Although the IBMers and the manuals say don't run multiple compression
routines, they've yet to advise on what to do if you have to run client-side
compression due to a slow network.  I can shut off
server-side/devclass compression, but what about hardware compression?  Can
you shut off compression on a 3590 tape device, or is that a microcode
issue, i.e., you can't shut it off?

Regards, Joe

-Original Message-
From: Matthew Glanville [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 12:43 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


From: Matthew Glanville

You might want to turn on TSM client-side compression...
In my experience, Notes databases compress by at least 50%.
Your backups will most likely go down to 2 hours, or even less.

TSM: update node node_name compress=yes

Give it a try.  For low-bandwidth lines I always prefer to let TSM compress
the data first.

Of course, we no longer back up Notes as normal files but use the TDP for
Domino agent.
(but still use TSM client compression).

Matthew Glanville
[EMAIL PROTECTED]






Burak Demircan [EMAIL PROTECTED]@VM.MARIST.EDU on
01/30/2002 10:48:46 AM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


To:   [EMAIL PROTECTED]
cc:
Subject:  Re: low bandwitdth and big files


I have one full backup. What could be the solution? The files change
with minor changes every day, but the 1.8GB file is backed up from scratch
every day.
Regards,
Burak




[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]

30.01.2002 16:39
Please respond to ADSM-L

        To:        [EMAIL PROTECTED]
        cc:
        Subject:        Re: low bandwitdth and big files

Depending on circumstances, this might be a candidate for adaptive
differencing, TSM's version of a block-level incremental.  You will still
have to do a complete backup of the big files at least once, though.



_
William Mansfield
Senior Consultant
Solution Technology, Inc





Burak Demircan burak.demircan@DAIMLERCHRYSLER.COM
To: [EMAIL PROTECTED]
cc:
Sent by: ADSM: Dist

Re: disk pool problem tsm 4.2.1.9

2002-01-29 Thread Wholey, Joseph (TGA\\MLOL)

Change your Hi/Lo migration thresholds (raise the 60% on the Hi) or throw more disk at
your solution.  I'd change the threshold first, though.

-Original Message-
From: Burak Demircan [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 28, 2002 7:25 AM
To: [EMAIL PROTECTED]
Subject: disk pool problem tsm 4.2.1.9


Hi,
I am using TSM 4.2.1.9 on AIX 4.3.3 with a 3583 library (3 drives in it).

I have a disk pool of about 3GB. All my schedules start at 7pm, but some
clients' files are over 1.8GB and come in over low bandwidth. These clients
try to write to the next storage pool (a tape pool) directly, even though
their destination pool is my disk pool. I have come to the conclusion that,
because there is not enough space in the disk pool while many clients are
backing up, some of them end up writing to the tape pool even though the
tape drives are very slow.

Do you have any idea how to prevent clients from writing to my tape pools
directly? (I use migration, and at 60% the disk pool migrates to the tape pool.)

I want all clients to write to the disk pool and have the data migrate
according to the above condition, whatever the size or bandwidth of the data.

Thank you in advance
Burak
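The threshold change suggested in the reply above maps onto the storage pool migration parameters. A hedged sketch of the relevant administrative commands (the pool name DISKPOOL is a placeholder; note that a client can also bypass the disk pool when a file exceeds the pool's MAXSIze or no space is free, so check those values too):

```
q stgpool DISKPOOL f=d
update stgpool DISKPOOL highmig=90 lowmig=60
update stgpool DISKPOOL maxsize=nolimit
```

Whether raising the high threshold helps, or whether the pool simply needs more space for the 7pm burst, depends on how fast migration can drain to tape while the backups run.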



Re: Performance drop after upgrade to TSM 4.2.1.8

2002-01-29 Thread Wholey, Joseph (TGA\\MLOL)

How do you go about unloading and loading your db?  Is this ever really necessary?
Shouldn't TSM manage that sort of thing on its own?

-Original Message-
From: Rupp Thomas (Illwerke) [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 28, 2002 8:50 AM
To: [EMAIL PROTECTED]
Subject: AW: Performance drop after upgrade to TSM 4.2.1.8


Hi,

maybe we've got the same problem (slow performance) but on a different platform.
We upgraded from TSM 3.7.2.0 to 4.1.1.0 and our database cache hit rate dropped
to about 95%.
We then upgraded to TSM 4.1.5.0 and changed some settings in DSMSERV.OPT - to
no avail.
Over this weekend we defragmented (UNLOADDB/LOADDB) our database and this seemed
to solve our problem - I can tell you more tomorrow!

We are running on an H50 with AIX 4.3.3.0.

Kind regards
Thomas Rupp



--
Dieses eMail wurde auf Viren geprueft.

Vorarlberger Illwerke AG
--



Re: New install of TSM

2002-01-29 Thread Wholey, Joseph (TGA\\MLOL)

I'd go the easy route (thinner too) and just write a script to send out new 
dsm.opt/dsm.sys files to your clients.  And then have another script that recycles 
your central scheduler service.  If you
want to start getting silly, you can also have a script that retrieves copies of your 
dsm*.log files to a central repository and you can check stuff from there.  
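The script idea above might look something like this minimal sketch (node names, option values, and the commented-out copy/restart commands are all site-specific assumptions):

```shell
#!/bin/sh
# Sketch of the "send out new dsm.opt files" idea. Node names, options, and
# the remote copy/restart steps (commented out) will differ per site.
set -e

mkdir -p staging
printf 'NODENAME %%NODE%%\nCOMPRESSION YES\nPASSWORDACCESS GENERATE\n' \
    > staging/dsm.opt.template

for NODE in NODE1 NODE2; do
  # Fill in the node name from the template.
  sed "s/%NODE%/$NODE/" staging/dsm.opt.template > "staging/dsm.opt.$NODE"
  # Push the file and recycle the scheduler, e.g. (site-specific, not run here):
  #   scp "staging/dsm.opt.$NODE" "$NODE:/path/to/dsm.opt"
  #   ssh "$NODE" 'net stop "TSM Central Scheduler" && net start "TSM Central Scheduler"'
done

cat staging/dsm.opt.NODE1
```

Generating the files from one template keeps every client's options consistent; the push and scheduler-restart steps are the only parts that vary by platform.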

-Original Message-
From: RB Páll Birkir Wolfram [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 28, 2002 8:17 AM
To: [EMAIL PROTECTED]
Subject: New install of TSM


Hi,  I am currently implementing TSM 4.2 on OS/390 to back up a Windows NT/2000,
AIX, and OS/2 environment, but not the OS/390 environment itself.  Can I use
Microsoft Management Console (MMC) or a Web browser to administer the
clients, so that the administrator doesn't have to access each client
separately when changing what to back up?  Where can I get the MMC snap-in for
TSM, and how do I implement it?
 
Thanks in Advance
 
Pall B. Wolfram
System Administrator   



Re: Antwort: backing up to tape/OS/390

2002-01-26 Thread Wholey, Joseph (TGA\\MLOL)

Gerhard,

Thanks for the info... by the way, what kind of time frame are we looking at for a 
MOVE DATA operation.

Regards, Joe

-Original Message-
From: Gerhard Wolkerstorfer [mailto:[EMAIL PROTECTED]]
Sent: Saturday, January 26, 2002 3:17 AM
To: [EMAIL PROTECTED]
Subject: Antwort: backing up to tape/OS/390

You could check which tape your data is on (the VOLUMEUSAGE table), then do a
MOVE DATA from that tape to your disk pool (you may have to increase your disk
pool first), and then start the restores. When all of your files are in the
disk pool, your restores will complete without any media waits.

Gerhard Wolkerstorfer
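Gerhard's two steps correspond to these administrative commands (the node name, volume name, and pool name below are placeholders):

```
select volume_name from volumeusage where node_name='MYNODE'
move data VOL001 stgpool=DISKPOOL
```

Each MOVE DATA is one tape mount plus a sequential read of that node's data, so staging several volumes to disk is best started well before the restore window.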





[EMAIL PROTECTED] am 26.01.2002 04:41:16

Bitte antworten an [EMAIL PROTECTED]

An:   [EMAIL PROTECTED]
Kopie: (Blindkopie: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Thema:backing up to tape/OS/390



Environment:
System/390
device type: Cartridge/3590
TSM 4.1.3 Server
TSM client 3.1.06


Scenario:
I get notified in advance that I'll need to restore multiple clients (gigs worth
of data) on a given weekend.  I manually kick off a base backup to a spare TSM
server on Friday night so when the
call comes in (over the weekend) to restore the servers, I restore from a
relatively quiet TSM server.  I'm not contending with the production backup
cycle.

The problem:
This backed up data gets migrated to tape prior to my restore.  Invariably,
multiple servers' data ends up on one tape.  i.e. only one server can restore at
a time.

The question:
Is there a way to direct the backup data of a particular server to a specific
tape so I can run multiple restores from multiple tapes at the same time?  This
is killing me...  Good suggestions would
be appreciated.

Regards, Joe



Database Specs

2002-01-25 Thread Wholey, Joseph (TGA\\MLOL)

I'm running TSM v4.1.3 on System/390.  My database is 47GB and spread across 25
physical volumes (it gets a 78% cache hit rate, which is not so great).

One of my performance and tuning books recommends not spreading your database over
more than 12 physical volumes.  Could anyone explain why?  What kind of
performance/utilization hit am I going to
incur by having my db spread across 25 volumes?

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



compression

2002-01-25 Thread Wholey, Joseph (TGA\\MLOL)

Currently running TSM v4.1.3 on System/390.  We run compression on the client as well as
hardware compression on device types Cartridge and 3590.  Ideally this is not the way to
go... but we need to run
compression on the client due to network constraints.  Is it a FACT that running a
compression routine twice will ultimately double the size of the original file?  Where
can I find more info on this?
Any suggestions would be appreciated.

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



Re: Database Specs

2002-01-25 Thread Wholey, Joseph (TGA\\MLOL)

Thanks...

-Original Message-
From: Malbrough, Demetrius [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 25, 2002 2:30 PM
To: [EMAIL PROTECTED]
Subject: Re: Database Specs


If you are going to increase the BUFPoolsize parameter make sure you do it
based on
the amount of System Memory (MB). 'An optimal setting for the database
buffer pool is one in which the cache hit percentage is greater than or
equal to 98%. If you have enough memory, increase in 1MB increments. A cache
hit percentage greater than 99% is an indication that the proper BUFPoolsize
has been reached. While increasing BUFPoolsize, care must be taken not to
cause paging in the virtual memory system. Monitor system memory usage to
check for any increased paging after the BUFPoolsize change. (Use the RESET
BUFP command to reset the cache hit statistics.)'

Regards,

Demetrius Malbrough
UNIX/TSM Administrator




-Original Message-
From: William F. Colwell [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 25, 2002 1:13 PM
To: [EMAIL PROTECTED]
Subject: Re: Database Specs


Joe - the number of volumes isn't the problem.  You need to increase the
BUFPoolsize in the server options file (dsmserv.opt).  Here is a 'q db f=d'
from my system.  The BUFPoolsize parameter becomes the
'Buffer Pool Pages' value in the q db output.

Bill Colwell
- - -
tsm: NEW_ADSM_SERVER_ON_PORT_1600> q db f=d

  Available Space (MB): 166,860
Assigned Capacity (MB): 166,860
Maximum Extension (MB): 0
Maximum Reduction (MB): 18,640
 Page Size (bytes): 4,096
Total Usable Pages: 42,716,160
Used Pages: 37,947,974
  Pct Util: 88.8
 Max. Pct Util: 88.8
  Physical Volumes: 45
 Buffer Pool Pages: 65,536
 Total Buffer Requests: 194,057,057
Cache Hit Pct.: 98.44
   Cache Wait Pct.: 0.00
   Backup in Progress?: No
Type of Backup In Progress:
  Incrementals Since Last Full: 5
Changed Since Last Backup (MB): 1,542.07
Percentage Changed: 1.04
Last Complete Backup Date/Time: 01/24/2002 15:49:51
- - -
At 12:54 PM 1/25/2002 -0500, you wrote:
I'm running TSM v4.1.3 on System/390.  My database is 47GB and spread
across 25 physical volumes (it gets a 78% cache hit rate, which is not so great).

One of my performance and tuning books recommends not spreading your
database over more than 12 physical volumes.  Could anyone explain why?
What kind of performance/utilization hit am I going to
incur by having my db spread across 25 volumes?

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.
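The 'Buffer Pool Pages' figure in Bill's output relates to the BUFPoolsize option (specified in KB in the server options file) by simple arithmetic. A quick check using his numbers (the miss count below is back-calculated purely for illustration):

```python
page_size = 4096              # bytes, from the q db output
buffer_pool_pages = 65_536    # from the q db output

# BUFPoolsize is specified in KB in the server options file.
bufpoolsize_kb = buffer_pool_pages * page_size // 1024
print(bufpoolsize_kb)         # 262144 KB, i.e. a 256MB buffer pool

# Cache hit percentage = hits / total buffer requests * 100.
total_requests = 194_057_057
misses = 3_027_290            # hypothetical miss count, chosen for illustration
print(round((total_requests - misses) / total_requests * 100, 2))
```

So moving from 78% to the recommended 98%+ hit rate is a matter of growing BUFPoolsize in steps while watching for paging, as Demetrius describes above.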



backing up to tape/OS/390

2002-01-25 Thread Wholey, Joseph (TGA\\MLOL)

Environment:
System/390
device type: Cartridge/3590
TSM 4.1.3 Server
TSM client 3.1.06


Scenario:
I get notified in advance that I'll need to restore multiple clients (gigs worth of 
data) on a given weekend.  I manually kick off a base backup to a spare TSM server 
on Friday night so when the
call comes in (over the weekend) to restore the servers, I restore from a relatively 
quiet TSM server.  I'm not contending with the production backup cycle.

The problem:
This backed up data gets migrated to tape prior to my restore.  Invariably, multiple 
servers' data ends up on one tape.  i.e. only one server can restore at a time.

The question:
Is there a way to direct the backup data of a particular server to a specific tape so 
I can run multiple restores from multiple tapes at the same time?  This is killing 
me...  Good suggestions would
be appreciated.

Regards, Joe